IDEA fails to connect to the cloud host with "Connection refused":

Hi teacher,

I ran through the tutorial locally on a virtual machine without any problems, so I spun up a cloud host to try out Spark Streaming.
The steps, the order, and the configuration are all correct, and the security group has all ports open.
Flume starts fine as well: netstat -anp | grep shows both 44444 and 41414 occupied as expected, and telnet to both ports works.
But when I launch the job from IDEA locally for debugging and it connects to port 41414, it fails.
What is the usual troubleshooting approach for this kind of situation on a cloud host?
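
For reference, the polling-mode driver code in this kind of setup is typically along the lines of the sketch below (a minimal sketch assuming the standard spark-streaming-flume FlumeUtils API; the hostname "hadoop" and port 41414 match the error log, everything else is illustrative):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object FlumePollingApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("FlumePollingApp")
    val ssc  = new StreamingContext(conf, Seconds(5))

    // Poll the Flume SparkSink on the cloud host. On the local machine,
    // "hadoop" must resolve to the host's public IP for this call to connect.
    val stream = FlumeUtils.createPollingStream(ssc, "hadoop", 41414)
    stream.map(event => new String(event.event.getBody.array()).trim).print()

    ssc.start()
    ssc.awaitTermination()
  }
}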
The error message is as follows:

20/09/11 13:39:33 ERROR ReceiverTracker: Deregistered receiver for stream 0: Error starting receiver 0 - java.io.IOException: Error connecting to hadoop/96.30.196.34:41414
at org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:261)
at org.apache.avro.ipc.NettyTransceiver.&lt;init&gt;(NettyTransceiver.java:203)
at org.apache.avro.ipc.NettyTransceiver.&lt;init&gt;(NettyTransceiver.java:138)
at org.apache.spark.streaming.flume.FlumePollingReceiver$$anonfun$onStart$1.apply(FlumePollingInputDStream.scala:82)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.streaming.flume.FlumePollingReceiver.onStart(FlumePollingInputDStream.scala:82)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:149)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:131)
at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverTrackerEndpoint$$anonfun$9.apply(ReceiverTracker.scala:597)
at org.apache.spark.SparkContext$$anonfun$34.apply(SparkContext.scala:2173)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Caused by: java.net.ConnectException: Connection refused: hadoop/96.30.196.34:41414
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:714)
at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
… 3 more


1 Answer

Michael_PK 2020-09-12 01:59:43

Have you correctly configured, on your local machine, the mapping between the cloud host's IP and its hostname? Be careful with cloud hosts: internally they use the private (internal) IP, while from the outside you must use the public (external) IP. If you are not very familiar with cloud host configuration, I don't really recommend using one, and the host's hardware specs must not be too low either; a bare-bones, entry-level configuration is best avoided.
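
To verify that mapping from the local machine, here is a minimal sketch using only JDK networking (the hostname "hadoop" and port 41414 are taken from the question and are assumptions about the actual setup). It should print the host's public IP and then establish a TCP connection, which is exactly what the Spark receiver needs to do:

import java.net.{InetAddress, InetSocketAddress, Socket}

object ConnCheck {
  def main(args: Array[String]): Unit = {
    val host = "hadoop"   // hostname mapped in the local hosts file (assumed name)
    val port = 41414      // Flume SparkSink port from the question

    // 1) The hostname must resolve to the cloud host's PUBLIC IP on the local machine.
    val addr = InetAddress.getByName(host)
    println(s"$host resolves to ${addr.getHostAddress}")

    // 2) A plain TCP connect must succeed before the Flume polling receiver can work.
    val socket = new Socket()
    socket.connect(new InetSocketAddress(addr, port), 5000)
    println(s"TCP connection to $host:$port succeeded")
    socket.close()
  }
}

If step 1 prints the private IP or 127.0.0.1, fix the local hosts entry; if step 2 is refused, check which address the Flume sink is actually bound to on the cloud host.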

  • Asker Stefan章晓风 #1
    Thanks for the reply. The hostname mapping is already in my hosts file, and I am using the public IP; ping hadoop works. But if I don't use a cloud host and want to use Flume for real-time data collection, for example collecting Twitter data, does that mean I can never shut down my computer?
    2020-09-12 02:04:11
  • Michael_PK replied to asker Stefan章晓风 #2
    Servers normally run 7*24. For learning it doesn't really matter, though: shut it down when you're done, and start the services again when you need to collect data.
    2020-09-12 02:05:35