
ConnectException: Connection refused when running MapReduce in Hadoop

I set up Hadoop (2.6.0) in multi-machine mode: 1 namenode and 3 datanodes. When I run the command start-all.sh, the daemons (namenode, datanodes, resource manager, node managers) all start fine. I checked with the jps command, and the result on each node is shown below:

NameNode:

7300 ResourceManager
6942 NameNode
7154 SecondaryNameNode

DataNodes:

3840 DataNode
3924 NodeManager

I also uploaded a sample text file to HDFS at /user/hadoop/data/sample.txt. Up to that point there were absolutely no errors.

But when I try to run a MapReduce job with the Hadoop examples jar:

hadoop jar hadoop-mapreduce-examples-2.6.0.jar wordcount /user/hadoop/data/sample.txt /user/hadoop/output

I get this error:

15/04/08 03:31:26 INFO mapreduce.Job: Job job_1428478232474_0001 running    in uber mode : false
15/04/08 03:31:26 INFO mapreduce.Job:  map 0% reduce 0%
15/04/08 03:31:26 INFO mapreduce.Job: Job job_1428478232474_0001 failed with     state FAILED due to: Application application_1428478232474_0001 failed 2 times due to Error launching appattempt_1428478232474_0001_000002. Got exception: java.net.ConnectException: Call From hadoop/127.0.0.1 to localhost:53245 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy31.startContainers(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
    ... 9 more Failing the application.
15/04/08 03:31:26 INFO mapreduce.Job: Counters: 0
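
The telling detail in the trace is "Call From hadoop/127.0.0.1 to localhost:53245": the ResourceManager is trying to launch the ApplicationMaster on a NodeManager that registered itself under a loopback address. One way to confirm this (a diagnostic sketch; node IDs and ports will differ on your cluster) is to list the nodes YARN knows about:

yarn node -list

If the Node-Id column shows entries like localhost:53245 instead of hadoop-dn1:&lt;port&gt;, the NodeManagers are registering with a loopback hostname, which is exactly what the first answer below fixes.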

As for the configuration, I made sure the namenode can ssh to the datanodes and vice versa without a password prompt. I also disabled IPv6 and modified the /etc/hosts file:

127.0.0.1 localhost hadoop
192.168.56.102 hadoop-nn
192.168.56.103 hadoop-dn1
192.168.56.104 hadoop-dn2
192.168.56.105 hadoop-dn3
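
To double-check that name resolution and passwordless SSH actually work, something like this can be run on the namenode (a minimal sketch using the hostnames above; repeat for hadoop-dn2 and hadoop-dn3):

# Resolve the hostnames through /etc/hosts
getent hosts hadoop-nn hadoop-dn1

# Should print the remote hostname without a password prompt
ssh hadoop-dn1 hostname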

I don't understand why MapReduce won't run when the namenode and datanodes are working fine. I'm pretty much stuck here; can you help me find the cause?

Thanks

EDIT: here is the hdfs-site.xml configuration (on the namenode):

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/hadoop_stores/hdfs/namenode</value>
    <description>NameNode directory for namespace and transaction logs storage.</description>
</property>
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
<property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>false</value>
</property>
<property>
    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
    <value>false</value>
</property>
<property>
     <name>dfs.namenode.http-address</name>
     <value>hadoop-nn:50070</value>
     <description>Your NameNode hostname for http access.</description>
</property>
<property>
     <name>dfs.namenode.secondary.http-address</name>
     <value>hadoop-nn:50090</value>
     <description>Your Secondary NameNode hostname for http access.</description>
</property>

And on the datanodes:

<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop/hadoop_stores/hdfs/data/datanode</value>
    <description>DataNode directory</description>
</property>

<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
<property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>false</value>
</property>
<property>
     <name>dfs.namenode.http-address</name>
     <value>hadoop-nn:50070</value>
     <description>Your NameNode hostname for http access.</description>
</property>
<property>
     <name>dfs.namenode.secondary.http-address</name>
     <value>hadoop-nn:50090</value>
     <description>Your Secondary NameNode hostname for http access.</description>
</property>

Here is the result of the command hadoop fs -ls /user/hadoop/data:

hadoop@hadoop:~/DATA$ hadoop fs -ls /user/hadoop/data
15/04/09 00:23:27 Found 2 items
-rw-r--r--   3 hadoop supergroup         29 2015-04-09 00:22 /user/hadoop/data/sample.txt
-rw-r--r--   3 hadoop supergroup         27 2015-04-09 00:22 /user/hadoop/data/sample1.txt

hadoop fs -ls /user/hadoop/output

ls: `/user/hadoop/output': No such file or directory
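
For what it's worth, the missing output directory is expected at this point: the job failed before the ApplicationMaster could create it. One related note for re-runs: wordcount refuses to start if the output directory already exists, so after a partially successful attempt it would first have to be removed:

hadoop fs -rm -r /user/hadoop/output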

2 Answers

  • 0

    Found the solution; see this post: yarn shows data nodes id/name as localhost

    Call From localhost.localdomain/127.0.0.1 to localhost.localdomain:56148 failed on connection exception: java.net.ConnectException: Connection refused;
    

    Both the master and the slaves had the hostname localhost.localdomain in /etc/hostname.
    I changed the slaves' hostnames to slave1 and slave2. That worked. Thank you all for your time.

    @kate Make sure /etc/hostname on the namenode and the datanodes is not set to localhost. Just type hostname in the terminal to check. You can set a new hostname with the same command.
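
    For example (a minimal sketch; slave1 stands in for whatever name you choose, and both commands need root):

    # Show the current hostname
    hostname

    # Set the hostname for the running session, and update /etc/hostname
    # so the change survives a reboot
    hostname slave1
    echo slave1 > /etc/hostname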

    My master's and workers'/slaves' /etc/hosts looks like this -

    127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
    #127.0.1.1    localhost
    192.168.111.72  master
    192.168.111.65  worker1
    192.168.111.66  worker2
    

    Hostname of worker1:

    hduser@worker1:/mnt/hdfs/datanode$ cat /etc/hostname 
    worker1
    

    and of worker2:

    hduser@worker2:/usr/local/hadoop/logs$ cat /etc/hostname 
    worker2
    

    Also, you probably don't want to have the "hadoop" hostname on the loopback interface, i.e.

    127.0.0.1 localhost hadoop
    

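    Applied to the /etc/hosts from the question, that means moving "hadoop" off the loopback line and onto the machine's real address (a sketch, assuming the node whose hostname is hadoop is the namenode at 192.168.56.102):

    127.0.0.1      localhost
    192.168.56.102 hadoop-nn hadoop
    192.168.56.103 hadoop-dn1
    192.168.56.104 hadoop-dn2
    192.168.56.105 hadoop-dn3
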
    Check point (1) at https://wiki.apache.org/hadoop/ConnectionRefused.

    Thanks.

  • 0

    FIREWALL ISSUE:

    java.net.ConnectException: Connection refused

    This error can be caused by a firewall problem. Run the following in a terminal:

    sudo apt-get install iptables-persistent
    sudo iptables -L
    sudo iptables-save > /usr/iptables-backup/iptables.v4.rules
    

    Check that the file has been created before continuing (it will be used to restore the firewall if anything goes wrong).
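
    Note that iptables-save will fail if the /usr/iptables-backup directory does not exist yet, so create it first if needed:

    sudo mkdir -p /usr/iptables-backup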

    Now, flush the iptables rules (i.e. stop the firewall):

    sudo iptables -F
    

    Now try:

    sudo iptables -L
    

    This command should return no rules. Now, try running the map/reduce job.

    Note: if you want to restore iptables to its previous state, type this in the terminal:

    sudo iptables-restore < /usr/iptables-backup/iptables.v4.rules
