I set up a single-node Hadoop cluster on AWS, configured Hadoop, and started the HDFS/YARN daemons. HDFS works fine, but the MapReduce examples (I tried grep and randomwriter) fail with connection timeout errors.

Versions: Ubuntu 16.04, Hadoop 2.7.2, Java 1.8.0_101

/etc/hosts: removed the localhost mapping and added PTMaster:

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
172.31.23.118 PTMaster
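
As a sanity check, name resolution for PTMaster can be verified on the instance itself (expected output noted in comments, not captured here):

getent hosts PTMaster   # should print: 172.31.23.118 PTMaster
hostname                # should return a name that resolves, e.g. PTMaster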

Hadoop Configuration

core-site.xml:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://PTMaster:9000</value>
</property>
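
One way to confirm the NameNode URI that clients actually resolve (a suggested check, not part of the original steps) is:

hdfs getconf -confKey fs.defaultFS   # should print: hdfs://PTMaster:9000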

hdfs-site.xml:

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>

mapred-site.xml:

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

yarn-site.xml:

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>PTMaster</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

Disabled IPv6 by updating YARN_OPTS in yarn-env.sh:

YARN_OPTS="$YARN_OPTS -Djava.net.preferIPv4Stack=true"
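
If preferring IPv4 in the JVM alone is not enough, IPv6 could also be disabled at the OS level (an untested alternative, not something I have applied here):

sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1       # disable IPv6 on all interfaces
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1   # and on newly created interfaces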

Formatted HDFS and started the HDFS/YARN daemons:

hdfs namenode -format
$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode
$HADOOP_HOME/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode
$HADOOP_HOME/sbin/yarn-daemon.sh --config $YARN_CONF_DIR start resourcemanager
$HADOOP_HOME/sbin/yarn-daemon.sh --config $YARN_CONF_DIR start nodemanager
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver

These Java processes started (output of jps):

12354 DataNode
12451 ResourceManager
12517 NodeManager
12295 NameNode
12653 Jps
12575 JobHistoryServer
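
To double-check that the NodeManager actually registered with the ResourceManager, and the DataNode with the NameNode, these standard commands can be used (a suggested check; expected results noted in comments):

yarn node -list -all    # the single NodeManager should be listed as RUNNING
hdfs dfsadmin -report   # the DataNode should report non-zero configured capacity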

netstat shows these listening sockets:

tcp        0      0 172.31.23.118:8088      0.0.0.0:*               LISTEN      12451/java
tcp        0      0 0.0.0.0:45945           0.0.0.0:*               LISTEN      12517/java
tcp        0      0 0.0.0.0:13562           0.0.0.0:*               LISTEN      12517/java
tcp        0      0 0.0.0.0:50010           0.0.0.0:*               LISTEN      12354/java
tcp        0      0 0.0.0.0:50075           0.0.0.0:*               LISTEN      12354/java
tcp        0      0 172.31.23.118:8030      0.0.0.0:*               LISTEN      12451/java
tcp        0      0 172.31.23.118:8031      0.0.0.0:*               LISTEN      12451/java
tcp        0      0 172.31.23.118:8032      0.0.0.0:*               LISTEN      12451/java
tcp        0      0 172.31.23.118:8033      0.0.0.0:*               LISTEN      12451/java
tcp        0      0 127.0.0.1:39970         0.0.0.0:*               LISTEN      12354/java
tcp        0      0 0.0.0.0:50020           0.0.0.0:*               LISTEN      12354/java
tcp        0      0 0.0.0.0:8040            0.0.0.0:*               LISTEN      12517/java
tcp        0      0 172.31.23.118:9000      0.0.0.0:*               LISTEN      12295/java
tcp        0      0 0.0.0.0:8042            0.0.0.0:*               LISTEN      12517/java
tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      12295/java

Now I can run HDFS commands and create directories/files:

hdfs dfs -put etc/hadoop/ /tmp/input
hdfs dfs -ls /tmp/input
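
Block-level health of the uploaded input can also be confirmed with fsck (an optional check):

hdfs fsck /tmp/input -files -blocks   # all blocks should be reported healthy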

However, when I run grep via MapReduce, I see timeout errors and the connection to the EC2 instance eventually drops:

hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar grep /tmp/input /tmp/output 'dfs[a-z.]+'
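
(The grep example chains two MapReduce jobs, a search job followed by a sort job; if it had completed, the results could be read back with the command below.)

hdfs dfs -cat /tmp/output/*   # would list the grep match counts once the job completes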

Here is an excerpt from yarn-nodemanager.log:

2016-12-04 23:50:01,325 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1480895168558_0001_01_000001 transitioned from LOCALIZED to RUNNING
2016-12-04 23:50:01,412 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1480895168558_0001/container_1480895168558_0001_01_000001/default_container_executor.sh]
2016-12-04 23:50:03,090 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1480895168558_0001_01_000001
2016-12-04 23:50:03,268 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 13527 for container-id container_1480895168558_0001_01_000001: 49.4 MB of 2 GB physical memory used; 2.5 GB of 4.2 GB virtual memory used
2016-12-04 23:50:06,427 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 13527 for container-id container_1480895168558_0001_01_000001: 71.3 MB of 2 GB physical memory used; 2.6 GB of 4.2 GB virtual memory used
2016-12-04 23:50:12,873 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 13527 for container-id container_1480895168558_0001_01_000001: 85.6 MB of 2 GB physical memory used; 2.6 GB of 4.2 GB virtual memory used
2016-12-04 23:55:50,166 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 13527 for container-id container_1480895168558_0001_01_000001: 85.9 MB of 2 GB physical memory used; 2.6 GB of 4.2 GB virtual memory used
2016-12-04 23:55:50,246 ERROR org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Caught exception in status-updater
java.io.IOException: Failed on local exception: java.io.IOException: Connection reset by peer; Host Details : local host is: "PTMaster/172.31.23.118"; destination host is: "PTMaster":8031;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
        at org.apache.hadoop.ipc.Client.call(Client.java:1524)
        at org.apache.hadoop.ipc.Client.call(Client.java:1454)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:234)
        at com.sun.proxy.$Proxy72.nodeHeartbeat(Unknown Source)
        at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.nodeHeartbeat(ResourceTrackerPBClientImpl.java:80)
        at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy73.nodeHeartbeat(Unknown Source)
        at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl$1.run(NodeStatusUpdaterImpl.java:596)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:197)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
        at org.apache.hadoop.net.SocketInputStream$Reader.read_aroundBody0(SocketInputStream.java:57)
        at org.apache.hadoop.net.SocketInputStream$Reader.read_aroundBody1$advice(SocketInputStream.java:41)
        at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:534)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
        at java.io.DataInputStream.readInt(DataInputStream.java:387)
        at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1111)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:996)
2016-12-04 23:55:52,252 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: PTMaster/172.31.23.118:8031. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-12-04 23:55:53,253 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: PTMaster/172.31.23.118:8031. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-12-04 23:55:53,286 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 13527 for container-id container_1480895168558_0001_01_000001: 142.5 MB of 2 GB physical memory used; 2.6 GB of 4.2 GB virtual memory used

.....
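
Since the failure is the NodeManager heartbeat to PTMaster:8031, a simple probe of the ResourceManager RPC ports from the same host (a suggested diagnostic, not something I have run) would be:

nc -vz PTMaster 8031   # resource-tracker port used by the NodeManager heartbeat
nc -vz PTMaster 8032   # client RPC port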

Please advise whether the Hadoop and TCP configuration is correct.

[Edit] Corrected hdfs-site.xml