I configured Kerberos on AWS EMR, and it reported a successful setup for the hdfs and hadoop users.

However, issuing an "hdfs dfs -ls" command throws the error: "java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections."
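
For context, this is how I would expect to confirm which authentication mode the client actually resolves (a sketch; hadoop.security.authentication and hadoop.security.authorization are the standard Hadoop property names, and on a Kerberized client the first should print "kerberos", not "simple"):

-bash-4.2$ hdfs getconf -confKey hadoop.security.authentication
-bash-4.2$ hdfs getconf -confKey hadoop.security.authorization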

On the Kerberos client:

-bash-4.2$ kinit
Password for hdfs@MYCLUSTER.COM:

-bash-4.2$ klist -k -t /etc/hadoop/conf/hdfs.keytab
Keytab name: FILE:/etc/hadoop/conf/hdfs.keytab
KVNO Timestamp           Principal
   3 05/14/2017 00:14:04 hdfs@MYCLUSTER.COM
   3 05/14/2017 00:14:04 hdfs@MYCLUSTER.COM
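
As an extra sanity check (sketched here; the keytab path and principal are the ones listed by klist above), the ticket can also be obtained non-interactively from the keytab and then inspected:

-bash-4.2$ kinit -kt /etc/hadoop/conf/hdfs.keytab hdfs@MYCLUSTER.COM
-bash-4.2$ klist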

I verified that the KDC server and the client are in sync with respect to NTP (no clock skew was observed).
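
(That check was roughly the following, shown only as an illustration; <kdc-host> is a placeholder:)

-bash-4.2$ ntpq -p
-bash-4.2$ date; ssh <kdc-host> date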

I enabled debug logging; here is a snippet of the log:

[root@ip-172-31-49-79 etc]# HADOOP_ROOT_LOGGER=DEBUG,console hdfs dfs -ls /
17/05/14 02:12:22 DEBUG util.Shell: setsid exited with exit code 0
17/05/14 02:12:23 DEBUG conf.Configuration: parsing URL jar:file:/usr/lib/hadoop/hadoop-common-2.7.3-amzn-2.jar!/core-default.xml
17/05/14 02:12:23 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@25359ed8
17/05/14 02:12:23 DEBUG conf.Configuration: parsing URL file:/etc/hadoop/conf.empty/core-site.xml
17/05/14 02:12:23 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@6f3b5d16
17/05/14 02:12:23 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
17/05/14 02:12:23 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
17/05/14 02:12:23 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[GetGroups])
17/05/14 02:12:23 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
17/05/14 02:12:23 DEBUG security.Groups:  Creating new Groups object
17/05/14 02:12:23 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
17/05/14 02:12:23 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
17/05/14 02:12:23 DEBUG security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
17/05/14 02:12:23 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
17/05/14 02:12:23 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
17/05/14 02:12:23 DEBUG security.UserGroupInformation: hadoop login
17/05/14 02:12:23 DEBUG security.UserGroupInformation: hadoop login commit
17/05/14 02:12:23 DEBUG security.UserGroupInformation: using kerberos user:null
17/05/14 02:12:23 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: root
17/05/14 02:12:23 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: root" with name root
17/05/14 02:12:23 DEBUG security.UserGroupInformation: User entry: "root"
17/05/14 02:12:23 DEBUG security.UserGroupInformation: UGI loginUser:root (auth:KERBEROS)
17/05/14 02:12:23 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
17/05/14 02:12:23 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
17/05/14 02:12:23 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
17/05/14 02:12:23 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
17/05/14 02:12:23 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
17/05/14 02:12:23 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@6b58b9e9
17/05/14 02:12:23 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@4167d97b
17/05/14 02:12:24 DEBUG azure.NativeAzureFileSystem: finalize() called.
17/05/14 02:12:24 DEBUG azure.NativeAzureFileSystem: finalize() called.
17/05/14 02:12:24 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$2@3213a316: starting with interruptCheckPeriodMs = 60000
17/05/14 02:12:24 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
17/05/14 02:12:24 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
17/05/14 02:12:24 DEBUG ipc.Client: The ping interval is 60000 ms.
17/05/14 02:12:24 DEBUG ipc.Client: Connecting to ip-172-31-49-79.ec2.internal/172.31.49.79:8020
17/05/14 02:12:24 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:KERBEROS) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:725)
17/05/14 02:12:24 DEBUG security.SaslRpcClient: Sending sasl message state: NEGOTIATE

17/05/14 02:12:24 DEBUG security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"b44IiskXjHckOO9qu4sNJ6Dm7lqzWmH+iyShWzt4\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}
auths {
  method: "SIMPLE"
  mechanism: ""
}

17/05/14 02:12:24 DEBUG security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:@org.apache.hadoop.security.token.TokenInfo(value=class org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector)
17/05/14 02:12:24 DEBUG security.SaslRpcClient: Use SIMPLE authentication for protocol ClientNamenodeProtocolPB
17/05/14 02:12:24 DEBUG security.SaslRpcClient: Sending sasl message state: INITIATE
auths {
  method: "SIMPLE"
  mechanism: ""
}

17/05/14 02:12:24 DEBUG ipc.Client: closing ipc connection to ip-172-31-49-79.ec2.internal/172.31.49.79:8020: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.
java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:754)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:265)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1732)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1713)
    at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
    at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
    at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
    at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
17/05/14 02:12:24 DEBUG ipc.Client: IPC Client (735937428) connection to ip-172-31-49-79.ec2.internal/172.31.49.79:8020 from root: closed
ls: Failed on local exception: java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.; Host Details : local host is: "ip-172-31-49-79.ec2.internal/172.31.49.79"; destination host is: "ip-172-31-49-79.ec2.internal":8020;
17/05/14 02:12:24 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@4167d97b
17/05/14 02:12:24 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@4167d97b
17/05/14 02:12:24 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@4167d97b
17/05/14 02:12:24 DEBUG ipc.Client: Stopping client