
"No common protection layer between client and server" when trying to communicate with a Kerberized Hadoop cluster


I am trying to programmatically communicate with a Kerberized Hadoop cluster (CDH 5.3 / HDFS 2.5.0).

I have a valid Kerberos token on the client side, but I am getting an error like the one below: "No common protection layer between client and server".

What does this error mean, and is there a way to fix or work around it?

Is this possibly related to HDFS-5688? The ticket seems to imply that the property "hadoop.rpc.protection" must be set, presumably to "authentication" (also per this).

Would it need to be set on all the servers in the cluster and the cluster then bounced? I don't have easy access to the cluster, so I need to understand whether "hadoop.rpc.protection" is the actual cause. It seems like "authentication" should be the value used by default, at least according to the core-default.xml documentation.
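
For reference, a minimal sketch of how the value the client actually resolves for this property can be checked (the class name is made up for illustration, and it assumes any core-site.xml overrides are on the client classpath):

    import org.apache.hadoop.conf.Configuration;

    public class CheckRpcProtection {
        public static void main(String[] args) {
            // new Configuration() loads core-default.xml plus any
            // core-site.xml found on the classpath.
            Configuration conf = new Configuration();
            // Falls back to "authentication", the core-default.xml default.
            System.out.println(conf.get("hadoop.rpc.protection", "authentication"));
        }
    }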

java.io.IOException: Failed on local exception: java.io.IOException: Couldn't setup connection for principal1/server1.acme.net@xxx.acme.net to server2.acme.net/10.XX.XXX.XXX:8020; Host Details : local host is: "some-host.acme.net/168.XX.XXX.XX"; destination host is: "server2.acme.net":8020;

    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)

    at org.apache.hadoop.ipc.Client.call(Client.java:1415)

    at org.apache.hadoop.ipc.Client.call(Client.java:1364)

    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)

    at com.sun.proxy.$Proxy24.getFileInfo(Unknown Source)

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

    at java.lang.reflect.Method.invoke(Method.java:498)

    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)

    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)

    at com.sun.proxy.$Proxy24.getFileInfo(Unknown Source)

    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:707)

    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1785)

    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1068)

    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)

    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)

    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)

    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)

    ... 11 more

Caused by: java.io.IOException: Couldn't setup connection for principal1/server1.acme.net@xxx.acme.net to server2.acme.net/10.XX.XXX.XXX:8020;

    at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:671)

    at java.security.AccessController.doPrivileged(Native Method)

    at javax.security.auth.Subject.doAs(Subject.java:422)

    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)

    at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:642)

    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:725)

    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)

    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)

    at org.apache.hadoop.ipc.Client.call(Client.java:1382)

    ... 31 more

Caused by: javax.security.sasl.SaslException: No common protection layer between client and server

    at com.sun.security.sasl.gsskerb.GssKrb5Client.doFinalHandshake(GssKrb5Client.java:251)

    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:186)

    at org.apache.hadoop.security.SaslRpcClient.saslEvaluateToken(SaslRpcClient.java:483)

    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:427)

    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:552)

    at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:367)

    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:717)

    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:713)

    at java.security.AccessController.doPrivileged(Native Method)

    at javax.security.auth.Subject.doAs(Subject.java:422)

    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)

    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)

    ... 34 more

1 Answer


To fix the "No common protection layer between client and server" error from SASL, I needed to set "hadoop.rpc.protection" to the same value configured on the server side of the cluster, which in this case happened to be "privacy". (The error itself means that the SASL quality-of-protection lists offered by client and server do not overlap: "authentication", "integrity" and "privacy" correspond to the SASL QOP values "auth", "auth-int" and "auth-conf" respectively.)

In addition, the cluster is configured for HA, so I had to pick the right hostname to use in the HDFS URI ("fs.defaultFS") and in the "dfs.namenode.kerberos.principal" property:

    Configuration config = new Configuration();
    config.set("fs.defaultFS", "hdfs://host1.acme.com:8020");
    config.set("hadoop.security.authentication", "kerberos");
    config.set("hadoop.rpc.protection", "privacy");
    // Need this or we get the error "Server has invalid Kerberos principal":
    config.set("dfs.namenode.kerberos.principal",  
        "hdfs/host1.acme.com@ACME.DYN.ROOT.NET");
    
