
Unable to use the Flink RocksDB state backend due to an endianness mismatch

My Flink job reads from a Kafka topic and stores data in the RocksDB state backend so that I can use queryable state. I am able to run the job and query the state on my local machine, but when I deploy it on the cluster I get the following error:

Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply$mcV$sp(JobManager.scala:900)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:843)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:843)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Caused by: java.lang.IllegalStateException: Could not initialize keyed state backend.
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initKeyedState(AbstractStreamOperator.java:286)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:199)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeOperators(StreamTask.java:666)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:654)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:257)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655)
at java.lang.Thread.run(Thread.java:745)

Caused by: java.lang.Exception: Could not load the native RocksDB library
at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.ensureRocksDBIsLoaded(RocksDBStateBackend.java:483)
at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createKeyedStateBackend(RocksDBStateBackend.java:235)
at org.apache.flink.streaming.runtime.tasks.StreamTask.createKeyedStateBackend(StreamTask.java:785)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initKeyedState(AbstractStreamOperator.java:277)
... 6 more

Caused by: java.lang.UnsatisfiedLinkError: /data/hadoop/tmp/rocksdb-lib-32bc6a5551b331596649309a808b287d/librocksdbjni-linux64.so: /data/hadoop/tmp/rocksdb-lib-32bc6a5551b331596649309a808b287d/librocksdbjni-linux64.so: ELF file data encoding not big-endian (Possible cause: endianness mismatch)
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
at java.lang.Runtime.load0(Runtime.java:809)
at java.lang.System.load(System.java:1086)
at org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:78)
at org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:56)
at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.ensureRocksDBIsLoaded(RocksDBStateBackend.java:460)

I have tried setting the RocksDB state backend at both the cluster level and the job level. When it is set at the job level, I provide it as a shaded dependency. I have also tried compiling the code on the cluster machine itself. In every case I get the same error:

ELF file data encoding not big-endian (Possible cause: endianness mismatch)
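
For reference, the job-level setup described above amounts to roughly the sketch below. This is a minimal illustration rather than the actual job: the checkpoint path, state names, and the socket source standing in for the Kafka consumer are placeholders.

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class QueryableStateJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Job-level RocksDB state backend; the cluster-level equivalent is
        // "state.backend: rocksdb" in flink-conf.yaml.
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"));

        // The real job uses a Kafka consumer; a socket source keeps the sketch self-contained.
        env.socketTextStream("localhost", 9999)
                .keyBy(new KeySelector<String, String>() {
                    @Override
                    public String getKey(String value) {
                        return value;
                    }
                })
                .flatMap(new CountingFunction())
                .print();

        env.execute("queryable-state-job");
    }

    // Keeps a per-key count in keyed state and marks it as queryable.
    public static class CountingFunction extends RichFlatMapFunction<String, Long> {

        private transient ValueState<Long> count;

        @Override
        public void open(Configuration parameters) {
            ValueStateDescriptor<Long> descriptor =
                    new ValueStateDescriptor<>("count", Long.class);
            descriptor.setQueryable("count-query"); // exposes this state to queryable-state clients
            count = getRuntimeContext().getState(descriptor);
        }

        @Override
        public void flatMap(String value, Collector<Long> out) throws Exception {
            Long current = count.value();
            long updated = (current == null ? 0L : current) + 1;
            count.update(updated);
            out.collect(updated);
        }
    }
}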

How can I fix this error?

1 Answer

This is how I solved it:

• Clone frocksdb.

• Apply the RocksDB patch from this commit.

• Run make rocksdbjava on the target platform.

• Add the generated rocksdbjni-<version-platform>.jar as a local Maven dependency.

• Build Flink on the target platform, with flink-statebackend-rocksdb/pom.xml pointing to the local Maven jar.

Note: this patch is already included in recent versions of RocksDB. Once frocksdb is updated to the current RocksDB version, this issue should no longer occur.
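
Once the rebuilt jar is on the classpath, a quick way to confirm that the bundled native library actually matches the target platform is a small load test like the sketch below (the class name is just an example). On a mismatched platform it fails with the same UnsatisfiedLinkError shown in the question.

import org.rocksdb.RocksDB;

public class RocksDbLoadCheck {

    public static void main(String[] args) {
        // Extracts and loads the native librocksdbjni shipped inside the rocksdbjni jar
        // on the classpath; throws UnsatisfiedLinkError if the library was built for a
        // different architecture or byte order.
        RocksDB.loadLibrary();
        System.out.println("Native RocksDB library loaded successfully");
    }
}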
