With a large number of Flink SQL queries (100 copies of the one below), the Flink command-line client fails on a YARN cluster with "JobManager did not respond within 600000 ms", i.e. the job never starts on the cluster.

The JobManager log shows nothing after the last TaskManager starts, except DEBUG entries saying "job with ID 5cd95f89ed7a66ec44f2d19eca0592f7 not found in JobManager", suggesting it is probably stuck (building the ExecutionGraph?).

The same thing happens with a local standalone Java program (initially high CPU).

Note: each row in structStream contains 515 columns (many of which end up null), including a column holding the raw message.

On the YARN cluster we give each TaskManager 18GB and the JobManager 18GB, with 5 slots each, and a parallelism of 725 (the number of partitions in our Kafka source).

Flink SQL query:
select count (*), 'idnumber' as criteria, Environment, CollectedTimestamp,
EventTimestamp, RawMsg, Source
from structStream
where Environment='MyEnvironment' and Rule='MyRule' and LogType='MyLogType'
and Outcome='Success'
group by tumble(proctime, INTERVAL '1' SECOND), Environment,
CollectedTimestamp, EventTimestamp, RawMsg, Source
Code:
public static void main(String[] args) throws Exception {
    FileSystems.newFileSystem(KafkaReadingStreamingJob.class
            .getResource(WHITELIST_CSV).toURI(), new HashMap<>());

    final StreamExecutionEnvironment streamingEnvironment = getStreamExecutionEnvironment();
    final StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(streamingEnvironment);

    final DataStream<Row> structStream = getKafkaStreamOfRows(streamingEnvironment);
    tableEnv.registerDataStream("structStream", structStream);
    tableEnv.scan("structStream").printSchema();

    for (int i = 0; i < 100; i++) {
        for (String query : Queries.sample) {
            // Queries.sample has one query, the one shown above.
            Table selectQuery = tableEnv.sqlQuery(query);
            DataStream<Row> selectQueryStream = tableEnv.toAppendStream(selectQuery, Row.class);
            selectQueryStream.print();
        }
    }

    // execute program
    streamingEnvironment.execute("Kafka Streaming SQL");
}
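The loop above wires 100 copies of the query into a single job before `execute()` is called, so the JobManager must build and schedule one ExecutionGraph containing every pipeline at parallelism 725. A back-of-envelope estimate of the resulting task count (the operators-per-query figure of 3 is an assumption, e.g. source, window aggregate, and sink per pipeline):

```java
// Back-of-envelope estimate of how many tasks the single submitted job asks
// the JobManager to schedule. 725 (parallelism / Kafka partitions) and 100
// (iterations of the submission loop) come from the question; 3 operators
// per query pipeline is an assumption.
public class TaskCountEstimate {

    static long estimate(int parallelism, int queries, int opsPerQuery) {
        return (long) parallelism * queries * opsPerQuery;
    }

    public static void main(String[] args) {
        System.out.println("Estimated tasks in one ExecutionGraph: "
                + estimate(725, 100, 3)); // prints 217500
    }
}
```

A graph of that size can plausibly keep the JobManager busy long enough to exceed the client's 600000 ms timeout, which matches the symptoms above.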
private static DataStream<Row> getKafkaStreamOfRows(StreamExecutionEnvironment environment) throws Exception {
    Properties properties = getKafkaProperties();
    // TestDeserializer deserializes the JSON to a ROW of string columns (515)
    // and also adds a column for the raw message.
    FlinkKafkaConsumer011 consumer = new FlinkKafkaConsumer011(
            KAFKA_TOPIC_TO_CONSUME, new TestDeserializer(getRowTypeInfo()), properties);
    DataStream<Row> stream = environment.addSource(consumer);
    return stream;
}

private static RowTypeInfo getRowTypeInfo() throws Exception {
    // This has 515 fields.
    List<String> fieldNames = DDIManager.getDDIFieldNames();
    fieldNames.add("rawkafka"); // rawMessage added by TestDeserializer
    fieldNames.add("proctime");
    // Fill typeInformationArray with StringType for all but the last field, which is of type Time
    .....
    return new RowTypeInfo(typeInformationArray, fieldNamesArray);
}

private static StreamExecutionEnvironment getStreamExecutionEnvironment() throws IOException {
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
    env.enableCheckpointing(60000);
    env.setStateBackend(new FsStateBackend(CHECKPOINT_DIR));
    env.setParallelism(725);
    return env;
}
1 Answer

This looks like the JobManager is overloaded by too many concurrently running jobs. I would suggest distributing the jobs across more JobManagers / Flink clusters.
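One way to act on this advice without standing up extra clusters is to submit each query as its own job, so that no single ExecutionGraph has to hold all 100 pipelines. A minimal sketch under that assumption, reusing the question's own helpers (`getStreamExecutionEnvironment`, `getKafkaStreamOfRows`, `Queries.sample`); note that `execute()` blocks, so running the jobs in parallel would mean submitting them from separate client invocations:

```java
// Sketch: one Flink job per SQL query instead of 100 pipelines in one job.
// Each iteration builds a fresh environment and calls execute(), so the
// JobManager only schedules one query's ExecutionGraph per submission.
for (String query : Queries.sample) {
    final StreamExecutionEnvironment env = getStreamExecutionEnvironment();
    final StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
    tableEnv.registerDataStream("structStream", getKafkaStreamOfRows(env));
    Table selectQuery = tableEnv.sqlQuery(query);
    tableEnv.toAppendStream(selectQuery, Row.class).print();
    env.execute("Kafka Streaming SQL: " + query); // blocks until this job finishes
}
```

Lowering the default parallelism of 725 for the lightweight per-query operators would shrink each graph further, if the Kafka source is the only operator that actually needs one task per partition.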