
Flume NG / Avro source, memory channel and HDFS sink - too many small files

I am facing a strange problem. I want to aggregate a large volume of messages through Flume into HDFS. I applied the recommended configuration to avoid too many small files, but it does not work. Here is my configuration file.

# single-node Flume configuration
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 5458
a1.sources.r1.threads = 20

# Describe the HDFS sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://myhost:myport/user/myuser/flume/events/%{senderType}/%{senderName}/%{senderEnv}/%y-%m-%d/%H%M
a1.sinks.k1.hdfs.filePrefix = logs-
a1.sinks.k1.hdfs.fileSuffix = .jsonlog
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.batchSize = 100
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# never roll based on time
a1.sinks.k1.hdfs.rollInterval=0
## 10MB=10485760, 128MB=134217728, 256MB=268435456
a1.sinks.kl.hdfs.rollSize=10485760
## never roll based on number of events
a1.sinks.kl.hdfs.rollCount=0
a1.sinks.kl.hdfs.round=false

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 5000
a1.channels.c1.transactionCapacity = 1000

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
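
For reference, events can be pushed into this Avro source for testing with Flume's bundled avro-client; a minimal sketch, assuming a local agent and an illustrative sample file path:

flume-ng avro-client -H localhost -p 5458 -F /tmp/sample.jsonlog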

This configuration works and I can see my files. But the files average only 1.5 kB in size. The Flume console output shows messages like these:

16/08/03 09:48:31 INFO hdfs.BucketWriter: Creating  hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484507.jsonlog.tmp
16/08/03 09:48:31 INFO hdfs.BucketWriter: Closing hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484507.jsonlog.tmp
16/08/03 09:48:31 INFO hdfs.BucketWriter: Renaming hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484507.jsonlog.tmp to hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484507.jsonlog
16/08/03 09:48:31 INFO hdfs.BucketWriter: Creating hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484508.jsonlog.tmp
16/08/03 09:48:31 INFO hdfs.BucketWriter: Closing hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484508.jsonlog.tmp
16/08/03 09:48:31 INFO hdfs.BucketWriter: Renaming hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484508.jsonlog.tmp to hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484508.jsonlog
16/08/03 09:48:31 INFO hdfs.BucketWriter: Creating hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484509.jsonlog.tmp
16/08/03 09:48:31 INFO hdfs.BucketWriter: Closing hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484509.jsonlog.tmp

Does anyone have any insight into this problem?


Here is some additional information about Flume's behavior.

The command is:

flume-ng agent -n a1 -c /path/to/flume/conf --conf-file sample-flume.conf -Dflume.root.logger=TRACE,console -Xms8192m -Xmx16384m

Note: the logger directive does not seem to be taken into account; I don't understand why, but I...
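
If the command-line flags are not picked up, both the log level and the JVM heap settings can usually be set under the agent's conf directory instead; a sketch, assuming a stock Flume distribution layout:

# conf/flume-env.sh
export JAVA_OPTS="-Xms8192m -Xmx16384m"

# conf/log4j.properties (ships with INFO,LOGFILE by default)
flume.root.logger=TRACE,console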

The Flume startup output is:

16/08/03 15:32:55 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
16/08/03 15:32:55 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:sample-flume.conf
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:kl
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Added sinks: k1 Agent: a1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:kl
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:kl
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [a1]
16/08/03 15:32:55 INFO node.AbstractConfigurationProvider: Creating channels
16/08/03 15:32:55 INFO channel.DefaultChannelFactory: Creating instance of channel c1 type memory
16/08/03 15:32:55 INFO node.AbstractConfigurationProvider: Created channel c1
16/08/03 15:32:55 INFO source.DefaultSourceFactory: Creating instance of source r1, type avro
16/08/03 15:32:55 INFO sink.DefaultSinkFactory: Creating instance of sink: k1, type: hdfs
16/08/03 15:32:56 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
16/08/03 15:32:56 INFO node.AbstractConfigurationProvider: Channel c1 connected to [r1, k1]
16/08/03 15:32:56 INFO node.Application: Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:Avro source r1: { bindAddress: 0.0.0.0, port: 5458 } }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@466ab18a counterGroup:{ name:null counters:{} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
16/08/03 15:32:56 INFO node.Application: Starting Channel c1
16/08/03 15:32:56 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
16/08/03 15:32:56 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: c1 started
16/08/03 15:32:56 INFO node.Application: Starting Sink k1
16/08/03 15:32:56 INFO node.Application: Starting Source r1
16/08/03 15:32:56 INFO source.AvroSource: Starting Avro source r1: { bindAddress: 0.0.0.0, port: 5458 }...
16/08/03 15:32:56 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
16/08/03 15:32:56 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: k1 started
16/08/03 15:32:56 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
16/08/03 15:32:56 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r1 started
16/08/03 15:32:56 INFO source.AvroSource: Avro source r1 started.

Since I cannot get more verbose output, I have to assume that messages such as

[...]
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Added sinks: k1 Agent: a1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
[...]

mean that the sink is configured correctly.
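
For what it's worth, a quick way to cross-check which sink names the configuration file actually defines (a sketch, assuming a POSIX shell and the file name from the command above):

grep -o 'a1\.sinks\.[a-z0-9]*' sample-flume.conf | sort | uniq -c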


PS: I have seen the following answers, but none of them worked (I must have missed something...).

flume-hdfs-sink-generates-lots-of-tiny-files-on-hdfs

too-many-small-files-hdfs-sink-flume

flume-tiering-data-flows-using-the-avro-source-and-sink

flume-hdfs-sink-keeps-rolling-small-files

1 Answer

Increase the batch size according to your requirements:

a1.sinks.k1.hdfs.batchSize =
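
For context, hdfs.batchSize only controls how many events are written to the file before it is flushed to HDFS; when a file actually rolls is governed by the hdfs.rollInterval, hdfs.rollSize, and hdfs.rollCount properties. A sketch of a size-based rolling sink section, with illustrative values (not from the original answer):

a1.sinks.k1.hdfs.batchSize = 1000
# roll only when the file reaches ~128 MB
a1.sinks.k1.hdfs.rollSize = 134217728
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.rollInterval = 0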
