
Unable to create a sink of type HDFS in flume-ng


I have a flume-ng setup that writes logs to HDFS. I created an agent on a single node, but it does not run. Here is my configuration:


# example2.conf: a single-node Flume configuration

# Name the components on this agent
agent1.sources = source1
agent1.sinks = sink1
agent1.channels = channel1

# Describe/configure source1
agent1.sources.source1.type = avro
agent1.sources.source1.bind = localhost
agent1.sources.source1.port = 41414

# Use a channel which buffers events in memory
agent1.channels.channel1.type = memory
agent1.channels.channel1.capacity = 10000
agent1.channels.channel1.transactionCapacity = 100

# Describe sink1
agent1.sinks.sink1.type = HDFS
agent1.sinks.sink1.hdfs.path = hdfs://dbkorando.kaist.ac.kr:9000/flume

# Bind the source and sink to the channel
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1
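
Aside: working setups usually give the HDFS sink a few more parameters controlling file format and rolling. A minimal sketch of such a sink block, with illustrative values that are not taken from the question:

# the documented alias for the sink type is lowercase hdfs
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://dbkorando.kaist.ac.kr:9000/flume
# write plain text instead of the default SequenceFile
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.writeFormat = Text
# roll the output file every 30 seconds, regardless of size or event count
agent1.sinks.sink1.hdfs.rollInterval = 30
agent1.sinks.sink1.hdfs.rollSize = 0
agent1.sinks.sink1.hdfs.rollCount = 0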


The command I run is

flume-ng agent -n agent1 -c conf -C /home/hyahn/hadoop-0.20.2/hadoop-0.20.2-core.jar -f conf/example2.conf -Dflume.root.logger=INFO,console
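
For readers unfamiliar with the flume-ng launcher, the same command annotated flag by flag (flag meanings as in the standard flume-ng usage text):

# -n  name of the agent to run (must match the property prefix, agent1)
# -c  directory holding the Flume configuration (flume-env.sh, log4j.properties)
# -C  extra entries appended to the classpath; here the Hadoop core jar needed for HDFS access
# -f  the properties file defining the sources, channels, and sinks
# -D  sets a Java system property; here it routes the root logger to the console
flume-ng agent -n agent1 -c conf -C /home/hyahn/hadoop-0.20.2/hadoop-0.20.2-core.jar -f conf/example2.conf -Dflume.root.logger=INFO,console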

and the result is


Info: Including Hadoop libraries found via (/home/hyahn/hadoop-0.20.2/bin/hadoop) for HDFS access
exec /usr/java/jdk1.7.0_02/bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp '/etc/flume-ng/conf:/usr/lib/flume-ng/lib/*:/home/hyahn/hadoop-0.20.2/hadoop-0.20.2-core.jar' -Djava.library.path=:/home/hyahn/hadoop-0.20.2/bin/../lib/native/Linux-amd64-64 org.apache.flume.node.Application -n agent1 -f conf/example2.conf
2012-11-27 15:33:17,250 (main) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.start(LifecycleSupervisor.java:67)] Starting lifecycle supervisor 1
2012-11-27 15:33:17,253 (main) [INFO - org.apache.flume.node.FlumeNode.start(FlumeNode.java:54)] Flume node starting - agent1
2012-11-27 15:33:17,257 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.conf.file.AbstractFileConfigurationProvider.start(AbstractFileConfigurationProvider.java:67)] Configuration provider starting
2012-11-27 15:33:17,257 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.start(DefaultLogicalNodeManager.java:203)] Node manager starting
2012-11-27 15:33:17,258 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.start(LifecycleSupervisor.java:67)] Starting lifecycle supervisor 9
2012-11-27 15:33:17,258 (conf-file-poller-0) [INFO - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:195)] Reloading configuration file:conf/example2.conf
2012-11-27 15:33:17,266 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:988)] Processing:sink1
2012-11-27 15:33:17,266 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:988)] Processing:sink1
2012-11-27 15:33:17,267 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:988)] Processing:sink1
2012-11-27 15:33:17,268 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:902)] Added sinks: sink1 Agent: agent1
2012-11-27 15:33:17,290 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:122)] Post-validation flume configuration contains configuration for agents: [agent1]
2012-11-27 15:33:17,290 (conf-file-poller-0) [INFO - org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadChannels(PropertiesFileConfigurationProvider.java:249)] Creating channels
2012-11-27 15:33:17,354 (conf-file-poller-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.<init>(MonitoredCounterGroup.java:68)] Monitored counter group for type: CHANNEL, name: channel1, registered successfully.
2012-11-27 15:33:17,355 (conf-file-poller-0) [INFO - org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadChannels(PropertiesFileConfigurationProvider.java:273)] Created channel channel1
2012-11-27 15:33:17,368 (conf-file-poller-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.<init>(MonitoredCounterGroup.java:68)] Monitored counter group for type: SOURCE, name: source1, registered successfully.
2012-11-27 15:33:17,378 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:70)] Creating instance of sink: sink1, type: HDFS


As shown above, the agent stops right at the point where the sink is created. What is the problem?

1 Answer


    You need to open another window and send an avro command to port 41414:

    bin/flume-ng avro-client --conf conf -H localhost -p 41414 -F /home/hadoop1/aaa.txt -Dflume.root.logger=DEBUG,console
    

    Here I have a file named aaa.txt in the /home/hadoop1/ directory.

    Your flume agent will read this file and send it to HDFS.
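
    Putting it together, a quick end-to-end check might look like this (the file path and HDFS URL are the ones from this thread; hadoop fs is the standard HDFS CLI):

    # create a sample file to ship
    echo "hello flume" > /home/hadoop1/aaa.txt
    # send it to the avro source listening on localhost:41414
    bin/flume-ng avro-client --conf conf -H localhost -p 41414 -F /home/hadoop1/aaa.txt
    # check that the sink wrote a file under the configured hdfs.path
    hadoop fs -ls hdfs://dbkorando.kaist.ac.kr:9000/flume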
