I am trying to use Flume to load some ".log" files from my local filesystem into HDFS. I am using a spooling directory as the source and HDFS as the sink. I run the agent with the command below:
bin/flume-ng agent --conf /home/Flume/conf --conf-file /home/Flume/conf/test.conf --name agent

When I execute this command, it just prints the output below and then nothing happens (it appears stuck):

Info: Sourcing environment configuration script /home/Flume/conf/flume-env.sh
Info: Including Hadoop libraries found via (/home/hadoop-2.7.2/bin/hadoop) for HDFS access
exec /usr/java/jdk1.8.0_144/bin/java -Xmx20m -cp '/home/Flume/conf:/home/Flume/lib/*:/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/*' -Djava.library.path=:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib org.apache.flume.node.Application --conf-file /home/Flume/conf/test.conf --name agent
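For reference, Flume can be told to write its log output to the console instead of its log file by passing the standard `flume.root.logger` system property on the command line, which makes it easier to see where the agent stalls (the paths below are the same ones used above):

```shell
# Same agent invocation, but with Flume's log4j root logger redirected to the console
bin/flume-ng agent --conf /home/Flume/conf \
  --conf-file /home/Flume/conf/test.conf \
  --name agent \
  -Dflume.root.logger=INFO,console
```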

The contents of the conf file are below:

agent.sources = src1
agent.channels = chan1
agent.sinks = sink1
agent.sources.src1.type = spooldir
agent.sources.src1.spoolDir = /home//FlumeTesting/flume_sink
agent.sources.src1.basenameHeader = true
agent.sources.src1.deletePolicy = immediate
agent.sources.src1.fileHeader = true
agent.channels.chan1.type = memory
agent.channels.chan1.capacity = 10000
agent.sinks.sink1.type = hdfs
agent.sinks.sink1.hdfs.path = hdfs://localhost:9000/flume_sink
agent.sinks.sink1.hdfs.fileType = DataStream
agent.sinks.sink1.hdfs.rollCount = 1000
agent.sinks.sink1.hdfs.rollSize = 5000
agent.sinks.sink1.hdfs.idleTimeout = 60
agent.sinks.sink1.rollInterval = 500
agent.sinks.sink1.hdfs.filePrefix = %{basename}
agent.sinks.sink1.hdfs.fileSuffix = .log
agent.sources.src1.channels = chan1
agent.sinks.sink1.channel = chan1
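One thing I noticed while writing this up: all HDFS sink properties must carry the `hdfs.` prefix, and the `rollInterval` line above does not have it, so Flume would silently ignore it. A sketch of that sink section with the prefix applied (same values, nothing else changed) would be:

```
agent.sinks.sink1.type = hdfs
agent.sinks.sink1.hdfs.path = hdfs://localhost:9000/flume_sink
agent.sinks.sink1.hdfs.fileType = DataStream
agent.sinks.sink1.hdfs.rollCount = 1000
agent.sinks.sink1.hdfs.rollSize = 5000
agent.sinks.sink1.hdfs.idleTimeout = 60
# rollInterval needs the hdfs. prefix to take effect
agent.sinks.sink1.hdfs.rollInterval = 500
agent.sinks.sink1.hdfs.filePrefix = %{basename}
agent.sinks.sink1.hdfs.fileSuffix = .log
```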

Please correct me if there are any errors in this configuration or if I have missed anything.

Thanks in advance.