hadoop - Flume doesn't work with HDFS


I have set up Flume with two nodes. I want to load data from slave01 into HDFS. slave01: example-conf.properties

  agent.sources = baksrc
  agent.channels = memoryChannel
  agent.sinks = avro-forward-sink
  agent.sources.baksrc.type = exec
  agent.sources.baksrc.command = tail -f /root/hadoop/test/data.txt
  agent.sources.baksrc.checkperiodic = 1000
  agent.sources.baksrc.channels = memoryChannel
  agent.channels.memoryChannel.type = memory
  agent.channels.memoryChannel.keep-alive = 30
  agent.channels.memoryChannel.capacity = 10000
  agent.channels.memoryChannel.transactionCapacity = 10000
  agent.sinks.avro-forward-sink.type = avro
  agent.sinks.avro-forward-sink.hostname = master
  agent.sinks.avro-forward-sink.port = 23004
  agent.sinks.avro-forward-sink.channel = memoryChannel

master: example-conf.properties

  agent.sources = avrosrc
  agent.sinks = hdfs-write
  agent.channels = memoryChannel
  agent.sources.avrosrc.type = avro
  agent.sources.avrosrc.bind = master
  agent.sources.avrosrc.port = 23004
  agent.sources.avrosrc.channels = memoryChannel
  agent.channels.memoryChannel.type = memory
  agent.channels.memoryChannel.keep-alive = 30
  agent.channels.memoryChannel.capacity = 10000
  agent.channels.memoryChannel.transactionCapacity = 10000
  agent.sinks.hdfs-write.type = hdfs
  agent.sinks.hdfs-write.hdfs.path = hdfs://172.16.86.38:9000/flume/webdata
  agent.sinks.hdfs-write.hdfs.rollInterval = 0
  agent.sinks.hdfs-write.hdfs.rollSize = 4000000
  agent.sinks.hdfs-write.hdfs.rollCount = 0
  agent.sinks.hdfs-write.hdfs.writeFormat = Text
  agent.sinks.hdfs-write.hdfs.fileType = DataStream
  agent.sinks.hdfs-write.hdfs.batchSize = 10
  agent.sinks.hdfs-write.channel = memoryChannel

Then I run a shell script such as:

  #!/bin/bash
  for i in {1..1000000}; do
      echo "hbase $i" >> /root/hadoop/test/data.txt
      sleep 0.1
  done

Then I start Flume: flume-ng agent --conf conf --conf-file example-conf.properties --name agent -Dflume.root.logger=DEBUG,console. I do not see any error on the console:

  14/05/06 16:38:44 INFO source.AvroSource: Avro source avrosrc starting: Avro source avrosrc: { bindAddress: master, port: 23004 }
  14/05/06 16:38:44 INFO ipc.NettyServer: [id: 0x49f2de1b, /172.16.86.39:9359 => /172.16.86.38:23004] DISCONNECTED
  14/05/06 16:38:44 INFO ipc.NettyServer: [id: 0x49f2de1b, /172.16.86.39:9359 => /172.16.86.38:23004] UNBOUND
  14/05/06 16:38:44 INFO ipc.NettyServer: [id: 0x49f2de1b, /172.16.86.39:9359 => /172.16.86.38:23004] CLOSED

But I cannot see any file in HDFS. Is there a problem in my configuration? I have tested the setup on the master alone, and there it works fine.
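For reference, a quick way to check whether anything has landed in HDFS (the path is the one from the sink configuration above):

  hadoop fs -ls /flume/webdata
  hadoop fs -ls hdfs://172.16.86.38:9000/flume/webdata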

What version of Flume are you using?
Have you set up HADOOP_HOME?
Does the output of hadoop classpath include the Hadoop jars?
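A quick way to verify both points, assuming the hadoop command is on the PATH:

  echo $HADOOP_HOME
  hadoop classpath | tr ':' '\n' | grep -i hadoop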
If you are using Apache Flume, step by step (a sketch follows the list):
1. Set HADOOP_HOME.
2. Edit Hadoop's core-site.xml and make sure the NameNode address is correct.
3. Use an HDFS path relative to that NameNode: agent.sinks.hdfs-write.hdfs.path = /flume/webdata
4. Start Flume.
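A minimal sketch of these steps; the HADOOP_HOME path below is only an example, the NameNode address is the one from the question, and on Hadoop 2.x the property is fs.defaultFS rather than fs.default.name:

  # 1. point Flume at the Hadoop installation (example path)
  export HADOOP_HOME=/usr/local/hadoop

  # 2. core-site.xml must point at the NameNode the sink writes to:
  #    <property>
  #      <name>fs.default.name</name>
  #      <value>hdfs://172.16.86.38:9000</value>
  #    </property>

  # 3. with that in place the sink path can drop the full URI:
  #    agent.sinks.hdfs-write.hdfs.path = /flume/webdata

  # 4. start the agent
  flume-ng agent --conf conf --conf-file example-conf.properties --name agent -Dflume.root.logger=DEBUG,console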

