Fixing a Flume error: HDFS output written as org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable
Reference material consulted for the fix
The input data
A line of tab-separated strings:
sfae a;ldfja;sldj alwkejhfalskdhf a;slekjafabcd def de ab d eef ddefsef sfae a;ldfja;sldj alwkejhfalskdhf a;slekjafabcd def de ab d eef ddefsef sfae a;ldfja;sldj alwkejhfalskdhf a;slekjafabcd def de ab d eef ddefsef sfae a;ldfja;sldj alwkejhfalskdhf a;slekjaabcd def de ab d eef ddefsef sfae a;ldfja;sldj alwkejhfalskdhf a;slekjafabcd def de ab d eef ddefsef sfae a;ldfja;sldj alwkejhfalskdhf a;slekjaf sfae a;ldfja;sldj alwkejhfalskdhf a;slekjafabcd def de ab d eef ddefsef sfae a;ldfja;sldj alwkejhfalskdhf a;slekjafabcd def de ab d eef ddefsef sfae a;ldfja;sldj alwkejhfalskdhf a;slekjafabcd def de ab d eef ddefsef sfae a;ldfja;sldj alwkejhfalskdhf a;slekjaabcd def de ab d eef ddefsef sfae a;ldfja;sldj alwkejhfalskdhf a;slekjafabcd def de |
Output produced by the error:
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable €藪???pR?? NC+? abcd def de NC+? ab d eef ddefsef 8 NC+? ,sfae a;ldfja;sldj alwkejhfalskdhf a;slekjaf NC+? abcd def de + NC+? ab d eef ddefsef O NC+? Csfae a;ldfja;sldj alwkejhfalskdhf a;slekjafabcd def de + NC+? ab d eef ddefsef O NC+? Csfae a;ldfja;sldj alwkejhfalskdhf a;slekjafabcd def de + NC+? ab d eef ddefsef O NC+? Csfae a;ldfja;sldj alwkejhfalskdhf a;slekjafabcd def de |
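Why this happens: the Flume HDFS sink's default hdfs.fileType is SequenceFile, so each event is written as a binary SequenceFile record with a LongWritable key and the event body as a BytesWritable value. Those are exactly the two class names embedded in the garbled SEQ header above. A minimal inspection sketch to confirm this, assuming the Hadoop client libraries are on the classpath (the file name below is hypothetical; Flume rolls files as FlumeData.<timestamp> by default):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;

public class InspectFlumeOutput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical file name for illustration only.
        Path file = new Path("hdfs://localhost:8020/user/admin/flumetest/log/FlumeData.1400000000000");
        try (SequenceFile.Reader reader =
                new SequenceFile.Reader(conf, SequenceFile.Reader.file(file))) {
            // With the default sink settings this prints the two class names
            // embedded in the SEQ header shown above.
            System.out.println("key class   = " + reader.getKeyClassName());
            System.out.println("value class = " + reader.getValueClassName());
        }
    }
}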
This was resolved by adding the highlighted lines to the configuration below (they appear in blue in the original post; see the comment marking them in the listing).
agent.sources = logfilereadSource
agent.channels = memoryChannel
agent.sinks = loggerSink
# source
# For each one of the sources, the type is defined
agent.sources.logfilereadSource.type = exec
agent.sources.logfilereadSource.command = tail -F /home/hadoop/download/flumeTestfile.log
agent.sources.logfilereadSource.batchSize = 10
agent.sources.logfilereadSource.channels = memoryChannel
# channel
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 20480
# Sink
#Specify the channel the sink should use
agent.sinks.loggerSink.type = hdfs
agent.sinks.loggerSink.channel = memoryChannel
agent.sinks.loggerSink.hdfs.path = hdfs://localhost:8020/user/admin/flumetest/log
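# Added to fix the error (the "blue" lines): write plain text instead of
# the default SequenceFile, serialized as tab-delimited header-and-body text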
agent.sinks.loggerSink.hdfs.fileType = DataStream
agent.sinks.loggerSink.serializer = com.adaltas.flume.serialization.HeaderAndBodyTextEventSerializer$Builder
agent.sinks.loggerSink.serializer.format = CSV
agent.sinks.loggerSink.serializer.appendNewline = true
agent.sinks.loggerSink.serializer.delimiter = '\t'
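To check the fix after restarting the agent, one can read the first line of a newly rolled file: with hdfs.fileType = DataStream it should be the raw tab-separated event text rather than a SEQ header. A hedged verification sketch in the same vein (again with a hypothetical file name):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class VerifyPlainText {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical file name, as in the sketch above.
        Path file = new Path("hdfs://localhost:8020/user/admin/flumetest/log/FlumeData.1400000000001");
        FileSystem fs = FileSystem.get(file.toUri(), conf);
        try (BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(file)))) {
            String firstLine = in.readLine();
            // A DataStream file begins with the event body; a SequenceFile begins with "SEQ".
            System.out.println(firstLine != null && firstLine.startsWith("SEQ")
                    ? "still a SequenceFile" : firstLine);
        }
    }
}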
After adding the settings above, the file is written as plain text:
ab d eef ddefsef sfae a;ldfja;sldj alwkejhfalskdhf a;slekjafabcd def de ab d eef ddefsef sfae a;ldfja;sldj alwkejhfalskdhf a;slekjafabcd def de ab d eef ddefsef sfae a;ldfja;sldj alwkejhfalskdhf a;slekjaabcd def de ab d eef ddefsef sfae a;ldfja;sldj alwkejhfalskdhf a;slekjafabcd def de ab d eef ddefsef sfae a;ldfja;sldj alwkejhfalskdhf a;slekjaf |