보물창고/Big Data · 2014.11.02 23:30





Windows Hadoop: creating a Hive external table with an Avro schema — Azure HDInsight Hive create external table, Avro file read







This is a CREATE TABLE statement that lets Hive on HDInsight read Avro files.

I have also written up the errors that occurred while I was testing.

I hope this helps anyone who arrives here by searching for those errors.



I have collected the links I read while studying Azure HDInsight.







I referred to the Hive code in [site 2] among the links above.
The document has sample code for Java, Hadoop streaming, Hive, and Pig.
Near the beginning it provides a sample Avro data file and an avsc file, so you can use those to test right away.





  • I tried specifying the avsc file by URL, but it would not take effect, so I switched to writing the schema directly in the CREATE statement (a sketch of the URL variant follows this list).
    'avro.schema.literal' // supplies the schema text inline
    'avro.schema.url' // points at a schema file by path/URL
  • On Azure HDInsight, a path of the form hdfs:///avrotest/schema/twitter.avsc does not seem to be readable.
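For reference, here is the URL variant rewritten the way I suspect HDInsight wants it: the cluster's default filesystem is Azure Blob storage, so the schema file presumably needs a fully qualified wasb:// URI. This is an untested sketch, and the container and account names are placeholders.

CREATE EXTERNAL TABLE tweets_by_url
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
WITH SERDEPROPERTIES (
-- hypothetical container/account; substitute your cluster's default storage account
'avro.schema.url' = 'wasb://mycontainer@myaccount.blob.core.windows.net/avrotest/schema/twitter.avsc'
)
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/avrotest/data/';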





Here is a CREATE statement that works on HDInsight.


CREATE EXTERNAL TABLE tweets
COMMENT "A table backed by Avro data with the Avro schema stored in HDFS"
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
WITH SERDEPROPERTIES (
'avro.schema.literal' = '{
  "name" : "Tweet",
  "type" : "record",
  "namespace" : "com.miguno.avro",
  "fields" : [ {
    "name" : "username",
    "type" : "string"
  }, {
    "name" : "tweet",
    "type" : "string"
  }, {
    "name" : "timestamp",
    "type" : "long"
  } ]
}'
)
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/avrotest/data/';








Checking that it applied correctly:


hive> describe tweets;
OK
username string from deserializer
tweet string from deserializer
timestamp bigint

hive> select * from tweets;
OK
miguno Rock: Nerf paper, scissors is fine. 1366150681
BlizzardCS Works as intended. Terran is IMBA. 1366154481
DarkTemplar From the shadows I come! 1366154681
VoidRay Prismatic core online! 1366160000
VoidRay Fire at will, commander. 1366160010
DarkTemplar I am the blade of Shakuras! 1366174681
Immortal I return to serve! 1366175681
Immortal En Taro Adun! 1366176283
VoidRay There is no greater void than the one between your ears. 1366176300
DarkTemplar I strike from the shadows! 1366184681

Time taken: 0.172 seconds, Fetched: 10 row(s)
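
Once the schema resolves correctly, ordinary HiveQL works against the table. For example, a quick aggregation over the sample data:

SELECT username, COUNT(*) AS tweet_count
FROM tweets
GROUP BY username;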









Below are the errors that came up during testing.


I resolved them by switching 'avro.schema.url' to 'avro.schema.literal'.

The URL form looks like it should work too, but I still need to check on Azure HDInsight exactly which path it expects.



CREATE EXTERNAL TABLE tweets
COMMENT "A table backed by Avro data with the Avro schema stored in HDFS"
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/user/YOURUSER/examples/input/'
TBLPROPERTIES (
'avro.schema.url'='hdfs:///user/YOURUSER/examples/schema/twitter.avsc'
);


CREATE EXTERNAL TABLE tweets
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/avrotest/data/'
TBLPROPERTIES (
'avro.schema.url'='hdfs:///avrotest/schema/twitter.avsc'
);

The setting would not take, so I kept trying other variations.

describe tweets;

Error transcript:
hive>
> CREATE EXTERNAL TABLE tweets
> ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
> STORED AS
> INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
> OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'

> LOCATION '/avrotest/data/'
> TBLPROPERTIES (
>
Display all 445 possibilities? (y or n)
>
Display all 445 possibilities? (y or n)
> 'avro.schema.url'='hdfs:///avrotest/schema/twitter.avsc'
> );
OK
Time taken: 0.579 seconds
hive>
>
> describe tweets;
OK
error_error_error_error_error_error_error string from deserializer
cannot_determine_schema string from deserializer
check string from deserializer
schema string from deserializer
url string from deserializer
and string from deserializer
literal string from deserializer
-- Output like this means the avsc schema was not applied correctly
Time taken: 0.904 seconds, Fetched: 7 row(s)
hive> select * from tweets;
OK
Failed with exception java.io.IOException:org.apache.avro.AvroTypeException: Found com.miguno.avro.Tweet, expecting org.apache.hadoop.hive.CannotDetermineSchemaSentinel
Time taken: 1.611 seconds
hive> show tabels;
NoViableAltException(26@[654:1: ddlStatement : ( createDatabaseStatement | switchDatabaseStatement | dropDatabaseStatement | createTableStatement | dropTableStatement | truncateTableStatement | alterStatement | descStatement | showStatement | metastoreCheck | createViewStatement | dropViewStatement | createFunctionStatement | createMacroStatement | createIndexStatement | dropIndexStatement | dropFunctionStatement | dropMacroStatement | analyzeStatement | lockStatement | unlockStatement | lockDatabase | unlockDatabase | createRoleStatement | dropRoleStatement | grantPrivileges | revokePrivileges | showGrants | showRoleGrants | showRolePrincipals | showRoles | grantRole | revokeRole | setRole | showCurrentRole );])
        at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
        at org.antlr.runtime.DFA.predict(DFA.java:144)
        at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2090)
        at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1398)
        at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1036)
        at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:199)
        at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:409)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:323)
        at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:980)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1045)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:916)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:906)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:686)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
FAILED: ParseException line 1:5 cannot recognize input near 'show' 'tabels' '<EOF>' in ddl statement
hive> show tables;
OK
hivesampletable
tweets
Time taken: 0.11 seconds, Fetched: 2 row(s)
hive> show tables;
OK
hivesampletable
tweets
Time taken: 0.062 seconds, Fetched: 2 row(s)
hive> select * from tweets;
OK
Failed with exception java.io.IOException:org.apache.avro.AvroTypeException: Found com.miguno.avro.Tweet, expecting org.apache.hadoop.hive.CannotDetermineSchemaSentinel
Time taken: 0.297 seconds

--

This is the same article as above, but it lives on GitHub and looks more up to date:
https://github.com/miguno/avro-hadoop-starter

CREATE EXTERNAL TABLE tweets
COMMENT "A table backed by Avro data with the Avro schema stored in HDFS"
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
WITH SERDEPROPERTIES (
'avro.schema.url' = 'hdfs:///avrotest/schema/twitter.avsc'
)
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/avrotest/data/';


Error transcript:
hive>
> CREATE EXTERNAL TABLE tweets
> COMMENT "A table backed by Avro data with the Avro schema stored in HDFS"
> ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
> WITH SERDEPROPERTIES (
> 'avro.schema.url' = 'hdfs:///avrotest/schema/twitter.avsc'
> )
> STORED AS
> INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
> OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'

> LOCATION '/avrotest/data/';
OK
Time taken: 0.431 seconds
hive>
> describe tweets;
OK
error_error_error_error_error_error_error string from deserializer
cannot_determine_schema string from deserializer
check string from deserializer
schema string from deserializer
url string from deserializer
and string from deserializer
literal string from deserializer
-- Output like this means the avsc schema was not applied correctly
Time taken: 0.875 seconds, Fetched: 7 row(s)
hive>


--


CREATE EXTERNAL TABLE tweets
COMMENT "A table backed by Avro data with the Avro schema stored in HDFS"
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
WITH SERDEPROPERTIES (
'avro.schema.url' = 'hdfs:///avrotest/schema/twitter.avsc'
)
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION 'hdfs:///avrotest/data/';

Error transcript:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: java.io.IOException Incomplete HDFS URI, no host: hdfs:/avrotest/data)

The "no host" complaint makes sense if the cluster's default filesystem is Azure Blob storage rather than HDFS: a host-less hdfs:/// URI has nothing to resolve against, while a plain path such as /avrotest/data/ is interpreted against the default filesystem, which is why the working statement above uses it.
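
A quick way to sanity-check this from inside the Hive CLI (a sketch; the second property name is the older alias used on Hadoop 1.x-era clusters):

hive> dfs -ls /avrotest/data/;   -- confirm the data is visible through the default filesystem
hive> set fs.defaultFS;          -- print the default filesystem URI
hive> set fs.default.name;       -- older Hadoop 1.x property name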

--

CREATE EXTERNAL TABLE tweets
COMMENT "A table backed by Avro data with the Avro schema stored in HDFS"
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
WITH SERDEPROPERTIES (
'avro.schema.url' = '{
"type" : "record",
"name" : "Tweet",
"namespace" : "com.miguno.avro",
"fields" : [ {
"name" : "username",
"type" : "string"
}, {
"name" : "tweet",
"type" : "string"
}, {
"name" : "timestamp",
"type" : "long"
} ]
}'
)
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/avrotest/data2/';

Error transcript (note that the JSON schema above was given as the value of 'avro.schema.url'; an inline schema belongs in 'avro.schema.literal', which is what the working statement uses):
hive> describe tweets;
OK
error_error_error_error_error_error_error string from deserializer
cannot_determine_schema string from deserializer
check string from deserializer
schema string from deserializer
url string from deserializer
and string from deserializer
literal string from deserializer
-- Output like this means the avsc schema was not applied correctly
Time taken: 0.585 seconds, Fetched: 7 row(s)
hive>

--
"앞에 역슬레시가 있어서 따라해 봤음
https://social.msdn.microsoft.com/Forums/azure/en-US/5e0e93f9-3c02-40dc-afa4-656faacde69b/cannot-read-data-from-hive-external-table-to-a-avro-serilized-files-failed-with-exception?forum=hdinsight

CREATE EXTERNAL TABLE tweets
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
WITH SERDEPROPERTIES (
'avro.schema.url' = '{
\"type\" : \"record\",
\"name\" : \"Tweet\",
\"namespace\" : \"com.miguno.avro\",
\"fields\" : [ {
\"name\" : \"username\",
\"type\" : \"string\"
}, {
\"name\" : \"tweet\",
\"type\" : \"string\"
}, {
\"name\" : \"timestamp\",
\"type\" : \"long\"
} ]
}'
)
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/avrotest/data2/';


Error transcript:
hive>
>
> CREATE EXTERNAL TABLE tweets
> ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
> WITH SERDEPROPERTIES (
> 'avro.schema.url' = '{
> \"type\" : \"record\",
> \"name\" : \"Tweet\",
> \"namespace\" : \"com.miguno.avro\",
> \"fields\" : [ {
> \"name\" : \"username\",
> \"type\" : \"string\"
> }, {
> \"name\" : \"tweet\",
> \"type\" : \"string\"
> }, {
> \"name\" : \"timestamp\",
> \"type\" : \"long\"
> } ]
> }'
> )
> STORED AS
> INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
> OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'

> LOCATION '/avrotest/data2/';
OK
Time taken: 0.234 seconds
hive>
>
> show tables;
OK
hivesampletable
tweets
Time taken: 0.063 seconds, Fetched: 2 row(s)
hive> describe tweets;
OK
error_error_error_error_error_error_error string from deserializer
cannot_determine_schema string from deserializer
check string from deserializer
schema string from deserializer
url string from deserializer
and string from deserializer
literal string from deserializer
-- Output like this means the avsc schema was not applied correctly
Time taken: 0.516 seconds, Fetched: 7 row(s)
hive>

--

One more variation I tried, giving 'avro.schema.url' an empty value:

CREATE EXTERNAL TABLE tweets
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
WITH SERDEPROPERTIES (
'avro.schema.url' = '{}'
)
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/avrotest/data2/';







Posted by 파란물
보물창고/Big Data · 2014.10.28 23:30





MS Korean docs I read while studying Hadoop on Windows — Getting Started with Azure HDInsight (Korean docs), Windows Hadoop










  • Reading the documents in the numbered order below gave me a good grasp of the overall concepts.
  • All of the documents are in Korean.








  1. Get started using Azure HDInsight
    1. The Getting Started document
    2. A 30-minute tutorial: create an HDInsight cluster on Azure, run a word count, then pull the results into Excel and work with them
    3. http://azure.microsoft.com/ko-kr/documentation/articles/hdinsight-get-started/

  2. Use Azure Blob storage with HDInsight
    1. Shows where Blob storage sits in HDInsight and what role it plays
    2. http://azure.microsoft.com/ko-kr/documentation/articles/hdinsight-use-blob-storage/

  3. Upload data to HDInsight
    1. Explains how to upload data to Blob storage
    2. http://azure.microsoft.com/ko-kr/documentation/articles/hdinsight-upload-data/

  4. Use Sqoop with HDInsight
    1. New-AzureHDInsightSqoopJobDefinition -Command "export --connect $connectionString --table $tableName_log4j --export-dir $exportDir_log4j --input-fields-terminated-by \0x20 -m 1"
    2. Sqoop is run with the statement above
    3. http://azure.microsoft.com/ko-kr/documentation/articles/hdinsight-use-sqoop/

  5. Provision HDInsight clusters
    1. Explains how to create a cluster for the first time
    2. Notes that Blob storage is used: typically you provision an HDInsight cluster, run your jobs, and then delete the cluster to reduce cost (the "cluster" here seems to mean the HDInsight local disks)
    3. http://azure.microsoft.com/ko-kr/documentation/articles/hdinsight-provision-clusters/

  6. Submit Hadoop jobs programmatically
    1. Submitting jobs with PowerShell
    2. MapReduce, Hive, Sqoop, etc.
    3. http://azure.microsoft.com/ko-kr/documentation/articles/hdinsight-submit-hadoop-jobs-programmatically/







Posted by 파란물
보물창고/Big Data · 2014.10.21 22:56





Windows Hadoop / HDInsight Azure: notes on compression compatibility (Gzip, Bzip2, LZO, LZ4, Snappy)










  • While looking into whether LZOP compression can be used on HDInsight, I found the document above. It dates from January 2013, but it covered most of what I wanted to know about LZOP.

  • Starting with Hadoop 0.20+, LZOP was dropped from the Hadoop distribution because of GNU (GPL) license issues, but the format is still supported, so you can download the codec yourself and add it.





The passage below is what led me to conclude it can be used:


LZOP


Note: LZO has been removed from Hadoop in version 0.20+, because the LZO libraries are licensed under the GNU General Public License (GPL). If you need to use LZO, the format is still supported, so you can download the codec separately and enable it manually in your Hadoop cluster.


Depending on your version of Hadoop, you might need to download the LZOP codec separately and enable it manually in your Hadoop cluster.




https://github.com/twitter/hadoop-lzo

This link explains how to install LZO on Linux and Windows.
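
Once the hadoop-lzo jars and native libraries are in place, wiring the codec in is just configuration. Below is a rough sketch from the Hive side; the codec class name comes from the twitter/hadoop-lzo project, and io.compression.codecs normally belongs in core-site.xml rather than a Hive session, so treat this as illustrative:

-- register the codec (normally a one-time core-site.xml entry, shown here as a comment):
--   io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec,com.hadoop.compression.lzo.LzopCodec
-- then compress Hive query output with it:
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec;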





These are the usable compression formats listed in the document:


Bzip2

Bzip is a popular tool which includes a version for Windows.

Download site: http://gnuwin32.sourceforge.net/packages/bzip2.htm
Supported codecs: BZip2
License: This library has been released under the GPL license.
Supported platforms: Apache Hadoop distribution, HDInsight


Gzip

The latest version of the suite includes both Bzip2.exe and Gzip.exe.

Download site: http://gnuwin32.sourceforge.net/packages/gzip.htm
Supported codecs: GZip
License: This library has been released under the GPL license.
Supported platforms: Apache Hadoop distribution, HDInsight
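
Hive reads Gzip- and BZip2-compressed text files in place: point an external table at a directory of .gz or .bz2 files and the codec is selected by file extension at query time. A minimal sketch (the path is hypothetical):

CREATE EXTERNAL TABLE logs_gz (line string)
ROW FORMAT DELIMITED
LOCATION '/compressed/logs/';  -- directory containing *.gz text files

-- rows decompress transparently
SELECT COUNT(*) FROM logs_gz;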

 


LZOP

Lzop is a file compression utility very similar to Bzip2. The LZO algorithm is optimized for speed and does not compress as much; this can be either an advantage or a disadvantage, depending on whether you want speed or space.

Download site: http://www.lzop.org/
Supported codecs: LZO
License: LZOP is Open Source software but is copyrighted, and is distributed under the terms of the GNU General Public License (GPL).
Supported platforms: Apache Hadoop distribution only. Depending on your version of Hadoop, you might need to download the LZOP codec separately and enable it manually in your Hadoop cluster.


LZ4

LZ4 is a very fast, lossless compression algorithm. If you are using Hadoop on Windows, you can also use the LZ4 compression algorithm through a command-line utility and a small Windows application. In our tests we used the command-line tool and recommend it over the standalone application.

Download site: http://www.lzop.org/
Standalone Windows application: http://sd-1.archive-host.com/membres/up/182754578/LZ4_install_v13b.exe
Command Line Utility for Windows: http://sd-1.archive-host.com/membres/up/182754578/LZ4v13b.zip
Supported codecs: LZ4
License: LZ4 is currently included in the Hadoop distribution.
Supported platforms: Apache Hadoop distribution only
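
Because an LZ4 codec ships with the Hadoop distribution itself, using it is purely configuration; for example, compressing intermediate map output from the Hive side. A sketch, with property names as used on Hadoop 1.x/2.x-era clusters (verify against your version):

SET hive.exec.compress.intermediate=true;
SET mapred.map.output.compression.codec=org.apache.hadoop.io.compress.Lz4Codec;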

 


7Zip

7Zip is a powerful compression tool that we highly recommend for use as a local compression processor. It is capable of maximizing computation resources on the local computer, and the user can configure the number of threads used for file compression.

7Zip provides several interfaces for interacting with the compression tools:

  • Command-line interface
  • Graphical interface
  • Microsoft Windows shell integration

7Zip has its own documentation, and it is easy to learn how to use the shell to process files. To view the 7Zip documentation, open the directory where you installed 7Zip and look for the help documentation (typically provided as a standalone document or text file).

Download site: http://www.7-zip.org/download.html
Supported codecs: BZip2, Gzip (7Zip does not provide LZO compression.)
License: 7Zip is open source software; most of the source code is under the GNU LGPL license.
Supported platforms: Apache Hadoop distribution, HDInsight







Posted by 파란물