How do you make a Hive table out of JSON data?

Posted 2019-01-30 07:28

Question:

I want to create a Hive table out of some JSON data (nested) and run queries on it. Is this even possible?

I've gotten as far as uploading the JSON file to S3 and launching an EMR instance, but I don't know what to type in the Hive console to turn the JSON file into a Hive table.

Does anyone have an example command to get me started? I can't find anything useful with Google ...

Answer 1:

You'll need to use a JSON SerDe in order for Hive to map your JSON to the columns in your table.

A really good example showing you how is here:

http://aws.amazon.com/articles/2855

Unfortunately, the JSON SerDe supplied doesn't handle nested JSON very well, so you might need to flatten your JSON in order to use it.

Here's an example of the correct syntax from the article:

create external table impressions (
    requestBeginTime string, requestEndTime string, hostname string
  )
  partitioned by (
    dt string
  )
  row format 
    serde 'com.amazon.elasticmapreduce.JsonSerde'
    with serdeproperties ( 
      'paths'='requestBeginTime, requestEndTime, hostname'
    )
  location 's3://my.bucket/' ;
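
Since the table is partitioned by dt, Hive won't return any data until partitions have been registered. A hedged sketch of how that might look (the partition value and S3 prefix below are illustrative, not taken from the article):

-- Register a partition manually; the dt value and path are hypothetical examples
ALTER TABLE impressions
  ADD PARTITION (dt = '2009-04-13-08-05')
  LOCATION 's3://my.bucket/dt=2009-04-13-08-05/';

-- The mapped JSON fields can then be queried like ordinary columns
SELECT requestBeginTime, hostname
FROM impressions
WHERE dt = '2009-04-13-08-05'
LIMIT 10;

On EMR you may also be able to use ALTER TABLE impressions RECOVER PARTITIONS to pick up partitions already laid out under the bucket.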


Answer 2:

It's actually not necessary to use the JSON SerDe. There is a great blog post here (I'm not affiliated with the author in any way):

http://pkghosh.wordpress.com/2012/05/06/hive-plays-well-with-json/

It outlines a strategy that uses the built-in function json_tuple to parse the JSON at query time (NOT at table-definition time):

https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-json_tuple

So basically, your table schema simply loads each line as a single string column, and you then extract the relevant JSON fields as needed on a per-query basis, e.g. this query from that blog post:

SELECT b.blogID, c.email FROM comments a LATERAL VIEW json_tuple(a.value, 'blogID', 'contact') b 
AS blogID, contact  LATERAL VIEW json_tuple(b.contact, 'email', 'website') c 
AS email, website WHERE b.blogID='64FY4D0B28';
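
For reference, the table that query assumes simply holds one raw JSON document per line in a single string column; a minimal sketch (the location is a hypothetical example, not from the blog post):

-- Each line of the files under LOCATION is one JSON document,
-- loaded as-is into the single string column 'value'
CREATE EXTERNAL TABLE comments (value STRING)
LOCATION 's3://my.bucket/comments/';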

In my humble experience, this has proven more reliable (I encountered various cryptic issues dealing with the JSON serdes, especially with nested objects).



Answer 3:

I just had to solve the same problem, and none of the JSON SerDes linked so far seemed good enough. Amazon's might be good, but I can't find the source for it anywhere (does anyone have a link?).

HCatalog's built-in JsonSerDe is working for me, even though I'm not actually using HCatalog anywhere else.

https://github.com/apache/hcatalog/blob/branch-0.5/core/src/main/java/org/apache/hcatalog/data/JsonSerDe.java

To use HCatalog's JsonSerDe, add the hcatalog-core JAR to Hive's auxpath and create your Hive table:

$ hive --auxpath /path/to/hcatalog-core.jar

hive (default)>
create table my_table(...)
ROW FORMAT SERDE
  'org.apache.hcatalog.data.JsonSerDe'
...
;
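
Filled out, a complete statement might look something like the following (only the SerDe class comes from the answer above; the column names and location are illustrative):

-- Hypothetical table; the JSON keys become the column names
CREATE EXTERNAL TABLE my_table (
  username string,
  tweet string
)
ROW FORMAT SERDE 'org.apache.hcatalog.data.JsonSerDe'
STORED AS TEXTFILE
LOCATION '/user/hive/warehouse/my_table';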

I wrote a post here with more details:

http://ottomata.org/tech/too-many-hive-json-serdes/



Answer 4:

Hive 0.12 and later ships a JsonSerDe in hcatalog-core that will serialize and deserialize your JSON data. So all you need to do is create an external table like the following example:

CREATE EXTERNAL TABLE json_table (
    username string,
    tweet string,
    `timestamp` bigint)
ROW FORMAT SERDE
'org.apache.hive.hcatalog.data.JsonSerDe'
STORED AS TEXTFILE
LOCATION
 'hdfs://data/some-folder-in-hdfs';
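
As in the previous answer, the hcatalog-core JAR may need to be added to Hive's classpath before this SerDe can be found, for example (the jar path is just a placeholder):

ADD JAR /path/to/hive-hcatalog-core.jar;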

The corresponding JSON data file should look like the following example:

{"username":"miguno","tweet":"Rock: Nerf paper, scissors is fine.","timestamp": 1366150681 }
{"username":"BlizzardCS","tweet":"Works as intended.  Terran is IMBA.","timestamp": 1366154481 }


Answer 5:

Generating a SerDe schema from a .json file

If your .json file is big, it can be tedious to write the schema by hand. If so, you can use this handy tool to generate it automatically:

https://github.com/strelec/hive-serde-schema-gen