I'm trying to learn Hive. Surprisingly, I can't find an example of how to write a simple word count job. Is the following correct?
Let's say I have an input file input.tsv:
hello, world
this is an example input file
I create a splitter in Python to turn each line into words:
import sys

# Emit one word per output line, splitting each input line on whitespace.
for line in sys.stdin:
    for word in line.split():
        print(word)
And then I have the following in my Hive script:
CREATE TABLE input (line STRING);
LOAD DATA LOCAL INPATH 'input.tsv' OVERWRITE INTO TABLE input;
-- temporary table to hold words...
CREATE TABLE words (word STRING);
add file splitter.py;
INSERT OVERWRITE TABLE words
SELECT TRANSFORM(line)
USING 'python splitter.py'
AS word
FROM input;
SELECT word, count(*) AS count FROM words GROUP BY word;
I'm not sure if I'm missing something, or if it really is this complicated. (In particular, do I need the temporary words table, and do I need to write the external splitter script?)
If you want a simple approach, see the following:
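Something along these lines should work (a sketch assuming the input table and line column defined in the question; the '\\s+' whitespace regex passed to split is an assumption, and a single-space delimiter would also do):

-- Split each line into an array of words, explode the array into
-- one row per word with a lateral view, then aggregate.
SELECT word, count(*) AS count
FROM input
LATERAL VIEW explode(split(line, '\\s+')) t AS word
GROUP BY word;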
I use a lateral view to enable the use of a table-valued function (explode), which takes the array that comes out of the split function and outputs a new row for every value. In practice I use a UDF that wraps IBM's ICU4J word breaker. I generally don't use transform scripts and instead use UDFs for everything. You don't need a temporary words table.
You may use the sentences() built-in UDF in Hive as follows:
Step 1: Create a temp table with a single column named sentence, of data type array<string>.
Step 2: Select your words from the temp table, again exploding the column sentence, as sketched below.
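A rough sketch of those two steps, assuming the input table and line column from the question (the temp table name and the t alias are illustrative, and lowercasing with lower() is optional normalization). sentences() returns an array of word arrays, so the first explode produces one row per sentence and the second produces one row per word:

-- Step 1: one row per sentence, each an array<string> of words.
CREATE TABLE temp AS
SELECT explode(sentences(lower(line))) AS sentence
FROM input;

-- Step 2: one row per word, then count occurrences.
SELECT word, count(*) AS count
FROM temp
LATERAL VIEW explode(sentence) t AS word
GROUP BY word;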