I wonder whether it is possible to pivot a table in one pass in Apache Pig.
Input:
Id Column1 Column2 Column3
1 Row11 Row12 Row13
2 Row21 Row22 Row23
Output:
Id Name Value
1 Column1 Row11
1 Column2 Row12
1 Column3 Row13
2 Column1 Row21
2 Column2 Row22
2 Column3 Row23
The actual data has several dozen columns.
I could do this in one pass with AWK and run it via Hadoop Streaming, but most of my code is Apache Pig, so I wonder whether this can be done efficiently in Pig.
You can do this in two ways: 1. Write a UDF that returns a bag of tuples; this is the most flexible solution, but it requires Java code. 2. Write a rigid script like the one below:
-- load the wide table
inpt = load '/pig_fun/input/pivot.txt' as (Id, Column1, Column2, Column3);
-- wrap each (name, value) pair in a tuple and collect the tuples into a bag
bagged = foreach inpt generate Id, TOBAG(TOTUPLE('Column1', Column1), TOTUPLE('Column2', Column2), TOTUPLE('Column3', Column3)) as toPivot;
-- first FLATTEN: one row per (Id, tuple)
pivoted_1 = foreach bagged generate Id, FLATTEN(toPivot) as t_value;
-- second FLATTEN: expand the tuple into separate name and value fields
pivoted = foreach pivoted_1 generate Id, FLATTEN(t_value);
dump pivoted;
Running this script gives me the following results:
(1,Column1,11)
(1,Column2,12)
(1,Column3,13)
(2,Column1,21)
(2,Column2,22)
(2,Column3,23)
(3,Column1,31)
(3,Column2,32)
(3,Column3,33)
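A note on the output schema: because the bag built with TOBAG/TOTUPLE carries no declared schema, the two FLATTEN steps leave the name and value fields unnamed. A minimal sketch of one way to attach aliases and types afterwards, assuming the pivoted relation from the script above (the positional references reflect the runtime field order and are an assumption, not part of the original answer):
-- give the unnamed fields explicit aliases and chararray types
named = foreach pivoted generate $0 as Id, (chararray)$1 as Name, (chararray)$2 as Value;
dump named;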
I removed Column3 from Id 1 to show how to handle optional (NULL) data. The input (data.txt) looks like this:
Id Name Value
1 Column1 Row11
1 Column2 Row12
2 Column1 Row21
2 Column2 Row22
2 Column3 Row23
--pigscript.pig
data1 = load 'data.txt' using PigStorage() as (id:int, key:chararray, value:chararray);
grped = group data1 by id;
pvt = foreach grped {
    -- pick out each column's (key, value) rows within the group
    col1 = filter data1 by key == 'Column1';
    col2 = filter data1 by key == 'Column2';
    col3 = filter data1 by key == 'Column3';
    generate flatten(group) as id,
        flatten(col1.value) as col1,
        flatten(col2.value) as col2,
        -- substitute a literal 'NULL' when Column3 is missing for an id
        flatten((IsEmpty(col3.value) ? {('NULL')} : col3.value)) as col3;
};
dump pvt;
Result:
(1,Row11,Row12,NULL)
(2,Row21,Row22,Row23)
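One caveat with this script: FLATTEN of an empty bag drops the record, so the IsEmpty guard is what keeps Id 1 in the output despite its missing Column3. If Column1 or Column2 could also be absent, the same guard would need to be applied to every column. A minimal sketch of that variant, reusing the grped and data1 relations from the script above (the 'NULL' placeholder string is just an illustration):
-- hypothetical variant: guard every column so a record survives even when
-- any of its values is missing for a given id
pvt_safe = foreach grped {
    col1 = filter data1 by key == 'Column1';
    col2 = filter data1 by key == 'Column2';
    col3 = filter data1 by key == 'Column3';
    generate flatten(group) as id,
        flatten((IsEmpty(col1.value) ? {('NULL')} : col1.value)) as col1,
        flatten((IsEmpty(col2.value) ? {('NULL')} : col2.value)) as col2,
        flatten((IsEmpty(col3.value) ? {('NULL')} : col3.value)) as col3;
};
dump pvt_safe;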