SparkSQL: Can I explode two different variables in the same query?

Published 2019-03-11 16:34

Question:

I have the following explode query, which works fine:

data1 = sqlContext.sql("select explode(names) as name from data")

I want to explode another field "colors", so the final output could be the cartesian product of names and colors. So I did:

data1 = sqlContext.sql("select explode(names) as name, explode(colors) as color from data")

But I got the errors:

 Only one generator allowed per select but Generate and and Explode found.;

Does anyone have any idea?


I can actually make it work by doing two steps:

   data1 = sqlContext.sql("select explode(names) as name from data")
   data1.registerTempTable('data1')
   data1 = sqlContext.sql("select explode(colors) as color from data1")

But I am wondering if it is possible to do it in one step? Thanks a lot!

Answer 1:

The correct syntax is

select name, color 
from data 
lateral view explode(names) exploded_names as name 
lateral view explode(colors) exploded_colors as color
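To see why chaining two LATERAL VIEWs yields a cartesian product, here is a plain-Python sketch of what happens to a single source row (this is not Spark code, and the row values are made up for illustration):

```python
from itertools import product

# One source row with two array columns, as in the question.
row = {"names": ["alice", "bob"], "colors": ["red", "green"]}

# Two chained LATERAL VIEW explodes behave like a cartesian product
# of the two arrays: one output row per (name, color) pair.
exploded = [
    {"name": name, "color": color}
    for name, color in product(row["names"], row["colors"])
]

for out_row in exploded:
    print(out_row)
```

With 2 names and 2 colors, the single input row expands into 4 output rows.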

The reason why Rashid's answer did not work is that it did not "name" the table generated by LATERAL VIEW.

Explanation

Think of it this way: LATERAL VIEW works like an implicit JOIN with an ephemeral table created for every row from the structs in the collection being "viewed". So, the way to parse the syntax is:

LATERAL VIEW table_generation_function(collection_column) table_name AS col1, ...

Multiple output columns

If you use a table generating function such as posexplode() then you still have one output table but with multiple output columns:

LATERAL VIEW posexplode(orders) exploded_orders AS order_number, order
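Roughly, posexplode() emits each element's position alongside the element itself; in plain Python (illustrative values, not Spark code) this is just enumerate:

```python
# One source row with an array column "orders" (values are made up).
row = {"orders": ["o-100", "o-101", "o-102"]}

# posexplode(orders) AS order_number, order yields one output row
# per element, carrying its zero-based position.
exploded = [
    {"order_number": pos, "order": order}
    for pos, order in enumerate(row["orders"])
]

for out_row in exploded:
    print(out_row)
```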

Nesting

You can also "nest" LATERAL VIEW by repeatedly exploding nested collections, e.g.,

LATERAL VIEW posexplode(orders) exploded_orders AS order_number, order
LATERAL VIEW posexplode(order.items) exploded_items AS item_number, item
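The nested case flattens a collection of collections: the outer LATERAL VIEW explodes orders, and the inner one explodes each order's items. A plain-Python sketch with hypothetical order/item values:

```python
# One source row whose "orders" array contains structs with an
# "items" array (all values are made up for illustration).
row = {
    "orders": [
        {"items": ["apple", "pear"]},
        {"items": ["plum"]},
    ]
}

# One output row per (order, item) pair, with both positions kept,
# mirroring the two chained posexplodes above.
exploded = [
    {"order_number": on, "item_number": inum, "item": item}
    for on, order in enumerate(row["orders"])
    for inum, item in enumerate(order["items"])
]

for out_row in exploded:
    print(out_row)
```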

Performance considerations

While we are on the topic of LATERAL VIEW it is important to note that using it via SparkSQL is more efficient than using it via the DataFrame DSL, e.g., myDF.explode(). The reason is that SQL can reason accurately about the schema while the DSL API has to perform type conversion between a language type and the dataframe row. What the DSL API loses in terms of performance, however, it gains in flexibility as you can return any supported type from explode, which means that you can perform a more complicated transformation in one step.



Answer 2:

Try lateral view explode instead.

select name, color from data lateral view explode(names) as name lateral view explode(colors) as color;


Answer 3:

More than one explode is not allowed in a single Spark SQL SELECT because it would produce an implicit cartesian product of the two things you are exploding, which is considered too confusing. If you want to do more than one explode, you have to use more than one select. Hive has LATERAL VIEW, which can achieve what you need (explained by Rashid Ali in his answer here). I would personally recommend multiple select's with data frames, as it is much more efficient in Spark. Now, assuming 'data' is a data frame:

val data1 = data.select($"id", $"names", explode($"colors").alias("colors"))
            // select the required columns from colors
            .select($"id", $"colors.field1", explode($"names").alias("names"))
            // now select the required columns from names
            .select($"id", $"field1", $"names.col1", $"names.col2")

You can do the above select's across multiple data frames or in a single chain as above; it makes no difference in performance.