Spark: best approach for a look-up DataFrame join to improve performance

Posted 2020-04-30 02:57

DataFrame A (millions of records) has, among others, the columns create_date and modified_date.

DataFrame B (500 records) has the columns start_date and end_date.

Current approach:

SELECT a.*, b.* FROM a JOIN b ON a.create_date BETWEEN b.start_date AND b.end_date

The above job takes half an hour or more to run.

How can I improve its performance?
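For intuition, here is a minimal pure-Python sketch (not Spark code; the data and the `lookup` helper are invented for illustration, and it assumes the 500 ranges do not overlap). A range join must test every row of A against the ranges in B, so cost grows with |A| × |B|; if the small table fits in memory on every worker, each row of A can instead be resolved with a fast local lookup, e.g. a binary search over the sorted range starts:

```python
from bisect import bisect_right

# Hypothetical small table B: (start_date, end_date, label), non-overlapping ranges.
b = [("2020-02-01", "2020-02-29", "Feb"), ("2020-01-01", "2020-01-31", "Jan")]
b.sort()  # sort by start_date so we can binary-search the starts
starts = [row[0] for row in b]

def lookup(create_date):
    """Return the label of the range containing create_date, or None.

    O(log |B|) per probe instead of scanning all of B for every row of A.
    ISO-format date strings compare correctly as plain strings.
    """
    i = bisect_right(starts, create_date) - 1  # last range starting <= create_date
    if i >= 0 and b[i][1] >= create_date:      # check it also ends on/after the date
        return b[i][2]
    return None
```

Broadcasting B in Spark achieves the same effect: every executor gets its own copy of the small table and probes it locally, with no shuffle of the large table.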

Spark job details: (screenshot omitted)

2 Answers
女痞 · 2020-04-30 03:10

As others have suggested, one approach is to broadcast the smaller DataFrame. Spark can also do this automatically, controlled by the parameter below.

spark.sql.autoBroadcastJoinThreshold

If the smaller DataFrame is below this size threshold (in bytes), Spark automatically broadcasts it instead of shuffling both sides for the join. See the Spark SQL performance-tuning documentation for details.

何必那么认真 · 2020-04-30 03:21

The DataFrame API currently doesn't support a direct join like that against the source table: it will fully read both tables before performing the join.

https://issues.apache.org/jira/browse/SPARK-16614

If the data lives in Cassandra, you can instead use the RDD API to take advantage of the spark-cassandra-connector's joinWithCassandraTable function:

https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md#using-joinwithcassandratable
