I'm getting confused about spill to disk and shuffle write. Using the default sort shuffle manager, we use an AppendOnlyMap for aggregating and combining partition records, right? Then, when execution memory fills up, we start sorting the map, spilling it to disk, and then cleaning up the map for the next spill (if one occurs) - I've sketched my mental model in code after the questions below. My questions are:
- What is the difference between spill to disk and shuffle write? Both essentially consist of creating files on the local file system and writing records to them.
- Granted, they are different: spill records are sorted, because they are passed through the map, while shuffle write records are not, because they don't pass through the map.
- I had the idea that the total size of the spilled files should be equal to the size of the shuffle write. Maybe I'm missing something; please help me understand that phase.
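To make my mental model concrete, here is roughly the pattern I have in mind (a simplified sketch, not Spark's actual ExternalAppendOnlyMap/ExternalSorter code; the class name, memory limit, and file names are made up):

```scala
import java.io.{File, PrintWriter}
import scala.collection.mutable

// Simplified sketch of "aggregate in a map, sort, spill, clear".
// Illustrative only - Spark's real implementation is far more involved.
class NaiveSpillableMap(memoryLimit: Int) {
  private val map = mutable.HashMap.empty[String, Long]
  private var spills = 0

  def insert(key: String, value: Long): Unit = {
    // combine records with the same key in memory
    map(key) = map.getOrElse(key, 0L) + value
    // stand-in for "execution memory filled up"
    if (map.size >= memoryLimit) spill()
  }

  private def spill(): Unit = {
    spills += 1
    val out = new PrintWriter(new File(s"spill-$spills.txt"))
    // spilled records are sorted because they go through the map
    map.toSeq.sortBy(_._1).foreach { case (k, v) => out.println(s"$k\t$v") }
    out.close()
    map.clear() // clean up the map for the next spill
  }
}
```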
Thanks.
Giorgio
spill to disk and shuffle write are two different things.

spill to disk - data moves from host RAM to host disk - is used when there is not enough RAM on your machine, and Spark places part of its RAM-resident data on disk. See http://spark.apache.org/faq.html ("Does my data need to fit in memory to use Spark?").
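As a rough illustration of the "RAM to local disk on the same host" idea, here is a sketch using the RDD storage API (a hypothetical demo; the same principle applies when execution memory fills up during a shuffle, which Spark handles internally):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object SpillToDiskDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spill-to-disk-demo")
      .master("local[2]")
      .getOrCreate()

    // A dataset that may not fit in the available memory
    val rdd = spark.sparkContext.parallelize(1 to 100000000)

    // Partitions that don't fit in RAM are written to local disk
    // on the same host - no data crosses the network here.
    rdd.persist(StorageLevel.MEMORY_AND_DISK)
    println(rdd.count())

    spark.stop()
  }
}
```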
shuffle write - data moves from executor(s) to other executor(s) - is used when data needs to move between executors (e.g. due to a JOIN, groupBy, etc.).
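You can see where this happens in a query plan (a small sketch, reusing the SparkSession from above; the Exchange nodes mark the shuffle boundaries):

```scala
// Two datasets joined on a common key "k" (names are illustrative)
val left  = spark.range(1000000).withColumnRenamed("id", "k")
val right = spark.range(1000000).withColumnRenamed("id", "k")

// The physical plan contains Exchange (shuffle) nodes: each executor
// writes its records to local shuffle files ("shuffle write"), and the
// executors responsible for each key range then fetch them over the network.
left.join(right, "k").explain()
```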
An edge case example which might help clear this issue up: suppose there are 10 executors, each with 100GB of RAM, and the data consists of ten 128MB partitions, one per executor. Assuming that the data holds a single key, performing groupByKey will bring all the data into one partition. Shuffle size will be 9 * 128MB (9 executors will transfer their data to the last executor), and there won't be any spill to disk, as that executor has 100GB of RAM and only about 1GB of data.
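A sketch of that edge case (reusing the SparkSession from above; partition count, dataset size, and the key name are assumptions taken from the numbers in the example):

```scala
val sc = spark.sparkContext

// Ten partitions standing in for "one 128MB partition per executor",
// with every record sharing a single key.
val data = sc.parallelize(1 to 10000000, numSlices = 10)
  .map(v => ("the-only-key", v))

// groupByKey repartitions by key: 9 of the 10 partitions are written
// out as shuffle files and fetched by the one executor that owns the key.
val grouped = data.groupByKey()
println(grouped.getNumPartitions)                     // 10 partitions, only 1 non-empty
grouped.mapValues(_.size).collect().foreach(println)  // ("the-only-key", 10000000)
```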
Regarding AppendOnlyMap: the fact that two different modules use the same low-level function doesn't mean that those modules are related at a high level.