Spark: Not enough space to cache rdd in container

Posted 2019-04-01 23:36

I have a 30-node cluster; each node has 32 cores and 240 GB of memory (AWS cr1.8xlarge instances). I am using the following configuration:

--driver-memory 200g --driver-cores 30 --executor-memory 70g --executor-cores 8 --num-executors 90 

I can see from the job tracker that I still have plenty of total storage memory left, but in one of the containers I got the message below saying Storage limit = 28.3 GB. Where does this 28.3 GB come from? My spark.storage.memoryFraction is 0.45.
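
My guess, and I am not sure this is right, is that under the legacy static memory manager the per-executor storage pool is executor memory × spark.storage.memoryFraction × spark.storage.safetyFraction (the safetyFraction of 0.9 is an assumption on my part; I have not overridden its default). A quick sanity check in Scala:

    // Back-of-the-envelope check, assuming Spark 1.x static memory management
    // and the default spark.storage.safetyFraction of 0.9.
    object StorageLimitCheck {
      def main(args: Array[String]): Unit = {
        val executorMemoryGb = 70.0  // --executor-memory 70g
        val memoryFraction   = 0.45  // my spark.storage.memoryFraction
        val safetyFraction   = 0.9   // default spark.storage.safetyFraction
        val storageLimitGb   = executorMemoryGb * memoryFraction * safetyFraction
        println(f"Per-executor storage limit ≈ $storageLimitGb%.2f GB") // ≈ 28.35 GB
      }
    }

70 × 0.45 × 0.9 ≈ 28.35 GB, which matches the 28.3 GB in the log, so it looks like the limit is per executor rather than cluster-wide. That would explain why total storage memory can be free while one container is full, but I would like confirmation.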

And how do I solve this "Not enough space to cache rdd" issue? Should I use more partitions or change the default parallelism, since I still have a lot of total storage memory unused? (A sketch of what I mean is after the log output below.) Thanks!

15/12/05 22:39:36 WARN storage.MemoryStore: Not enough space to cache rdd_31_310 in memory! (computed 1326.6 MB so far)
15/12/05 22:39:36 INFO storage.MemoryStore: Memory use = 9.6 GB (blocks) + 18.1 GB (scratch space shared across 4 tasks(s)) = 27.7 GB. Storage limit = 28.3 GB.
15/12/05 22:39:36 WARN storage.MemoryStore: Not enough space to cache rdd_31_136 in memory! (computed 1835.8 MB so far)
15/12/05 22:39:36 INFO storage.MemoryStore: Memory use = 9.6 GB (blocks) + 18.1 GB (scratch space shared across 5 tasks(s)) = 27.7 GB. Storage limit = 28.3 GB.
15/12/05 22:39:36 INFO executor.Executor: Finished task 136.0 in stage 12.0 (TID 85168). 1272 bytes result sent to driver
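
For reference, here is roughly what I mean by repartitioning and letting the cache spill to disk (an illustrative sketch only; the input path and partition count are placeholders I made up):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object CacheSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("cache-sketch"))
        // More partitions mean smaller blocks, so each block is more likely
        // to fit inside the per-executor storage pool.
        val rdd = sc.textFile("hdfs:///some/input").repartition(2000)
        // MEMORY_AND_DISK_SER spills blocks that do not fit in memory to
        // local disk instead of leaving them uncached.
        rdd.persist(StorageLevel.MEMORY_AND_DISK_SER)
        println(rdd.count())
        sc.stop()
      }
    }

Would that be the right direction, or should I raise spark.default.parallelism instead?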
