Spark workers stopped after driver commanded a shutdown

Published 2019-05-06 23:57

Basically, the master node also acts as one of the slaves. Once the slave running on the master finishes, it calls SparkContext to stop, and this command propagates to all the slaves, which stop execution in the middle of processing.

Error log from one of the workers:

INFO SparkHadoopMapRedUtil: attempt_201612061001_0008_m_000005_18112: Committed

INFO Executor: Finished task 5.0 in stage 8.0 (TID 18112). 2536 bytes result sent to driver

INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown

ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM

1 Answer
Explosion°爆炸
Answered 2019-05-07 00:27

Check your resource manager's user interface. If you see that an executor failed, its details will show the memory error. However, if no executor failed but the driver still called for a shutdown, the cause is usually driver memory; try increasing the driver memory. Let me know how it goes.
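As a minimal sketch, driver memory can be raised at submit time with the standard `--driver-memory` flag (the JAR name, class, and memory values below are placeholders, not from the original question):

```shell
# Illustrative spark-submit invocation with increased driver memory.
# spark.driver.maxResultSize is also worth raising if large task results
# are being collected back to the driver.
spark-submit \
  --class com.example.MyApp \
  --driver-memory 4g \
  --conf spark.driver.maxResultSize=2g \
  my-app.jar
```

Alternatively, `spark.driver.memory` can be set in `spark-defaults.conf`; note that for client-mode deployments it must be set via the command line or config file, not programmatically in `SparkConf`, because the driver JVM has already started by then.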
