Discussion:
container_xxxx is running beyond physical memory limits
徐河松
2018-10-30 10:58:02 UTC
Permalink
Hi, Friends

When I run Hive on Spark, I get these errors:
ExecutorLostFailure (executor 8 exited caused by one of the running tasks) Reason: Container marked as failed: container_1534244004648_46447_01_000012 on host: zgc-e14-71.54-hadoop.cn. Exit status: 143. Diagnostics: Container [pid=168012,containerID=container_1534244004648_46447_01_000012] is running beyond physical memory limits. Current usage: 8.0 GB of 8 GB physical memory used; 9.8 GB of 32 GB virtual memory used. Killing container.

Any help would be appreciated.
zhankun tang
2018-11-01 09:51:59 UTC
Permalink
Hi Hesong,
"8.0 GB of 8 GB physical memory used;"
Looks like a memory shortage — the executor is using its full 8 GB allocation?

Zhankun
HiFriends
ExecutorLostFailure (executor 8 exited caused by one of the running tasks)
Reason: Container marked as failed: container_1534244004648_46447_01_000012
Container [pid=168012,containerID=container_1534244004648_46447_01_000012]
is running beyond physical memory limits. Current usage: 8.0 GB of 8 GB
physical memory used; 9.8 GB of 32 GB virtual memory used. Killing
container.
Any help would be appreciated.
Jhon Anderson Cardenas Diaz
2018-11-01 15:04:39 UTC
Permalink
Hi
When you deploy Spark workers inside containers, the amount of memory
required depends on three things:

1. *Spark daemon memory*: Memory you give to the Spark daemon process.
Usually 1 GB is enough. This is passed via the SPARK_DAEMON_MEMORY
environment variable.
2. *Spark worker memory*: The actual memory you give to the worker itself.
This depends on your needs. It is passed via the SPARK_WORKER_MEMORY
environment variable.
3. *Free memory for the OS*: Memory you leave for OS-related work. In my
experience, 2 to 4 GB is a good value.

Then the total amount of memory you should assign to your container is
the sum of those values; in your case, 1 GB for the daemon + 8 GB for the
worker + 2 GB (or 4 GB) for the OS = 11 GB (or 13 GB).

This is for Spark 2.1.1.
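Note that your original error comes from a YARN container (Hive on Spark), where the same idea applies on the Spark-on-YARN side: the container must hold the executor heap plus an off-heap overhead. A minimal sketch of the relevant settings, assuming Spark 2.x on YARN (in Spark 2.3+ the overhead property was renamed to spark.executor.memoryOverhead; the 2 GB value here is just an illustrative choice):

```properties
# Heap given to each executor JVM
spark.executor.memory=8g

# Off-heap overhead that YARN counts against the container limit.
# Default is max(384 MB, 10% of executor memory); raising it helps
# when containers are killed for exceeding physical memory.
spark.yarn.executor.memoryOverhead=2048
```

With these settings each executor's YARN container request becomes roughly 8 GB heap + 2 GB overhead = 10 GB, which must still fit under the cluster's yarn.scheduler.maximum-allocation-mb.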