325.2 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used
2020-04-01 15:25:37,410 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 24195 for container-id container_1585725830038_0002_02_000001: 209.6 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used
2020-04-01 15:25:40,419 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 24064 for container-id container_1585725830038_0003_02_000001: 336.3 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used
2020-04-01 15:25:40,427 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 24195 for container-id container_1585725830038_0002_02_000001: 340.0 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used
2020-04-01 15:25:43,450 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 24064 for container-id container_1585725830038_0003_02_000001: 336.4 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used
2020-04-01 15:25:43,481 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 24195 for container-id container_1585725830038_0002_02_000001: 340.1 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used
2020-04-01 15:25:46,503 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 24064 for container-id container_1585725830038_0003_02_000001: 336.4 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used
2020-04-01 15:25:46,526 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 24195 for container-id container_1585725830038_0002_02_000001: 334.9 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used
2020-04-01 15:25:49,545 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 24064 for container-id container_1585725830038_0003_02_000001: 336.4 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used
2020-04-01 15:25:49,586 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 24195 for container-id container_1585725830038_0002_02_000001: 334.8 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used
2020-04-01 15:25:52,607 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 24064 for container-id container_1585725830038_0003_02_000001: 336.6 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used
2020-04-01 15:25:52,640 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 24195 for container-id container_1585725830038_0002_02_000001: 334.9 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used
2020-04-01 15:25:53,040 WARN org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection: Directory /opt/module/hadoop-2.7.2/data/tmp/nm-local-dir error, used space above threshold of 90.0%, removing from list of valid directories
2020-04-01 15:25:53,040 WARN org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection: Directory /opt/module/hadoop-2.7.2/logs/userlogs error, used space above threshold of 90.0%, removing from list of valid directories
2020-04-01 15:25:53,040 INFO org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService: Disk(s) failed: 1/1 local-dirs are bad: /opt/module/hadoop-2.7.2/data/tmp/nm-local-dir; 1/1 log-dirs are bad: /opt/module/hadoop-2.7.2/logs/userlogs
2020-04-01 15:25:53,040 ERROR org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService: Most of the disks failed. 1/1 local-dirs are bad: /opt/module/hadoop-2.7.2/data/tmp/nm-local-dir; 1/1 log-dirs are bad: /opt/module/hadoop-2.7.2/logs/userlogs
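Two problems are visible in this log. First, each container's virtual memory usage (2.2-2.3 GB) is at or above the 2.1 GB limit that follows from a 1 GB physical allocation and the default vmem-pmem ratio of 2.1. Second, at 15:25:53 both the NodeManager's local-dir and log-dir crossed the 90% disk-utilization threshold, so the only configured disk was marked bad and the node was left with no usable local directories. Freeing space under /opt/module/hadoop-2.7.2 is the real fix for the disk issue; as a stopgap, the health-checker threshold can be raised in yarn-site.xml. The sketch below uses 95.0 purely as an illustrative value, not a recommendation from this article.

<!-- Sketch: raise the disk-health-checker threshold (default 90.0).
     95.0 is an illustrative value; cleaning up the disk is preferable. -->
<property>
    <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
    <value>95.0</value>
</property>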
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>true</value>
    <description>Whether virtual memory limits will be enforced for containers.</description>
</property>
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
    <description>Ratio between virtual memory to physical memory when setting memory limits for containers. Container allocations are expressed in terms of physical memory, and virtual memory usage is allowed to exceed this allocation by this ratio.</description>
</property>
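With these two properties at their defaults, a container that is allocated 1 GB of physical memory gets a virtual memory ceiling of 1024 MB x 2.1, roughly the 2.1 GB limit the containers in the log are brushing against. If the job genuinely needs more address space, one option is to raise the ratio rather than disable the check entirely. The value 4 below is a minimal sketch with an illustrative ratio, not a value taken from this article.

<!-- Sketch: allow more virtual memory per unit of physical memory.
     4 is an illustrative ratio, not a recommendation from this article. -->
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
</property>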
<property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
    <description>The minimum allocation for every container request at the RM in terms of virtual CPU cores. Requests lower than this will be set to the value of this property. Additionally, a node manager that is configured to have fewer virtual cores than this value will be shut down by the resource manager.</description>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>4</value>
    <description>The maximum allocation for every container request at the RM in terms of virtual CPU cores. Requests higher than this will throw an InvalidResourceRequestException.</description>
</property>
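To see how the two vcore bounds behave, consider a MapReduce job that sets a per-task vcore count (the property below is the standard MapReduce knob; the value is illustrative). A request of 2 vcores falls inside the [1, 4] window and is granted as-is, a request below the minimum would be raised to 1 vcore, and a request of 8 would be rejected with an InvalidResourceRequestException.

<!-- Sketch: per-map-task vcore request in mapred-site.xml; 2 is an
     illustrative value between the scheduler's minimum (1) and maximum (4). -->
<property>
    <name>mapreduce.map.cpu.vcores</name>
    <value>2</value>
</property>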
<property>
    <name>yarn.nodemanager.elastic-memory-control.enabled</name>
    <value>false</value>
    <description>Enable elastic memory control. This is a Linux only feature. When enabled, the node manager adds a listener to receive an event, if all the containers exceeded a limit. The limit is specified by yarn.nodemanager.resource.memory-mb. If this is not set, the limit is set based on the capabilities. See yarn.nodemanager.resource.detect-hardware-capabilities for details. The limit applies to the physical or virtual (rss+swap) memory depending on whether yarn.nodemanager.pmem-check-enabled or yarn.nodemanager.vmem-check-enabled is set.</description>
</property>
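If the NodeManager runs on Linux and the YARN release in use ships this property, elastic memory control can be switched on so that over-limit handling is driven by the node-wide limit described above rather than by per-container monitoring alone. The sketch below enables it together with physical-memory checking; whether that combination suits a given cluster is an assumption, not something this article prescribes.

<!-- Sketch (Linux only, assuming a YARN release that supports this feature):
     enable elastic memory control with physical-memory enforcement. -->
<property>
    <name>yarn.nodemanager.elastic-memory-control.enabled</name>
    <value>true</value>
</property>
<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>true</value>
</property>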