Configuring Resources for a NodeManager Role Instance
Scenario
If the nodes where NodeManagers are deployed have different hardware resources (such as the number of CPU cores and memory size) but the available hardware resources are configured to the same value for every NodeManager, resources may be wasted or the instances may run abnormally. In this case, change the hardware resource configuration of each NodeManager separately so that the resources of every node can be fully used.
Impact on the System
NodeManager role instances must be restarted for the new configuration to take effect, and the role instances are unavailable during restart.
Prerequisites
You have logged in to Manager.
Procedure
Choose Cluster > Name of the desired cluster > Services > Yarn > Instance.
Click the role instance name corresponding to the node where NodeManager is deployed, switch to Instance Configuration, and select All Configurations.
Enter yarn.nodemanager.resource.cpu-vcores in the search box and set the number of vCPUs that the NodeManager can use on the current node. You are advised to set this parameter to 1.5 to 2 times the number of logical CPUs on the node. Enter yarn.nodemanager.resource.memory-mb in the search box and set the physical memory size that the NodeManager can use on the current node. You are advised to set this parameter to 75% of the node's physical memory size. (A sizing sketch is provided after this procedure.)
Note
Enter yarn.scheduler.maximum-allocation-vcores in the search box and set the maximum number of vCPUs that a single container can use. Enter yarn.scheduler.maximum-allocation-mb in the search box and set the maximum memory that a single container can use. These two parameters cannot be changed at the instance level. Change them in the configuration of the Yarn service, and restart the Yarn service for the changes to take effect.
Click Save, and then click OK to restart the NodeManager role instance.
A message is displayed, indicating that the operation is successful. Click Finish. The NodeManager role instance is started successfully.
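The sizing advice in this procedure can be sanity-checked with a quick calculation on the NodeManager node before you enter the values in Manager. The following is a minimal sketch, not a product tool: it assumes a Linux node and uses only the Python standard library; the helper functions and the example scheduler maxima (32 vCPUs and 65536 MB) are hypothetical placeholders, and only the yarn.* parameter names are taken from the steps above.

```python
# Sizing sketch (hypothetical helpers): derive suggested values for the
# per-node NodeManager parameters from the node's hardware, following the
# "1.5x-2x logical CPUs" and "75% of physical memory" guidance in the
# procedure above. Run it on the NodeManager node itself.
import os


def suggested_nodemanager_resources(vcore_factor=1.5, memory_ratio=0.75):
    logical_cpus = os.cpu_count() or 1
    # Total physical memory in MB; the two sysconf names are Linux-specific.
    total_mem_mb = (os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")) // (1024 * 1024)
    return {
        "yarn.nodemanager.resource.cpu-vcores": int(logical_cpus * vcore_factor),
        "yarn.nodemanager.resource.memory-mb": int(total_mem_mb * memory_ratio),
    }


def fits_scheduler_maxima(node_values, max_alloc_vcores, max_alloc_mb):
    # yarn.scheduler.maximum-allocation-* caps the size of a single container.
    # If a container of that maximum size should be schedulable on this node,
    # the per-node values must be at least as large as the maxima.
    return (node_values["yarn.nodemanager.resource.cpu-vcores"] >= max_alloc_vcores
            and node_values["yarn.nodemanager.resource.memory-mb"] >= max_alloc_mb)


if __name__ == "__main__":
    values = suggested_nodemanager_resources()
    for key, value in values.items():
        print(f"{key} = {value}")
    # Example service-level maxima; replace with your cluster's actual
    # yarn.scheduler.maximum-allocation-vcores / -mb settings.
    print("fits scheduler maxima:", fits_scheduler_maxima(values, 32, 65536))
```

On a node with, for example, 16 logical CPUs and 128 GB of memory, the sketch suggests 24 vCPUs and roughly 98304 MB. The values you actually enter in Manager should still be reviewed against your workload and against the service-level maximum-allocation settings described in the note above.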