Hadoop Monitoring User Guide
Overview
Hadoop monitoring is provided through a Gateway configuration file that enables monitoring of the Hadoop cluster, nodes, and daemons through the JMX and Toolkit plug-ins.
This Hadoop integration template consists of the following components:
- Hadoop Distributed File System (HDFS)
- Yet Another Resource Negotiator (YARN)
The Hadoop Distributed File System (HDFS) provides scalable data storage that can be deployed on commodity hardware and is optimised for large datasets.
The other component, Yet Another Resource Negotiator (YARN), assigns the computation resources for executing applications:
- YARN ResourceManager - tracks available resources and allocates them to running applications.
- YARN NodeManagers - monitor resource usage and communicate with the ResourceManager.
To view the sample metrics and dataviews, see Hadoop Monitoring Technical Reference.
Intended audience
This guide is intended for users who are setting up, configuring, troubleshooting, and maintaining this integration. Once the integration is set up, the samplers providing the dataviews become available to that Gateway.
As a user, you should be familiar with Java and with the administration of the Hadoop services.
Prerequisites
The following requirements must be met prior to the installation and setup of the template:
- A machine running the Netprobe must have access to the host where the Hadoop instance is installed and to the port that Hadoop listens on.
- Netprobe 4.6 or higher.
- Gateway 4.8 or higher.
- Hadoop 3.0.0 or higher.
- Python 2.7/3.6 or higher.
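The version requirements above can be checked programmatically. The sketch below compares dotted version strings, such as those reported by `hadoop version` or `python --version`, against the minimums listed; the helper name `meets_minimum` is illustrative and not part of the template.

```python
def meets_minimum(version: str, minimum: str) -> bool:
    """Return True if a dotted version string is at least the given minimum."""
    parse = lambda v: [int(part) for part in v.split(".")]
    # List comparison is lexicographic, so [4, 8] >= [4, 6] behaves as expected.
    return parse(version) >= parse(minimum)

# Compare the versions reported by your environment with the prerequisites.
print(meets_minimum("3.0.0", "3.0.0"))  # Hadoop 3.0.0 or higher
print(meets_minimum("4.8", "4.6"))      # Gateway 4.8 or higher
```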
Installation procedure
Ensure that you have read and met the system requirements prior to installing and setting up this integration template.
- Download the integration package `geneos-integration-hadoop-<version>.zip` from the ITRS Downloads site.
- Open the Gateway Setup Editor.
- In the Navigation panel, click Includes to create a new file.
- Enter the location of the file to include in the Location field. In this example, it is `include/HadoopMonitoring.xml`.
- Update the Priority field. This can be any value except `1`. If you input a priority of `1`, the Gateway Setup Editor returns an error.
- Expand the file location in the Includes section.
- Select Click to load.
- Click Yes to load the new Hadoop include file.
- Click Managed entities in the Navigation panel.
- Add the Hadoop-Cluster and Hadoop-Node types to the Managed Entity section that you will use to monitor Hadoop.
- Click Validate to check your configuration, and save if everything is correct.
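After the steps above, the resulting include entry in the Gateway setup file looks roughly like the fragment below. This is a sketch of a typical Gateway setup include; confirm the exact element names against your Gateway schema, and note that the priority value `10` is only an example.

```xml
<includes>
  <include>
    <!-- Any priority other than 1 is accepted -->
    <priority>10</priority>
    <location>include/HadoopMonitoring.xml</location>
  </include>
</includes>
```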
Set up the samplers
These are the pre-configured samplers available to use in `HadoopMonitoring.xml`.
Configure the required fields by referring to the table below:
| Samplers |
|---|
| Hadoop-HDFS-NamenodeInfo |
| Hadoop-HDFS-NamenodeCluster |
| Hadoop-HDFS-SecondaryNamenodeInfo |
| Hadoop-HDFS-DatanodesSummary |
| Hadoop-HDFS-DatanodeVolumeInfo |
| Hadoop-YARN-ResourceManager |
| Hadoop-YARN-NodeManagersSummary |
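The HDFS samplers collect their data from the daemons' JMX interfaces. Hadoop daemons also expose their JMX MBeans as JSON over HTTP at the `/jmx` path of the web UI port, which is useful for verifying connectivity before wiring up a sampler. The sketch below is illustrative (the helper names are not part of the template):

```python
import json
from urllib.request import urlopen

# The NameNodeInfo bean backs the Hadoop-HDFS-NamenodeInfo dataview.
JMX_QUERY = "Hadoop:service=NameNode,name=NameNodeInfo"

def fetch_jmx(host: str, port: int = 9870, query: str = JMX_QUERY) -> dict:
    """Fetch a JMX bean payload from a Hadoop daemon's /jmx endpoint."""
    with urlopen(f"http://{host}:{port}/jmx?qry={query}") as resp:
        return json.load(resp)

def first_bean(payload: dict) -> dict:
    """Return the first bean from a /jmx response, or an empty dict."""
    beans = payload.get("beans", [])
    return beans[0] if beans else {}
```

For example, `first_bean(fetch_jmx("namenode-host"))` returns the NameNodeInfo attributes as a dictionary.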
Set up the variables
The `HadoopMonitoring.xml` template provides the variables that are set in the Environments section.
| Variables | Description |
|---|---|
| HADOOP_HOST_NAMENODE | IP address or hostname where the Namenode daemon is running. |
| HADOOP_HOST_SECONDARYNAMENODE | IP address or hostname where the Secondarynamenode daemon is running. |
| HADOOP_HOST_DATANODE | IP address or hostname where the specific Datanode daemon is running. |
| HADOOP_HOST_RESOURCEMANAGER | IP address or hostname where the ResourceManager is running. |
| HADOOP_PORT_JMX_NAMENODE | Namenode JMX port. |
| HADOOP_PORT_JMX_SECONDARYNAMENODE | Secondarynamenode JMX port. |
| HADOOP_PORT_WEBJMX_DATANODE | Datanode web UI port. Default: 9864 |
| HADOOP_PORT_JMX_RESOURCEMANAGER | ResourceManager JMX port. |
| HADOOP_PORT_WEBJMX_NAMENODE | Namenode web UI port. Default: 9870 |
| HADOOP_PORT_WEBJMX_RESOURCEMANAGER | ResourceManager web UI port. Default: 8088 |
| PYTHON_EXECUTABLE_PATH | Path to the Python executable that runs the integration scripts. |
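A toolkit script typically reads these variables from its environment to build the daemon endpoints it polls. A minimal sketch, assuming the function name and the `localhost` fallback are illustrative rather than part of the template:

```python
import os

def webjmx_url(host_var: str, port_var: str, default_port: str) -> str:
    """Build a daemon's web JMX URL from the environment variables above."""
    host = os.environ.get(host_var, "localhost")
    port = os.environ.get(port_var, default_port)
    return f"http://{host}:{port}/jmx"

# Example: the Namenode endpoint, falling back to the documented default port.
os.environ.setdefault("HADOOP_HOST_NAMENODE", "namenode.example.com")
print(webjmx_url("HADOOP_HOST_NAMENODE", "HADOOP_PORT_WEBJMX_NAMENODE", "9870"))
```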
Set up the rules
The `HadoopMonitoring-SampleRules.xml` template also provides a separate set of sample rules that you can use to configure the Gateway Setup Editor.
Your configuration rules must be set in the Includes section.
The table below shows the rule setup included in the configuration file:
| Sample Rules | Description |
|---|---|
| Hadoop-NameNodeCluster-Disk-Remaining | Checks the remaining disk ratio of the entire Hadoop cluster. |
| Hadoop-DataNode-Disk-Remaining | Checks the remaining disk ratio of a single datanode. HADOOP_RULE_DISK_REMAINING_THRESHOLD: Possible values 1.0 - 100. |
| Hadoop-Datanodes-In-Errors | Checks the number of datanodes with errors. HADOOP_RULE_DATANODES_ERROR_THRESHOLD: Integer value. |
| Hadoop-Blocks-In-Error | Checks the number of blocks with errors. HADOOP_RULE_BLOCKS_ERROR_THRESHOLD: Integer value. |
| Hadoop-Nodemanager-In-Error | Checks the number of nodemanagers with errors. HADOOP_RULE_NODEMANAGER_ERROR_THRESHOLD: Integer value. |
| Hadoop-Applications-In-Error | Checks the number of applications with errors. HADOOP_RULE_APPLICATION_ERROR_THRESHOLD: Integer value. |
| Hadoop-SecondaryNamenode-Status | Checks the connection status of the JMX plug-in to the Secondarynamenode service. |
| Hadoop-NodeManager-State | Checks the state of the nodemanager. HADOOP_RULE_NODEMANAGER_UNHEALTHY: Default: UNHEALTHY |
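The threshold variables above feed simple comparisons against sampled values. The sketch below shows the kind of check the disk-remaining rules perform; the function name and severity labels are illustrative and do not reproduce the Gateway's rule engine:

```python
def disk_remaining_severity(remaining_pct: float, threshold: float) -> str:
    """Flag a datanode or cluster whose remaining disk ratio has fallen
    to or below HADOOP_RULE_DISK_REMAINING_THRESHOLD (1.0 - 100)."""
    return "CRITICAL" if remaining_pct <= threshold else "OK"

# With a threshold of 10%, a node at 4.5% remaining is flagged.
print(disk_remaining_severity(4.5, 10.0))
print(disk_remaining_severity(55.0, 10.0))
```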