["Geneos > Netprobe"]["User Guide"]

Control-M Monitoring User Guide


The end of life (EOL) date for the Control-M (9.0.00) integration is 30 November 2021. To continue monitoring Control-M version 9.0.18 and newer, upgrade to the Control-M plug-in.

Introduction

The Control-M monitoring integration allows you to monitor scheduled jobs that are running on a Control-M server.

Batches in Control-M are defined as a series of jobs. These jobs have relationships and predefined rules to ensure that they run within a specific time window and only after all of the prerequisites (in conditions) are met. Each job has a job name, and each day a set of jobs is ordered in to run for the current batch. An instance of any given job on a particular day is allocated a unique order ID. If the job is run multiple times for the same batch, each run has a separate instance count.

This integration supports Control-M versions below 9.0.18. If you are using Control-M versions 9.0.18 and above, see the Control-M Plug-in User Guide.

Architecture

The Control-M solution is divided into three separate parts:

  • Job Views
  • Service Monitoring
  • Infrastructure Monitoring

Prerequisites

The following requirements must be met prior to the installation and setup of the integration:

  • Control-M version 9.0.00.
  • For Control-M version 9.0.18 and newer, use the Control-M plug-in. For more information, see Application and plug-in specific information.
  • A JRE is required. See Java support.
  • Control-M integration with dependent libs (these are included in the lib subdirectory).
  • Control-M integration licence: ControlmMonitor.lic. Please contact ITRS Support for a trial licence, or your ITRS Account Manager for more information.

You must have Java installed on the machine running the Netprobe. For information on supported Java versions, see Java support in 5.x Compatibility Matrix.
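
To confirm that a suitable Java version is available on the machine running the Netprobe, you can check the installed version from the command line:

java -version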

Installation

Sampler

Set up a sampler. This is configured as an API plug-in.

  1. Set the name to “Jobs”. If you wish to change the name, make sure that it matches the name in the ctmemapi.properties file.
  2. Set the plugin type to API.
<sampler name="Jobs">
  <plugin>
    <api></api>
  </plugin>
</sampler>

Netprobe

Select a Netprobe, preferably on the machine where you will run the plug-in code. In this example, it is called “Control-M”.

Managed Entity

Set up a managed entity that joins the probe and the sampler.

  1. Set the name to “Control-M”. If you wish to change the name, make sure that this value is used in the ctmemapi.properties file.
  2. Set Options to probe, and select the probe you set up in Netprobe.
  3. Reference the sampler you set up in Sampler.
<managedEntity name="Control-M">
  <probe ref="Control-M"></probe>
  <sampler ref="Jobs"></sampler>
</managedEntity>

Control-M Permissions

Using the Control-M client, ensure that the user has permissions to view the jobs that you want to see. The easiest way to do this is to add the User to the Browse Group.

If you wish to see all the BIM reports, make sure that the relevant permission is granted in the Privileges tab, and double-check the settings in the Services tab.

Control-M integration with dependent libs

Create a directory on the server running the Netprobe that you want to use to monitor Control-M, and copy the contents of the tar file to this location. The extracted contents are as follows:

ControlmMonitor/
    ControlmMonitor.jar
    ctmemapi.properties (You can rename the default.properties file to get you started)
    BMC_JNI.dll
    jacorb.properties
    log4j.properties
    lib/
        antlr-2.7.2.jar
        avalon-framework-4.1.5.jar
        bdIT.jar
        classes.jar
        commons-codec-1.3.jar
        concurrent-1.3.2.jar
        emapi.jar
        jacorb.jar
        jbcl.jar
        log4j-1.2.16.jar
        log4j-1.2.8.jar
        logkit-1.2.jar
        NamingViewer.jar
        ws-commons-util-1.0.2.jar
        xercesImpl.jar
        xml-apis.jar
        xmlrpc-client-3.1.3.jar
        xmlrpc-common-3.1.3.jar
        xmlrpc-server-3.1.3.jar
    xmldata/
        EMAPI_700
        EMAPI_800
        EMAPI_register
        EMAPI_unregister
        List_Services
        retrieve_jobs
        SOAP_env
    jobview/
        All_jobs
        Failed_Jobs
        Running_Jobs
        Waiting_Jobs
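
For example, on a Unix-like system, creating the directory and extracting the archive might look like the following sketch; the directory path and archive name are assumptions, so substitute the values used in your environment:

# Hypothetical target directory and archive name; adjust for your environment
mkdir -p /opt/controlm-monitor
tar -xvf ControlmMonitor.tar -C /opt/controlm-monitor
ls -R /opt/controlm-monitor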

Plug-in configuration

ctmemapi.properties

The plug-in uses the same configuration file as the Control-M API, called ctmemapi.properties. In it, set the connection details for the server. Confirm that the ctmemapi.properties file has the correct settings, especially the following:

netprobeServer=localhost *
netprobePort=7036
ManagedEntity=Control-M
Sampler=Jobs

#Instance name of the Control-M server (default is the Hostname of the server on which it resides)
com.bmc.ctmem.emapi.GSR.hostname= **

#Location of the xml request files
com.bmc.ctmem.emapi.XMLDATAPATH=xmldata

username= ***
password= ***

Each plug-in will monitor only one Control-M server.

Note: The Netprobe can run on a different server from the Control-M monitor process. If the probe runs on a different server, change localhost to the hostname of the server running the Netprobe.
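
For example, if the Netprobe runs on a separate host, the relevant lines in ctmemapi.properties might look like this (the hostname shown is a placeholder):

netprobeServer=geneos-probe01.example.com
netprobePort=7036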

The com.bmc.ctmem.emapi.GSR.hostname setting corresponds to the Control-M server to connect to, which can be found from the Control-M client.

In version 8, you can get these details from the Control-M client that is used to connect to the server. In version 7, you can get these details from the Control-M client once you are logged in.

Note: It is possible to store passwords in an encrypted format by adding the suffix .encrypted to the setting name in the config file.

For example:

password=cleartext

would become:

password.encrypted= f1wSYimqEj5Xa0n6HZ4PCg==

Note: You can encrypt the password using a utility that comes with the integration: the bdIT.jar library, located in the lib folder.

Run it and pass the password on the command line to produce the encrypted value:

java -jar bdIT.jar <password>

jacorb.properties

In the jacorb.properties file, you must set the naming server and port on line 21 using the following syntax:

ORBInitRef.NameService=corbaloc:iiop:1.2@NAMINGSERVICE:PORT/NameService
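
For example, assuming a naming server of ctm-em-host listening on port 13075 (both placeholder values), the line would read:

ORBInitRef.NameService=corbaloc:iiop:1.2@ctm-em-host:13075/NameService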

You can get these details from the Control-M client that is used to connect to the server.

Logging Configuration

Logging is configured using log4j. By default, it logs to the console and to a log file (controlmmonitor.log) that rolls twice a day (AM and PM).

The config files are available in the logConfig directory. Move the file that you wish to use into the main directory and make sure it is called log4j2.xml. There are three logging configurations provided:

  • log4j2.xml - logs INFO and above to logs/controlmmonitor.log and archives the file if it gets larger than 10MB.
  • log4j2.verbose.xml - logs INFO and above to logs/controlmmonitor.log and archives the file if it gets larger than 10MB. Also logs everything to controlmmonitor_debug.log.
  • log4j2-test.xml - sends everything to the console.
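
To switch to one of these configurations, copy the chosen file over the active log4j2.xml. For example, on a Unix-like system, run the following from the main integration directory (a sketch; adjust the paths to match your installation):

cp logConfig/log4j2.verbose.xml log4j2.xml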

Licence

Place the licence file (ControlmMonitor.lic) into the main directory where the ControlmMonitor.jar file is located.

Initialisation

To run the ControlmMonitor.jar file:

java -Dlog4j.configurationFile=log4j2.xml -jar ControlmMonitor.jar
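
If you want the monitor to keep running after your terminal session ends, one option on Unix-like systems is to start it in the background; this is a sketch, not a requirement of the integration:

nohup java -Dlog4j.configurationFile=log4j2.xml -jar ControlmMonitor.jar > controlmmonitor.out 2>&1 &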

Configuration

The job views are configured in the XML files in the jobview folder. Each XML document in this folder creates a separate view based on the query defined in its ctmem:search_criterion node.

Failed Jobs:

<Failed_jobs name="Failed jobs" group="Control-M">
  <ctmem:retrieve_jobs_criterion>
    <ctmem:include>
      <ctmem:search_criterion>
        <ctmem:param>
          <ctmem:name>STATUS</ctmem:name>
          <ctmem:operator>EQ</ctmem:operator>
          <ctmem:value>Ended Not OK</ctmem:value>
        </ctmem:param>
        <ctmem:param>
          <ctmem:name>ODATE</ctmem:name>
          <ctmem:operator>EQ</ctmem:operator>
          <ctmem:value>%%ODATE</ctmem:value>
        </ctmem:param>
      </ctmem:search_criterion>
    </ctmem:include>
  </ctmem:retrieve_jobs_criterion>
</Failed_jobs>

View Details

Job Views

The job view is the main feature of the integration. It displays a selection of jobs that are currently ordered into your Control-M schedule. There are several job views that are defined out of the box with the integration:

  • All Jobs - a list of all the jobs that are currently loaded into the environment.
  • Failed Jobs - a list of all the jobs that have failed in the environment.
  • Running Jobs - jobs that have not yet completed.
  • Waiting Jobs - jobs that have been held for some reason.

All the job views have the same column structure. Their contents are defined by the XML SOAP messages in the jobview folder.

You can use these views to filter the jobs by a series of criteria. For example:

  • DATA_CENTER: The data centre to which the job belongs.
  • APPLICATION: The name of the application to which the job’s group belongs.
  • APPL_TYPE: The external application on which the job runs.
  • GROUP_NAME: The name of the group to which the job belongs.
  • MEMNAME: The name of the file that contains the job script.
  • JOB_NAME: The name of the job.
  • TASK_TYPE: The type of the job (task) to be performed by Control-M.
  • CRITICAL: When selected, resources for the job are reserved exclusively for that job as they become available. When all necessary resources are available, the job is executed.
  • CYCLIC: When selected, indicates that the current job is cyclic (it should be rerun at specified intervals).
  • Part_of_BIM_service: Indicates whether the job is included in a Business Service.
  • STATUS: The job execution status.
  • DELETE_FLAG: Indicates whether the job has been deleted.
  • Run As User: The user (user ID) on whose behalf the job is executed. This parameter is used by the Control-M security mechanism.
  • HOSTGROUP: The name of the host or host group on which subsequent iterations of the job run.
  • START_TIME: The start time of the job.
  • END_TIME: The end time of the job.
  • Incond Name: The name of the in-condition.
  • Incond Date: The date of the in-condition.
  • Outcond Name: The name of the out-condition.
  • Outcond Date: The date of the out-condition.
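
As an illustration, you could add a custom view that combines several of these criteria by placing a new XML file in the jobview folder. The file name, view name, and APPLICATION value below are hypothetical; the structure follows the Failed Jobs example shown earlier:

<Payments_jobs name="Payments jobs" group="Control-M">
  <ctmem:retrieve_jobs_criterion>
    <ctmem:include>
      <ctmem:search_criterion>
        <ctmem:param>
          <ctmem:name>APPLICATION</ctmem:name>
          <ctmem:operator>EQ</ctmem:operator>
          <ctmem:value>PAYMENTS</ctmem:value>
        </ctmem:param>
        <ctmem:param>
          <ctmem:name>ODATE</ctmem:name>
          <ctmem:operator>EQ</ctmem:operator>
          <ctmem:value>%%ODATE</ctmem:value>
        </ctmem:param>
      </ctmem:search_criterion>
    </ctmem:include>
  </ctmem:retrieve_jobs_criterion>
</Payments_jobs>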

Service Monitoring

The integration connects with Batch Impact Manager (BIM) to view active services that are currently running. This allows you to:

  • View the detail of services currently running in BIM.
  • View the summary of jobs in each running service and their status, including:
    • Maximum / minimum runtimes
    • SLA breaches
  • Display jobs that are related to a BIM service in a job monitoring view.

Note: This functionality is only available if you have purchased the optional add-on, Batch Impact Manager (http://www.bmc.com/products/control-m-batch-scheduling/batch-impact-manager.html).

Infrastructure Monitoring

Agent Monitoring

To monitor the Control-M agents, an instance of the plug-in needs to run on each server where an Agent runs. All of these plug-ins can communicate with a single Netprobe. The plug-in uses information from the Control-M configuration to display information about the Agents, and it uses the command-line tool ag_ping to confirm that the agent can communicate correctly with the Control-M server.

Note: To reduce the load on the agent servers and the Control-M API, it is best to remove the job views from all the agent servers except one.

Server Monitoring

Using the features available in the standard Geneos product, you can monitor the whole scheduling infrastructure and use the menu commands to go from a selected job to the metrics of the server on which that job instance is running.

We also provide a template to enable monitoring of the Control-M database.