Publishing

Publishing data

The Gateway can publish data to external systems using a dynamically loaded adapter.

The following adapters are available:

  • HTTP
  • Kafka

The adapters are provided as shared objects in the lib64 directory. If you need to run the Gateway from a directory that is not the parent of lib64, please ensure that any adapters you are using can be located either via the LD_LIBRARY_PATH environment variable or via the adapter.library.path setting. See publishing > additionalSettings.

When Publishing is enabled, the Gateway streams the following set of data:

  • Directory data: probes, managed entities, and dataviews
  • Metrics data: dataview table rows and headlines, in raw and enriched forms
  • Metadata: severity, snooze status, and user assignment

You can filter this data using strategies.

HTTP adapter

The Gateway can use the HTTP adapter to publish data to an arbitrary HTTP or HTTPS endpoint. The data is published as POST requests, with a MIME type of application/json.

The HTTP adapter supports TLS encryption and verification of a server’s identity and it can authenticate the Gateway to the server using Basic authentication.
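As an illustration of what a receiving endpoint involves, the sketch below implements a minimal HTTP listener in Python that accepts these POST requests. It is a hypothetical consumer, not part of Geneos; the class name and behaviour are assumptions, and a production endpoint would add TLS and Basic authentication as described above.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PublishReceiver(BaseHTTPRequestHandler):
    """Hypothetical endpoint collecting JSON payloads POSTed by a publisher."""

    received = []  # payloads accumulated across requests, for inspection

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        if self.headers.get("Content-Type", "").startswith("application/json"):
            PublishReceiver.received.append(json.loads(body))
            self.send_response(200)
        else:
            self.send_response(415)  # reject payloads that are not JSON
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

# To run: HTTPServer(("", 8080), PublishReceiver).serve_forever()
# and point the adapter's Url setting at http://<host>:8080/
```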

Kafka adapter

The Gateway supports publishing to a Kafka cluster (minimum version 0.9), as a Kafka producer client. This provides a resilient mechanism for publishing and allows other systems to consume Geneos data by implementing a Kafka consumer client. In a Hot Standby configuration, only the active Gateway publishes to the Kafka cluster, and data consumers are isolated from any failover.

The Kafka adapter supports TLS encryption and authentication, and this must be configured both on the Kafka cluster and in the Gateway setup file. See publishing > additionalSettings for more details.

This data is published using JSON-based formats which are described below.

Configuration

publishing

This section defines the parameters controlling publishing to external systems.

publishing > enabled

This setting allows publishing to be enabled or disabled. By default publishing is turned off, but if a publishing section exists it is turned on. This setting allows that behaviour to be overridden, so that publishing is disabled while its configuration is kept for later use.

Mandatory: No
Default: true

Note

Publishing and Gateway Hub can be enabled at the same time. Errors generated by attempting to publish using Publishing do not affect the operation of Gateway Hub. Similarly, errors generated by attempting to publish using Gateway Hub do not affect the operation of Publishing.

publishing > adapter

The adapter section allows the adapter to be selected and holds settings specific to the selected adapter.

The following adapters are available:

  • HTTP
  • Kafka

publishing > adapter > HTTP

Settings for the HTTP adapter.

publishing > adapter > HTTP > Url

Specify the HTTP or HTTPS endpoint that Gateway should publish data to.

publishing > adapter > HTTP > Authentication

Specify the form of authentication to use, the following options are available:

publishing > adapter > HTTP > Authentication > basic

Specify a username and password.

publishing > adapter > HTTP > Verification

You can enable the verification option to check that the TLS certificates supplied by the target endpoint include trusted root certificates in their trust chain.

If no root certificates are provided, then Gateway will use the host's trusted certificates for verification.

publishing > adapter > HTTP > Verification > On

Enable or disable verification.

publishing > adapter > HTTP > Verification > On > Root certificates

Specify the trusted root certificates to use for verification.

You can provide either a pemString or the path to a pemFile.

publishing > adapter > kafka

Settings for the Kafka adapter.

publishing > adapter > kafka > topicPrefix

Specifies the first part of the topic names under which data is published and the first part of the topic name to which the Kafka adapter subscribes for requests, such as metrics snapshots. By default, the publishing topics used include geneos-probes, geneos-raw.table, (and eight other names beginning geneos-) and the request topic is geneos-requests. This setting allows the topics to be changed to, for example, my-test-probes, my-test-raw.table and so on.

Mandatory: No
Default: geneos-

publishing > adapter > kafka > brokerList

Specifies a comma-separated list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The adapter makes use of all servers irrespective of which servers are specified here for bootstrapping; this list only affects the initial hosts used to discover the full set of brokers. Refer to the description of bootstrap.servers in the Kafka documentation for more information.

Mandatory: No
Default: localhost:9092

publishing > additionalSettings

This advanced configuration option allows additional settings to be specified. These settings either control the way the adapter is loaded or are interpreted by the adapter itself.

Settings are entered using the same syntax as a Java properties file, each line specifies a key-value pair.

Settings with the prefix adapter. control the way the adapter is loaded. The settings available are:

The Kafka adapter uses the librdkafka library to connect to Kafka and supports transparent pass-through of settings to the library. Any global setting (other than callbacks) defined in the librdkafka documentation can be used by prefixing their names with kafka..

For example, the key kafka.client.id refers to the librdkafka option documented as client.id.
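The prefix handling can be sketched as follows. This is an illustrative Python helper, not Gateway code, showing how kafka.-prefixed keys map onto librdkafka option names while adapter.-prefixed keys are kept separate:

```python
def librdkafka_config(additional_settings):
    """Sketch: split Gateway additionalSettings lines into two groups.

    Lines are key=value pairs (Java properties style). Keys prefixed
    with "kafka." are passed through with the prefix removed, while
    "adapter."-prefixed keys control adapter loading and are kept apart.
    """
    kafka, adapter = {}, {}
    for line in additional_settings.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        key = key.strip()
        if key.startswith("kafka."):
            kafka[key[len("kafka."):]] = value.strip()
        elif key.startswith("adapter."):
            adapter[key] = value.strip()
    return kafka, adapter
```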

Note

The Kafka adapter discards incoming messages when the librdkafka publishing queue is full. When publishing is resumed, the adapter reports the number of discarded messages. The length of the queue can be modified by including a kafka.queue.buffering.max.messages setting here.
SASL PLAIN authentication

To use SASL PLAIN for authentication on a normal connection, use the following template, replacing the username and password with your credentials:

kafka.sasl.mechanism=PLAIN
kafka.security.protocol=SASL_PLAINTEXT
kafka.sasl.username=<username>
kafka.sasl.password=<password>

Note

Username and password are cleartext.
SSL encryption

To use SSL encryption for the connection, the SSL protocol must be enabled and the certificate used to sign the Kafka brokers' public keys must be trusted by the Kafka adapter. Use the following template, replacing the location with the path to your CA certificate file:

kafka.security.protocol=ssl
kafka.ssl.ca.location=<location of CA certificate PEM file>
SSL encrypted connection with SASL PLAIN authentication

To use SSL encryption with SASL PLAIN for authentication, use the following template, replacing the location, username, and password with your credentials:

kafka.sasl.mechanism=PLAIN
kafka.security.protocol=SASL_SSL
kafka.ssl.ca.location=<location of CA certificate PEM file>
kafka.sasl.username=<username>
kafka.sasl.password=<password>

Note

Username and password are cleartext.
SSL encryption with SSL authentication

To use SSL encryption with SSL authentication, the Kafka adapter must be able to present a certificate which is signed using a certificate trusted by the broker(s). Use the following template, replacing the locations with your credentials:

kafka.security.protocol=ssl
kafka.ssl.ca.location=<location of CA certificate PEM file>
kafka.ssl.certificate.location=<location of client certificate>
kafka.ssl.key.location=<location of private key for client certificate>
SSL encryption with SCRAM-SHA-256 authentication

To use SSL encryption with SCRAM-SHA-256 authentication, use the following template:

kafka.security.protocol=sasl_ssl
kafka.sasl.mechanism=SCRAM-SHA-256
kafka.sasl.username=xxxxxxx
kafka.sasl.password=xxxxxxx
kafka.ssl.ca.location=/opt/ITRS/cert/cert2/ca-cert/CA_cert.pem
kafka.ssl.certificate.location=/opt/ITRS/cert/cert2/server.pem
kafka.ssl.key.location=/opt/ITRS/cert/cert2/server.key
Use Kerberos to connect to Kafka on Linux 64-bit Gateways

Linux 64-bit Gateways can connect to Kafka using Kerberos. Use the following template:

kafka.sasl.kerberos.service.name=<service name>
kafka.security.protocol=SASL_PLAINTEXT
kafka.sasl.kerberos.keytab=<keytab file>
kafka.sasl.kerberos.principal=<Kerberos principal>

The default value of kafka.sasl.kerberos.service.name is kafka. Use this value unless the Service Principal Names for the Kafka cluster have been set up with a different service name.

Use your credentials for the kafka.sasl.kerberos.keytab and kafka.sasl.kerberos.principal fields. The keytab file must encode the password for the username specified as kafka.sasl.kerberos.principal. You can generate the keytab file using the Unix utility ktutil or the Windows Server built-in utility ktpass.exe.

Note

If you are using Kerberos to connect to Kafka, and the Gateway’s working directory is not the package directory, you must set the SASL_PATH environment variable. It must point to the sasl2 directory inside the lib64 directory of the Gateway package. Ensure you have also set the LD_LIBRARY_PATH environment variable or the adapter.library.path setting to locate the required libraries.
Test your connection to Kerberos

Before you test Kerberos with your Gateway, use the commands kinit and klist to test that the keytab file can connect to your Kerberos server and a valid ticket is returned. Use the following template for kinit, replacing the service name, broker hostname, keytab file, and Kerberos principal with your credentials:

kinit -S <service name>/<broker hostname> -k -t <keytab file> <Kerberos principal>

Mandatory: No

publishing > secureSettings

This section allows settings that cannot be set in cleartext, such as passwords, to be encrypted in the Gateway setup file.

publishing > secureSettings > setting > name

The name of the secure setting.

Mandatory: Yes

publishing > secureSettings > setting > value

The value of the secure setting.

This can either be:

Strategies

Schedule

Setting Description
Every Number of intervals between publishing operations.
Interval Size of the interval. The following options are available:

  • Days

  • Hours

  • Minutes

Starting at Specify a starting time for the schedule.
Timezone Specify the timezone used to set the schedule.
Setting Description Default
Name Specifies a name to uniquely identify the strategy. Default: New Strategy
Targets One or more XPaths identifying the data items to which the strategy applies.

If the filter option is selected, then the target XPaths must point to one or more Netprobes, Managed Entities, samplers, or dataviews. XPaths that reference run-time values such as severity or connection state, or XPaths that point to individual cells, are not supported.

If the pivotDataview or schedule option is selected, then the target must be a dataview and this is checked when the setup is validated.

Options Specifies what type of strategy to use. Default: filter. The following options are available:

  • include filter — publish only the data items specified by the targets, their ancestors, and their descendants. The metrics, severity messages, snooze messages, and user assignment messages of all data items not explicitly or implicitly targeted are not published. Severity is propagated through published data based only on included data items.

  • exclude filter — do not publish the data items specified by the targets or their descendants. Metrics, severity messages, snooze messages, and user assignment messages are not published for any targeted data items. Severity is propagated through published data based only on data items which are not excluded.

Note: If you specify both include filters and exclude filters, a data item is published if it is selected (directly or as an ancestor or descendant) by at least one include target and it is not selected (directly or as a descendant) by any exclude target.

  • pivotDataview — pivots the target dataviews such that the rownames of the original dataview become column names and the adjacent columns become rows. Pivot behaviour is determined by the settings in the publisher tab of target samplers.

  • schedule — publish only at increments of the specified interval.
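The combined include/exclude behaviour can be sketched as a predicate. This is a simplification, not Gateway code: XPath selection is reduced here to precomputed sets of selected items.

```python
def is_published(item, include_selected, exclude_selected):
    """Decide whether a data item is published under filter strategies.

    include_selected: set of items selected (directly, or as an ancestor
    or descendant of a target) by at least one include filter; empty if
    no include filters are configured.
    exclude_selected: set of items selected (directly or as a descendant
    of a target) by any exclude filter.
    """
    if include_selected and item not in include_selected:
        return False  # include filters exist but none selects this item
    return item not in exclude_selected
```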

Strategy Group

Strategies may be grouped for organisational purposes. Grouping has no effect on how strategies are applied, but makes them easier to find in the navigation tree.

You must specify a name when creating a strategy group.

kafkacat

kafkacat is an open source utility written and maintained by the author of the librdkafka library used by Geneos. kafkacat is shipped with Linux 64-bit Gateways to ease the testing of connecting to your Kafka infrastructure. For more information about kafkacat, see https://github.com/edenhill/kafkacat.

To ensure that kafkacat uses the same Kafka, SSL, and SASL libraries as the Gateway, kafkacat must be run with the following environment variables:

Enable Kafka debug logging

To enable Kafka debug logging, follow these steps:

  1. Open your GSE.
  2. Click Operating environment in the Navigation tree.
  3. Select the Debug tab.
  4. Open the drop-down list next to Publishing, and tick adapter.
  5. Click Publishing in the Navigation tree.
  6. Select the Advanced tab.
  7. In Additional settings, add kafka.debug= followed by the debug categories. We recommend setting it to topic,protocol. For a full set of debug categories, see the librdkafka documentation. The queue and all categories are extremely verbose.
  8. Click Save current document.

Standardised formatting

Some data can vary in format from dataview to dataview or even from row to row or column to column. Standardised Formatting allows normalisation of this data to a standard format for downstream systems.

Currently only dateTime is supported. Date-times, dates, and times are formatted to ISO 8601 format, adjusted to UTC. Cells containing only times are assumed to be for the current day and formatted as a date-time.
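For illustration only (this is not the Gateway's implementation), the conversion can be sketched in Python: a cell in a ctime-style format is parsed, tagged with its assumed source timezone, and emitted as ISO 8601 in UTC.

```python
from datetime import datetime, timezone

def to_iso8601(cell, fmt="%a %b %d %H:%M:%S %Y", tz=timezone.utc):
    """Parse a dataview cell with strptime and emit ISO 8601 in UTC.

    The source timezone (cf. the "region" element) is assumed known;
    the naive parsed value is tagged with it, then adjusted to UTC.
    """
    parsed = datetime.strptime(cell, fmt).replace(tzinfo=tz)
    return parsed.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
```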

There are two types of standardised formatting: System supplied formatting and User specified formatting.

System supplied formatting

The Gateway provides a set of converters for cells in Geneos plugins.

These are defined in a file located in <gateway resources directory>/standardisedformats/formats.json. By default, the gateway resources directory is <gateway directory>/resources but you can modify this using a Gateway command-line option.

The data in this file is in JavaScript Object Notation (JSON).

It is recommended that this file be left unchanged. However, in unusual circumstances it may be preferable to update the formats.json file rather than generate a set of user specified formatting.

The file has a number of entries under the “formats” label at the start. Taking one as an example, it breaks down as follows:

{
  "formats" : [
    {
      "type"   : "datetime",
      "format" : "%a %b %d %H:%M:%S %Y",
      "region" : "gateway",
      "plugin" : "Gateway-gatewayData",
      "cell"   : { "row" : "licenseExpiryDate", "column" : "value" }
    },
    ...
  ]
}
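A sketch of how such an entry might be selected for a given cell. The lookup function is hypothetical, not Gateway code, and only matches on the plugin and cell elements shown above:

```python
import json

FORMATS_JSON = '''
{ "formats" : [
    { "type"   : "datetime",
      "format" : "%a %b %d %H:%M:%S %Y",
      "region" : "gateway",
      "plugin" : "Gateway-gatewayData",
      "cell"   : { "row" : "licenseExpiryDate", "column" : "value" } }
] }
'''

def find_format(formats, plugin, row, column):
    """Return the first entry whose plugin and cell target match, else None."""
    for entry in formats["formats"]:
        cell = entry.get("cell", {})
        if (entry.get("plugin") == plugin
                and cell.get("row") == row
                and cell.get("column") == column):
            return entry
    return None
```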

Note

Some “formats” entries may end with a special format called “raw”. This prevents errors from being logged where none of the formats were able to process the input.
Element Description
type

Data type of the data being formatted.

Currently the only type is "datetime".

format

Specifies the expected format of the data.

See the following section for a description of the time parsing codes.

Where you need several of these to accommodate different servers that may generate dates in differing formats, use the "formats" tag described below.

formats Where a variable may be in a number of formats, an array of alternative formats may be specified.
{
  ...
  "formats" : ["%a %b %d %H:%M:%S %Y", "%s"]
  ...
}
region

Timezone information for the formatter.

For our purposes these should be "gateway" or "netprobe" depending on the location of the sampler.

The formatter will use the timezone of the specified component. Where the Netprobe timezone is not known, it will fall back to the Gateway timezone.

plugin The name of the plugin this formatter applies to. The formatter applies to all dataviews created by the plugin unless you restrict this.
dataview

Only applies to variables within the named dataview.

{
  ...
  "dataview" : "overview"
  ...
}
cell

Targets the formatting of a specific row / column pair.

For more flexibility, you can target a range of columns.

{
...
"cell"  : { "row" : "licenseExpiryDate", "column" : "Value"}
...
}

cols

An array of regular expressions used to target columns within a dataview.

{
...
"cols"     : ["applied|changed"]
...
}

This will target all column headings containing "applied" or "changed" as part of the column name, for example "appliedDate".

headlines An array of regular expressions used to target headlines within a dataview. Works as cols above and can be used in conjunction with it.
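The cols and headlines matching can be illustrated with Python's re module. Search semantics mean a pattern matches anywhere in the name; the names below are hypothetical:

```python
import re

def matching_names(patterns, names):
    """Return the names matched by any of the "cols"-style regular expressions."""
    return [n for n in names if any(re.search(p, n) for p in patterns)]
```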

User specified formatting

Certain dataviews contain cells where the format of the cell cannot be determined. This may be because the data is generated by a user script, or it may be because a Geneos plugin is extracting data whose format is determined by the environment of the server on which the plugin is running.

The standardised formatting tab on samplers allows the user to describe the originating format of variables (cells and headlines). This allows them to be transformed into standardised format. You can also specify which cells in the sampler the format is applied to by using the dataview name and the applicability sections.

Configuration

samplers > sampler > publishing > standardisedFormatting

See Standardised formatting in Publish to Kafka.

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview

Provides a list of standardised formatting variables to apply to one or more dataviews in the sampler. When the name of the dataview is set, the variable definitions are restricted to the named dataview. If the name is left unset, the variable definitions apply to all dataviews belonging to the sampler.

Mandatory: No

Default: Not set

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > Name

Name of the dataview.

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable

Variable definition specifying type and applicability of variable.

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type

Specifies the type of the variable. Currently only dateTime is supported.

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type > dateTime

Specifies that the variable is a date-time (includes dates and times which are assumed to be for the current day).

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type > dateTime > formats > format

Specifies the expected format of the data. See the following section for a description of the time parsing codes. There can be several of these to accommodate different servers that may generate dates in differing formats.

Mandatory: Yes

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type > dateTime > exceptions > exception

Specifies exceptions to Standardised Formatting. If the text specified here matches the value of a published cell, no formatting is applied and no error is logged. This is for instances where dataviews have invalid values such as a blank string ("") or “N/A” as a date.

For data published via Publishing using Kafka, the data is published unchanged.

For data published to Gateway Hub, any exceptions are published as “N/A”. This allows the Gateway Hub to recognise and process the data.

Mandatory: No

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type > dateTime > ignoreErrorsIfFormatsFail

If set to true and none of the formats can translate the data, the errors generated are suppressed. This can be used to specify a NULL format that overrides a system-provided one with no formatting. This is useful, for example, if you do not care about formatting these values and the output is in an unexpected format due to locale.

Mandatory: No

Default: false
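The behaviour of trying each format in turn and optionally suppressing failures can be sketched as below. This is a hypothetical helper, not the Gateway's implementation; it returns the cell unchanged instead of logging when errors are ignored.

```python
from datetime import datetime

def apply_formats(cell, formats, ignore_errors_if_formats_fail=False):
    """Try each candidate format; on success emit ISO 8601 output."""
    for fmt in formats:
        try:
            return datetime.strptime(cell, fmt).strftime("%Y-%m-%dT%H:%M:%SZ")
        except ValueError:
            continue  # this format did not match; try the next one
    if ignore_errors_if_formats_fail:
        return cell  # publish unchanged, suppressing the error
    raise ValueError("no format matched %r" % cell)
```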

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type > dateTime > overrideTimezone

By default, the timezone of a date is that of the dataview: if the Probe has a timezone set, the date is interpreted in that timezone; otherwise it matches the Gateway's timezone. This setting allows you to explicitly set the timezone from which the data is being received.

Mandatory: No

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > applicability

Allows matching of formats to dataview variables.

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > applicability > headlines

Allows you to match variables to headline variables.

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > applicability > headlines > regex

A regular expression to match against the headline names.

All values within the matching headlines will have the formatter applied for publishing.

Mandatory: No

Expression Effect
date Matches all headlines containing date.
date$ Matches all headlines ending in date.
^date$ Matches the headline date.

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > applicability > columns > regex

A regular expression to match against the column names.

All values within the matching columns will have the formatter applied for publishing.

Mandatory: No

Expression Effect
date Matches all columns containing date.
date$ Matches all columns ending in date.
^date$ Matches the column date.

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > applicability > cells

Maps to a specific row and column. These mappings are good for dataviews made from name, value pairs and take priority on lookup over column regular expressions. As these are specific, the row and column names are not regular expressions.

Mandatory: No

operatingEnvironment > debug > DirectoryManager > showStandardisedFormattingErrors

Turns on error reporting for errors found when processing inputs to standardised formats. If this is not enabled and errors are found, a summary of the number of errors found is logged at 10-minute intervals.

Type Format
Date-time 2012-07-27T19:12:00Z
Date-time with micro seconds 2012-07-27T19:12:00.123456Z
Date 2012-07-27

Kafka message formats overview

Kafka topics, keys and messages

A Kafka message has the following components:

  • the topic to which it is published
  • a message key
  • a message payload

Message key format

In principle, the message key is an opaque string used to allocate messages to partitions in Kafka. Applications needing to filter messages should ignore the key and use the "target" object within the payload.

In practice, the message key is roughly equivalent to a reversed XPath delimited by dots:

<dataview>.<sampler>.<type>.<managed entity>.<probe>.<gateway>

For example (note that if a component name is not applicable, it will be omitted, but the following dot will not):

CPU.CPU..myEntity.theProbe.Ad-hoc GW
....theProbe.Ad-hoc GW

The number of name components (and hence the number of dots) is the same for all metrics and metadata messages.
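Although consumers are advised to use the "target" object rather than the key, the fixed component count makes a naive split possible. The helper below is illustrative only and assumes no component name itself contains a dot:

```python
KEY_COMPONENTS = ["dataview", "sampler", "type", "managedEntity", "probe", "gateway"]

def parse_message_key(key):
    """Split a message key into its reversed-XPath components.

    Empty components (omitted names) are preserved as empty strings.
    """
    parts = key.split(".")
    if len(parts) != len(KEY_COMPONENTS):
        raise ValueError("unexpected number of key components: %r" % key)
    return dict(zip(KEY_COMPONENTS, parts))
```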

Message payloads

The examples in this section are pretty-printed: the messages published by Gateway have no redundant whitespace.

Directory messages

Messages on the probes, managedEntities and dataviews topics provide information about the corresponding items in the Gateway directory hierarchy. They are published as these items are created and deleted and as their run-time attributes are updated.

Messages in this category are also retransmitted when publishing setup is reconfigured (for example to change the Kafka topic prefix).

Probes
Example messages

A message sent when a Netprobe comes up:

{
  "data": {
    "timestamp": "2015-07-01T16:18:23.263Z",
    "name": "theProbe",
    "gateway": "Ad-hoc GW",
    "osType": "Linux",
    "HostName": "linux-dev",
    "Port": "7036",
    "ConState": "Up",
    "OS": "Linux 2.6.18-371.4.1.el5",
    "Version": "GA3.1.1-150515"
  },
  "operation": "update"
}

A message describing a virtual probe:

{
  "data": {
    "timestamp": "2015-10-20T09:15:04.732Z",
    "name": "vp",
    "gateway": "Ad-hoc GW",
    "osType": "Virtual",
    "ConState": "Up"
  },
  "operation": "update"
}

Points to Note:

Managed Entities
Example message
{
  "data": {
    "timestamp": "2016-05-25T15:02:18.255Z",
    "name": "theEntity",
    "probe": "theProbe",
    "gateway": "Ad-hoc GW",
    "attributes": {
      "Team": "Middleware",
      "Purpose": "Testing"
    }
  },
  "operation": "update"
}

Note

An update message is published whenever a managed entity attribute is changed, added or deleted.
Dataviews
{
  "data": {
    "timestamp": "2016-05-27T12:51:16.009Z",
    "dataview": "CPU",
    "sampler": "CPU",
    "pluginName": "CPU",
    "type": "Default Samplers",
    "managedEntity": "basics",
    "probe": "theProbe",
    "gateway": "Ad-hoc GW",
    "topicSuffix": "CPU.CPU.Default Samplers.basics.theProbe.Ad-hoc GW",
    "availableTopics": [
      "enriched.table.CPU.CPU.Default Samplers.basics.theProbe.Ad-hoc GW",
      "enriched.headlines.CPU.CPU.Default Samplers.basics.theProbe.Ad-hoc GW",
      "raw.table.CPU.CPU.Default Samplers.basics.theProbe.Ad-hoc GW",
      "raw.headlines.CPU.CPU.Default Samplers.basics.theProbe.Ad-hoc GW"
    ]
  },
  "operation": "create"
}

Note

Update messages are only sent on the dataviews topic in response to a client request or a change in Publishing configuration.

Metrics messages

Raw and enriched forms of metrics data

Metrics data is available in two forms:

  • raw — data as reported by the data source
  • enriched — raw data plus values computed by Gateway rules

When new data for a row is provided by the data source and then values in that row are computed by Gateway rules, an update message will be published for each of these changes as they occur: first the change from the data source and then each change made by a rule.

Headline data

A raw message:

{
  "data": {
    "sampleTime": "2016-05-27T12:54:29.685Z",
    "target": {
      "gateway": "Ad-hoc GW",
      "probe": "theProbe",
      "managedEntity": "basics",
      "type": "Default Samplers",
      "sampler": "CPU",
      "dataview": "CPU",
      "filter": {
        "osType": "Linux",
        "pluginName": "CPU"
      }
    },
    "samplingStatus": "OK",
    "numOnlineCpus": "2",
    "loadAverage1Min": "0.00",
    "loadAverage5Min": "0.00",
    "loadAverage15Min": "0.00",
    "numPhysicalCpus":  ,
    "HyperThreadingStatus": "DISABLED",
    "numCpuCores": "2"
  },
  "operation": "create"
}

This message represents the headlines of a dataview.

The 'target' property identifies the dataview. The 'filter' property of the target provides additional information which may help a consumer of the message to determine which properties should appear in the rest of the message. For example, the Geneos CPU plugin on Windows publishes a dataview that has different headlines and column names from the CPU plugin on Linux.

The properties that follow 'target' are the headlines for the dataview. 'samplingStatus' is always present as the next property after 'target'; in some cases it is the only headline present.

An enriched message for the headlines of the same dataview:

{
  "data": {
    "sampleTime": "2016-05-27T12:54:29.877Z",
    "target": {
      "gateway": "Ad-hoc GW",
      "probe": "theProbe",
      "managedEntity": "basics",
      "type": "Default Samplers",
      "sampler": "CPU",
      "dataview": "CPU",
      "filter": {
        "osType": "Linux",
        "pluginName": "CPU"
      }
    },
    "samplingStatus": "OK",
    "numOfflineCpus": "0",
    "numOnlineCpus": "2",
    "loadAverage1Min": "0.00",
    "loadAverage5Min": "0.00",
    "loadAverage15Min": "0.00",
    "numPhysicalCpus":  ,
    "HyperThreadingStatus": "DISABLED",
    "numCpuCores": "2"
  },
  "operation": "update"
}

This message represents the headlines of the same dataview. The differences from the “raw” message are:

Table (cell) data

A raw message:

{
  "data": {
    "sampleTime": "2016-05-27T12:59:56.085Z",
    "target": {
      "gateway": "Ad-hoc GW",
      "probe": "theProbe",
      "managedEntity": "basics",
      "type": "Default Samplers",
      "sampler": "CPU",
      "dataview": "CPU",
      "filter": { "osType": "Linux", "pluginName": "CPU" }
    },
    "name": "Average_cpu",
    "row": {
      "type": "",
      "state": "",
      "clockSpeed": "",
      "percentUtilisation": "0.97 %",
      "percentUserTime": "0.53 %",
      "percentKernelTime": "0.18 %",
      "percentWaitTime": "0.25 %",
      "percentIdle": "99.03 %"
    }
  },
  "operation": "update"
}

This message represents a row from a dataview. As in the case of headline messages, the 'target' property identifies the dataview and provides additional information to help determine which properties should appear in the rest of the message.

The 'name' property identifies the row; the 'row' property contains the cell data. Because this is a "raw" message, no computed cells are shown and the 'sampleTime' property is the time the Netprobe published the sample.

An enriched message for the same table row:

{
  "data": {
    "sampleTime": "2016-05-27T12:59:56.283Z",
    "target": {
      "gateway": "Ad-hoc GW",
      "probe": "theProbe",
      "managedEntity": "basics",
      "type": "Default Samplers",
      "sampler": "CPU",
      "dataview": "CPU",
      "filter": { "osType": "Linux", "pluginName": "CPU" }
    },
    "name": "Average_cpu",
    "row": {
      "workDone": "3194.4816",
      "type": "",
      "state": "",
      "clockSpeed": "",
      "percentUtilisation": "0.97 %",
      "percentUserTime": "0.53 %",
      "percentKernelTime": "0.18 %",
      "percentWaitTime": "0.25 %",
      "percentIdle": "99.03 %"
    }
  },
  "operation": "update"
}

This message represents the same row from the same dataview. The differences from the “raw” message are:

Sample times for imported data

Where Gateway sharing is used, the “raw form” messages for imported metrics data may include rows, columns or headlines added by the exporting Gateway. For each update, the sample time will reflect the original source of the data.

That is, when a new dataview sample is provided by a Netprobe, the exporting Gateway will first forward that sample, with sample time provided by the Netprobe and then, if a rule is triggered, will send a further update for computed data using its own time for the sample time. The raw form published by the importing Gateway will therefore include sample times from both the Netprobe and the exporting Gateway. Although all timestamps are UTC, if the operating system clocks are not synchronised, the sample times in the “raw form” published messages may be inconsistent. For example, computed updates may be published with earlier sample times than the data from which they are calculated.

Points to Note:

Metadata messages

These messages provide information about severity, snooze status and user assignment of data items at all levels of the directory hierarchy.

Severity messages

Severity messages are published on the topic metadata.severity.

Here are two of the messages generated when the status of a dataview headline changed from “WARNING” to “OK”:

{
  "data": {
    "timestamp": "2016-05-25T15:02:18.357Z",
    "target": {
      "gateway": "Ad-hoc GW",
      "probe": "vp",
      "managedEntity": "Gateway",
      "sampler": "probeData",
      "type": "",
      "dataview": "probeData",
      "headline": "samplingStatus",
      "filter": {
        "osType": "Virtual",
        "pluginName": "Gateway-probeData"
      }
    },
    "severity": "OK",
    "active": true,
    "snoozed": false,
    "snoozedParents": 2,
    "userAssigned": false,
    "value": {
      "cell": "OK"
    }
  },
  "operation": "update"
}

{  "data": {    "timestamp": "2016-05-25T15:02:18.358Z",    "target": {      "gateway": "Ad-hoc GW",      "probe": "vp",      "managedEntity": "Gateway",      "filter": {        "osType": "Virtual"      }    },    "severity": "OK",    "active": true `"snoozed": false, "snoozedParents": 2, "userAssigned": false,`  },  "operation": "update"
}

Here are some examples of severity messages for dataview items resulting from severity change:

{   "data": {     "timestamp": "2016-07-19T14:51:15.451Z",     "target": {       "gateway": "Ad-hoc GW",       "probe": "vp",       "managedEntity": "Gateway",       "sampler": "gateway",       "type": "",       "dataview": "gateway",       "row": "databaseHost",       "column": "value",       "filter": {         "osType": "Virtual",         "pluginName": "Gateway-gatewayData"       }     },     "severity": "OK",     "active": true, `"snoozed": false, "snoozedParents": 2, "userAssigned": false,`     "value": {       "cell": "dbhost"     }   },   "operation": "update" }
 {   "data": {     "timestamp": "2016-07-19T14:51:15.446Z",     "target": {       "gateway": "Ad-hoc GW",       "probe": "vp",       "managedEntity": "m",       "sampler": "gw",       "type": "",       "dataview": "gw",       "row": "releaseAge",       "column": "value",       "filter": {         "osType": "Virtual",         "pluginName": "Gateway-gatewayData"       }     },     "severity": "CRITICAL",     "active": true, `"snoozed": false, "snoozedParents": 2, "userAssigned": false,`     "value": {       "cell": "4 days",       "number": 4     }   },   "operation": "create" }
 {   "data": {     "timestamp": "2016-07-18T15:47:34.288Z",     "target": {       "gateway": "Ad-hoc GW",       "probe": "vp",       "managedEntity": "m2",       "sampler": "gw",       "type": "",       "dataview": "gw",       "row": "licenseExpiryDate",       "column": "value",       "filter": {         "osType": "Virtual",         "pluginName": "Gateway-gatewayData"       }     },     "severity": "OK",     "active": true, `"snoozed": false, "snoozedParents": 2, "userAssigned": false,`     "value": {       "cell": "2021-01-31T00:00:00Z",       "dateTime": "2021-01-31T00:00:00Z"     }   },   "operation": "update"
}

Points to Note:

Snooze messages Copied

Snooze messages are published on the topic metadata.snooze.

Here are the messages generated when a managed entity is snoozed and unsnoozed:

{  "data": {    "timestamp": "2016-05-27T14:51:10.000Z",    "target": {      "gateway": "Ad-hoc GW",      "probe": "vp",      "managedEntity": "Gateway",      "sampler": "gatewayLoad",      "type": "",      "filter": {        "osType": "Virtual"      }    },    "snoozed": {      "snoozed": true,      "snoozedBy": "ActiveConsole1",      "comment": "Simple example",      "period": "Manual"    }  },  "operation": "update"
}

{  "data": {    "timestamp": "2016-05-27T14:51:10.000Z",    "target": {      "gateway": "Ad-hoc GW",      "probe": "vp",      "managedEntity": "Gateway",      "sampler": "gatewayLoad",      "type": "",      "filter": {        "osType": "Virtual"      }    },    "snoozed": {      "snoozed": false`, "unsnoozedBy: "ActiveConsole1"`    }  },  "operation": "update"
}

Here is an example of snoozing a headline, using some more complex options:

{  "data": {    "timestamp": "2016-05-27T14:54:51.000Z",    "target": {      "gateway": "Ad-hoc GW",      "probe": "vp",      "managedEntity": "Gateway",      "sampler": "gatewayLoad",      "type": "",      "dataview": "gatewayLoad",      "headline": "samplingStatus",      "filter": {        "osType": "Virtual",        "pluginName": "Gateway-gatewayLoad"      }    },    "snoozed": {      "snoozed": true,      "snoozedBy": "ActiveConsole1",      "comment": "Complex options",      "period": "Until",      "untilSeverity": "OK",      "untilTime": "2016-05-27T15:54:51.000Z",      "untilValue": "NOTE: Stats collection is disabled"    }  },  "operation": "update"
}

Points to Note:

User assignment messages Copied

User assignment messages are published on the topic metadata.userassignment.

Here are the messages generated when a probe is assigned and unassigned:

{  "data": {    "timestamp": "2016-05-27T15:09:39.000Z",    "target": {      "gateway": "Ad-hoc GW",      "probe": "vp",      "filter": {        "osType": "Virtual"      }    },    "userAssignment": {      "userAssigned": true,        "assignedTo": "Ann Administrator",`"assignedBy": "Joe Bloggs",`      "comment": "",      "period": "Manual"    }  },  "operation": "update"
}

{  "data": {    "timestamp": "2016-05-27T15:10:30.677Z",    "target": {      "gateway": "Ad-hoc GW",      "probe": "vp",      "filter": {        "osType": "Virtual"      }    },    "userAssignment": {      "userAssigned": false`, "unassignedBy: "John Doe"`    }  },  "operation": "update"
}

Here is an example of assigning a table cell, using some more complex options:

{  "data": {    "timestamp": "2016-05-27T15:17:15.000Z",    "target": {      "gateway": "Ad-hoc GW",      "probe": "vp",      "managedEntity": "Gateway",      "sampler": "probeData",      "type": "",      "dataview": "probeData",      "row": "theProbe",      "column": "security",      "filter": {        "osType": "Virtual",        "pluginName": "Gateway-probeData"      }    },    "userAssignment": {      "userAssigned": true,      "assignedTo": "John Doe",`"assignedBy": "Ann Administrator",`      "comment": "Please set up secure config",      "period": "Until a change in value",      "untilValue": "INSECURE"    }  },  "operation": "update"
}

Points to Note:

Features common to all types of message Copied

Timestamps Copied

Timestamps are formatted using the following ISO 8601 format: YYYY-MM-DDThh:mm:ss.sssZ. Note that fractional seconds are shown to millisecond precision, and that all timestamps are in UTC. For raw metrics (see Raw and enriched forms of metrics data in the Message Payloads section), the timestamp shown is the sample time reported by the sampler. Otherwise the timestamp, including the sample time shown for enriched metrics, is the time on the Gateway host at which the message is formatted.
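For illustration, this timestamp format can be produced and consumed with Python's standard datetime module. This is a sketch of the format described above, not part of the Gateway itself:

```python
from datetime import datetime, timezone

def format_timestamp(dt: datetime) -> str:
    # Render a datetime in the published timestamp format:
    # ISO 8601, UTC, millisecond precision, 'Z' suffix.
    dt = dt.astimezone(timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%S") + f".{dt.microsecond // 1000:03d}Z"

def parse_timestamp(s: str) -> datetime:
    # Parse a published timestamp back into an aware UTC datetime.
    return datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)
```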

Operation Copied

The possible values for the operation property of the message payload are as follows:

The data item described has been added to the configuration, or, in the case of a table row, re-created with a new set of columns.

The data item has been updated in some way, including, for the headlines topics, the addition or removal of a headline.

The current state of the data item is being sent in response to a client request or a change in Publishing configuration.

The current state of the data item is being sent as a result of the use of the “Schedule” publishing strategy.

The data item described has been removed from the configuration, or, in the case of a table row, is about to be re-created with a new set of columns. When a ‘delete’ message is sent, the properties of the ‘data’ part of the message reflect the last known state of the item.

Request/reply messages Copied

In addition to the publish/subscribe mechanism used for the messages described so far, the Kafka adapter supports a request/reply mechanism by which clients can make requests and receive replies.

To send a request using Kafka, the client publishes a request to the Geneos request topic. The name of this topic is normally ‘geneos-requests’; the ‘geneos-’ prefix is the same as the (configurable) prefix used for the publishing topics. The Gateway does not acknowledge the request. All the Gateways that publish to a given Kafka cluster using the same topic prefix will receive and act on the request.
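As a sketch of this mechanism, the following helper publishes a request to the request topic. It assumes any Kafka producer object exposing a send(topic, value) method, such as kafka-python's KafkaProducer; the broker address in the comment and the helper name are illustrative, not part of any Geneos API:

```python
import json

def send_gateway_request(producer, request: dict, topic_prefix: str = "geneos") -> str:
    """Publish a Gateway request on the '<prefix>-requests' topic.

    'producer' is any Kafka producer exposing send(topic, value), e.g.
    kafka.KafkaProducer(bootstrap_servers="broker:9092").
    Returns the topic name used.
    """
    topic = f"{topic_prefix}-requests"
    producer.send(topic, json.dumps(request).encode("utf-8"))
    return topic
```

Because the Gateway does not acknowledge requests, the only feedback is the data subsequently published on the normal topics.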

Resend directory request Copied

A client can request that the Gateway re-send the directory information by sending the following request:

{"request":"resend-directory"}

The Gateway resends all the data for the directory topics. This is sent via the normal Kafka publishing topics.

Snapshot metrics request Copied

A client can request that the Gateway provide a snapshot of selected metrics data by sending a request of the following form:

{"request":"snapshot-metrics","target":{"key":"value","key":"value"},"match":"exact"}

The JSON object provided as the value of “target” specifies the dataviews for which to provide a snapshot.

For example, to request a snapshot of all dataviews from Linux CPU samplers in managed entities with attribute “Region” set to “London” and attribute “Division” set to “FIXED INCOME”, a client could send this request:

{ "request":"snapshot-metrics", "target":{           "attributes":{"Region":"London","Division":"FIXED INCOME"},           "osType":"Linux",           "pluginName":"CPU"          }, "match":"exact"
}

To request a snapshot of all dataviews from Gateway plugins, one could use this request:

{ "request":"snapshot-metrics", "target":{           "pluginName":"Gateway*"          }, "match":"wildcard"
}

The Gateway will send a snapshot of the metrics (raw and enriched, headlines and table data) for all dataviews which match the target specification. This is sent via the normal Kafka publishing topics.
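The request body is plain JSON, so it can be constructed with any JSON library. A minimal Python sketch (the helper name is ours, not part of any Geneos API):

```python
import json

def snapshot_metrics_request(target: dict, match: str = "exact") -> str:
    # Build the body of a 'snapshot-metrics' request. 'target' holds the
    # target keys, e.g. {"attributes": {"Region": "London"},
    # "osType": "Linux", "pluginName": "CPU"}.
    if match not in ("exact", "wildcard"):
        raise ValueError("match must be 'exact' or 'wildcard'")
    return json.dumps({"request": "snapshot-metrics", "target": target, "match": match})
```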

Target syntax Copied

Keys and values are all case-sensitive. The following keys are supported:

The osType determines the operating system which the probe is running on. This can be found in the properties of a managed entity that has a running probe:

Active Console identifies the type of operating system based on a numeric value:

Exact vs wildcard matching Copied

If the match key in the snapshot request has the value wildcard, then values may include the wildcard characters ‘*’ and ‘?’. As wildcards, ‘*’ matches zero or more unspecified characters and ‘?’ matches exactly one unspecified character. ‘*’ and ‘?’ can be escaped (so that they match a literal asterisk or question mark) by preceding them with a backslash, ‘\’. Note that to encode a backslash in JSON, it needs to be doubled.

If the match key in the snapshot request has the value exact, then all characters match themselves.
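The wildcard semantics can be mirrored by translating a pattern into a regular expression. This is a sketch of the matching rules described above, not Gateway code:

```python
import re

def wildcard_to_regex(pattern: str):
    # '*' matches zero or more characters, '?' matches exactly one,
    # and a backslash escapes the following character so it matches literally.
    parts = []
    i = 0
    while i < len(pattern):
        ch = pattern[i]
        if ch == "\\" and i + 1 < len(pattern):
            parts.append(re.escape(pattern[i + 1]))  # escaped literal
            i += 2
            continue
        if ch == "*":
            parts.append(".*")
        elif ch == "?":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
        i += 1
    return re.compile("^" + "".join(parts) + "$")
```

For example, wildcard_to_regex("Gateway*") matches "Gateway-sql". Remember that when the pattern is embedded in the JSON request, the backslash itself must be doubled ("\\*").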

Snapshot request for events Copied

A client can request that the Gateway provide a snapshot of selected event data.

The request can be for separate snapshots of severity, snooze, and user assign events.

The snapshot response contains the severity, snooze, or userAssign state of all matching target data items (i.e. dataviews and cells), together with the state of their parent data items (i.e. sampler, managed entity, gateway).

The form and syntax required to request the snapshot is very similar to the Snapshot metrics request, with the only difference being the value of the request.

The request values are:

For example, to make a request for severity events use:

{ "request":"snapshot-severity", "target":{          [YOUR key:value PAIRS HERE]          }, "match":"exact"
}

Snapshot request for all events and metrics Copied

A client can request that the Gateway provide a snapshot of all event (severity, snooze, and userAssign) and metrics data. To make a request for this data, send a request in the following form:

{ "request":"snapshot-all", "target":{           [YOUR key:value PAIRS HERE]          }, "match":"exact"
}

HTTP message formats overview Copied

Message structure Copied

Data published by the HTTP adapter is sent as self-contained JSON objects. All messages are posted to the same user-supplied endpoint. A type field at the top level of the message structure may be used for routing by a user application.

Each message object contains three key-value pairs:

This contains at least three key-value pairs:

A JSON schema for the publishing message payloads can be downloaded by clicking the following link: HTTP schema.

The following sections provide further information about the semantics of the different message types. The examples in these sections are pretty-printed: the messages published by Gateway have no redundant whitespace.
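Since every message carries a top-level type field, a receiving application can dispatch on it. A minimal sketch (the function and handler structure are illustrative, not prescribed by the Gateway):

```python
import json

def route_message(body: bytes, handlers: dict):
    # Dispatch one posted message to a handler keyed on its 'type' field
    # (e.g. 'probe', 'managedEntity', 'headline', 'table', 'severity').
    message = json.loads(body)
    handler = handlers.get(message.get("type"))
    if handler is None:
        return None  # unrecognised types can be safely ignored
    return handler(message["operation"], message["data"])
```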

Directory messages Copied

Messages with the probe and managedEntity types are published when these directory items are created or deleted, or when their run-time attributes are updated. These messages are also retransmitted when the publishing setup changes, for example when the destination URL is changed.

Netprobe messages Copied


{ "data": {   "timestamp": "2015-07-01T16:18:23.263Z",   "target": {     "gateway": "Ad-hoc GW",     "probe": "theProbe"   },   "parameters": {     "osType": "Linux",     "HostName": "linux-dev",     "Port": "7036",     "ConState": "Up",     "OS": "Linux 2.6.18-371.4.1.el5",     "Version": "GA3.1.1-150515"   } }, "operation": "update", "type": "probe"
}

Points to Note:

Managed Entity messages Copied


{ "data": {   "timestamp": "2016-05-25T15:02:18.255Z",   "target": {     "gateway": "Ad-hoc GW",     "probe": "theProbe",     "managedEntity": "theEntity"   },   "attributes": {     "Team": "Middleware",     "Purpose": "Testing"   } }, "operation": "update", "type": "managedEntity"
}

Points to Note:

Metrics messages Copied

Metrics messages have a headline or a table type. They contain the rows and cells provided by the data source, such as a Netprobe plugin. They may also include additional rows and cells configured on the publishing Gateway.

Headlines Copied

A headline message represents the headlines in a dataview.


{ "data": {  "sampleTime": "2019-01-24T10:41:14.098Z",  "netprobeTime": "2019-01-24T10:41:14.098Z",  "target": {     "gateway": "ExampleGateway",     "probe": "vp",     "managedEntity": "GatewayInfo",     "type": "",     "sampler": "Gateway SQL",     "dataview": "Fixed",     "filter": {        "osType": "Virtual",        "pluginName": "Gateway-sql"     }  },  "row": {     "samplingStatus": "OK"  } }, "type": "headline", "operation": "create"
}

Points to Note:

Table rows Copied

This message represents the column values for a single row of a dataview.


{ "data": {  "sampleTime": "2019-01-24T10:41:14.240Z",  "netprobeTime": "2019-01-24T10:41:14.098Z",  "target": {     "gateway": "ExampleGateway",     "probe": "vp",     "managedEntity": "GatewayInfo",     "type": "",     "sampler": "Gateway SQL",     "dataview": "Static",     "row": "whatever",     "filter": {        "osType": "Virtual",        "pluginName": "Gateway-sql"     }  },  "row": {     "name": "whatever",     "fromRule": "rule output",     "answer": "17",     "approxPi": "3.1416"  },  "computedColumn": "fromRule" }, "type": "table", "operation": "update"
}

Points to Note:

Event messages Copied

Event messages provide information about the severity, snooze status, and user assignment of data items at all levels of the directory hierarchy.

Severity messages Copied

Severity is set directly by rules on a headline or a table cell, and indirectly by propagation up the Gateway directory hierarchy. Severity messages are published for both types of update.

The following message is an example of a severity change on a cell. This message includes the value of the cell at the time of the change.


{ "data": {  "timestamp": "2016-07-18T15:47:34.288Z",  "target": {     "gateway": "Ad-hoc GW",     "probe": "vp",     "managedEntity": "m2",     "sampler": "gw",     "type": "",     "dataview": "gw",     "row": "licenseExpiryDate",     "column": "value",     "filter": {        "osType": "Virtual",        "pluginName": "Gateway-gatewayData"      }  },  "data": {     "severity": "OK",     "active": true,     "snoozed": false,     "snoozedParents": 2,     "userAssigned": false,     "value": {        "cell": "2021-01-31T00:00:00Z",        "dateTime": "2021-01-31T00:00:00Z"     }  }  },  "operation": "update",  "type": "severity"
}

The following is an example of a propagated severity change on a Managed Entity:


{  "data": {  "timestamp": "2018-11-08T10:17:55.169Z",  "target": {     "gateway": "ExampleGateway",     "probe": "theProbe",     "managedEntity": "Misc",     "filter": {        "osType": "Unknown"     }  },  "data": {     "severity": "WARNING",     "active": true,     "snoozed": false,     "snoozedParents": 0,     "userAssigned": false  } }, "operation": "update", "type": "severity"
}

Points to Note:

Snooze messages Copied

Snooze messages are generated when data is snoozed or unsnoozed.

The following message is an example of a table cell being snoozed:


{ "data": {  "timestamp": "2019-02-07T14:08:52.000Z",  "target": {     "row":  ,     "column": "security",     "gateway": "ExampleGateway",     "probe": "vp",     "managedEntity": "GatewayInfo",     "type": "",     "sampler": "Client",     "dataview": "Client",     "filter": {        "osType": "Virtual",        "pluginName": "Gateway-clientConnectionData"     }  },  "data": {     "snoozed": true,     "snoozedBy": "ryoung",     "comment": "Snoozing until severity is OK",     "period": "SeverityTo",     "untilSeverity": "OK"  } }, "operation": "update", "type": "snooze"
}

The following message is an example of a sampler being unsnoozed:


{ "data": {  "timestamp": "2019-02-07T14:09:10.291Z",  "target": {     "gateway": "ExampleGateway",     "probe": "vp",     "managedEntity": "GatewayInfo",     "type": "",     "sampler": "Client",     "filter": {        "osType": "Virtual",        "pluginName": "Gateway-clientConnectionData"     }  },  "data": {     "snoozed": false,     "unsnoozedBy": "ryoung"  } }, "operation": "update", "type": "snooze"
}

Points to Note:

User assignment messages Copied

A user assignment message is generated when a data item is assigned or unassigned to a user.

The following message is an example of a Managed Entity being assigned to a user:


{ "data": {  "timestamp": "2019-02-07T14:08:23.000Z",  "target": {     "gateway": "ExampleGateway",     "probe": "vp",     "managedEntity": "GatewayInfo",     "filter": {        "osType": "Virtual"     }  },  "data": {     "userAssigned": true,     "assignedTo": "ryoung",     "assignedBy": "ryoung",     "comment": "Assigning this item to myself",     "period": "Manual"  } }, "operation": "update", "type": "userassignment"
}

The following message is an example of a Managed Entity being unassigned:


{ "data": {  "timestamp": "2019-02-07T14:10:02.949Z",  "target": {     "gateway": "ExampleGateway",     "probe": "vp",     "managedEntity": "GatewayInfo",     "filter": {        "osType": "Virtual"     }  },  "data": {     "userAssigned": false,     "unassignedBy": "ryoung",     "comment": "Unassign"  } }, "operation": "update", "type": "userassignment"
}

Points to Note:

["Geneos"] ["Geneos > Gateway"] ["Technical Reference"]
