Publishing
Publishing data
The Gateway can publish data to external systems using a dynamically loaded adapter.
The following adapters are available:
- HTTP
- Kafka
The adapters are provided as shared objects in the lib64 directory. If you need to run the Gateway from a directory that is not the parent of lib64, ensure that any adapters you are using can be located either via the LD_LIBRARY_PATH environment variable or via the adapter.library.path setting. See publishing > additionalSettings.
When Publishing is enabled, the Gateway streams the following set of data:
- directory information — data about each probe, managed entity, and dataview known to the Gateway.
- metrics — all dataview cells and headlines.
- metadata — severity, snooze, and user assignment information.
You can filter this data using strategies.
HTTP adapter
The Gateway can use the HTTP adapter to publish data to an arbitrary HTTP or HTTPS endpoint. The data is published as POST requests, with a MIME type of application/json.
The HTTP adapter supports TLS encryption and verification of a server's identity, and it can authenticate the Gateway to the server using Basic authentication.
Kafka adapter
The Gateway supports publishing to a Kafka cluster (minimum version 0.9), as a Kafka producer client. This provides a resilient mechanism for publishing and allows other systems to consume Geneos data by implementing a Kafka consumer client. In a Hot Standby configuration, only the active Gateway publishes to the Kafka cluster, and data consumers are isolated from any failover.
The Kafka adapter supports TLS encryption and authentication, and this must be configured both on the Kafka cluster and in the Gateway setup file. See publishing > additionalSettings for more details.
This data is published using JSON-based formats which are described below.
Configuration

publishing
This section defines the parameters controlling publishing to external systems.
publishing > enabled

This setting allows publishing to be enabled or disabled. By default, publishing is turned off; however, if a publishing section exists, it is turned on. This setting allows that behaviour to be overridden, so that publishing is disabled but the configuration is saved for later use.
Mandatory: No
Default: true
Note
Publishing and Gateway Hub can be enabled at the same time. Errors generated by attempting to publish using Publishing do not affect the operation of Gateway Hub. Similarly, errors generated by attempting to publish using Gateway Hub do not affect the operation of Publishing.
publishing > adapter

The adapter section allows the adapter to be selected and holds settings specific to the selected adapter.
The following adapters are available:
- HTTP
- Kafka
publishing > adapter > HTTP
Settings for the HTTP adapter.
publishing > adapter > HTTP > Url

Specify the HTTP or HTTPS endpoint that the Gateway should publish data to.
publishing > adapter > HTTP > Authentication

Specify the form of authentication to use. The following options are available:
- basic
- none
publishing > adapter > HTTP > Authentication > basic
Specify a username and password.
publishing > adapter > HTTP > Verification

You can enable the verification option to check that the TLS certificates supplied by the target endpoint include trusted root certificates in their trust chain.
If no root certificates are provided, the Gateway uses the host's trusted certificates for verification.
publishing > adapter > HTTP > Verification > On
Enable or disable verification.
publishing > adapter > HTTP > Verification > On > Root certificates

Specify the trusted root certificates to use for verification.
You can provide either a pemString or the path to a pemFile.
publishing > adapter > kafka
Settings for the Kafka adapter.
publishing > adapter > kafka > topicPrefix

Specifies the first part of the topic names under which data is published, and the first part of the topic name to which the Kafka adapter subscribes for requests, such as metrics snapshots. By default, the publishing topics used include geneos-probes, geneos-raw.table (and eight other names beginning geneos-), and the request topic is geneos-requests. This setting allows the topics to be changed to, for example, my-test-probes, my-test-raw.table, and so on.
Mandatory: No
Default: geneos-
publishing > adapter > kafka > brokerList

Specifies a comma-separated list of host/port pairs used to establish the initial connection to the Kafka cluster. The adapter makes use of all servers irrespective of which servers are specified here for bootstrapping; this list only affects the initial hosts used to discover the full set of brokers. See the description of bootstrap.servers in the Kafka documentation for more information.
Mandatory: No
Default: localhost:9092
publishing > additionalSettings
This advanced configuration option allows additional settings to be specified. These settings either control the way the adapter is loaded or are interpreted by the adapter itself.
Settings are entered using the same syntax as a Java properties file, each line specifies a key-value pair.
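For example, an additionalSettings block might look like the following. The library path, Gateway name, and client id shown here are illustrative values, not defaults:

```
adapter.library.path=/opt/geneos/gateway/lib64
adapter.gateway.name=DevGateway
kafka.client.id=geneos-publisher
```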
Settings with the prefix adapter. control the way the adapter is loaded. The settings available are:
- adapter.library.path — allows you to specify the location of the shared library that implements the adapter. By default, this is loaded from the lib64 sub-directory of the Gateway's working directory; you can specify an alternative location, either as a path relative to the Gateway's working directory or as an absolute path.
- adapter.gateway.name — used with the topic prefix to construct a unique Kafka consumer group id for the Gateway. The default group id is <topicprefix><gatewayname>, e.g. geneos-DevGateway. You can override the Gateway name part with this setting, but there is generally no need to do so.
The Kafka adapter uses the librdkafka library to connect to Kafka and supports transparent pass-through of settings to the library. Any global setting (other than callbacks) defined in the librdkafka documentation can be used by prefixing its name with kafka.. For example, the key kafka.client.id refers to the librdkafka option documented as client.id.
Note
The Kafka adapter discards incoming messages when the librdkafka publishing queue is full. When publishing is resumed, the adapter reports the number of discarded messages. The length of the queue can be modified by including a kafka.queue.buffering.max.messages setting here.
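For example, to enlarge the publishing queue (the value shown is illustrative):

```
kafka.queue.buffering.max.messages=500000
```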
SASL PLAIN authentication
To use SASL PLAIN for authentication on a normal connection, use the following template, replacing the username and password with your credentials:
kafka.sasl.mechanism=PLAIN
kafka.security.protocol=SASL_PLAINTEXT
kafka.sasl.username=<username>
kafka.sasl.password=<password>
Note
Username and password are cleartext.
SSL encryption
To use SSL encryption for the connection, the SSL protocol must be enabled and the certificate used to sign the Kafka brokers’ public keys must be trusted by the Kafka adapter. Use the following template, replacing the location with your credentials:
kafka.security.protocol=ssl
kafka.ssl.ca.location=<location of CA certificate PEM file>
SSL encrypted connection with SASL PLAIN authentication
To use SSL encryption with SASL PLAIN for authentication, use the following template, replacing the location, username, and password with your credentials:
kafka.sasl.mechanism=PLAIN
kafka.security.protocol=SASL_SSL
kafka.ssl.ca.location=<location of CA certificate PEM file>
kafka.sasl.username=<username>
kafka.sasl.password=<password>
Note
Username and password are cleartext.
SSL encryption with SSL authentication
To use SSL encryption with SSL authentication, the Kafka adapter must be able to present a certificate which is signed using a certificate trusted by the broker(s). Use the following template, replacing the locations with your credentials:
kafka.security.protocol=ssl
kafka.ssl.ca.location=<location of CA certificate PEM file>
kafka.ssl.certificate.location=<location of client certificate>
kafka.ssl.key.location=<location of private key for client certificate>
SSL encryption with SCRAM-SHA-256 authentication
To use SSL encryption with SCRAM-SHA-256 authentication, use the following template:
kafka.security.protocol=sasl_ssl
kafka.sasl.mechanism=SCRAM-SHA-256
kafka.sasl.username=xxxxxxx
kafka.sasl.password=xxxxxxx
kafka.ssl.ca.location=/opt/ITRS/cert/cert2/ca-cert/CA_cert.pem
kafka.ssl.certificate.location=/opt/ITRS/cert/cert2/server.pem
kafka.ssl.key.location=/opt/ITRS/cert/cert2/server.key
Use Kerberos to connect to Kafka on Linux 64-bit Gateways
Linux 64-bit Gateways can connect to Kafka using Kerberos. Use the following template:
kafka.sasl.kerberos.service.name=<service name>
kafka.security.protocol=SASL_PLAINTEXT
kafka.sasl.kerberos.keytab=<keytab file>
kafka.sasl.kerberos.principal=<Kerberos principal>
The default value of kafka.sasl.kerberos.service.name is kafka. Use this value unless the Service Principal Names for the Kafka cluster have been set up with a different service name.
Use your credentials for the kafka.sasl.kerberos.keytab and kafka.sasl.kerberos.principal fields. The keytab file must encode the password for the username specified as kafka.sasl.kerberos.principal. You can generate the keytab file using the Unix utility ktutil or the Windows Server built-in utility ktpass.exe.
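As an illustration only, a keytab could be created with MIT ktutil as follows; the principal, encryption type, and output path are placeholders that must match your Kerberos environment:

```
ktutil
addent -password -p geneos@EXAMPLE.COM -k 1 -e aes256-cts-hmac-sha1-96
wkt /opt/geneos/geneos.keytab
quit
```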
Note
If you are using Kerberos to connect to Kafka, and the Gateway's working directory is not the package directory, you must set the SASL_PATH environment variable. It must point to the sasl2 directory inside the lib64 directory of the Gateway package. Ensure you have also set the LD_LIBRARY_PATH environment variable or the adapter.library.path setting to locate the required libraries.
Test your connection to Kerberos

Before you test Kerberos with your Gateway, use the commands kinit and klist to check that the keytab file can connect to your Kerberos server and that a valid ticket is returned. Use the following template for kinit, replacing the service name, broker hostname, keytab file, and Kerberos principal with your credentials:
kinit -S <service name>/<broker hostname> -k -t <keytab file> <Kerberos principal>
Mandatory: No
publishing > secureSettings
This section allows settings that cannot be set in cleartext, such as passwords, to be encrypted in the Gateway setup file.
publishing > secureSettings > setting > name

The name of the secure setting.
Mandatory: Yes
publishing > secureSettings > setting > value
The value of the secure setting.
This can either be:
- stdAES — AES 256-bit encryption. The password is entered in the Set password box.
- var — a reference to a variable that provides a password in the Operating Environment. References to invalid variables, or to variables that are not AES-encrypted, are treated as errors.
Strategies

Schedule
Setting | Description |
---|---|
Every | Number of intervals between publishing operations. |
Interval | Size of the interval. |
Starting at | Specify a starting time for the schedule. |
Timezone | Specify the timezone used to set the schedule. |
Strategy Group

Strategies may be grouped for organisational purposes. This has no effect on their application, but is useful for categorising strategies and making them easier to find in the navigation tree.
You must specify a name when creating a strategy group.
kafkacat
kafkacat is an open source utility written and maintained by the author of the librdkafka library used by Geneos. kafkacat is shipped with Linux 64-bit Gateways to ease the testing of connecting to your Kafka infrastructure. For more information about kafkacat, see https://github.com/edenhill/kafkacat.
To ensure that kafkacat uses the same Kafka, SSL, and SASL libraries as the Gateway, kafkacat must be run with the following environment variables:
- LD_LIBRARY_PATH — this must point at the lib64 library directory supplied as part of the Gateway bundle.
- SASL_PATH — this must point at the sasl2 directory in the Gateway lib64 directory.
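For example, assuming the Gateway package is installed in /opt/geneos/gateway (an illustrative path, not a default), the environment can be configured and the broker metadata listed as a quick connectivity check:

```
export LD_LIBRARY_PATH=/opt/geneos/gateway/lib64
export SASL_PATH=/opt/geneos/gateway/lib64/sasl2
kafkacat -b localhost:9092 -L
```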
Enable Kafka debug logging
To enable Kafka debug logging, follow these steps:
- Open your GSE.
- Click Operating environment in the Navigation tree.
- Select the Debug tab.
- Open the drop-down list next to Publishing, and tick adapter.
- Click Publishing in the Navigation tree.
- Select the Advanced tab.
- In Additional settings, add kafka.debug= followed by the debug categories. We recommend setting it to topic,protocol. For a full set of debug categories, see the librdkafka documentation. The queue and all categories are extremely verbose.
- Click Save current document.
Standardised formatting
Some data can vary in format from dataview to dataview or even from row to row or column to column. Standardised Formatting allows normalisation of this data to a standard format for downstream systems.
Currently, only the dateTime type is supported. Date-times, dates, and times are formatted to ISO 8601 format, adjusted to UTC. Cells containing only times are assumed to be for the current day and are formatted as a date-time.
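As an illustration of this normalisation (a sketch, not Gateway code), a parsing format such as "%a %b %d %H:%M:%S %Y" (used by the system-supplied formats below) can be applied with standard strptime codes and the result re-emitted in ISO 8601 form. For simplicity, this sketch assumes the input is already in UTC:

```python
from datetime import datetime, timezone

def to_iso8601(value: str, fmt: str) -> str:
    """Parse a raw cell value and re-emit it in ISO 8601 (UTC) form."""
    parsed = datetime.strptime(value, fmt)
    # A real formatter would adjust from the source timezone ("region")
    # to UTC; this sketch assumes the input is already UTC.
    return parsed.replace(tzinfo=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

print(to_iso8601("Fri May 27 12:54:29 2016", "%a %b %d %H:%M:%S %Y"))
# → 2016-05-27T12:54:29Z
```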
There are two types of standardised formatting: System supplied formatting and User specified formatting.
System supplied formatting
The Gateway provides a set of converters for cells in Geneos plugins.
These are defined in a file located in <gateway resources directory>/standardisedformats/formats.json. By default, the gateway resources directory is <gateway directory>/resources, but you can modify this using a Gateway command-line option.
The data in this file is in JavaScript Object Notation (JSON).
It is recommended that this file be left unchanged. However, in unusual circumstances it may be preferable to update the formats.json file rather than generate a set of user-specified formatting.
The file has a number of entries under the “formats” label at the start. Taking one entry as an example, it breaks down as follows:
{
"formats" : [
{
"type" : "datetime",
"format" : "%a %b %d %H:%M:%S %Y",
"region" : "gateway",
"plugin" : "Gateway-gatewayData",
"cell" : { "row" : "licenseExpiryDate", "column" : "value"}
},
...
]}
Note
Some “formats” entries may end with a special format called “raw”. This prevents errors from being logged where none of the formats were able to process the input.
Element | Description |
---|---|
type | Data type of the data being formatted. Currently the only type is "datetime". |
format | Specifies the expected format of the data. See the following section for a description of the time parsing codes. Where you need several of these to accommodate different servers that may generate dates in differing formats, use the "formats" tag defined below. |
formats | Where a variable may be in a number of formats, an array of alternative formats may be specified. |
region | Timezone information for the formatter. This should be "gateway" or "netprobe", depending on the location of the sampler. The formatter uses the timezone of the specified component. Where the Netprobe timezone is not known, it falls back to the Gateway's timezone. |
plugin | The name of the plugin this formatter applies to. The formatter applies to all dataviews created by the plugin unless you restrict this. |
dataview | Only applies to variables within the named dataview. |
cell | Targets the formatting of a specific row/column pair. For more flexibility, you can target a range of columns. |
cols | An array of regular expressions used to target columns within a dataview. For example, ["applied", "changed"] targets all column headings containing "applied" or "changed" as part of the column name, e.g. "appliedDate". |
headlines | An array of regular expressions used to target headlines within a dataview. Works as cols above and can be used in conjunction with it. |
User specified formatting
Certain dataviews contain cells where the format of the cell cannot be determined. This may be because the data is generated by a user script, or it may be because a Geneos plugin is extracting data whose format is determined by the environment of the server on which the plugin is running.
The standardised formatting tab on samplers allows the user to describe the originating format of variables (cells and headlines). This allows them to be transformed into standardised format. You can also specify which cells in the sampler the format is applied to by using the dataview name and the applicability sections.
Configuration

samplers > sampler > publishing > standardisedFormatting
See Standardised formatting in Publish to Kafka.
samplers > sampler > publishing > standardisedFormatting > dataviews > dataview

Provides a list of standardised formatting variables to apply to one or more dataviews in the sampler. When the name of the dataview is set, the variable definitions are restricted to the named dataview. If the name is left unset, the variable definitions apply to all dataviews belonging to the sampler.
Mandatory: No
Default: Not set
samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > Name

Name of the dataview.
samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable
Variable definition specifying type and applicability of variable.
samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type

Specifies the type of the variable. Currently only dateTime is supported.
samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type > dateTime
Specifies that the variable is a date-time (includes dates and times which are assumed to be for the current day).
samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type > dateTime > formats > format
Specifies the expected format of the data. See the following section for a description of the time parsing codes. There can be several of these to accommodate different servers that may generate dates in differing formats.
Mandatory: Yes
samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type > dateTime > exceptions > exception
Specifies exceptions to Standardised Formatting. If the text specified here matches the value of a published cell, no formatting is applied and no error is logged. This is for instances where dataviews have invalid values such as a blank string ("") or “N/A” as a date.
For data published via Publishing using Kafka, the data is published unchanged.
For data published to Gateway Hub, any exceptions are published as “N/A”. This allows the Gateway Hub to recognise and process the data.
Mandatory: No
samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type > dateTime > ignoreErrorsIfFormatsFail

If set to true, and none of the formats can translate the data, then the errors generated are suppressed. This can be used to specify a NULL format that overrides a system-provided one with no formatting. This can be useful, for example, if you do not care about formatting these values and the output is in an unexpected format due to locale.
Mandatory: No
Default: false
samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type > dateTime > overrideTimezone

The default timezone is that of the dataview: if the Probe has a timezone set, the date is interpreted in that timezone; if not, the timezone matches the Gateway's timezone. This setting allows you to explicitly set the timezone from which the data is being received.
Mandatory: No
samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > applicability
Allows matching of formats to dataview variables.
samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > applicability > headlines
Allows you to match variables to headline variables.
samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > applicability > headlines > regex
A regular expression to match against the headline names.
All values within the matching headlines will have the formatter applied for publishing.
Mandatory: No
Expression | Effect |
---|---|
date | Matches all headlines containing date. |
date$ | Matches all headlines ending in date. |
^date$ | Matches the headline date. |
samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > applicability > columns > regex
A regular expression to match against the column names.
All values within the matching columns will have the formatter applied for publishing.
Mandatory: No
Expression | Effect |
---|---|
date | matches all columns containing date |
date$ | matches all columns ending in date |
^date$ | matches the column date |
samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > applicability > cells

Maps to a specific row and column. These mappings are good for dataviews made from name-value pairs, and they take priority on lookup over column regular expressions. As these are specific, the row and column names are not regular expressions.
Mandatory: No
operatingEnvironment > debug > DirectoryManager > showStandardisedFormattingErrors

Turns on error reporting for errors found while processing inputs to standardised formats. If this is not enabled and errors are found, a summary of the number of errors is logged at 10-minute intervals.
Type | Format |
---|---|
Date-time | 2012-07-27T19:12:00Z |
Date-time with microseconds | 2012-07-27T19:12:00.123456Z |
Date | 2012-07-27 |
Kafka message formats overview

Kafka topics, keys and messages
A Kafka message has the following components:
- Topic — A string identifying the category of message, for example “raw.headlines”. Messages within a topic have similar schemas (in fact, apart from metrics messages, all messages in a topic have the same schema). The message topic is prefixed with a configurable string (the default is geneos-).
- Key — A string identifying a subset of a topic that is used to allocate messages to partitions within the Kafka topic. The directory message topics have low traffic and are intended to use a single partition; they use an empty string as a key.
- Payload — The actual data to be consumed by a client system. This represents the message body. The payload is a JSON object, described in detail at Message payloads below. You can download a JSON schema for publishing messages by clicking the following link: Kafka schema.
Message key format
In principle, the message key is an opaque string used to allocate messages to partitions in Kafka. Applications needing to filter messages should ignore the key and use the "target" object within the payload.
In practice, the message key is roughly equivalent to a reversed XPath delimited by dots:
<dataview>.<sampler>.<type>.<managed entity>.<probe>.<gateway>
For example (note that if a component name is not applicable, it will be omitted, but the following dot will not):
CPU.CPU..myEntity.theProbe.Ad-hoc GW
....theProbe.Ad-hoc GW
The number of name components (and hence the number of dots) is the same for all metrics and metadata messages.
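For a consumer that nevertheless needs to split a key for the metrics and metadata topics (where the component count is fixed), a minimal sketch follows; note that this naive split breaks if a component name itself contains a dot, which is one more reason to prefer the "target" object in the payload:

```python
def parse_metrics_key(key: str) -> dict:
    """Split a metrics/metadata message key of the form
    <dataview>.<sampler>.<type>.<managed entity>.<probe>.<gateway>.
    An empty component means that name was not applicable."""
    fields = ["dataview", "sampler", "type", "managedEntity", "probe", "gateway"]
    parts = key.split(".")
    if len(parts) != len(fields):
        raise ValueError("unexpected key format: %r" % key)
    return dict(zip(fields, parts))

print(parse_metrics_key("CPU.CPU..myEntity.theProbe.Ad-hoc GW")["gateway"])
# → Ad-hoc GW
```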
Message payloads
The examples in this section are pretty-printed: the messages published by the Gateway have no redundant whitespace.
Directory messages
Messages on the probes, managedEntities, and dataviews topics provide information about the corresponding items in the Gateway directory hierarchy. They are published as these items are created and deleted, and as their run-time attributes are updated.
Messages in this category are also retransmitted when publishing setup is reconfigured (for example to change the Kafka topic prefix).
Probes

Example messages
A message published when a Netprobe comes up:
{ "data": { "timestamp": "2015-07-01T16:18:23.263Z", "name": "theProbe", "gateway": "Ad-hoc GW", "osType": "Linux", "HostName": "linux-dev", "Port": "7036", "ConState": "Up", "OS": "Linux 2.6.18-371.4.1.el5", "Version": "GA3.1.1-150515" }, "operation": "update"
}
A message describing a virtual probe:
{ "data": { "timestamp": "2015-10-20T09:15:04.732Z", "name": "vp", "gateway": "Ad-hoc GW", "osType": "Virtual", "ConState": "Up" }, "operation": "update"
}
Points to Note:
- The properties whose names start with capital letters are configuration (‘HostName’, ‘Port’) or run-time (‘ConState’, ‘OS’, ‘Version’) parameters. Of these, only ‘ConState’ is applicable to virtual probes.
- When a connection is established to a probe, several update messages will be published, one for each run-time parameter.
Managed Entities

Example message
{ "data": { "timestamp": "2016-05-25T15:02:18.255Z", "name": "theEntity", "probe": "theProbe", "gateway": "Ad-hoc GW", "attributes": { "Team": "Middleware", "Purpose": "Testing" } }, "operation": "update"
}
Note
An update message is published whenever a managed entity attribute is changed, added or deleted.
Dataviews

{
  "data": {
    "timestamp": "2016-05-27T12:51:16.009Z",
    "dataview": "CPU",
    "sampler": "CPU",
    "pluginName": "CPU",
    "type": "Default Samplers",
    "managedEntity": "basics",
    "probe": "theProbe",
    "gateway": "Ad-hoc GW",
    "topicSuffix": "CPU.CPU.Default Samplers.basics.theProbe.Ad-hoc GW",
    "availableTopics": [
      "enriched.table.CPU.CPU.Default Samplers.basics.theProbe.Ad-hoc GW",
      "enriched.headlines.CPU.CPU.Default Samplers.basics.theProbe.Ad-hoc GW",
      "raw.table.CPU.CPU.Default Samplers.basics.theProbe.Ad-hoc GW",
      "raw.headlines.CPU.CPU.Default Samplers.basics.theProbe.Ad-hoc GW"
    ]
  },
  "operation": "create"
}
Note
Update messages are only sent on the dataviews topic in response to a client request or a change in Publishing configuration.
Metrics messages

Raw and enriched forms of metrics data
Metrics data is available in two forms:
- Raw form — includes only the rows and cells provided by the data source (normally a Netprobe, but possibly a Gateway plugin, or, in the case of Gateway sharing, an exporting Gateway.) In this case the published sample time is provided by the data source.
- Enriched form — also includes additional rows and cells configured on the publishing Gateway and (usually) populated by rules. In this case the published sample time is the time on the publishing Gateway that the dataview was last updated (either by receiving an update or by processing a rule.)
When new data for a row is provided by the data source and then values in that row are computed by Gateway rules, an update message will be published for each of these changes as they occur: first the change from the data source and then each change made by a rule.
Headline data
- Raw headline messages are published on the topic raw.headlines.
- Enriched headline messages are published on the topic enriched.headlines.
A raw message:
{ "data": { "sampleTime": "2016-05-27T12:54:29.685Z", "target": { "gateway": "Ad-hoc GW", "probe": "theProbe", "managedEntity": "basics", "type": "Default Samplers", "sampler": "CPU", "dataview": "CPU", "filter": { "osType": "Linux", "pluginName": "CPU" } }, "samplingStatus": "OK", "numOnlineCpus": "2", "loadAverage1Min": "0.00", "loadAverage5Min": "0.00", "loadAverage15Min": "0.00", "numPhysicalCpus": , "HyperThreadingStatus": "DISABLED", "numCpuCores": "2" }, "operation": "create"
}
This message represents the headlines of a dataview.
The ’target’ property identifies the dataview. The ‘filter’ property of the target provides additional information which may help a consumer of the message to determine which properties should appear in the rest of the message. For example, the Geneos CPU plugin on Windows publishes a dataview that has different headlines and column names from the CPU plugin on Linux.
The properties that follow ’target’ are the headlines for the dataview. ‘samplingStatus’ is always present as the next property after ’target’; in some cases it is the only headline present.
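A message consumer can use this layout to separate the headline values from the envelope fields. The following sketch (not part of the product) decodes a trimmed-down example payload and keeps everything other than ‘sampleTime’ and ‘target’:

```python
import json

def extract_headlines(payload: dict) -> dict:
    """Return only the headline name/value pairs from a headlines message,
    skipping the envelope fields ("sampleTime" and "target")."""
    data = payload["data"]
    return {k: v for k, v in data.items() if k not in ("sampleTime", "target")}

message = json.loads("""
{ "data": { "sampleTime": "2016-05-27T12:54:29.685Z",
            "target": { "gateway": "Ad-hoc GW", "dataview": "CPU" },
            "samplingStatus": "OK",
            "numOnlineCpus": "2" },
  "operation": "create" }
""")
print(extract_headlines(message))
# → {'samplingStatus': 'OK', 'numOnlineCpus': '2'}
```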
An enriched message for the headlines of the same dataview:
{ "data": { "sampleTime": "2016-05-27T12:54:29.877Z", "target": { "gateway": "Ad-hoc GW", "probe": "theProbe", "managedEntity": "basics", "type": "Default Samplers", "sampler": "CPU", "dataview": "CPU", "filter": { "osType": "Linux", "pluginName": "CPU" } }, "samplingStatus": "OK", "numOfflineCpus": "0", "numOnlineCpus": "2", "loadAverage1Min": "0.00", "loadAverage5Min": "0.00", "loadAverage15Min": "0.00", "numPhysicalCpus": , "HyperThreadingStatus": "DISABLED", "numCpuCores": "2" }, "operation": "update"
}
This message represents the headlines of the same dataview. The differences from the “raw” message are:
- computed headlines are included: in this example there is a computed headline called “numOfflineCpus” (a meaningless number included for the sake of this example);
- the ‘sampleTime’ is the time the Gateway generated the message.
Table (cell) data
- Raw table messages are published on the topic raw.table.
- Enriched table messages are published on the topic enriched.table.
A raw message:
{ "data": { "sampleTime": "2016-05-27T12:59:56.085Z", "target": { "gateway": "Ad-hoc GW", "probe": "theProbe", "managedEntity": "basics", "type": "Default Samplers", "sampler": "CPU", "dataview": "CPU", "filter": { "osType": "Linux", "pluginName": "CPU" } }, "name": "Average_cpu", "row": { "type": "", "state": "", "clockSpeed": "", "percentUtilisation": "0.97 %", "percentUserTime": "0.53 %", "percentKernelTime": "0.18 %", "percentWaitTime": "0.25 %", "percentIdle": "99.03 %" } }, "operation": "update"
}
This message represents a row from a dataview. As in the case of headline messages, the ’target’ property identifies the dataview and provides additional information to help determine which properties should appear in the rest of the message.
The ’name’ property identifies the row; the ‘row’ property contains the cell data. Because this is a “raw” message, no computed cells are shown and the ‘sampleTime’ property is the time the Netprobe published the sample.
An enriched message for the same table row:
{ "data": { "sampleTime": "2016-05-27T12:59:56.283Z", "target": { "gateway": "Ad-hoc GW", "probe": "theProbe", "managedEntity": "basics", "type": "Default Samplers", "sampler": "CPU", "dataview": "CPU", "filter": { "osType": "Linux", "pluginName": "CPU" } }, "name": "Average_cpu", "row": { "workDone": "3194.4816", "type": "", "state": "", "clockSpeed": "", "percentUtilisation": "0.97 %", "percentUserTime": "0.53 %", "percentKernelTime": "0.18 %", "percentWaitTime": "0.25 %", "percentIdle": "99.03 %" } }, "operation": "update"
}
This message represents the same row from the same dataview. The differences from the “raw” message are:
- computed columns are included: in this case the cell data includes a computed column called “workDone” (a meaningless number computed for the sake of this example);
- the ‘sampleTime’ is the time the Gateway generated the message.
Sample times for imported data
Where Gateway sharing is used, the “raw form” messages for imported metrics data may include rows, columns or headlines added by the exporting Gateway. For each update, the sample time will reflect the original source of the data.
That is, when a new dataview sample is provided by a Netprobe, the exporting Gateway will first forward that sample, with sample time provided by the Netprobe and then, if a rule is triggered, will send a further update for computed data using its own time for the sample time. The raw form published by the importing Gateway will therefore include sample times from both the Netprobe and the exporting Gateway. Although all timestamps are UTC, if the operating system clocks are not synchronised, the sample times in the “raw form” published messages may be inconsistent. For example, computed updates may be published with earlier sample times than the data from which they are calculated.
Points to Note:
- All dataview cells and headlines are published as strings. Although data may be provided as strings or numbers by plugins, some plugins may vary the data type of a column from one row to the next or from one sample to the next. If the Gateway were to attempt to use the current data type of a cell when formatting JSON messages, the implied schema might vary from one message to the next.
- Certain cells are marked as dateTime cells by the Gateway, using Standardised Formatting. If a cell is marked as a dateTime cell, its contents are reformatted to ISO 8601 before being published, for example “1970-02-01T09:00:00.123456”. The fractional part is optional and may extend to microsecond precision depending on the data. A dateTime can also be sent as a pure date, for example “1970-02-01”.
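As an illustration of consuming these values, a subscriber can parse published dateTime cells with Python's standard library. This is a minimal sketch under the formats described above; the function name is ours, not part of Geneos:

```python
from datetime import datetime, date

def parse_datetime_cell(cell: str):
    """Parse a published dateTime cell.

    The Gateway emits ISO 8601 values such as "1970-02-01T09:00:00.123456"
    (fractional seconds optional, up to microsecond precision) or a pure
    date such as "1970-02-01".
    """
    # fromisoformat accepts both forms, with or without a fractional part
    parsed = datetime.fromisoformat(cell)
    if "T" not in cell:
        # A pure date: return only the date part
        return parsed.date()
    return parsed

print(parse_datetime_cell("1970-02-01T09:00:00.123456"))
print(parse_datetime_cell("1970-02-01"))
```

Note that `datetime.fromisoformat` handles the optional fraction transparently, so the same call covers second- and microsecond-precision cells.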
Metadata messages Copied
These messages provide information about severity, snooze status and user assignment of data items at all levels of the directory hierarchy.
Severity messages Copied
Severity messages are published on the topic metadata.severity.
Here are two of the messages generated when the status of a dataview headline changed from “WARNING” to “OK”:
{ "data": { "timestamp": "2016-05-25T15:02:18.357Z", "target": { "gateway": "Ad-hoc GW", "probe": "vp", "managedEntity": "Gateway", "sampler": "probeData", "type": "", "dataview": "probeData", "headline": "samplingStatus", "filter": { "osType": "Virtual", "pluginName": "Gateway-probeData" } }, "severity": "OK", "active": true, "snoozed": false, "snoozedParents": 2, "userAssigned": false, "value": { "cell": "OK" } }, "operation": "update"
}
{ "data": { "timestamp": "2016-05-25T15:02:18.358Z", "target": { "gateway": "Ad-hoc GW", "probe": "vp", "managedEntity": "Gateway", "filter": { "osType": "Virtual" } }, "severity": "OK", "active": true, "snoozed": false, "snoozedParents": 2, "userAssigned": false }, "operation": "update"
}
Here are some examples of severity messages for dataview items resulting from severity change:
{ "data": { "timestamp": "2016-07-19T14:51:15.451Z", "target": { "gateway": "Ad-hoc GW", "probe": "vp", "managedEntity": "Gateway", "sampler": "gateway", "type": "", "dataview": "gateway", "row": "databaseHost", "column": "value", "filter": { "osType": "Virtual", "pluginName": "Gateway-gatewayData" } }, "severity": "OK", "active": true, "snoozed": false, "snoozedParents": 2, "userAssigned": false, "value": { "cell": "dbhost" } }, "operation": "update" }
{ "data": { "timestamp": "2016-07-19T14:51:15.446Z", "target": { "gateway": "Ad-hoc GW", "probe": "vp", "managedEntity": "m", "sampler": "gw", "type": "", "dataview": "gw", "row": "releaseAge", "column": "value", "filter": { "osType": "Virtual", "pluginName": "Gateway-gatewayData" } }, "severity": "CRITICAL", "active": true, "snoozed": false, "snoozedParents": 2, "userAssigned": false, "value": { "cell": "4 days", "number": 4 } }, "operation": "create" }
{ "data": { "timestamp": "2016-07-18T15:47:34.288Z", "target": { "gateway": "Ad-hoc GW", "probe": "vp", "managedEntity": "m2", "sampler": "gw", "type": "", "dataview": "gw", "row": "licenseExpiryDate", "column": "value", "filter": { "osType": "Virtual", "pluginName": "Gateway-gatewayData" } }, "severity": "OK", "active": true, "snoozed": false, "snoozedParents": 2, "userAssigned": false, "value": { "cell": "2021-01-31T00:00:00Z", "dateTime": "2021-01-31T00:00:00Z" } }, "operation": "update"
}
Points to Note:
- A severity message is generated for each data item in the Gateway directory hierarchy whose status is changed.
- The ‘target’ property identifies the data item to which the message refers. Only the path components that apply to that item are present.
- For example, the ‘type’ property is present in the first example, even though it is empty, but none of the properties from ‘sampler’ onwards are present in the second example.
- ‘create’ messages are never published for severity metadata, because all data items are created in active state and with undefined severity.
- ‘delete’ messages are only published if the item being deleted is inactive or has a severity other than “UNDEFINED”.
- The “value” field is made up of “cell”, which is the original cell value. If a number is detected, then the optional “number” field is also present. If the value is identifiable as a date-time, then the “dateTime” field is present. As with all published date-times, this is formatted to ISO 8601. This provides metadata to downstream systems that require type information.
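A downstream consumer can use these optional fields to recover the most specific representation of the cell. A minimal sketch of that logic (the helper name is ours):

```python
import json

def typed_value(value_obj: dict):
    """Return the most specific representation of a severity message's
    "value" object: the dateTime field if present, else the number field,
    else the raw cell string."""
    if "dateTime" in value_obj:
        return value_obj["dateTime"]
    if "number" in value_obj:
        return value_obj["number"]
    return value_obj["cell"]

# Fragment of the "releaseAge" severity message shown above
msg = json.loads('{"severity": "CRITICAL", "value": {"cell": "4 days", "number": 4}}')
print(typed_value(msg["value"]))
```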
Snooze messages Copied
Snooze messages are published on the topic metadata.snooze.
Here are the messages generated when a managed entity is snoozed and unsnoozed:
{ "data": { "timestamp": "2016-05-27T14:51:10.000Z", "target": { "gateway": "Ad-hoc GW", "probe": "vp", "managedEntity": "Gateway", "sampler": "gatewayLoad", "type": "", "filter": { "osType": "Virtual" } }, "snoozed": { "snoozed": true, "snoozedBy": "ActiveConsole1", "comment": "Simple example", "period": "Manual" } }, "operation": "update"
}
{ "data": { "timestamp": "2016-05-27T14:51:10.000Z", "target": { "gateway": "Ad-hoc GW", "probe": "vp", "managedEntity": "Gateway", "sampler": "gatewayLoad", "type": "", "filter": { "osType": "Virtual" } }, "snoozed": { "snoozed": false, "unsnoozedBy": "ActiveConsole1" } }, "operation": "update"
}
Here is an example of snoozing a headline, using some more complex options:
{ "data": { "timestamp": "2016-05-27T14:54:51.000Z", "target": { "gateway": "Ad-hoc GW", "probe": "vp", "managedEntity": "Gateway", "sampler": "gatewayLoad", "type": "", "dataview": "gatewayLoad", "headline": "samplingStatus", "filter": { "osType": "Virtual", "pluginName": "Gateway-gatewayLoad" } }, "snoozed": { "snoozed": true, "snoozedBy": "ActiveConsole1", "comment": "Complex options", "period": "Until", "untilSeverity": "OK", "untilTime": "2016-05-27T15:54:51.000Z", "untilValue": "NOTE: Stats collection is disabled" } }, "operation": "update"
}
Points to Note:
- A snooze message is generated only for the data item which is snoozed or unsnoozed, not for its ancestors or descendants.
- The ‘target’ property identifies the data item to which the message refers. Only the path components that apply to that item are present.
- When an item is snoozed, the properties of the ‘snoozed’ object depend on the form of the snooze command used.
- When an item is unsnoozed, the ‘snoozed’ object contains only the ‘snoozed’ boolean.
- Only ‘update’ messages are published for snooze metadata, because, if an item is deleted and recreated, its snooze status will be preserved.
User assignment messages Copied
User assignment messages are published on the topic metadata.userassignment.
Here are the messages generated when a probe is assigned and unassigned:
{ "data": { "timestamp": "2016-05-27T15:09:39.000Z", "target": { "gateway": "Ad-hoc GW", "probe": "vp", "filter": { "osType": "Virtual" } }, "userAssignment": { "userAssigned": true, "assignedTo": "Ann Administrator", "assignedBy": "Joe Bloggs", "comment": "", "period": "Manual" } }, "operation": "update"
}
{ "data": { "timestamp": "2016-05-27T15:10:30.677Z", "target": { "gateway": "Ad-hoc GW", "probe": "vp", "filter": { "osType": "Virtual" } }, "userAssignment": { "userAssigned": false, "unassignedBy": "John Doe" } }, "operation": "update"
}
Here is an example of assigning a table cell, using some more complex options:
{ "data": { "timestamp": "2016-05-27T15:17:15.000Z", "target": { "gateway": "Ad-hoc GW", "probe": "vp", "managedEntity": "Gateway", "sampler": "probeData", "type": "", "dataview": "probeData", "row": "theProbe", "column": "security", "filter": { "osType": "Virtual", "pluginName": "Gateway-probeData" } }, "userAssignment": { "userAssigned": true, "assignedTo": "John Doe", "assignedBy": "Ann Administrator", "comment": "Please set up secure config", "period": "Until a change in value", "untilValue": "INSECURE" } }, "operation": "update"
}
Points to Note:
- The ‘target’ property identifies the data item to which the message refers. Only the path components that apply to that item are present.
- When an item is assigned, the properties of the ‘userAssignment’ object depend on the form of the user assignment command used.
- When an item is unassigned, the ‘userAssignment’ object contains only the ‘userAssigned’ boolean.
- Only ‘update’ messages are published for user assignment metadata, because, if an item is deleted and recreated, its user assignment status will be preserved.
Features common to all types of message Copied
Timestamps Copied
Timestamps are formatted using the following ISO 8601 format: YYYY-MM-DDThh:mm:ss.sssZ. Note that fractional seconds are shown to millisecond precision, and that all timestamps are in UTC. For raw metrics (see Raw and enriched forms of metrics data in the Message Payloads section), the timestamp shown is the sample time reported by the sampler. Otherwise the timestamp, including the sample time shown for enriched metrics, is the time on the Gateway host when the message is formatted.
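A consumer that needs these timestamps as native date-time values can parse them with a fixed format string. A minimal Python sketch (the function name is ours):

```python
from datetime import datetime, timezone

def parse_timestamp(ts: str) -> datetime:
    """Parse a published timestamp of the form YYYY-MM-DDThh:mm:ss.sssZ
    into an aware UTC datetime."""
    # %f accepts the millisecond fraction; the literal Z marks UTC
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)

print(parse_timestamp("2016-05-25T15:02:18.357Z"))
```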
Operation Copied
The possible values for the operation property of the message payload are as follows:
- create
The data item described has been added to the configuration, or, in the case of a table row, re-created with a new set of columns.
- update
The data item has been updated in some way, including, for the headlines topics, the addition or removal of a headline.
- snapshot
The current state of the data item is being sent in response to a client request or a change in Publishing configuration.
- repeat
The current state of the data item is being sent as a result of the use of the “Schedule” publishing strategy.
- delete
The data item described has been removed from the configuration, or, in the case of a table row, is about to be re-created with a new set of columns. When a ‘delete’ message is sent, the properties of the ‘data’ part of the message reflect the last known state of the item.
Request/reply messages Copied
In addition to the publish/subscribe mechanism used for the messages described so far, the Kafka adapter supports a mechanism by which clients can make requests and receive replies.
To send a request using Kafka, the client publishes a request to the Geneos request topic. The name of this topic is normally ‘geneos-requests’; the ‘geneos-’ prefix is the same as the (configurable) prefix used for the publishing topics. The Gateway does not acknowledge the request. All the Gateways that publish to a given Kafka cluster using the same topic prefix will receive and act on the request.
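For illustration, the request topic name and message body can be prepared as follows. This sketch only builds the message; producing it is left to whichever Kafka client library you use, and the function name and default prefix handling are ours:

```python
import json

def build_request(payload: dict, prefix: str = "geneos"):
    """Return the (topic, message bytes) pair for a Gateway request.

    The request topic is the configured topic prefix followed by
    "-requests"; "geneos" is the default prefix. The payload is
    encoded as compact JSON.
    """
    topic = f"{prefix}-requests"
    body = json.dumps(payload, separators=(",", ":")).encode("utf-8")
    return topic, body

topic, body = build_request({"request": "resend-directory"})
print(topic, body)
```

The resulting bytes would then be handed to a Kafka producer, for example `producer.send(topic, body)` in kafka-python (shown here only as an assumption about your client library).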
Resend directory request Copied
A client can request that the Gateway re-send the directory information by sending the following request:
{"request":"resend-directory"}
The Gateway resends all the data for the directory topics. This is sent via the normal Kafka publishing topics.
Snapshot metrics request Copied
A client can request that the Gateway provide a snapshot of selected metrics data by sending a request of the following form:
{"request":"snapshot-metrics","target":{"key":"value","key":"value"},"match":"exact"}
The JSON object provided as the value of “target” specifies the dataviews for which to provide a snapshot.
For example, to request a snapshot of all dataviews from Linux CPU samplers in managed entities with attribute “Region” set to “London” and attribute “Division” set to “FIXED INCOME”, a client could send this request:
{ "request":"snapshot-metrics", "target":{ "attributes":{"Region":"London","Division":"FIXED INCOME"}, "osType":"Linux", "pluginName":"CPU" }, "match":"exact"
}
To request a snapshot of all dataviews from Gateway plugins, one could use this request:
{ "request":"snapshot-metrics", "target":{ "pluginName":"Gateway*" }, "match":"wildcard"
}
The Gateway will send a snapshot of the metrics (raw and enriched, headlines and table data) for all dataviews which match the target specification. This is sent via the normal Kafka publishing topics.
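Such requests can also be built programmatically. A minimal sketch (the helper name and the validation of the match value are ours; the target keys follow the Target syntax section):

```python
import json

def snapshot_metrics_request(target: dict, match: str = "exact") -> str:
    """Build the JSON body of a snapshot-metrics request.

    `target` uses keys such as gateway, probe, osType, managedEntity,
    attributes, sampler, pluginName, and dataview.
    """
    if match not in ("exact", "wildcard"):
        raise ValueError("match must be 'exact' or 'wildcard'")
    return json.dumps(
        {"request": "snapshot-metrics", "target": target, "match": match},
        separators=(",", ":"),
    )

print(snapshot_metrics_request(
    {"attributes": {"Region": "London"}, "pluginName": "CPU", "osType": "Linux"}
))
```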
Target syntax Copied
Keys and values are all case-sensitive. The following keys are supported:
- gateway — Gateway name.
- probe — Probe name.
- osType — Probe operating system type, e.g. “Linux”, “Windows”. To select virtual probes, use “Virtual”.
- managedEntity — Managed entity name.
- attributes — Collection of name/value pairs of managed entity attributes.
- sampler — Sampler name.
- pluginName — Name of the plugin, e.g. “FKM”.
- dataview — Dataview name.
The osType determines the operating system which the probe is running on. This can be found in the properties of a managed entity that has a running probe.
Active Console identifies the type of operating system based on a numeric value:
- 0 — Unknown
- 1 — Other
- 2 — Windows
- 3 — Solaris
- 4 — Linux
- 5 — HPUX
- 6 — AIX_OS
- 7 — Solaris_x86
Exact vs wildcard matching Copied
If the match key in the snapshot request has the value wildcard, then values may include the wildcard characters ‘*’ and ‘?’. As wildcards, ‘*’ matches zero or more unspecified characters and ‘?’ matches exactly one unspecified character. ‘*’ and ‘?’ can be escaped (so that they match a literal asterisk or question mark) by preceding them with a backslash, ‘\’. Note that to encode a backslash in JSON, it needs to be doubled.
If the match key in the snapshot request has the value exact, then all characters match themselves.
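One way for a client to apply the same matching rules, for example to predict which dataviews a wildcard request will select, is to translate the pattern into a regular expression. This is a sketch of the rules as described above, not the Gateway's own implementation:

```python
import re

def wildcard_to_regex(pattern: str) -> re.Pattern:
    """Translate a snapshot-request wildcard pattern into a regex.

    '*' matches zero or more characters, '?' exactly one; a backslash
    escapes the following character so it matches literally.
    """
    out = []
    i = 0
    while i < len(pattern):
        c = pattern[i]
        if c == "\\" and i + 1 < len(pattern):
            out.append(re.escape(pattern[i + 1]))  # escaped literal
            i += 2
            continue
        if c == "*":
            out.append(".*")
        elif c == "?":
            out.append(".")
        else:
            out.append(re.escape(c))
        i += 1
    return re.compile("^" + "".join(out) + "$")

print(bool(wildcard_to_regex("Gateway*").match("Gateway-sql")))
```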
Snapshot request for events Copied
A client can request that the Gateway provide a snapshot of selected event data.
The request can be for separate snapshots of severity, snooze, and user assign events.
The snapshot response contains the severity, snooze, or userAssign state of all matching target data items (i.e. dataviews and cells), and also the state of their parent data items (i.e. sampler, managed entity, gateway).
The form and syntax required to request the snapshot is very similar to the Snapshot metrics request, with the only difference being the value of the request.
The request values are:
- snapshot-severity
- snapshot-snooze
- snapshot-userassignment
For example, to make a request for severity events use:
{ "request":"snapshot-severity", "target":{ [YOUR key:value PAIRS HERE] }, "match":"exact"
}
Snapshot request for all events and metrics Copied
A client can request that the Gateway provide a snapshot of all event (severity, snooze, and userAssign) and metrics data. To make a request for this data, send a request in the following form:
{ "request":"snapshot-all", "target":{ [YOUR key:value PAIRS HERE] }, "match":"exact"
}
HTTP message formats overview Copied
Message structure Copied
Data published by the HTTP adapter are sent as self-contained JSON objects. All messages are posted to the same user-supplied endpoint. A type field at the top level of the message structure may be used for routing by a user application.
Each message object contains three key-value pairs:
- type — a string indicating what type of data the message describes. Possible values are probe, managedEntity, headline, table, severity, snooze, and userassignment.
- operation — indicates which event triggered the message. Possible values are create, update, delete, and snapshot.
- data — a nested object defining the message payload. This contains at least three key-value pairs:
  - one or more timestamps
  - target — an object which gives the Gateway directory path for the data item that the message applies to
  - at least one further object giving details of the data item or event
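Because every message carries its type and operation at the top level, a receiving application can route messages with a simple dispatcher. A minimal Python sketch (the function name and routing-key format are ours):

```python
import json

def route_message(raw: bytes) -> str:
    """Derive a routing key from an HTTP-adapter message by reading its
    top-level "type" and "operation" fields."""
    msg = json.loads(raw)
    return f'{msg["type"]}/{msg["operation"]}'

# A trimmed-down probe message, as posted by the HTTP adapter
raw = b'{"data":{"timestamp":"2015-07-01T16:18:23.263Z"},"operation":"update","type":"probe"}'
print(route_message(raw))
```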
A JSON schema for the publishing message payloads can be downloaded by clicking the following link: HTTP schema.
The following sections provide further information about the semantics of the different message types. The examples in these sections are pretty-printed: the messages published by Gateway have no redundant whitespace.
Directory messages Copied
Messages with the probe and managedEntity types are published when these directory items are created, deleted, or have their run-time attributes updated. These messages are also retransmitted when the publishing setup is changed, such as changing the destination URL.
Netprobe messages Copied
{ "data": { "timestamp": "2015-07-01T16:18:23.263Z", "target": { "gateway": "Ad-hoc GW", "probe": "theProbe" }, "parameters": { "osType": "Linux", "HostName": "linux-dev", "Port": "7036", "ConState": "Up", "OS": "Linux 2.6.18-371.4.1.el5", "Version": "GA3.1.1-150515" } }, "operation": "update", "type": "probe"
}
Points to Note:
- When a connection is established to a Netprobe, several update messages will be published, one for each run-time parameter.
- The parameters object lists both configuration (HostName and Port) and run-time (osType, ConState, OS, and Version) parameters. Only osType and ConState apply to virtual Netprobes.
Managed Entity messages Copied
{ "data": { "timestamp": "2016-05-25T15:02:18.255Z", "target": { "gateway": "Ad-hoc GW", "probe": "theProbe", "managedEntity": "theEntity" }, "attributes": { "Team": "Middleware", "Purpose": "Testing" } }, "operation": "update", "type": "managedEntity"
}
Points to Note:
- An update message is published whenever a Managed Entity attribute is changed, added, or deleted.
Metrics messages Copied
Metrics messages have a headline or a table type. They contain the rows and cells provided by the data source, such as a Netprobe plugin. They may also include additional rows and cells configured on the publishing Gateway.
Headlines Copied
A headline message represents the headlines in a dataview.
{ "data": { "sampleTime": "2019-01-24T10:41:14.098Z", "netprobeTime": "2019-01-24T10:41:14.098Z", "target": { "gateway": "ExampleGateway", "probe": "vp", "managedEntity": "GatewayInfo", "type": "", "sampler": "Gateway SQL", "dataview": "Fixed", "filter": { "osType": "Virtual", "pluginName": "Gateway-sql" } }, "row": { "samplingStatus": "OK" } }, "type": "headline", "operation": "create"
}
Points to Note:
- There are usually two timestamps in this message:
  - sampleTime — the time when the message was published by the Gateway.
  - netprobeTime — the time reported by the data source. This timestamp is not present if the dataview is comprised only of cells populated by rules.
- The row object contains the value of each headline on the dataview.
- If an update is caused by a rule firing, then an additional key, computedColumn, will be present; its value is the name of the headline that was updated.
Table rows Copied
This message represents the column values for a single row of a dataview.
{ "data": { "sampleTime": "2019-01-24T10:41:14.240Z", "netprobeTime": "2019-01-24T10:41:14.098Z", "target": { "gateway": "ExampleGateway", "probe": "vp", "managedEntity": "GatewayInfo", "type": "", "sampler": "Gateway SQL", "dataview": "Static", "row": "whatever", "filter": { "osType": "Virtual", "pluginName": "Gateway-sql" } }, "row": { "name": "whatever", "fromRule": "rule output", "answer": "17", "approxPi": "3.1416" }, "computedColumn": "fromRule" }, "type": "table", "operation": "update"
}
Points to Note:
- There are usually two timestamps in this message:
  - sampleTime — the time when the message was published by the Gateway.
  - netprobeTime — the time reported by the data source. This timestamp is not present if the dataview is comprised only of cells populated by rules.
- The row object contains the value of each cell in the row identified by the row key in the target object.
- The row name also appears as a column value in the row object, so that the message includes the correct column heading.
- If an update is caused by a rule firing, then an additional key, computedColumn, will be present; its value is the name of the column that was updated.
- If a message concerns a row that is added via the Gateway Setup Editor, then an additional key, computedRow, is present with the Boolean value true.
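Since each table message carries one complete row keyed by the row name in the target object, a consumer can maintain an in-memory copy of a dataview by applying operations in order. A minimal sketch (the class name is ours; delete handling follows the operation semantics described in this document):

```python
import json

class DataviewTable:
    """In-memory copy of one dataview's rows, built from table messages."""

    def __init__(self):
        self.rows = {}  # row name -> {column: value}

    def apply(self, message: dict):
        name = message["data"]["target"]["row"]
        if message["operation"] == "delete":
            # A delete reflects the last known state; drop the row
            self.rows.pop(name, None)
        else:
            # create, update, snapshot, and repeat all carry the full row
            self.rows[name] = message["data"]["row"]

table = DataviewTable()
table.apply(json.loads('''{"data": {"target": {"row": "whatever"},
    "row": {"name": "whatever", "answer": "17"}},
    "operation": "update", "type": "table"}'''))
print(table.rows["whatever"]["answer"])
```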
Event messages Copied
Event messages provide information about the severity, snooze status, and user assignment of data items at all levels of the directory hierarchy.
Severity messages Copied
Severity is set directly by rules on a headline or a table cell, and indirectly by propagation up the Gateway directory hierarchy. Severity messages are published for both types of update.
The following message is an example of a severity change on a cell. This message includes the value of the cell at the time of the change.
{ "data": { "timestamp": "2016-07-18T15:47:34.288Z", "target": { "gateway": "Ad-hoc GW", "probe": "vp", "managedEntity": "m2", "sampler": "gw", "type": "", "dataview": "gw", "row": "licenseExpiryDate", "column": "value", "filter": { "osType": "Virtual", "pluginName": "Gateway-gatewayData" } }, "data": { "severity": "OK", "active": true, "snoozed": false, "snoozedParents": 2, "userAssigned": false, "value": { "cell": "2021-01-31T00:00:00Z", "dateTime": "2021-01-31T00:00:00Z" } } }, "operation": "update", "type": "severity"
}
The following is an example of a propagated severity change on a Managed Entity:
{ "data": { "timestamp": "2018-11-08T10:17:55.169Z", "target": { "gateway": "ExampleGateway", "probe": "theProbe", "managedEntity": "Misc", "filter": { "osType": "Unknown" } }, "data": { "severity": "WARNING", "active": true, "snoozed": false, "snoozedParents": 0, "userAssigned": false } }, "operation": "update", "type": "severity"
}
Points to Note:
- create operation messages are never published for severity events because all data items are created in an active state and with an undefined severity.
- delete operation messages are only published if the item being deleted is inactive or has a severity other than UNDEFINED.
- The value object always has the cell key, which gives the cell value at the time of the severity change. If the value is identified as a number or a date-time, then the number or dateTime key will also be present.
Snooze messages Copied
Snooze messages are generated when data is snoozed or unsnoozed.
The following message is an example of a table cell being snoozed:
{ "data": { "timestamp": "2019-02-07T14:08:52.000Z", "target": { "row": , "column": "security", "gateway": "ExampleGateway", "probe": "vp", "managedEntity": "GatewayInfo", "type": "", "sampler": "Client", "dataview": "Client", "filter": { "osType": "Virtual", "pluginName": "Gateway-clientConnectionData" } }, "data": { "snoozed": true, "snoozedBy": "ryoung", "comment": "Snoozing until severity is OK", "period": "SeverityTo", "untilSeverity": "OK" } }, "operation": "update", "type": "snooze"
}
The following message is an example of a sampler being unsnoozed:
{ "data": { "timestamp": "2019-02-07T14:09:10.291Z", "target": { "gateway": "ExampleGateway", "probe": "vp", "managedEntity": "GatewayInfo", "type": "", "sampler": "Client", "filter": { "osType": "Virtual", "pluginName": "Gateway-clientConnectionData" } }, "data": { "snoozed": false, "unsnoozedBy": "ryoung" } }, "operation": "update", "type": "snooze"
}
Points to Note:
- Only update operation messages are published for snooze events because the snooze status is preserved when an item is deleted and recreated.
- When an item is snoozed, the keys included in the data object depend on the form of the snooze command used.
User assignment messages Copied
A user assignment message is generated when a data item is assigned to or unassigned from a user.
The following message is an example of a Managed Entity being assigned to a user:
{ "data": { "timestamp": "2019-02-07T14:08:23.000Z", "target": { "gateway": "ExampleGateway", "probe": "vp", "managedEntity": "GatewayInfo", "filter": { "osType": "Virtual" } }, "data": { "userAssigned": true, "assignedTo": "ryoung", "assignedBy": "ryoung", "comment": "Assigning this item to myself", "period": "Manual" } }, "operation": "update", "type": "userassignment"
}
The following message is an example of a Managed Entity being unassigned:
{ "data": { "timestamp": "2019-02-07T14:10:02.949Z", "target": { "gateway": "ExampleGateway", "probe": "vp", "managedEntity": "GatewayInfo", "filter": { "osType": "Virtual" } }, "data": { "userAssigned": false, "unassignedBy": "ryoung", "comment": "Unassign" } }, "operation": "update", "type": "userassignment"
}
Points to Note:
- Only update operation messages are published for user assignment events because the assignment is preserved when an item is deleted and recreated.
- When an item is assigned to a user, the keys included in the data object depend on the form of the user assignment command used.