Withdrawal of the 6.9.2 Release
Unfortunately, due to critical issues identified in version 6.9.2, we have withdrawn it and it is no longer available for download. These issues affected the ability to install or upgrade, but none were security-related. We are working diligently to resolve them and plan to release version 6.9.3 in early May.
What if you've already upgraded?
For customers who have already upgraded to 6.9.2, no immediate action is required, as none of these issues are security-related. Once it's available, you will still be able to upgrade to 6.9.3 as normal. We appreciate your patience and trust as we continue to enhance our software to better serve you. Thank you for your understanding.
Config - Monitoring Clusters
Object type: monitoringcluster
Request URL: /rest/config/monitoringcluster
Example GET
{
"object" : {
"roles" : [
{
"ref" : "/rest/config/role/12",
"name" : "View some, change some"
}
],
"activated" : "1",
"monitors" : [
{
"ref" : "/rest/config/host/10",
"name" : "cisco3"
},
{
"ref" : "/rest/config/host/4",
"name" : "monitored_remotely"
}
],
"name" : "ClusterA",
"nodes" : [
{
"host" : {
"ip" : "192.168.10.20",
"ref" : "/rest/config/host/5",
"name" : "collector1"
},
"slave_port" : "22" // Unused in 6.0
}
],
"id" : "2",
"uncommitted" : "1"
}
}
If id=1, this is the primary monitoring cluster.
DELETEs are blocked if the monitoring cluster is the primary, or if there are any hosts still monitored by this monitoring cluster.
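The two DELETE rules above can be mirrored as a client-side guard. A minimal Python sketch, using only the id and monitors fields seen in the example object (the helper name is our own, and this covers only the two rules stated here, not the extra conditions reported by include_delete_info):

```python
def can_delete_cluster(cluster: dict) -> bool:
    """Return False when a DELETE would be blocked server-side:
    either this is the primary monitoring cluster (id=1), or it
    still has hosts monitored by it (non-empty "monitors")."""
    if str(cluster.get("id")) == "1":
        return False  # primary monitoring cluster is never deletable
    if cluster.get("monitors"):
        return False  # hosts are still monitored by this cluster
    return True

# Using the shape of the example GET object above (id=2, two monitors):
example = {"id": "2", "monitors": [{"ref": "/rest/config/host/10"}]}
print(can_delete_cluster(example))                       # False
print(can_delete_cluster({"id": "1", "monitors": []}))   # False
print(can_delete_cluster({"id": "5", "monitors": []}))   # True
```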
Additional parameters:
order
— can be num_hosts or num_nodes to order the list of monitoring clusters by those columns.
include_delete_info
— if set, will return extra fields indicating whether a monitoring cluster is not deletable because it is the primary, it is used as a netflow collector, it still has hosts associated with it, or it is license locked.
include_cluster_details
— if set to 1, will include two additional attributes in each monitoring cluster's data - see below.
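As an illustration of how these parameters combine on the list endpoint, a short Python sketch builds the request URL with the standard library. Only the documented path and parameter names are used; the host portion of the URL is omitted:

```python
from urllib.parse import urlencode

# Documented Request URL for this object type
base = "/rest/config/monitoringcluster"

params = {
    "order": "num_hosts",          # or "num_nodes"
    "include_delete_info": 1,
    "include_cluster_details": 1,
}
url = base + "?" + urlencode(params)
print(url)
# /rest/config/monitoringcluster?order=num_hosts&include_delete_info=1&include_cluster_details=1
```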
The additional details included in the output when include_cluster_details
is used are:
"status": "OK", // Other values: OFFLINE or DEGRADED
"alarms": [ // Contains an array of strings of alarms about this cluster
// Note: Strings will be localised based on language
"Component 'opsview-scheduler' is OFFLINE",
"No response in time"
]
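A hedged sketch of consuming these two attributes, for example to render a one-line summary per cluster. The summarise helper and its formatting are our own convenience, not part of the API; only the status and alarms attribute names come from the output above:

```python
def summarise(details: dict) -> str:
    """Collapse the include_cluster_details attributes into one line."""
    status = details.get("status", "UNKNOWN")  # OK, OFFLINE or DEGRADED
    alarms = details.get("alarms", [])         # localised alarm strings
    if not alarms:
        return status
    return f"{status}: " + "; ".join(alarms)

print(summarise({"status": "DEGRADED",
                 "alarms": ["Component 'opsview-scheduler' is OFFLINE"]}))
# DEGRADED: Component 'opsview-scheduler' is OFFLINE
```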
There is an activated_calculated
column that will be included if order=activated_calculated
is set. This can be one of 4 values (the value 3 is not ordered correctly - this is a known limitation):
0
— Not activated.
1
— Activated cluster.
2
— Always activated (primary).
3
— Disabled due to licensing.
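For reference, these values can be captured in a small lookup table; the dict below is our own convenience, with labels taken from the list above:

```python
# Mapping of activated_calculated values to their meanings
ACTIVATED_CALCULATED = {
    0: "Not activated",
    1: "Activated cluster",
    2: "Always activated (primary)",
    3: "Disabled due to licensing",  # known limitation: 3 does not sort correctly
}

print(ACTIVATED_CALCULATED[2])  # Always activated (primary)
```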
When PUT/POSTing to monitoringclusters, the monitors and roles attributes are not supported. The nodes parameter should be of the form:
nodes: [ { id: 13 }, { id: 20 } ]
where the id
values are the host ids of the collectors. For the primary monitoring cluster, only the first node is used and the remainder are silently ignored.
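A sketch of serialising such a body with the standard library. The host ids 13 and 20 are taken from the form shown above; note that monitors and roles are omitted, since they are not supported on PUT/POST:

```python
import json

# Body for PUT/POST to /rest/config/monitoringcluster: nodes only,
# each entry carrying the host id of a collector.
body = {"nodes": [{"id": 13}, {"id": 20}]}
payload = json.dumps(body)
print(payload)  # {"nodes": [{"id": 13}, {"id": 20}]}
```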
network_topology_enabled
- will be included in the output if the ov-network-topology feature is enabled. Will be 0 (disabled) or 1 (enabled) depending on whether this cluster has the feature switched on, to allow detection to occur.