Upgrade Opsview Monitor

Overview

This document describes the steps required to upgrade an existing Opsview Monitor system running on either a single server instance or a distributed Opsview environment to the latest version of Opsview Monitor.

Note

After an upgrade, it is recommended that you clear your browser cache. This ensures that new features and functionality are served from the latest code rather than from files cached before the upgrade, and helps avoid problems.

Note

For Opsview to run correctly, all Opsview components on all servers must be installed from the same release repository. Mixing Opsview component versions from different releases, such as using opsview-datastore from 6.8.x and opsview-messagequeue from 6.7.x, is not supported.

Opsview Collectors must also be installed from the same release repository as the core servers, whether they are deploy-based or Remotely Managed Collectors. The only exception to this is the Opsview agent installed on monitored devices. The agent version does not need to match the exact Opsview version you’re using. However, it is always recommended to install the latest version of the agent whenever possible. For more information, see Supported operating systems.
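
To audit which component versions are installed on a host, you can list the Opsview packages with the package manager. This is just a quick sanity check, not part of the official procedure; the first command is for RHEL/OL, the second for Debian/Ubuntu:

rpm -qa 'opsview-*' infrastructure-agent
dpkg -l 'opsview-*' infrastructure-agent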

Prior to installing or upgrading Opsview Monitor to a newer version, check the following:

Warning

You must have upgraded to Opsview 6.5.x or above and performed the post-upgrade tasks, including the Database Migration for SQL Strict Mode instructions, before attempting to upgrade to Opsview 6.8.x or above. If the database migration has not been completed, the upgrade will be cancelled with a fatal warning.
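
As a quick sanity check (assuming a standard MySQL/MariaDB setup), you can inspect the server's active SQL mode, which typically includes STRICT_TRANS_TABLES when strict mode is enabled. Note that this shows the server setting only, not whether the Opsview schema migration itself has run; the Database Migration for SQL Strict Mode instructions remain the authoritative reference:

mysql -u root -p -e 'SELECT @@GLOBAL.sql_mode;'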

Depending on the size and complexity of your current Opsview Monitor system, this process may take from a few hours to a full day. It includes the processes described below.

Upgrade process

Important Notes

Warning

When performing any upgrade, it is advisable to take a backup of your system.

Warning

We recommend you upgrade all your hosts to the latest OS packages (excluding Opsview packages) before upgrading Opsview Monitor. This means that you should not manually upgrade any package beginning with opsview-, or the infrastructure-agent package. Upgrading these packages will be handled by the upgrade process.
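
For example, on RHEL/OL a full OS update that skips the Opsview-managed packages might look like the following sketch (on Debian/Ubuntu, holding the packages with apt-mark hold before running apt-get upgrade achieves the same effect):

yum update --exclude='opsview-*' --exclude='infrastructure-agent'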

Warning

The system may temporarily report non-OK and non-UP states during an upgrade while monitoring scripts and their configuration are updated. For a smoother upgrade with minimal disruption, we recommend temporarily disabling Host Check, Service Check, and Event Handler execution on all Monitoring Clusters, including the Master Monitoring Server.

To disable execution on each Cluster in Opsview 6.8.5 and below, access the Features Enabled modal within the Configuration > Monitoring Collectors page (under the Clusters tab).

For Opsview 6.8.6 and above, you can disable execution using the Collector Management page.

Warning

If there is a significant upgrade of opsview-messagequeue, existing queues will be rebuilt and all unprocessed messages will be removed. The Opsview release notes warn you when a release involves this rebuilding process.

Upgrading from 6.7.x or older to 6.8.x or newer

Note

As part of the upgrade process, running the check_deploy playbook will flag any fields that should be reduced prior to upgrading, with a "Value … was too long" or "Value … is longer than 191 chars" message.

Upgrading from 6.9.0 or older to 6.9.1 or newer

Due to a repository key change, you must add the new repository key to your Debian or Ubuntu system before the upgrade. Before proceeding with the upgrade, run the following command as root from the orchestrator to update the key on all non-remotely managed collector systems:

/opt/opsview/deploy/bin/rc.ansible ansible all -m shell -a 'test -d /usr/share/keyrings && ( curl -Os https://downloads.opsview.com/OPSVIEW-APT-KEY-2024.asc && echo "f095452e85790a0e1d3382468fced72fba1632da14a0e9ca649a888a0c38db12 OPSVIEW-APT-KEY-2024.asc" | sha256sum -c && cat OPSVIEW-APT-KEY-2024.asc | gpg --dearmor | sudo tee /usr/share/keyrings/APT-GPG-KEY-Opsview-2024.gpg > /dev/null ) || true'

If you do not have internet access, the key can be added inline instead with:

/opt/opsview/deploy/bin/rc.ansible ansible all -m shell -a 'test -d /usr/share/keyrings && ( printf -- "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQGNBGXCQIwBDACiD5D1qlC35JtUwPGcy+n7eJIX9aSdjG5hWadMjZryyUadLTMc\nsNdDB0jblxeu4CFAe7g3LNJTnNlF6NpWHVz0fZ8R0jW2rNjlaoVgUrtn8OTshz3t\nQ+9/TRSONt+D1xl3+O8p+XUg8jDr07+izxzWstD9M3j2KWuc3ts4DimRQkmdnv+Q\nyd1fEpq1Z1djgB4aSCe10LzMY8AIPRMsrMyhJWz287D1sEnAYP66zrLTHKeWmeRY\nQBKNzA3Fhgo4g95j/IJPaYATQV6/oqyrD/Gtu/8N3Brr1popE+1NVdcmbY7M7yox\nYB6NnWKGKH+4vR9k4sNgrPp9g+WI0e1ZV8dXE0fCK6EJNUO9ZIOqr3uHrAF6ESmh\ndnc672jT4QLnhvWVOHrUYo3XUPhGhZA33MUxdsrK14y1iOQfa1g7WzZ2zLSs251/\nLGlxUN5VHIejSNcbRsbJkOGhn+dTop8NZgBVaiY5my6cAgsSW3O0yS24vodKw3ln\nERH54Rnff53+odUAEQEAAbRMT3BzdmlldyByZXBvc2l0b3JpZXMgKGh0dHBzOi8v\nZG93bmxvYWRzLm9wc3ZpZXcuY29tKSA8c3VwcG9ydEBpdHJzZ3JvdXAuY29tPokB\n0QQTAQgAOxYhBM+iZ8I4zbrWrT12Kzeseua0tpoLBQJlwkCMAhsDBQsJCAcCAiIC\nBhUKCQgLAgQWAgMBAh4HAheAAAoJEDeseua0tpoLXGEL/0I8cBhqnoro1yQV3Rmx\nnwM5aOF5kP1Pla78suceG9eiM0sMohur+K4mVM+Rx5l+ZLI3c/YfXnNLDl9SRCCX\nNaKAB2w60EWxzh734iwsOzN4B4WuYlilAt6QZ8qQP951A8++Sg9zTCRNg3B50BdL\nFO5v4KHQS8hcZWp5bxulSLQpQXxQNrtE+1haD6jeAbkeUHQfglYAywnhPQvShTAs\nzPOj/SclbgjX/9MRIXrVzztOsrpq2nSfihDiTfGCCTB1n1YyyLV9ib64ePxFjLtv\nqNMYf9G0QOgM8wqgzfMa92vFowMiJBn20uWzRbbj8GYlhumsMGqxrFri9r1AuGXY\nZygsoNpsi1VvBClSi48Dei8ChC4kQFwVz3WqFUUlHkc3TtigZATu7wUQmEzkSPkB\nT8pNgpOgVwgUChIbcS2EY54k2uZUZOHVad3ffyePLLB/eHotne+OKG67ebrXhFl9\nXkhGt5qCeRSxjw2NtfkycOdMLFnhdR9l7gfnUYUmrxSgSLkBjQRlwkCMAQwAknuc\nj5TN0ZxTapwMsg4gCY1NeKqptPQWe8FnGhak+Pj1ubL5TJovXwTqw0z30c2nI1zN\nJFbI5azfwDCMjHwey+JyqC0pdHn0I2jD2aGEnneZnwHjGaC+pULIWFAJwozOz0u1\nmMBgm5XcHDDatOIL0Zb8XkQgZR6h1mOxaoGWSrw4mqrU9BFENsdSXt+EBML58m0U\nzCcgRnWDCRzzPrxkiL1i4kw/Zuk1834R3GoABM19jzWzRP8nu+fS+urcHimWjWT7\nJaWCSO+961VXF69wqnd+bL0bvAGbuOuv1cGkrppR9pG8X8Hf+tHH6Afv3m/tmBld\nheonNcPNrsvaApqP7bSCjqiz0swlyPLw58ykqHrQ8G0ekMnVkXhEVMuqsNO+gJnF\neyJ9ijhTp6Ylh/mIfevmHtlqHbA+Uwsl81Mnv/+SzzRjrU3U9rMnKTb8UE9/yQQR\nppN73/dOpnGi2oLoY4rYi4+Iju4M06If28Yd6U5/UMSasNCcd2SJaB5+RFJzABEB\nAAGJAbYEGAEIACAWIQTPomfCOM261q09dis3rHrmtLaaCwUCZcJAjAIbDAAKCRA3\nrHrmtLaaCwJuC/4vpxxUKaI7+6q7vkWALc8coZ65ytZ8cXeFIAQIkX0L97fJraq1\n9yMxu1FdfaceE7bDIEPWPgiciaV7t+qGeDgmrInBpmqp51r3YJxx8zWiO6Y7pvoG\n/nlFNl4ZXZAEu7PtrcFaUNMgqP+EPAz0S3j32RIlMAOFTetxNEr0BoowlrMy6kCd\nqMBaN6rJLGH3U8yeNFHttKN32AvfIak+PRM6kVmdh+VCRpJlrpbb1JmeDF22Z5RW\nqNVPvgV5meSHiK1y8YMMsQSXNf8h2vbDn4K2HMpWUTjtdlPnsComuFv/+Laykbb7\nhkPy2bPwZgDDpViHx1O7wRaQQhjaMD/Iaohh6ynpsEZe9asXUpXOA49McQBQxNGb\nEKguzPusnnjwtbXbfPKLQGqGm4x7wQkB0JbTUarL8Jw1wXlyaibOSOkaCXTqMiFx\nbO/9XIX0oWLiOQYj+vFr/TaTXA8HEdEswwTZrirYHK0zAYDcU3AcFKIqVjUWeMkr\nl+F2x62WJX7sDGE=\n=Yi2S\n-----END PGP PUBLIC KEY BLOCK-----" | gpg --dearmor | sudo tee /usr/share/keyrings/APT-GPG-KEY-Opsview-2024.gpg > /dev/null ) || true'
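
Whichever method you use, you can confirm that the key has been installed on a host by inspecting the dearmored keyring (gpg --show-keys reads a key file without importing it):

gpg --show-keys /usr/share/keyrings/APT-GPG-KEY-Opsview-2024.gpg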

Make sure to specify the location of the key within /etc/apt/sources.list.d/opsview.list. It should be placed between deb and the Opsview URL.

[signed-by=/usr/share/keyrings/APT-GPG-KEY-Opsview-2024.gpg]
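
For example, the complete line in /etc/apt/sources.list.d/opsview.list should then look similar to the following, with focal replaced by your OS codename and 6.x by your target version:

deb [signed-by=/usr/share/keyrings/APT-GPG-KEY-Opsview-2024.gpg] https://downloads.opsview.com/opsview-commercial/6.x/apt focal main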

If these steps are not carried out, you may encounter the error: The following signatures couldn't be verified because the public key is not available.

Activation key

Ensure you have the activation key for your system.

Back up your Opsview data and system

Please refer to the Common Tasks page for more information.

If you are upgrading from Opsview 6.7.x or older, run the command below as root to back up all databases on the server:

mysqldump -u root -p --add-drop-database --extended-insert --opt --all-databases | gzip -c > /tmp/databases.sql.gz

If you are upgrading from Opsview 6.8.x or newer, run this command instead:

mysqldump -u root -p --default-character-set=utf8mb4 --add-drop-database --extended-insert --opt --all-databases | sed 's/character_set_client = utf8 /character_set_client = utf8mb4 /' | gzip -c > /tmp/databases.sql.gz

The MySQL root user password can be found in /opt/opsview/deploy/etc/user_secrets.yml.
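
A quick way to locate the entry (the exact variable name can vary between releases, so match loosely):

sudo grep -i 'root' /opt/opsview/deploy/etc/user_secrets.yml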

Ensure you copy your database dump (/tmp/databases.sql.gz in the above command) to a secure location.
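
Before moving the dump, it is worth verifying that the archive is intact; the copy destination below (backup-host) is purely illustrative, so substitute your own secure location:

gunzip -t /tmp/databases.sql.gz && echo 'dump OK'
scp /tmp/databases.sql.gz backup-user@backup-host:/path/to/backups/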

Opsview Deploy

Upgrading to a new version of Opsview Monitor takes you through the following steps, either automated or manual:

  1. Add the package repository for the new version of Opsview Monitor.
  2. Install the latest Opsview Deploy (opsview-deploy) package.
  3. Install the latest Opsview Python (opsview-python3) package.
  4. Re-run the installation playbooks to upgrade to the new version.

Once the upgrade has completed, all hosts managed by Opsview Deploy will have been upgraded to the latest version of Opsview Monitor.

Note

Running the curl commands will start the upgrade process, so only run them when you are ready to upgrade Opsview.

Upgrading: Automated

  1. Configure the correct Opsview Monitor package repository and update opsview-deploy to the corresponding version by running the following command:

    curl -sLo- https://deploy.opsview.com/6.x | sudo bash -s -- --only repository,bootstrap
    

    You must replace 6.x with the correct version you want to upgrade to.

  2. Validate that your system is ready for upgrade, and set up Python on all systems (installing it if needed), by running the command:

    root:~# cd /opt/opsview/deploy
    root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/check-deploy.yml
    
  3. Continue to upgrade your system by running this command:

    root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/setup-everything.yml
    

Once completed, continue with Post upgrade process.

Upgrading: Manual

Amend your Opsview repository configuration to point to the release you are upgrading to (for example, 6.10).

For RHEL and OL:

Check that the contents of /etc/yum.repos.d/opsview.repo match the following, paying special attention to the version number specified within the baseurl line.

[opsview]
name    = Opsview Monitor
baseurl = https://downloads.opsview.com/opsview-commercial/6.x/yum/rhel/$releasever/$basearch
enabled = yes
gpgkey  = https://downloads.opsview.com/OPSVIEW-RPM-KEY-2024.asc

You must replace 6.x with the correct version you want to upgrade to.

For Debian and Ubuntu:

Check that the contents of /etc/apt/sources.list.d/opsview.list match the following, paying special attention to the version number specified within the URL. You should replace focal with your OS codename (as per other files within the same directory).

deb [signed-by=/usr/share/keyrings/APT-GPG-KEY-Opsview-2024.gpg] https://downloads.opsview.com/opsview-commercial/6.x/apt focal main

Update Opsview Deploy

Run the commands below for RHEL and OL:

yum makecache
yum install opsview-deploy

Run the commands below for Debian and Ubuntu:

apt-get update
apt-get install opsview-deploy
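
To confirm that the expected opsview-deploy version is now installed, you can query the package manager (RHEL/OL first, then Debian/Ubuntu):

rpm -q opsview-deploy
dpkg -s opsview-deploy | grep -i '^version'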

Pre-deployment checks

Before running opsview-deploy, we recommend that you check the following list of items.

Manual checks

| What | Where | Why |
|------|-------|-----|
| All YAML files follow correct YAML format | opsview_deploy.yml, user_*.yml | Each YAML file is parsed each time opsview-deploy runs. |
| All hostnames are FQDNs | opsview_deploy.yml | If Opsview Deploy cannot detect the host’s domain, the fallback domain opsview.local will be used instead. |
| SSH user and SSH port have been set on each host | opsview_deploy.yml | If these aren’t specified, the default SSH client configuration will be used instead. |
| Any host-specific vars are applied in the host’s vars in opsview_deploy.yml | opsview_deploy.yml, user_*.yml | Configuration in user_*.yml is applied to all hosts. |
| An IP address has been set on each host | opsview_deploy.yml | If no IP address is specified, the deployment host will try to resolve each host every time. |
| All necessary ports are allowed on local and remote firewalls | All hosts | Opsview requires various ports for inter-process communication. See Ports. |
| If you have rehoming configured | user_upgrade_vars.yml | Deploy now configures rehoming automatically. See Rehoming. |
| If you have Ignore IP in Authentication Cookie enabled | user_upgrade_vars.yml | Ignore IP in Authentication Cookie is now controlled in Deploy. See Rehoming. |
| Webserver HTTP/HTTPS preference declared | user_vars.yml | In Opsview 6, HTTPS is enabled by default; to enforce HTTP-only, set opsview_webserver_use_ssl: False. See opsview-web-app. |
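
For the YAML-format check in the first row, a quick syntax validation can be run from the deploy host; this is a sketch that assumes python3 with PyYAML is available (the Ansible tooling used by opsview-deploy normally provides it):

python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print("OK")' /opt/opsview/deploy/etc/opsview_deploy.yml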

Example of opsview_deploy.yml:

---
orchestrator_hosts:
  # Use an FQDN here
  my-host.net.local:
    # Ensure that an IP address is specified
    ip: 10.2.0.1
    # Set the remote user for SSH (if not default of 'root')
    ssh_user: cloud-user
    # Set the remote port for SSH (if not default of port 22)
    ssh_port: 9022
    # Additional host-specific vars
    vars:
      # Path to SSH private key
      ansible_ssh_private_key_file: /path/to/ssh/private/key

Automated checks

Opsview Deploy can also look for, and in some cases resolve, issues automatically. Before executing setup-hosts.yml or setup-everything.yml, run the check-deploy.yml playbook. Beginning with Opsview 6.6.x, this playbook also sets up Python on all systems used:

root:~# cd /opt/opsview/deploy
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/check-deploy.yml

If any potential issues are detected, a REQUIRED ACTION RECAP will be added to the output when the play finishes.

| Check | Notes or Limitations | Severity |
|-------|----------------------|----------|
| Deprecated variables | Checks for: opsview_domain, opsview_manage_etc_hosts | MEDIUM |
| Connectivity to EMS server | No automatic detection of EMS URL in opsview.conf overrides | HIGH |
| Connectivity to Opsview repository | No automatic detection of overridden repository URL(s) | HIGH |
| Connectivity between remote hosts | Only includes LoadBalancer ports. Erlang distribution ports, for example, are not checked | MEDIUM |
| FIPS crypto enabled | Checks value of /proc/sys/crypto/fips_enabled | HIGH |
| SELinux enabled | SELinux will be set to permissive mode later on in the process by setup-hosts.yml, if necessary | LOW |
| Unexpected umask | Checks umask in /bin/bash for root and nobody users. Expects either 0022 or 0002 | LOW |
| Unexpected STDOUT starting shells | Checks for any data on STDOUT when running /bin/bash -l | LOW |
| Availability of SUDO | Checks whether Ansible can escalate permissions (using sudo) | HIGH |
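
Some of these checks can also be reproduced by hand if you want to confirm a finding; for example, the FIPS and umask rows correspond to the following commands (expect 0 from the first unless FIPS is required, and 0022 or 0002 from the second):

cat /proc/sys/crypto/fips_enabled
umask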

When a check fails, an ‘Action’ is generated. These actions are formatted, sorted by severity, and displayed at the end of the output when the play finishes.

The severity levels are:

| Severity | Description |
|----------|-------------|
| HIGH | Will certainly prevent Opsview from installing or operating correctly. |
| MEDIUM | May prevent Opsview from installing or operating correctly. |
| LOW | Unlikely to cause issues but may contain useful information. |

By default, the check_deploy role will fail if any actions are generated with MEDIUM or HIGH severity. To modify this behaviour, set the following in user_vars.yml:

check_action_fail_severity: MEDIUM

Actions at this severity or higher will then cause the role to fail at the end of its run.

The following example shows two MEDIUM severity issues generated after executing the check-deploy playbook.

REQUIRED ACTION RECAP **************************************************************************************************************************************************************************************************************************

[MEDIUM -> my-host] Deprecated variable: opsview_domain
  | To set the host's domain, configure an FQDN in opsview_deploy.yml.
  |
  | For example:
  |
  | >>  opsview-host.my-domain.com:
  | >>    ip: 1.2.3.4
  |
  | Alternatively, you can set the domain globally by adding opsview_host_domain to your user_*.yml:
  |
  | >>  opsview_host_domain: my-domain.com

[MEDIUM -> my-host] Deprecated variable: opsview_manage_etc_hosts
  | To configure /etc/hosts, add opsview_host_update_etc_hosts to your user_*.yml:
  |
  | >>  opsview_host_update_etc_hosts: true
  |
  | The options are:
  | - true   Add all hosts to /etc/hosts
  | - auto   Add any hosts which cannot be resolved to /etc/hosts
  | - false  Do not update /etc/hosts


Thursday 21 February 2019  17:27:31 +0000 (0:00:01.060)       0:00:01.181 *****
===============================================================================
check_deploy : Check deprecated vars in user configuration ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 1.06s
check_deploy : Check for 'become: yes' -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.03s

*** [PLAYBOOK EXECUTION SUCCESS] **********

Run Opsview Deploy

  1. Run the command below to validate that your system is ready for upgrade:

    root:~# cd /opt/opsview/deploy
    root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/check-deploy.yml
    
  2. Run the command below to continue the upgrade:

    root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/setup-everything.yml
    

Post-upgrade process

As part of the upgrade process, Opsview Deploy overwrites the contents of the configuration files for snmpd and snmptrapd. If Deploy detects that the file being overwritten had been modified, it backs up the original under a timestamped name before the new configuration replaces it.

A message similar to the one below is displayed at the end of an Opsview Deploy run, indicating which configuration file has been overwritten.

REQUIRED ACTION RECAP *************************************************************************

[MEDIUM -> opsview-orch] SNMP configuration file '/etc/snmp/snmpd.conf' has been overwritten
  | The SNMP configuration file '/etc/snmp/snmpd.conf', has been overwritten by Opsview Deploy.
  |
  | The original contents of the file have been backed up and can be found in
  | '/etc/snmp/snmpd.conf.15764.2020-12-16@12:31:32~'
  |
  | Custom snmpd/snmptrapd configuration should be moved to the custom
  | configuration directories documented in the new file.

To avoid this in future, all custom snmpd and snmptrapd configuration should instead be placed in new xxxx.conf files within the custom configuration directories documented in the new configuration files.

Verify the started processes

  1. To verify that all Opsview processes are running, run:

    /opt/opsview/watchdog/bin/opsview-monit summary
    
  2. If any Opsview processes are not running after deployment, run:

    /opt/opsview/watchdog/bin/opsview-monit start <process name>
    /opt/opsview/watchdog/bin/opsview-monit monitor <process name>
    
  3. If watchdog is not running after deployment, run:

    /opt/opsview/watchdog/bin/opsview-monit
    
  4. If you are using the Reporting Module, email settings will need to be reapplied - see Known Issues.

Upgrade Opspacks

  1. Run the command below as the opsview user to update and add new Opsview-provided Opspacks for the version of Opsview you are upgrading to:

    sudo -iu opsview bash -c "/opt/opsview/coreutils/bin/import_all_opspacks -Bf"
    

    Note

    The -B parameter is required for the upgrade process to handle built-in Opspack files. This must be set to avoid encountering Some plugins already exist in builtin directory or Some files already exist in builtin directory errors.
  2. If you have amended your configuration to move the Opsview Servers (Orchestrator, Collectors, and Database) to a host group other than Monitoring Servers, you must set the playbook variable opsview_monitoring_host_group in the /opt/opsview/deploy/etc/user_vars.yml file, such as:

    opsview_monitoring_host_group: New Group with Opsview Servers
    
  3. Run the following as the root user:

    cd /opt/opsview/deploy
    ./bin/opsview-deploy lib/playbooks/setup-monitoring.yml
    
  4. If you receive Service Check alerts such as the following, you have not completed the above steps correctly:

    • CRITICAL: Could Not Connect to localhost Response Code: 401 Unauthorized
    • CHECK_NRPE: Received 0 bytes from daemon. Check the remote server logs for error messages.

Apply Changes in Opsview

In the Opsview application UI, navigate to Configuration > Apply Changes, and click Apply Changes.

Warning

To ensure your monitoring resumes as intended, remember to re-enable Host Checks, Service Checks, and Event Handlers for each Monitoring Cluster you want them active on, including the Master Monitoring Server. You can re-enable these features on the Collector Management page.

Upgrade Remotely Managed Collectors

Once the Opsview orchestrator upgrade is complete, it’s recommended to prioritize upgrading your Remotely Managed Collectors by following the upgrade instructions.

Warning

ITRS Opsview doesn’t support running Remotely Managed Collectors on a version of Opsview older than the orchestrator for extended periods.
["Opsview On-premises"] ["User Guide", "Technical Reference"]

Was this topic helpful?