Post-upgrade tasks

SNMP configuration files overwrite

As part of the upgrade process, Opsview Deploy overwrites the contents of the configuration files for snmpd and snmptrapd. If custom changes are detected, the existing file is backed up under a timestamped name before the new configuration replaces it.

A message similar to the following appears at the end of an Opsview Deploy run, indicating which configuration file has been overwritten.

REQUIRED ACTION RECAP *************************************************************************

[MEDIUM -> opsview-orch] SNMP configuration file '/etc/snmp/snmpd.conf' has been overwritten
  | The SNMP configuration file '/etc/snmp/snmpd.conf', has been overwritten by Opsview Deploy.
  |
  | The original contents of the file have been backed up and can be found in
  | '/etc/snmp/snmpd.conf.15764.2020-12-16@12:31:32~'
  |
  | Custom snmpd/snmptrapd configuration should be moved to the custom
  | configuration directories documented in the new file.
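
After an overwrite, you can list the timestamped backups and compare each one against its replacement to identify the custom settings that need moving. The following is a minimal sketch, assuming the backup naming pattern shown in the message above; it is demonstrated against a throwaway directory rather than /etc/snmp:

```shell
#!/bin/sh
# Sketch: find timestamped SNMP config backups left by Opsview Deploy in a
# directory and diff each against its replacement. The real directory is
# /etc/snmp; a throwaway directory is used below for demonstration.
list_snmp_backups() {
  dir="$1"
  for backup in "$dir"/snmpd.conf.*~ "$dir"/snmptrapd.conf.*~; do
    [ -e "$backup" ] || continue              # glob matched nothing
    current="${backup%%.conf.*}.conf"         # e.g. .../snmpd.conf
    echo "Backup found: $backup"
    diff "$current" "$backup" || true         # custom lines appear as additions
  done
}

# Demonstration against a throwaway directory (backup name pattern assumed
# from the Deploy message above):
demo=$(mktemp -d)
printf 'agentaddress udp:161\n' > "$demo/snmpd.conf"
printf 'agentaddress udp:161\nrouser custom\n' \
  > "$demo/snmpd.conf.15764.2020-12-16@12:31:32~"
list_snmp_backups "$demo"
rm -rf "$demo"
```

Lines prefixed with `>` in the diff output are the custom additions to move into the new custom configuration directories.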

To avoid future overwrites, move all custom snmpd and snmptrapd configuration into new xxxx.conf files in the custom configuration directories documented in the new configuration files.

Verify Opsview processes

  1. To verify that all Opsview processes are running, run:

    /opt/opsview/watchdog/bin/opsview-monit summary
    
  2. If any Opsview processes are not running after deployment, run:

    /opt/opsview/watchdog/bin/opsview-monit start <process name>
    /opt/opsview/watchdog/bin/opsview-monit monitor <process name>
    
  3. If watchdog is not running after deployment, run:

    /opt/opsview/watchdog/bin/opsview-monit
    
  4. If you are using the Reporting Module, email settings will need to be reapplied. For more details, see Opsview Reporting Module Known Issues.
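Steps 1 and 2 above can be combined into a small wrapper that emits the restart commands for anything the summary does not report as Running. The following is a sketch that assumes the standard monit summary line format (Process '<name>' <status>), which may vary between versions, so verify it against your own summary output first:

```shell
#!/bin/sh
# Sketch: print the opsview-monit commands needed to restart any process
# that 'opsview-monit summary' does not report as Running.
# Assumes standard monit summary lines: Process '<name>'  <status>
MONIT=/opt/opsview/watchdog/bin/opsview-monit

restart_stopped() {
  # Reads summary text on stdin; emits the start/monitor commands to run.
  awk '/^Process/ && $NF != "Running" { print $2 }' | tr -d "'" |
  while read -r name; do
    echo "$MONIT start $name"
    echo "$MONIT monitor $name"
  done
}

# Real use (review the output before piping it to sh):
#   "$MONIT" summary | restart_stopped
# Demonstration with sample summary text:
printf "%s\n" \
  "Process 'opsview-web'       Running" \
  "Process 'opsview-executor'  Not monitored" | restart_stopped
```

In the demonstration, only the commands for opsview-executor are printed, since opsview-web is already reported as Running.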

Upgrade Opspacks

  1. Run the following command as the opsview user to update existing and add new Opsview-provided Opspacks for the version of Opsview you are upgrading to.

    sudo -iu opsview bash -c "/opt/opsview/coreutils/bin/import_all_opspacks -Bf"
    

    Note

    The -B parameter is required so that the upgrade process handles built-in Opspack files. It must be set to avoid 'Some plugins already exist in builtin directory' or 'Some files already exist in builtin directory' errors.
  2. If you have amended your configuration to move the Opsview Servers (Orchestrator, Collectors, and Database) to a host group other than Monitoring Servers, you must set the playbook variable opsview_monitoring_host_group in the /opt/opsview/deploy/etc/user_vars.yml file, such as:

    opsview_monitoring_host_group: New Group with Opsview Servers
    
  3. Run the following as the root user:

    cd /opt/opsview/deploy
    ./bin/opsview-deploy lib/playbooks/setup-monitoring.yml
    
  4. If you receive Service Check alerts such as the following, the steps above have not been completed correctly:

    • CRITICAL: Could Not Connect to localhost Response Code: 401 Unauthorized
    • CHECK_NRPE: Received 0 bytes from daemon. Check the remote server logs for error messages.
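
Steps 1 and 3 above can be sketched as a single script with a dry-run guard, so the exact commands can be reviewed before anything is executed. The commands are taken verbatim from the steps; the DRY_RUN wrapper itself is an addition:

```shell
#!/bin/sh
# Sketch: steps 1 and 3 combined. With DRY_RUN=1 (the default) the script
# only prints what it would do; set DRY_RUN=0 to execute as root.
set -e
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Step 1: update and add Opsview-provided Opspacks as the opsview user
# (-B handles built-in Opspack files, as noted above).
run sudo -iu opsview bash -c "/opt/opsview/coreutils/bin/import_all_opspacks -Bf"

# Step 3: re-run the monitoring setup playbook from the deploy directory.
run cd /opt/opsview/deploy
run ./bin/opsview-deploy lib/playbooks/setup-monitoring.yml
```

Running the script unmodified prints the three commands without executing them; re-run it with DRY_RUN=0 as root once the output looks correct.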

Apply Changes in Opsview

In the Opsview application UI, navigate to Configuration > Apply Changes, and select Apply Changes.

Warning

To resume monitoring, re-enable Host Checks, Service Checks, and Event Handlers for each Monitoring Cluster, including the Master Monitoring Server. You can do this on the Collector Management page.

Upgrade Remotely Managed Collectors

Once the Opsview orchestrator upgrade is complete, prioritize upgrading your Remotely Managed Collectors. ITRS Opsview does not support running Remotely Managed Collectors on an Opsview version older than the orchestrator for extended periods.

Note

Depending on your Opsview version, additional upgrade steps may need to be performed before the standard upgrade process. For detailed instructions, see Upgrading Remotely Managed Collectors.
