Agentic AI App Release Notes
Note
Version 0.2.0 marks the Beta release of the ITRS Agentic AI app. This app release requires a minimum ITRS Analytics Platform version of 2.17.0 and Web Console version 3.8.0 to operate.
Agentic AI Beta 0.3.0
Released: xx April 2026 Beta
Agentic AI Beta app 0.3.0 is a maintenance release focused on stability and dependency updates for ITRS Analytics Platform 2.18.0. It also improves overall reliability, including a fix for a chat UI issue where settings could be saved repeatedly when no LLM was configured, which could cause the browser to become unresponsive.
Agentic AI Beta 0.2.0
Released: 1 April 2026 Beta
About the ITRS Agentic AI app
The Agentic AI app is a built-in AI assistant for the ITRS Web Console. It helps teams ask operational questions, explore system context, and get guided responses inside the same platform they already use for monitoring and administration. It also gives administrators a safe way to configure AI models, test them, and evaluate quality before rolling changes out to users.
The app combines an AI chat experience for operational users with an AI Admin console for model and evaluation governance. It is built on an Agentic AI layer designed to accelerate issue diagnosis and resolution while preserving operator control and transparency.
Core design principles:
- Transparent reasoning — Every recommendation includes a clear explanation.
- Agnostic inference — You can use different AI models and compare their results.
- Human-governed operation — People stay in control. AI supports decisions, but does not make final choices.
- Real-time telemetry inputs — Ingests live metrics, events, logs, and traces for analysis and responses.
In this model, specialized agents support key operational workflows:
- Root cause analysis agent — Connects metrics, events, logs, and traces to find likely root causes and reduce resolution time.
- Support agent — Gives context-aware help in plain language to speed up onboarding and troubleshooting.
New features and enhancements
These are the new features and enhancements of this release:
- ITRS Analytics Agentic AI Beta app 0.2.0 introduces usability and control improvements, including:
- stronger role-based access control (RBAC) for protected admin operations;
- the ability to stop a request while it is still processing; and
- a Test button in the LLM editor to help admins validate model behavior more easily.
- Improved reliability and UX consistency, including refined request cancellation behavior, clearer propagation of backend errors to the UI, and persistence and functional test fixes.
- Platform improvements, including architecture standardization and migration to the newer Web Platform Micro-Frontend (MFE) core.
- Operational and security hardening, including dependency CVE fixes and the addition of standard Kubernetes labels to workloads.
Use case scenarios
This section explains where the Agentic AI app helps in day-to-day operations. This app provides a centralized, controlled way to configure and manage LLM providers and run AI-assisted operational conversations in the Web Console. In simple terms, you can ask questions in the Web Console and get guided answers faster, while admins stay in control of models, quality checks, and rollout decisions.
Perform root cause analysis
The Root Cause Analysis Agent helps teams resolve incidents faster. It guides operators through related telemetry and provides automated RCA support.
Key use cases:
- Guided incident analysis — Automatically correlates related metrics, logs, and topology signals so operators can focus on the most relevant evidence first.
- Root cause identification — Combines live telemetry and past context to suggest likely causes faster.
- Contextual knowledge surfacing — Shows similar past incidents, relevant documentation, and known fixes to reduce manual search.
Client value:
- Faster incident resolution — Helps teams find and fix issues sooner.
- Greater operational insight — Brings alerts, telemetry, and historical patterns into one investigation view.
- Improved service reliability — Supports faster recovery and better preventive actions.
For example, a Site Reliability Engineer (SRE) asks, “What changed in the last 30 minutes for this service?” to quickly gather context before escalating. The AI app runs the diagnosis and presents its findings.
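The guided-analysis step above can be sketched conceptually. This is an illustrative example only, not the app's actual implementation; the event structure, field names, and 30-minute window are assumptions for the sake of the sketch.

```python
from datetime import datetime, timedelta

def correlate_events(events, incident_time, window_minutes=30):
    """Return events within the lookback window, most recent first.

    A simplified stand-in for guided incident analysis: gather
    recent changes and signals around an incident timestamp so the
    operator sees the most relevant evidence first.
    """
    window_start = incident_time - timedelta(minutes=window_minutes)
    related = [e for e in events if window_start <= e["time"] <= incident_time]
    return sorted(related, key=lambda e: e["time"], reverse=True)

# Hypothetical telemetry events for one service
events = [
    {"time": datetime(2026, 4, 1, 9, 40), "type": "deploy", "detail": "v2.3.1 rollout"},
    {"time": datetime(2026, 4, 1, 9, 55), "type": "metric", "detail": "latency p99 spike"},
    {"time": datetime(2026, 4, 1, 8, 0),  "type": "log",    "detail": "routine GC pause"},
]

recent = correlate_events(events, incident_time=datetime(2026, 4, 1, 10, 0))
for e in recent:
    print(e["type"], "-", e["detail"])
```

In a real deployment this correlation would also draw on topology and historical context, as described above; the sketch shows only the time-window filtering idea.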
Get support guidance
The Support Agent makes expert Geneos knowledge accessible across application support teams by delivering in-app, context-aware guidance.
Key use cases:
- General “How-to” guidance — Gives clear in-app steps for using ITRS Analytics, including data source setup and metric interpretation.
- Setup and configuration — Helps users with onboarding and environment setup, with guidance tailored to their deployment.
- Complex support queries — Combines information from multiple support sources and returns context-aware answers.
Client value:
- Faster time to value — Speeds up setup and learning so users get results sooner.
- Personalized, actionable support — Gives practical answers for each user’s situation, with less manual searching.
- Seamless user experience — Keeps help inside the product to reduce friction and improve support efficiency.
For example, a support user might ask, “Create an XML template for a self-announcing Netprobe,” and the app would use Retrieval-Augmented Generation (RAG) to search and combine relevant information from multiple documentation sources.
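The RAG flow in this example can be sketched in miniature. This is a conceptual illustration, not the app's implementation: the keyword-overlap scorer is a toy stand-in for real embedding-based retrieval, and the document titles are invented.

```python
def retrieve(query, documents, top_k=2):
    """Score documents by keyword overlap with the query (a toy
    stand-in for embedding-based retrieval) and return the best."""
    q_terms = set(query.lower().split())
    scored = []
    for doc in documents:
        overlap = len(q_terms & set(doc["text"].lower().split()))
        scored.append((overlap, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(query, snippets):
    """Combine retrieved snippets into a grounded prompt for the LLM."""
    context = "\n".join(f"- {s['title']}: {s['text']}" for s in snippets)
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical documentation snippets
docs = [
    {"title": "Netprobe setup", "text": "a self-announcing netprobe uses an xml template"},
    {"title": "Gateway basics", "text": "gateways aggregate data from probes"},
]

query = "Create an XML template for a self-announcing Netprobe"
prompt = build_prompt(query, retrieve("xml template self-announcing netprobe", docs))
print(prompt)
```

The key idea is that only relevant sources reach the model, so answers stay grounded in retrieved documentation rather than the model's general knowledge.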
Build your Agentic AI foundation
Agentic AI Foundation helps teams start quickly with Agentic AI in ITRS Analytics, while keeping model quality and governance under control.
Key use cases:
- Bring your own intelligence — Connect to frontier or self-hosted LLMs and keep control of data, compliance, and governance. See Add an LLM.
- Multiple LLMs — Test, compare, and route across models to improve cost, speed, and output quality.
- Understand performance — Continuously evaluate agent responses with built-in scoring and benchmarking.
Client value:
- Rapid time to value — Deploy and validate agentic workflows quickly on your current infrastructure.
- Confidence in Agentic AI output — Use clear evaluations to trust model behavior in your environment.
- Flexibility in deployment — Choose the AI stack that fits your security, compliance, and operational needs.
How to use in ITRS Analytics Web Console
Follow these steps to choose an agent mode, select an LLM, and start interacting with the ITRS Agentic SRE chat.
- In ITRS Analytics Web Console, click ITRS Agentic SRE chat in the upper-right corner.
- In the prompt toolbar, open Agent and select from SRE, Support, or RCA.
- Select an LLM, then enter your prompt. You can also open Entity Viewer and select a specific entity to automatically add it to the AI chat field.
- SRE — General-purpose operational assistant mode. Use this for broad troubleshooting or when you are not sure which specialist mode to choose. This follows the general agent path.
- RCA — Root cause analysis mode. Use this when investigating incidents or signals and you need deeper analysis workflows (for example, service-level, Uptrends, or Geneos RCA paths).
- Support — Documentation-grounded assistance mode. Use this for product how-to questions, feature explanations, setup guidance, and support-style Q&A based on retrieved ITRS documentation.
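The mode selection above can be sketched as a simple routing step. This is a hypothetical illustration of the concept; the request shape and field names are assumptions, not the product's API.

```python
AGENT_MODES = {
    "SRE": "general-purpose operational assistant",
    "RCA": "root cause analysis workflows",
    "Support": "documentation-grounded Q&A",
}

def build_chat_request(mode, prompt, entity=None):
    """Build a chat request for the chosen agent mode. If an entity
    was picked in Entity Viewer, attach it as extra context."""
    if mode not in AGENT_MODES:
        raise ValueError(f"Unknown agent mode: {mode}")
    request = {"agent": mode, "prompt": prompt}
    if entity:
        request["context"] = {"entity": entity}
    return request

# An RCA-mode prompt scoped to a specific entity
req = build_chat_request("RCA", "Why did checkout latency spike?", entity="checkout-service")
print(req["agent"], req["context"]["entity"])
```

Choosing SRE when in doubt mirrors the guidance above: it is the broad default, while RCA and Support route the prompt to their specialist workflows.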
Configure AI settings in Admin
Add an LLM
Use this to onboard a new LLM and make it immediately available for operational users in chat.
- In the Web Console, go to Admin > AI > LLMs.
- Click Add LLM to create a new entry, or copy an existing LLM configuration.
- Select a provider and fill in the required provider fields, such as API key, base URL or endpoint, model, and any other provider-specific settings.
- Use Test in the LLM editor to validate connectivity and configuration.
- Save and optionally reorder the LLM priority within the list.
- Open the ITRS Agentic SRE chat.
- Select a backend, an agent, and the newly configured LLM, then start prompting. The new model configuration is immediately available in operational chat without redeploying the UI.
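The provider fields above can be pictured as a small configuration record with a pre-save check. This is a minimal sketch under assumed field names; it is not the product's actual configuration schema.

```python
def validate_llm_config(config):
    """Check that the required provider fields are present before
    saving. Field names here are illustrative, not the product schema."""
    required = ["provider", "model", "api_key", "base_url"]
    missing = [f for f in required if not config.get(f)]
    if missing:
        raise ValueError(f"Missing required fields: {', '.join(missing)}")
    return True

# Hypothetical LLM entry as an admin might fill it in
llm = {
    "provider": "openai-compatible",                 # assumed provider name
    "model": "example-model",                        # placeholder model id
    "api_key": "sk-...redacted",
    "base_url": "https://llm.example.internal/v1",   # example endpoint
    "priority": 1,                                   # position in the LLM list
}

print(validate_llm_config(llm))
```

Validating before saving mirrors the purpose of the Test button: catching a missing key or endpoint at configuration time rather than when a user first prompts the model.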
Run an Eval campaign in AI Admin and review quality reports
Use this to check model behavior before a wider rollout and get clear metrics for model quality and operational readiness.
- Go to AI > Eval and click Run Eval.
- Select the target LLM, eval type, iterations, and graders. Graders include LLM, consistency, duration, token usage, and string content.
- Load tests from a test set (optionally filtered by version or group), or define custom tests.
- Select tests to include and start the run.
- Monitor live run progress and completion status.
- Open the generated reports, drill into test details and answer details, and compare outcomes.
- If required, re-run the tests with adjusted settings or graders and track the result trends.
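The grading step above can be sketched in simplified form. The two graders below are toy stand-ins for the built-in ones (string content and duration); the answer fields and scoring formulas are assumptions for illustration only.

```python
def grade_run(answers, graders):
    """Apply each grader to every answer and average the scores
    per grader, producing a simple quality report."""
    report = {}
    for name, fn in graders.items():
        scores = [fn(a) for a in answers]
        report[name] = sum(scores) / len(scores)
    return report

# Hypothetical answers from two iterations of one eval test
answers = [
    {"text": "restart the probe", "duration_ms": 800, "tokens": 120},
    {"text": "restart the probe process", "duration_ms": 1200, "tokens": 150},
]

graders = {
    # 1.0 if the expected phrase appears (string-content style)
    "string_content": lambda a: 1.0 if "restart" in a["text"] else 0.0,
    # Faster answers score higher (duration style), capped at 1.0
    "duration": lambda a: min(1.0, 1000 / a["duration_ms"]),
}

report = grade_run(answers, graders)
print(report)
```

Aggregating per-grader scores across iterations is what makes result trends comparable between runs, which is the point of re-running with adjusted settings.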
Disclaimer
The information contained in this document is for general information and guidance on our products, services, and other matters. It is only for information purposes and is not intended as advice which should be relied upon. We try to ensure that the content of this document is accurate and up-to-date, but this cannot be guaranteed. Changes may be made to our products, services, and other matters which are not noted or recorded herein. All liability for loss and damage arising from reliance on this document is excluded (except where death or personal injury arises from our negligence or loss or damage arises from any fraud on our part).