AI Chat
Interactive AI assistance for network operations with credential sanitization, copilot panel, and context-aware conversations.
Overview
AI Chat is a floating panel integrated into the NetStacks UI that provides conversational AI assistance for network operations. It combines real-time device context, conversation history, and a built-in credential sanitization pipeline so you can ask questions about your network without worrying about sensitive data leaking to third-party LLM providers.
AI Chat provides four core capabilities:
- Conversational Assistance — Ask natural language questions about network operations. The AI responds with vendor-specific commands, configuration guidance, and troubleshooting steps tailored to the device you are connected to.
- Inline Suggestions — AI-generated command suggestions appear directly in the chat panel. Click any suggestion to insert it into your active terminal session.
- Copilot Panel — A side-by-side assistant that runs alongside your terminal session, providing continuous guidance during complex troubleshooting workflows like BGP convergence analysis or OSPF adjacency debugging.
- Automatic Credential Sanitization — Passwords, SNMP communities, API keys, routing protocol secrets, VPN pre-shared keys, and private keys are automatically detected and replaced with safe placeholders before any data is sent to the LLM provider.
In Enterprise mode, all AI processing is handled centrally by the Controller. Terminal clients act as thin clients and never communicate directly with AI providers. This ensures consistent security policies, centralized credential sanitization, and organization-wide audit logging.
How It Works
AI Chat connects your terminal context to your configured LLM provider through a multi-stage pipeline that ensures accuracy, relevance, and security.
Context Injection
When you send a message, NetStacks automatically injects relevant context into the prompt sent to the LLM:
- Device Information — Device type (Cisco IOS, NX-OS, Juniper Junos, Arista EOS, Palo Alto PAN-OS), hostname, and current command mode (user exec, privileged exec, config mode).
- Session Context — Recent terminal output, the command history for the current session, and any text you have selected in the terminal.
- Knowledge Base — Relevant documents from your organization's knowledge base are retrieved via RAG (Retrieval-Augmented Generation) to provide organization-specific answers.
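The injection step can be pictured as assembling these three context sources into a single prompt. The sketch below is illustrative only: the field names (`os`, `hostname`, `mode`), parameter names, and prompt layout are assumptions, not the internal NetStacks prompt format.

```python
def build_prompt(user_message, device, recent_output, kb_snippets):
    """Assemble a context-enriched prompt from device info, session
    output, and retrieved knowledge-base excerpts (illustrative sketch)."""
    context = [
        f"Device: {device['os']} (hostname: {device['hostname']}, "
        f"mode: {device['mode']})",
        "Recent terminal output:",
        recent_output.strip(),
    ]
    if kb_snippets:
        context.append("Relevant knowledge base excerpts:")
        context.extend(f"- {s}" for s in kb_snippets)
    context.append("")
    context.append(f"User question: {user_message}")
    return "\n".join(context)

prompt = build_prompt(
    "Why is this BGP neighbor stuck in Active state?",
    {"os": "Cisco IOS-XE", "hostname": "core-rtr-01",
     "mode": "privileged exec"},
    "BGP neighbor 10.0.2.2 4 65004 ... Active",
    ["Runbook: BGP session troubleshooting checklist"],
)
```

Because the device context travels with every message, the model can answer with platform-specific syntax rather than generic guidance.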
Credential Sanitization Pipeline
Before any data leaves your environment, it passes through the sanitization pipeline. NetStacks uses pattern-based detection to identify and redact sensitive information across all major network vendors:
| Category | What Gets Redacted | Example Replacement |
|---|---|---|
| Device Credentials | Passwords, enable secrets, username/password pairs | [REDACTED:password] |
| SNMP Communities | SNMPv1/v2c community strings, SNMPv3 auth and priv keys | [REDACTED:snmp-community] |
| Authentication Keys | TACACS+ keys, RADIUS secrets, API tokens | [REDACTED:auth-key] |
| Routing Protocol Secrets | BGP MD5 passwords, OSPF authentication, EIGRP keys | [REDACTED:routing-secret] |
| VPN & Tunnel Keys | IPSec pre-shared keys, GRE tunnel keys, WireGuard keys | [REDACTED:vpn-key] |
| Certificates & Private Keys | RSA/DSA/ECDSA private keys, certificate data | [REDACTED:private-key] |
Sanitization is configurable per organization. Mandatory patterns (credentials, private keys) are always enabled. Optional patterns (IP addresses, MAC addresses, hostnames, usernames) can be toggled on or off. Organizations can also define custom regex patterns and maintain an allowlist of strings that should never be redacted. Every redaction event is logged to an audit table for compliance.
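The redaction mechanics described above can be sketched as regex substitution with an allowlist check. The patterns below are a small illustrative subset of Cisco-style matches, not the shipped pattern set; the `sanitize` function and its signature are assumptions for this sketch.

```python
import re

# Illustrative patterns: group 1 keeps the config keywords,
# group 2 is the secret to be redacted.
PATTERNS = [
    (re.compile(r"(enable secret \d+ )(\S+)"), "[REDACTED:password]"),
    (re.compile(r"(username \S+ privilege \d+ secret \d+ )(\S+)"),
     "[REDACTED:password]"),
    (re.compile(r"(snmp-server community )(\S+)"),
     "[REDACTED:snmp-community]"),
    (re.compile(r"(neighbor \S+ password \d+ )(\S+)"),
     "[REDACTED:routing-secret]"),
]

def sanitize(text, extra_patterns=(), allowlist=frozenset()):
    """Redact matched secrets, skipping any allowlisted value."""
    for pattern, placeholder in [*PATTERNS, *extra_patterns]:
        def replace(m, placeholder=placeholder):
            if m.group(2) in allowlist:   # never redact allowlisted strings
                return m.group(0)
            return m.group(1) + placeholder
        text = pattern.sub(replace, text)
    return text

print(sanitize("enable secret 0 Sup3rS3cret#Pass"))
# enable secret 0 [REDACTED:password]
```

Custom organization patterns slot in as additional `(regex, placeholder)` pairs, and the allowlist short-circuits redaction for known-safe strings.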
Conversation Storage
Conversation history is stored in PostgreSQL with full threading support. Each conversation tracks the user, associated device session, message sequence, and timestamps. History persists across sessions so you can return to a previous troubleshooting thread.
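The threading model described above can be approximated with two tables: one per conversation and one per message, ordered by sequence number. This sketch uses SQLite for a self-contained demonstration; the real system uses PostgreSQL, and all table and column names here are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE conversations (
    id         INTEGER PRIMARY KEY,
    user_id    TEXT NOT NULL,
    session_id TEXT,                         -- associated device session
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE messages (
    id              INTEGER PRIMARY KEY,
    conversation_id INTEGER REFERENCES conversations(id),
    seq             INTEGER NOT NULL,        -- order within the thread
    role            TEXT CHECK (role IN ('user', 'assistant')),
    content         TEXT NOT NULL,
    created_at      TEXT DEFAULT CURRENT_TIMESTAMP
);
""")
conn.execute("INSERT INTO conversations (user_id, session_id) VALUES (?, ?)",
             ("alice", "sess-42"))
conn.execute("INSERT INTO messages (conversation_id, seq, role, content) "
             "VALUES (1, 1, 'user', 'Why is BGP neighbor Active?')")
rows = conn.execute("SELECT seq, role, content FROM messages "
                    "WHERE conversation_id = 1 ORDER BY seq").fetchall()
```

Ordering by `seq` rather than timestamp keeps threads stable even when messages are written in the same clock tick.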
For maximum data privacy, configure Ollama as your LLM provider. When using a locally hosted model, your data never leaves your network. See LLM Configuration for setup instructions.
Using AI Chat
Follow these steps to get started with AI Chat for network operations assistance.
- Open the AI Chat panel — Click the floating chat button in the bottom-right corner of the NetStacks interface, or press `Cmd/Ctrl+Shift+A` to toggle the panel.
- Type a natural language question — Ask about device configuration, troubleshooting steps, protocol behavior, or best practices. For example: “How do I configure OSPF area 0 on this Cisco router?”
- Review the AI response — The AI responds with context-aware guidance based on the device type, OS version, and your current session state. Commands are formatted as clickable snippets.
- Insert commands into your terminal — Click any suggested command to insert it directly into your active terminal session. The command is placed at your cursor position without executing, so you can review before pressing Enter.
- Continue the conversation — Ask follow-up questions. The AI maintains conversation context, so you can refine your approach without repeating background information.
- Use the copilot panel — For extended troubleshooting, open the copilot panel (`Cmd/Ctrl+Shift+C`) to get a side-by-side assistant that continuously analyzes your terminal output and suggests next steps as you work.
Select terminal output before asking a question to give AI Chat specific context for analysis. For example, select the output of `show ip bgp summary` and then ask “Why is this BGP neighbor in Active state?”
Code Examples
Chat Interaction: OSPF Configuration
When connected to a Cisco IOS-XE router, asking AI Chat for OSPF configuration produces device-specific commands:
```text
User: How do I configure OSPF area 0 on this router with the loopback
and the two uplinks?

AI: Based on your Cisco IOS-XE router (hostname: core-rtr-01), here's
the OSPF configuration for area 0:

router ospf 1
 router-id 10.255.0.1
 network 10.255.0.1 0.0.0.0 area 0
 network 10.0.12.0 0.0.0.3 area 0
 network 10.0.13.0 0.0.0.3 area 0
 passive-interface default
 no passive-interface GigabitEthernet0/0/0
 no passive-interface GigabitEthernet0/0/1

This configures:
- Loopback0 (10.255.0.1/32) as the router-id and in area 0
- GigabitEthernet0/0/0 (10.0.12.0/30) uplink to dist-rtr-02
- GigabitEthernet0/0/1 (10.0.13.0/30) uplink to dist-rtr-03
- Passive-interface on all other interfaces for security
```

Credential Sanitization in Action
When you share terminal output containing credentials, the sanitization pipeline automatically redacts sensitive values before sending to the LLM provider:
```text
! What you see in your terminal (original output)
username admin privilege 15 secret 0 N3tSt4cks!2026
enable secret 0 Sup3rS3cret#Pass
snmp-server community NetOps-RO RO
snmp-server community NetOps-RW RW
router bgp 65001
 neighbor 10.0.1.2 password 7 08354942071C11
ip route 0.0.0.0 0.0.0.0 192.168.1.1
crypto isakmp key VPN-SharedKey-2026! address 203.0.113.50
```

```text
! What the LLM provider receives (sanitized)
username admin privilege 15 secret 0 [REDACTED:password]
enable secret 0 [REDACTED:password]
snmp-server community [REDACTED:snmp-community] RO
snmp-server community [REDACTED:snmp-community] RW
router bgp 65001
 neighbor 10.0.1.2 password 7 [REDACTED:routing-secret]
ip route 0.0.0.0 0.0.0.0 192.168.1.1
crypto isakmp key [REDACTED:vpn-key] address 203.0.113.50
```

Copilot Panel: BGP Troubleshooting
The copilot panel provides continuous assistance during a troubleshooting session. Here is an example of copilot output during a BGP neighbor investigation:
```text
[Copilot] Analyzing: show ip bgp summary

Observation: 4 BGP neighbors configured in AS 65001
- 10.0.1.2 (AS 65002) - Established, 450 prefixes received
- 10.0.1.6 (AS 65003) - Established, 312 prefixes received
- 10.0.2.2 (AS 65004) - Active (NOT established)
- 10.0.2.6 (AS 65005) - Established, 89 prefixes received

Issue Detected: Neighbor 10.0.2.2 is in Active state.
Possible causes: TCP connectivity failure, AS number mismatch,
authentication mismatch, or neighbor not configured on remote side.

Suggested next steps:
1. show ip bgp neighbor 10.0.2.2
2. ping 10.0.2.2 source Loopback0
3. show logging | include BGP|10.0.2.2
```

Custom Sanitization Pattern
Organizations can define custom redaction patterns for proprietary credential formats:
```json
{
  "name": "internal_api_token",
  "pattern": "NSTK-[A-Za-z0-9]{32}",
  "replacement": "[REDACTED:internal-token]",
  "enabled": true
}
```

Questions & Answers
- How does AI Chat protect my credentials?
- AI Chat runs all outgoing data through a credential sanitization pipeline before sending anything to the LLM provider. The pipeline uses pattern-based detection to identify passwords, SNMP communities, API keys, routing protocol secrets, VPN pre-shared keys, and private keys across all major network vendors (Cisco, Juniper, Arista, Palo Alto, and more). Detected secrets are replaced with safe placeholders like `[REDACTED:password]`. Mandatory patterns are always active. Optional patterns (IP addresses, MAC addresses, hostnames) can be toggled per organization. Every redaction event is logged to an audit table for compliance review.
- Can AI Chat see my terminal session output?
- Yes, but only when you explicitly share it. AI Chat can access your current session context including recent terminal output, the command you are typing, and any text you select. This context is sanitized before being sent to the LLM provider. The copilot panel, when active, continuously analyzes terminal output to provide suggestions. All data passes through the sanitization pipeline regardless of how it reaches the AI.
- Which LLM providers work with AI Chat?
- AI Chat supports OpenAI (GPT-4o, GPT-4), Anthropic (Claude), Ollama (Llama, Mistral, and other locally hosted models), OpenRouter (access to many models via OpenAI-compatible API), and custom OpenAI-compatible endpoints. Providers are configured with priority-based failover so that if your primary provider is unavailable, requests automatically route to the next available provider. See LLM Configuration for setup details.
- How do I access the copilot panel?
- Press `Cmd/Ctrl+Shift+C` to toggle the copilot panel, or click the copilot icon in the toolbar. The copilot panel opens alongside your terminal session and provides continuous, context-aware assistance as you work. It is particularly useful during multi-step troubleshooting workflows where you need ongoing analysis of command output.
- Is my chat history stored? Where?
- Yes. Conversation history is stored in PostgreSQL with full threading support. In Standalone mode, history is stored locally in the Terminal's embedded database. In Enterprise mode, history is stored on the Controller. Conversations persist across sessions so you can return to a previous troubleshooting thread. You can delete individual conversations or clear all history from the chat settings.
- Can AI Chat execute commands on my behalf?
- No. AI Chat never executes commands directly on devices. It suggests commands as clickable snippets that you insert into your terminal session. The command is placed at your cursor position without executing, so you always have the opportunity to review and modify before pressing Enter. For automated command execution, see NOC Agents, which use a tool approval system for controlled automation.
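The priority-based provider failover mentioned in the answers above can be sketched as a loop over providers in priority order, falling through on failure. Provider names, the `ProviderUnavailable` exception, and the callable interface are all assumptions made for this sketch, not the NetStacks API.

```python
class ProviderUnavailable(Exception):
    """Raised when a provider cannot serve the request (illustrative)."""

def chat_with_failover(prompt, providers):
    """Try each (name, callable) pair in priority order; return the
    first successful reply, or raise if every provider fails."""
    errors = {}
    for name, call in providers:
        try:
            return call(prompt)
        except ProviderUnavailable as exc:
            errors[name] = str(exc)   # record failure, try next provider
    raise RuntimeError(f"All providers failed: {errors}")

def flaky_cloud(prompt):
    raise ProviderUnavailable("rate limited")

def local_model(prompt):
    return f"ollama reply to: {prompt}"

reply = chat_with_failover(
    "hello", [("openai", flaky_cloud), ("ollama", local_model)]
)
# reply == "ollama reply to: hello"
```

Keeping the error map means a total outage surfaces every provider's failure reason at once instead of only the last one.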
Troubleshooting
| Issue | Possible Cause | Solution |
|---|---|---|
| AI Chat not responding | LLM provider not configured or API key invalid | Check your LLM provider configuration at LLM Configuration. Verify the API key is valid and the provider is enabled. Use the “Test Connection” button to confirm connectivity. |
| Slow responses | Large model, high token usage, or network latency | Consider using a faster model for chat interactions (e.g., GPT-4o-mini instead of GPT-4). Check your token budget status. If using Ollama, ensure the model is fully loaded in memory. |
| Incorrect device context | Device type not detected correctly | Verify device type detection by checking the session info panel. If the device type is wrong, manually set it in the session settings. AI Chat uses device type to generate platform-specific commands. |
| Sanitization too aggressive | Optional patterns redacting needed data | Review sanitization settings in Admin > AI Configuration. Toggle off optional patterns (IP addresses, hostnames) if they are removing data you need the AI to see. Add specific strings to the allowlist to prevent them from being redacted. |
| Chat history missing | Session cleared or database issue | Verify the database connection is healthy. In Enterprise mode, check that the Controller is reachable. Chat history is stored in PostgreSQL and requires a valid database connection. |
Related Features
- Command Suggestions — Context-aware command autocomplete that works alongside AI Chat to provide inline suggestions as you type.
- LLM Configuration — Set up AI providers (OpenAI, Anthropic, Ollama), configure API keys, select models, and manage token budgets.
- Knowledge Base — Build a searchable repository of organizational knowledge that AI Chat uses for RAG-powered responses.
- NOC Agents — Autonomous AI agents that can monitor, triage, and respond to network events with tool approval workflows.
- Terminal Overview — The terminal environment where AI Chat and copilot panel operate alongside your SSH sessions.