Approvals & Permissions
Human-in-the-loop control for sensitive agent actions
OpsSquad.ai includes a permission system that keeps you in control of what agents can do on your infrastructure through their linked nodes.
How Permissions Work
When an agent proposes a potentially sensitive action that passes the initial SLM AI Guardrails, the system pauses execution and requests your permission before proceeding.
┌─────────────────────────────────────────────────────────────┐
│ Agent wants to execute: │
│ │
│ sudo systemctl restart nginx │
│ │
│ This will restart the web server. │
│ │
│ [Grant Permission] [Deny] │
└─────────────────────────────────────────────────────────────┘
Permission Flow
1. Agent Proposes Action - You ask the agent to do something that requires a sensitive command.
2. SLM AI Analysis - The specialized Small Language Model (SLM) analyzes the purpose and intent of the command.
   - Blocked: If the intent is malicious or dangerous, the command is blocked immediately.
   - Safe: If the intent is valid but sensitive, it proceeds to approval.
3. Review Request - You see exactly what command or action the agent wants to perform.
4. Make Decision - You can Grant Permission or Deny the action.
5. Execution (if granted) - The agent proceeds with the action and reports the results.
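The flow above can be sketched as a small routing function. This is a minimal illustration, not the OpsSquad.ai API; the function and verdict names are hypothetical:

```python
# Hypothetical sketch of the permission flow; all names are illustrative,
# not the actual OpsSquad.ai implementation.
from enum import Enum

class Verdict(Enum):
    BLOCKED = "blocked"                 # SLM judged the intent malicious
    DENIED = "denied"                   # user rejected the request
    EXECUTED = "executed"               # user approved; command ran

def handle_proposed_action(command, slm_analyze, ask_user, execute):
    """Route a proposed command through guardrails and human approval."""
    intent = slm_analyze(command)       # step 2: SLM AI analysis
    if intent == "malicious":
        return Verdict.BLOCKED          # blocked immediately, no prompt
    if not ask_user(command):           # steps 3-4: review and decide
        return Verdict.DENIED
    execute(command)                    # step 5: run and report results
    return Verdict.EXECUTED
```

The key property to notice is the ordering: the guardrail check happens before the user is ever prompted, so malicious intent never reaches the approval dialog.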
Pending Approvals
Finding Pending Approvals
When there's a pending approval:
- A notification badge appears in the chat
- The conversation shows the pending request
- The agent mentions it's waiting for approval
Approval Details
Each approval request shows:
- Command: The exact command to be executed
- Context: Why the agent wants to run this
- Impact: Potential effects of the action
- Risk Level: Assessed by the SLM (Low, Medium, or High)
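An approval request can be thought of as a small structured record holding those four fields. A hypothetical shape, with illustrative field names rather than OpsSquad.ai's actual schema:

```python
# Hypothetical approval-request record; field names and values are
# illustrative only, not the real OpsSquad.ai data model.
approval_request = {
    "command": "sudo systemctl restart nginx",                    # exact command
    "context": "Web server is unresponsive; restart to recover",  # why
    "impact": "Brief downtime while nginx restarts",              # effects
    "risk_level": "medium",                                       # SLM assessment
}
```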
Making Decisions
Grant Permission
Click "Grant Permission" (or "Approve") when:
- You understand the command
- You trust it's safe to execute
- You want the action to proceed
The agent will execute the command and continue.
Deny
Click "Deny" when:
- The command isn't what you intended
- You want to take a different approach
- You need more information first
The agent will acknowledge the denial and ask how to proceed.
Risk Levels
Commands are classified by risk to help you make decisions, but the SLM AI Guardrails handle the heavy lifting of blocking actual threats.
Low Risk
Read-only commands that don't modify state (e.g., ps, df, cat logs). Often auto-approved depending on configuration.
Medium Risk
Commands that could affect services or modify non-critical files (e.g., systemctl restart, touch). Always require permission.
High Risk
Destructive or critical commands. Always require permission and may prompt for double-confirmation.
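The three tiers above reduce to two simple rules: only low-risk commands can skip approval, and only high-risk commands trigger a second confirmation. A sketch of that gating logic (illustrative only; the actual assessment is performed by the SLM, and auto-approval of low-risk commands depends on your configuration):

```python
# Illustrative risk gating; the real classification is done by the SLM.
def needs_human_approval(risk_level: str, auto_approve_low: bool = True) -> bool:
    if risk_level == "low":
        return not auto_approve_low   # low risk: often auto-approved
    return True                       # medium/high: always require permission

def needs_double_confirmation(risk_level: str) -> bool:
    return risk_level == "high"       # destructive commands may re-prompt
```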
Prohibited Actions
The SLM AI Guardrails automatically block:
- Malicious network scanning (nmap, etc.)
- Privilege escalation attempts
- Shell escapes
- Known attack patterns
Audit Trail
All permission decisions are logged:
- Granted commands with timestamps
- Denied requests
- Who made each decision
Access the audit trail in Settings > Security > Audit Log.
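Each audit entry ties a decision to a command, a person, and a time. A hypothetical entry might look like the following (the real log format is whatever OpsSquad.ai stores; this is only a sketch):

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-log entry; fields are illustrative, not the real schema.
entry = {
    "timestamp": datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc).isoformat(),
    "command": "sudo systemctl restart nginx",
    "decision": "granted",              # granted | denied | blocked
    "decided_by": "admin@example.com",  # who made the decision
}
print(json.dumps(entry, indent=2))
```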