An agent operations runtime · Built 2026

Agents that can act on production. Without acting on their own.

AOSIQ is a capability-authorization runtime for AI agents — with built-in human approval gates, tamper-evident audit, and per-class data-egress envelopes. The same governance primitives extend to deterministic actors: scheduled jobs, monitoring scripts, and automation pipelines that need the same scrutiny but don't need to reason.

License
MIT, open source
Substrate
PostgreSQL only
Form factor
Library or service
Backends
Five supported

AI agents now do real work. Most frameworks can't tell you what they did.

Industry data from 2026 documents an authorization gap that has widened as agent deployment has outpaced governance maturity. The numbers below come from a survey of 919 enterprise executives and practitioners. They focus on AI agents — but the gap they describe is part of a longer pattern: enterprise automation governance has been an unsolved problem for years, and AI agents are the version where it's finally undeniable.

45.6%
of teams still use shared API keys for agent-to-agent authentication. Only 17.8% use mTLS.
Gravitee 2026
21.9%
of teams treat AI agents as independent identity-bearing entities. 27.2% have reverted to custom hardcoded authorization logic.
Gravitee 2026
88%
of organizations reported confirmed or suspected AI agent security incidents in the last twelve months. In healthcare the rate reaches 92.7%.
Gravitee 2026
The gap between what AI agents can do and what they should do is an authorization problem, not an alignment problem. Autonomous agents today lack a standard mechanism to enforce per-action authorization before execution.
— arXiv 2603.20953, March 2026

Four primitives. Composed correctly.

AOSIQ enforces governance properties as runtime invariants — not best practices, not configuration, not policy documents. Every operation crosses a single syscall boundary where capability, audit, approval, and cost checks fire before the action ever reaches the underlying system.
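A minimal sketch of that boundary, with all names invented for illustration (this is not the real AOSIQ API, and the approval gate described in section 02 is elided here): every call passes capability and cost checks and lands in the audit log before the tool ever runs.

```python
# Hypothetical single policy boundary; names do not reflect the real AOSIQ API.
class CapabilityError(Exception): pass
class BudgetExceeded(Exception): pass

class SyscallBoundary:
    def __init__(self, granted_tools, budget_usd, tools):
        self.granted = set(granted_tools)   # contents of the capability token
        self.remaining = budget_usd         # session cost ceiling
        self.audit = []                     # stand-in for the audit chain
        self.tools = tools                  # name -> callable registry

    def call(self, tool, cost_usd=0.0, **args):
        if tool not in self.granted:              # capability verification
            self.audit.append(("denied", tool))
            raise CapabilityError(tool)
        if cost_usd > self.remaining:             # cost check, before the call
            raise BudgetExceeded(tool)
        self.audit.append(("call", tool, args))   # audit append before dispatch
        self.remaining -= cost_usd
        return self.tools[tool](**args)           # tool dispatch
```

The point of the shape, not the code: there is exactly one `call` path, so a check added there is a check added everywhere.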

01 — Capability narrowing

An actor without permission for a tool cannot call it.

Every actor — reasoning agent, scheduled job, automation script — carries a signed JWT capability token listing exactly which tools it may invoke. Child actors receive the intersection of the parent's grants and the child's request — never more. Verification fires before every tool call. There is no ambient authority.
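The narrowing rule itself is one set operation. A sketch, with the signed-JWT wrapping elided and the function name invented for illustration:

```python
# Illustrative capability narrowing; the real token format (signed JWT)
# and verification step are elided.
def narrow(parent_grants: set, child_request: set) -> set:
    """A child actor receives the intersection of the parent's grants
    and its own request, never more."""
    return parent_grants & child_request

parent = {"read_logs", "query_db", "restart_service"}
child = narrow(parent, {"read_logs", "delete_volume"})
# child == {"read_logs"}: "delete_volume" was never in the parent's grants,
# so the child cannot escalate simply by asking for it.
```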

02 — Approval at the action boundary

Destructive actions cannot fire without an operator approving the exact tool and arguments.

Tools registered as reversible=False trigger a checkpoint and a human review row. Approvals are bound to a specific (tool, args_hash) pair and are single-use. Repeat calls require fresh approvals. The blast radius is visible at the moment of decision.
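The binding and single-use properties can be sketched in a few lines. All names here are hypothetical, and the human review row is reduced to an exception; the point is that an approval matches one exact (tool, args_hash) pair and is consumed on use:

```python
# Hypothetical sketch of single-use approvals bound to (tool, args_hash).
import hashlib
import json

class ApprovalRequired(Exception): pass

def args_hash(args: dict) -> str:
    # Canonical JSON so identical arguments always hash identically.
    return hashlib.sha256(json.dumps(args, sort_keys=True).encode()).hexdigest()

class ApprovalGate:
    def __init__(self):
        self.approved = set()   # single-use (tool, args_hash) pairs

    def approve(self, tool, args):
        self.approved.add((tool, args_hash(args)))

    def check(self, tool, args):
        key = (tool, args_hash(args))
        if key not in self.approved:
            raise ApprovalRequired(tool)   # would create the human review row
        self.approved.discard(key)         # consumed: repeats need fresh approval
```

Change one argument and the hash changes, so an approval for `{"table": "tmp"}` authorizes nothing else.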

03 — Tamper-evident audit chain

Every action lives in a SHA-256 hash chain partitioned per session.

Spawn, tool call, memory write, state transition, capability denial, approval — each becomes one row, each cryptographically chained to the last. Anchor objects in independently-credentialed object storage make mid-chain tampering detectable even when the attacker holds PostgreSQL write access.
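The chain mechanics fit in a short sketch (anchor objects are out of scope here, and the row shape is invented for illustration): each row's hash covers the previous row's hash, so editing any row breaks verification from that point on.

```python
# Minimal tamper-evident hash chain; row shape is illustrative.
import hashlib
import json

GENESIS = "0" * 64

def chain_append(rows, event):
    prev = rows[-1]["hash"] if rows else GENESIS
    payload = json.dumps(event, sort_keys=True)
    h = hashlib.sha256((prev + payload).encode()).hexdigest()
    rows.append({"prev": prev, "event": event, "hash": h})

def chain_verify(rows) -> bool:
    prev = GENESIS
    for row in rows:
        payload = json.dumps(row["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if row["prev"] != prev or row["hash"] != expected:
            return False
        prev = row["hash"]
    return True

log = []
chain_append(log, {"type": "spawn", "actor": "job-17"})
chain_append(log, {"type": "tool_call", "tool": "read_logs"})
assert chain_verify(log)
log[0]["event"]["actor"] = "job-99"   # mid-chain edit...
assert not chain_verify(log)          # ...fails verification immediately
```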

04 — Hard cost ceilings, before the call

A session that would exceed its budget raises an exception. Before the API request.

Every LLM call records model, tokens, and computed USD cost. Deterministic actors record their compute consumption in the same ledger. Configurable session ceilings fire as exceptions, not invoices. Costs roll up by actor class, by session, by day. Per-class data-egress envelopes let an actor class be forbidden from routing to specific providers regardless of operator configuration.
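A sketch of the ceiling check, with ledger shape, class names, and pricing numbers all made up for illustration. The property being shown: the exception fires on the projected total, before any spend happens.

```python
# Illustrative cost ledger with a hard session ceiling.
class BudgetExceeded(Exception): pass

class CostLedger:
    def __init__(self, ceiling_usd: float):
        self.ceiling = ceiling_usd
        self.entries = []   # (actor_class, model, tokens, usd)

    def spent(self) -> float:
        return sum(e[3] for e in self.entries)

    def charge(self, actor_class, model, tokens, usd_per_1k):
        cost = tokens / 1000 * usd_per_1k
        if self.spent() + cost > self.ceiling:
            # Raised before the API request is made, not after the invoice.
            raise BudgetExceeded(f"{self.spent() + cost:.4f} > {self.ceiling}")
        self.entries.append((actor_class, model, tokens, cost))
        return cost

ledger = CostLedger(ceiling_usd=0.50)
ledger.charge("triage_agent", "some-model", 20_000, usd_per_1k=0.01)  # $0.20
```

Rollups by actor class, session, or day are then aggregations over the same entries.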

A Unix kernel, for actors.

The runtime borrows operating-system shape deliberately: actors are processes, the syscall interface is the kernel API, capability tokens are the security context, the audit log is the system log. A single enforcement boundary funnels every operation through the same checks — regardless of whether the actor is a reasoning agent or a deterministic script.

┌────────────────────────────────────────┐
│ HTTP layer  (FastAPI · MCP · dashboard)│
└──────────────────┬─────────────────────┘
                   │
┌──────────────────▼─────────────────────┐
│ AOSSyscall  ── single policy boundary  │
│  • capability verification             │
│  • approval gate (irreversible)        │
│  • audit chain append                  │
│  • cost ledger record                  │
│  • memory operations                   │
│  • tool dispatch                       │
└──┬─────────────────┬─────────────────┬─┘
   │                 │                 │
┌──▼────────┐  ┌─────▼────────┐  ┌─────▼─────┐
│ Kernel    │  │ Memory       │  │ Tool      │
│ scheduler │  │ working      │  │ native    │
│ recovery  │  │ semantic     │  │ MCP       │
│ ACB       │  │ episodic     │  │ KB        │
│ registry  │  │ experiential │  │ sandbox   │
└───────────┘  └──────────────┘  └───────────┘
A · Single choke point
No path around the syscall.
Actors never hold a database connection. They never call tools directly. They never reach LangGraph internals. Every operation crosses AOSSyscall where governance fires uniformly.
B · Composite checkpoints
Crash recovery that's actually safe.
For reasoning agents: LangGraph thread state, agent control block, and working memory captured atomically. For deterministic actors: execution position and working memory. Either way, worker heartbeats plus orphan reaping mean a crashed worker's actors resume cleanly from the last checkpoint without operator intervention.
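The word doing the work is "atomically": the composite parts land in one transaction, so a crash mid-write leaves the previous checkpoint intact rather than a half-captured one. A sketch under stated assumptions (SQLite stands in for PostgreSQL purely to keep the example self-contained; the schema and field names are invented):

```python
# Sketch of a composite checkpoint written in one transaction.
import json
import sqlite3

def write_checkpoint(db, session_id, thread_state, acb, working_memory):
    # One transaction for all three parts: thread state, agent control
    # block (ACB), and working memory commit together or not at all.
    with db:   # sqlite3 connection as context manager == one transaction
        db.execute(
            "INSERT INTO checkpoints (session_id, thread_state, acb, memory) "
            "VALUES (?, ?, ?, ?)",
            (session_id, json.dumps(thread_state), json.dumps(acb),
             json.dumps(working_memory)),
        )

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE checkpoints (session_id, thread_state, acb, memory)")
write_checkpoint(db, "s-1", {"node": "review"}, {"status": "running"}, {"notes": []})
```

Resume then means: read the latest row for the session, rehydrate all three parts, continue.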
C · Honest threat model
What's covered. What isn't. With references.
The threat model document specifies in-scope and out-of-scope with file-and-migration references. No marketing fog. Security teams reading it can stop the conversation either way — which is exactly what good security teams want.

Operators of systems where wrong actions are expensive.

The customer profile is anyone running actors — AI agents, scheduled jobs, automation scripts, or any combination — that take real actions on real systems where unsupervised destructive operations are unacceptable. Four shapes recur.

Vertical · 01

Critical infrastructure operations

Production systems where state changes need sign-off and audit. Read-only diagnostic agents, scheduled monitoring jobs, automated remediation pipelines — all governed uniformly. IBM i shops, Kubernetes operators, database administrators, network engineers, security incident responders. The pattern: actors triage, propose, and surface evidence; humans authorize every keystroke that mutates state.

Vertical · 02

Regulated industries

Financial services, healthcare, legal, insurance, utilities. The environments where "we used AI" is acceptable only if "here is the cryptographic record of every action it took, who authorized state changes, and how much each session cost" comes with it.

Vertical · 03

Mixed actor fleets

Teams running concurrent reasoning agents alongside deterministic jobs, where coordination, capability boundaries, and cost attribution matter at the per-session level. Same governance primitives for every actor, no separate audit story for AI versus automation. AOSIQ provides the substrate; the application provides the actors.

Vertical · 04

Development tooling

Claude Code and similar IDE-integrated agents that spawn sub-agents for code review, bug investigation, or release readiness — workflows where the parent session shouldn't grant the child everything it has. AOSIQ ships an MCP server for one-config-change integration.
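As an illustration only, that one config change could look like a standard MCP server entry in the client's configuration; the server name and entry-point module below are hypothetical, not the documented AOSIQ values:

```json
{
  "mcpServers": {
    "aosiq": {
      "command": "python",
      "args": ["-m", "aosiq.mcp"]
    }
  }
}
```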

Open Source
Distributed under MIT. Deployable as a library inside your application or as a standalone service. No SaaS dependency, no vendor lock-in — runs in your trust boundary.

Let's talk about your specific case.

The earliest engagements are with a small group of design partners in regulated infrastructure. If your operation runs AI agents against production systems and the governance gap is keeping you up at night — that's the conversation we want to have.

Tell us a little about what you're working on. We respond within two business days, and the first conversation is consultative — we'll tell you honestly whether AOSIQ is the right fit before discussing anything else.

Or write directly

We do not share your information with third parties. Submissions are read by humans, not by an automated funnel.