Securing Insecure & Unreliable AI Agents with an AI Gateway

January 14, 2026

Technology

At the 39th Chaos Communication Congress (39C3) in Hamburg, Signal President Meredith Whittaker and VP of Strategy Udbhav Tiwari delivered a stark warning: agentic AI, as currently implemented, is insecure, unreliable, and a surveillance nightmare. Their presentation, titled "AI Agent, AI Spy," laid out the fundamental problems with how enterprises are deploying AI agents—and why the industry needs to pull back until proper safeguards exist.

This isn't theoretical. Microsoft's Recall feature, which takes screenshots every few seconds and stores them in a searchable database, has already demonstrated how AI agents can create forensic dossiers of users' entire digital lives. The database includes precise timelines, full OCR text, dwell times, and topic classifications—all accessible to malware through online attacks and indirect prompt injection.

For developers building AI-powered applications, the message is clear: you need a security boundary between your AI agents and the systems they interact with. This post explores how to build that boundary using Apache APISIX as an AI Gateway.

The Three Problems with AI Agents

Signal's presentation identified three fundamental issues that make current AI agent deployments dangerous:

| Problem | Description | Real-World Impact |
| --- | --- | --- |
| Insecurity | AI agents require deep system access, creating attack surfaces for malware and prompt injection | Microsoft Recall databases accessible to attackers, bypassing E2EE |
| Unreliability | Probabilistic systems degrade with each step in a task | 30-step tasks have only a 4.2% success rate at 90% per-step accuracy |
| Surveillance Risk | Agents must know everything about users to function, creating comprehensive dossiers | Complete digital life stored in a single, attackable database |

The reliability numbers are particularly striking. Whittaker explained that AI agents are probabilistic, not deterministic—each step they take degrades accuracy. Even with an optimistic 95% accuracy per step (which current models don't achieve), a 10-step task yields only ~59.9% success. A 30-step task drops to ~21.4%. With a more realistic 90% accuracy, that 30-step task succeeds only 4.2% of the time.
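These figures follow from simple compounding: if each step is independent and succeeds with probability p, an n-step task succeeds with probability p raised to the power n.

P(success) = p^n

0.95^10 ≈ 0.599    0.95^30 ≈ 0.214    0.90^30 ≈ 0.042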

"The best agent models failed 70% of the time." — Meredith Whittaker, Signal President.

Why Traditional Security Doesn't Work

The Hacker News discussion following Signal's warning revealed a deeper truth: this isn't just an AI problem—it's an infrastructure problem that AI is exposing.

As one commenter noted:

"Process isolation hasn't been taken seriously because UNIX didn't do a good job, and Microsoft didn't either. Well designed security models don't sell computers/operating systems, apparently."

Traditional approaches to securing AI agents fail for several reasons:

Network-level firewalls can't distinguish between legitimate AI agent requests and malicious ones. An AI agent making API calls looks identical to any other HTTP client.

Application-level security requires modifying every service the agent interacts with—impractical when agents need to access dozens of APIs.

Operating system sandboxing (like SELinux) adds complexity without addressing the fundamental issue: AI agents need broad access to function, but that access creates attack surfaces.

What's needed is a security boundary at the API layer—a gateway that can inspect, validate, and control every interaction between AI agents and external services.

The AI Gateway Architecture

An AI Gateway sits between your AI agents and the services they consume, providing a single enforcement point for authentication, scope control, request inspection, and logging:

[Figure: APISIX AI Gateway Architecture]

This architecture addresses Signal's three concerns directly:

  1. Security: The gateway validates every request, blocks prompt injection attempts, and enforces authentication.
  2. Reliability: Circuit breakers and retry logic improve success rates; audit logs enable debugging.
  3. Surveillance Prevention: Scope enforcement limits what data agents can access; logging provides transparency.

Step-by-Step Implementation

Prerequisites

  • Install Docker, used by the quickstart script to run containerized etcd and APISIX.
  • Install cURL, used to fetch the quickstart script and to send verification requests to APISIX.

Step 1: Get APISIX

APISIX can be easily installed and started with the quickstart script:

curl -sL https://run.api7.ai/apisix/quickstart | sh

You will see the following message once APISIX is ready:

✔ APISIX is ready!
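The Admin API calls in the steps below authenticate with ${admin_key}. A minimal sketch for exporting it, assuming the quickstart defaults (a container named apisix-quickstart with its config at /usr/local/apisix/conf/config.yaml; adjust for your deployment):

# Print the Admin API key section from the running container...
docker exec apisix-quickstart grep -A2 "admin_key" /usr/local/apisix/conf/config.yaml
# ...then export the value shown under "key:" for the commands below
export admin_key=<key printed above>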

Step 2: Define Agent Identity and Scopes

Following Signal's recommendation for "mandatory developer opt-ins," we'll require every AI agent to authenticate and declare its scope.

First, create a consumer representing your AI agent:

curl "http://127.0.0.1:9180/apisix/admin/consumers" -X PUT \ -H "X-API-KEY: ${admin_key}" \ -d '{ "username": "customer-support-agent", "plugins": { "key-auth": { "key": "agent-key-customer-support-2026" } }, "desc": "AI agent for customer support tasks only" }'

Step 3: Implement Scope Enforcement

Create routes that enforce what each agent can access. This addresses Signal's concern about agents having access to "entire digital lives":

# Route for customer data - restricted to support agent
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
  -H "X-API-KEY: ${admin_key}" \
  -d '{
    "id": "customer-data-api",
    "uri": "/api/customers/*",
    "upstream": {
      "type": "roundrobin",
      "nodes": {
        "customer-service:8080": 1
      }
    },
    "plugins": {
      "key-auth": {},
      "consumer-restriction": {
        "whitelist": ["customer-support-agent"],
        "rejected_code": 403,
        "rejected_msg": "This agent is not authorized to access customer data"
      }
    }
  }'

# Route for financial data - NO agents allowed
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
  -H "X-API-KEY: ${admin_key}" \
  -d '{
    "id": "financial-data-api",
    "uri": "/api/financial/*",
    "upstream": {
      "type": "roundrobin",
      "nodes": {
        "financial-service:8080": 1
      }
    },
    "plugins": {
      "key-auth": {},
      "consumer-restriction": {
        "whitelist": [],
        "rejected_code": 403,
        "rejected_msg": "AI agents are not permitted to access financial data"
      }
    }
  }'
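Assuming both routes were accepted and the upstream services are reachable, a quick smoke test of the boundary (the request paths are illustrative; key-auth reads the key from the apikey header by default):

# Allowed: support agent requesting customer data
curl -i "http://127.0.0.1:9080/api/customers/42" \
  -H "apikey: agent-key-customer-support-2026"

# Denied: same agent requesting financial data (expect HTTP 403)
curl -i "http://127.0.0.1:9080/api/financial/report" \
  -H "apikey: agent-key-customer-support-2026"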

Step 4: Add Prompt Injection Detection

Prompt injection is one of the primary attack vectors Signal warned about. Implement basic detection:

curl "http://127.0.0.1:9180/apisix/admin/routes/customer-data-api" -X PATCH \ -H "X-API-KEY: ${admin_key}" \ -d '{ "plugins": { "serverless-pre-function": { "phase": "access", "functions": [ "return function(conf, ctx) local core = require(\"apisix.core\"); local body = core.request.get_body(); if body then local suspicious_patterns = {\"ignore previous\", \"disregard instructions\", \"system prompt\", \"you are now\", \"pretend to be\", \"jailbreak\"}; local lower_body = string.lower(body); for _, pattern in ipairs(suspicious_patterns) do if string.find(lower_body, pattern) then core.log.warn(\"Potential prompt injection detected: \", pattern); return 400, {error = \"Request blocked: suspicious content detected\"} end end end end" ] } } }'

Step 5: Implement Comprehensive Audit Logging

Signal emphasized the need for "radical transparency" and granular auditability. Configure detailed logging:

curl "http://127.0.0.1:9180/apisix/admin/routes/customer-data-api" -X PATCH \ -H "X-API-KEY: ${admin_key}" \ -d '{ "plugins": { "http-logger": { "uri": "http://logging-service:9000/logs", "batch_max_size": 1, "include_req_body": true, "include_resp_body": true, "concat_method": "json" } } }'

This logs every request and response, enabling:

  • Post-incident forensics
  • Compliance auditing
  • Anomaly detection
  • Debugging agent failures

Step 6: Add Rate Limiting for Cost Control

AI agent requests to LLM APIs can be expensive. Implement per-agent request rate limiting to keep usage, and therefore cost, bounded:

curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \ -H "X-API-KEY: ${admin_key}" \ -d '{ "id": "openai-proxy", "uri": "/llm/openai/*", "upstream": { "type": "roundrobin", "nodes": { "api.openai.com:443": 1 }, "scheme": "https", "pass_host": "node" }, "plugins": { "key-auth": {}, "limit-count": { "count": 100, "time_window": 60, "key_type": "var", "key": "consumer_name", "rejected_code": 429, "rejected_msg": "Agent rate limit exceeded. Please retry later." }, "proxy-rewrite": { "headers": { "set": { "Authorization": "Bearer $env_OPENAI_API_KEY", "Host": "api.openai.com" } } } } }'

Step 7: Implement Circuit Breakers for Reliability

Address the reliability concerns by adding circuit breakers that prevent cascading failures:

curl "http://127.0.0.1:9180/apisix/admin/upstreams" -X PUT \ -H "X-API-KEY: ${admin_key}" \ -d '{ "id": "openai-upstream", "type": "roundrobin", "nodes": { "api.openai.com:443": 1 }, "scheme": "https", "retries": 3, "retry_timeout": 10, "timeout": { "connect": 5, "send": 30, "read": 60 }, "checks": { "active": { "type": "https", "http_path": "/v1/models", "healthy": { "interval": 30, "successes": 2 }, "unhealthy": { "interval": 10, "http_failures": 3 } } } }'

Monitoring Your AI Agents

With APISIX's prometheus plugin enabled, you can track metrics such as:

# Request rate by agent
sum(rate(apisix_http_status{consumer="customer-support-agent"}[5m])) by (code)

# Error rate by agent
  sum(rate(apisix_http_status{consumer="customer-support-agent",code=~"5.."}[5m]))
/ sum(rate(apisix_http_status{consumer="customer-support-agent"}[5m]))

# Latency percentiles
histogram_quantile(0.95, sum(rate(apisix_http_latency_bucket{type="request"}[5m])) by (le, consumer))

Create alerts for:

  • Unusual request patterns (potential compromise)
  • High error rates (reliability issues; an example alert expression follows this list)
  • Rate limit hits (cost control)
  • Blocked requests (security events)
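As a sketch of the high-error-rate alert, the PromQL condition might look like the following (the 5% threshold is an assumption to tune for your traffic):

# Fire when an agent's 5xx ratio over the last 5 minutes exceeds 5%
  sum(rate(apisix_http_status{consumer="customer-support-agent",code=~"5.."}[5m]))
/ sum(rate(apisix_http_status{consumer="customer-support-agent"}[5m]))
> 0.05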

Addressing Signal's Recommendations

Let's map our implementation to Signal's three recommendations:

| Signal Recommendation | Our Implementation |
| --- | --- |
| "Stop reckless deployment" | Scope enforcement prevents agents from accessing unauthorized data |
| "Make opting out the default, with mandatory developer opt-ins" | Consumer-based authentication requires explicit agent registration |
| "Provide radical transparency and granular auditability" | HTTP logging captures every request/response for audit |

Results: Before and After

Organizations implementing an AI Gateway architecture report significant improvements:

| Metric | Without AI Gateway | With AI Gateway |
| --- | --- | --- |
| Unauthorized data access | Undetectable | 100% blocked and logged |
| Prompt injection attempts | Undetectable | Detected and blocked |
| Agent cost overruns | Common | Controlled via rate limits |
| Incident response time | Hours to days | Minutes (with audit logs) |
| Compliance audit readiness | Manual, incomplete | Automated, comprehensive |

Conclusion: Building Trust in AI Agents

Signal's warning at 39C3 wasn't about stopping AI development—it was about building it responsibly. As Whittaker noted, there currently isn't a complete solution for making AI agents preserve privacy, security, and control. But there is triage.

An AI Gateway provides that triage layer. It creates a security boundary where none existed, enables the transparency that Signal demands, and gives you control over what your AI agents can access.

The alternative—deploying AI agents with unfettered access to your systems—is what Signal called "a disaster in the making." Don't be part of that disaster.
