
Most AI agent teams filter user inputs for prompt injection. Attackers are injecting through tool call results — database records, web pages, emails your agent reads.
Logan Kelly

AWS Security Agent went GA on March 31, 2026. It runs autonomous penetration tests at $50/task-hour with no built-in human approval gate before high-risk actions. Here's what that means for governance.
Logan Kelly

Governing each agent individually isn't enough when agents delegate to each other. The coordination layer — context handoffs, policy inheritance, trust boundaries — is where multi-agent incidents originate.
Logan Kelly

ForcedLeak exposed sensitive CRM data using nothing more than a $5 domain purchase and a public web form. Here's the governance gap that made it possible — and what would have stopped it.
Logan Kelly

Most teams detect PII after it enters the agent context window. Prevention blocks it before it reaches the LLM. Here's why you need both layers — and what most teams are missing.
Logan Kelly

CIS and OWASP both ranked prompt injection as the top AI security risk. Here's why the threat is worse than most teams think — and why it comes from trusted documents, not user inputs.
Logan Kelly

