<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Red-Team on The Agent Layer</title>
    <link>https://theagentlayer.net/tags/red-team/</link>
    <description>Recent content in Red-Team on The Agent Layer</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Wed, 11 Mar 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://theagentlayer.net/tags/red-team/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>20 Minutes With an AI Agent Changed How I Think About Recon</title>
      <link>https://theagentlayer.net/posts/twenty-minutes-red-team/</link>
      <pubDate>Wed, 11 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://theagentlayer.net/posts/twenty-minutes-red-team/</guid>
      <description>&lt;div style=&#34;border-left: 3px solid #6b7280; background: rgba(255,255,255,0.04); padding: 12px 18px; margin: 1.5rem 0; border-radius: 0 4px 4px 0;&#34;&gt;
&lt;strong&gt;TL;DR:&lt;/strong&gt; I spent 20 minutes running an AI-assisted red-team simulation with Claude Cowork against a live production SaaS. No purpose-built tooling. No prior knowledge of the target. I walked away with confirmed PII exposure — real names, email addresses, account identifiers — on a live system. &lt;strong&gt;The same capabilities that make AI agents useful for legitimate work make them highly capable recon tools. The gap between &#34;helpful assistant&#34; and &#34;passive attacker&#34; is smaller than most people think.&lt;/strong&gt;
&lt;/div&gt;
&lt;p&gt;I&amp;rsquo;m a QA analyst and secrets-management practitioner — not a red teamer by trade. I don&amp;rsquo;t have a toolkit of custom scripts for offensive security work. What I do have is a browser, Claude Cowork, and enough security fundamentals to know what I&amp;rsquo;m looking at.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
