Internet of Agents
One of the most loyal readers of my blog is the user agent Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; GPTBot/1.2; +https://openai.com/gptbot. That is, of course, OpenAI's GPTBot. It seems to scrape my blog dozens of times a month, relentlessly searching for new tokens to consume. Robots.txt be damned.
101 Switching Protocols: Agents become first-class internet actors
The public internet has increasingly become a place where humans coexist with AI agents. Meta's recent experiment with influencer bots signals that the big social platforms are openly embracing AI-generated content (AIGC).
There's no putting the genie back in the bottle. We'll produce content for AI agents and consume content created by AI agents. The economics are too compelling: AIGC has near-zero creation cost coupled with high creation and distribution velocity. The cost advantage tracks the steady decline in inference costs, and the velocity comes from the fact that agents, unlike humans, never need to rest.
Beyond AIGC, we'll also have agents completing arbitrary tasks on our behalf: booking flights, hotels, and restaurant reservations — anything that feels tedious. Agents will monetize our lack of time, just as social media monetizes our attention.
200 OK: The path forward
Our web infrastructure was built for humans clicking links and occasionally running scripts. A lot of today’s assumptions break when we introduce agents.
1) Mediation and curation
Our default posture on the web will shift from trusting to suspicious. We'll need some way to mediate and curate our experience on the internet.
Maybe this looks like a universal curation layer for content platforms, beyond tuning the algo. Perhaps it's a Chrome extension that sits on top of your entire browsing experience and acts as a more fine-grained curator: exclude AIGC in tweets, only include AIGC that is factual, block YouTube videos with thumbnails that suggest it's AIGC. Perhaps it's a new web browser, similar to Brave, purpose-built for curation.
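To make that concrete, here's a rough sketch of the kind of rule-based filter such an extension might apply. The Post shape and the provenance labels are assumptions on my part; real platforms would have to expose these signals (or a local model would have to infer them).

```typescript
// Minimal sketch of a rule-based curation filter an extension might apply.
// The Post shape and the aiGenerated / factChecked labels are hypothetical;
// in practice they might come from platform metadata or a provenance standard like C2PA.

interface Post {
  platform: "twitter" | "youtube";
  aiGenerated: boolean;       // provenance label, if the platform exposes one
  factChecked: boolean;       // whether the claims have been verified
  thumbnailLooksAi?: boolean; // e.g. output of a local image classifier
}

interface CurationRule {
  description: string;
  shouldHide: (post: Post) => boolean;
}

const rules: CurationRule[] = [
  { description: "Exclude AIGC in tweets",
    shouldHide: (p) => p.platform === "twitter" && p.aiGenerated },
  { description: "Only include AIGC that is fact-checked",
    shouldHide: (p) => p.aiGenerated && !p.factChecked },
  { description: "Block YouTube videos with AI-looking thumbnails",
    shouldHide: (p) => p.platform === "youtube" && p.thumbnailLooksAi === true },
];

// The extension's content script would call this for each rendered post.
function shouldHidePost(post: Post): boolean {
  return rules.some((rule) => rule.shouldHide(post));
}

// Example: an AI-generated, unverified tweet gets hidden.
console.log(shouldHidePost({ platform: "twitter", aiGenerated: true, factChecked: false })); // true
```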
For business apps, maybe it looks like every app implementing its own slop detection: blocking auto-generated inbound, detecting deepfakes on a Zoom call.
2) Agent Search Optimization (ASO)
Just as SEO emerged to help websites rank in Google searches, ASO will become the visibility layer in an agent-first internet.
Brands will need new ways to surface their content to AI agents. Traditional SEO techniques like keyword research and backlink analysis won't work when your audience is GPT or Claude. Instead, new ASO tools will (i) score your content based on its structured data completeness, (ii) identify gaps in machine-readable metadata, (iii) test how different agents interpret your content, and (iv) track your content's appearance in agent-generated responses.
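To make points (i) and (ii) concrete, here's a toy scorer that checks a page for machine-readable metadata. The specific checks and weights are made up for illustration, not an established ASO standard.

```typescript
// Toy ASO scorer: points (i) and (ii) from above.
// The checks and weights are illustrative assumptions.

interface AsoCheck {
  name: string;
  weight: number;
  passes: (html: string) => boolean;
}

const checks: AsoCheck[] = [
  { name: "JSON-LD structured data", weight: 40,
    passes: (html) => /<script[^>]+type="application\/ld\+json"/i.test(html) },
  { name: "Meta description", weight: 20,
    passes: (html) => /<meta[^>]+name="description"/i.test(html) },
  { name: "Open Graph tags", weight: 20,
    passes: (html) => /<meta[^>]+property="og:/i.test(html) },
  { name: "Semantic headings", weight: 20,
    passes: (html) => /<h1[\s>]/i.test(html) },
];

function scoreContent(html: string): { score: number; gaps: string[] } {
  let score = 0;
  const gaps: string[] = [];
  for (const check of checks) {
    if (check.passes(html)) score += check.weight;
    else gaps.push(check.name); // point (ii): missing machine-readable metadata
  }
  return { score, gaps };
}

// Example: a bare page with only an <h1> scores 20/100 and reports three gaps.
console.log(scoreContent("<html><body><h1>Pricing</h1></body></html>"));
```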
3) Agent auth
Our current auth system is extremely hostile to an internet of agents. It assumes human behavior for every auth step. SMS verification assumes access to a phone. Email magic links assume someone can check an inbox. CAPTCHAs are explicitly designed to block automation.
One hack today is to let agents impersonate humans: the agent uses your credentials to execute tasks on your behalf. However, we need auth that lets agents act with their own identity. One well-scoped solution is some sort of Agent Delegation Protocol: a protocol that lets humans explicitly delegate authority to agents, similar to OAuth but designed for AI delegation. You'd grant an agent specific permissions: "Book flights under $500" or "Respond to emails marked low-priority." The protocol would generate credentials the agent can use to prove its delegation.
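Here's a rough sketch of what such a delegation grant might look like on the wire. The field names and the HMAC signing are my own assumptions, not a spec; a real protocol would likely use asymmetric keys and an existing token format rather than this ad hoc one.

```typescript
// Minimal sketch of a delegation grant, assuming a shared secret between the
// human's client and the verifying service. Field names are hypothetical.

import { createHmac } from "node:crypto";

interface DelegationGrant {
  principal: string;  // the human delegating authority
  agent: string;      // the agent's own identity, not a borrowed credential
  scopes: string[];   // e.g. "flights:book:max_usd=500"
  expiresAt: string;  // ISO timestamp; grants should be short-lived
}

// The human's client signs the grant; relying services verify it the same way.
function signGrant(grant: DelegationGrant, secret: string): string {
  const payload = Buffer.from(JSON.stringify(grant)).toString("base64url");
  const signature = createHmac("sha256", secret).update(payload).digest("base64url");
  return `${payload}.${signature}`;
}

function verifyGrant(token: string, secret: string): DelegationGrant | null {
  const [payload, signature] = token.split(".");
  if (!payload || !signature) return null;
  const expected = createHmac("sha256", secret).update(payload).digest("base64url");
  if (signature !== expected) return null;
  const grant: DelegationGrant = JSON.parse(Buffer.from(payload, "base64url").toString());
  return new Date(grant.expiresAt) > new Date() ? grant : null;
}

// Example: "book flights under $500" delegated to a travel agent for 24 hours.
const token = signGrant({
  principal: "alice@example.com",
  agent: "travel-agent.example/v1",
  scopes: ["flights:book:max_usd=500"],
  expiresAt: new Date(Date.now() + 24 * 60 * 60 * 1000).toISOString(),
}, "shared-secret");
console.log(verifyGrant(token, "shared-secret")?.scopes); // ["flights:book:max_usd=500"]
```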
4) Agent cache
Today's CDNs are built for human browsing patterns: heavy reads during work hours, content grouped by geography, and cache invalidation based on human update frequencies. AI agents break these assumptions. For example, they query far more frequently than humans and often need to process entire datasets rather than individual pages. As a result, traditional rate limiting hurts legitimate agent actors.
We might imagine a solution where CDNs cache preprocessed datasets instead of individual pages (something like a bulk API that serves structured content optimized for machine consumption). You could also imagine CDNs caching not just content but computed results: if multiple agents need to process the same dataset, the CDN can cache the processed output. This moves compute closer to the data and reduces duplicate work.
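A toy version of that computed-result cache might look like the sketch below. The dataset name, the bulk fetch, and the transform are placeholders; the point is that the second agent asking for the same computed result never triggers recomputation.

```typescript
// Edge cache keyed by (dataset, operation) rather than by URL. Illustrative only.

type Transform = (rows: unknown[]) => unknown;

const resultCache = new Map<string, unknown>();

async function fetchDataset(name: string): Promise<unknown[]> {
  // Stand-in for a bulk API serving structured content for machine consumption.
  return [{ sku: "a", price: 10 }, { sku: "b", price: 25 }];
}

async function getComputedResult(dataset: string, opName: string, op: Transform): Promise<unknown> {
  const key = `${dataset}:${opName}`;
  if (resultCache.has(key)) return resultCache.get(key); // duplicate work avoided
  const rows = await fetchDataset(dataset);
  const result = op(rows); // compute runs at the edge, close to the data
  resultCache.set(key, result);
  return result;
}

// Two agents requesting the same aggregation share one computation.
const avgPrice: Transform = (rows) =>
  (rows as { price: number }[]).reduce((sum, r) => sum + r.price, 0) / rows.length;

getComputedResult("catalog-2025", "avg_price", avgPrice).then(console.log); // 17.5
getComputedResult("catalog-2025", "avg_price", avgPrice).then(console.log); // served from cache
```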
We might also want to replace blanket rate limits with agent-aware throttling based on factors like an agent's identity and permissions or the computational cost of requests.
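A minimal sketch of that kind of throttling, assuming agents identify themselves (say, via a delegation grant like the one above) and each request carries a rough compute cost. The budget numbers are made up for illustration.

```typescript
// Cost-based throttling keyed by agent identity instead of IP address.

interface AgentBudget {
  costPerWindow: number;  // how much compute the agent may consume per window
  usedThisWindow: number;
}

const budgets = new Map<string, AgentBudget>();

function allowRequest(agentId: string, requestCost: number, defaultBudget = 100): boolean {
  const budget = budgets.get(agentId) ?? { costPerWindow: defaultBudget, usedThisWindow: 0 };
  if (budget.usedThisWindow + requestCost > budget.costPerWindow) return false;
  budget.usedThisWindow += requestCost;
  budgets.set(agentId, budget);
  return true;
}

// A verified agent with broad permissions might get a higher budget...
budgets.set("travel-agent.example/v1", { costPerWindow: 1000, usedThisWindow: 0 });

// ...and expensive bulk requests draw it down faster than cheap page fetches.
console.log(allowRequest("travel-agent.example/v1", 250)); // true
console.log(allowRequest("unknown-scraper", 250));         // false: exceeds default budget
```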