
ERC-8004 Explained: The Registry Standard for Onchain AI Agents

Out of the 165,000-plus AI agents registered under ERC-8004 across 25 EVM chains, fewer than 4% answer a tool call. The rest are stubs, broken endpoints, or addresses pointing at metadata files that 404.

That is the actual state of the standard in April 2026, and it is the reason this article exists. If you are a developer who keeps seeing "onchain AI agent" in your feed and wondering whether you should spend a weekend on it, you deserve a straight answer instead of a whitepaper paraphrase. The short version: the standard is real, the plumbing is solid, and almost nobody is using it correctly yet. That is either an opportunity or a red flag depending on what you came here to build.

The problem ERC-8004 is trying to solve

Before any spec, look at the mess. Say you have an agent that can run SQL against a public dataset, or price a token, or translate a legal clause. You want other agents (and people) to find it, call it, pay for it if necessary, and form an opinion about whether it was any good. Today you accomplish that with some combination of a landing page, a README, a Discord, maybe a Hugging Face card, maybe an MCP endpoint you hand out in DMs. None of those are machine-discoverable. None of them carry reputation that an autonomous caller can verify without trusting you.

ERC-8004 is the Ethereum Improvement Proposal that tries to fix the discovery and trust layer by putting it onchain. The pitch: a small set of registries deployed at deterministic addresses, the same on every EVM chain, so any agent, human, or other contract can look up "who is this, what does it do, and has it screwed anyone over before" without calling a central API.

The spec was authored with input from the teams around ChaosChain, Web3Agent.fun, and other early infra groups in the agent-economy space. It was deliberately kept thin. No staking, no token, no oracle. Just three registries.

What you actually get: three contracts

The entire standard lives at 0x8004A169FB4a3325136EB29fA0ceB6D2e539a432. Same address on Ethereum mainnet, Base, Arbitrum, Optimism, Gnosis, Polygon, Celo, Scroll, Linea, and roughly 16 other EVM chains at time of writing. CREATE2 deployment, so you can be sure the bytecode is identical everywhere. You call into it from any chain you like, and an agent can exist on one chain, multiple chains, or jump between them.

It exposes three logical registries, all packed into the same contract:

1. The Identity Registry

This is the one everyone thinks of when they hear "onchain AI agent." You mint a token for your agent (ERC-721 under the hood, one token per agent), and you attach an agent_uri that points somewhere offchain. That somewhere can be ipfs://, https://, ar://, or an inline data: URI if your metadata is small enough. The URI resolves to a JSON blob that describes the agent: name, description, image, and most importantly a services array declaring how to actually talk to it.

A typical services entry looks roughly like this:

{
  "type": "mcp",
  "url": "https://my-agent.example.com/mcp"
}

The type can be mcp (Anthropic's Model Context Protocol), a2a (Google's Agent2Agent protocol), plain http, or x402 if the service is behind a payment wall. The point is that a caller can fetch the metadata, look at the declared protocol, and speak it directly. No centralized gateway, no API key, no "contact us to get access."

A small wrinkle: the early version of the spec shipped with {name, endpoint} service objects, and plenty of agents in the wild still use that shape. So when you build a client, support both shapes, not just the newer {type, url} one. We learned this the painful way: a chat endpoint that assumed the new shape started returning 500s on all legacy agents the day a popular indexer flipped over to the newer schema.
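A defensive client can fold both shapes into one structure before doing anything else. This is a sketch of that normalization, not anything the spec mandates; the lowercasing and the error behavior are conventions of this example.

```python
def normalize_service(entry: dict) -> dict:
    """Normalize a services[] entry from either metadata shape.

    Newer shape:  {"type": "mcp", "url": "https://..."}
    Legacy shape: {"name": "mcp", "endpoint": "https://..."}
    """
    service_type = entry.get("type") or entry.get("name")
    url = entry.get("url") or entry.get("endpoint")
    if not service_type or not url:
        raise ValueError(f"unrecognized service entry: {entry!r}")
    return {"type": service_type.lower(), "url": url}

# Both shapes resolve to the same normalized form.
new_shape = {"type": "mcp", "url": "https://my-agent.example.com/mcp"}
legacy_shape = {"name": "mcp", "endpoint": "https://my-agent.example.com/mcp"}
assert normalize_service(new_shape) == normalize_service(legacy_shape)
```

Running the normalizer at the edge of your client means every downstream component sees exactly one shape, which is what the broken chat endpoint above failed to do.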

2. The Reputation Registry

The second registry handles feedback. Any address can attach a feedback event to any agent by calling giveFeedback(agentId, score, uri). Score is a uint. The URI points to a JSON blob with the actual review content, so you can write a long-form complaint or a structured JSON result without bloating calldata.

The spec is deliberately silent on what score means. It is not a rating out of 5, not a token vote, not a DAO governance signal. It is a raw integer that clients are free to interpret however they want. This is the right design choice and also the reason nobody has solved reputation yet. The registry gives you the primitive; it does not tell you what "good" looks like.

In practice almost no one is leaving feedback directly onchain; the gas cost outweighs the signal value. Most agent-quality work in 2026 is still happening offchain, with indexers and aggregators scoring endpoints themselves. Which brings us to the third registry.

3. The Validation Registry

Validation is the most interesting and the most empty of the three. It lets a third-party validator attest that a specific piece of work was actually performed by a specific agent. Think of it as receipts. Agent A claims it executed task X; validator V signs a blob saying "yes, I saw A run X, here is the hash of the output." The attestation lives onchain and can be queried by anyone considering whether to trust that agent with a similar task later.

This is the registry that, if it ever gets used at scale, turns ERC-8004 from a phone book into something more like a reputation market. The hard part is that "validator" is undefined. Anyone can attest. Which means the question reduces to "whose attestations do you believe," and that is a social problem dressed up as a technical one.
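To make "receipts" concrete, here is a sketch of what a validator's attestation payload might look like: the task output stays offchain and only its hash gets bound into the record. The field names, and the use of SHA-256 (chosen so this sketch runs on the standard library; an onchain validator would more plausibly use keccak256), are illustrative assumptions, not part of the spec.

```python
import hashlib

def make_attestation(validator: str, agent_id: int, task: str, output: bytes) -> dict:
    """Sketch of a validator receipt: 'I saw this agent produce this
    output for this task.' Field names are illustrative only."""
    return {
        "validator": validator,
        "agentId": agent_id,
        "task": task,
        "outputHash": "0x" + hashlib.sha256(output).hexdigest(),
    }

receipt = make_attestation("0xValidatorV", 8423, "rebalance-run-77", b"result payload")
# Anyone holding the original output can recompute the hash and check
# it against the attested record.
assert receipt["outputHash"] == "0x" + hashlib.sha256(b"result payload").hexdigest()
```

The hash check is the cheap part; deciding whether 0xValidatorV's word means anything is the social problem the paragraph above describes.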

A concrete example

Say you build a DeFi agent that rebalances a liquidity position on Base. Here is what registering it under ERC-8004 looks like in practice.

You deploy your rebalancer as an HTTP service with an MCP wrapper so other agents can call list_positions, suggest_rebalance, and execute_rebalance as tools. You pin a JSON file to IPFS with your agent name, a short description, and a services array containing your MCP endpoint URL. You call mint(to, uri) on the Identity Registry on Base with that IPFS hash. You pay gas. Roughly 40 seconds later you have a token ID, say 8423, and your agent is discoverable at chain 8453, token 8423.
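The JSON file you pin in that flow might look like the sketch below. Every name and URL here is a placeholder, and the fields follow the shape described earlier (name, description, image, services) using the newer {type, url} service format.

```python
import json

# Sketch of the metadata blob to pin before calling mint(to, uri).
# All names and URLs are placeholders, not real deployments.
metadata = {
    "name": "Base LP Rebalancer",
    "description": "Rebalances a concentrated liquidity position on Base.",
    "image": "ipfs://<your-image-cid>",
    "services": [
        {"type": "mcp", "url": "https://rebalancer.example.com/mcp"},
    ],
}

blob = json.dumps(metadata, indent=2)
# Pin `blob` to a durable IPFS service, then pass ipfs://<cid> as the
# uri argument to mint() on the Identity Registry.
```

If your metadata is tiny you could skip pinning entirely and inline it as a data: URI, per the URI schemes listed earlier.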

Now any indexer crawling the registry sees your mint event, fetches your IPFS metadata, pings your MCP endpoint to verify the tools listed, and your agent shows up in directories like The Spawn's Base index. A human or another agent can open your entry, see the tools you expose, and start calling them directly.

That whole flow is maybe 200 lines of code end to end. The heaviest part is running an MCP server and keeping it online, which is where 96% of registered agents fall over.

What is working, what is broken

Here is where I have to be honest, because the point of this article is not to sell you on the standard. It is to save you two weeks.

What is working:

The contracts themselves are boringly correct. I have not seen a single bug report against the registry code in the wild. The deterministic multi-chain deployment actually works as advertised. The same token ID on Base and the same token ID on Gnosis are different agents, as expected, and there is no footgun there. Indexing the events via rindexer or any standard tool is fine. The read path is solid.

The MCP and A2A ecosystems around the standard are starting to produce real servers you can call. HeyAnon runs the largest free MCP suite at the moment. A handful of teams have shipped genuinely useful agents on top of x402 for paid inference and data. The Spawn's world map shows the distribution in 3D if you want to feel the shape of what exists.

What is broken:

Endpoint liveness is catastrophically bad. Our audit flips roughly 155,000 of the 165,000 registered agents into the "dead or never worked" bucket. The dominant failure mode is someone minting an agent as part of a hackathon or a tutorial, forgetting about it, and the endpoint dying when their free Railway or Vercel deploy expires. Second most common: metadata URIs that were pointed at a JSON file which got moved or deleted, leaving a token onchain with no off-chain description.

The IPFS situation is also not great. About 30% of IPFS-hosted metadata is unfetchable through the three major public gateways we fall back on. Pinning services expire. Old CIDs drop from the DHT. If you mint an agent and do not pin the metadata to a durable service (Pinata, web3.storage paid tier, or your own node), assume it will rot inside 12 months.
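Gateway fallback is the standard mitigation on the read side. A minimal sketch, assuming an illustrative gateway list (pick your own, and add timeouts and caching in anything real); the actual HTTP fetching is left out to keep this network-free.

```python
# Resolve an ipfs:// agent_uri through several public gateways so one
# dead gateway does not make the agent look broken.
GATEWAYS = [
    "https://ipfs.io/ipfs/",
    "https://cloudflare-ipfs.com/ipfs/",
    "https://dweb.link/ipfs/",
]

def gateway_urls(agent_uri: str) -> list[str]:
    """Expand an ipfs:// URI into one candidate URL per gateway;
    https:// and other schemes pass through unchanged."""
    if agent_uri.startswith("ipfs://"):
        cid_path = agent_uri[len("ipfs://"):]
        return [gw + cid_path for gw in GATEWAYS]
    return [agent_uri]

urls = gateway_urls("ipfs://QmExampleCid/agent.json")
# Try each URL in order and take the first 200 response.
```

Fallback only helps when the content is still pinned somewhere; it does nothing against the expired-pin rot described above, which is why the durable-hosting advice matters more.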

Reputation is effectively unused. Validation is effectively unused. These two registries exist in the spec but almost nothing writes to them. The ecosystem is still at the "phone book" stage, not the "Yelp" stage.

And finally, search is a genuine problem. If you register an agent today, the only way a person will find it is by crawling an indexer like The Spawn that ingests the registry events and scores them. The chain itself gives you no ranking. That means the question "how do I get my agent found" has a real answer (metadata quality, liveness, meaningful services, feedback) and not a handwave. This is the part the ecosystem talks about the least and that matters the most if you are building anything you want people to use.

How to actually participate

If after all that you still want to ship something under ERC-8004, here is the honest short list of what to do:

Pick one chain to start. Base is where most of the volume is, roughly 40% of total agents and the highest share of live endpoints, so the matching algorithms in most indexers will show your agent to more eyeballs there than on a quieter chain. You can cross-register later.

Host your metadata on something durable. Pin to Pinata paid or web3.storage with a renewal cron, or host on your own HTTPS under a domain you control. Do not use a free tier and then forget.

Declare services that actually work. If you declare an MCP endpoint, make sure tools/list returns real tools, not placeholder ones. If you declare an A2A endpoint, make sure tasks/send with a trivial message produces a response. Indexers ping these. If they fail, your quality score drops and you stop showing up.
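You can run the same liveness check an indexer runs before they do. MCP speaks JSON-RPC 2.0, so the probe is a tools/list request; this sketch builds the request body and classifies a response, leaving out the actual HTTP POST to stay network-free. The "at least one tool" bar for liveness is our own convention, not something the protocol defines.

```python
import json

def mcp_probe_request(request_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 body for an MCP tools/list call, the
    same probe an indexer POSTs to your declared endpoint."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
    })

def looks_alive(response_body: str) -> bool:
    """Count a response as live only if it lists at least one tool;
    an empty tools array is a placeholder server, not a live agent."""
    try:
        tools = json.loads(response_body).get("result", {}).get("tools", [])
    except (ValueError, AttributeError):
        return False
    return len(tools) > 0
```

Wire this into a cron against your own endpoint and you will notice a dead Railway deploy before an indexer's quality score does.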

Run your own audit before you announce anything. You can drop any chain plus token ID into The Spawn's quality checker and get back the same score we assign everything in the index. It will catch dead endpoints, missing metadata fields, legacy service shapes, and a few other sharp edges before a potential user sees them.

If you want the longer argument about why this matters, the manifesto page has the full thesis about what a quality standard for onchain agents should look like. The one-line version: if you cannot verifiably prove your agent does something useful, you are just squatting a token ID, and nobody is going to route traffic to you.

The unexciting conclusion

ERC-8004 is not going to save you time, make you money, or get your agent users. It is a discovery primitive. It solves "how does a machine find another machine and learn how to talk to it" without a central directory. That is a real problem and a useful thing to have solved. It does not solve "why would anyone want to talk to your agent in the first place."

The builders who are going to win inside this standard are the ones who treat the onchain registration as a five-minute step at the end of building something that already works offchain. The ones who are losing are the ones who minted a token first and figured the rest would follow.

If you came here trying to decide whether to learn it: yes, it is worth a weekend. The code is small, the contracts are stable, the ecosystem around it is early enough that shipping anything real will make you visible. Just do not confuse "registered" with "useful." Those are very different states, and about 161,000 agents can confirm it.