Agents Frame

Field notes from the routing layer

Postmortems on agents that picked the wrong frame, benchmarks for the libraries behind the API, and the occasional opinion on where multi-step agents are going.

Why your AI agent needs a thinking router
David Patel

Single-model agents collapse every question into one frame. POST an intent to /v1/think and get back three thinking frameworks, ranked by ELO and ready to paste, so the agent can sanity-check the obvious answer before it commits to a destructive action.
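A minimal sketch of what calling the endpoint could look like. The post only names POST /v1/think; the base URL, the "intent" request field, and the "frameworks" response shape are illustrative assumptions.

```python
import json
import urllib.request

API_BASE = "https://agentsframe.example/api"  # placeholder base URL, not the real host


def build_payload(intent: str) -> bytes:
    # Assumed request body: the post only documents the endpoint path.
    return json.dumps({"intent": intent}).encode("utf-8")


def think(intent: str) -> list[dict]:
    """POST an intent to /v1/think and return the top-3 ranked frameworks.

    The response shape ({"frameworks": [{"name": ..., "elo": ...}, ...]})
    is an assumption for illustration.
    """
    req = urllib.request.Request(
        f"{API_BASE}/v1/think",
        data=build_payload(intent),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["frameworks"][:3]
```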

The 15 thinking frameworks we ship at V0
Rachel Kim

First Principles, Inversion, Sunk Cost, Second-Order, Probabilistic, Margin of Safety, Circle of Competence, Hanlon's Razor, Occam's Razor, Confirmation Bias, Availability Heuristic, Dunning-Kruger, Jobs-to-be-Done, Lean Startup, SWOT — what each frame is good for, and where it breaks.

From intent to framework in under half a second
Alexia Holder

The four-step pipeline behind /v1/think: detect language, embed via Gemini, pgvector cosine recall, ELO rerank. End-to-end p50 sits at ~430ms — predictable enough to drop inside agent loops without budgeting an extra 5-second tail.
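The recall-then-rerank half of the pipeline can be sketched in plain Python. In production the cosine step runs inside pgvector and the embedding comes from Gemini; here both the vectors and the weight blending cosine similarity with ELO are illustrative assumptions.

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity; pgvector computes this in SQL, shown here for clarity.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def route(query_vec: list[float], frameworks: list[dict],
          top_k: int = 3, alpha: float = 0.1) -> list[dict]:
    """Step 3: cosine recall, then step 4: ELO rerank.

    Each framework dict carries a "vec" embedding and an "elo" score.
    The blend weight `alpha` and the recall window of 10 are assumptions,
    not values from the post.
    """
    recalled = sorted(
        frameworks, key=lambda f: cosine(query_vec, f["vec"]), reverse=True
    )[:10]
    reranked = sorted(
        recalled,
        key=lambda f: cosine(query_vec, f["vec"]) + alpha * (f["elo"] - 1000) / 400,
        reverse=True,
    )
    return reranked[:top_k]
```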

Why we built Agents Frame on next-forge
Alexia Holder

Picking next-forge over rolling our own monorepo: hand-written Tailwind sections plus dictionary-driven copy beat BaseHub-fetched homepages for control, the @repo split keeps Clerk and Stripe off the marketing surface, and Bun + Turborepo cuts cold builds to ~4 seconds.

Stateless MCP: why we shipped without SSE at V0
Alexia Holder

A single tools/call to route returns in well under a second and needs no streaming progress. Keeping the wire format stateless makes horizontal scaling trivial and avoids forcing every host (Claude Desktop, Cursor, Windsurf) to implement SSE replay correctly.
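For reference, a stateless tools/call is just one JSON-RPC 2.0 envelope per request. The tool name "route" comes from the post; the "intent" argument key is an assumption.

```python
import json


def make_tools_call(request_id: int, intent: str) -> str:
    # One self-contained MCP tools/call envelope: no session, no SSE stream,
    # so any stateless HTTP worker can serve it.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "route",
            "arguments": {"intent": intent},  # argument key is assumed
        },
    })
```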

ELO reranking: how user feedback shapes the router
Michelle Chen

Each framework carries a per-language ELO score updated by /v1/feedback. K-factor 16, decay-free at V0, applied in the rerank step on top of pgvector cosine recall. The cumulative effect: routing accuracy improves with usage instead of staying frozen at the day-one embedding distance.
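The K-factor 16 update is the standard ELO formula; a minimal sketch, assuming a simple win/loss signal from feedback (how /v1/feedback maps user input to a pairwise result is not described in the post):

```python
def expected_score(r_a: float, r_b: float) -> float:
    # Standard logistic expectation: probability that A beats B.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))


def elo_update(winner: float, loser: float, k: float = 16.0) -> tuple[float, float]:
    """Apply one feedback result with K=16 (per the post), no decay."""
    e_w = expected_score(winner, loser)
    delta = k * (1.0 - e_w)
    return winner + delta, loser - delta
```

With K=16, two evenly rated frameworks move by 8 points per result, so rankings shift gradually rather than thrashing on a single piece of feedback.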