I Missed Another Release


At some point in the last two years, keeping up with what's shipping became a part-time job. A new model drops, you spend an afternoon reading about it, and by the time you look up, two more have been announced. This isn't a complaint. It's more of a record.

What shipped

1. GPT-4 (March 2023): Every benchmark from the prior era became obsolete overnight.
2. Claude 1 (March 2023): Anthropic enters the race. Constitutional AI framing introduced.
3. Claude 2 (July 2023): 100K context window. Long documents become tractable.
4. GPT-4V (September 2023): Vision comes to GPT-4. Images in, text out.
5. Custom GPTs (November 2023): OpenAI lets anyone build a chatbot. App store energy.
6. Gemini Ultra / Pro / Nano (December 2023): Google's unified model family. Claimed GPT-4 parity on benchmarks.
7. Sora (February 2024): OpenAI video generation. Technically impressive, not yet widely accessible.
8. Gemini 1.5 Pro (February 2024): One-million-token context window. Long-context reasoning at scale.
9. Google AI Studio (early 2024): Free Gemini API access for prototyping and building. I use this.
10. Claude 3 Haiku / Sonnet / Opus (March 2024): Opus beats GPT-4 on most evals. Haiku is fast and cheap.

11. GPT-4o (May 2024): Omnimodal. Native audio, vision, and text in one model.
12. Gemini 1.5 Flash (May 2024): Fast, cheap multimodal; it landed two months before GPT-4o mini arrived in the same tier.
13. Claude 3.5 Sonnet (June 2024): Beat Claude 3 Opus at a fraction of the cost. The benchmark shifted again.
14. Claude Artifacts (June 2024): Interactive code, docs, and diagrams rendered live inside the conversation.
15. GPT-4o mini (July 2024): Fast, cheap, good enough for most tasks.
16. o1-preview (September 2024): OpenAI's reasoning model. Chain-of-thought at inference time.
17. NotebookLM Audio Overviews (September 2024): Google's podcast-from-any-document feature. Went viral immediately.
18. Claude Computer Use (October 2024): Anthropic releases screenshot-based computer control. Agents can now operate UIs.
19. ChatGPT Canvas (October 2024): Side-by-side editing for writing and code inside ChatGPT.
20. Cursor (2024): AI-native code editor. Changed how many developers work day to day.

21. Model Context Protocol, or MCP (November 2024): Anthropic's open standard for tool use. De facto interface for agent integrations.
22. Claude 3.5 Haiku (November 2024): Fastest model in the 3.5 family.
23. ChatGPT Search (late 2024): Web search built into the chat. One less tab open.
24. Gemini 2.0 Flash (December 2024): Multimodal, fast, capable of tool use and computer interaction.
25. o1, full release (December 2024): The September preview graduates to general availability.
26. DeepSeek R1 (January 2025): Open-weights reasoning model matching o1 at a fraction of the cost. I used this one.
27. o3 (early 2025): OpenAI's next reasoning model. Significantly ahead of o1 on hard tasks.
28. Claude 3.7 Sonnet (early 2025): Extended thinking mode. Long chain-of-thought available on demand.
29. GPT-4.5 (early 2025): OpenAI's largest non-reasoning model.
30. Gemini 2.5 Pro (early 2025): Google's strongest model to date. Top of most coding and reasoning benchmarks.

31. Claude 4 Sonnet / Opus (mid 2025): Anthropic's current frontier models.
32. Gemini 2.5 Flash (2025): Fast, efficient, strong across modalities.
33. Deep Research (2025): Multi-step research agents from Google and OpenAI. Browse, synthesize, report.
34. Imagen 4 (2025): Google's photorealistic image generation. Up to 2K resolution.
35. Veo (2025): Google's video generation model. Text-to-video at production quality.
36. Claude Code (2025): Anthropic's CLI-based coding agent. Full tool use in the terminal. I use this every day.
37. The Ralph Wiggum Loop (2024–present): When an agent confidently repeats the same failed approach in a loop. You learn to spot it.
38. Figma → Claude MCP (2025): Design-to-code via MCP. Designers annotate components, engineers get a mapped implementation.
39. Gemini 3 Pro / Flash (late 2025 / early 2026): Google's latest generation. Flash now at frontier-class performance.
40. Gemini 3.1 Pro (early 2026): Added native SVG generation. Text-to-vector without a tool chain.
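Item 37 is the one entry on that list you can actually code against. A toy sketch, purely illustrative (the function name and window size are my own invention, not from any agent framework): flag an agent as looping when its last few actions are identical.

```python
def is_ralph_wiggum_loop(actions, window=3):
    """Return True when the last `window` actions are identical --
    a crude signal that an agent is repeating a failed approach."""
    if len(actions) < window:
        return False
    recent = actions[-window:]
    return all(a == recent[0] for a in recent[1:])

# An agent retrying the same failing step three times trips the detector.
history = ["run tests", "edit file", "run tests", "run tests", "run tests"]
print(is_ralph_wiggum_loop(history))  # True
```

Real agent scaffolds track richer state than raw action strings, but the shape of the check is the same: compare recent history and bail out when it stops changing.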

Reading that back, I notice two things. First: this list isn't complete. I've definitely missed things. Second: I can use almost all of these tools right now, today, to build something real. That second fact is extraordinary, and I don't want the first to overshadow it.

The pace is disorienting. It's also the most interesting moment to be building in that I can imagine. Those two things are both true, and I hold them at the same time most days.