LLMs as Interface Compilers

Sonnet 4 has a relentless desire to make artifacts unprompted. It's sometimes jarring, but very often useful.

I find artifacts (or canvases in ChatGPT) most useful when a chat thread gets too dense or when I need something I can point at or interact with. Artifacts are a nice way to eject from chat into a more natural UI affordance. 

I'd group my use of artifacts into three buckets:

  • Explanatory Artifacts: (example prompt: "explain how a neural network works")
    The artifact is a visual aid. Instead of a wall of text, I get a diagram, a flowchart, or an interactive simulation.
  • Collaborative Artifacts: (example prompt: "let's edit this blog post together")
    The artifact is a workspace. It provides shared context for an iterative process.
  • Consumptive Artifacts: (example prompt: "make me a Space Invaders game")
    The artifact is a deliverable. It's meant to be consumed as a functional unit.
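
To make the taxonomy concrete, here's roughly how a chat client might represent it. This is a sketch for illustration only: the Artifact type and openSurface function are invented here, not the real Artifacts API.

```typescript
// Hypothetical shapes for the three buckets above, written as a discriminated union.
type Artifact =
  | { kind: "explanatory"; title: string; html: string }        // visual aid
  | { kind: "collaborative"; title: string; document: string }  // shared workspace
  | { kind: "consumptive"; title: string; bundle: string };     // self-contained deliverable

// A chat client might pick which surface to open based on the kind
// the model tagged its output with.
function openSurface(artifact: Artifact): string {
  switch (artifact.kind) {
    case "explanatory":
      return "side-panel"; // render next to the chat as a visual aid
    case "collaborative":
      return "editor";     // open an editable document for back-and-forth
    case "consumptive":
      return "sandbox";    // run the deliverable in its own window
  }
}
```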

We still think of LLMs as chatbots, but we should really think of them as compilers for interfaces.

In each of the above examples, the model decides which type of interface to launch. As LLMs get better at writing frontend code and inferring intent, there is really no limit to the kinds of UIs we can spawn from a chat thread: a color-picker swatch for "make this button teal," a calendar heat-map for "when am I over-booked?"
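
Here's a minimal sketch of what that compile step might look like, assuming a model call that returns structured JSON. The UISpec shape and the generateUISpec parameter are hypothetical stand-ins, not any product's actual API.

```typescript
// "Compiling" an interface: natural-language intent in, renderable UI spec out.
interface UISpec {
  component: "color-picker" | "calendar-heatmap" | "form" | "chart";
  props: Record<string, unknown>;
}

async function compileIntent(
  intent: string,
  generateUISpec: (prompt: string) => Promise<string>, // stand-in for an LLM call returning JSON
): Promise<UISpec> {
  const prompt =
    `Return a JSON UI spec ({"component": ..., "props": ...}) for this request:\n${intent}`;
  const raw = await generateUISpec(prompt);
  return JSON.parse(raw) as UISpec; // the "compilation": intent -> structured interface
}

// "make this button teal"   -> { component: "color-picker",     props: { initial: "#008080" } }
// "when am I over-booked?"  -> { component: "calendar-heatmap", props: { metric: "meetings" } }
```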

I wrote, almost two years ago, about how SaaS would trend towards horizontal tools with vertical UIs. Gemini had launched the most compelling custom-UI-generation demo at the time, and I thought we'd end up with custom but pre-built UIs in SaaS apps. Today, we'd consider that demo janky and my prediction not ambitious enough.