Don't begrudge vibe coding

All AI-assisted coding exists on a spectrum from vibe coding to precision coding.

Vibe coding turns up the convenience dial to 100: you accept whatever code AI generates, "fully give in to the vibes, embrace exponentials, and forget that the code even exists." (Karpathy)

We might call the opposite of vibe coding "precision coding": you make pointed changes and review every single line of generated code. Here, control is what you care about most.

Vibe coding doesn't have to mean giving up implementation control. Good interfaces let the two modes complement each other. Let's look at a few examples.

Natural language code with visual edits

One of the most enjoyable parts of vibe coding is describing a user interface in natural language and watching a model get close to your intended output. A natural next step is making precise adjustments to the sizes, colors, content, and styling of any element on the page. Visual editing is an intuitive way to do this.

Lovable's implementation is particularly notable. Rather than just handing you code to edit manually, Lovable lets you vibe code what you want, then visually refine the implementation. For frontend programming, this tight feedback loop between vibe coding and precise visual edits lets you iterate really fast.

Declarative natural language programming

Open-ended natural language coding is trending towards declarative programming, another sweet spot between vibe coding and precision coding. Instead of one loose prompt, you'd describe what you want using different categories of fine-grained constraints. Say you're building an analytics app in React; you'd specify detailed instructions like:

  • Functional requirements: Fetch user analytics data from /api/analytics, display a line chart showing daily active users over the past 30 days, and include a date range selector
  • Technical constraints: Implement client-side caching to reduce API calls
  • Performance criteria: Load in under 200ms on average connections
  • Compliance requirements: Follow our accessibility guidelines (WCAG AA)
  • Integration boundaries: Use only our existing component library for UI elements
  • Graceful degradation: Fall back gracefully when the API is unavailable

You’re specific enough to get high-quality output from models, but you delegate implementation to a programming assistant and “forget that the code even exists”.
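
To make this concrete, here's a minimal sketch of what such a spec could look like as a structured artifact. Everything here, from the ComponentSpec type down to its field names, is hypothetical, one possible shape rather than any real tool's format:

```ts
// Hypothetical spec format: a structured, declarative description of the
// component you want, which a coding assistant compiles into an implementation.
type ComponentSpec = {
  functional: string[];   // what the component must do
  technical: string[];    // implementation constraints
  performance: string[];  // measurable budgets
  compliance: string[];   // standards to satisfy
  integration: string[];  // allowed dependencies and boundaries
  degradation: string[];  // behavior when things fail
};

const analyticsDashboard: ComponentSpec = {
  functional: [
    "Fetch user analytics data from /api/analytics",
    "Display a line chart of daily active users over the past 30 days",
    "Include a date range selector",
  ],
  technical: ["Implement client-side caching to reduce API calls"],
  performance: ["Load in under 200ms on average connections"],
  compliance: ["Follow our accessibility guidelines (WCAG AA)"],
  integration: ["Use only our existing component library for UI elements"],
  degradation: ["Fall back gracefully when the API is unavailable"],
};
```

The exact shape matters less than the fact that every constraint is explicit: the assistant has enough to generate an implementation you never need to read line by line.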

This level of specificity typically lives in a PRD or tech spec, documents we've traditionally treated as ancillary to software engineering. In a declarative paradigm, PRDs and tech specs become primary artifacts of building software. Coding becomes task specification: precise specs, fully generated outputs.

Multiple-choice coding

Another promising approach is multiple-choice coding: you vibe code multiple implementations with AI, then make the final selection yourself. Comparing candidates side by side makes it easier to weigh trade-offs like memory usage, performance, and maintainability.
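
As a toy illustration, here's what two candidates for the client-side cache from the spec above might look like. Both are hypothetical sketches, not output from any particular tool:

```ts
// Candidate A: cache responses forever. Trivially simple and fastest on
// repeat calls, but memory grows without bound and data can go stale.
const cacheA = new Map<string, Promise<unknown>>();
function fetchCachedA(url: string): Promise<unknown> {
  let hit = cacheA.get(url);
  if (!hit) {
    // Caching the promise also deduplicates concurrent in-flight requests.
    hit = fetch(url).then((r) => r.json());
    cacheA.set(url, hit);
  }
  return hit;
}

// Candidate B: a time-bounded cache. More moving parts, but entries expire,
// so memory and staleness stay bounded at the cost of repeat fetches.
const cacheB = new Map<string, { value: Promise<unknown>; expires: number }>();
function fetchCachedB(url: string, ttlMs = 60_000): Promise<unknown> {
  const hit = cacheB.get(url);
  if (hit && hit.expires > Date.now()) return hit.value;
  const value = fetch(url).then((r) => r.json());
  cacheB.set(url, { value, expires: Date.now() + ttlMs });
  return value;
}
```

Neither candidate is wrong; choosing between them is exactly the kind of judgment call multiple-choice coding is designed to surface.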

I often do a hacky version of this with Cline: I'll ask for alternative solutions rather than accepting the first one generated. I'll admit it's not easy with current IDEs, which are designed around a single linear implementation path; you end up visualizing the relationships between approaches with some combo of Cmd-Z and tab switching. The ideal interface for multiple-choice coding would be something like a mind map or spatial canvas that makes it easy to hold several implementation strategies in your head at once.

Don’t begrudge vibe coding

Most critiques of vibe coding I've heard make it sound like it has no place in real-world engineering. In reality, it's just a mode you shift into when it helps.

Effective AI-assisted coding is fluid. We constantly shift between vibe coding and precision coding depending on the task at hand. You might use natural language to rapidly prototype a component (vibe), then visually fine-tune its layout (precision), before asking the AI to optimize its performance (vibe again). The key question isn't how much control to surrender, but which parts of the development process to delegate to AI and which to handle personally.