
2026-02-10


I Don't Read the Code

vkQuake-RmlUi running on macOS

I don't read most of the code.

In my last post I argued that game UI should be authored as markup with declarative data binding. To prove it I integrated RmlUi into vkQuake, a Vulkan-based Quake source port written in pure C. The integration layer is over 7,000 lines of C++ across 27 source files: a custom Vulkan render interface, a data binding system, cvar synchronization, input handling, menu event routing. On top of that, 18+ menu screens and multiple HUD variants authored in RmlUi markup and CSS. All bridged to the C engine through a C interface. The AI wrote it. And it works.

I'm a technical artist. I sit between design and engineering. I can read code and reason about how systems fit together, but writing C++17 with Vulkan pipelines and RmlUi's template metaprogramming is not my world. This post is about how I got there, from "audit everything and tell me what you see" to "port the entire multiplayer menu, I'll check back in 41 minutes."

14 commits in 41 minutes. The disposition matrix the agent built to track the legacy menu migration.

Not Three.js

I keep seeing people demonstrate AI-assisted game development with three.js and web tech. Most of them are demos, not games. I wanted to come at this from the other end. Real tech. A real game engine. C. Vulkan. Quake, a game that shipped in 1996 and still has an active modding community because the code is that solid.

Part of this is stubbornness. Part of it is that I think the interesting test for AI is whether it can work inside an existing system with real constraints, real state, and decades of design decisions baked in. vkQuake is not a greenfield project. It's a living codebase with history and opinions.

There's another reason Quake turned out to be an ideal candidate, and I didn't fully appreciate it until I was deep in the project: the code is simple. Not trivial; there's a lot of it, and the systems are sophisticated. But any given function does what it looks like it does. A function called Draw_String draws a string. A variable called cl.stats[STAT_HEALTH] is the player's health.

Simple, readable code is AI-friendly code. That's probably the single most important lesson from this project.

The original main menu draw function. Hardcoded pixel coordinates, cached bitmap lookups, an animated cursor dot cycling through 6 frames:

void M_Main_Draw (cb_context_t *cbx)
{
    int     f;
    qpic_t *p;
    qpic_t *menu2 = Get_Menu2 ();
    int     main_items = MAIN_ITEMS + (menu2 ? 1 : 0);

    M_DrawTransPic (cbx, 16, 4, Draw_CachePic ("gfx/qplaque.lmp"));
    p = Draw_CachePic ("gfx/ttl_main.lmp");
    M_DrawPic (cbx, (320 - p->width) / 2, 4, p);
    M_DrawTransPic (cbx, 72, 32,
        menu2 ? menu2 : Draw_CachePic ("gfx/mainmenu.lmp"));

    f = (int)(realtime * 10) % 6;
    M_Mouse_UpdateListCursor (&m_main_cursor, 70, 320, 32, 20,
        main_items, 0);
    M_DrawTransPic (cbx, 54, 32 + m_main_cursor * 20,
        Draw_CachePic (va ("gfx/menudot%i.lmp", f + 1)));
}

Same menu, different century:

<body id="main-menu" data-model="game">
    <div id="center-wrapper">
        <div id="menu-content">
            <h1 id="game-title">{{ game_title }}</h1>
            <div id="menu-buttons">
                <button class="btn-primary" onclick="new_game()">NEW GAME</button>
                <button class="btn" onclick="navigate('singleplayer')">SINGLE PLAYER</button>
                <button class="btn" onclick="navigate('multiplayer')">MULTIPLAYER</button>
                <button class="btn" onclick="navigate('options')">OPTIONS</button>
                <button class="btn" onclick="navigate('help')">HELP</button>
                <button class="btn" onclick="navigate('mods')">MODS</button>
                <button class="btn-quit" onclick="navigate('quit')">QUIT</button>
            </div>
        </div>
    </div>
</body>
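
Those onclick attributes are just strings. On the C++ side, RmlUi lets you register an event listener instancer that receives the raw attribute value and decides what to do with it; the project's menu_event_handler lives in that family. Here's a minimal sketch of the pattern, where HandleMenuAction is an illustrative stand-in, not the project's actual handler:

#include <RmlUi/Core.h>
#include <utility>

// Hypothetical: parses and executes action strings like "navigate('options')".
void HandleMenuAction (const Rml::String &action);

class MenuEventListener final : public Rml::EventListener
{
public:
    explicit MenuEventListener (Rml::String value) : action (std::move (value)) {}
    void ProcessEvent (Rml::Event &) override { HandleMenuAction (action); }
    void OnDetach (Rml::Element *) override { delete this; } // listener owns itself
private:
    Rml::String action;
};

class MenuEventInstancer final : public Rml::EventListenerInstancer
{
public:
    // RmlUi calls this with the raw attribute value from the markup.
    Rml::EventListener *InstanceEventListener (const Rml::String &value, Rml::Element *) override
    {
        return new MenuEventListener (value);
    }
};

// Registered once at startup, before any documents load:
//   static MenuEventInstancer instancer;
//   Rml::Factory::RegisterEventListenerInstancer (&instancer);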

The Progression

I didn't start by handing the AI a 41-minute task.

The early sessions were big and open-ended. Audit the codebase. Tell me how the menu system works. What would it take to integrate a C++ UI library into a C engine? How does the Vulkan render loop work? I wasn't writing code yet; I was learning how to ask the right questions.

From there I'd point at something specific. "That function has a bug, fix it." "This menu uses hardcoded pixel coordinates, can we replace it with a data binding?" Small, contained tasks where I could verify the result immediately. The AI would write code, I'd build, I'd run the game, I'd see if it worked.

This played out over about a week, and it wasn't smooth. At one point the agent introduced a cursor leak, an input race condition, and a menu click crash all in the same session. But the results kept being correct often enough that the scope of each task grew. The main menu rendered correctly from markup. The HUD displayed health from a data binding. The options menu synced cvar values. Each one proved the pattern and made the next task easier to hand off.
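
To make "the HUD displayed health from a data binding" concrete: on the C++ side, RmlUi's data model API binds a named variable to engine state, and the engine dirties it when the value changes. A minimal sketch; the names here are mine, and the project's real model in game_data_model.cpp binds 50-plus values:

#include <RmlUi/Core.h>

static int player_health = 100; // mirrored from cl.stats[STAT_HEALTH]
static Rml::DataModelHandle game_model;

void UI_CreateGameModel (Rml::Context *context)
{
    Rml::DataModelConstructor constructor = context->CreateDataModel ("game");
    if (!constructor)
        return;
    constructor.Bind ("player_health", &player_health);
    game_model = constructor.GetModelHandle ();
}

// Called from the per-frame sync path with the current engine value.
void UI_SyncHealth (int health)
{
    if (player_health != health)
    {
        player_health = health;
        game_model.DirtyVariable ("player_health"); // RmlUi updates bound elements
    }
}

On the markup side, any document with data-model="game" can render it as {{ player_health }}.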

Eventually I was describing entire features: "Port the multiplayer menu flow, join game, create game, player setup. Use the same navigation pattern as the other menus, bind the hostname and player name fields to cvars, use the existing button classes." The agent would work through it - creating RML documents, writing RCSS, adding bindings, handling events - and I'd come back to a working feature.

Guardrails Through Pain

The 41-minute autonomous session happened because I'm dumb but fast. Or, more honestly, because I had hit so many problems over the previous few days that I knew exactly where the process would fall over, and I set up guardrails at those points.

Early on the agent would drift. It would make a change that compiled fine but broke something downstream. Or it would fix one issue by introducing a different one I wouldn't notice until later. The code was working but piling up invisible debt: small mismatches, wrong assumptions, things that compiled but weren't quite right. One time a FindMemoryType call in the Vulkan renderer was silently failing - the kind of bug I'd never have caught by reading the code, but it manifested as a visual glitch I could point at and say "fix that."

The fix was simple: run the build after every change. I set up a rule: if the build fails, stop what you're doing and fix the compile error before moving on. Don't pile changes on top of a broken build. That stopped the drift. The agent got immediate feedback. A bad change surfaced in seconds, not hours.

I got a reminder of this recently when I tried using Codex 1 to make a change to the same project. Codex doesn't have my guardrails: no build-after-every-change rule, no CLAUDE.md with architecture context and known pitfalls. It totally broke the build. This isn't me saying one model is better than another. It's that the environment matters more than the model. The same AI that ports entire menu flows in 41 minutes will break your project if you take away the feedback loop.

CLAUDE.md as a Living Document

The other thing that changed how the AI performed was writing things down.

The project has a CLAUDE.md file 2, a document that provides context and rules to the AI agent. It started small. Architecture overview, build commands, a few constraints. But it grew as I learned what the agent needed to know.

Every time the agent made a mistake that came from missing context, I'd add the context to CLAUDE.md. RmlUi doesn't support rgba() syntax; that went in after the agent tried to use it three times. The extern "C" boundary requires exact type matching; that went in after a silent data corruption bug. Input mode transitions need to be deferred to the next frame; that went in after a crash.
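
Paraphrased, that pitfalls section of CLAUDE.md reads something like this (the wording here is mine, not a verbatim excerpt):

## Known pitfalls
- RCSS does not support rgba() syntax. Use hex colors with alpha (#RRGGBBAA).
- Run the build after EVERY change. If it fails, stop and fix the compile
  error before touching anything else.
- The extern "C" boundary requires exact type matching on both sides. A
  mismatched struct layout compiles fine and silently corrupts data.
- Never switch input modes mid-frame. Defer the transition to the next frame.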

It turned into documentation for the AI, not for me. Architecture diagrams, cvar binding references, code conventions, known constraints, common pitfalls. Writing things down cut the mistakes, and fewer mistakes meant I could hand off bigger tasks.

I also set up skills 3, structured instructions for specific recurring tasks like committing code or working with RmlUi markup. These gave the agent the equivalent of muscle memory for patterns I didn't want to explain every time.

The CLAUDE.md today is extensive. It describes the full architecture, the build topology, the C/C++ boundary rules, the input mode state machine, every console command, and a table of engine integration points. It's the file I'd save if I had to restart everything else.

I Don't Read the Code

This is the part that, I suspect, you either already understand or you haven't spent enough time with these tools. I don't read most of the code the AI writes.

I can't, practically speaking. The Vulkan render interface is several hundred lines of pipeline setup, descriptor allocation, memory management. I can follow the broad strokes (it creates a pipeline, it binds textures, it draws triangles), but I couldn't debug a synchronization issue or spot a memory leak by reading it. The cvar binding system uses templates and lambdas in ways I understand conceptually but couldn't write myself.
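
Conceptually, though, two-way sync is a small idea, even if the real cvar_binding.cpp isn't. Something in this family; Cvar_VariableValue and Cvar_SetValue are Quake's real accessors, the binding struct and loop are illustrative:

#include <string>
#include <vector>

extern "C" {
    float Cvar_VariableValue (const char *var_name); // Quake's real accessors
    void  Cvar_SetValue (const char *var_name, float value);
}

struct CvarBinding
{
    std::string name;      // e.g. "scr_crosshairscale"
    float       ui_value;  // the copy the UI's data model binds to
    float       last_seen; // last engine value we reconciled
};

static std::vector<CvarBinding> bindings;

// Once per frame: reconcile both directions without fighting over ownership.
void SyncCvarBindings ()
{
    for (CvarBinding &b : bindings)
    {
        const float engine = Cvar_VariableValue (b.name.c_str ());
        if (engine != b.last_seen) // engine changed it (console, config file)
        {
            b.ui_value = engine;
            b.last_seen = engine;
        }
        else if (b.ui_value != b.last_seen) // UI changed it (slider, checkbox)
        {
            Cvar_SetValue (b.name.c_str (), b.ui_value);
            b.last_seen = b.ui_value;
        }
    }
}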

The build is the first check. If it compiles, the types are right, the interfaces are satisfied, the includes resolve. That catches a ton of stuff right away. After that I run the game. Does the menu work? Does the HUD update? Do the bindings sync? Does input route correctly? If the system behaves correctly, the code is, for my purposes, correct.

The integration layer is 27 source files bridging C and C++. I understand the boundaries. The code inside them is the AI's problem.

Diagram: the integration stack. Quake's C code on top, the extern "C" boundary in the middle, RML and RCSS at the bottom.

That boundary, the extern "C" line in the middle, is where my understanding drops off. The Quake C above it is readable, almost self-documenting. The RML and RCSS below it I authored myself. Everything in between is code I commissioned.

src/
  ui_manager.h/.cpp                     # Public C API (extern "C"), what vkQuake calls
  types/
    input_mode.h                        # UI_INPUT_INACTIVE / MENU_ACTIVE / OVERLAY
    game_state.h                        # Synced game state struct
    cvar_schema.h                       # Console variable metadata
    cvar_provider.h                     # ICvarProvider interface
    command_executor.h                  # ICommandExecutor interface
    notification_state.h                # Notification state types
    video_mode.h                        # Video mode struct
  internal/
    render_interface_vk.h/.cpp          # Custom Vulkan renderer
    vk_allocator.h/.cpp                 # Pool-based Vulkan memory allocator
    rmlui_shaders_embedded.h            # SPIR-V shaders as byte arrays
    system_interface.h/.cpp             # Time/logging/clipboard bridge
    game_data_model.h/.cpp              # Engine state → 50+ RmlUi data bindings
    notification_model.h/.cpp           # Centerprint + notify lines with expiry
    cvar_binding.h/.cpp                 # Two-way cvar ↔ UI sync
    menu_event_handler.h/.cpp           # Menu clicks, action parsing
    quake_cvar_provider.h/.cpp          # ICvarProvider implementation
    quake_command_executor.h/.cpp       # ICommandExecutor implementation
    quake_file_interface.h/.cpp         # RmlUi file I/O bridge
    engine_bridge.h                     # extern "C" declarations
    sdl_key_map.h                       # SDL key code mapping
    sanitize.h                          # String sanitization
    ui_paths.h                          # Path resolution helpers

I designed the boundaries. I know what UI_SyncGameState does and what data it needs. I know the data model maps engine state to named bindings. I know the cvar binding manager handles two-way sync. The code inside those interfaces is the AI's problem.
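
Concretely, that boundary is plain C. UI_SyncGameState is a real entry point; the struct fields here are illustrative stand-ins, not the project's actual layout:

/* ui_manager.h -- the shape of the C boundary. UI_SyncGameState is the real
   entry point; the fields shown are illustrative, not the actual struct. */
#ifdef __cplusplus
extern "C" {
#endif

typedef struct ui_game_state_s
{
    int   health;   /* cl.stats[STAT_HEALTH] */
    int   armor;
    int   ammo;
    float realtime; /* for animations */
} ui_game_state_t;

/* Called by the engine once per frame; the C++ side copies the struct into
   named data bindings. Exact type matching here is non-negotiable: a layout
   mismatch compiles fine and silently corrupts data. */
void UI_SyncGameState (const ui_game_state_t *state);

#ifdef __cplusplus
}
#endif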

Input routing is a good example. Every SDL 4 event hits a decision tree: does the UI want this event? Should the engine handle it instead? I defined the problem. The AI and I worked out the flow together.

Diagram: input event routing. Each SDL event is offered to the UI first; anything it doesn't consume falls through to the engine.
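
In code terms, the decision is roughly this, using the input modes from types/input_mode.h (the exact enumerator spellings are approximations, and ForwardToRmlUi and the escape-key case are illustrative):

#include <SDL.h>

typedef enum
{
    UI_INPUT_INACTIVE,    // gameplay: engine gets everything
    UI_INPUT_MENU_ACTIVE, // menus own mouse and keyboard
    UI_INPUT_OVERLAY      // HUD visible but non-interactive
} ui_input_mode_t;

// Hypothetical: translates the event via sdl_key_map.h into RmlUi Process* calls.
void ForwardToRmlUi (const SDL_Event *ev);

// Returns true if the UI consumed the event; false lets the engine handle it.
bool UI_HandleEvent (const SDL_Event *ev, ui_input_mode_t mode)
{
    switch (mode)
    {
    case UI_INPUT_INACTIVE:
        return false;
    case UI_INPUT_OVERLAY:
        ForwardToRmlUi (ev); // let the HUD observe, never steal input
        return false;
    case UI_INPUT_MENU_ACTIVE:
        if (ev->type == SDL_KEYDOWN && ev->key.keysym.sym == SDLK_ESCAPE)
            return false; // illustrative: the menu toggle key stays with the engine
        ForwardToRmlUi (ev);
        return true;
    }
    return false;
}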

And the stakes matter here. This is a personal project. Nobody ships this. If there's a subtle bug in the Vulkan allocator, I'll find it when it manifests as a visible problem, and I'll ask the AI to fix it. On a team shipping a product, I'd think about this differently. But the point of a proof of concept is to prove the concept.

This isn't a philosophy I'd apply blindly. Take away the clean architecture or the tight build feedback and you're back to reading every line, or more likely, back to writing it yourself.

Design, Not Implementation

The implementation was the bottleneck, and now it's not. I'm designing architecture, dialing in how the HUD spring feels and how much barrel warp to push, and evaluating the result by playing the game. The UI renders to a separate Vulkan texture and gets composited with barrel warp and per-channel chromatic aberration. The HUD shifts on critically-damped springs when you jump or turn. I designed that feel. The AI wrote the shaders and the spring physics. I have no idea what a critically-damped spring equation looks like. But I know when the HUD movement feels wrong.

The HUD shifting on critically-damped springs mid-jump. I designed the feel. The AI wrote the math.
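
For anyone who does want the equation: critically damped means the damping ratio is exactly 1, the fastest possible settle with zero overshoot. A minimal sketch of the math, not the project's actual HUD code:

// Critically damped spring: acceleration a = -omega^2 * (x - target) - 2 * omega * v.
// Semi-implicit Euler integration; omega sets stiffness. Illustrative only.
typedef struct
{
    float value;    // current offset (e.g. HUD y-shift in pixels)
    float velocity;
    float omega;    // angular frequency: higher = snappier spring
} spring_t;

void Spring_Update (spring_t *s, float target, float dt)
{
    const float accel = -s->omega * s->omega * (s->value - target) - 2.0f * s->omega * s->velocity;
    s->velocity += accel * dt;
    s->value += s->velocity * dt;
}

// Usage: kick the velocity on a jump or turn, then relax toward zero.
//   on jump:   hud_spring.velocity += kick;
//   per frame: Spring_Update (&hud_spring, 0.0f, frametime);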

I spent a decade designing UI for games. I know what good game UI feels like. I know the difference between a tooltip that solves a real problem and one that's papering over bad design. AI writing CSS doesn't change that.

Ownership

There will always be value in someone who deeply understands a project, who can debug it and solve problems fast. I'm just not sure reading and writing every line is that skillset anymore. Understanding the system, knowing where it breaks, knowing what question to ask: that might be closer to what matters.

The C++, the Vulkan pipelines, RmlUi's API surface: that would have taken me months to learn well enough to write confidently, if I could have written it at all. The AI did it in days. I don't understand every line. I understand the boundaries and how the system behaves. For a proof of concept, that's enough.

I was never going to write a Vulkan renderer. The project exists because I didn't have to. And now that it exists, I know more about game UI integration than I would have from reading about it. I learn by building. Always have.

Footnotes

  1. OpenAI Codex: https://openai.com/codex

  2. CLAUDE.md is a project context file for Claude Code, Anthropic's CLI agent: https://docs.anthropic.com/en/docs/claude-code

  3. Custom skills in Claude Code: https://docs.anthropic.com/en/docs/claude-code/skills

  4. SDL (Simple DirectMedia Layer): https://www.libsdl.org/

