AI Vertigo
There's a moment that keeps happening. You're working with an agent and it does the thing, the specific thing you were hired to do, and it does it well enough that you have to sit with what that means. Not in the abstract, not "AI will change everything," but in the concrete, right now, on your screen. That feeling is called AI vertigo.
The most recent one happened in Rive 1. Rive's agent built a weapon reticle with an animation state machine while I watched. Tool calls firing, the widget tree populating, the canvas updating, the state machine assembling itself, all inside the editor, in real time. It wasn't perfect. Maybe 70% of the way there on the first pass. But then the loop kicked in. Tweak something, talk to the agent, iterate, watch the next version appear. That felt less like receiving a deliverable and more like working with someone who already read the docs.
Before that it was building an entire Vulkan renderer and RmlUI integration layer for Quake in C++ without ever really understanding what a pointer is. Before that it was running an LLM inside a roguelike's game loop. Before that it was generating working shader code in Unreal's material editor. Each one hit a different layer, the craft, the tools, the production pipeline, and each one compressed a role that used to be load-bearing.
Two years ago I wrote "if you do work on a computer your job will change." These moments are only going to come faster for games.
The Code-to-Canvas Gap
Most agentic tools today write code. If you're a programmer, the agent's output lands in your medium, same files, same diffs, same language. This works. It's why coding agents feel productive.
But most people who make games don't work in code. They work in DCCs 2. Maya, Substance, Rive, the Unreal Editor. The artifact they care about is a scene, a material, a state machine, a widget blueprint. Those artifacts may compile down to code, but code is not how you author them.
When an agent writes C++ and the human works in a visual editor, there's a mismatch. The agent is productive in a space the human can't judge. The human is productive in a space the agent can't reach. Someone has to sit between them and translate. That translation layer is where collaboration breaks down.
The Shared Artifact
The vkQuake-RmlUI project solved this. The agent wrote .rml files. I edited the same .rml files. Hot reload meant both of us got the same feedback loop, change markup, see result, iterate. The breakthrough wasn't that the AI got smarter. It was that we were authoring the same artifact in the same format.
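To make the shared artifact concrete: RML is HTML-like markup styled by an RCSS stylesheet, so both the agent and I could read a diff of it at a glance. A minimal sketch of the kind of file we were both editing (element names and values are illustrative, not from the actual project):

```html
<rml>
<head>
    <title>hud</title>
    <style>
        /* RCSS: CSS-flavored styling, dp units for density-independent sizing */
        #ammo { font-size: 24dp; color: #fffbe6; }
    </style>
</head>
<body>
    <!-- the game patches this text each frame; hot reload re-parses the file on save -->
    <div id="ammo">50</div>
</body>
</rml>
```

Because the file is plain markup, "change markup, see result, iterate" works identically whether the change came from the agent or from me.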
Rive is the cleaner proof. You export a .riv file to your runtime, but the editor and the runtime are operating on the same representation. The state machines, the blend states, the listener bindings, what you see in the editor is what runs in the application. When the agent populates the editor, it's working on the same object the designer sees. The designer tweaks it in the same tool. The collaboration surface is the authoring surface.
Level design tells the same story. I'm building a pipeline that generates Quake maps from natural language: an LLM plans the layout, code emits the geometry, and a converter produces .map files that compile into playable .bsp levels. The critical decision was outputting .map, not .bsp. A .map is a 30-year-old text format 3, brushes, entities, key-value pairs. TrenchBroom reads it natively. The agent writes it natively. When I asked the agent to organize the output into named layers, it was three metadata keys per entity group. No plugin, no exporter.
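For readers who haven't opened one, a .map file looks roughly like this. Entities are blocks of quoted key-value pairs; each brush is a set of planes, each plane defined by three points plus texture parameters. The brush coordinates, texture name, and layer name below are illustrative, not output from the pipeline:

```
{
"classname" "worldspawn"
// one box-shaped brush: six planes, each ( point ) ( point ) ( point ) TEXTURE offsets rot scales
{
( 128 0 0 ) ( 128 1 0 ) ( 128 0 1 ) GROUND1_6 0 0 0 1 1
( 256 0 0 ) ( 256 0 1 ) ( 256 1 0 ) GROUND1_6 0 0 0 1 1
( 0 128 0 ) ( 0 128 1 ) ( 1 128 0 ) GROUND1_6 0 0 0 1 1
( 0 256 0 ) ( 1 256 0 ) ( 0 256 1 ) GROUND1_6 0 0 0 1 1
( 0 0 64 ) ( 1 0 64 ) ( 0 1 64 ) GROUND1_6 0 0 0 1 1
( 0 0 128 ) ( 0 1 128 ) ( 1 0 128 ) GROUND1_6 0 0 0 1 1
}
}
{
"classname" "info_player_start"
"origin" "0 0 24"
}
// a TrenchBroom layer is just a func_group entity with three metadata keys
{
"classname" "func_group"
"_tb_type" "_tb_layer"
"_tb_name" "corridors"
"_tb_id" "1"
}
```

That last block is the whole "named layers" feature: three `_tb_` keys on a group entity, which is why no plugin or exporter was needed.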
So when generated maps had a traversability bug, a corridor wall blocking a doorway that the automated validator missed, I found it the way I'd find it in any hand-built map: opened it in TrenchBroom, toggled layers, and saw the wall that shouldn't be there. The agent didn't need to understand what a Quake map looks like. I didn't need to read 3,000 lines of generator code to find a portal dict bug. We each did what we're good at, because the shared artifact was the authoring format.
Compare all of this to UMG 4 in Unreal. Assets are serialized widget graphs in binary .uasset files. The designer works in a visual editor. The agent works in, what? C++? Even if an agent could generate a valid .uasset, neither party can meaningfully diff it. The authoring format is opaque to the agent, and the agent's natural output is opaque to the designer.
Collaboration between human and agent works when they share an artifact they can both read and write. Code-to-code works. Markup-to-markup works. Code-to-canvas doesn't.
What Gets Compressed
For a decade my job was turning other people's art comps into game UI. Creative work, full of problem solving inside complicated systems, but fundamentally translational: the role of UI implementer was always delivering someone else's vision inside technical constraints. These tools compress the translation step.
This isn't limited to UI. Shader code, material authoring, level layout, anywhere the role is "understand the tool well enough to execute someone else's intent," the gap between intent and artifact is shrinking. Not because the tools are perfect. Because they're good enough to start from, and the iteration loop is fast enough to close the rest.
Production gets compressed too. If an agent can parse a codebase, understand scope, and draft structured work items, a surprising amount of organizational overhead disappears. The soft skills still matter, coordination, judgment, knowing when a plan is wrong, but the mechanical work of translating plans into tickets is exactly the kind of thing these tools eat.
I'm not going to pretend any of this is neutral. Marshall McLuhan, the media theorist who wrote Understanding Media 5 in 1964, had no patience for the "technology is neither good nor bad, it depends how you use it" argument. He compared it to sleepwalking. The medium restructures relationships regardless of intent. Somewhere else in Understanding Media he writes that new technologies "constitute huge collective surgery carried out on the social body with complete disregard for antiseptics." We are performing surgery on how games get made, and we should at least be honest that we're doing it fast and without full understanding of what changes.
Jobs and Roles
McLuhan also wrote in 1964 that automation tends to eliminate jobs. That's the negative result, the vertigo. But he followed it with something better:
"Positively, automation creates roles for people, which is to say depth of involvement in their work and human association that our preceding mechanical technology had destroyed."
— Marshall McLuhan, Understanding Media, 1964
I find this genuinely comforting and I'm suspicious of my own comfort. The distinction between a job and a role matters. The mechanized version of UI development was a job, take the comp, implement it in the engine, hand it back. What automation does, what these tools do, is collapse the mechanical part and leave the rest. And the rest turns out to be the interesting part. Knowing what good UI feels like. Knowing when a layout is fighting the content. Knowing what to ask for. That's not a job, it's a role, and it requires more depth of involvement, not less. But McLuhan was writing about factory automation in 1964, and not everyone who lost a factory job got a role with depth of involvement. Some did. Many didn't. I don't know what the ratio looks like for game development. Nobody does yet.
These tools have been "good" for less than six months, really less than three months. Most organizations haven't figured out how to use them, and the ones that have are doing it in fragments, individuals taking on some amount of risk, trying new tools before the org has blessed the process. People's exposure varies wildly. But that fragmentation won't last. Rive is already shipping an agent. Unity is working on this. The DCC vendors see it coming.
The Bet
The tools that will matter for game development aren't the ones with the best benchmarks. They're the ones most deeply embedded in the DCC the team already uses. An agent that lives inside Rive and understands state machines and listener bindings. An agent inside Substance that speaks material graphs. An agent native to whatever authoring environment the team has built muscle memory around.
The agent has to speak the tool's language, not the programming language underneath it. It has to produce artifacts the human can open, inspect, and iterate on in their normal workflow. And it needs a feedback loop tight enough that both sides can evaluate and redirect in real time.
The question isn't "can AI write game code." It obviously can. The question is whether it can work in the same medium as the people who make the game. For most teams, the answer is still no, not because the models aren't good enough, but because the authoring formats aren't there yet.
The agent needs to live in your DCC. Not beside it. In it.
-b
Footnotes

1. Rive is a real-time interactive design tool with a built-in AI agent: https://rive.app ↩
2. DCC (Digital Content Creation) — the industry term for authoring tools like Maya, Substance, Houdini, and game engine editors where artists and designers do their actual work. ↩
3. The .map format is a human-readable text format for Quake level geometry, originally created by id Software. Brushes are defined as plane intersections, entities as key-value pairs. ↩
4. UMG (Unreal Motion Graphics) — Unreal Engine's built-in UI framework. Widgets are authored visually in the UMG Designer and serialized as binary .uasset files. ↩
5. Marshall McLuhan, Understanding Media: The Extensions of Man, 1964. The source of "the medium is the message." ↩