What 150,000 Lines of AI-Generated TypeScript Actually Looks Like
TL;DR: Inselnova has 179,101 lines of TypeScript across 1,015 files, 1,272 commits, 94 database migrations, and 187 test files. I didn’t write most of it. Here’s what that actually means in practice, and why it’s not the horror story you’ve probably read about.
There’s a post doing the rounds right now. A senior developer who stopped using AI after 150,000 lines of AI-generated code. His conclusion: at 100,000 lines, he wasn’t using AI to code anymore. He was managing an AI that was pretending to code.
I’ve read it. I understand it. And I’ve been on the other side of that number for a while now.
Inselnova has 179,101 lines of TypeScript. It’s a live game with real players: a tick engine, a marketplace, alliances, espionage, combat, research trees, council affairs, population management, and a working economy. Almost none of it was typed by me.
So what’s different?
The numbers
| What | How many |
|---|---|
| TypeScript / TSX files | 1,015 |
| Lines of code | 179,101 |
| Git commits | 1,272 |
| Database migrations | 94 |
| Test files | 187 |
| Service domains | 42 |
Eight daily active players. I built the core in two weeks on a road trip, using Claude Code on a phone.
What 179,000 lines actually looks like
It’s not one massive file. It’s not generated spaghetti.
The structure is layered: controllers call validators, validators call services, services call repos, repos hit the database. Every service domain has its own directory. Attack, marketplace, alliance, messaging, research, buildings, units, espionage, reporting. Each one is a pair of files: a service and a repo.
Here’s what the building service looks like when it starts up:
```typescript
import * as buildingRepo from './building_repo.js';
import * as placeRepo from '../place/place_repo.js';
import * as tickEngine from '../tick/tick_engine.js';
import { getBuilding, getAllBuildings, getSettings } from '../../shared/world_loader.js';
import { scaledCost, buildTime } from '../../shared/formulas.js';
import { futureDateISO, nowISO } from '../../shared/clock.js';
// PlaceBuildingState comes from the domain's type definitions (import omitted in this excerpt).

export async function getPlaceBuildings(placeId: number): Promise<PlaceBuildingState[]> {
  const dbBuildings = await buildingRepo.getPlaceBuildings(placeId);
  const levelMap = new Map(dbBuildings.map(b => [b.building_type_id, b.level]));
  return getAllBuildings().map(def => ({
    buildingTypeId: def.id,
    name: def.name,
    level: levelMap.get(def.id) ?? 0,
    maxLevel: def.maxLevel,
    category: def.category,
    icon: def.icon,
    production: def.production,
  }));
}
```
Consistent. Clear. Could have been written by a careful mid-level developer. Most of the codebase looks like this. The AI followed the patterns it was given.
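The service above leans on `scaledCost` and `buildTime` from `shared/formulas`. The post never shows those formulas, but in this genre of game they're usually geometric per-level scaling. Here's a plausible sketch — the function names come from the import above, while the parameters and the exponential shape are my assumptions:

```typescript
// Hypothetical sketch of shared/formulas.ts. The real formulas aren't shown
// in the post; geometric per-level scaling is assumed for illustration.

// Cost of upgrading to `level`, growing geometrically from a base cost.
export function scaledCost(baseCost: number, growth: number, level: number): number {
  return Math.round(baseCost * Math.pow(growth, level - 1));
}

// Build time in seconds, scaled the same way.
export function buildTime(baseSeconds: number, growth: number, level: number): number {
  return Math.round(baseSeconds * Math.pow(growth, level - 1));
}
```

Keeping formulas in one shared module is part of why the pattern holds: the AI reaches for the helper instead of re-deriving the math in each service.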
The 94 database migrations are all SQL files applied in order. The AI designed the schema from the game mechanics and evolved it as requirements changed. I reviewed them. I didn’t write them.
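"Applied in order" implies some runner that knows which SQL files have already run. A minimal sketch of that selection step, assuming numerically prefixed filenames like `0042_add_espionage.sql` (the naming scheme is my assumption; the post only says they're SQL files applied in order):

```typescript
// Hypothetical migration picker: given all migration files and the set already
// applied, return the pending ones in numeric-prefix order.
export function pendingMigrations(files: string[], applied: Set<string>): string[] {
  return files
    .filter(f => f.endsWith('.sql') && !applied.has(f))
    // parseInt reads the leading digits of the filename, e.g. "0042_..." -> 42
    .sort((a, b) => parseInt(a, 10) - parseInt(b, 10));
}
```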
The 187 test files use Vitest and hit a real in-memory SQLite database. No mocking. Here’s a typical test setup:
```typescript
beforeEach(async () => {
  setupTestDb();
  seedGameConfig(1);
  app = createApp();
  worldId = seedWorld();
  const auth = await createAuthUser('alice@test.com', 'Alice');
  userId = auth.userId;
  token = auth.token;
  placeId = seedPlace(worldId, userId, 5, 5, 'Alice Isle');
});
Every test seeds its own world, its own user, its own island. Isolated, repeatable, and the AI wrote all of it.
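The isolation pattern itself is simple enough to sketch without a database. The names below are illustrative, not the game's real API — the point is that each test context owns its own store, so nothing leaks between tests:

```typescript
// Hypothetical sketch of per-test isolation with an in-memory store.
interface TestContext {
  worlds: Map<number, string>;
  places: Map<number, { worldId: number; userId: number; name: string }>;
  nextId: number;
}

export function createTestContext(): TestContext {
  return { worlds: new Map(), places: new Map(), nextId: 1 };
}

export function seedWorld(ctx: TestContext, name = 'Test World'): number {
  const id = ctx.nextId++;
  ctx.worlds.set(id, name);
  return id;
}

export function seedPlace(ctx: TestContext, worldId: number, userId: number, name: string): number {
  const id = ctx.nextId++;
  ctx.places.set(id, { worldId, userId, name });
  return id;
}
```

Two contexts never share state, so tests can run in any order, or in parallel.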
Why it works at scale
Three things keep 179,000 lines from becoming a mess.
AGENTS.md is the contract
AGENTS.md is the entry point, but the contract isn’t a single file. It’s a spec that lives at the root of the repo: architecture, naming conventions, folder structure, database patterns, and the gotchas the AI needs to know. Things like:
- No SQLite-only SQL in runtime queries: `datetime('now')` silently fails on PostgreSQL. Use `new Date().toISOString()` instead.
- Queue depth is 1: only one building, one research, and one training queued per island at a time.
- Resources are dynamic: resource types come from `world.json`. Never hardcode resource names.
- Place vs Island: the API and the database use “place” and “placeId” everywhere; the frontend uses “island” for historical reasons. Don’t mix them.
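The timestamp rule is also why the building service imports `nowISO` and `futureDateISO` from `shared/clock` instead of calling SQL time functions. The real file isn't shown in the post; a plausible sketch:

```typescript
// Plausible sketch of shared/clock.ts (implementation assumed, not shown in
// the post). Timestamps are generated in application code as ISO strings, so
// the same SQL runs unchanged on both SQLite and PostgreSQL.

export function nowISO(): string {
  return new Date().toISOString();
}

// ISO timestamp `seconds` in the future, e.g. a build-completion time.
export function futureDateISO(seconds: number): string {
  return new Date(Date.now() + seconds * 1000).toISOString();
}
```

A useful side effect: ISO-8601 strings sort correctly as plain strings, so “is this build finished yet?” is a simple string comparison in either database.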
When the AI reads this before touching any code, it knows the rules. It doesn’t reinvent patterns. It fits.
This is the bit most people skip. They ask the AI to write code. I ask the AI to write code that fits a spec. Different output.
Tests come first, or they don’t come at all
187 test files didn’t happen by accident. Every service domain has tests. The AI writes them. CI runs them. If a change breaks tests, it doesn’t ship.
The senior dev who stopped using AI mentioned catching bugs too late. That’s a test coverage problem. The AI doesn’t get to bypass the gate.
Plan mode, always
Before any change, the AI plans. It explains what it’s going to do, what files it’ll touch, what the output should look like. I approve or redirect. Then it executes. Thirty seconds. Prevents most of the half-baked decisions that create technical debt.
What surprised me
The consistency surprised me most. After the first few weeks, new code started to look like old code. Same naming. Same folder structure. Same error handling. Not because I enforced it file by file, but because the context files trained it once and it held.
Consistency at scale is genuinely hard with a human team. Five developers drift. One AI with a good spec file doesn’t.
The things it doesn’t solve
The AI over-fetches. It reaches for data it doesn’t need. I caught a polling loop in the map component that was hammering the API. The poll itself wasn’t wrong, but it was pulling back far more data than it needed, and at scale that adds up fast. I had to find it. The AI fixed it when I pointed at it, but it didn’t catch it on its own.
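One standard fix for that kind of over-fetching is delta polling: each poll asks only for entities changed since the last sync instead of refetching the whole map. A hypothetical sketch of the selection logic, not Inselnova's actual code:

```typescript
// Hypothetical delta-polling helper (illustrative, not the game's real code).
interface Tile {
  id: number;
  updatedAt: string; // ISO-8601 timestamp
}

// Return only the tiles modified after the last sync. ISO-8601 strings
// compare correctly as plain strings, so no Date parsing is needed.
export function changedSince(tiles: Tile[], lastSyncISO: string): Tile[] {
  return tiles.filter(t => t.updatedAt > lastSyncISO);
}
```

On the server side the same idea becomes a `WHERE updated_at > ?` clause, which shrinks both the query and the payload.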
Large components drift upward. The map component is 2,948 lines. That’s on me for not decomposing it sooner. The AI doesn’t volunteer to split files unless you ask.
And it occasionally writes code that looks right and isn’t. Null checks missing. Edge cases skipped. The tests catch most of it.
Is vibe coding the right word?
Vibe coding is having a moment. Collins made it their word of the year for 2025. Most descriptions make it sound chaotic, like you’re just prompting and hoping.
What I do isn’t that. It’s closer to being a technical lead who doesn’t type. I make the decisions. I set the architecture. I write the specs. I review the output. The AI executes and tests.
The velocity is real. 1,272 commits in roughly six months, 4-5 improvements a day from a coffee shop. That’s not vibe coding. That’s just a different way to work.
What 179,000 lines proves
The scale works, if the scaffolding is right. AGENTS.md, test-first, plan mode, code review. Not revolutionary. Just discipline applied to a different kind of executor.
The senior dev who stopped at 150,000 lines was managing chaos. I believe that. I’ve seen what AI code looks like without constraints.
The answer isn’t to stop. It’s to set up the scaffolding before line one.
Key takeaways:
- Write your spec file before you write any code. Treat it like a constitution.
- Tests first. Always. The AI will write them if you ask. Make it non-negotiable.
- Use plan mode. Thirty seconds upfront saves hours later.
- Review migrations. The AI will evolve your schema sensibly, but you need to know what changed.
- Watch file sizes. Large files drift upward and the AI won’t volunteer to split them.
Inselnova is a browser-based island strategy game. If you want to see what 179,000 lines of AI-generated TypeScript produces when it’s live and running, come and play.
Next up: the bot framework. The AI built AI players that test the game automatically. That one’s even stranger.