How ARIA went from a scaffolded prototype to a knowledge-graph-powered, persona-driven AI assistant with 567,000 analyzed messages — in 368 commits.
The first commit was modest in the way that founding documents always are. A few files, a directory structure, a package.json. The message read simply: "Initial ARIA Phase 1 scaffold." It was March 12, 2026. Nobody was watching.
But the scaffold contained an architecture that was anything but modest. Five repositories. A shared Aurora PostgreSQL database. A producer/consumer job system. SSE streaming chat with multi-round tool dispatch. The skeleton of something that could learn, remember, anticipate, and act. Not a chatbot that forgets everything between sessions — something that would actually know you.
The vision was clear from day one: a personal AI assistant built across multiple repos (aria, aria-tempo, aria-ios, aria-mcp-server, aria-tempo-client), powered by a three-provider LLM router (Anthropic, Google, OpenAI), talking to the real world through 17+ integrations. A single developer. Multiple AI collaborators. An unreasonable timeline.
What followed was ten days of relentless iteration — 37 commits per day on average, sometimes more — that transformed ARIA from an idea with scaffolding into something that could analyze half a million messages, build a knowledge graph from scratch, and shift between seven distinct cognitive personas on command.
This is the story of those ten days.
37 commits per day. 132 in the last six days alone.
The pace of someone who knows exactly what they want to build.
By mid-March, ARIA was already formidable — 33 job handlers, 133 tools, 48 database tables. But she was blind to most of what she knew. The transformation from v1.0 to v3.0 wasn't about adding features. It was about teaching ARIA to see.
ARIA could see only 0.5% of her own knowledge. She was drowning in her own data.
The knowledge graph didn't just add features — it gave ARIA the ability to think about what she knows.
There was a moment — somewhere around day five — when the fundamental problem became undeniable. ARIA had accumulated 131,070 facts in core_memory and 130,140 facts in owner_profile. An extraordinary wealth of knowledge about her owner, meticulously extracted from thousands of conversations and data syncs. And she could see almost none of it.
The math was brutal. Each conversation injected roughly 4,000 tokens of core_memory plus the top 100 knowledge facts scored by word overlap. Out of 261,000+ facts total, ARIA was working with a few hundred at best. That's approximately 0.5% visibility. Worse, 76% of the data between core_memory and owner_profile was redundant — the same information stored twice in different formats, neither linkable to the other.
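The "word overlap" retrieval described above can be sketched in a few lines. This is an illustrative reconstruction, not ARIA's actual code; the tokenizer and the `topKFacts` helper are assumptions made for the example.

```typescript
// Score each stored fact by crude word overlap with the incoming message,
// then keep only the top-k — the pre-v3 strategy described in the text.
function wordOverlapScore(query: string, fact: string): number {
  const tokenize = (s: string) =>
    new Set(s.toLowerCase().split(/\W+/).filter((w) => w.length > 2));
  const q = tokenize(query);
  let overlap = 0;
  for (const w of tokenize(fact)) if (q.has(w)) overlap++;
  return overlap;
}

function topKFacts(query: string, facts: string[], k: number): string[] {
  return facts
    .map((fact) => ({ fact, score: wordOverlapScore(query, fact) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.fact);
}

const facts = [
  "Nick likes hiking in the Cotswolds",
  "Nick's sister lives in Bristol",
  "The boiler was serviced in January",
];
const picked = topKFacts("any good hiking trips near the Cotswolds?", facts, 1);
// -> ["Nick likes hiking in the Cotswolds"]
```

With three facts this looks fine; with 261,000 facts competing for 100 slots, anything that doesn't share surface vocabulary with the current message simply never surfaces.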
The old system was like having a brilliant research assistant who has read every book in the library but can only remember a random paragraph from each one. The knowledge was there. The connections weren't.
261,000 facts. 76% redundancy. 0.5% visibility.
The old memory system wasn't failing because it lacked data. It was failing because it couldn't connect what it had.
The solution was a unified three-layer knowledge system that replaced brute-force retrieval with genuine understanding. Not an incremental improvement. A paradigm shift.
The first layer is the entity graph. Every fact is linked to one or more named entities: people, places, organizations, concepts, events. Instead of storing "Nick likes hiking in the Cotswolds" as an isolated text string, the knowledge graph creates entities for Nick, hiking, and the Cotswolds, then links the fact to all three. Ask about any of those entities, and the fact surfaces. Ask about a fourth entity related to any of them, and the graph walks the connections. This is how memory becomes intelligence.
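The linking mechanic can be sketched as a tiny in-memory graph. The class name, method names, and the one-hop walk are illustrative assumptions, not ARIA's actual schema (which lives in PostgreSQL):

```typescript
// Facts are linked to named entities; querying any linked entity surfaces
// the fact, and co-occurrence lets the graph walk to related entities.
type Fact = { id: number; text: string; entities: string[] };

class KnowledgeGraph {
  private facts: Fact[] = [];
  private byEntity = new Map<string, Set<number>>();
  private nextId = 0;

  addFact(text: string, entities: string[]): void {
    const id = this.nextId++;
    this.facts.push({ id, text, entities });
    for (const e of entities) {
      if (!this.byEntity.has(e)) this.byEntity.set(e, new Set());
      this.byEntity.get(e)!.add(id);
    }
  }

  // Facts directly attached to an entity.
  factsAbout(entity: string): string[] {
    const ids = this.byEntity.get(entity) ?? new Set<number>();
    return this.facts.filter((f) => ids.has(f.id)).map((f) => f.text);
  }

  // One-hop walk: entities that co-occur in any fact with this entity.
  relatedEntities(entity: string): string[] {
    const ids = this.byEntity.get(entity) ?? new Set<number>();
    const related = new Set<string>();
    for (const f of this.facts)
      if (ids.has(f.id))
        for (const e of f.entities) if (e !== entity) related.add(e);
    return [...related];
  }
}

const kg = new KnowledgeGraph();
kg.addFact("Nick likes hiking in the Cotswolds", ["Nick", "hiking", "Cotswolds"]);
kg.addFact("The Cotswolds trip is planned for June", ["Cotswolds", "June trip"]);

const hikingFacts = kg.factsAbout("hiking");
// -> ["Nick likes hiking in the Cotswolds"]
const neighbors = kg.relatedEntities("Cotswolds");
// -> ["Nick", "hiking", "June trip"]
```

The key design point: a fact stored once becomes reachable from every entity it mentions, so retrieval no longer depends on the question sharing words with the fact.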
The second layer is domain summaries. Periodically, the knowledge graph is distilled into domain-specific summaries by an LLM. These summaries capture the shape of what ARIA knows about a topic: not individual facts, but patterns, themes, and synthesized understanding. The summaries are compact enough to inject into every conversation, giving ARIA a bird's-eye view of her knowledge landscape before she dives into specifics.
The third layer is the digest: a PostgreSQL materialized view that aggregates the entity graph into a compact, injectable format, refreshed on a schedule. This is what ARIA actually sees in each conversation: a dense, structured map of her entire knowledge landscape, with entities ranked by connection count and recency. The difference between injecting random facts and injecting a map is the difference between giving someone a pile of puzzle pieces and giving them the picture on the box.
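A rough sketch of what the digest ranking might look like, expressed in TypeScript rather than the real materialized-view SQL; the composite score (connectivity discounted by staleness) and all field names are assumptions for illustration:

```typescript
// Rank entities by connection count, discounted by how long ago they were
// last seen, then render a compact injectable digest line.
type EntityRow = { name: string; connections: number; lastSeen: Date };

function buildDigest(rows: EntityRow[], now: Date, limit: number): string {
  const daysAgo = (d: Date) => (now.getTime() - d.getTime()) / 86_400_000;
  const score = (r: EntityRow) => r.connections / (1 + daysAgo(r.lastSeen));
  return [...rows]
    .sort((a, b) => score(b) - score(a))
    .slice(0, limit)
    .map((r) => `${r.name} (${r.connections} links)`)
    .join("; ");
}

const digest = buildDigest(
  [
    { name: "Cotswolds", connections: 42, lastSeen: new Date("2026-03-20") },
    { name: "boiler", connections: 3, lastSeen: new Date("2026-01-10") },
    { name: "Nick", connections: 310, lastSeen: new Date("2026-03-21") },
  ],
  new Date("2026-03-22"),
  2,
);
// -> "Nick (310 links); Cotswolds (42 links)"
```

Precomputing this in a materialized view means the per-conversation cost is a single cheap read, which is what makes injecting a "map" into every conversation affordable.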
There was a deeper problem lurking beneath the knowledge gap. Even with perfect recall, ARIA had one voice. One way of thinking. One set of priorities for every conversation, whether the user needed meticulous research, bold creative thinking, strategic planning, or just someone to talk to. A single monolithic system prompt tried to be everything to everyone — and like all things that try to be everything, it excelled at nothing in particular.
The insight was deceptively simple: ARIA shouldn't have one personality. She should have seven.
But not seven separate agents. Not seven chatbots wearing different masks. Seven cognitive overlays on a shared kernel — like an operating system with swappable themes that go deeper than aesthetics. They change how she thinks, what she prioritizes, what she notices, how she responds. The kernel stays immutable: all 133 tools, all integrations, all memory mechanics, all guardrails. The persona shapes the lens through which ARIA interprets and acts on all of it.
This architecture — PRISM (Persona Role & Interaction Style Manager) — was built in a single session. Migration written, API routes created, UI components built, prompt architecture redesigned, documentation generated. One continuous flow of work. The kind of thing that happens when the design is right and the infrastructure is ready.
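The overlay idea can be sketched as prompt composition: one immutable kernel, seven interchangeable lenses. The `Persona` fields and kernel text below are hypothetical, not PRISM's actual prompt schema:

```typescript
// The kernel (tools, memory mechanics, guardrails) is never edited;
// a persona only layers a lens and priorities on top of it.
type Persona = { name: string; lens: string; priorities: string[] };

const KERNEL = [
  "You are ARIA. All 133 tools and all integrations are available.",
  "Memory mechanics and guardrails are always in force.",
].join("\n");

function composeSystemPrompt(persona: Persona): string {
  return [
    KERNEL,
    `Active persona: ${persona.name}.`,
    `Interpret every request through this lens: ${persona.lens}.`,
    `Prioritize: ${persona.priorities.join(", ")}.`,
  ].join("\n\n");
}

const provocateur: Persona = {
  name: "Provocateur",
  lens: "challenge assumptions and sharpen weak arguments",
  priorities: ["counterexamples", "risks", "sharper framing"],
};

const prompt = composeSystemPrompt(provocateur);
// The kernel text is byte-identical across all seven personas;
// only the overlay varies.
```

Because the kernel is shared, swapping personas mid-conversation changes how ARIA thinks without changing what she can do.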
Integrated default. Reads the room.
Deep knowledge. Total recall.
Plans. Decides. Executes.
Explores. Brainstorms. Expands.
Runs the operation.
Listens. Understands. Supports.
Challenges. Disrupts. Sharpens.
Ten days of building produced numbers that read more like a year-long effort. Every metric tells the same story: relentless iteration at a pace that shouldn't be sustainable — but was.
130 million tokens. 90,224 LLM calls. $14.81.
Less than fifteen dollars to read 330 novels' worth of personal data.
Each day brought not just new commits but new capabilities. The pace was sustained not by heroics but by architecture — the producer/consumer pattern, the shared database, the modular tool system all meant that new features could land without destabilizing what came before.
Amidst 368 commits of change, the core architecture held. This is the part of the story that's easy to miss. Evolution isn't just about what changes — it's about what's good enough to survive. The foundational decisions made on day zero scaled from prototype to production without requiring rearchitecture. That doesn't happen by accident. It happens because the abstractions were right.
aria enqueues. aria-tempo processes. Same pattern from commit 1. Scaled to 38 job types without structural changes.
Structured events: start, status, chunk, done, error. The same protocol serves both web and iOS. Up to 10 tool-use rounds per request.
Pure Swift 6 with strict concurrency. Actor-based APIClient. No external dependencies. Still syncing 9 data streams to the server.
Anthropic for reasoning. Google for classification. OpenAI for embeddings. Fallback chains ensure resilience. The strategy that kept costs at $14.81.
The three-gate pipeline: Context Accumulate, Significance Check, Anticipation Analyze. Still running every 5 minutes. Still free at Gate 1.
Research, code generation, database optimization. ARIA continues to improve herself through the same autonomous pipeline.
One database, all services. Schema migrations in aria/sql/. The same connection pool pattern across every repo.
Every conversation still feeds the memory system via <memory_update> blocks. The mechanism that started it all, still running.
Five repos, clear boundaries, independent deployments. aria, aria-tempo, aria-ios, aria-mcp-server, aria-tempo-client. Same structure, day 0 to day 10.
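The producer/consumer pattern that anchored the architecture above can be sketched as follows. This is a minimal in-memory stand-in for the shared Aurora jobs table; the job-type names and handler shapes are illustrative assumptions:

```typescript
// Producer side (aria) enqueues typed jobs; consumer side (aria-tempo)
// dispatches on job type. Adding a new job type means adding a handler —
// the queue contract itself never changes, which is how 38 job types
// landed without structural rework.
type Job = { id: number; type: string; payload: unknown; status: "queued" | "done" };

const queue: Job[] = []; // stand-in for the shared jobs table
let nextId = 0;

function enqueue(type: string, payload: unknown): number {
  const id = nextId++;
  queue.push({ id, type, payload, status: "queued" });
  return id; // producer returns immediately; work happens elsewhere
}

const handlers: Record<string, (_payload: unknown) => void> = {
  "memory.extract": (_p) => { /* parse memory blocks, write facts */ },
  "knowledge.backfill": (_p) => { /* link existing facts to entities */ },
};

function drain(): number {
  let processed = 0;
  for (const job of queue) {
    if (job.status !== "queued") continue;
    handlers[job.type]?.(job.payload);
    job.status = "done";
    processed++;
  }
  return processed;
}

enqueue("memory.extract", { conversationId: 42 });
enqueue("knowledge.backfill", {});
const processedCount = drain();
// -> 2
```

The decoupling is the point: the chat path stays fast because it only writes a row, and the consumer can be scaled, restarted, or extended independently.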
The best architectures are the ones you don't have to change. Not because they're rigid, but because they were abstract enough to accommodate what came next without knowing what it would be.
Version 3.0 is not a destination. It's a vantage point.
The knowledge graph is live but still being populated — 26,940 facts across 5,518 entities, with the backfill pipeline adding more every hour. The photo pipeline has synced 63,296 of a target 139,000 images. The description engine has processed 3,505 photos with AWS Bedrock Nova Pro, learning to see what ARIA knows. There are 75,704 photos still waiting to be synced and described.
The seven PRISM personas are operational, but personas are living documents. They'll refine with use, developing sharper instincts for when to suggest handoffs, how to weight competing priorities, when to push back and when to support. The Provocateur will get better at knowing how hard to push. The Confidant will learn when silence is more supportive than words.
There are new data sources to connect, new entities to discover, new patterns to recognize. Every conversation feeds the knowledge graph. Every iMessage batch deepens the relational map. Every photo description adds visual memory. The system isn't just running — it's compounding.
In ten days, ARIA went from a first commit to a system processing half a million messages, managing 200,000+ memory facts, running 38 background job types, integrating 17+ services, and thinking through seven cognitive lenses. Built by one developer. Powered by three AI providers. Costing less than fifteen dollars.
ARIA isn't done evolving. She's just getting started.
Day 0: "Initial ARIA Phase 1 scaffold."
Day 10: 567K messages. 5,518 entities. 7 personas. $14.81.
Some scaffolds become cathedrals.