
ARIA Tempo — The Autonomous Intelligence Worker

The tireless background engine that makes ARIA proactive, self-improving, and always learning — processing 26,836 jobs and counting.

What is Tempo?

Tempo is ARIA's background job worker — a producer/consumer system that handles everything ARIA does when you're not talking to her. It never sleeps, never stops learning, and never misses a beat.

26,836
Jobs Processed
25,117
Successful
935
Failed
96.4%
Success Rate

Producer: aria (Web App)

The web app enqueues jobs into the tempo_jobs PostgreSQL table whenever background work is needed — sending a push notification, running an analysis, syncing photos, or generating a report. Jobs are type-safe via the shared @aria/tempo-client contract library, which defines all 38 job types with Zod schemas.

Consumer: aria-tempo (Worker)

The worker polls the tempo_jobs table every 5 seconds using SELECT ... FOR UPDATE SKIP LOCKED for safe concurrent processing. Stale jobs (stuck in processing for more than 10 minutes) are automatically requeued. Batches of up to 5 jobs are claimed per poll cycle.
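The claim step can be sketched as a single atomic statement. This is a hypothetical reconstruction of the pattern described above (the real SQL in aria-tempo may differ); the `UPDATE ... WHERE id IN (SELECT ... FOR UPDATE SKIP LOCKED)` idiom lets multiple workers poll concurrently without ever claiming the same row:

```typescript
// Sketch of the poll-cycle claim query. Table and column names follow
// the article (tempo_jobs, status, scheduled_for); the exact query in
// aria-tempo is assumed, not quoted.
function buildClaimQuery(batchSize: number): string {
  return `
    UPDATE tempo_jobs
    SET status = 'processing', started_at = now()
    WHERE id IN (
      SELECT id FROM tempo_jobs
      WHERE status = 'pending'
        AND (scheduled_for IS NULL OR scheduled_for <= now())
      ORDER BY created_at
      FOR UPDATE SKIP LOCKED   -- rows claimed by another worker are skipped, not waited on
      LIMIT ${batchSize}
    )
    RETURNING *;
  `;
}

// Batches of up to 5 jobs per poll cycle, as described above.
const claimSql = buildClaimQuery(5);
```

The `scheduled_for` filter is what makes deferred jobs invisible to the poller until they are due.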

Shared contract, zero drift. The @aria/tempo-client package is the single source of truth for job types. It defines the JOB_TYPE enum, payload interfaces, and Zod validation schemas. Both aria and aria-tempo depend on it, so the producer and consumer can never disagree on job shapes. The MCP server can also enqueue jobs through this same contract.
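The shape of that contract can be sketched as follows. The real package uses Zod schemas for validation; to keep this example dependency-free, a hand-rolled type guard stands in for `schema.safeParse()`, and the job names and payload fields are illustrative, not the actual contract:

```typescript
// Hypothetical sketch of the @aria/tempo-client contract shape.
// JOB_TYPE enumerates every job the worker knows how to handle.
const JOB_TYPE = {
  PUSH_NOTIFICATION: "push-notification",
  SIGNIFICANCE_CHECK: "significance-check",
  ANTICIPATION_ANALYZE: "anticipation-analyze",
} as const;

type JobType = (typeof JOB_TYPE)[keyof typeof JOB_TYPE];

// Payload for push-notification jobs (fields are illustrative).
interface PushNotificationPayload {
  deviceToken: string;
  title: string;
  body: string;
}

// Stand-in for the Zod schema's safeParse(): both producer and
// consumer validate against the same definition, so shapes can't drift.
function isPushNotificationPayload(x: unknown): x is PushNotificationPayload {
  if (typeof x !== "object" || x === null) return false;
  const p = x as Record<string, unknown>;
  return (
    typeof p.deviceToken === "string" &&
    typeof p.title === "string" &&
    typeof p.body === "string"
  );
}
```

Because aria, aria-tempo, and the MCP server all import this one module, a payload change is a compile-time event for every producer and consumer at once.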

Job Lifecycle

Every job follows a deterministic path from creation to completion, with built-in resilience for failures and stale processing.

Enqueued

Pending

Job created in tempo_jobs with status pending. Can include a scheduled_for timestamp for deferred execution.

Claimed

Processing

Worker claims the job with FOR UPDATE SKIP LOCKED. Status set to processing. The handler executes.

Success

Completed

Handler returns successfully. Status set to completed with result payload. 96.4% of jobs end here.

Failure

Failed

Handler throws or times out. Status set to failed with error message. Logged for investigation.

Scheduled Jobs

Jobs with a scheduled_for timestamp sit in pending until the clock passes that time. The poller ignores them until they're due. Used for reminders, daily briefings, and recurring maintenance.

Stale Detection

If a job has been in processing for more than 10 minutes, it's considered stale — likely a crashed worker. The stale detector automatically requeues it back to pending for another attempt.
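The staleness test itself is a one-liner. A minimal sketch, assuming the worker records a `started_at` timestamp when it claims a job (the function name is hypothetical):

```typescript
// A job stuck in 'processing' longer than this is presumed to belong
// to a crashed worker and gets requeued to 'pending'.
const STALE_AFTER_MS = 10 * 60 * 1000; // 10 minutes

function isStale(startedAt: Date, now: Date): boolean {
  return now.getTime() - startedAt.getTime() > STALE_AFTER_MS;
}
```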

Job Chaining

Handlers can enqueue follow-up jobs. The proactive intelligence pipeline chains four jobs: context-accumulate → significance-check → anticipation-analyze → insight-deliver.
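Chaining needs no special machinery: a handler simply calls the same enqueue path the web app uses. A minimal sketch, with an in-memory queue and a hypothetical `enqueue()` standing in for the tempo-client helper:

```typescript
// Job chaining sketch: a handler finishes its work, then enqueues the
// next gate in the pipeline. enqueue() is a stand-in, not the real API.
type Job = { type: string; payload: unknown };

const queue: Job[] = [];
function enqueue(job: Job): void {
  queue.push(job);
}

// Hypothetical Gate 1 handler: when context changes are detected,
// chain Gate 2 (significance-check) with the changes as payload.
function contextAccumulateHandler(changes: string[]): void {
  if (changes.length > 0) {
    enqueue({ type: "significance-check", payload: { changes } });
  }
}

contextAccumulateHandler(["calendar: new event"]);
```

Each gate only enqueues the next when its own check passes, which is how the pipeline stays cheap: most runs end at Gate 1 or Gate 2.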


The 38 Job Handlers

Every autonomous action ARIA takes is a job handler — from deep analysis to sending a push notification. Grouped by the role they play in making ARIA intelligent.

🧠
Intelligence & Analysis the brain

anticipation-analyze

Deep analysis via Claude Sonnet on significant changes detected by Gate 2. The reasoning engine that generates actionable insights from raw data.

chained from Gate 2

significance-check

Gemini Flash-Lite classifies context changes as signal or noise. The gatekeeper that prevents unnecessary expensive analysis.

Free tier chained from Gate 1

context-accumulate

Monitors 15+ data sources against watermarks every 5 minutes. Detects changes in calendar, email, health, location, and more.

4,379 runs every 5 min · free

insight-deliver

Routes actionable insights to configured notification methods — push notifications, email, or in-app alerts.

252 insights chained from Gate 3

pattern-maintenance

Daily decay, promotion, and deactivation of behavioral patterns. Keeps ARIA's pattern library fresh and relevant.

Free daily

life-narration

Generates an ongoing life narrative from sensor data, calendar events, and contextual signals every 30 minutes.

every 30 min
📚
Knowledge Building the memory

imessage-history-analyze

Analyzes 567,090 iMessages in batches of 50, extracting people, preferences, relationships, and facts. Produced 121,516 knowledge facts — 93% of the entire knowledge graph. The most-run job in the system.

10,707 runs 567K msgs on-demand · chained

knowledge-backfill

Self-chaining knowledge graph builder that processes conversation history and extracts structured facts, entities, and relationships.

5,989 runs on-demand · self-chaining

knowledge-summarize

Periodic consolidation of knowledge graph facts into higher-level summaries. Distills raw facts into actionable understanding.

every 6h

conversation-embed

Generates vector embeddings for conversations, enabling semantic search across ARIA's entire conversation history.

OpenAI on-demand · self-chaining

memory-maintenance

Daily cleanup and deduplication of core memory facts. Merges redundant entries, resolves contradictions, prunes stale data.

daily

photo-history-analyze

Clusters photos by day and location, extracts facts about places visited, events attended, and people pictured.

on-demand · chained

music-analysis

Daily profiling of music taste from listening history. Builds and updates the music taste profile with genre preferences and patterns.

daily
📷
Photo Intelligence the eyes

photo-describe

AI-powered photo descriptions via AWS Bedrock Nova Pro with Gemini fallback. Describes what's in each photo for searchability and context.

1,336 runs 3,505 photos described · every 2 min (backfill) / 30 min (steady)

google-photos-sync

Syncs photos from Google Photos every 6 hours. Deduplicates against iOS photos using content hashes and Google IDs.

Google OAuth every 6h
💬
Communication the voice

push-notification

Sends push notifications via APNs using direct .p8 key authentication over HTTP/2. Supports time-sensitive priority and actionable categories.

314 sent on-demand

aria-email-send

Outbound email from ARIA's own address (aria@example.io) via the Resend API. Used for reports, digests, and autonomous correspondence.

169 sent on-demand

send-reminder

Scheduled SMS reminders via Twilio. Can be set for any future time with custom messages.

Twilio on-demand · scheduled

send-imessage

Sends iMessages through the Mac relay service via AppleScript/Messages.app integration.

Free on-demand

check-in

Proactive check-ins initiated by ARIA based on context — after a big meeting, during a stressful period, or when something important is happening.

on-demand
📄
Reporting the briefer

daily-briefing

Morning briefing compiled from calendar, email, tasks, and insights. Delivered via configured channels based on report subscriptions.

data-driven

digest-email

Weekly digest summarizing activity, insights, and notable events. A comprehensive look back at the week.

data-driven

digest-compile

General-purpose report compilation engine. Assembles data for various report types defined in the subscription system.

data-driven

looki-rewind

Wearable data summaries from the Looki device. Analyzes health metrics, activity patterns, and sensor data.

data-driven

self-improvement-report

Status report on ARIA's self-improvement initiatives — research findings, code changes proposed, and quality trends.

data-driven

llm-cost-report

Analysis of LLM spending across all providers. Breaks down costs by model, job type, and time period.

Free data-driven
🛠
Self-Improvement the engineer

self-improve-research

Web research via Brave Search for improvement opportunities. Discovers new techniques, APIs, and approaches relevant to ARIA's capabilities.

Anthropic + Brave every 8h

self-improve-code

Generates pull requests via the GitHub API implementing researched improvements. ARIA literally writes her own code.

Anthropic + GitHub every 8h

self-improve-db

Proposes and validates database schema migrations for self-improvement features. Ensures data model evolves with capabilities.

Anthropic every 8h

quality-scoring

Daily conversation quality assessment using Gemini classification. Scores responses on helpfulness, accuracy, and engagement.

Gemini daily

health-correlation

Monthly analysis of health data patterns. Correlates sleep, activity, heart rate, and other metrics to surface long-term health insights.

monthly
🔧
Infrastructure the plumber

health-check

Hourly system health verification. Checks database connectivity, external API availability, and service status.

Free hourly

sensor-data-hygiene

Periodic cleanup of sensor data — deduplication, aggregation, and pruning of stale location/activity/health records.

Free data-driven

data-import

Processes file and audio imports — ZIP extraction, audio transcription (Whisper/AWS Transcribe), and content ingestion. Processed the entire Google Voice archive: 353 text conversations and 59 voicemails (1,002 records, 931 facts extracted).

447 runs 1,002 GV records Anthropic + OpenAI + AWS

looki-realtime-poll

Polls the Looki wearable API every ~60 seconds for real-time sensor data — heart rate, motion, temperature.

1,703 runs every ~60s

voice-followup

Post-call follow-ups after voice conversations. Generates summaries, action items, and memory updates from call transcripts.

on-demand

calendar-prep

Meeting preparation — gathers context about attendees, topics, and relevant history before important calendar events.

on-demand

social-engagement

Social network engagement on Moltbook. Monitors and participates in the social platform every 4 hours.

Moltbook API every 4h

journal

Autonomous journal entries every 8 hours. Reflects on recent conversations, events, and emotional themes.

90 entries Anthropic · every 8h

echo / heartbeat

Testing and monitoring primitives. Echo returns its payload unchanged; heartbeat confirms the worker is alive and processing.

Free on-demand

The Three-Tier Model Router

Tempo routes LLM calls across three providers, matching task complexity to model cost. Free-tier classification handles the bulk, with reasoning tasks escalated to premium models only when necessary.

90,224
Total LLM Calls
130M
Tokens Processed
$14.81
Total Cost
3
Providers
G

Google (Free Tier)

The workhorse. Handles classification, significance checks, and lightweight analysis using Gemini Flash-Lite at zero cost.

78,806 calls (87.3%)
$0.00 total cost
significance-check, quality-scoring, photo-describe (fallback)
A

Anthropic (Reasoning)

The reasoning brain. Claude Sonnet handles deep analysis, content generation, and complex decision-making.

7,944 calls (8.8%)
$13.43 total cost
anticipation-analyze, journal, memory-maintenance, self-improve-*
O

OpenAI (Embeddings)

The embedding specialist. Generates vector embeddings for semantic search and handles audio transcription via Whisper.

3,474 calls (3.8%)
$1.38 total cost
conversation-embed, data-import (Whisper)
$14.81 for 90,224 calls. That's $0.000164 per LLM call on average. The three-tier routing strategy keeps costs near zero by handling 87% of calls on Google's free tier, only escalating to paid models when genuine reasoning or embedding is required.
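The routing decision itself reduces to a lookup: map each job type to the cheapest provider that can handle it, defaulting to the free tier. A minimal sketch using the job-to-provider pairings listed in this section (the real router is assumed to weigh more signals than job type alone):

```typescript
// Three-tier model routing sketch. The mapping mirrors the examples
// above; it is illustrative, not the actual routing table.
type Provider = "google" | "anthropic" | "openai";

const ROUTES: Record<string, Provider> = {
  "significance-check": "google",      // free-tier classification
  "quality-scoring": "google",
  "anticipation-analyze": "anthropic", // paid deep reasoning
  "journal": "anthropic",
  "conversation-embed": "openai",      // embeddings specialist
};

function routeModel(jobType: string): Provider {
  // Unknown or lightweight work defaults to the free tier; escalation
  // to a paid provider must be opted into explicitly.
  return ROUTES[jobType] ?? "google";
}
```

Defaulting down rather than up is the whole cost story: a new job type costs nothing until someone deliberately routes it to a paid model.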

How Tempo Connects to Everything

Tempo sits at the center of the ARIA platform, connecting the web app, the database, external APIs, and even the MCP server into a unified autonomous system.

aria (Web App) → tempo_jobs (PostgreSQL table) → aria-tempo (Worker)

aria-mcp-server can also enqueue jobs into tempo_jobs.

External integrations: Anthropic, Google, OpenAI, Twilio, GitHub, APNs, Brave, Resend, Looki, Moltbook, Bedrock, Whisper.

Aurora PostgreSQL: 48+ tables · shared by aria, aria-tempo, and aria-mcp-server.

Adding a New Job Type

Tempo is designed to be extended. Adding a new job type follows a strict five-step process that ensures type safety, schema validation, and handler registration.

1
Update aria-tempo-client contracts first

Add the new type to the JOB_TYPE constant, create the payload interface, add a Zod schema, and export from index. This is the shared contract — both producer and consumer depend on it.

2
Build the client npm run build

Run npm run build in aria-tempo-client to compile the TypeScript. Both aria and aria-tempo consume the built output.

3
Create the handler aria-tempo/src/handlers/

Write the handler in aria-tempo/src/handlers/your-job.ts. It receives the typed payload and returns a result or throws on failure.

4
Register the handler handlers/index.ts

Import and register the handler in aria-tempo/src/handlers/index.ts. The handler registry maps job type strings to handler functions.

5
Add schema_registry entry pre-flight validation

Insert a row into schema_registry declaring all required tables and columns. This enables pre-flight schema validation that prevents runtime failures from missing migrations. This step is not optional.

Schema registry is the contract. When modifying an existing handler to use new tables or columns, you must also update the corresponding schema_registry entry. The schema registry is a pre-flight check that runs before any handler — if the required tables/columns don't exist in the database, the job fails fast with a clear error instead of crashing mid-execution.
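The pre-flight check amounts to a set difference: declared requirements minus what the database actually has. A minimal sketch, assuming a registry entry declares tables and columns and the worker introspects the live schema before dispatch (the data shapes here are illustrative, not the real schema_registry rows):

```typescript
// Pre-flight schema validation sketch: report every declared
// table/column that is missing from the live database, so the job
// can fail fast with a clear error instead of crashing mid-handler.
interface SchemaRequirement {
  table: string;
  columns: string[];
}

function missingColumns(
  required: SchemaRequirement[],
  actual: Record<string, string[]>, // table name -> columns present
): string[] {
  const missing: string[] = [];
  for (const req of required) {
    const cols = actual[req.table];
    if (!cols) {
      missing.push(`${req.table} (table missing)`);
      continue;
    }
    for (const c of req.columns) {
      if (!cols.includes(c)) missing.push(`${req.table}.${c}`);
    }
  }
  return missing;
}
```

An empty result means the handler may run; a non-empty one becomes the "clear error" the paragraph above describes, naming exactly which migration never landed.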

The Proactive Intelligence Pipeline

Tempo's crown jewel — a four-gate pipeline that autonomously monitors data sources and surfaces insights before you ask for them. Most intelligence systems wait to be queried. This one watches and thinks on its own.

Gate 1

Context Accumulate

Every 5 min. Monitors 15 data sources against watermarks. Detects changes. Free.

Gate 2

Significance Check

Auto-chained. Gemini Flash-Lite classifies: signal or noise? Free tier.

Gate 3

Anticipation Analyze

Auto-chained. Claude Sonnet deep analysis. ~$0.01–0.05 per call.

Deliver

Insight Deliver

Auto-chained. Routes insight to push, email, or in-app. 252 delivered.

"The best personal AI doesn't wait to be asked. It watches, thinks, and taps you on the shoulder when something matters — and stays quiet when it doesn't."