The tireless background engine that makes ARIA proactive, self-improving, and always learning — processing 26,836 jobs and counting.
Tempo is ARIA's background job worker — a producer/consumer system that handles everything ARIA does when you're not talking to her. It never sleeps, never stops learning, and never misses a beat.
The web app enqueues jobs into the tempo_jobs PostgreSQL table whenever background work is needed — sending a push notification, running an analysis, syncing photos, or generating a report. Jobs are type-safe via the shared @aria/tempo-client contract library, which defines all 38 job types with Zod schemas.
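A minimal sketch of what that shared contract might look like. The job name and payload fields here are invented for illustration, and the runtime guard stands in for the Zod schema the real package uses; only the overall shape (a `JOB_TYPE` constant, a payload interface, and a validator both sides call) follows the text.

```typescript
// Illustrative slice of an @aria/tempo-client-style contract.
// The real package defines all 38 job types with Zod schemas.
const JOB_TYPE = {
  ECHO: "echo",
  HEARTBEAT: "heartbeat",
  CONTEXT_ACCUMULATE: "context-accumulate",
} as const;

type JobType = (typeof JOB_TYPE)[keyof typeof JOB_TYPE];

interface EchoPayload {
  message: string;
}

// Stand-in for the Zod schema: a runtime guard the producer and
// consumer can both call, so they never disagree on job shapes.
function isEchoPayload(value: unknown): value is EchoPayload {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as EchoPayload).message === "string"
  );
}
```

Because producer and consumer import the same constant and guard, a payload that compiles on the enqueue side is guaranteed to be the payload the handler expects.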
The worker polls the tempo_jobs table every 5 seconds using SELECT ... FOR UPDATE SKIP LOCKED for safe concurrent processing. Stale jobs (stuck in processing for more than 10 minutes) are automatically requeued. Batches of up to 5 jobs are claimed per poll cycle.
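The claim step above can be sketched as a single statement: select due `pending` rows with `FOR UPDATE SKIP LOCKED` so concurrent workers never grab the same job, then flip them to `processing`. The `started_at` column and exact SQL are assumptions; the table, statuses, `scheduled_for` check, and batch size of 5 follow the text.

```typescript
// Builds the claim query a poll cycle might issue. Wrapping the
// locking SELECT in an UPDATE claims and marks jobs atomically.
function buildClaimQuery(batchSize: number = 5): string {
  return `
    UPDATE tempo_jobs
    SET status = 'processing', started_at = now()
    WHERE id IN (
      SELECT id FROM tempo_jobs
      WHERE status = 'pending'
        AND (scheduled_for IS NULL OR scheduled_for <= now())
      ORDER BY created_at
      LIMIT ${batchSize}
      FOR UPDATE SKIP LOCKED
    )
    RETURNING *;
  `;
}
```

`SKIP LOCKED` means a second worker polling at the same moment simply skips rows the first worker has locked, instead of blocking or double-claiming.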
The @aria/tempo-client package is the single source of truth for job types. It defines the JOB_TYPE enum, payload interfaces, and Zod validation schemas. Both aria and aria-tempo depend on it, so the producer and consumer can never disagree on job shapes. The MCP server can also enqueue jobs through this same contract.

Every job follows a deterministic path from creation to completion, with built-in resilience for failures and stale processing.
Job created in tempo_jobs with status pending. Can include a scheduled_for timestamp for deferred execution.
Worker claims the job with FOR UPDATE SKIP LOCKED. Status set to processing. The handler executes.
Handler returns successfully. Status set to completed with result payload. 96.4% of jobs end here.
Handler throws or times out. Status set to failed with error message. Logged for investigation.
Jobs with a scheduled_for timestamp sit in pending until the clock passes that time. The poller ignores them until they're due. Used for reminders, daily briefings, and recurring maintenance.
If a job has been in processing for more than 10 minutes, it's considered stale — likely a crashed worker. The stale detector automatically requeues it back to pending for another attempt.
Handlers can enqueue follow-up jobs. The proactive intelligence pipeline chains four jobs: context-accumulate → significance-check → anticipation-analyze → insight-deliver.
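Chaining can be sketched like this: a handler enqueues the next pipeline stage as part of its own work. The in-memory queue and job names here are illustrative stand-ins for an INSERT into tempo_jobs.

```typescript
// Minimal sketch of handler chaining (Gate 1 → Gate 2 of the
// proactive pipeline). An in-memory array stands in for the
// tempo_jobs table.
type Job = { type: string; payload: unknown };
const queue: Job[] = [];

function enqueue(type: string, payload: unknown): void {
  queue.push({ type, payload });
}

// Gate 1: after accumulating context changes, chain Gate 2 so
// the significance check runs without the web app's involvement.
async function handleContextAccumulate(payload: { changes: string[] }) {
  if (payload.changes.length > 0) {
    enqueue("significance-check", { changes: payload.changes });
  }
}
```

Each stage only enqueues the next when it has something to hand off, which is what lets the cheap gates filter work before the expensive ones run.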
Every autonomous action ARIA takes is a job handler — from deep analysis to sending a push notification. The handlers below are grouped by the role they play in making ARIA intelligent.
Deep analysis via Claude Sonnet on significant changes detected by Gate 2. The reasoning engine that generates actionable insights from raw data.
Gemini Flash-Lite classifies context changes as signal or noise. The gatekeeper that prevents unnecessary expensive analysis.
Monitors 15+ data sources against watermarks every 5 minutes. Detects changes in calendar, email, health, location, and more.
Routes actionable insights to configured notification methods — push notifications, email, or in-app alerts.
Daily decay, promotion, and deactivation of behavioral patterns. Keeps ARIA's pattern library fresh and relevant.
Generates an ongoing life narrative from sensor data, calendar events, and contextual signals every 30 minutes.
Analyzes 567,090 iMessages in batches of 50, extracting people, preferences, relationships, and facts. Produced 121,516 knowledge facts — 93% of the entire knowledge graph. The most-run job in the system.
Self-chaining knowledge graph builder that processes conversation history and extracts structured facts, entities, and relationships.
Periodic consolidation of knowledge graph facts into higher-level summaries. Distills raw facts into actionable understanding.
Generates vector embeddings for conversations, enabling semantic search across all of ARIA's conversation history.
Daily cleanup and deduplication of core memory facts. Merges redundant entries, resolves contradictions, prunes stale data.
Clusters photos by day and location, extracts facts about places visited, events attended, and people pictured.
Daily profiling of music taste from listening history. Builds and updates the music taste profile with genre preferences and patterns.
AI-powered photo descriptions via AWS Bedrock Nova Pro with Gemini fallback. Describes what's in each photo for searchability and context.
Syncs photos from Google Photos every 6 hours. Deduplicates against iOS photos using content hashes and Google IDs.
Sends push notifications via APNs using direct .p8 key authentication over HTTP/2. Supports time-sensitive priority and actionable categories.
Outbound email from ARIA's own address (aria@example.io) via the Resend API. Used for reports, digests, and autonomous correspondence.
Scheduled SMS reminders via Twilio. Can be set for any future time with custom messages.
Sends iMessages through the Mac relay service via AppleScript/Messages.app integration.
Proactive check-ins initiated by ARIA based on context — after a big meeting, during a stressful period, or when something important is happening.
Morning briefing compiled from calendar, email, tasks, and insights. Delivered via configured channels based on report subscriptions.
Weekly digest summarizing activity, insights, and notable events. A comprehensive look back at the week.
General-purpose report compilation engine. Assembles data for various report types defined in the subscription system.
Wearable data summaries from the Looki device. Analyzes health metrics, activity patterns, and sensor data.
Status report on ARIA's self-improvement initiatives — research findings, code changes proposed, and quality trends.
Analysis of LLM spending across all providers. Breaks down costs by model, job type, and time period.
Web research via Brave Search for improvement opportunities. Discovers new techniques, APIs, and approaches relevant to ARIA's capabilities.
Generates pull requests via the GitHub API implementing researched improvements. ARIA literally writes her own code.
Proposes and validates database schema migrations for self-improvement features. Ensures data model evolves with capabilities.
Daily conversation quality assessment using Gemini classification. Scores responses on helpfulness, accuracy, and engagement.
Monthly analysis of health data patterns. Correlates sleep, activity, heart rate, and other metrics to surface long-term health insights.
Hourly system health verification. Checks database connectivity, external API availability, and service status.
Periodic cleanup of sensor data — deduplication, aggregation, and pruning of stale location/activity/health records.
Processes file and audio imports — ZIP extraction, audio transcription (Whisper/AWS Transcribe), and content ingestion. Processed the entire Google Voice archive: 353 text conversations and 59 voicemails (1,002 records, 931 facts extracted).
Polls the Looki wearable API every ~60 seconds for real-time sensor data — heart rate, motion, temperature.
Post-call follow-ups after voice conversations. Generates summaries, action items, and memory updates from call transcripts.
Meeting preparation — gathers context about attendees, topics, and relevant history before important calendar events.
Social network engagement on Moltbook. Monitors and participates in the social platform every 4 hours.
Autonomous journal entries every 8 hours. Reflects on recent conversations, events, and emotional themes.
Testing and monitoring primitives. Echo returns its payload unchanged; heartbeat confirms the worker is alive and processing.
Tempo routes LLM calls across three providers, matching task complexity to model cost. Free-tier classification handles the bulk, with reasoning tasks escalated to premium models only when necessary.
The workhorse. Handles classification, significance checks, and lightweight analysis using Gemini Flash-Lite at zero cost.
The reasoning brain. Claude Sonnet handles deep analysis, content generation, and complex decision-making.
The embedding specialist. Generates vector embeddings for semantic search and handles audio transcription via Whisper.
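The tiering above amounts to a small routing table. The model identifiers and tier names below are illustrative assumptions — the text names the roles (Gemini Flash-Lite for classification, Claude Sonnet for reasoning, an embedding/transcription specialist) but not the exact routing code.

```typescript
// Hypothetical sketch of cost-tiered model routing: match task
// complexity to model cost, escalating only when necessary.
type Tier = "classification" | "reasoning" | "embedding";

const MODEL_FOR_TIER: Record<Tier, string> = {
  classification: "gemini-flash-lite", // free tier, handles the bulk
  reasoning: "claude-sonnet",          // premium, deep analysis only
  embedding: "text-embedding-model",   // vectors for semantic search
};

function pickModel(tier: Tier): string {
  return MODEL_FOR_TIER[tier];
}
```

The design choice is that the free classifier sits in front of the premium model, so most jobs never incur a paid call at all.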
Tempo sits at the center of the ARIA platform, connecting the web app, the database, external APIs, and even the MCP server into a unified autonomous system.
Tempo is designed to be extended. Adding a new job type follows a strict five-step process that ensures type safety, schema validation, and handler registration.
Add the new type to the JOB_TYPE constant, create the payload interface, add a Zod schema, and export from index. This is the shared contract — both producer and consumer depend on it.
Run npm run build in aria-tempo-client to compile the TypeScript. Both aria and aria-tempo consume the built output.
Write the handler in aria-tempo/src/handlers/your-job.ts. It receives the typed payload and returns a result or throws on failure.
Import and register the handler in aria-tempo/src/handlers/index.ts. The handler registry maps job type strings to handler functions.
Insert a row into schema_registry declaring all required tables and columns. This enables pre-flight schema validation that prevents runtime failures from missing migrations. This step is not optional.
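The five steps above can be sketched end to end for a hypothetical `photo-tag` job. The file paths follow the text; the job name, payload fields, and schema_registry columns are invented for illustration.

```typescript
// Step 1 — aria-tempo-client: extend the shared contract.
const JOB_TYPE = { PHOTO_TAG: "photo-tag" } as const;
interface PhotoTagPayload { photoId: string }
// (Step 2 is just `npm run build` in aria-tempo-client.)

// Step 3 — aria-tempo/src/handlers/photo-tag.ts: the handler
// receives the typed payload and returns a result or throws.
async function handlePhotoTag(payload: PhotoTagPayload) {
  return { photoId: payload.photoId, tags: ["placeholder"] };
}

// Step 4 — aria-tempo/src/handlers/index.ts: register it in the
// map from job type strings to handler functions.
const handlers: Record<string, (p: any) => Promise<unknown>> = {
  [JOB_TYPE.PHOTO_TAG]: handlePhotoTag,
};

// Step 5 — declare required tables in schema_registry so the
// pre-flight check can fail fast (illustrative SQL):
//   INSERT INTO schema_registry (job_type, required_tables)
//   VALUES ('photo-tag', '{photo_tags}');
```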
The schema_registry entry powers a pre-flight check that runs before any handler — if the required tables/columns don't exist in the database, the job fails fast with a clear error instead of crashing mid-execution.

Tempo's crown jewel — a four-gate pipeline that autonomously monitors data sources and surfaces insights before you ask for them. Most intelligence systems wait to be queried. This one watches and thinks on its own.
Every 5 min. Monitors 15 data sources against watermarks. Detects changes. Free.
Auto-chained. Gemini Flash-Lite classifies: signal or noise? Free tier.
Auto-chained. Claude Sonnet deep analysis. ~$0.01–0.05 per call.
Auto-chained. Routes insight to push, email, or in-app. 252 delivered.
"The best personal AI doesn't wait to be asked. It watches, thinks, and taps you on the shoulder when something matters — and stays quiet when it doesn't."