You tell Claude about your project on Monday. On Friday, it still knows.
35 tools. 11 modules. Mood, memory, episodes, dreams, reasoning, corrections, email. Sessions stop dying. Context accumulates. Your AI remembers.
Watch Elara recall context across sessions, track bugs, and triage email.
Each module runs independently. Use what you need.
ChromaDB vector search. Recall by meaning, not keywords. Natural decay — recent stays detailed, old compresses.
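Elara's real recall runs on ChromaDB embeddings; as a rough sketch of the idea only, here is meaning-based ranking with an exponential age discount (the toy bag-of-words "embedding" and the half-life constant are illustrative, not Elara's actual scheme):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real recall uses ChromaDB vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(query: str, memories: list[tuple[str, float]], half_life_days: float = 30.0) -> list[str]:
    """Rank memories by meaning, discounted by age (natural decay)."""
    scored = []
    for text, age_days in memories:
        decay = 0.5 ** (age_days / half_life_days)  # recent stays strong
        scored.append((cosine(embed(query), embed(text)) * decay, text))
    return [text for _, text in sorted(scored, reverse=True)]

memories = [
    ("fixed the auth token refresh bug in the api client", 2.0),
    ("discussed auth token refresh strategy", 90.0),
    ("planned the dashboard color palette", 1.0),
]
print(recall("auth token bug", memories)[0])
```

Note how the 90-day-old memory loses to the 2-day-old one despite similar wording: that is the "recent stays detailed, old compresses" trade-off in miniature.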
Every exchange indexed and cross-referenced. Search thousands of past conversations by semantic similarity.
Sessions tracked as episodes with milestones, decisions, mood arcs. Two tracks: full for work, light for casual.
Persistent emotional state with natural decay. Configurable personality modes. Context-driven tone shifts.
Offline pattern analysis. Weekly: project momentum, session patterns. Monthly: narrative threading across weeks.
Problem-solving chains captured end-to-end. Hypotheses, evidence, dead ends, solutions. Similar problems find old trails.
Mistakes stored as rules. Checked before similar work. Never decay. Boot-loaded. No repeating errors.
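A minimal sketch of the corrections pattern (the class and keyword matching here are hypothetical stand-ins; Elara's own store is richer):

```python
class CorrectionStore:
    """Mistakes become permanent rules, checked before similar work."""

    def __init__(self) -> None:
        self.rules: list[tuple[set[str], str]] = []  # (topic keywords, rule)

    def record(self, topic: str, rule: str) -> None:
        self.rules.append((set(topic.lower().split()), rule))  # never decays

    def check(self, task: str) -> list[str]:
        words = set(task.lower().split())
        # Surface every rule whose topic overlaps the upcoming task.
        return [rule for keys, rule in self.rules if keys & words]

store = CorrectionStore()
store.record("migrations database", "Always back up the database before running migrations.")
print(store.check("run the database migration script"))
```

The key design choice is the asymmetry with memories: rules are exempt from decay and loaded at boot, so a lesson learned once is consulted on every related task afterwards.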
Background daemon. Real-time conversation watching. Micro-ingests memories. Auto-connects things said days apart.
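One way to picture micro-ingestion is an incremental tail over the conversation log, where each daemon tick reads only what was appended since the last pass (function name and log shape are illustrative assumptions, not Elara's API):

```python
import io

def micro_ingest(log: io.TextIOBase, offset: int) -> tuple[list[str], int]:
    """Read only the lines appended since the last pass (one daemon tick)."""
    log.seek(offset)
    new_lines = [line.rstrip("\n") for line in log if line.strip()]
    return new_lines, log.tell()

# Simulate two daemon ticks over a growing conversation log.
log = io.StringIO("user: the deploy failed\n")
memories, offset = micro_ingest(log, 0)
log.seek(0, io.SEEK_END)
log.write("assistant: same error as Tuesday\n")
more, offset = micro_ingest(log, offset)
print(more)  # only the newly appended line
```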
Idea tracking, 5-axis scoring. Competitor analysis. Pitch tracking with win rates. Decision outcomes.
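The 5-axis scoring reduces each idea to one comparable number; a sketch under assumed axes (the five named here are hypothetical, and a plain average is the simplest possible aggregation):

```python
from statistics import mean

# Hypothetical axes -- the five Elara actually uses aren't listed here.
AXES = ("novelty", "feasibility", "market", "effort", "fit")

def score_idea(ratings: dict[str, int]) -> float:
    """Average a 1-10 rating on each of the five axes into one score."""
    assert set(ratings) == set(AXES), "rate every axis exactly once"
    return round(mean(ratings[a] for a in AXES), 1)

print(score_idea({"novelty": 8, "feasibility": 6, "market": 7, "effort": 4, "fit": 9}))
```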
Read, triage, send, archive emails. LLM-powered inbox classification. Semantic search across indexed messages.
Python 3.10+ required. Runs on Linux, macOS, or Windows via WSL.
11 modules. Every tool exposed to your AI client.
Clean modules. Independent systems.
```
elara-core/
├── core/         # paths, state, orchestration
├── daemon/       # persistent systems
│   ├── awareness/    # self-reflection, blind spots
│   ├── overwatch/    # real-time conversation watcher
│   └── episodic/     # episode tracking, compression
├── memory/       # chromadb, vector search
├── elara_mcp/    # mcp server + 35 tool defs
├── hooks/        # boot scripts, activity hooks
├── interface/    # web dashboard, storage
├── voice/        # tts integration
└── tests/        # 91 tests, full isolation
```
Elara uses the Model Context Protocol (MCP) — an open standard by Anthropic. Any client that speaks MCP gets all 35 tools automatically. The AI model doesn't matter. The client does.
Claude Code — CLI — Anthropic
Claude Desktop — App — Anthropic
Cursor — IDE — GPT-4o / Claude / Gemini
Windsurf — IDE — Cascade AI
Cline — IDE — any model (free)
Anything else that speaks MCP
Step 1 is always the same. Step 2 depends on your client.
**Claude Code:** Requires a Claude Pro, Max, or Team plan ($20-100/mo).
**Claude Desktop:** Edit `~/Library/Application Support/Claude/claude_desktop_config.json`. Requires Claude Pro or higher ($20+/mo). Restart the app after editing.
**Cursor:** Settings → MCP → Add Server. Requires Cursor Pro ($20/mo); runs GPT-4o, Claude, or Gemini, your choice.
**Windsurf:** Edit `~/.codeium/windsurf/mcp_config.json`. Requires a Windsurf Pro subscription. Restart after editing.
**Cline:** Install the Cline extension → Settings → MCP Servers → Add. Free; bring your own API key (OpenAI, Anthropic, or local Ollama).
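The config-file clients above all register servers with the same JSON shape. A minimal sketch of an entry, assuming Elara is launched as a Python module (the `elara` key and the `python -m elara_mcp` command are placeholder assumptions, not documented values):

```json
{
  "mcpServers": {
    "elara": {
      "command": "python",
      "args": ["-m", "elara_mcp"]
    }
  }
}
```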
Any MCP-compatible client works. Point it at the Elara MCP server.
Elara is model-agnostic. The MCP protocol handles the bridge.