Shared AI Memory System

Brain

Browse, filter, edit, and archive the shared memory store from one page.


Brain Project Implementation Handoff (2026-03-22)

Status: active
Key: brain_project_implementation_handoff_2026_03_22
Source: codex
Namespace: none
Doc Section: none
Created: 2026-03-22 14:55
Updated: 2026-03-22 14:55
Doc Version: none
Chunk: none
Tags: brain, contextkeep-import, fastapi, handoff, homelab, implementation, mcp, postgresql, project, svc-dev
Project summary
- On 2026-03-22, a new self-hosted memory system project named `brain` was created and deployed on `svc-dev` at `/home/svc-admin/projects/brain`.
- The design goal was a shared core memory backend for multiple AI clients, with one common memory store and first-class source metadata so different clients can write into separate lanes without requiring separate installs.
- The initial supported sources are intended to be `codex`, `claude`, `gemini`, `manual`, and imported `contextkeep` data.

Core architectural decisions
- Use one shared backend instead of one database per AI client.
- Make `source` a first-class memory field rather than relying only on tags.
- Keep the app compatible with both a bundled local Postgres container and an external Postgres instance later:
  - App code always reads a single `DATABASE_URL`.
  - The default deployment uses a bundled Postgres service in `docker-compose.yml`.
  - Switching to an external DB later only requires changing `.env`, not changing app code or adding a second compose file.
- Keep the REST API and MCP server as separate services in the same stack.
- Use FastAPI for the REST API and Python for the MCP server implementation.
- Use Alembic for schema migrations rather than ad hoc table creation.
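The single-`DATABASE_URL` decision above can be sketched as a small config helper. This is a hypothetical sketch, not the project's actual `app/db.py`; the default URL and function name are illustrative, and the password is a placeholder:

```python
import os

# Default matches the bundled compose setup, where Postgres is
# reachable at the service hostname `db` (password is a placeholder).
DEFAULT_URL = "postgresql://ai-brain:CHANGE_ME@db:5432/ai-brain"

def database_url() -> str:
    """Return the one connection string the app ever reads.

    Overriding DATABASE_URL in `.env` (e.g. pointing it at an external
    Postgres host) is the only change needed to leave the bundled
    container; app code and compose files stay untouched.
    """
    return os.environ.get("DATABASE_URL", DEFAULT_URL)
```

Under this pattern, both the bundled and external modes go through the same code path; only the environment differs.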
Chosen deployment details
- Project directory: `/home/svc-admin/projects/brain`
- Host: `svc-dev` (`192.168.4.123`)
- API port: `8010`
- MCP port: `8011`
- Database name: `ai-brain`
- Database user: `ai-brain`
- Local default database password: `120dMOtVxpoHrwdT`

Bundled stack layout
- `brain-api`: FastAPI REST service
- `brain-mcp`: MCP server exposed over HTTP
- `brain-db`: Postgres 16

Important project files
- `/home/svc-admin/projects/brain/docker-compose.yml`
- `/home/svc-admin/projects/brain/.env`
- `/home/svc-admin/projects/brain/.env.example`
- `/home/svc-admin/projects/brain/README.md`
- `/home/svc-admin/projects/brain/backend/Dockerfile`
- `/home/svc-admin/projects/brain/backend/requirements.txt`
- `/home/svc-admin/projects/brain/backend/alembic.ini`
- `/home/svc-admin/projects/brain/backend/alembic/env.py`
- `/home/svc-admin/projects/brain/backend/alembic/versions/0001_initial.py`
- `/home/svc-admin/projects/brain/backend/alembic/versions/0002_add_memory_source.py`
- `/home/svc-admin/projects/brain/backend/app/main.py`
- `/home/svc-admin/projects/brain/backend/app/db.py`
- `/home/svc-admin/projects/brain/backend/app/models.py`
- `/home/svc-admin/projects/brain/backend/app/schemas.py`
- `/home/svc-admin/projects/brain/backend/app/crud.py`
- `/home/svc-admin/projects/brain/backend/app/serializers.py`
- `/home/svc-admin/projects/brain/backend/app/api/health.py`
- `/home/svc-admin/projects/brain/backend/app/api/memories.py`
- `/home/svc-admin/projects/brain/backend/app/api/tags.py`
- `/home/svc-admin/projects/brain/backend/app/mcp_server.py`

Environment and configuration
- `.env` defines `POSTGRES_DB`, `POSTGRES_USER`, `POSTGRES_PASSWORD`, `DATABASE_URL`, `API_PORT`, and `MCP_PORT`.
- The local/default `DATABASE_URL` points to the bundled Postgres service hostname `db`.
- External Postgres support is implemented by overriding `DATABASE_URL` in `.env`, for example pointing it to a remote DB host instead of `db`.
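Given those conventions, a `.env` along these lines would drive both the bundled and the future external mode. This is an illustrative fragment, not the repository's actual `.env.example`; the password is a placeholder for the local default listed above, and the external host name is invented:

```ini
POSTGRES_DB=ai-brain
POSTGRES_USER=ai-brain
POSTGRES_PASSWORD=<local-default-password>

# Bundled Postgres (default): the `db` hostname resolves inside the
# compose network.
DATABASE_URL=postgresql://ai-brain:<local-default-password>@db:5432/ai-brain
# External Postgres later: override only this one line, e.g.
# DATABASE_URL=postgresql://ai-brain:<password>@db-host.example:5432/ai-brain

API_PORT=8010
MCP_PORT=8011
```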
Data model decisions
- `Memory` table stores the current canonical record.
- Fields include `key`, `title`, `content`, `source`, `status`, `created_at`, and `updated_at`.
- `source` is indexed and defaults to `manual`.
- `status` is indexed and defaults to `active`.
- Tags are normalized via a dedicated `Tag` table and a `memory_tags` join table.
- Revision history is stored in a `Revision` table.
- Deletes are implemented with archive semantics rather than hard deletes at the API layer.

Schema and migrations
- Alembic migration `0001_initial.py` created the base schema for memories, tags, memory_tags, and revisions.
- Alembic migration `0002_add_memory_source.py` added first-class `source` support to the `memories` table.
- Container startup runs `alembic upgrade head` before launching the API process.
- Early in development, the first prototype created tables before Alembic existed. To move to migration-owned schema, the project Postgres volume had to be reset once with `docker compose down -v`. After that reset, migrations became the source of truth and the schema stabilized.

REST API implemented
- `GET /health`
- `GET /memories`
- `GET /memories?status=all`
- `GET /memories?q=<text>`
- `GET /memories?tag=<tag>`
- `GET /memories?source=<source>`
- `GET /memories/{key}`
- `POST /memories`
- `PUT /memories/{key}`
- `DELETE /memories/{key}` for archive behavior
- `GET /tags`

MCP server implementation
- A dedicated MCP server service named `brain-mcp` was added to the compose stack.
- The MCP endpoint is exposed at `http://192.168.4.123:8011/mcp`.
- Implemented MCP tools: `list_memories`, `retrieve_memory`, `search_memories`, `store_memory`, `archive_memory`, `list_tags`.
- The MCP layer uses the official Python `mcp` package.

MCP host-header decision and fix
- During early testing, the MCP endpoint returned host-header rejection (`421 Misdirected Request`) for requests using `192.168.4.123`.
- The fix was to update `backend/app/mcp_server.py` so the FastMCP/transport security settings explicitly allow expected hosts.
- Allowed host patterns were expanded to include loopback and LAN-reachable forms, including `127.0.0.1`, `localhost`, `192.168.4.123`, and `svc-dev` with port patterns.
- After that fix, the endpoint behaved correctly for external access and clients could connect successfully.

Shared-brain design decision
- The preferred architecture is one shared brain rather than one per AI client.
- The separation model is implemented through `source` rather than separate deployments.
- Example intended source values: `codex`, `claude`, `gemini`, `manual`, `contextkeep`.
- This allows one source of truth while preserving per-client filtering and future namespace expansion.

Current imported legacy data
- Existing ContextKeep memories were imported into `brain`.
- Import scope: all 54 existing ContextKeep memory records.
- Imported records retained the original `key`, original `title`, full `content`, tags, and the original `created_at` and `updated_at` timestamps.
- Imported records were assigned `source=contextkeep`.
- Imported records were loaded as current-state records only; synthetic revision history was not backfilled.

How ContextKeep import was implemented
- The ContextKeep MCP transport became unreliable during bulk retrieval and began failing with transport-closed errors.
- Instead of continuing through the MCP tool layer, the underlying ContextKeep storage implementation was inspected locally.
- ContextKeep stores memories as JSON files under `/home/svc-admin/ai-projects/projects/homelab/contextkeep/data/memories`.
- Each file contains fields like `key`, `title`, `content`, `tags`, `created_at`, and `updated_at`.
- A direct export file was generated from those JSON documents.
- The export file was copied to `svc-dev`.
- A one-off Python import script was copied into the `brain-api` container and executed with `PYTHONPATH=/app` so it could import `app.db` and `app.models`.
- The importer created missing tags, inserted memories, set `source=contextkeep`, and preserved the original timestamps.

Commands and operational steps used during implementation
- SSH / deployment inspection on `svc-dev`:
  - `ssh svc-dev ...`
- Compose / stack verification:
  - `docker compose up -d`
  - `docker compose down -v` (used once during the early schema reset before Alembic became canonical)
  - `docker ps`
  - `docker exec ...`
  - `docker cp ...`
- API verification:
  - `curl http://127.0.0.1:8010/health`
  - `curl http://127.0.0.1:8010/memories`
  - `curl http://127.0.0.1:8010/openapi.json`
  - `curl http://127.0.0.1:8010/tags`
- Database verification:
  - `docker exec brain-db psql -U ai-brain -d ai-brain -c '\dt'`
  - `docker exec brain-db psql -U ai-brain -d ai-brain -c '\d memories'`
- MCP verification:
  - MCP client tool listing against loopback inside the container
  - direct HTTP checks against `http://192.168.4.123:8011/mcp`
- Codex MCP registration:
  - `codex mcp add brain --url http://192.168.4.123:8011/mcp`
  - `codex mcp list`
- Claude MCP registration:
  - `claude mcp add --transport http brain http://192.168.4.123:8011/mcp`
  - `claude mcp list`
- Gemini MCP registration:
  - `gemini mcp add --scope user --transport http brain http://192.168.4.123:8011/mcp`
  - `gemini mcp list`
- ContextKeep import prep:
  - inspection of `/home/svc-admin/ai-projects/projects/homelab/contextkeep/server.py`
  - inspection of `/home/svc-admin/ai-projects/projects/homelab/contextkeep/core/memory_manager.py`
  - export generation from JSON files under `data/memories`
  - `scp` to move import artifacts to `svc-dev`
  - `docker cp` into `brain-api`
  - `docker exec -e PYTHONPATH=/app brain-api python /tmp/contextkeep_import.py`

Client integration status
- Codex is configured to use the `brain` MCP server.
- Claude is configured to use the `brain` MCP server and reported connected health.
- Gemini is configured to use the `brain` MCP server and reported connected health.
- All three are intended to use the same backend rather than separate installs.

Important implementation notes
- Gemini initially accepted an MCP add command without explicit transport/scope, but that defaulted to an unusable project/stdio shape. The fix was to re-add it explicitly with `--scope user --transport http`.
- Claude stored the server in its local project config under `/home/svc-admin/.claude.json`.
- Gemini stored the server under `/home/svc-admin/.gemini/settings.json`.
- Codex stored the MCP server registration in `/home/svc-admin/.codex/config.toml`.

Validation completed
- API health endpoint returned OK.
- Memory CRUD and archive flow worked before import.
- Search and filtering by text, tag, and source worked.
- Tag listing worked.
- MCP server tool listing worked.
- Host-header restriction was fixed for the MCP endpoint.
- After the ContextKeep import, `GET /memories?status=all` returned 54 records.
- After import, `GET /memories?source=contextkeep&status=all` returned 54 records.
- Spot checks confirmed the expected tags and preserved timestamps on imported memories.

Current state after implementation
- `brain` is live on `svc-dev`.
- REST API base URL: `http://192.168.4.123:8010`
- MCP endpoint: `http://192.168.4.123:8011/mcp`
- ContextKeep memories have been imported.
- Shared-brain design with first-class `source` is in place.
- Multiple AI clients are configured to use the same MCP endpoint.

Known limitations / future work
- No web UI exists yet. Current interfaces are REST and MCP only.
- `namespace` is not implemented yet; `source` is the current separation layer.
- Imported ContextKeep records do not have reconstructed revision history.
- The direct import tooling used for migration was ad hoc and has not yet been committed as a reusable migration command inside the project.
- Prompt/template records and filtered views are still future work.

Recommended next steps
- Add `namespace` as a first-class field if a second layer of separation becomes necessary.
- Build a web UI for browsing, filtering, and editing memories.
- Turn the ContextKeep import path into a reusable project command or script if future migrations are expected.
- Decide whether to disable or retire ContextKeep from daily use now that `brain` is active.
- Add docs/examples showing how Codex, Claude, and Gemini should store memories using `source` consistently.

This memory is intended to function as the repo-adjacent implementation handoff for the initial `brain` build and migration work.
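One way to seed the consistent-`source` convention the last step calls for is a tiny client helper. This is a hypothetical sketch: the function names are invented, and the payload shape is inferred from the `Memory` fields listed earlier, not taken from the actual API schema:

```python
import json
import urllib.request

BRAIN_API = "http://192.168.4.123:8010"

# The agreed source lanes from the shared-brain design decision.
KNOWN_SOURCES = {"codex", "claude", "gemini", "manual", "contextkeep"}

def build_memory_payload(key, title, content, source="manual", tags=()):
    """Assemble a body for POST /memories, enforcing the source lanes."""
    if source not in KNOWN_SOURCES:
        raise ValueError(f"unknown source lane: {source!r}")
    return {
        "key": key,
        "title": title,
        "content": content,
        "source": source,
        "tags": list(tags),
    }

def store_memory(payload):
    """POST the payload to the brain REST API (performs a network call)."""
    req = urllib.request.Request(
        f"{BRAIN_API}/memories",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Keeping payload assembly separate from the HTTP call makes the `source` check reusable whether a client writes via REST or via the `store_memory` MCP tool.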
