Shared AI Memory System

Brain

Browse, filter, edit, and archive the shared memory store from one page.

Memories (1)

ContextKeep / MCP Memory Discussion Handoff (2026-03-19)

Status: active
Key: contextkeep_mcp_memory_discussion_handoff_2026_03_19
Source: contextkeep
Namespace: none
Doc Section: none
Created: 2026-03-19 02:18
Updated: 2026-03-19 02:18
Doc Version: none
Chunk: none
Tags: codex, contextkeep, handoff, homelab, mcp, memory-system
# ContextKeep / MCP Memory Discussion Handoff

Date: 2026-03-19 UTC
Host: svc-ai
Primary focus: evaluating ContextKeep as an AI memory system, understanding how it works with Codex CLI, identifying its limitations, and capturing the user's desired future-state memory features.

## Current ContextKeep Deployment State

ContextKeep is installed on `svc-ai` at: `/home/svc-admin/ai-projects/projects/homelab/contextkeep`

Services:
- `contextkeep-server`
- `contextkeep-webui`

Ports:
- Web UI: `5000/tcp`
- SSE MCP endpoint: `5100/tcp`

Access:
- Web UI reachable at `http://svc-ai:5000` and `http://192.168.4.117:5000`
- SSE endpoint reachable at `http://svc-ai:5100/sse`

Firewall on `svc-ai`:
- UFW allows `5000/tcp` from `192.168.4.0/24`
- UFW allows `5100/tcp` from `192.168.4.0/24`
- This matches the current LAN scoping used for SSH on `svc-ai`

Codex CLI integration:
- ContextKeep was added as a Codex MCP server
- `codex mcp list` shows:
  - name: `contextkeep`
  - url: `http://127.0.0.1:5100/sse`
  - status: `enabled`
  - auth: `Unsupported`
- `Auth Unsupported` was explained as normal for a local open SSE endpoint without Codex-managed authentication
- User was told to start a fresh Codex session with:
  - `codex`
  - or `codex "Check ContextKeep for relevant memory before starting."`
- User initially tried an invalid command:
  - `codex contextkeep enabled`
- Clarified the correct flow:
  - `codex mcp list`
  - then `codex`

## Important Correction About ContextKeep Storage

Earlier browsing/README language suggested SQLite-backed storage, but the actual installed code on `svc-ai` does NOT use SQLite.
It stores each memory as a JSON file in: `/home/svc-admin/ai-projects/projects/homelab/contextkeep/data/memories`

Implementation detail from the code:
- memory keys are hashed to filenames
- content is stored as JSON records
- fields include key, title, content, tags, created_at, updated_at, lines, chars

This matters because:
- ContextKeep is simpler and more inspectable than expected
- but less robust/structured than a real DB-backed memory system
- and it reinforces that ContextKeep is a manual memory store, not a sophisticated memory engine

## What Was Added To ContextKeep

### 1. Master prompt memories

Imported unchanged, one memory per file from: `/home/svc-admin/ai-projects/projects/master-prompt`

Stored memories:
- `master-prompt-daily-use.md` -> key `master_prompt_daily_use`
- `master-prompt-no-bs.md` -> key `master_prompt_no_bs`
- `master-prompt-polished-chatgpt.md` -> key `master_prompt_polished_chatgpt`
- `master-prompt-rough-draft.md` -> key `master_prompt_rough_draft`
- `master-prompt-weekly-review.md` -> key `master_prompt_weekly_review`
- `my-life-context.md` -> key `my_life_context`

These were stored exactly as-is, not condensed or rewritten.

### 2. Homelab documentation-system notes

Imported unchanged, one memory per file from: `/home/svc-admin/ai-projects/projects/homelab/.work/homelab-documentation-system/notes`

Stored memories:
- `ai-change-documentation-spec.md` -> key `ai_change_documentation_spec`
- `ai-task-template-example-finish-documentation-automation.md` -> key `ai_task_template_example_finish_documentation_automation`
- `ai-task-template.md` -> key `ai_task_template`
- `change-documentation-guide.md` -> key `change_documentation_guide`

These were also stored exactly as-is.

### 3. ContextKeep install memory

A direct memory was created describing the install and intended use of ContextKeep on `svc-ai`.
Key:
- `contextkeep_install_svc_ai_2026_03_18`

This memory includes:
- install path
- service names
- web UI URL
- SSE endpoint URL
- that Codex CLI was configured with the local MCP server entry
- that ContextKeep is intended to serve as AI memory while Gitea and the homelab documentation repo remain the source of truth

## How Memory Write / Read Was Explained

It was established that, in the current chat, the assistant can still use ContextKeep by calling the local API directly on `svc-ai`, even if the native MCP tool path is not visibly exposed inside this specific session.

Working write path used repeatedly:
- POST to `http://127.0.0.1:5000/api/memories`
- PUT to `http://127.0.0.1:5000/api/memories/<key>`
- GET from `http://127.0.0.1:5000/api/memories/<key>`

This means the assistant can store and update memories immediately when running on `svc-ai`, even before depending fully on the MCP tooling layer.

## Discussion About How ContextKeep Actually Behaves

The user asked whether memories are called automatically or need prompting.

Answer given:
- in a properly loaded Codex CLI session with MCP enabled, Codex can check ContextKeep
- but it is safer to prompt explicitly until behavior is well understood

Suggested prompt patterns:
- `Check ContextKeep for anything relevant before you start.`
- `Use ContextKeep memory for this task.`
- `Look in ContextKeep for prior work on this topic.`
- `Store the result of this work in ContextKeep when you're done.`
- `Check ContextKeep for relevant memory, do the task, then store any important new context back into ContextKeep.`

When the user launched a fresh Codex session and it replied:
- `Relevant ContextKeep memory checked.`
- it found the install/system context memory but no task-specific memory

This was explained as normal: the store is still sparse, and Codex was correctly telling the user it found the general ContextKeep setup memory but not a more specific task memory.
## Discussion About “Importance”, Scheduling, and Master Prompt Behavior

The user asked how to make a memory “important” or have certain memories used at certain times.

Examples given by the user:
- daily master prompt should be used daily
- weekly review prompt should be used weekly
- no-BS version should be used when the user is slipping

The answer established:
- ContextKeep does not appear to have a built-in importance, pinning, or scheduled-use feature
- it only exposes key/title/content/tags/timestamps
- therefore importance must be expressed by convention, not a native priority field

Suggested way to model this:
- use strong tags such as:
  - `daily-context`
  - `weekly-context`
  - `fallback-context`
  - `behavior-reset`
  - `master-prompt`

Examples discussed:
- `master_prompt_daily_use` - tags: `master-prompt, daily-context, primary`
- `master_prompt_weekly_review` - tags: `master-prompt, weekly-context, review`
- `master_prompt_no_bs` - tags: `master-prompt, fallback-context, behavior-reset`

Suggested prompt patterns:
- `Load the daily-context memories from ContextKeep before starting.`
- `Load the weekly-context memory and use that frame for this session.`
- `Switch to the behavior-reset / no-BS ContextKeep memory for this conversation.`

But the larger conclusion was:
- ContextKeep can be bent into this with tagging conventions
- but it does not natively provide the mode/profile system the user actually wants

## Key Strategic Discussion: ContextKeep Is Not Really The Desired End State

The user clarified the real goal:
- a system that can index chats automatically without constant manual `remember this` behavior
- tags or labels to separate domains such as love life, coding, homelab, etc.
- a way to make certain things active/used at certain times (daily, weekly, no-BS mode, etc.)
- a cute usable web UI is nice, but ContextKeep is functionally too manual
- user wants something more automatic and agent-friendly

The assistant explained:
- ContextKeep is really a manual memory vault / notebook
- good at explicit save/retrieve
- weak at:
  - automatic ingestion
  - priority/pinning
  - mode/profile switching
  - selective scheduled recall

A more accurate framing of the user’s need was given: the user does not really want a simple memory store. The user wants something closer to an `agent memory operating system`, meaning:
- automatic memory capture
- categorization
- retrieval by mode
- relevance filtering
- priority/pinning
- human-readable browsing UI
- local/self-hosted operation
- multi-client use
- source-of-truth separation from Gitea/docs
- export/backup ability

The assistant broke this into feature groups:
1. Automatic ingestion
2. Categorization
3. Retrieval by mode
4. Relevance filtering
5. Priority/pinning
6. Human-readable UI
7. Local/self-hosted
8. Multi-client usable
9. Source-of-truth separation
10. Easy export/backup

And then compared ContextKeep against that desired feature set:
- good at local storage, browsing, explicit memory retrieval
- bad at auto-ingestion, prioritization, modes/profiles, strong categorization workflow

Conclusion given:
- ContextKeep is useful, but is not the user’s ideal system by itself
- the likely future direction is either:
  - a stronger local memory backend
  - or a wrapper/orchestration layer around a memory engine
- likely needs:
  - auto-ingest chat summaries
  - auto-tagging
  - mode/profile loading (`daily`, `weekly`, `no-BS`, `coding`, `personal`)

## Token Usage Discussion

The user asked whether tools like ContextKeep / ContextMode actually save token usage.
Answer given:
- yes, they can reduce token usage when they retrieve only the relevant memories instead of pasting huge context every time
- but only if the retrieval is selective and not junky
- if a memory system pulls in too much irrelevant material, it increases noise instead of helping

So the conclusion was:
- memory systems can save tokens
- but only when they behave as selective retrieval layers, not just giant dumping grounds

## Alternatives Discussion

The user asked whether there are alternatives to ContextKeep and how they compare.

A broad comparison was discussed:
- ContextKeep
- Mem0 / OpenMemory style systems
- SQLite-backed local memory MCPs like `claude-memory-mcp` / `mcp-openmemory`
- more advanced memory/RAG systems

Key takeaway given to the user:
- for a homelab-friendly simple self-hosted memory store, ContextKeep is still reasonable
- if the user outgrows it, the next step should probably be a stronger local DB-backed memory server or a wrapper around a memory backend
- not a cloud-dependent memory service

## Task Created In Vikunja Based On This Discussion

A Vikunja task was created in `Infrastructure Buildout`:
- title: `Tighten up ContextKeep`
- task id: `60`
- identifier: `#3`
- priority: `medium`

This task was intentionally created from the `ai_task_template` memory context. The description was then reformatted multiple times because the user felt it was too wall-of-text heavy.

Final conclusion from that interaction:
- Vikunja-friendly tasks need a very scan-friendly structure
- not long design-document formatting

## Mandatory Task Format Update

The user requested that the AI task template memory be updated to enforce a non-negotiable task format. The assistant updated the `ai_task_template` memory accordingly.

The mandatory format now is:
1. Summary - 2-4 lines max
2. What To Do - short bullet list
3. Systems Affected - short bullet list
4. Constraints - short bullet list
5. Validation - short bullet list
6. Acceptance Criteria - short bullet list
7. References - file paths, task ids, URLs

The assistant also added explicit language to `ai_task_template` saying:
- this format is mandatory for Vikunja task descriptions
- do not replace it with long narrative sections, paragraph-heavy explanations, or oversized design-document formatting

This update is important because the user felt previous Vikunja task descriptions were still too much of a wall of text.

## User Intent Going Forward

The user said we are about to drastically shift gears away from homelab work.

Before shifting, the user wanted:
- master prompt files committed to memory
- notes files committed to memory
- the current ContextKeep / MCP server conversation committed to memory in as much detail as possible

Reason stated by the user:
- the user wants to be able to open a fresh Codex instance and pull up exactly where this discussion left off

## Important Operational Guidance For Future Session

If a future Codex session needs to resume this topic, the best retrieval targets are likely:
- `contextkeep_install_svc_ai_2026_03_18`
- `ai_task_template`
- this new handoff memory
- the imported master prompt memories as needed

Likely useful prompts in a fresh Codex session:
- `Check ContextKeep for the ContextKeep / MCP handoff memory before starting.`
- `Load the ai_task_template memory and the ContextKeep MCP discussion handoff.`
- `Find the memory about evaluating ContextKeep versus a more automatic memory system.`

## Recommendation To Future Assistant

When resuming this topic, do NOT assume the user is satisfied with ContextKeep as the final answer.
The user likes:
- the web UI
- the ability to store things
- the local/self-hosted nature

But the user is dissatisfied with:
- manual memorize/find behavior
- lack of automatic ingestion
- lack of tags/labels/modes that drive real retrieval behavior
- lack of daily/weekly/no-BS usage logic

So future discussion should treat ContextKeep as:
- a currently working memory store
- not necessarily the final memory system

---

**Created:** 2026-03-19 02:18:51 UTC
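Because ContextKeep has no native priority or mode field, the daily-context / weekly-context / fallback-context convention discussed in this handoff has to be enforced client-side. A minimal sketch of what "load the daily-context memories" could look like; the records here are illustrative stand-ins using the tag sets proposed above, not actual store contents:

```python
def memories_for_mode(memories: list[dict], mode_tag: str) -> list[dict]:
    """Select memories whose tags include the given mode tag (pure convention,
    since ContextKeep itself has no importance/pinning/scheduling feature)."""
    return [m for m in memories if mode_tag in m.get("tags", [])]


# Illustrative records using the tag conventions suggested in this handoff.
MEMORIES = [
    {"key": "master_prompt_daily_use",
     "tags": ["master-prompt", "daily-context", "primary"]},
    {"key": "master_prompt_weekly_review",
     "tags": ["master-prompt", "weekly-context", "review"]},
    {"key": "master_prompt_no_bs",
     "tags": ["master-prompt", "fallback-context", "behavior-reset"]},
]

daily = memories_for_mode(MEMORIES, "daily-context")
```

A wrapper like this is the kind of orchestration layer the discussion points toward: the store stays dumb, and mode/profile logic (daily, weekly, no-BS) lives in whatever loads the memories into a session.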

