mnemos

Type: fact
Tags: mnemos, projects, knowledge-management, thinking-tools
Created: 2025-10-29

🧠 Mnemos – Personal Thinking Environment

Structure thoughts, not just words. Mnemos treats knowledge as atomic, typed, connected, and versioned units.

Core Principles

  1. Atomic thinking - Each note captures exactly one concept
  2. Typed structure - Every thought has a category: fact, question, todo, pitch, related
  3. Connected knowledge - Ideas gain meaning through wiki-links
  4. Concise expression - Bullet points over paragraphs

Note Types

Note Format

---
type: fact
tags: [golang, concurrency]
links: [go-channels, goroutines]
created: 2024-01-15
---

- Go channels are typed conduits for communication
- Send/receive operations block by default
- Closed channels can't be reopened
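
A minimal sketch of how a note in this format could be parsed into its typed parts (the parse_note helper is hypothetical, not a mnemos API):

import re

def parse_note(text: str) -> dict:
    """Split a note into frontmatter fields and bullet-point body."""
    # Frontmatter sits between the two leading '---' lines
    _, frontmatter, body = re.split(r"^---\s*$", text, maxsplit=2, flags=re.M)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        value = value.strip()
        # List fields such as tags/links are written as [a, b, c]
        if value.startswith("[") and value.endswith("]"):
            value = [v.strip() for v in value[1:-1].split(",")]
        meta[key.strip()] = value
    bullets = [l.lstrip("- ").strip() for l in body.strip().splitlines() if l.startswith("-")]
    return {**meta, "bullets": bullets}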

Commands

Workflow Integration

Obsidian Integration

Obsidian is knowledge-base software that works with local markdown files, making it a natural frontend for mnemos:

Internal linking uses the double-bracket format: [[note-name]]
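
A tiny sketch of how such links could be extracted from a note body (the exact [[target]] / [[target|label]] syntax is an assumption based on the convention above):

import re

WIKI_LINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")

def extract_links(body: str) -> list[str]:
    """Return the link targets written as [[target]] or [[target|label]]."""
    return WIKI_LINK.findall(body)

# extract_links("See [[go-channels]] and [[p/moul/mimir|Mimir]]")
# -> ['go-channels', 'p/moul/mimir']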

Behavioral Rules

Integration with Mimir

[[p/moul/mimir|Mimir]] feeds data → Mnemos structures knowledge:

Meta-Knowledge: Mnemos Self-Reference

Future Vision

AI Integration Specification

Phase 1: RAG System (Immediate)

mnemos index --embeddings  # Generate vector embeddings
mnemos ask "question" --context=semantic  # Use RAG for answers
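
A rough sketch of what the indexing step could do internally (the embedding model and on-disk layout are assumptions, not the actual mnemos implementation):

import json
from pathlib import Path

from sentence_transformers import SentenceTransformer  # assumed embedding backend

def build_index(notes_dir: str, index_path: str = "embeddings.json") -> None:
    """Embed every markdown note and persist vectors alongside their paths."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    index = []
    for path in Path(notes_dir).glob("**/*.md"):
        vector = model.encode(path.read_text(encoding="utf-8")).tolist()
        index.append({"path": str(path), "embedding": vector})
    Path(index_path).write_text(json.dumps(index))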

Phase 2: Export for Fine-tuning

mnemos export --format=jsonl --for=openai  # Export for OpenAI fine-tuning
mnemos export --format=parquet --for=huggingface  # Export for open models
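
A sketch of what the OpenAI-style JSONL export could produce, one chat example per note (the conversion heuristic and the title/bullets fields are assumptions; the message shape matches the training format shown below):

import json

def export_jsonl(notes: list[dict], out_path: str = "mnemos-train.jsonl") -> None:
    """Write one chat-format training example per note."""
    with open(out_path, "w", encoding="utf-8") as f:
        for note in notes:
            # 'title' and 'bullets' are assumed fields (e.g. filename plus parsed body)
            example = {
                "messages": [
                    {"role": "system", "content": "You are Mnemos, trained on the user's knowledge base."},
                    {"role": "user", "content": f"What do the notes say about {note['title']}?"},
                    {"role": "assistant", "content": "\n".join(f"- {b}" for b in note["bullets"])},
                ]
            }
            f.write(json.dumps(example) + "\n")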

Export formats:

Phase 3: Local Assistant

mnemos assistant --model=local  # Run fine-tuned model locally
mnemos assistant --model=api --endpoint=...  # Use remote model

Implementation Strategy

  1. Data Preparation Pipeline (see the sketch after this list)

    • Convert notes to training pairs
    • Extract Q&A from facts/questions
    • Generate summaries from related notes
    • Create instruction-following examples
  2. Training Data Format

    {
      "messages": [
        {"role": "system", "content": "You are Mnemos, trained on [user]'s knowledge base..."},
        {"role": "user", "content": "What are the design principles of Mnemos?"},
        {"role": "assistant", "content": "Based on the notes:\n- Atomic thinking...\n- Typed structure..."}
      ]
    }
    
  3. Continuous Learning

    • Incremental updates as new notes added
    • Feedback loop from user interactions
    • Version control for model iterations
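
As referenced in step 1, a sketch of how typed notes could become Q&A training pairs (the pairing heuristic and the title/type/bullets fields are assumptions):

def notes_to_pairs(notes: list[dict]) -> list[dict]:
    """Turn typed notes into question/answer training pairs."""
    pairs = []
    for note in notes:
        answer = "Based on the notes:\n" + "\n".join(f"- {b}" for b in note["bullets"])
        if note["type"] == "question":
            # question notes already carry the question (e.g. in the filename/title)
            pairs.append({"question": note["title"], "answer": answer})
        elif note["type"] == "fact":
            # fact notes become "what do the notes say about X?" pairs
            pairs.append({"question": f"What do the notes say about {note['title']}?", "answer": answer})
    return pairs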

Questions

AI Training Considerations

Privacy & Security

Quality & Performance

Cost & Resources

Integration Design

TODOs

Implement AI Training Export

Quick Win: Claude Projects

RAG Implementation

Fine-tuning Export

API Integration

Local Model Support

AI Knowledge Base Training Approaches

1. RAG (Retrieval-Augmented Generation)

2. Fine-tuning (Full Model)

3. LoRA/QLoRA (Efficient Fine-tuning)

4. Knowledge Distillation

5. Prompt Engineering + Context Injection

RAG Implementation

Mnemos includes a built-in RAG (Retrieval-Augmented Generation) system:
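
A rough sketch of the retrieval step, reusing the embedding index from Phase 1 (the cosine-similarity ranking and index layout are assumptions, not the actual implementation):

import json
import math
from pathlib import Path

from sentence_transformers import SentenceTransformer  # assumed embedding backend

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(question: str, index_path: str = "embeddings.json", k: int = 5) -> list[str]:
    """Return the paths of the k notes most similar to the question."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    query = model.encode(question).tolist()
    entries = json.loads(Path(index_path).read_text())
    ranked = sorted(entries, key=lambda e: cosine(query, e["embedding"]), reverse=True)
    return [e["path"] for e in ranked[:k]]

The retrieved note text would then be injected into the prompt behind mnemos ask.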

AI Enhancement Roadmap

Current Implementation (RAG)

Future Approaches

  1. Fine-tuning: GPT-3.5/4 with JSONL training data or Llama2/Mistral with LoRA
  2. Local models: Ollama integration for privacy and offline access (see the sketch after this list)
  3. Custom models: Train specialized 1-7B parameter model for thinking style
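
For the Ollama option above, a minimal sketch of answering over retrieved notes with a locally running model (the model name and prompt shape are assumptions; the endpoint is Ollama's standard generate API):

import requests

def ask_local(question: str, context_notes: list[str], model: str = "mistral") -> str:
    """Send retrieved note text plus the question to a local Ollama server."""
    prompt = ("Answer using only these notes:\n\n"
              + "\n\n".join(context_notes)
              + f"\n\nQuestion: {question}")
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    return resp.json()["response"]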

Local LLM Setup

Next Steps

Integration Features

Telegram Bot

See also
