# Fabled Assistant
A self-hosted note-taking and task-tracking application with integrated LLM capabilities. Write, organize, and enhance your notes with the help of a local AI assistant — all running on your own hardware.
## What It Does

- **Notes with inline formatting** — Write in Markdown with a live-preview editor. Headings, bold, italic, lists, and code blocks render inline as you type, similar to Obsidian or Typora.
- **Task tracking** — Any note can become a task with a status (todo, in progress, done), a priority level, and a due date. Convert freely between notes and tasks without losing content.
- **Wikilinks and backlinks** — Link notes together with `[[Title]]` syntax. Click a wikilink to navigate to (or auto-create) the referenced note. Each note shows what links to it.
- **Tag organization** — Add `#tags` anywhere in your text. Tags are extracted automatically and support hierarchical filtering (e.g., `#project/webapp` matches filters for both `project` and `project/webapp`). Tag autocomplete suggests existing tags as you type.
- **AI writing assistant** — Select a section of your note and give the assistant an instruction ("summarize this", "make it more concise", "add examples"). The assistant streams a suggestion that you can accept or reject. Powered by Ollama running locally.
- **AI chat** — Have conversations with your AI assistant. The chat is context-aware: it can automatically find and reference your notes to give more relevant answers. Attach specific notes for focused discussions. Save useful responses as new notes.
- **Multi-user support** — Multiple users can share the same instance with isolated data. The first registered user becomes the admin.
- **Dark and light themes** — Defaults to dark mode with a one-click toggle. All views respect your preference.
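The hierarchical tag filtering described above can be sketched in a few lines of Python. This is an illustrative sketch, not the application's actual code; the function name and sample tags are made up:

```python
def matches_filter(tag: str, filter_tag: str) -> bool:
    """True if `tag` equals `filter_tag` or is nested beneath it.

    Hierarchical matching: filtering by "project" matches both
    "project" and "project/webapp"; filtering by "project/webapp"
    matches only that subtree. The "/" boundary check prevents
    "projectile" from matching a "project" filter.
    """
    return tag == filter_tag or tag.startswith(filter_tag + "/")


tags = ["project", "project/webapp", "project/api", "personal"]
print([t for t in tags if matches_filter(t, "project")])
# ['project', 'project/webapp', 'project/api']
print([t for t in tags if matches_filter(t, "project/webapp")])
# ['project/webapp']
```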
## Getting Started

### Prerequisites
- Docker and Docker Compose
- A machine with enough resources to run an LLM (8GB+ RAM recommended for smaller models)
### Quick Start

1. Clone the repository:

   ```shell
   git clone https://github.com/your-username/fabledassistant.git
   cd fabledassistant
   ```

2. Start the application:

   ```shell
   docker compose up --build
   ```

3. Open `http://localhost:5000` in your browser.

4. Register the first user account (this account becomes the admin).

5. Go to Settings to download an LLM model. Smaller models like `llama3.2` (2GB) work well for getting started.
### Day-to-Day Usage

- **Create a note** from the Notes page. Use Markdown — formatting renders live in the editor.
- **Link notes** by typing `[[` to get a wikilink autocomplete dropdown.
- **Tag your notes** by typing `#` followed by a tag name. Autocomplete suggests existing tags.
- **Use the AI assistant** in the bottom panel of the editor. Select a section, write an instruction, and click Generate.
- **Chat with the AI** from the Chat page. Attach notes for context or let the assistant find relevant notes automatically.
- **Convert notes to tasks** from the note viewer to add status, priority, and due dates.
- **Save chat responses as notes** to preserve useful AI-generated content.
- **Back up your data** from Settings (admin users can export and restore full backups).
## Configuration
Configuration is done via environment variables. See docker-compose.yml for defaults.
| Variable | Default | Description |
|---|---|---|
| `DATABASE_URL` | `postgresql+asyncpg://...` | PostgreSQL connection string |
| `SECRET_KEY` | (required) | Session signing key |
| `OLLAMA_BASE_URL` | `http://ollama:11434` | Ollama API endpoint |
| `DEFAULT_MODEL` | `llama3.1` | Default LLM model to auto-pull on startup |
| `SECURE_COOKIES` | `false` | Set to `true` when behind TLS to add the `Secure` flag to session cookies |
| `LOG_LEVEL` | `INFO` | Logging level |
For production deployments, `docker-compose.prod.yml` supports Docker secrets (`SECRET_KEY_FILE`, `DATABASE_URL_FILE`) and includes network isolation, health checks, and resource limits.
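As an illustration of the environment-variable pattern (not the application's actual startup code — the function name and the refusal-to-start behavior are assumptions), loading these settings with their defaults might look like:

```python
import os


def load_config() -> dict:
    """Illustrative sketch: read settings from environment variables.

    SECRET_KEY deliberately has no default -- an app like this should
    refuse to start without one rather than fall back to a known value.
    """
    secret_key = os.environ.get("SECRET_KEY")
    if not secret_key:
        raise RuntimeError("SECRET_KEY is required")
    return {
        # "..." kept as a placeholder; the real default lives in docker-compose.yml
        "DATABASE_URL": os.environ.get("DATABASE_URL", "postgresql+asyncpg://..."),
        "SECRET_KEY": secret_key,
        "OLLAMA_BASE_URL": os.environ.get("OLLAMA_BASE_URL", "http://ollama:11434"),
        "DEFAULT_MODEL": os.environ.get("DEFAULT_MODEL", "llama3.1"),
        "SECURE_COOKIES": os.environ.get("SECURE_COOKIES", "false").lower() == "true",
        "LOG_LEVEL": os.environ.get("LOG_LEVEL", "INFO"),
    }
```

Treating `SECURE_COOKIES` as the string `"true"`/`"false"` and converting it once at load time keeps the rest of the code working with a plain boolean.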
## Production Deployment

### Reverse Proxy (Required)
Fabled Assistant does not handle SSL/TLS. You must run it behind a reverse proxy for production use:
- Run Nginx, Traefik, or Caddy in front of the app container
- Terminate TLS at the proxy and forward traffic to port 5000
- Do not expose port 5000 directly to the internet
- Rate-limit auth endpoints at the proxy layer — recommended: limit `/api/auth/login` and `/api/auth/register` to ~5 requests/minute per IP to prevent brute-force attacks
- Set Content Security Policy headers at the proxy — recommended: `default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:; connect-src 'self'`
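A minimal Nginx sketch of the points above — the hostname, certificate paths, and upstream name `app` are placeholders you would adjust to your own setup:

```nginx
# ~5 requests/minute per client IP for the auth endpoints
limit_req_zone $binary_remote_addr zone=auth:10m rate=5r/m;

server {
    listen 443 ssl;
    server_name notes.example.com;  # hypothetical hostname

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:; connect-src 'self'" always;

    # Rate-limit login and registration
    location ~ ^/api/auth/(login|register)$ {
        limit_req zone=auth burst=5 nodelay;
        proxy_pass http://app:5000;
    }

    location / {
        proxy_pass http://app:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_buffering off;  # keep SSE streaming responses unbuffered
    }
}
```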
### Security Checklist

- **Set a strong `SECRET_KEY`** — used to sign session cookies. Generate one with `python -c "import secrets; print(secrets.token_hex(32))"` or use Docker secrets via `SECRET_KEY_FILE`.
- **Registration auto-closes** — After the first user registers (who becomes admin), registration is closed by default. The admin can re-enable it from the user management page (`/admin/users`).
- **Use Docker secrets in production** — `docker-compose.prod.yml` reads `SECRET_KEY_FILE` and `DATABASE_URL_FILE` instead of plain environment variables.
- **Keep Ollama on an internal network** — The default compose files keep Ollama on a Docker-internal network, not exposed to the host.
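A compose-file sketch of the Docker-secrets pattern — the fragment below is illustrative only; `docker-compose.prod.yml` in the repository is the authoritative version:

```yaml
services:
  app:
    environment:
      SECRET_KEY_FILE: /run/secrets/secret_key
      DATABASE_URL_FILE: /run/secrets/database_url
    secrets:
      - secret_key
      - database_url

secrets:
  secret_key:
    external: true   # created beforehand, e.g.: docker secret create secret_key -
  database_url:
    external: true
```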
## Technical Overview

### Stack
| Layer | Technology |
|---|---|
| Frontend | Vue 3, TypeScript, Vite, Pinia, Vue Router |
| Editor | Tiptap (ProseMirror) with custom extensions |
| Backend | Python 3.12, Quart (async Flask-like framework) |
| Database | PostgreSQL 16, SQLAlchemy 2.0 (async), Alembic migrations |
| LLM | Ollama (or any OpenAI-compatible API) |
| Deployment | Docker Compose (app + PostgreSQL + Ollama) |
### Architecture
The application runs as a single container serving both the Vue.js SPA and the REST API. The frontend is built by Vite during the Docker image build and served as static files by Quart. API routes live under /api/.
LLM interactions use Server-Sent Events (SSE) for streaming responses. Chat generation runs in background asyncio tasks with an in-memory event buffer that supports client reconnection without data loss.
The editor uses a Markdown-to-HTML-to-Markdown round-trip: content is stored as Markdown, converted to HTML for Tiptap editing, and serialized back to Markdown on every change. ProseMirror decoration plugins provide visual highlighting for tags and wikilinks without custom node types.
### Database
All data is stored in PostgreSQL. The schema uses a unified model where tasks are notes with additional attributes (status, priority, due_date). Migrations are idempotent raw SQL and run automatically on container startup.
### Project Structure

```
fabledassistant/
├── docker-compose.yml         # Development stack
├── docker-compose.prod.yml    # Production stack (Docker Swarm)
├── Dockerfile                 # Multi-stage build (Node + Python)
├── alembic/                   # Database migrations
├── src/fabledassistant/       # Python backend
│   ├── app.py                 # Quart app factory
│   ├── models/                # SQLAlchemy models
│   ├── routes/                # API route blueprints
│   ├── services/              # Business logic
│   └── static/                # Built frontend (generated)
└── frontend/                  # Vue.js frontend
    └── src/
        ├── views/             # Page components
        ├── components/        # Reusable UI components
        ├── extensions/        # Tiptap/ProseMirror plugins
        ├── composables/       # Vue composables
        ├── stores/            # Pinia state management
        └── api/               # API client
```
## Development

All development is done via Docker:

```shell
# Start the dev stack
docker compose up --build

# Reset database (destroy volumes and rebuild)
docker compose down -v && docker compose up --build
```

No local dependency installation required.
## License
This project is privately maintained.