# ImageRepo
A self-hosted image and video gallery application designed for organizing and viewing media collections. Originally built for managing Patreon content archives, it features automatic importing, tagging, duplicate detection, and a responsive dark-themed UI.
## Features
- Gallery View - Browse images in a paginated grid with thumbnail previews
- Showcase View - Randomized masonry layout for discovery
- Image Modal - Full-size viewing with zoom, pan, and keyboard navigation
- Video Support - Playback for video files with automatic transcoding to MP4
- Tagging System - Organize images with tags (artist, character, series, rating, archive, user-defined)
- Tag Autocomplete - Quick tag entry with search suggestions
- Automatic Importing - Celery-based task queue scans the `/import` directory on a schedule
- Archive Extraction - Supports ZIP, RAR, 7z, and other archive formats
- Duplicate Detection - Perceptual hash (pHash) comparison to skip similar images
- Import Filters - Skip images by dimensions or transparency percentage
- Bulk Deletion - Delete images by tag or filter criteria
- Artist Organization - Auto-tags images based on folder structure
- Dark Theme - Easy on the eyes for extended browsing
## Quick Start

### Docker Compose
```yaml
services:
  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: imagerepo
      POSTGRES_PASSWORD: your_secure_password
      POSTGRES_DB: imagerepo
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U imagerepo"]
      interval: 10s
      timeout: 5s
      retries: 5

  web:
    image: git.fabledsword.com/bvandeusen/imagerepo:latest
    ports:
      - "5000:5000"
    environment:
      - DB_USER=imagerepo
      - DB_PASS=your_secure_password
      - DB_HOST=postgres
      - DB_NAME=imagerepo
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    volumes:
      - ./imagerepo/images:/images
      - ./your-media:/import
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  celery-worker:
    image: git.fabledsword.com/bvandeusen/imagerepo:latest
    command: celery -A app.celery_app:celery worker --loglevel=info -Q scan,import,thumbnail,sidecar,default --concurrency=2
    environment:
      - DB_USER=imagerepo
      - DB_PASS=your_secure_password
      - DB_HOST=postgres
      - DB_NAME=imagerepo
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    volumes:
      - ./imagerepo/images:/images
      - ./your-media:/import
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  celery-beat:
    image: git.fabledsword.com/bvandeusen/imagerepo:latest
    command: celery -A app.celery_app:celery beat --loglevel=info
    environment:
      - DB_USER=imagerepo
      - DB_PASS=your_secure_password
      - DB_HOST=postgres
      - DB_NAME=imagerepo
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
      - IMPORT_EVERY_SECONDS=28800
    depends_on:
      - redis
      - celery-worker

volumes:
  redis_data:
  postgres_data:
```
Then visit `http://localhost:5000` in your browser.
## Architecture
ImageRepo uses a task queue architecture for background processing:
- Web - Flask application serving the UI and API
- Celery Worker - Processes import, thumbnail, and metadata tasks
- Celery Beat - Schedules periodic tasks (directory scans, recovery)
- PostgreSQL - Primary database for all data
- Redis - Message broker for Celery task queue
## Volumes

| Path | Description |
|---|---|
| `/images` | Where imported images and thumbnails are stored |
| `/import` | Source directory the importer scans for new media |
## Environment Variables

### Database Configuration (PostgreSQL)

| Variable | Default | Description |
|---|---|---|
| `DB_NAME` | `imagerepo` | PostgreSQL database name |
| `DB_USER` | `imagerepo` | PostgreSQL username |
| `DB_PASS` | `postgres` | PostgreSQL password |
| `DB_HOST` | `postgres` | PostgreSQL host |
| `DB_PORT` | `5432` | PostgreSQL port |
### Celery Configuration

| Variable | Default | Description |
|---|---|---|
| `CELERY_BROKER_URL` | `redis://redis:6379/0` | Redis broker URL |
| `CELERY_RESULT_BACKEND` | `redis://redis:6379/0` | Redis result backend URL |
| `CELERY_WORKER_CONCURRENCY` | `2` | Number of worker processes |
| `IMPORT_EVERY_SECONDS` | `28800` | Directory scan interval (8 hours) |
### Import Settings

These can also be configured via the Settings page in the UI.

| Variable | Default | Description |
|---|---|---|
| `IMPORT_MIN_WIDTH` | `0` | Minimum image width in pixels (0 = no minimum) |
| `IMPORT_MIN_HEIGHT` | `0` | Minimum image height in pixels (0 = no minimum) |
| `IMPORT_SKIP_TRANSPARENT` | `false` | Skip mostly-transparent images (PNG/GIF/WebP) |
| `IMPORT_TRANSPARENCY_THRESHOLD` | `0.9` | Fraction of transparent pixels (0.0-1.0) above which an image is skipped |
| `IMPORT_PHASH_THRESHOLD` | `10` | Duplicate detection sensitivity (0 = disabled, lower = stricter) |
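To illustrate what the pHash threshold controls, here is a minimal sketch (not the app's actual code) of perceptual-hash comparison: two 64-bit hashes are compared by Hamming distance, and images within `IMPORT_PHASH_THRESHOLD` differing bits are treated as likely duplicates.

```python
# Illustrative sketch of pHash-style duplicate detection. Visually similar
# images produce hashes that differ in only a few bits, so a small Hamming
# distance means "probably the same picture".

def hamming_distance(hash_a: int, hash_b: int) -> int:
    """Count the bits that differ between two 64-bit perceptual hashes."""
    return bin(hash_a ^ hash_b).count("1")

def is_duplicate(hash_a: int, hash_b: int, threshold: int = 10) -> bool:
    """threshold mirrors IMPORT_PHASH_THRESHOLD; 0 disables detection."""
    if threshold == 0:
        return False
    return hamming_distance(hash_a, hash_b) <= threshold

# These two hashes differ in a single bit, so they fall well under the
# default threshold of 10.
print(is_duplicate(0xABCDEF0123456789, 0xABCDEF0123456788))  # → True
```

Lowering the threshold demands closer matches; raising it catches more near-duplicates at the cost of false positives.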
### Thumbnail & Video Processing

| Variable | Default | Description |
|---|---|---|
| `THUMBS_VERBOSE` | `0` | Enable verbose logging (`1`, `true`, or `yes`) |
| `THUMBS_LOG_EVERY` | `50` | Log progress every N thumbnails |
| `THUMBS_FFMPEG_TIMEOUT` | `60` | Timeout in seconds for video thumbnail extraction |
| `FFMPEG_TRANSCODE_TIMEOUT` | `600` | Timeout in seconds for video transcoding |
### Archive Extraction

| Variable | Default | Description |
|---|---|---|
| `ARCHIVE_TMPDIR` | System temp | Directory for extracting archives |
| `ARCHIVE_MIN_FREE_GB` | `0` | Minimum free disk space (GB) required to start extraction (0 = disabled) |
| `ARCHIVE_NUM_WIDTH` | `4` | Zero-padding width for archive sequence numbers in tags |
### Other

| Variable | Default | Description |
|---|---|---|
| `PIL_MAX_IMAGE_PIXELS` | `178956970` | Maximum image size Pillow will process (guards against decompression bombs) |
## Import Behavior

The importer runs:

- Automatically every 8 hours (configurable via `IMPORT_EVERY_SECONDS`)
- Manually via the "Trigger Import Scan" button in Settings
### Video Transcoding
Non-MP4 video formats (.mov, .avi, .mkv, .webm, .m4v, .wmv, .flv) are automatically transcoded to H.264 MP4 during import for universal browser playback.
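The README does not document the exact ffmpeg invocation, but a transcode step along these lines would produce browser-playable H.264 MP4; the specific flags here are illustrative assumptions, not the app's confirmed command.

```python
# Sketch of an ffmpeg command for transcoding a non-MP4 video to H.264 MP4.
# The flags are a reasonable baseline, not the app's documented settings.
import subprocess  # used when actually executing the command

def build_transcode_cmd(src: str, dst: str) -> list[str]:
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-c:v", "libx264",          # H.264 video for universal browser playback
        "-c:a", "aac",              # widely supported audio codec
        "-movflags", "+faststart",  # moov atom up front for progressive playback
        dst,
    ]

cmd = build_transcode_cmd("clip.mkv", "clip.mp4")
# subprocess.run(cmd, check=True, timeout=600)  # cap with FFMPEG_TRANSCODE_TIMEOUT
```

The commented-out `subprocess.run` shows where the `FFMPEG_TRANSCODE_TIMEOUT` setting would apply.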
### Folder Structure

The importer uses the folder structure to auto-tag images:

```
/import/
├── ArtistName/              # Tagged as "artist:ArtistName"
│   ├── image1.png
│   ├── image2.jpg
│   └── archive.zip          # Extracted, contents tagged with "archive:archive"
├── AnotherArtist/
│   └── subfolder/
│       └── image3.png       # Still tagged as "artist:AnotherArtist"
```

- Top-level folders become `artist:FolderName` tags
- Archives (ZIP, RAR, 7z, etc.) are extracted and their contents tagged with `archive:filename`
- Nested subfolders inherit the top-level artist tag
## Supported Formats

- Images: PNG, JPG, JPEG, GIF, WebP, BMP, TIFF
- Videos: MP4, MOV, AVI, MKV, WebM, M4V, WMV, FLV (transcoded to MP4)
- Archives: ZIP, RAR, 7z, and others supported by unar/7z
## Settings Page

The Settings page (`/settings`) provides:
### Admin Actions
- Trigger Import Scan - Manually start an import scan
- Regenerate Thumbnails - Rebuild all thumbnails
- Find Duplicates - Scan for visually similar images using pHash
- Reset Image Database - Clear all image records (files remain on disk)
### Import Filters
Configure filtering rules that apply during import:
- Minimum dimensions
- Transparency filtering
- Duplicate detection sensitivity
### Duplicate Management
- Scan for potential duplicates based on perceptual hash distance
- Review and confirm duplicates for deletion
- Keep the higher resolution version automatically
### Delete Images by Tag
Bulk delete all images associated with a specific tag (e.g., remove an entire artist's collection).
### Clean Up Existing Images
Scan and selectively delete images that match filter criteria (transparency or dimensions).
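As an illustration of how the transparency filter could decide whether an image matches, here is a dependency-free sketch; the actual implementation (presumably Pillow-based) is not shown in this README.

```python
# Sketch of the transparency filter's logic: count fully transparent pixels
# in an RGBA image and skip the image when the transparent fraction exceeds
# IMPORT_TRANSPARENCY_THRESHOLD (default 0.9).

def transparent_fraction(rgba_pixels: list[tuple[int, int, int, int]]) -> float:
    if not rgba_pixels:
        return 0.0
    transparent = sum(1 for (_r, _g, _b, a) in rgba_pixels if a == 0)
    return transparent / len(rgba_pixels)

def should_skip(rgba_pixels, threshold: float = 0.9) -> bool:
    return transparent_fraction(rgba_pixels) > threshold

# A 4-pixel image with 3 fully transparent pixels is 75% transparent,
# which is under the default 0.9 threshold, so it would be kept.
pixels = [(0, 0, 0, 0)] * 3 + [(255, 0, 0, 255)]
print(transparent_fraction(pixels))  # → 0.75
```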
## API Endpoints

| Endpoint | Method | Description |
|---|---|---|
| `/api/random-images` | GET | Get random images for showcase |
| `/api/tags/search` | GET | Search tags for autocomplete |
| `/api/import-settings` | GET/POST | Read/write import filter settings |
| `/api/import/trigger` | POST | Trigger directory scan |
| `/api/import/status` | GET | Get import queue status |
| `/api/duplicates/scan` | POST | Scan for duplicate images |
| `/api/duplicates/delete` | POST | Delete confirmed duplicates |
| `/api/filter-scan` | GET | Scan images matching filter criteria |
| `/api/filter-delete` | POST | Delete selected images |
| `/api/delete-by-tag` | POST | Delete all images with a tag |
| `/image/<id>/tags` | GET | List tags for an image |
| `/image/<id>/tags/add` | POST | Add a tag to an image |
| `/image/<id>/tags/remove` | POST | Remove a tag from an image |
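For example, the import scan can be triggered from a script. The request/response payloads are not documented here, so this only shows the call shape against the Quick Start deployment.

```python
# Trigger an import scan via the HTTP API (stdlib only). The base URL
# assumes the local deployment from the Quick Start section.
import urllib.request

BASE = "http://localhost:5000"

req = urllib.request.Request(f"{BASE}/api/import/trigger", method="POST")
# Uncomment to actually fire the request against a running instance:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)
print(req.get_method(), req.full_url)
```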
## Keyboard Shortcuts
In the image modal:
- Arrow Left/Right - Previous/Next image
- Escape - Close modal
- Click image - Toggle zoom
- Drag (when zoomed) - Pan around image
## Scaling Workers

To increase import throughput:

```bash
# Scale worker containers
docker-compose up --scale celery-worker=4

# Or increase concurrency per worker
CELERY_WORKER_CONCURRENCY=4 docker-compose up
```

Total parallelism = containers × concurrency.
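Concretely, with both scaling knobs from the example above turned up:

```python
# Total task parallelism is worker containers multiplied by per-worker
# concurrency.
containers = 4    # docker-compose up --scale celery-worker=4
concurrency = 4   # CELERY_WORKER_CONCURRENCY=4
print(containers * concurrency)  # → 16
```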
## Development

### Local Setup

```bash
# Clone the repository
git clone https://git.fabledsword.com/bvandeusen/imagerepo.git
cd imagerepo

# Start PostgreSQL and Redis
docker-compose up -d postgres redis

# Create virtual environment
python -m venv venv
source venv/bin/activate  # or `venv\Scripts\activate` on Windows

# Install dependencies
pip install -r requirements.txt

# Run migrations
flask db upgrade

# Run with Flask dev server
flask run
```
### Docker Build

```bash
docker build -t imagerepo .
```
## License
This is a personal project. Use at your own discretion.