A high-performance caching proxy for game downloads. Caches content from Steam, Epic Games, Origin, Battle.net, Riot, Xbox, PlayStation, Nintendo, Uplay, and many other platforms to serve subsequent downloads at LAN speeds.
> [!NOTE]
> Recommended image: `docker pull ghcr.io/regix1/monolithic:latest`

Docs: lancache.net | Monolithic docs | Support
```yaml
services:
  monolithic:
    image: ghcr.io/regix1/monolithic:latest
    environment:
      - UPSTREAM_DNS=8.8.8.8
    volumes:
      - ./cache:/data/cache
      - ./logs:/data/logs
    ports:
      - "80:80"
      - "8081:8081" # Admin panel
    restart: unless-stopped
```

Point your DNS at lancache-dns or configure your router to redirect game CDN domains to the cache IP.
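Once DNS is redirected, you can sanity-check the cache from any client. A minimal sketch, assuming the cache host is at `192.168.1.40` (replace with your LAN IP); the `/lancache-heartbeat` endpoint is the same one the compose healthcheck uses:

```shell
# Assumed cache IP - replace with your host's LAN IP.
CACHE_IP="192.168.1.40"

# The heartbeat endpoint should return an HTTP success code if nginx is up.
curl -s -o /dev/null -w "%{http_code}\n" "http://${CACHE_IP}/lancache-heartbeat"

# A cached CDN hostname should now resolve to the cache IP, not the real CDN.
# (lancache.steamcontent.com is one of the Steam hostnames in cache-domains.)
nslookup lancache.steamcontent.com
```

If the hostname still resolves to a public IP, the client is not using lancache-dns yet.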
| Variable | Default | Description |
|---|---|---|
| `CACHE_DISK_SIZE` | `1000g` | Maximum size of the cache on disk. Set slightly below your actual disk size. |
| `CACHE_INDEX_SIZE` | `500m` | Memory allocated for the cache index. Increase for caches over 1TB (1g per 1TB recommended). |
| `CACHE_MAX_AGE` | `3560d` | How long cached content is kept before expiring (~10 years by default). |
| `CACHE_SLICE_SIZE` | `1m` | Size of chunks for partial/resumable downloads. 1m is recommended: it works with all CDNs and enables fast resume. Combined with `NOSLICE_FALLBACK`, problematic servers are handled automatically. |
| `MIN_FREE_DISK` | `10g` | Stops caching new content when free disk space drops below this threshold. |
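Since `CACHE_DISK_SIZE` should sit slightly below the real disk size, one way to derive a value is from the filesystem itself. A heuristic sketch (not an official formula), assuming GNU coreutils and the cache mounted at `/mnt/cache`:

```shell
# Reserve ~10% headroom below the cache filesystem's total size.
TOTAL_G=$(df -BG --output=size /mnt/cache | tail -1 | tr -dc '0-9')
echo "CACHE_DISK_SIZE=$(( TOTAL_G * 90 / 100 ))g"
```

The printed value can be pasted straight into the compose `environment` block.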
| Variable | Default | Description |
|---|---|---|
| `UPSTREAM_DNS` | `8.8.8.8 8.8.4.4` | DNS server(s) for resolving CDN hostnames. Space-separated for multiple servers. |
| `LANCACHE_IP` | - | IP address(es) where clients reach the cache. Used by lancache-dns. |
| Variable | Default | Description |
|---|---|---|
| `CACHE_DOMAINS_REPO` | `https://github.com/uklans/cache-domains.git` | Git repository containing the list of domains to cache. |
| `CACHE_DOMAINS_BRANCH` | `master` | Branch to use from the cache domains repository. |
| `NOFETCH` | `false` | Set to `true` to skip updating cache-domains on container startup. |
| Variable | Default | Description |
|---|---|---|
| `ENABLE_UPSTREAM_KEEPALIVE` | `false` | Enable HTTP/1.1 persistent connections to CDN servers for faster downloads. |
| `UPSTREAM_KEEPALIVE_CONNECTIONS` | `16` | Number of idle connections to keep open per upstream pool (per nginx worker). |
| `UPSTREAM_KEEPALIVE_TIMEOUT` | `4s` | How long idle upstream connections stay open before closing. Set this lower than your CDN's idle timeout: Cloudflare typically drops idle connections after 5-15s, so higher values will cause stalled downloads. |
| `UPSTREAM_KEEPALIVE_REQUESTS` | `10000` | Maximum requests per connection before recycling. Prevents memory leaks. |
| `UPSTREAM_KEEPALIVE_EXCLUDE` | (empty) | Optional comma-separated cache identifiers to exclude from keepalive (e.g. `epic,origin`). Excluded caches use direct proxying. Rarely needed: cross-CDN redirects and upstream failures are handled automatically. |
By default, nginx opens a new TCP connection for every request to CDN servers. With keepalive enabled, connections are reused across multiple requests, eliminating TCP handshake and TLS negotiation overhead.
Benefits:
- Faster cache-miss downloads (estimated 3-5x improvement)
- Lower latency for chunked downloads
- Reduced CPU usage from fewer TLS handshakes
How it works:
- On startup, creates nginx upstream pools for each resolvable domain in `cache_domains.json`
- Each upstream pool uses nginx-native DNS resolution (the `resolve` parameter, nginx 1.27.3+) with shared memory zones; IPs are resolved and updated automatically without restarts
- Maps incoming requests to the appropriate upstream pool
- Cross-CDN redirects (302) are handled dynamically by `@upstream_redirect`; no static exclusion needed
- Wildcard domains and unresolvable hosts fall back to direct proxying
| Variable | Default | Description |
|---|---|---|
| `NOSLICE_FALLBACK` | `false` | Automatically detect and handle CDN servers that don't support HTTP Range requests. |
| `NOSLICE_THRESHOLD` | `3` | Number of slice failures before a host is added to the no-slice blocklist. |
| `DECAY_INTERVAL` | `86400` | Seconds (24h) before failure counts decay by 1. Prevents permanent blocklisting. |
Lancache uses HTTP Range requests to cache files in slices, enabling partial downloads and resumption. Some CDN servers don't implement Range requests correctly, causing cache errors. This feature automatically detects problematic servers and routes them through a non-sliced cache path.
How it works:
- Monitors nginx error logs for "invalid range in slice response" errors
- Tracks failure counts per hostname with timestamps
- After reaching the threshold (default: 3), adds the host to a blocklist
- Blocklisted hosts are cached without byte-range slicing
- Failure counts decay over time (default: 24h) to allow recovery
- Blocklist persists at `/data/noslice-hosts.map` across restarts
Response header: `X-LanCache-NoSlice: true` indicates the response came from the no-slice path.
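To check whether a given response took the no-slice path, inspect that header with curl. A sketch; the cache IP, Host value, and path are placeholders, so substitute a hostname and file your cache actually serves:

```shell
CACHE_IP="192.168.1.40"           # placeholder - your cache host

# The header is present only when the response came through the no-slice path;
# no output means the host is being served with normal slicing.
curl -sI -H "Host: some.cdn.example" "http://${CACHE_IP}/some/file" \
  | grep -i '^x-lancache-noslice'
```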
Reset the blocklist:

```shell
docker exec lancache-monolithic-1 /scripts/reset-noslice.sh
```

| Variable | Default | Description |
|---|---|---|
| `NGINX_WORKER_PROCESSES` | `auto` | Number of nginx worker processes. `auto` uses one per CPU core. |
| `NGINX_LOG_FORMAT` | `cachelog` | Log format: `cachelog` (human-readable) or `cachelog-json` (for log parsers). |
| `NGINX_LOG_TO_STDOUT` | `false` | Mirror access logs to container stdout for debugging with `docker logs`. |
Log format examples:

`cachelog` (default):

```
[steam] 192.168.1.100 HIT "GET /depot/123/chunk/abc" 200 1048576 "Mozilla/5.0"
```

`cachelog-json`:

```json
{"timestamp":"2025-01-31T12:00:00","client":"192.168.1.100","cache_status":"HIT","request":"GET /depot/123/chunk/abc","status":200,"bytes":1048576,"cache_identifier":"steam"}
```
| Variable | Default | Description |
|---|---|---|
| `NGINX_PROXY_CONNECT_TIMEOUT` | `300s` | Timeout for establishing a connection to upstream CDN servers. |
| `NGINX_PROXY_READ_TIMEOUT` | `300s` | Timeout for reading the response from upstream. Increase for slow CDNs. |
| `NGINX_PROXY_SEND_TIMEOUT` | `300s` | Timeout for sending the request to upstream. |
| `NGINX_SEND_TIMEOUT` | `300s` | Timeout for sending the response to the client. |
If Epic Games or Riot launcher downloads start and then repeatedly pause, show "Unable to connect", or log "upstream timed out" / "prematurely closed connection":

- **Cloudflare CDN stalling (0 B/s)** – Epic Games and other Cloudflare-backed services can stall at 0 B/s if `UPSTREAM_KEEPALIVE_TIMEOUT` is set higher than Cloudflare's idle connection timeout (5-15s). The `@direct_fallback` location automatically detects these stalls and bypasses the keepalive upstream, falling back to a direct proxy for affected requests. Check `/data/logs/upstream-fallback.log` for fallback activity. If stalls are frequent, lower `UPSTREAM_KEEPALIVE_TIMEOUT` (e.g. `4s`) to stay under Cloudflare's idle threshold.
- **Keepalive** – Cross-CDN redirects are handled automatically. If a specific cache causes issues, exclude it manually with `UPSTREAM_KEEPALIVE_EXCLUDE=epic`.
- **Host network** – If the cache runs in Docker with port mapping and you see timeouts, try host networking so the container has direct outbound access: `docker run --network host ...`, and bind nginx to a specific IP (e.g. `listen 192.168.1.40:80`) so the cache is only on that IP. See lancachenet/monolithic#80.
- **Prefill** – Use epic-lancache-prefill to pre-cache games; client downloads then serve from cache and avoid upstream flakiness.
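When chasing these stall and timeout symptoms, watching the fallback log live is often the quickest signal. A sketch assuming the log volume mapping from the compose example (`./logs`) and the default compose container name:

```shell
# From the host, via the mounted logs volume; entries here mean requests
# bypassed the keepalive upstream and went through the direct proxy.
tail -f ./logs/upstream-fallback.log

# Or from inside the container, at the path the docs reference.
docker exec lancache-monolithic-1 tail -f /data/logs/upstream-fallback.log
```

A quiet log during a stall suggests the problem is outside the keepalive path (DNS, routing, or the CDN itself).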
| Variable | Default | Description |
|---|---|---|
| `PUID` | `33` | User ID that owns the cache files. Default 33 is `www-data` on Debian/Ubuntu. |
| `PGID` | `33` | Group ID for cache files. Match your host user for NFS/SMB shares. |
| `SKIP_PERMS_CHECK` | `false` | Skip the ownership check on startup. Use when permissions are managed externally. |
| `FORCE_PERMS_CHECK` | `false` | Force a recursive chown on startup. Warning: slow on large caches. |
For NFS/SMB shares where file ownership matters:

- Set `PUID`/`PGID` to match your NFS export or SMB share owner
- Use `SKIP_PERMS_CHECK=true` if the NFS server doesn't allow ownership changes
- Set to `nginx` to use the container's default nginx user without modification
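A quick way to confirm the mount's ownership matches the `PUID`/`PGID` you pass. A sketch assuming a GNU/Linux host with the cache mounted at `/mnt/cache`:

```shell
# Print numeric owner UID:GID of the cache mount.
# Expect "33:33" with the defaults (www-data on Debian/Ubuntu).
stat -c '%u:%g' /mnt/cache
```

If the printed pair differs from your `PUID:PGID`, either adjust the variables or fix ownership on the export before starting the container.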
| Variable | Default | Description |
|---|---|---|
| `LOGFILE_RETENTION` | `3560` | Days to keep rotated log files before deletion. |
| `BEAT_TIME` | `1h` | Interval between heartbeat entries in the logs. Confirms the cache is running. |
| `SUPERVISORD_LOGLEVEL` | `error` | Supervisor log verbosity: `critical`, `error`, `warn`, `info`, `debug`. |
A built-in web dashboard is available on port 8081 for monitoring and configuration.
- Dashboard — Active connections, service health, cache volume, filesystem detection
- Configuration — Edit environment variables with live mismatch warnings
- Upstream — Keepalive pool status, fallback events, cache domains tree
- Logs — Cache hit/miss distribution, error rate, response times
Access it at `http://<cache-ip>:8081`. No authentication required (internal network only).
> [!NOTE]
> The admin panel currently displays mock data. A Go backend providing live data from nginx, supervisor, and log files is planned.
| Port | Description |
|---|---|
| `80` | HTTP cache proxy (required) |
| `443` | HTTPS SNI proxy for HTTPS-only CDNs |
| `8080` | nginx `stub_status` metrics endpoint |
| `8081` | Admin panel web UI |
| Path | Description |
|---|---|
| `/data/cache` | Game download cache. Mount your largest/fastest storage here. |
| `/data/logs` | Access and error logs. `access.log` shows cache hits/misses. |
Supports linux/amd64 and linux/arm64. Docker automatically pulls the correct image for your platform.
- amd64: Standard x86_64 servers and desktops
- arm64: Raspberry Pi 4/5, Apple Silicon (via Docker Desktop), AWS Graviton
```yaml
services:
  monolithic:
    image: ghcr.io/regix1/monolithic:latest
    environment:
      # Network
      - UPSTREAM_DNS=8.8.8.8
      # Cache
      - CACHE_DISK_SIZE=2000g
      - CACHE_INDEX_SIZE=2g
      - MIN_FREE_DISK=50g
      # Performance
      - ENABLE_UPSTREAM_KEEPALIVE=true
      - NOSLICE_FALLBACK=true
      # Permissions
      - PUID=33
      - PGID=33
    volumes:
      - /mnt/cache:/data/cache
      - ./logs:/data/logs
    ports:
      - "80:80"
      - "443:443"
      - "8081:8081" # Admin panel
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/lancache-heartbeat"]
      interval: 30s
      timeout: 10s
      retries: 3
```

```shell
# Build for current architecture
docker build -t monolithic:local .

# Build for multiple architectures (requires buildx)
docker buildx build --platform linux/amd64,linux/arm64 -t monolithic:local .
```

- Original configs from ansible-lanparty
- /r/lanparty community
- UK LAN Techs
- /r/lanparty community
- UK LAN Techs
MIT License - see source for full text.