Script Manager includes complete Docker support for both development and production environments.
- Docker Desktop or Docker Engine (v20.10+)
- Docker Compose (v2.0+)
- At least 2GB of available RAM
- ~500MB of disk space for images
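You can quickly confirm the prerequisites from a shell. This sketch only checks that each binary is on `PATH` and prints its version; it does not verify the minimum version numbers above:

```shell
# Print the version of each required tool, or flag it as missing.
check_cmd() {
    if command -v "$1" > /dev/null 2>&1; then
        "$1" --version
    else
        echo "missing: $1"
    fi
}

check_cmd docker
check_cmd docker-compose
```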
```bash
# 1. Clone the repository
git clone <repository-url>
cd Script-Manager

# 2. Start the application
docker-compose up -d

# 3. Check status
docker-compose ps

# 4. View logs
docker-compose logs -f
```

Access the application:

- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- API Docs: http://localhost:8000/docs
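If you script against the stack, it helps to wait until the backend answers on its `/health` endpoint before proceeding. A minimal sketch (the two-second per-request limit and the default timeout are arbitrary choices, not part of Script Manager):

```shell
# Poll a URL until it answers with HTTP 2xx, or give up after N seconds.
wait_for_url() {
    url="$1"; timeout="${2:-60}"
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        if curl -fsS -m 2 "$url" > /dev/null 2>&1; then
            echo "ready: $url"
            return 0
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    echo "timed out waiting for $url" >&2
    return 1
}

# Example: block until the backend is up
# wait_for_url http://localhost:8000/health
```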
```bash
# Start with production configuration (includes Nginx)
docker-compose -f docker-compose.prod.yml up -d

# Access via Nginx
# http://localhost
```

Create a `.env` file:

```bash
cp .env.example .env
```

Edit `.env` with your settings:

```bash
API_PORT=8000
DATABASE_PATH=/app/data/scripts.db
VITE_API_URL=http://localhost:8000
```

The database is stored in the `script_data` volume, which persists data even if containers are removed:
```bash
# View volumes
docker volume ls

# Inspect volume
docker volume inspect script-manager_script_data

# Backup database
docker run --rm -v script-manager_script_data:/data -v $(pwd):/backup \
  alpine tar czf /backup/scripts.db.tar.gz -C /data scripts.db

# Restore database
docker run --rm -v script-manager_script_data:/data -v $(pwd):/backup \
  alpine tar xzf /backup/scripts.db.tar.gz -C /data
```

Edit `docker-compose.yml` and add volumes to the backend service:
```yaml
backend:
  volumes:
    - script_data:/app/data
    - /home/user/scripts:/scripts:ro
    - /mnt/shared:/shared:ro
```

Then, in the Script Manager UI, create folder roots pointing to /scripts, /shared, etc.
```bash
# Build all images
docker-compose build

# Build a specific service
docker-compose build backend
docker-compose build frontend

# Build with no cache
docker-compose build --no-cache
```

```bash
# All services
docker-compose logs

# Specific service
docker-compose logs backend
docker-compose logs frontend
docker-compose logs nginx

# Follow logs
docker-compose logs -f

# Last 100 lines
docker-compose logs --tail=100
```

```bash
# Stop running containers
docker-compose stop

# Start stopped containers
docker-compose start

# Restart containers
docker-compose restart

# Remove containers and networks
docker-compose down

# Remove everything, including volumes
docker-compose down -v
```

```bash
# Execute a command in a running container
docker-compose exec backend bash

# Run a one-off command
docker-compose run backend python -c "import app; print('OK')"
```

```bash
# See memory and CPU usage
docker stats

# See container details
docker inspect script-manager-backend
```

If ports 3000 or 8000 are already in use:
```bash
# Change the ports in docker-compose.yml,
# or kill the process using the port.

# On Linux/macOS
lsof -i :3000
kill -9 <PID>

# On Windows
netstat -ano | findstr :3000
taskkill /PID <PID> /F
```

```bash
# Access the SQLite database inside the container
docker-compose exec backend sqlite3 /app/data/scripts.db

# Remove and recreate the database
docker-compose exec backend rm /app/data/scripts.db
docker-compose restart backend
```

Increase Docker's memory allocation:
- Windows: Docker Desktop → Settings → Resources → Memory
- Mac: Docker → Preferences → Resources → Memory
- Linux: Docker Engine uses host memory directly, so there is no global allocation to raise (`/etc/docker/daemon.json` has no `memory` or `memswap` keys); set per-container limits in `docker-compose.yml` instead (see Resource Limits below)

```bash
# Check frontend logs
docker-compose logs frontend

# Clear the cache and rebuild
docker-compose down -v
docker-compose build --no-cache
docker-compose up -d
```

```bash
# Check backend logs
docker-compose logs backend

# Test backend connectivity
docker-compose exec frontend curl http://backend:8000/health

# Restart the backend
docker-compose restart backend
```

Check that `VITE_API_URL` in `.env` matches the backend service name or external URL:
```bash
# For docker-compose
VITE_API_URL=http://localhost:8000

# For production with Nginx
VITE_API_URL=http://localhost/api
```

Enable Docker BuildKit for faster builds:

```bash
# Linux / macOS
export DOCKER_BUILDKIT=1
docker-compose build

# Windows PowerShell
$env:DOCKER_BUILDKIT=1
docker-compose build
```

Built images are optimized with:
- Alpine Linux base where possible
- Multi-stage builds for frontend
- .dockerignore to exclude unnecessary files
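A typical `.dockerignore` for a stack like this might look as follows. This is illustrative only; the actual file in the repository may differ:

```
# Dependencies and build output
node_modules
dist
__pycache__
*.pyc

# Local state and secrets
.env
data/
*.db

# VCS and editor files
.git
.gitignore
.vscode
```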
```bash
# Scale backend to 3 instances with Nginx load balancing
docker-compose -f docker-compose.prod.yml up -d --scale backend=3
```

Modify nginx.conf to use upstream load balancing:

```nginx
upstream backend {
    # Docker's embedded DNS resolves the service name to the IPs of all
    # replicas; every replica listens on the same container port.
    server backend:8000;
}
```

- Environment Variables: Use the `.env` file; never commit secrets
- Volume Permissions: Use read-only mounts (`ro`) for script directories
- Network Isolation: Services communicate over the Docker network and are not exposed to the host
- Health Checks: Containers include health checks for automatic restart
- Resource Limits: Consider adding memory/CPU limits in docker-compose.yml:
```yaml
backend:
  deploy:
    resources:
      limits:
        cpus: '1'
        memory: 1G
      reservations:
        cpus: '0.5'
        memory: 512M
```

Modify Dockerfile.backend to connect to an external PostgreSQL or MySQL instance instead of SQLite.
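As a sketch, one way to wire this up is to add a database service and pass a connection string through the environment. The `db` service name and the `DATABASE_URL` variable below are illustrative; the backend would need to be adapted to read such a variable instead of the SQLite `DATABASE_PATH`:

```yaml
# Hypothetical docker-compose fragment; not part of the shipped configuration.
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: scripts
      POSTGRES_PASSWORD: change-me   # move to .env, never commit
    volumes:
      - pg_data:/var/lib/postgresql/data

  backend:
    environment:
      DATABASE_URL: postgresql://postgres:change-me@db:5432/scripts

volumes:
  pg_data:
```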
Don't use the Nginx service in docker-compose.yml. Instead:

```bash
# Run only the backend and frontend
docker-compose up -d backend frontend

# Configure your external Nginx to proxy to:
#   Backend:  http://localhost:8000
#   Frontend: http://localhost:3000
```

Use Docker Swarm or Kubernetes for multi-machine deployments:
```bash
# Docker Swarm
docker swarm init
docker stack deploy -c docker-compose.yml script-manager

# Kubernetes (requires converting docker-compose.yml to Kubernetes manifests)
# Consider using Helm charts for production deployments
```

```bash
# Detailed status
docker-compose ps -a

# Service health
docker-compose ps

# Container resource usage
docker stats script-manager-backend
docker stats script-manager-frontend
```

```bash
# Backend health
curl http://localhost:8000/health

# Frontend
curl http://localhost:3000

# Via Nginx (production)
curl http://localhost/
```

```bash
# Backup to tar.gz
docker run --rm -v script-manager_script_data:/data \
  -v $(pwd):/backup alpine tar czf /backup/scripts.db.tar.gz -C /data .

# Backup to SQL dump
docker-compose exec backend sqlite3 /app/data/scripts.db .dump > scripts.sql
```

```bash
# Stop services
docker-compose stop

# Restore the volume
docker run --rm -v script-manager_script_data:/data \
  -v $(pwd):/backup alpine tar xzf /backup/scripts.db.tar.gz -C /data

# Start services
docker-compose up -d
```

For issues:
- Check logs: `docker-compose logs`
- Verify the configuration in `docker-compose.yml`
- Ensure all prerequisites are met
- Check Docker and Docker Compose versions
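For scheduled backups (e.g. from cron), the volume backup command from the Backup & Recovery section can be wrapped in a small helper script. A sketch; the volume name matches the one used throughout this guide, and the `run` guard merely keeps the file safe to source:

```shell
# Timestamped backup of the script_data volume to the current directory.
VOLUME="${VOLUME:-script-manager_script_data}"

backup_filename() {
    # e.g. scripts-20240101-120000.tar.gz
    echo "scripts-$(date +%Y%m%d-%H%M%S).tar.gz"
}

backup_volume() {
    file="$(backup_filename)"
    docker run --rm -v "$VOLUME:/data" -v "$(pwd):/backup" \
        alpine tar czf "/backup/$file" -C /data .
    echo "Wrote $file"
}

# Run only when invoked with "run", e.g.:  ./backup.sh run
if [ "${1:-}" = "run" ]; then
    backup_volume
fi
```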