Docker Deployment

Deploy Plugged.in using Docker for consistent, scalable, and portable deployments across any environment.

Quick Start

1. Clone Repository

git clone https://github.com/VeriTeknik/pluggedin-app.git
cd pluggedin-app
2. Configure Environment

cp .env.example .env
# Edit .env with your configuration
3. Build and Run

docker-compose up -d
4. Access Application
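Once the containers are up, the app is served on host port 12005 (mapped to container port 3000 in docker-compose.yml). A quick check:
# Open the app in your browser
# http://localhost:12005

# Or verify via the health endpoint
curl http://localhost:12005/api/health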

Multi-Architecture Support

NEW: Plugged.in now supports both AMD64 and ARM64 architectures! Our Docker images are available as multi-architecture builds on Docker Hub, automatically selecting the correct platform for your system.

AMD64 (x86_64)

  • ✅ Intel/AMD processors
  • ✅ Most cloud platforms
  • ✅ Traditional servers

ARM64 (aarch64)

  • ✅ Apple Silicon (M1/M2/M3)
  • ✅ AWS Graviton
  • ✅ Raspberry Pi 4+

Verify Platform Support

Check available architectures for any version:
# Check latest release
docker manifest inspect veriteknik/pluggedin:latest | jq '.manifests[].platform'

# Check specific version
docker manifest inspect veriteknik/pluggedin:v2.16.0 | jq '.manifests[].platform'

# Expected output:
# {
#   "architecture": "amd64",
#   "os": "linux"
# }
# {
#   "architecture": "arm64",
#   "os": "linux"
# }
Docker automatically pulls the correct architecture for your platform. No manual configuration needed!
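If you ever need to override the automatic selection (for example, to test the AMD64 image on Apple Silicon under emulation), you can request a platform explicitly:
# Pull a specific platform
docker pull --platform linux/amd64 veriteknik/pluggedin:latest

# Confirm which architecture was pulled
docker image inspect veriteknik/pluggedin:latest --format '{{.Architecture}}'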

Docker Hub Deployment

Using Pre-built Image

1. Pull Image

docker pull veriteknik/pluggedin:latest
2. Run with Docker Compose

# Download production compose file
curl -O https://raw.githubusercontent.com/VeriTeknik/pluggedin-app/main/docker-compose.production.yml

# Create .env file with your configuration
cat > .env <<EOF
DATABASE_URL=postgresql://user:pass@host:5432/db
NEXTAUTH_URL=https://your-domain.com
NEXTAUTH_SECRET=$(openssl rand -base64 32)
PLUGGEDIN_API_KEY=$(openssl rand -base64 32)
EOF

# Start services
docker-compose -f docker-compose.production.yml up -d
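Then confirm the stack came up cleanly:
# Check service status
docker-compose -f docker-compose.production.yml ps

# Tail the logs
docker-compose -f docker-compose.production.yml logs -f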

Publishing Multi-Arch Images to Docker Hub

Use the provided docker-build.sh script for automated multi-architecture builds and publishing.
1. Build Multi-Arch Image

# Build and push multi-architecture image (AMD64 + ARM64)
./docker-build.sh v2.16.0

# Or build locally for testing (current architecture only)
./docker-build.sh v2.16.0 --local
What this does:
  • Sets up Docker buildx for multi-platform builds
  • Builds for both linux/amd64 and linux/arm64
  • Creates unified manifest
  • Pushes to Docker Hub
  • Tags both version and latest
2. Manual Multi-Arch Build (Advanced)

# Create buildx builder
docker buildx create --name multiarch-builder --use --bootstrap

# Build and push for multiple platforms
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -f Dockerfile.production \
  -t veriteknik/pluggedin:v2.16.0 \
  -t veriteknik/pluggedin:latest \
  --push \
  .
3. Verify Multi-Arch Upload

# Check that both architectures are available
docker manifest inspect veriteknik/pluggedin:v2.16.0

# You should see manifests for both amd64 and arm64
For multi-arch builds, you need Docker Desktop or Docker with buildx support. The automated script (docker-build.sh) handles all setup automatically.

Docker Architecture

Application Container

  • Next.js 15 application
  • Node.js 20 runtime
  • MCP proxy server
  • Port 12005 exposed

Database Container

  • PostgreSQL 18-alpine
  • Persistent volume
  • Health checks enabled
  • Port 5432 exposed

Migrator Container

  • One-time migration runner
  • Optimized size (288 MB)
  • Auto-stops after migrations
  • Drizzle ORM migrations

Persistent Volumes

  • Database data
  • User uploads
  • Application logs
  • MCP package cache

What’s Included

  • ✅ PostgreSQL 18 (latest stable) with automatic migrations
  • ✅ Next.js 15 web application with optimized production build
  • ✅ Persistent volumes for database, uploads, logs, and MCP packages
  • ✅ Health checks and automatic restarts
  • ✅ Migrator container (288 MB) for database setup
  • ✅ Docker-optimized MCP isolation (no sandboxing overhead)
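To confirm the persistent volumes were created (volume names are prefixed with the Compose project name, typically the directory name):
docker volume ls --filter name=pluggedin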

Production Dockerfile

The project includes two Dockerfiles:
  1. Dockerfile - Standard build with sandboxing support
  2. Dockerfile.production - Optimized production build
Both include sandboxing tools (bubblewrap, firejail, fuse3) for secure MCP server execution.
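A quick sanity check that the sandboxing binaries are present, assuming the dev compose service name pluggedin-app used below:
docker-compose exec pluggedin-app sh -c 'which bwrap && which firejail'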

Standard Dockerfile Features

# Multi-stage build with sandboxing
FROM node:20-alpine AS builder

# Sandboxing tools (also installed in the runner stage)
RUN apk add --no-cache \
    bubblewrap \
    firejail \
    fuse3 \
    libfuse3-3

# Enable pnpm (ships with Node 20 via corepack)
RUN corepack enable pnpm

# Set working directory
WORKDIR /app

# Copy package files
COPY package.json pnpm-lock.yaml ./

# Install dependencies
RUN pnpm install --frozen-lockfile

# Copy application code
COPY . .

# Build application
RUN pnpm build

# Production stage
FROM node:20-alpine AS runner

# Install curl (for the health check) and sandboxing tools
RUN apk add --no-cache \
    curl \
    bubblewrap \
    firejail \
    fuse3 \
    libfuse3-3

# Create app user
RUN addgroup -g 1001 nodejs \
    && adduser -S nextjs -u 1001

# Set working directory
WORKDIR /app

# Copy built application
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
COPY --from=builder --chown=nextjs:nodejs /app/public ./public

# Create cache directories
RUN mkdir -p /app/.cache/mcp-packages \
    && chown -R nextjs:nodejs /app/.cache

# Switch to non-root user
USER nextjs

# Expose port
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:3000/api/health || exit 1

# Start application
CMD ["node", "server.js"]

Docker Compose Configuration

Development Setup

# docker-compose.yml
services:
  pluggedin-app:
    container_name: pluggedin-app
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - .env
    restart: always
    ports:
      - '12005:3000'
    volumes:
      - mcp-cache:/app/.cache
      - app-uploads:/app/uploads
      - app-logs:/app/logs
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://pluggedin:pluggedin_secure_password@pluggedin-postgres:5432/pluggedin
      - DATABASE_SSL=false
      - MCP_ISOLATION_TYPE=none
      - MCP_ISOLATION_FALLBACK=firejail
      - MCP_ENABLE_NETWORK_ISOLATION=false
      - MCP_PACKAGE_STORE_DIR=/app/.cache/mcp-packages
      - MCP_PNPM_STORE_DIR=/app/.cache/mcp-packages/pnpm-store
      - MCP_UV_CACHE_DIR=/app/.cache/mcp-packages/uv-cache
    depends_on:
      pluggedin-postgres:
        condition: service_healthy

  pluggedin-postgres:
    container_name: pluggedin-postgres
    image: postgres:18-alpine
    restart: always
    environment:
      POSTGRES_DB: pluggedin
      POSTGRES_USER: pluggedin
      POSTGRES_PASSWORD: pluggedin_secure_password
    ports:
      - '5432:5432'
    volumes:
      - pluggedin-postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U pluggedin -d pluggedin"]
      interval: 5s
      timeout: 5s
      retries: 5

  drizzle-migrate:
    container_name: pluggedin-migrate
    build:
      context: .
      dockerfile: Dockerfile
      target: migrator
    command: >
      sh -c "
        echo 'Waiting for database to be ready...';
        until pg_isready -h pluggedin-postgres -p 5432 -U pluggedin; do
          echo 'Database is unavailable - sleeping';
          sleep 2;
        done;
        echo 'Database is up - running migrations';
        pnpm drizzle-kit migrate
      "
    env_file:
      - .env
    environment:
      - DATABASE_URL=postgresql://pluggedin:pluggedin_secure_password@pluggedin-postgres:5432/pluggedin
      - DATABASE_SSL=false
      - PGUSER=pluggedin
      - PGHOST=pluggedin-postgres
      - PGDATABASE=pluggedin
    depends_on:
      pluggedin-postgres:
        condition: service_healthy

volumes:
  pluggedin-postgres:
    driver: local
  mcp-cache:
    driver: local
  app-uploads:
    driver: local
  app-logs:
    driver: local
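The migrator runs once and exits; confirm it completed cleanly before using the app:
# Check migration output
docker-compose logs drizzle-migrate

# Exit code 0 means all migrations applied
docker inspect pluggedin-migrate --format '{{.State.ExitCode}}'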

Production Setup

# docker-compose.prod.yml
services:
  app:
    image: pluggedin:latest
    container_name: pluggedin-app
    restart: always
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=${DATABASE_URL}
      - NEXTAUTH_URL=${NEXTAUTH_URL}
      - NEXTAUTH_SECRET=${NEXTAUTH_SECRET}
      - REDIS_URL=redis://redis:6379
    volumes:
      - app_cache:/app/.cache
      - app_uploads:/app/uploads
    depends_on:
      - db
      - redis
    networks:
      - pluggedin-network
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '1'
          memory: 1G

  db:
    image: postgres:18-alpine
    container_name: pluggedin-db
    restart: always
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./backup:/backup
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_NAME}"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - pluggedin-network
    command:
      - "postgres"
      - "-c"
      - "shared_buffers=256MB"
      - "-c"
      - "max_connections=200"

  redis:
    image: redis:7-alpine
    container_name: pluggedin-redis
    restart: always
    volumes:
      - redis_data:/data
    networks:
      - pluggedin-network
    command: redis-server --appendonly yes

  nginx:
    image: nginx:alpine
    container_name: pluggedin-nginx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
      - nginx_cache:/var/cache/nginx
    depends_on:
      - app
    networks:
      - pluggedin-network

volumes:
  postgres_data:
  redis_data:
  app_cache:
  app_uploads:
  nginx_cache:

networks:
  pluggedin-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
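Start the production stack with the .env file described below:
docker-compose -f docker-compose.prod.yml up -d

# Confirm all four services (app, db, redis, nginx) are running
docker-compose -f docker-compose.prod.yml ps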

Environment Variables

Create .env file for Docker deployment:
# Database
DATABASE_URL=postgresql://pluggedin:secure_password@db:5432/pluggedin_prod
DB_USER=pluggedin
DB_PASSWORD=secure_password
DB_NAME=pluggedin_prod

# Authentication
NEXTAUTH_URL=https://your-domain.com
NEXTAUTH_SECRET=generate-with-openssl-rand-base64-32

# Redis
REDIS_URL=redis://redis:6379

# Application
NODE_ENV=production
PORT=3000

# MCP Configuration
MCP_PACKAGE_STORE_DIR=/app/.cache/mcp-packages
MCP_PNPM_STORE_DIR=/app/.cache/mcp-packages/pnpm-store
MCP_UV_CACHE_DIR=/app/.cache/mcp-packages/uv-cache

# Sandboxing Configuration (Docker-optimized)
MCP_ISOLATION_TYPE=none
MCP_ISOLATION_FALLBACK=firejail
MCP_ENABLE_NETWORK_ISOLATION=false

# Note: Use 'none' for Docker deployments as bubblewrap requires
# Linux kernel user namespace support not available in Docker on macOS

# API Keys (optional)
ANTHROPIC_API_KEY=your-key
OPENAI_API_KEY=your-key

# Email (optional)
EMAIL_SERVER_HOST=smtp.gmail.com
EMAIL_SERVER_PORT=587
EMAIL_SERVER_USER=your-email
EMAIL_SERVER_PASSWORD=your-password
EMAIL_FROM=noreply@your-domain.com
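Generate strong values for the secret variables rather than hand-writing them:
# Print values you can paste into .env
echo "NEXTAUTH_SECRET=$(openssl rand -base64 32)"
echo "DB_PASSWORD=$(openssl rand -base64 24)"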

Building and Running

Build Image

# Build production image
docker build -t pluggedin:latest .

# Build with specific tag
docker build -t pluggedin:v2.10.3 .

# Build with build args
docker build \
  --build-arg NODE_VERSION=20 \
  --build-arg NEXT_PUBLIC_API_URL=https://api.plugged.in \
  -t pluggedin:latest .

Run Containers

Development:
# Start all services
docker-compose up -d

# View logs
docker-compose logs -f app

# Stop services
docker-compose down
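Production (the same commands against the production compose file):
# Start all services
docker-compose -f docker-compose.prod.yml up -d

# View logs
docker-compose -f docker-compose.prod.yml logs -f app

# Stop services
docker-compose -f docker-compose.prod.yml down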

Database Management

Migrations

# Run migrations
docker-compose exec app pnpm db:migrate

# Generate schema
docker-compose exec app pnpm db:generate

# Access database
docker-compose exec db psql -U pluggedin -d pluggedin_prod
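Drizzle records applied migrations in a bookkeeping table; assuming the project uses Drizzle's default drizzle.__drizzle_migrations table, you can inspect what has run:
docker-compose exec db psql -U pluggedin -d pluggedin_prod \
  -c 'SELECT * FROM drizzle.__drizzle_migrations ORDER BY created_at DESC LIMIT 5;'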

Backup and Restore

# Backup database
docker-compose exec db pg_dump -U pluggedin pluggedin_prod | gzip > backup_$(date +%Y%m%d_%H%M%S).sql.gz

# Restore database
gunzip -c backup_20240101_120000.sql.gz | docker-compose exec -T db psql -U pluggedin pluggedin_prod

Upgrading PostgreSQL

PostgreSQL major version upgrades require data migration. Always backup before upgrading.

Upgrading from PostgreSQL 16 or earlier to 18

  • Option 1: Fresh Start (data loss): shown below
  • Option 2: Data Migration: dump, upgrade, and restore (see the sketch after the Option 1 commands)
# Stop all containers and remove volumes
docker-compose down -v

# Update docker-compose.yml to postgres:18-alpine
# Start fresh with PostgreSQL 18
docker-compose up --build -d
Note: This will DELETE all existing data. Only use for development.
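A sketch of Option 2, reusing the backup and restore commands from the Database Management section (volume and service names may differ in your setup; check docker volume ls):
# 1. Dump the old database while it is still running
docker-compose exec -T db pg_dump -U pluggedin pluggedin_prod | gzip > pre_upgrade.sql.gz

# 2. Stop the stack and remove the old data volume
docker-compose down
docker volume rm pluggedin_postgres_data

# 3. Switch the image to postgres:18-alpine in the compose file, then start the database
docker-compose up -d db

# 4. Restore the dump into the new server
gunzip -c pre_upgrade.sql.gz | docker-compose exec -T db psql -U pluggedin pluggedin_prod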

Verify PostgreSQL Version

# Check PostgreSQL version
docker exec pluggedin-postgres psql -U pluggedin -d pluggedin -c "SELECT version();"

# Expected output: PostgreSQL 18.0 on ...

Nginx Configuration

Create nginx.conf for production:
events {
    worker_connections 1024;
}

http {
    upstream app {
        least_conn;
        server app:3000;
    }

    server {
        listen 80;
        server_name your-domain.com;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl http2;
        server_name your-domain.com;

        ssl_certificate /etc/nginx/ssl/cert.pem;
        ssl_certificate_key /etc/nginx/ssl/key.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;

        client_max_body_size 10M;

        # Security headers
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-XSS-Protection "1; mode=block" always;

        # Proxy to application
        location / {
            proxy_pass http://app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # Static files
        location /_next/static {
            proxy_pass http://app;
            proxy_cache_valid 200 365d;
            add_header Cache-Control "public, immutable";
        }
    }
}
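For local testing you can generate a self-signed certificate into the ./ssl directory the compose file mounts (use Let's Encrypt or a real CA in production):
mkdir -p ssl
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout ssl/key.pem -out ssl/cert.pem \
  -subj "/CN=your-domain.com"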

Health Monitoring

Health Check Endpoint

The application provides a health check endpoint:
# Check health
curl http://localhost:12005/api/health

# Response
{
  "status": "healthy",
  "timestamp": "2024-01-01T12:00:00Z",
  "version": "2.10.3",
  "services": {
    "database": "connected",
    "redis": "connected",
    "mcp": "ready"
  }
}

Docker Health Check

HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
    CMD curl -f http://localhost:3000/api/health || exit 1
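Docker exposes the result of the HEALTHCHECK via inspect:
# Current health status (starting, healthy, or unhealthy)
docker inspect --format '{{.State.Health.Status}}' pluggedin-app

# Recent probe results
docker inspect --format '{{json .State.Health.Log}}' pluggedin-app | jq .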

Scaling Strategies

Horizontal Scaling

# docker-compose.scale.yml
services:
  app:
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: any

Load Balancing

Use Docker Swarm or Kubernetes for advanced load balancing:
# Initialize swarm
docker swarm init

# Deploy stack
docker stack deploy -c docker-compose.prod.yml pluggedin

# Scale service
docker service scale pluggedin_app=5

Troubleshooting

Common Issues

Application container won't start. Check logs and service status:
docker-compose logs app
docker-compose ps
Then verify the environment variables and database connection.

Database connection failures. Verify the database is running:
docker-compose exec db pg_isready
Check the connection string in your .env file.

Permission errors on /app/.cache. Fix ownership:
docker-compose exec app chown -R nextjs:nodejs /app/.cache

Out-of-memory errors. Increase the memory limits in the compose file:
deploy:
  resources:
    limits:
      memory: 4G
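To see which container is actually exhausting memory:
docker stats --no-stream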

Debug Mode

Enable debug logging:
# Add to docker-compose.yml
environment:
  - DEBUG=mcp:*
  - LOG_LEVEL=debug

Container Access

# Access application container
docker-compose exec app sh

# Access database
docker-compose exec db psql -U pluggedin

# View real-time logs
docker-compose logs -f --tail=100

Security Best Practices

Never use default passwords in production. Always use secrets management.

1. Use Docker Secrets

secrets:
  db_password:
    external: true
  nextauth_secret:
    external: true

services:
  app:
    secrets:
      - db_password
      - nextauth_secret
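External secrets must exist before the stack starts; they are a Swarm feature, so with swarm mode initialized you can create them from stdin:
openssl rand -base64 32 | docker secret create nextauth_secret -
printf '%s' 'your-db-password' | docker secret create db_password -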

2. Non-root User

Always run containers as non-root:
USER nextjs

3. Read-only Root Filesystem

services:
  app:
    read_only: true
    tmpfs:
      - /tmp
      - /app/.next/cache

4. Network Isolation

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true

Backup Strategy

Automated backup script:
#!/bin/bash
# backup.sh

BACKUP_DIR="/backups"
DATE=$(date +%Y%m%d_%H%M%S)

# Backup database
docker-compose exec -T db pg_dump -U pluggedin pluggedin_prod | \
  gzip > "$BACKUP_DIR/db_$DATE.sql.gz"

# Backup volumes
docker run --rm \
  -v pluggedin_app_uploads:/data \
  -v $BACKUP_DIR:/backup \
  alpine tar czf "/backup/uploads_$DATE.tar.gz" /data

# Keep only last 30 days
find $BACKUP_DIR -name "*.gz" -mtime +30 -delete
Add to crontab:
0 2 * * * /path/to/backup.sh

Support

For Docker deployment help, open an issue on the GitHub repository: https://github.com/VeriTeknik/pluggedin-app/issues