Deployment Guide

Platform Requirements

Linux Only - NetIntel-OCR is tested and supported only on Linux systems.

  • Supported OS: Ubuntu 20.04/22.04, RHEL 8/9, Debian 11/12
  • Python: 3.11.x or 3.12.x ONLY
  • Architecture: x86_64 (ARM support planned)

Windows and macOS deployments are not currently supported.

Deployment Workflow Overview

NetIntel-OCR v0.1.17 provides an end-to-end deployment workflow built on the CLI capabilities introduced in this release:

System Check → Project Init → Config Setup → Model Setup → Server Deploy → Health Check

Step 1: System Verification

Check System Requirements

# Verify system compatibility
netintel-ocr system check

# Get detailed diagnostics
netintel-ocr system diagnose

# Check version and dependencies
netintel-ocr system version --json

System Requirements

# Check Python version (MUST be 3.11.x or 3.12.x)
python --version

# Verify Linux platform
uname -s  # Must output "Linux"

# Check available resources
netintel-ocr system metrics

Production Environment

  • OS: Linux x86_64 (Ubuntu 20.04+ or RHEL 8+ recommended)
  • Python: 3.11.x or 3.12.x (verified installation)
  • Memory: 16GB minimum, 32GB recommended
  • CPU: 8+ cores recommended
  • Storage: SSD with 100GB+ available
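
Before installing, a host can be checked against these figures with standard Linux tools:

# Host-level resource checks (compare against the figures above)
nproc                   # CPU cores
free -g                 # memory in GB
df -h .                 # free storage on the target volume
lsblk -d -o NAME,ROTA   # ROTA=0 indicates an SSD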

Step 2: Project Initialization

Initialize Deployment Configuration

# Initialize with deployment template
netintel-ocr project init --template production

# Small deployment (dev/test)
netintel-ocr project init --template small

# Medium deployment (staging)
netintel-ocr project init --template medium

# Large deployment (production)
netintel-ocr project init --template large

# Enterprise deployment (high availability)
netintel-ocr project init --template enterprise

Generated Files

project/
├── config.json           # Main configuration
├── docker-compose.yml    # Docker deployment
├── k8s/                  # Kubernetes manifests
│   ├── configmap.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── hpa.yaml
│   └── pvc.yaml
├── helm/                 # Helm charts
│   ├── Chart.yaml
│   └── values.yaml
└── .env                  # Environment variables
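
The .env file carries environment variables consumed by the containers. As a minimal sketch, it can define the two variables referenced in the Docker deployment in Step 5; the file generated by project init may include more:

# Minimal .env sketch -- only variables used elsewhere in this guide;
# the generated file may define additional settings
NETINTEL_CONFIG=/app/config/config.json
OLLAMA_HOST=http://ollama:11434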

Step 3: Configuration Management

Initialize Configuration

# Create default configuration
netintel-ocr config init

# Initialize with specific template
netintel-ocr config init --template production

# Initialize with custom path
netintel-ocr config init --output /etc/netintel/config.json

Configure Settings

# Set API server configuration
netintel-ocr config set server.api.port 8000
netintel-ocr config set server.api.host 0.0.0.0
netintel-ocr config set server.api.workers 4

# Set MCP server configuration
netintel-ocr config set server.mcp.port 8001
netintel-ocr config set server.mcp.enabled true

# Set database configuration
netintel-ocr config set db.milvus.host milvus.internal
netintel-ocr config set db.milvus.port 19530
netintel-ocr config set db.collection network_docs

# Set model configuration
netintel-ocr config set models.default qwen2.5vl:7b
netintel-ocr config set models.network NetIntelOCR-7B-0925
netintel-ocr config set models.flow qwen2.5vl:7b

# Set performance options
netintel-ocr config set performance.max_parallel 4
netintel-ocr config set performance.cache_enabled true
netintel-ocr config set performance.gpu_enabled true
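
The dot-notation keys above map onto a nested JSON document. The heredoc below prints an illustrative sketch of that structure, inferred from the Kubernetes ConfigMap example in Step 6; the file generated by config init may contain additional keys:

# Illustrative only: how the dot-notation keys above nest inside config.json.
# Structure inferred from the ConfigMap example in Step 6.
cat <<'EOF'
{
  "server": {
    "api": {"port": 8000, "host": "0.0.0.0", "workers": 4},
    "mcp": {"port": 8001, "enabled": true}
  },
  "db": {
    "milvus": {"host": "milvus.internal", "port": 19530},
    "collection": "network_docs"
  },
  "models": {
    "default": "qwen2.5vl:7b",
    "network": "NetIntelOCR-7B-0925",
    "flow": "qwen2.5vl:7b"
  },
  "performance": {
    "max_parallel": 4,
    "cache_enabled": true,
    "gpu_enabled": true
  }
}
EOF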

Configuration Profiles

# Create deployment profiles
netintel-ocr config profile create development
netintel-ocr config profile create staging
netintel-ocr config profile create production

# Switch between profiles
netintel-ocr config profile use production

# List profiles
netintel-ocr config profile list

# Export profile
netintel-ocr config profile export production > prod-config.json

Environment Variables

# Export configuration as environment variables
netintel-ocr config env export > .env

# Load from environment
netintel-ocr config env load

# Show current configuration
netintel-ocr config show

# Validate configuration
netintel-ocr config validate

Step 4: Model Management

Configure OLLAMA

# Set OLLAMA host
netintel-ocr model ollama set-host http://ollama.internal:11434

# List available models
netintel-ocr model ollama list

# Pull required models
netintel-ocr model ollama pull qwen2.5vl:7b
netintel-ocr model ollama pull NetIntelOCR-7B-0925
netintel-ocr model ollama pull minicpm-v:latest

# Verify models
netintel-ocr model ollama verify
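
If verification fails, the OLLAMA endpoint can also be probed directly through its standard HTTP API (the host below matches the set-host example above):

# List the models OLLAMA currently serves (standard OLLAMA HTTP API)
curl -s http://ollama.internal:11434/api/tags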

Model Configuration

# List all configured models
netintel-ocr model list

# Configure model defaults
netintel-ocr model set-default text Nanonets-OCR-s:latest
netintel-ocr model set-default network qwen2.5vl:7b
netintel-ocr model set-default flow qwen2.5vl:7b

# Preload models for performance
netintel-ocr model preload qwen2.5vl:7b
netintel-ocr model preload --all

# Keep models loaded in memory
netintel-ocr model keep-loaded

Step 5: Docker Deployment

Single Container Deployment

# Dockerfile
FROM python:3.11-slim-bullseye
# OR: FROM python:3.12-slim-bookworm

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc g++ python3-dev \
    && rm -rf /var/lib/apt/lists/*

# Install NetIntel-OCR
RUN pip install netintel-ocr

# Copy configuration
COPY config.json /app/config.json

# Set configuration path
ENV NETINTEL_CONFIG=/app/config.json

# Default command starts all services
CMD ["netintel-ocr", "server", "all"]
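
A minimal build-and-run sketch for this single-container image; it assumes OLLAMA (and, if used, Milvus) are reachable from the container at the addresses you configured:

# Build and run a single container (ports and config path as defined above)
docker build -t netintel-ocr:latest .
docker run -d --name netintel-ocr \
  -p 8000:8000 -p 8001:8001 \
  -e OLLAMA_HOST=http://ollama.internal:11434 \
  -v "$(pwd)/config.json:/app/config.json" \
  netintel-ocr:latest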

Docker Compose Deployment

# docker-compose.yml (generated by project init)
version: '3.8'

services:
  netintel-ocr:
    image: netintel-ocr:latest
    container_name: netintel-ocr
    command: netintel-ocr server all
    environment:
      - NETINTEL_CONFIG=/app/config/config.json
      - OLLAMA_HOST=http://ollama:11434
    ports:
      - "8000:8000"  # API
      - "8001:8001"  # MCP
    volumes:
      - ./config:/app/config
      - ./input:/app/input
      - ./output:/app/output
      - ./cache:/app/cache
    depends_on:
      - ollama
      - milvus
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "netintel-ocr", "system", "health"]
      interval: 30s
      timeout: 10s
      retries: 3

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  milvus:
    image: milvusdb/milvus:latest
    container_name: milvus
    ports:
      - "19530:19530"
      - "9091:9091"
    volumes:
      - milvus_data:/var/lib/milvus
    environment:
      - ETCD_ENDPOINTS=etcd:2379
      - MINIO_ADDRESS=minio:9000
      # NOTE: Milvus also requires etcd and MinIO. Add them as services in
      # this compose file or point the addresses above at external instances.

volumes:
  ollama_data:
  milvus_data:

Build and Deploy

# Build Docker image
docker build -t netintel-ocr:latest .

# Start services using docker-compose
docker-compose up -d

# Check service health
docker exec netintel-ocr netintel-ocr system health

# View logs
docker-compose logs -f netintel-ocr

# Scale services (remove container_name and the fixed host port mappings,
# or front the replicas with a load balancer, before scaling)
docker-compose up -d --scale netintel-ocr=3

Step 6: Kubernetes Deployment

Deploy Using Generated Manifests

# Apply all Kubernetes resources
kubectl apply -f k8s/

# Check deployment status
kubectl get pods -l app=netintel-ocr

# View logs
kubectl logs -f deployment/netintel-ocr

# Check service endpoints
kubectl get svc netintel-ocr-service
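
To confirm the rollout from a workstation without exposing the service, a port-forward plus the health endpoint from Step 8 works (service name as in the generated manifests):

# Forward the API port locally and hit the health endpoint
kubectl port-forward svc/netintel-ocr-service 8000:8000 &
sleep 2
curl -fsS http://localhost:8000/health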

Helm Deployment

# Install using Helm
helm install netintel-ocr ./helm \
  --set image.tag=latest \
  --set replicas=3 \
  --set resources.requests.memory=4Gi \
  --set resources.requests.cpu=2

# Upgrade deployment
helm upgrade netintel-ocr ./helm \
  --set replicas=5

# Check status
helm status netintel-ocr

ConfigMap for Kubernetes

apiVersion: v1
kind: ConfigMap
metadata:
  name: netintel-ocr-config
data:
  config.json: |
    {
      "server": {
        "api": {"port": 8000, "host": "0.0.0.0"},
        "mcp": {"port": 8001, "enabled": true}
      },
      "db": {
        "milvus": {
          "host": "milvus-service",
          "port": 19530
        }
      },
      "models": {
        "default": "qwen2.5vl:7b",
        "network": "NetIntelOCR-7B-0925"
      }
    }
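
After editing the ConfigMap, re-apply it and restart the deployment so pods pick up the new config.json (this assumes the generated deployment mounts the ConfigMap):

# Apply the updated ConfigMap and roll the pods
kubectl apply -f k8s/configmap.yaml
kubectl rollout restart deployment/netintel-ocr
kubectl rollout status deployment/netintel-ocr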

Step 7: Server Operations

Start Individual Services

# Start API server only
netintel-ocr server api --port 8000 --workers 4

# Start MCP server only
netintel-ocr server mcp --port 8001

# Start all services (API + MCP)
netintel-ocr server all

# Start with custom configuration
netintel-ocr --config production.json server all

Server Management

# Check server status
netintel-ocr server status

# Reload configuration (graceful)
netintel-ocr server reload

# Stop server (graceful shutdown)
netintel-ocr server stop

# View server metrics
netintel-ocr server metrics

Step 8: Health Monitoring

System Health Checks

# Check overall system health
netintel-ocr system health

# Detailed health report
netintel-ocr system health --detailed

# Check specific components
netintel-ocr system health --component api
netintel-ocr system health --component mcp
netintel-ocr system health --component db
netintel-ocr system health --component models

Monitoring Endpoints

# Prometheus metrics
curl http://localhost:8000/metrics

# Health check
curl http://localhost:8000/health

# Readiness check
curl http://localhost:8000/ready

# Liveness check
curl http://localhost:8000/alive
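
These endpoints make convenient gates in deployment scripts; for example, blocking until the service reports ready:

# Wait for the readiness endpoint before routing traffic (adjust host/port)
until curl -fsS http://localhost:8000/ready > /dev/null; do
  echo "waiting for NetIntel-OCR to become ready..."
  sleep 5
done
echo "NetIntel-OCR is ready"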

Logging Configuration

# Set log level
netintel-ocr config set logging.level INFO
netintel-ocr config set logging.format json

# Enable debug mode
netintel-ocr --debug server all

# Log to file
netintel-ocr --log-file /var/log/netintel.log server all

# Structured logging
netintel-ocr --log-format json server all

Production Best Practices

1. Configuration Management

# Use environment-specific profiles
netintel-ocr config profile use production

# Enable configuration validation
netintel-ocr config validate --strict

# Backup configuration
netintel-ocr config backup /backup/config-$(date +%Y%m%d).json

2. Security Settings

# Enable API authentication
netintel-ocr config set server.api.auth.enabled true
netintel-ocr config set server.api.auth.key "$(openssl rand -hex 32)"

# Configure TLS
netintel-ocr config set server.api.tls.enabled true
netintel-ocr config set server.api.tls.cert /etc/ssl/cert.pem
netintel-ocr config set server.api.tls.key /etc/ssl/key.pem

# Set CORS policy
netintel-ocr config set server.api.cors.origins "https://app.example.com"
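
For staging or test environments without a CA-issued certificate, a self-signed pair matching the TLS paths above can be generated; the CN below is a placeholder, and production deployments should use a proper certificate:

# Self-signed certificate for testing only -- CN is a placeholder hostname
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout /etc/ssl/key.pem -out /etc/ssl/cert.pem \
  -subj "/CN=netintel.example.com"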

3. Performance Optimization

# Configure caching
netintel-ocr config set cache.enabled true
netintel-ocr config set cache.dir /var/cache/netintel
netintel-ocr config set cache.size 10GB

# Set resource limits
netintel-ocr config set resources.max_memory 8GB
netintel-ocr config set resources.max_cpu 4

# Enable GPU acceleration
netintel-ocr config set gpu.enabled true
netintel-ocr config set gpu.device 0
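
Before enabling GPU acceleration, confirm the GPU is visible to the host and, for the Docker deployment in Step 5, that the NVIDIA runtime is available:

# Verify GPU visibility on the host
nvidia-smi
# Verify the NVIDIA runtime is registered with Docker
docker info | grep -i runtime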

4. High Availability

# Configure clustering
netintel-ocr config set cluster.enabled true
netintel-ocr config set cluster.nodes "node1:8000,node2:8000,node3:8000"

# Set up load balancing
netintel-ocr config set loadbalancer.algorithm round_robin
netintel-ocr config set loadbalancer.health_check_interval 10

# Configure failover
netintel-ocr config set failover.enabled true
netintel-ocr config set failover.timeout 30

Troubleshooting Deployment

Common Issues

# Check system requirements
netintel-ocr system check --verbose

# Verify configuration
netintel-ocr config validate --debug

# Test connectivity
netintel-ocr system test-connection ollama
netintel-ocr system test-connection milvus

# Generate diagnostic report
netintel-ocr system diagnose --output diagnostic-report.txt
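
If the built-in connectivity tests fail, the same dependencies can be probed with standard tools (hosts and ports below match the examples used throughout this guide):

# Probe dependencies directly
curl -s http://ollama.internal:11434/api/tags   # OLLAMA HTTP API
nc -zv milvus.internal 19530                    # Milvus gRPC port
nc -zv milvus.internal 9091                     # Milvus metrics/health port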

Debug Mode

# Run with full debug output
netintel-ocr --debug --verbose server all

# Debug specific component
netintel-ocr --debug server api --component network_processor

# Save debug logs
netintel-ocr --debug --log-file debug.log server all

Migration from Previous Versions

From v0.1.16 to v0.1.17

# Export old configuration
netintel-ocr-old --export-config > old-config.json

# Migrate configuration
netintel-ocr config migrate old-config.json

# Verify migration
netintel-ocr config validate

# Test with new configuration
netintel-ocr --dry-run server all

Next Steps