Your Own Private AI Image Generator: Step-by-Step Guide to Docker Model Runner and Open WebUI

Introduction

Ever wanted to generate AI images without worrying about credits, privacy, or content filters? You can now run a full image-generation pipeline on your own machine using Docker Model Runner and Open WebUI. This setup gives you a chat interface where you type a prompt, and your local hardware handles the rest — no cloud subscriptions, no data leaks. In this guide, you’ll learn how to pull a model, launch the web UI, and start creating images in minutes. All you need is Docker and a bit of patience.

Source: www.docker.com

What You Need

  • Docker Desktop (macOS or Windows) or Docker Engine (Linux) — recent enough to include Docker Model Runner (Docker Desktop 4.40 or later)
  • At least 8 GB of free RAM (16 GB recommended for larger models)
  • GPU (optional but highly recommended):
    • NVIDIA with CUDA
    • Apple Silicon (MPS)
    • CPU fallback works but is slower
  • Stable internet connection for the initial model download (~7 GB)

To verify your Docker setup, run docker model version in a terminal. If you see version info (no errors), you’re ready.
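If you'd rather verify this from a script, a minimal sketch in plain shell (no assumptions beyond the docker CLI itself):

```shell
# Check whether the `docker model` plugin responds: prints "ready"
# if Docker Model Runner is installed, "missing" otherwise.
status=$(docker model version >/dev/null 2>&1 && echo ready || echo missing)
echo "Docker Model Runner: $status"
```

This is handy as a guard at the top of any setup script, so the script fails early with a clear message instead of at the first pull.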

How Docker Model Runner Works with Open WebUI

Before diving into the steps, here's the big picture: Docker Model Runner acts as the control plane. It downloads the AI model, manages the inference backend's lifecycle, and exposes an OpenAI-compatible API — including the POST /v1/images/generations endpoint. Open WebUI, a popular chat interface for local LLMs, knows exactly how to talk to this API. You type a prompt, the UI sends it to the local model, and the generated image appears in the chat.
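You can also talk to that endpoint directly, which is useful for scripting or debugging. A hedged sketch: the request body follows the standard OpenAI images API, but the host and port below are assumptions, so substitute the URL printed in your launch logs before sending:

```shell
# Build an OpenAI-style image-generation request body
cat > /tmp/request.json <<'EOF'
{
  "model": "stable-diffusion",
  "prompt": "A lighthouse on a cliff at sunset, oil painting",
  "n": 1,
  "size": "1024x1024"
}
EOF

# Send it to the local endpoint once the model is running; adjust
# the host/port to match your launch logs before uncommenting:
# curl -s http://localhost:8080/v1/images/generations \
#      -H "Content-Type: application/json" \
#      -d @/tmp/request.json
```

The response follows the OpenAI schema as well, so any client library that speaks that API can be pointed at the local endpoint unchanged.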

Step 1: Pull an Image Generation Model

Docker Model Runner uses a compact packaging format called DDUF (Diffusers Unified Format). This bundles the text encoder, VAE, UNet (or DiT), and scheduler config into a single portable file — distributed via Docker Hub just like any OCI artifact. To download a model:

docker model pull stable-diffusion

This pulls the default Stable Diffusion XL model. The download is about 7 GB, so it may take a few minutes depending on your internet speed. Once downloaded, confirm it’s ready with:

docker model inspect stable-diffusion

You’ll see JSON output like:

{
  "id": "sha256:5f60862074a4c585126288d08555e5ad9ef65044bf490ff3a64855fc84d06823",
  "tags": ["docker.io/ai/stable-diffusion:latest"],
  "created": 1768470632,
  "config": {
    "format": "diffusers",
    "architecture": "diffusers",
    "size": "6.94GB",
    "diffusers": {
      "dduf_file": "stable-diffusion-xl-base-1.0-FP16.dduf",
      "layout": "dduf"
    }
  }
}

This means the model is stored locally as a DDUF file. Docker Model Runner will unpack it at runtime automatically.
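If you want to script against the inspect output, its fields are easy to pull out with any JSON parser. A small sketch using the sample above (python3 is assumed here; jq works equally well):

```shell
# Sample of the `docker model inspect` output shown above
cat > /tmp/inspect.json <<'EOF'
{
  "config": {
    "format": "diffusers",
    "size": "6.94GB",
    "diffusers": {
      "dduf_file": "stable-diffusion-xl-base-1.0-FP16.dduf",
      "layout": "dduf"
    }
  }
}
EOF

# Print which DDUF file backs the model and how large it is
python3 -c 'import json; c = json.load(open("/tmp/inspect.json"))["config"]; print(c["diffusers"]["dduf_file"], c["size"])'
```

In practice you would pipe the command output straight into the parser instead of going through a file.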

Step 2: Launch Open WebUI

This is the magic moment. Docker Model Runner includes a built-in command that wires up Open WebUI against your local inference endpoint. Run:

docker model launch openwebui

That’s it — no extra configuration, no port mapping to remember. The launch command starts both the model inference service and the Open WebUI container, linking them automatically. After a few seconds, you’ll see a log line with the local URL (usually http://localhost:8080). Open that in your browser.

Step 3: Generate Your First Image

Inside Open WebUI, you’ll see a familiar chat interface. To generate an image, simply type a prompt like:

“A dragon wearing a business suit, sitting at a desk, photorealistic style”

The UI will display a small loading animation while the model processes your request. Generating one image on a decent GPU takes about 10–30 seconds; on CPU-only it can take 2–5 minutes. Once done, the image appears in the chat thread. You can download it directly or continue refining with follow-up prompts.

Note: The default model (Stable Diffusion XL) produces 1024×1024 images. You can adjust resolution in future releases, but for now it’s fixed.

Tips for a Smoother Experience

  • Use a GPU if possible. Generation speed improves dramatically. On an NVIDIA RTX 3060, expect ~10 seconds per image; on an M1 MacBook Pro, ~15 seconds; on CPU only, 2–5 minutes.
  • Monitor your RAM. Docker Model Runner loads models into memory. If you have only 8 GB, close other applications before running. For larger models (e.g., SDXL), 16 GB is recommended.
  • Storage space. Each model takes about 7 GB. If you plan to experiment with multiple models (like sdxl-turbo or anime versions), allocate at least 30 GB free.
  • Troubleshooting: If the launch command fails, ensure Docker has enough resources (check Docker Desktop settings > Resources). On Linux, verify that your user is in the docker group.
  • Stopping the service: Use Ctrl+C in the terminal where you ran docker model launch to stop both containers. Alternatively, run docker compose down (the launch command creates a temporary compose project).
  • Advanced: You can pull other models from Docker Hub that use the DDUF format. For example, docker model pull sdxl-turbo for faster generation (but lower quality). Check Docker Hub for available tags.
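Since each DDUF model weighs in at roughly 7 GB, it's worth checking free disk space before pulling several. A quick sketch (checking the root filesystem is only a rough proxy, since Docker stores models in its own storage location):

```shell
# Each DDUF model needs roughly 7 GB on disk; aim for 30 GB+ free
# before experimenting with several models. Report free space on /:
df -h / | awk 'NR==2 {print "Free space on /:", $4}'
```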

Conclusion

With just two commands — docker model pull and docker model launch openwebui — you now have a fully private, locally running AI image generator. No subscription fees, no data leaving your machine, and no arbitrary content filters. You can create as many images as your hardware allows, all from a clean chat interface. This is the power of local AI, made accessible by Docker Model Runner and Open WebUI.
