OpenBlender

An addon that brings generative workflows directly into Blender. Generate images, videos, 3D models, and receive AI assistance without leaving Blender.

v0.30.1 · Blender 5.0+ · NVIDIA 24 GB VRAM · 85 GB SSD

Core Features

Nine modules that cover generation, remix, LoRA, HDRI, chat, and remote control.

🖼️ IMG to IMG

▶ Watch

Generate images with real-time viewport capture. Works in FreeView & CameraView.

  • Manual "Run" or auto "Run on Change" modes
  • Adjustable megapixels, steps, denoise
  • Auto-numbered output in Image Editor
🎬 IMG to VID

▶ Watch

Create videos from three keyframe images. Auto-adds to the Video Sequencer.

  • First / Middle / Last keyframes
  • Configurable frame count & resolution
  • Re-click any keyframe to replace it

TXT to IMG

▶ Watch

Generate images from text prompts. Result loads automatically into the Image Editor.

  • Z-Image Turbo or FLUX.2-Klein workflows
  • Adjustable steps, width & height
  • Randomise Seed checkbox
  • Triggerable from AI Chat
🧊 IMG to 3D

▶ Watch

Convert images into GLB models via Trellis2, imported with textures.

  • Import image → Start → 3D model
  • Auto-downloads models on first run
💬 AI Chat

▶ Watch ▶ Watch

Integrated chat via OpenRouter or local LM Studio.

  • Claude, GPT-4, Llama 3, Mistral, Qwen
  • Floating chat button in Blender UI
🔌 MCP Server

▶ Watch

HTTP/SSE server (OpenClaw) for remote Blender control.

  • Scene, material, animation, render control
  • Optional API key auth
🎨 Remix

▶ Watch

Transform any displayed image from inside the Image Editor. Uses the current image as IMG to IMG input.

  • Image Editor side panel — no mode switching
  • Auto-numbered output: Remix.001, Remix.002…
  • Send result to IMG to 3D with one click
  • Adjustable megapixels, steps, denoise & prompt
🧩 LoRA Support

▶ Watch

Add LoRA models to your generation workflows.

  • Open IMG to IMG / IMG to VID in ComfyUI WebUI
  • Add your LoRA node(s) and save the API workflow
  • Your LoRA stack is now applied automatically in Blender
🌐 TXT to HDRI

▶ Watch

Generate 360° panoramic HDRI environment maps from text prompts using FLUX.2-Klein + HDRI LoRA.

  • Full equirectangular 360° panoramic output
  • Klein 9B HDRI LoRA — tuned for panoramic scenes
  • Apply directly as World environment in Blender

Key Capabilities

Designed to stay out of your way.

Non-Blocking

Background threads keep Blender responsive

Viewport Integration

FreeView & CameraView, correct aspect ratio

Smart Monitoring

Auto-detect changes, trigger generation

Custom Workflows

Load any ComfyUI JSON workflow

Image Management

Auto-numbered: Result, Result_001…

VSE Integration

Videos auto-add to Video Sequencer

Modular UI

Toggle panels to reduce clutter

SageAttention

Optional install for faster inference

Requirements

What you need before getting started.

🖥️ System

  • Blender 5.0 or later (untested on earlier versions)
  • RTX 4090 recommended
  • Sufficient VRAM for AI models + EEVEE

🔗 ComfyUI Features

  • ComfyUI server running (local or remote)
  • Required for IMG→IMG, IMG→VID, IMG→3D
  • Must be started manually (WebUI not needed)

🌐 MCP Server (Optional)

  • Network access for HTTP (default port: 9876)
  • Optional: API key for authentication

💬 Chat (Optional)

  • OpenRouter: API key + internet
  • LM Studio: Local server running

Installation

Follow these five steps to get everything running.

1

Install the Extension

  1. Download the OpenBlender extension package
  2. Blender → Edit → Preferences → Extensions
  3. Click "Install from Disk", select the .zip
  4. Enable the extension by checking the checkbox
2

Set Up ComfyUI & Models (AIO Installer)

  1. Install Visual C++ Redistributable: aka.ms/vc14/vc_redist.x64.exe
  2. Install Windows PowerShell 7: PowerShell-7.5.4-win-x64.msi
  3. Update your NVIDIA drivers to the latest version
  4. Navigate to the addon directory:
    C:\Users\YourName\AppData\...\Blender\5.0\extensions\user_default\openblender
  5. Run install_comfyui_aio.bat and follow the on-screen instructions — this installs ComfyUI, required nodes, dependencies (SageAttention / Flash Attention), and all models automatically
  6. Test each of the 4 workflows in ComfyUI: drag and drop the .json files (in Blender\5.0\extensions\user_default\openblender\ComfyUI) into the WebUI. If they run and produce output, you're ready to use OpenBlender. This is a one-time verification; you won't need to open the WebUI again unless you want to edit workflows (add LoRAs, swap models for .gguf quants, etc.)
3

Install LM Studio (Optional — Local LLM)

  1. Download from lmstudio.ai, install, launch
  2. Discover tab → search model (GLM 4.7, Qwen3 Coder Next, any smart model) → Download
  3. Developer tab → select model → Start Server
  4. Default URL: http://127.0.0.1:1234
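LM Studio's local server speaks the standard OpenAI-compatible REST API, so any HTTP client can talk to it. A minimal sketch of building a chat request (illustrative, not the addon's own client code; the model name is whatever you loaded in LM Studio):

```python
import json
import urllib.request

def build_chat_request(base_url, model, user_message):
    """Build an OpenAI-compatible chat completion request for LM Studio."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        base_url.rstrip("/") + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the LM Studio server running, you could then send it:
# with urllib.request.urlopen(build_chat_request(
#         "http://127.0.0.1:1234", "your-loaded-model", "Hello")) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```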
4

Configure OpenBlender

  1. Edit → Preferences → Add-ons → OpenBlender
  2. Set Model Provider (OpenRouter or LM Studio)
  3. Enter API key or local server URL
  4. Set ComfyUI Server URL (default: http://127.0.0.1:8188)
  5. Toggle feature visibility for the sidebar panels you need
5

Update

  1. Check the addon's Gumroad page for new versions and download updates
  2. Replace the content of Blender\5.0\extensions\user_default\openblender
  3. Run Verify_UpdateModels.bat to check for missing or new models and auto-download them

Usage

Step-by-step guides for each module.

MCP Server (OpenClaw)

▶ Watch

Remote Blender control via HTTP/SSE.

  1. Open sidebar panel (N key → OpenBlender tab)
  2. Click "Start Server"
  3. Connect MCP client to http://[IP]:9876/sse
  4. Use API key if authentication is enabled
URL: http://[your-ip]:9876/sse
Protocol: Server-Sent Events (SSE)
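Server-Sent Events is a plain-text streaming format: each event is a group of `field: value` lines terminated by a blank line. For readers writing their own client, a minimal parser sketch (illustrative; real MCP client libraries handle this for you):

```python
def parse_sse(stream_text):
    """Parse Server-Sent Events text into (event, data) tuples."""
    events = []
    event_type, data_lines = "message", []  # "message" is the SSE default type
    for line in stream_text.splitlines():
        if line == "":  # a blank line terminates the current event
            if data_lines:
                events.append((event_type, "\n".join(data_lines)))
            event_type, data_lines = "message", []
        elif line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
    return events
```

For example, `parse_sse("event: endpoint\ndata: /message\n\n")` yields `[("endpoint", "/message")]`.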

AI Chat

▶ Watch ▶ Watch

Integrated AI assistant inside Blender.

  1. Enable the Chat panel in addon Preferences
  2. Configure your LLM provider (OpenRouter or LM Studio)
  3. Click the floating chat button in the Blender UI

Vision / Keyframe Analysis

The AI chat supports vision-capable models (e.g. Kimi k2.5, GPT-4o, Claude) for analysing images directly inside Blender.

Video Prompt from Keyframes:

  1. Generate or assign your three keyframe images (First, Middle, Last) via the IMG to VID or Remix panel
  2. Open the floating chat and type: generate a video prompt from keyframes images
  3. The AI calls get_vid_keyframe_images, receives the three images, analyses the visual sequence, and writes a detailed motion prompt describing scene content, transitions, mood, lighting, and optionally sound design

Requires a vision-capable model. The keyframe images are sent as base64 JPEG (max 512px) to stay within context limits.
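The 512px cap is a longest-side downscale before base64 encoding. A sketch of the kind of pre-processing involved (illustrative; the addon's exact resampling isn't shown here):

```python
import base64

def fit_within(width, height, max_side=512):
    """Compute downscaled dimensions so the longest side is at most max_side."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height  # already small enough, no scaling
    scale = max_side / longest
    return round(width * scale), round(height * scale)

def to_base64_jpeg(jpeg_bytes):
    """Encode already-compressed JPEG bytes as an ASCII base64 string."""
    return base64.b64encode(jpeg_bytes).decode("ascii")
```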

Batch Generation

Request multiple images at once for keyframe sequences:

  • "Generate 3 continuous keyframes of Batman — first, middle, and end frames"
  • "Create 5 variations of a mountain landscape"

The AI generates coherent, continuous prompts showing temporal progression and calls the generation tool once per image. Each frame logically follows the previous with evolving poses, camera movement, and lighting. All prompts are automatically enhanced with cinematic detail.

Note: The AI triggers each image sequentially and will confirm after all requested images have been sent to ComfyUI.

Automatic Prompt Enhancement

All generation requests are automatically enhanced without you needing to write detailed prompts:

  • Subject specifics: appearance, pose, action, costume, expression
  • Camera work: angle, lens, motion, depth of field
  • Lighting: key light quality, fill, rim, time of day, atmosphere
  • Environment: location, weather, background elements
  • Style: cinematic look, color grading, technical quality

Example — You say: "Batman" → AI generates: "Batman in tactical armored suit on rain-soaked Gotham rooftop, low-angle heroic shot with 35mm lens, dramatic rim lighting through storm clouds, volumetric fog, wet cape reflecting neon signs, cinematic color grading, 8k photorealistic"

IMG to IMG

▶ Watch

Generate images from your viewport.

🎯 Manual Mode

  1. Set mode to "Run" in sidebar
  2. Adjust parameters → click "Generate"

⚡ On-Change Mode

  1. Set mode to "Run (on change)" → click "Start"
  2. Modify viewport — generation triggers when inputs are released
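Under the hood, "run on change" behaviour amounts to a debounce: wait until no new changes have arrived for a quiet period, then fire once. A minimal sketch of the idea (illustrative, not the addon's implementation; the injectable clock is only there to make it testable):

```python
import time

class Debouncer:
    """Fire a callback only after `quiet` seconds without new changes."""

    def __init__(self, quiet, callback, clock=time.monotonic):
        self.quiet, self.callback, self.clock = quiet, callback, clock
        self._last_change = None

    def notify_change(self):
        """Record that an input changed; restarts the quiet timer."""
        self._last_change = self.clock()

    def poll(self):
        """Call periodically (e.g. from a timer); fires once when quiet elapses."""
        if self._last_change is not None and self.clock() - self._last_change >= self.quiet:
            self._last_change = None
            self.callback()
            return True
        return False
```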

⚙️ Parameters

Prompt: describe the desired output
Megapixels: output resolution, 0.1 – 1.0
Steps: inference steps, 1 – 12
Denoise: 0.0 – 1.0 (Z-Image Turbo only; FLUX.2-Klein = 1)

Output: Images appear in Image Editor as Result, Result_001, Result_002
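The auto-numbering scheme can be sketched as a small helper (illustrative, not the addon's actual code):

```python
def next_result_name(existing, base="Result"):
    """Return the next free auto-numbered name: Result, Result_001, Result_002..."""
    if base not in existing:
        return base
    n = 1
    while f"{base}_{n:03d}" in existing:
        n += 1
    return f"{base}_{n:03d}"
```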

IMG to VID

▶ Watch

Generate videos from three keyframe images.

  1. Click "First Frame" → start view
  2. Click "Middle Frame" → transition view
  3. Click "Last Frame" → end view
  4. Set Frame Count and Megapixels
  5. Click "Generate Video"

All three keyframes required. You can re-click any to replace. Video auto-adds to VSE if enabled.

AI Video Prompt Generation

Once your three keyframes are set, ask the AI Chat to "generate a video prompt from keyframes images". The AI will visually analyse the First → Middle → Last sequence and write a cinematic motion prompt describing transitions, mood, lighting, and sound design context.

IMG to 3D

▶ Watch

Convert 2D images to 3D GLB models via Trellis2.

  1. Import an image
  2. Click "Start"
  3. 3D model imports into Blender as GLB, with textures

Requires Trellis2 nodes in ComfyUI. The first run takes around 2 minutes to download models.

Remix

▶ Watch

Transform images from inside the Image Editor side panel.

  1. Open any image in the Image Editor
  2. Press N → find the OpenBlender tab in the sidebar
  3. Set your Prompt, Megapixels, Steps, and Denoise strength
  4. Click "Generate Remix"
  5. Result appears as Remix.001, Remix.002… in the Image Editor

📤 Send to IMG to 3D

Click "Send to IMG to 3D" to push the currently displayed image to the IMG to 3D input.

🎬 Send to IMG to VID Frame

Use "Send to First / Middle / Last Frame" to assign the current Image Editor image directly to an IMG to VID keyframe slot — no need to switch panels.

🤖 MCP / AI Chat

Remix is also available via the MCP server and AI Chat using the trigger_remix tool. Agents can transform any displayed image remotely with a text prompt.

📸 Image Capture

Request the current state as an image:

  • "Send me the current image" → Agent calls capture_image_editor
  • "Show me the viewport" → Agent calls capture_viewport

Both return base64 JPEG images that the agent can display in the chat.

Prompt: describe the remixed result
Megapixels: output resolution, 0.1 – 4.0
Steps: inference steps, 1 – 20
Denoise: 0.0 – 1.0 (Z-Image Turbo only)

TXT to HDRI

▶ Watch

Generate 360° equirectangular HDRI environment maps from text prompts using FLUX.2-Klein + Klein 9B HDRI LoRA.

  1. Set viewport shading to Material Preview or Rendered so the environment is visible
  2. In Properties → World, ensure a World data block exists — click "New" if the slot is empty
  3. Open the OpenBlender sidebar panel (N key → OpenBlender tab)
  4. Enter a Prompt describing the environment
  5. Set Megapixels and Steps
  6. Click "Generate HDRI"
  7. The generated panorama is automatically applied to the scene World as an environment texture
Prompt: describe the environment (sky, studio, nature…)
Megapixels: output resolution, 0.5 – 4.0+
Steps: inference steps, 1 – 20

The required model is installed automatically by the installer or the Verify_UpdateModels.bat tool.
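For reference, an equirectangular panorama is always 2:1 (width = 2 × height), so a megapixel budget maps to concrete dimensions like this (a sketch; the snap to a multiple of 8 is an assumption, a common constraint for diffusion models, not a documented OpenBlender detail):

```python
def hdri_resolution(megapixels):
    """Width/height for a 2:1 equirectangular panorama at a megapixel budget."""
    total = megapixels * 1_000_000
    height = round((total / 2) ** 0.5)  # w = 2h, so w * h = 2 * h^2
    height -= height % 8                # snap down to a multiple of 8 (assumption)
    return 2 * height, height
```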

LoRA Support

▶ Watch

Add LoRA models to your generation workflows.

  1. Open your IMG to IMG or IMG to VID workflow in ComfyUI WebUI
  2. Add your LoRA node(s) to the workflow graph
  3. Save the API workflow as .json
  4. Your LoRA stack is now applied automatically in Blender on every generation

Any ComfyUI-compatible LoRA works. Stack multiple LoRAs in a single workflow. Find models at CivitAI.
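A saved API workflow is plain JSON, so the splice the WebUI performs can also be done programmatically. A sketch that inserts ComfyUI's built-in LoraLoader after the checkpoint loader and rewires its MODEL/CLIP consumers (illustrative, not the addon's code; it assumes a single CheckpointLoaderSimple node, and the WebUI route above remains the recommended one):

```python
def splice_lora(workflow, lora_name, strength=1.0):
    """Insert a LoraLoader node into an API-format ComfyUI workflow dict."""
    ckpt_id = next(i for i, n in workflow.items()
                   if n["class_type"] == "CheckpointLoaderSimple")
    lora_id = str(max(int(i) for i in workflow) + 1)
    # Redirect every input that consumed the checkpoint's MODEL (slot 0) or
    # CLIP (slot 1) output to the LoraLoader's matching output. Rewire first,
    # then add the node, so the LoraLoader's own inputs are left untouched.
    for node in workflow.values():
        for key, value in node["inputs"].items():
            if isinstance(value, list) and value[0] == ckpt_id:
                node["inputs"][key] = [lora_id, value[1]]
    workflow[lora_id] = {
        "class_type": "LoraLoader",
        "inputs": {
            "model": [ckpt_id, 0],
            "clip": [ckpt_id, 1],
            "lora_name": lora_name,
            "strength_model": strength,
            "strength_clip": strength,
        },
    }
    return workflow
```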

Configuration

Access via Edit → Preferences → Add-ons → OpenBlender. ComfyUI must be started manually.

🔌 MCP Server

Port: 9876
API Key: optional
Auto Start: on/off

🤖 LLM Provider

Provider: OpenRouter / LM Studio
OpenRouter API Key: your-key
OpenRouter Model: anthropic/claude-3.5-sonnet
LM Studio URL: 127.0.0.1:1234

🎨 ComfyUI

Server URL: 127.0.0.1:8188
Auto-add to VSE: on/off

👁️ Panel Visibility

MCP Server: toggle
Chat: toggle
IMG to IMG: toggle
IMG to VID: toggle
IMG to 3D: toggle

Tips & Troubleshooting

⚡ Performance

  • Unload LM Studio models before running ComfyUI workflows to free VRAM
  • VRAM issue in IMG to VID? In ComfyUI/comfy/supported_models.py, change memory_usage_factor = 0.061 to 0.2
  • Install SageAttention for faster inference

🏗️ Best Practices

  • Disable unused panels in Preferences to reduce N-panel clutter
  • Monitor both Blender and ComfyUI consoles for progress
  • Test workflows in ComfyUI first before using through Blender
  • For captures, use Camera view and check that the render resolution matches your intended aspect ratio

🔧 Connection Issues

  • Start ComfyUI manually before using generation features
  • Verify Server URL matches running instance (default 127.0.0.1:8188)
  • Visit http://127.0.0.1:8188 in browser to confirm
  • Check firewall for port 8188
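The browser check above can also be scripted. A sketch (the injectable open_fn is only there so the helper can be tested without a live server):

```python
import urllib.request

def comfyui_reachable(base_url, open_fn=urllib.request.urlopen):
    """Return True if a ComfyUI server answers on base_url."""
    try:
        with open_fn(base_url, timeout=3) as resp:
            return 200 <= resp.status < 300
    except OSError:  # URLError is a subclass of OSError
        return False

# comfyui_reachable("http://127.0.0.1:8188")
```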

🔧 Other Issues

  • If Trellis2 doesn't load in ComfyUI, your system is probably blocking the symlink created for its environment. Run the ComfyUI server as Administrator so the symlink can be created.
  • Missing models: Run Verify_UpdateModels.bat
  • Missing nodes: ComfyUI Manager → Install Missing → Restart
  • Images not showing: Ensure Image Editor is visible
  • Debug: Window → Toggle System Console, watch [OpenBlender] / [PostProcess]

MCP API Reference

MCP Server endpoints and exposed Blender capabilities.

  • GET / → Server status
  • GET /sse → SSE connection for the MCP protocol
  • POST /message → Send messages to the server

Exposed capabilities: Scene Manipulation, Object Creation, Object Modification, Material Editing, Rendering Control, Animation Tools, Viewport Operations
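MCP traffic is JSON-RPC 2.0, so the payloads a client posts to the server look like this (a sketch; `tools/list` is a standard MCP method, but the available tools depend on the server):

```python
import json

def jsonrpc_request(method, params=None, req_id=1):
    """Serialise a JSON-RPC 2.0 request as used by the MCP protocol."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params or {},
    })
```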

Version History

v0.30.1 Bug Fix & Quality Update
  • Improved IMG to VID workflow base settings
  • Vision / Keyframe Image Analysis
  • AI Video Prompt from Keyframes
  • Batch Generation — Continuous Keyframes
  • Automatic Prompt Enhancement
  • Motion-Focused Video Prompts
  • Auto Audio Context in Video Prompts
  • Python 3.13 Compatibility (imghdr removed)
  • Proper Logging (32 prints → logging module)
  • MCP Skill Rewrite
  • Remix MCP Tool
  • Image Capture Tools (Editor & Viewport)
v0.30.0 Feature Update
  • TXT to IMG Panel
  • Randomise Seed — TXT to IMG
  • Chat Auto-Display — TXT to IMG & HDRI
  • "Sent to ComfyUI" Agent Response
  • Chat UI — Removed Role Labels
  • Chat UI — Text Vertical Centering Fix
  • ComfyUI Manager in Installer
v0.29.0 Major Feature Update
  • Instant WS Completion
  • Workflow Config JSON
  • Correct Output Node Resolution
  • Interrupt Button
  • Queue Position Display
  • Graceful Stop
  • Denoise Hidden for FLUX.2-Klein
  • Send to VID Frame Buttons
  • WS Decompression Fix
  • 15 New MCP Tools (53 → 68 total)
  • Lighting Tools
  • Camera Cinematography Tools
  • Shader Node Graph Tools
  • Generation Pipeline Tools
v0.28.2 Patch
  • Randomise Seed — IMG to IMG
  • HDRI workflow model name correction
v0.28.0 Feature Update
  • TXT to HDRI
  • FLUX.2-Klein HDRI LoRA
  • 360° Panoramic Generation
  • Verify & Update Models Tool
v0.27.0 Feature Update
  • Remix
  • LoRA Support
  • Send to IMG→3D
  • Streaming Chat
  • Chain-of-Thought
v0.26.0 Initial Release
  • MCP Server
  • ComfyUI IMG→IMG, IMG→VID, IMG→3D
  • AI Chat
  • Non-blocking generation
  • Viewport monitoring

Debug

Enable console logging: Window → Toggle System Console — watch for [OpenBlender] and [PostProcess] log entries.