OpenBlender

An addon that brings generative workflows directly into Blender. Generate images, videos, 3D models, and receive AI assistance without leaving Blender.

v0.36.2 · Blender 5.0+ · NVIDIA 24 GB VRAM · 85 GB SSD

Core Features

Eleven modules that cover generation, remix, LoRA, HDRI, rigging, chat, and remote control.

🖼️ IMG to IMG

Generate images with real-time viewport capture. Works in FreeView & CameraView.

  • Manual "Run" or auto "Run on Change" modes
  • Adjustable megapixels, steps, denoise
  • Auto-numbered output in Image Editor
🎬 IMG to VID

Create videos from three keyframe images. Auto-add to Video Sequencer.

  • First / Middle / Last keyframes
  • Configurable frame count & resolution
  • Re-click any keyframe to replace it
🎥 VID / TEXT / IMG to VID

Generate videos from a variety of inputs using LTX 2.3.

  • Text prompt → video
  • Image + prompt → video
  • IC-LoRA depth: viewport camera animation → video (with optional first frame)
🧊 IMG to 3D

Convert images into GLB models via Trellis2. Imported with textures.

  • Import image → Start → 3D model
  • Models downloaded via Blender Preferences → Download Models
💬 AI Chat

Integrated chat via OpenRouter or local LM Studio.

  • Claude, GPT-4, Llama 3, Mistral, Qwen
  • Floating chat button in Blender UI
🔌 MCP Server

HTTP/SSE server (OpenClaw) for remote Blender control.

  • Scene, material, animation, render control
  • Optional API key auth
🎨 Remix

Transform any displayed image from inside the Image Editor. Uses the current image as IMG to IMG input.

  • Image Editor side panel — no mode switching
  • Auto-numbered output: Remix.001, Remix.002…
  • Send result to IMG to 3D with one click
  • Adjustable megapixels, steps, denoise & prompt
🧩 LoRA Support

Add LoRA models to your generation workflows.

  • Open IMG to IMG / IMG to VID in ComfyUI WebUI
  • Add your LoRA node(s) and save the API workflow
  • Your LoRA stack is now applied automatically in Blender
🌐 TXT to HDRI

Generate 360° panoramic HDRI environment maps from text prompts using FLUX.2-Klein + HDRI LoRA.

  • Full equirectangular 360° panoramic output
  • Klein 9B HDRI LoRA — tuned for panoramic scenes
  • Apply directly as World environment in Blender
🦴 TXT to RIG

Generate a humanoid rig with motion from a text prompt.

  • Motion prompt → animated humanoid dummy
  • Auto-imported into scene as a rigged GLB
  • Auto First Person camera setup
  • Triggerable from AI Chat

TXT to IMG

Generate images from text prompts. Result loads automatically into the Image Editor.

  • Z-Image Turbo or FLUX.2-Klein workflows
  • Adjustable steps, width & height
  • Randomise Seed checkbox
  • Triggerable from AI Chat

Key Capabilities

Designed to stay out of your way.

Non-Blocking

Background threads keep Blender responsive

Viewport Integration

FreeView & CameraView, correct aspect ratio

Smart Monitoring

Auto-detect changes, trigger generation

Custom Workflows

Load any ComfyUI JSON workflow

Image Management

Auto-numbered: Result, Result_001…

VSE Integration

Videos auto-add to Video Sequencer

Modular UI

Toggle panels to reduce clutter

SageAttention

Optional install for faster inference

Requirements

What you need before getting started.

🖥️ System

  • Blender 5.0 or later (untested on earlier versions)
  • RTX 4090 recommended
  • Sufficient VRAM for AI models + EEVEE

🔗 ComfyUI Features

  • ComfyUI server running (local or remote)
  • Required for IMG→IMG, IMG→VID, IMG→3D
  • Must be started manually (WebUI not needed)

🌐 MCP Server (Optional)

  • Network access for HTTP (default port: 9876)
  • Optional: API key for authentication

💬 Chat (Optional)

  • OpenRouter: API key + internet
  • LM Studio: Local server running

Installation

Four steps to get everything running.

1

Install the Extension

  1. Download the OpenBlender extension package
  2. Blender → Edit → Preferences → Extensions
  3. Click "Install from Disk", select the .zip
  4. Enable the extension by checking the checkbox
2

Set Up ComfyUI (AIO Installer)

  1. Install Visual C++ Redistributable: aka.ms/vc14/vc_redist.x64.exe
  2. Install Windows PowerShell 7: PowerShell-7.5.4-win-x64.msi
  3. Update your NVIDIA drivers to the latest version
  4. Navigate to the addon directory:
    C:\Users\YourName\AppData\...\Blender\5.0\extensions\user_default\openblender
  5. Run install_comfyui_aio.bat and follow the on-screen instructions — this installs ComfyUI, performance packages (Triton, SageAttention, FlashAttention)
  6. Nodes, dependencies, and AI models are set up from Blender Preferences (see Step 3 below)
3

Set Up in Blender Preferences

Open Edit → Preferences → Add-ons → OpenBlender and run these five actions in order:

  1. Verify ComfyUI Installation — set your ComfyUI path (e.g. D:\ComfyUI_windows_portable)
  2. Install Nodes — clones all required ComfyUI custom nodes
  3. Install Dependencies — opens a console and installs Python packages into ComfyUI; wait for "Installation complete!"
  4. Download Models — install the models you want; they download into the correct ComfyUI folders automatically
  5. Configure — set your AI provider (OpenRouter API key or LM Studio URL)
4

Update

  1. Check the addon's Gumroad page for new versions and download updates
  2. Replace the content of Blender\5.0\extensions\user_default\openblender
  3. Open Blender → Preferences → Add-ons → OpenBlender → Verify ComfyUI Path, then run Install Nodes / Dependencies / Models if a new update requires them or anything is missing

Usage

Step-by-step guides for each module.

MCP Server


Remote Blender control via HTTP/SSE.

  1. Open sidebar panel (N key → OpenBlender tab)
  2. Click "Start Server"
  3. Connect MCP client to http://localhost:9876/sse
  4. Use API key if authentication is enabled
  • URL: http://localhost:9876/sse
  • Protocol: Server-Sent Events (SSE)
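
As a minimal sketch, the client side of that connection only needs standard SSE request headers; the `X-API-Key` header name below is an assumption for illustration, not a documented part of the addon's protocol:

```python
def sse_headers(api_key=None):
    """Headers for opening the MCP SSE stream.

    "Accept: text/event-stream" is the standard SSE request header;
    the "X-API-Key" name is hypothetical -- check the addon's MCP
    preferences for the actual authentication mechanism.
    """
    headers = {"Accept": "text/event-stream"}
    if api_key:
        headers["X-API-Key"] = api_key
    return headers

# Usage with the standard library only (server must be running):
# import urllib.request
# req = urllib.request.Request("http://localhost:9876/sse",
#                              headers=sse_headers("my-key"))
# stream = urllib.request.urlopen(req)
```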

AI Chat


Integrated AI assistant inside Blender.

  1. Enable the Chat panel in addon Preferences
  2. Configure your LLM provider (OpenRouter or LM Studio)
  3. Click the floating chat button in the Blender UI

Vision / Keyframe Analysis

The AI chat supports vision-capable models (e.g. Kimi k2.5, GPT-4o, Claude) for analysing images directly inside Blender.

Video Prompt from Keyframes:

  1. Generate or assign your three keyframe images (First, Middle, Last) via the IMG to VID or Remix panel
  2. Open the floating chat and type: generate a video prompt from keyframes images
  3. The AI calls get_vid_keyframe_images, receives the three images, analyses the visual sequence, and writes a detailed motion prompt describing scene content, transitions, mood, lighting, and optionally sound design

Requires a vision-capable model. The keyframe images are sent as base64 JPEG (max 512px) to stay within context limits.
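
The resize-and-encode step can be sketched in a few lines; the exact resampling is up to the addon, but the 512px cap and base64 JPEG transport are as documented above:

```python
import base64

def fit_within(width, height, max_side=512):
    """Scale dimensions so the longest side is at most max_side,
    preserving aspect ratio (a sketch of the documented 512px cap)."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height
    scale = max_side / longest
    return round(width * scale), round(height * scale)

def to_base64(jpeg_bytes):
    """Encode raw JPEG bytes as the base64 string sent to the model."""
    return base64.b64encode(jpeg_bytes).decode("ascii")

# A 1920x1080 keyframe would be sent at 512x288.
```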

Batch Generation

Request multiple images at once for keyframe sequences:

  • "Generate 3 continuous keyframes of Batman — first, middle, and end frames"
  • "Create 5 variations of a mountain landscape"

The AI generates coherent, continuous prompts showing temporal progression and calls the generation tool once per image. Each frame logically follows the previous with evolving poses, camera movement, and lighting. All prompts are automatically enhanced with cinematic detail.

Note: The AI triggers each image sequentially and will confirm after all requested images have been sent to ComfyUI.

Automatic Prompt Enhancement

All generation requests are automatically enhanced without you needing to write detailed prompts:

  • Subject specifics: appearance, pose, action, costume, expression
  • Camera work: angle, lens, motion, depth of field
  • Lighting: key light quality, fill, rim, time of day, atmosphere
  • Environment: location, weather, background elements
  • Style: cinematic look, color grading, technical quality

Example — You say: "Batman" → AI generates: "Batman in tactical armored suit on rain-soaked Gotham rooftop, low-angle heroic shot with 35mm lens, dramatic rim lighting through storm clouds, volumetric fog, wet cape reflecting neon signs, cinematic color grading, 8k photorealistic"

IMG to IMG


Generate images from your viewport.

🎯 Manual Mode

  1. Set mode to "Run" in sidebar
  2. Adjust parameters → click "Generate"

⚡ On-Change Mode

  1. Set mode to "Run (on change)" → click "Start"
  2. Modify viewport — generation triggers when inputs are released
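
The on-change trigger behaves like a debounce: each viewport edit resets a short timer, and generation fires only once input settles. A minimal sketch of that pattern (not the addon's actual implementation):

```python
import threading

class Debouncer:
    """Invoke `callback` once, `delay` seconds after the last trigger."""

    def __init__(self, delay, callback):
        self.delay = delay
        self.callback = callback
        self._timer = None

    def trigger(self):
        # Each new trigger cancels the pending run and restarts the clock.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.delay, self.callback)
        self._timer.daemon = True
        self._timer.start()
```

With the addon's current 0.1 s debounce, a burst of viewport tweaks produces a single generation rather than one per input event.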

⚙️ Parameters

  • Prompt: describe the desired output
  • Megapixels: resolution, 0.1 – 1.0
  • Steps: inference steps, 1 – 12
  • Denoise: 0.0 – 1.0 (Z-Image Turbo only; FLUX.2-Klein is fixed at 1)

Output: Images appear in Image Editor as Result, Result_001, Result_002
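
The Megapixels setting maps to pixel dimensions roughly as follows; snapping to multiples of 8 is an assumption typical of diffusion backends, not a documented detail of the addon:

```python
import math

def dims_from_megapixels(view_w, view_h, megapixels, multiple=8):
    """Pick an output width/height that matches the viewport aspect
    ratio with approximately `megapixels` million pixels total."""
    aspect = view_w / view_h
    height = math.sqrt(megapixels * 1_000_000 / aspect)
    width = height * aspect
    # Snap to a multiple (a common requirement for diffusion models).
    snap = lambda v: max(multiple, int(round(v / multiple)) * multiple)
    return snap(width), snap(height)

# A 16:9 viewport at 1.0 MP yields roughly 1336 x 752.
```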

IMG to VID


Generate videos from three keyframe images.

  1. Click "First Frame" → start view
  2. Click "Middle Frame" → transition view
  3. Click "Last Frame" → end view
  4. Set Frame Count and Megapixels
  5. Click "Generate Video"

All three keyframes are required; you can re-click any keyframe to replace it. The video auto-adds to the Video Sequencer if the scene is selected.

AI Video Prompt Generation

Once your three keyframes are set, ask the AI Chat to "generate a video prompt from keyframes images". The AI will visually analyse the First → Middle → Last sequence and write a cinematic motion prompt describing transitions, mood, lighting, and sound design context.

VID / TEXT / IMG to VID


Generate videos from text, image, or video inputs using LTX 2.3.

🖊️ Text to Video

  1. Open the OpenBlender sidebar panel in the Image Editor (N key → OpenBlender tab)
  2. Select the TXT to VID panel
  3. Enter a text prompt describing the scene and motion
  4. Click "Generate"

🎞️ IC-LoRA Depth Control

The IC-LoRA depth model gives LTX 2.3 structural guidance for consistent motion across frames. It is downloaded via Blender Preferences → Download Models → LTX 2.3.

  • Prompt: describe the scene, camera motion, and mood
  • Steps: inference steps
  • Frame Count: output video length in frames
  • Resolution: the width & height from Blender's render settings define the aspect ratio

IMG to 3D


Convert 2D images to 3D GLB models via Trellis2.

  1. Import an image
  2. Click "Start"
  3. 3D model imports into Blender as GLB, with textures

Requires Trellis2 nodes in ComfyUI (installed via Blender Preferences → Install Nodes). Models are downloaded via Download Models → IMG to 3D.

Remix


Transform images from inside the Image Editor side panel.

  1. Open any image in the Image Editor
  2. Press N → find the OpenBlender tab in the sidebar
  3. Set your Prompt, Megapixels, Steps, and Denoise strength
  4. Click "Generate Remix"
  5. Result appears as Remix.001, Remix.002… in the Image Editor
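
The Remix.001, Remix.002… sequence follows Blender's datablock naming convention; a sketch of the logic (the addon may handle gaps in the sequence differently):

```python
def next_remix_name(existing):
    """First free name in the Blender-style Remix.001, Remix.002... series.

    `existing` is the set of image names already present in the file.
    """
    n = 1
    while f"Remix.{n:03d}" in existing:
        n += 1
    return f"Remix.{n:03d}"
```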

📤 Send to IMG to 3D

Click "Send to IMG to 3D" to push the currently displayed image to the IMG to 3D input.

🎬 Send to IMG to VID Frame

Use "Send to First / Middle / Last Frame" to assign the current Image Editor image directly to an IMG to VID keyframe slot.

🤖 MCP / AI Chat

Remix is also available via the MCP server and AI Chat using the trigger_remix tool. Agents can transform any displayed image remotely with a text prompt.

📸 Image Capture

Request the current state as an image:

  • "Send me the current image" → Agent calls capture_image_editor
  • "Show me the viewport" → Agent calls capture_viewport

Both return base64 JPEG images that the agent can display in the chat.

  • Prompt: describe the remixed result
  • Megapixels: resolution, 0.1 – 4.0
  • Steps: inference steps, 1 – 20
  • Denoise: 0.0 – 1.0 (Z-Image Turbo only)

TXT to HDRI


Generate 360° equirectangular HDRI environment maps from text prompts using FLUX.2-Klein + Klein 9B HDRI LoRA.

  1. Set viewport shading to Material Preview or Rendered so the environment is visible
  2. In Properties → World, ensure a World data block exists — click "New" if the slot is empty
  3. Open the OpenBlender sidebar panel (N key → OpenBlender tab)
  4. Enter a Prompt describing the environment
  5. Set Megapixels and Steps
  6. Click "Generate HDRI"
  7. The generated panorama is automatically applied to the scene World as an environment texture
  • Prompt: describe the environment (sky, studio, nature…)
  • Megapixels: resolution, 0.5 – 4.0+
  • Steps: inference steps, 1 – 20
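
Since equirectangular panoramas use a fixed 2:1 aspect ratio, the Megapixels value determines both dimensions. A sketch of the mapping (the exact rounding the addon uses is an assumption):

```python
import math

def hdri_dims(megapixels):
    """Width/height for a 2:1 equirectangular panorama of ~megapixels MP."""
    height = int(math.sqrt(megapixels * 1_000_000 / 2))
    return 2 * height, height

# 0.5 MP -> (1000, 500); 2.0 MP -> (2000, 1000)
```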

The required models are downloaded from Blender Preferences → Add-ons → OpenBlender → Download Models → TXT to HDRI.

TXT to RIG


Generate a humanoid dummy rig with motion from a text prompt using HY-Motion. The output is a generic animated skeleton — not a character with specific appearance or textures.

  1. In Blender, open the sidebar (N) → OpenBlender tab → TXT to RIG panel
  2. Enter a text prompt describing the motion (e.g. "walking forward", "jumping", "waving hand")
  3. Click "Generate" — HY-Motion runs in ComfyUI and produces an animated GLB
  4. The humanoid dummy rig is automatically imported into the scene

Tips

  • Prompts should describe motion only — HY-Motion outputs a generic humanoid skeleton regardless of character description
  • Use it as an animation base: retarget the motion onto your own character rig
  • The AI Chat can trigger generation via the trigger_txt_to_rig MCP tool
  • Models are downloaded from Blender Preferences

LoRA Support


Add LoRA models to your generation workflows.

  1. Open your IMG to IMG or IMG to VID workflow in ComfyUI WebUI
  2. Add your LoRA node(s) to the workflow graph
  3. Save the API workflow as .json
  4. Your LoRA stack is now applied automatically in Blender on every generation

Any ComfyUI-compatible LoRA works. Stack multiple LoRAs in a single workflow. Find models at CivitAI.
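
A saved API workflow is a plain JSON dict keyed by node id, so the LoRA stack is just extra nodes in that file. A sketch of what the addon picks up, using ComfyUI's API format (the node ids and LoRA filename here are made-up examples):

```python
import json

# Fragment of a saved API workflow containing a LoraLoader node.
# ComfyUI's API format: {node_id: {"class_type": ..., "inputs": ...}}.
workflow_json = """
{
  "4": {"class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "model.safetensors"}},
  "10": {"class_type": "LoraLoader",
         "inputs": {"lora_name": "my_style.safetensors",
                    "strength_model": 0.8,
                    "strength_clip": 0.8,
                    "model": ["4", 0],
                    "clip": ["4", 1]}}
}
"""

def list_loras(workflow):
    """Return (lora_name, strength_model) for every LoraLoader node."""
    return [(n["inputs"]["lora_name"], n["inputs"]["strength_model"])
            for n in workflow.values()
            if n.get("class_type") == "LoraLoader"]

workflow = json.loads(workflow_json)
print(list_loras(workflow))  # [('my_style.safetensors', 0.8)]
```

Because the addon replays the saved API JSON as-is, every LoraLoader node you wire in (stacked in series off the checkpoint's model/clip outputs) is applied on each generation.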

Configuration

Access via Edit → Preferences → Add-ons → OpenBlender. ComfyUI must be started manually.

🔌 MCP Server

  • Port: 9876
  • API Key: optional
  • Auto Start: on/off
  • Works with: OpenClaw, Claude Code, OpenCode, etc.

🤖 LLM Provider

  • Provider: OpenRouter / LM Studio
  • OpenRouter API Key: your-key
  • OpenRouter Model: moonshotai/kimi-k2.5
  • LM Studio URL: 127.0.0.1:1234

Tips & Troubleshooting

⚡ Performance

  • Unload LM Studio models before running ComfyUI workflows to free VRAM
  • VRAM issue in IMG to VID? In ComfyUI/comfy/supported_models.py, change memory_usage_factor = 0.061 to 0.2

🏗️ Best Practices

  • Disable unused panels in Preferences to reduce N-panel clutter
  • Monitor ComfyUI consoles for progress
  • Test workflows in ComfyUI first before using through Blender

🔧 Connection Issues

  • Start ComfyUI manually before using generation features
  • Verify Server URL matches running instance (default 127.0.0.1:8188)
  • Visit http://127.0.0.1:8188 in browser to confirm
  • Check firewall for port 8188
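
A quick stdlib check that something is listening on the ComfyUI port can narrow down whether the problem is the server or the connection (host and port below are the documented defaults):

```python
import socket

def comfyui_reachable(host="127.0.0.1", port=8188, timeout=2.0):
    """Return True if something is accepting connections on the
    ComfyUI port, False otherwise (server down, firewall, wrong URL)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```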

🔧 Other Issues

  • If Trellis2 doesn't load in ComfyUI, your system is probably blocking the symlink created for its environment; run the ComfyUI server as Administrator to allow the symlink to be created.
  • Missing models: Blender → Preferences → Add-ons → OpenBlender → Download Models
  • Images not showing: Ensure Image Editor is visible

MCP API Reference

MCP Server endpoints and exposed Blender capabilities.

  • GET / : server status
  • GET /sse : SSE connection for the MCP protocol
  • POST /message : send messages to the server

Exposed capabilities: Scene Manipulation · Object Creation · Object Modification · Material Editing · Rendering Control · Animation Tools · Viewport Operations

Version History

v0.36.2 · IMG to IMG Reduce Latency
Direct File Copy Upload (no HTTP overhead) · Debounce 1.0s → 0.1s
v0.36.1 · Video Pipeline & MCP Hardening
TXT to VID · 1x IMG to VID · 3x IMG to VID (renamed) · VID to VID · Unused Models Checker · ComfyUI Progress Bar · MCP Localhost Default · Allow Remote Access Preference
v0.31.5 · Model Paths & Install UX
Custom Model Paths · Custom Stable Forks · Per-file Download Progress · Addon Settings Reorganization
v0.31.0 · Feature & Architecture Update
TXT to RIG · SKILL Improvements · Codebase Reorganisation · Light Installer · Nodes/Deps/Models download from addon settings
v0.30.1 · Quality Update
Improved IMG to VID Workflow base settings · Vision / Keyframe Image Analysis · AI Video Prompt from Keyframes · Batch Generation (Continuous Keyframes) · Automatic Prompt Enhancement · Motion-Focused Video Prompts · MCP Skill Rewrite · Remix MCP Tool · Image Capture Tools (Editor & Viewport)
v0.30.0 · Feature Update
TXT to IMG Panel · Randomise Seed (TXT to IMG) · Chat Auto-Display (TXT to IMG & HDRI) · Sent to ComfyUI Agent Response · Chat UI: Removed Role Labels · Chat UI: Text Vertical Centering Fix · ComfyUI Manager in Installer
v0.29.0 · Feature Update
Instant WS Completion · Workflow Config JSON · Correct Output Node Resolution · Interrupt Button · Queue Position Display · Graceful Stop · Denoise Hidden for Flux-KLEIN · Send to VID Frame Buttons · WS Decompression Fix · 15 New MCP Tools (53 → 68 total) · Lighting Tools · Camera Cinematography Tools · Shader Node Graph Tools · Generation Pipeline Tools
v0.28.2 · Patch
Randomise Seed (IMG to IMG) · HDRI workflow model name correction
v0.28.0 · Feature Update
TXT to HDRI · FLUX.2-Klein HDRI LoRA · 360° Panoramic Generation · Verify & Update Models Tool
v0.27.0 · Feature Update
Remix · LoRA Support · Send to IMG→3D · Streaming Chat · Chain-of-Thought
v0.26.0 · Initial Release
MCP Server · ComfyUI IMG→IMG · IMG→VID · IMG→3D · AI Chat · Non-Blocking · Viewport Monitoring

Debug

Enable console logging: Window → Toggle System Console — watch for [OpenBlender] and [PostProcess] log entries.