An addon that brings generative workflows directly into Blender. Generate images, videos, 3D models, and receive AI assistance without leaving Blender.
Ten modules that cover generation, remix, LoRA, HDRI, rigging, chat, and remote control.
Generate images with real-time viewport capture. Works in FreeView & CameraView.
Create videos from three keyframe images. Auto-add to Video Sequencer.
Generate videos from a variety of inputs using LTX 2.3.
Convert images into GLB models via Trellis2. Imported with textures.
Integrated chat via OpenRouter or local LM Studio.
HTTP/SSE server (OpenClaw) for remote Blender control.
Transform any displayed image from inside the Image Editor. Uses the current image as IMG to IMG input.
Add LoRA models to your generation workflows.
Generate 360° panoramic HDRI environment maps from text prompts using FLUX.2-Klein + HDRI LoRA.
Generate a humanoid rig with motion from a text prompt.
Generate images from text prompts. Result loads automatically into the Image Editor.
Designed to stay out of your way.
Background threads keep Blender responsive
FreeView & CameraView, correct aspect ratio
Auto-detect changes, trigger generation
Load any ComfyUI JSON workflow
Auto-numbered: Result, Result_001…
Videos auto-add to Video Sequencer
Toggle panels to reduce clutter
Optional install for faster inference
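The "Load any ComfyUI JSON workflow" feature above rests on ComfyUI's HTTP API: an API-format workflow is submitted as JSON to the /prompt endpoint. A sketch of that request (this mirrors ComfyUI's public API, not the addon's internal code; the file name is a placeholder):

```python
import json
import urllib.request

def build_prompt_request(workflow: dict, server: str = "http://127.0.0.1:8188"):
    """Wrap an API-format ComfyUI workflow in the payload /prompt expects."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"{server}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Usage: export a workflow via ComfyUI's "Save (API Format)", then:
# with open("workflow_api.json") as f:            # placeholder file name
#     req = build_prompt_request(json.load(f))
# urllib.request.urlopen(req)                     # queues the job on ComfyUI
```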
What you need before getting started.
Four steps to get everything running.
Edit → Preferences → Extensions: install the addon .zip. It is installed to C:\Users\YourName\AppData\...\Blender\5.0\extensions\user_default\openblender.
Run install_comfyui_aio.bat and follow the on-screen instructions; this installs ComfyUI and the performance packages (Triton, SageAttention, FlashAttention) to your chosen location (e.g. D:\ComfyUI_windows_portable).
Open Edit → Preferences → Add-ons → OpenBlender and run these four actions in order: Verify ComfyUI Path → Install Nodes → Install Dependencies → Download Models.
After a new update that brings new features, go to Preferences → Add-ons → OpenBlender → Verify ComfyUI Path → Install Nodes / Dependencies / Models if missing.
Step-by-step guides for each module.
Remote Blender control via HTTP/SSE.
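The server streams its responses as Server-Sent Events. OpenClaw's actual event names and payloads are not documented here, but the SSE wire format itself is standard: a minimal parser sketch for the `event:`/`data:` lines a client reads from the stream.

```python
def parse_sse(stream: str):
    """Parse Server-Sent Events wire format into (event, data) pairs.

    Events are separated by a blank line; each has an optional `event:` line
    and one or more `data:` lines (multi-line data is joined with newlines).
    Sketch only: the SSE spec strips a single leading space, not all whitespace.
    """
    events, name, data = [], "message", []
    for line in stream.splitlines():
        if line.startswith("event:"):
            name = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            events.append((name, "\n".join(data)))
            name, data = "message", []
    if data:  # stream ended without a trailing blank line
        events.append((name, "\n".join(data)))
    return events
```

A client would read the HTTP response from the SSE endpoint line by line and feed it through a parser like this.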
Open the panel in the sidebar (N key → OpenBlender tab). The server listens at http://localhost:9876/sse.
Integrated AI assistant inside Blender.
The AI chat supports vision-capable models (e.g. Kimi k2.5, GPT-4o, Claude) for analysing images directly inside Blender.
Video Prompt from Keyframes:
Ask the chat to "generate a video prompt from keyframes images". The AI calls get_vid_keyframe_images, receives the three images, analyses the visual sequence, and writes a detailed motion prompt describing scene content, transitions, mood, lighting, and optionally sound design. This requires a vision-capable model. The keyframe images are sent as base64 JPEG (max 512px) to stay within context limits.
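The exact resampling the addon performs is not shown here, but the dimension math behind a "max 512px" cap is straightforward: scale the longer side down to 512 while preserving aspect ratio, and never scale up. A sketch:

```python
def fit_within(width: int, height: int, max_side: int = 512):
    """Return (w, h) scaled down so the longer side is <= max_side,
    preserving aspect ratio. Images already small enough pass through."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height
    scale = max_side / longest
    return max(1, round(width * scale)), max(1, round(height * scale))
```

For a 1920x1080 viewport capture this yields 512x288 before JPEG encoding, keeping the base64 payload well within model context limits.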
Request multiple images at once for keyframe sequences:
"Generate 3 continuous keyframes of Batman — first, middle, and end frames""Create 5 variations of a mountain landscape"The AI generates coherent, continuous prompts showing temporal progression and calls the generation tool once per image. Each frame logically follows the previous with evolving poses, camera movement, and lighting. All prompts are automatically enhanced with cinematic detail.
Note: The AI triggers each image sequentially and will confirm after all requested images have been sent to ComfyUI.
All generation requests are automatically enhanced without you needing to write detailed prompts:
Example — You say: "Batman" → AI generates: "Batman in tactical armored suit on rain-soaked Gotham rooftop, low-angle heroic shot with 35mm lens, dramatic rim lighting through storm clouds, volumetric fog, wet cape reflecting neon signs, cinematic color grading, 8k photorealistic"
Generate images from your viewport.
Output: Images appear in Image Editor as Result, Result_001, Result_002…
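The addon's own collision handling is not spelled out here, but the naming scheme above (Result, Result_001, Result_002…) implies picking the first free suffix. A sketch of that logic, assuming a simple linear scan over existing image names:

```python
def next_result_name(existing, base="Result"):
    """Pick the next free name in the Result, Result_001, Result_002… scheme.

    `existing` is any collection of names already present in the Image Editor.
    """
    if base not in existing:
        return base
    n = 1
    while f"{base}_{n:03d}" in existing:
        n += 1
    return f"{base}_{n:03d}"
```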
Generate videos from three keyframe images.
All three keyframes are required; re-click any slot to replace its image. The video auto-adds to the Video Sequencer if the scene is selected.
Once your three keyframes are set, ask the AI Chat to "generate a video prompt from keyframes images". The AI will visually analyse the First → Middle → Last sequence and write a cinematic motion prompt describing transitions, mood, lighting, and sound design context.
Generate videos from text, image, or video inputs using LTX 2.3.
Open the panel in the sidebar (N key → OpenBlender tab). The IC-LoRA depth model gives LTX 2.3 structural guidance for consistent motion across frames. It is downloaded via Blender Preferences → Download Models → LTX 2.3.
Convert 2D images to 3D GLB models via Trellis2.
Requires Trellis2 nodes in ComfyUI (installed via Blender Preferences → Install Nodes). Models are downloaded via Download Models → IMG to 3D.
Transform images from inside the Image Editor side panel.
Press N and find the OpenBlender tab in the sidebar. Results appear as Remix.001, Remix.002… in the Image Editor. Click "Send to IMG to 3D" to push the currently displayed image to the IMG to 3D input.
Use "Send to First / Middle / Last Frame" to assign the current Image Editor image directly to an IMG to VID keyframe slot.
Remix is also available via the MCP server and AI Chat using the trigger_remix tool. Agents can transform any displayed image remotely with a text prompt.
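MCP tools are invoked over JSON-RPC 2.0 with the standard `tools/call` method. A sketch of the request an agent would send for trigger_remix; the argument name `prompt` is an assumption here, since the tool's real schema comes from the server's tools/list response.

```python
import itertools
import json

_ids = itertools.count(1)  # monotonically increasing JSON-RPC request ids

def mcp_tool_call(name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, as the MCP spec defines it."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Hypothetical argument name, for illustration only:
# mcp_tool_call("trigger_remix", {"prompt": "make it watercolor"})
```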
Request the current state as an image:
"Send me the current image" → Agent calls capture_image_editor"Show me the viewport" → Agent calls capture_viewportBoth return base64 JPEG images that the agent can display in the chat.
Generate 360° equirectangular HDRI environment maps from text prompts using FLUX.2-Klein + Klein 9B HDRI LoRA.
Open the panel in the sidebar (N key → OpenBlender tab). The required models are downloaded from Blender Preferences → Add-ons → OpenBlender → Download Models → TXT to HDRI.
Generate a humanoid dummy rig with motion from a text prompt using HY-Motion. The output is a generic animated skeleton — not a character with specific appearance or textures.
Also available via the trigger_txt_to_rig MCP tool.
Add LoRA models to your generation workflows.
LoRA workflows are standard ComfyUI .json files. Any ComfyUI-compatible LoRA works. Stack multiple LoRAs in a single workflow. Find models at CivitAI.
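Stacking works by chaining LoraLoader nodes in the workflow JSON: each node takes the previous node's MODEL and CLIP outputs as its inputs. A minimal fragment in ComfyUI's API format; node ids and LoRA file names are placeholders:

```python
# Two stacked LoraLoader nodes. Node "11" chains off node "10"'s outputs,
# which in turn chains off a base checkpoint loader (node "4", not shown).
lora_stack = {
    "10": {
        "class_type": "LoraLoader",
        "inputs": {
            "lora_name": "style_a.safetensors",   # placeholder file name
            "strength_model": 0.8,
            "strength_clip": 0.8,
            "model": ["4", 0],   # base checkpoint's MODEL output
            "clip": ["4", 1],    # base checkpoint's CLIP output
        },
    },
    "11": {
        "class_type": "LoraLoader",
        "inputs": {
            "lora_name": "style_b.safetensors",   # placeholder file name
            "strength_model": 0.6,
            "strength_clip": 0.6,
            "model": ["10", 0],  # takes the first LoRA's outputs…
            "clip": ["10", 1],   # …so the two LoRAs stack
        },
    },
}
```

Downstream samplers then reference node "11" instead of the checkpoint loader, so both LoRAs apply.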
Access via Edit → Preferences → Add-ons → OpenBlender. ComfyUI must be started manually.
In ComfyUI/comfy/supported_models.py, change memory_usage_factor = 0.061 to 0.2.
ComfyUI must be running (127.0.0.1:8188); open http://127.0.0.1:8188 in a browser to confirm.
If Trellis2 doesn't load in ComfyUI, your system is probably blocking the symlink created for its environment; run the ComfyUI server as Administrator to allow the symlink to be created.
Missing models can be downloaded from Preferences → Add-ons → OpenBlender → Download Models.
MCP Server endpoints and exposed Blender capabilities.
Enable console logging: Window → Toggle System Console — watch for [OpenBlender] and [PostProcess] log entries.