An addon that brings generative workflows directly into Blender. Generate images, videos, 3D models, and receive AI assistance without leaving Blender.
Nine modules that cover generation, remix, LoRA, HDRI, chat, and remote control.
Generate images with real-time viewport capture. Works in FreeView & CameraView.
Create videos from three keyframe images. Auto-add to Video Sequencer.
Generate images from text prompts. Result loads automatically into the Image Editor.
Convert images into GLB models via Trellis2. Imported with textures.
Integrated chat via OpenRouter or local LM Studio.
HTTP/SSE server (OpenClaw) for remote Blender control.
Transform any displayed image from inside the Image Editor. Uses the current image as IMG to IMG input.
Add LoRA models to your generation workflows.
Generate 360° panoramic HDRI environment maps from text prompts using FLUX.2-Klein + HDRI LoRA.
Designed to stay out of your way.
Background threads keep Blender responsive
FreeView & CameraView, correct aspect ratio
Auto-detect changes, trigger generation
Load any ComfyUI JSON workflow
Auto-numbered: Result, Result_001…
Videos auto-add to Video Sequencer
Toggle panels to reduce clutter
Optional install for faster inference
What you need before getting started.
Follow these five steps to get everything running.
1. Install the addon: Edit → Preferences → Extensions, then install the downloaded .zip. The addon unpacks to C:\Users\YourName\AppData\...\Blender\5.0\extensions\user_default\openblender.
2. Run install_comfyui_aio.bat and follow the on-screen instructions — this installs ComfyUI, required nodes, dependencies (SageAttention / Flash Attention), and all models automatically.
3. Test the bundled workflows in the ComfyUI WebUI (ComfyUI is installed under Blender\5.0\extensions\user_default\openblender\ComfyUI) — if they run and produce output, you're ready to use OpenBlender. This is a one-time verification; you won't need to open the WebUI again unless you want to edit workflows (add LoRAs, swap models for .gguf quants, etc.).
4. (Optional) For local AI chat, start LM Studio so its server is reachable at http://127.0.0.1:1234.
5. Enable the addon under Edit → Preferences → Add-ons → OpenBlender and confirm the ComfyUI server URL (http://127.0.0.1:8188). If models are missing, run Verify_UpdateModels.bat from Blender\5.0\extensions\user_default\openblender to check for missing or new models and auto-download them.

Step-by-step guides for each module.
Remote Blender control via HTTP/SSE.
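The server streams events over standard Server-Sent Events. As a hedged sketch of what a client must do with that stream (the event names OpenClaw actually emits are not documented here, so none are assumed), here is a minimal parser for SSE framing:

```python
def parse_sse(stream_text: str):
    """Parse a raw SSE stream into (event, data) tuples.

    Follows standard SSE framing: "event:" and "data:" fields,
    with a blank line terminating each event. Multi-line data
    fields are joined with newlines, per the spec.
    """
    events = []
    event, data_lines = "message", []
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "":  # blank line closes the current event
            if data_lines:
                events.append((event, "\n".join(data_lines)))
            event, data_lines = "message", []
    return events
```

In practice an agent framework (or any SSE client library) handles this framing for you; the sketch only shows what travels over the wire.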
Open the panel (N key → OpenBlender tab). Remote agents connect to http://[IP]:9876/sse.

Integrated AI assistant inside Blender.
The AI chat supports vision-capable models (e.g. Kimi k2.5, GPT-4o, Claude) for analysing images directly inside Blender.
Video Prompt from Keyframes:
Ask the chat to "generate a video prompt from keyframes images". The AI calls get_vid_keyframe_images, receives the three images, analyses the visual sequence, and writes a detailed motion prompt describing scene content, transitions, mood, lighting, and optionally sound design. Requires a vision-capable model. The keyframe images are sent as base64 JPEG (max 512px) to stay within context limits.
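The 512px cap and base64 encoding mentioned above can be sketched as follows; the helper names are hypothetical, and the arithmetic simply preserves aspect ratio while bounding the longest side:

```python
import base64

MAX_SIDE = 512  # per the docs: keyframes are capped at 512px

def fit_within(width: int, height: int, max_side: int = MAX_SIDE):
    """Return (width, height) scaled so the longest side is at most
    max_side, preserving aspect ratio. No-op if already small enough."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height
    scale = max_side / longest
    return max(1, round(width * scale)), max(1, round(height * scale))

def encode_jpeg_b64(jpeg_bytes: bytes) -> str:
    """Base64-encode already-compressed JPEG bytes for the chat payload."""
    return base64.b64encode(jpeg_bytes).decode("ascii")
```

A 1024×512 render, for example, would be sent at 512×256 before encoding.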
Request multiple images at once for keyframe sequences:
"Generate 3 continuous keyframes of Batman — first, middle, and end frames""Create 5 variations of a mountain landscape"The AI generates coherent, continuous prompts showing temporal progression and calls the generation tool once per image. Each frame logically follows the previous with evolving poses, camera movement, and lighting. All prompts are automatically enhanced with cinematic detail.
Note: The AI triggers each image sequentially and will confirm after all requested images have been sent to ComfyUI.
All generation requests are automatically enhanced without you needing to write detailed prompts:
Example — You say: "Batman" → AI generates: "Batman in tactical armored suit on rain-soaked Gotham rooftop, low-angle heroic shot with 35mm lens, dramatic rim lighting through storm clouds, volumetric fog, wet cape reflecting neon signs, cinematic color grading, 8k photorealistic"
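As a rough illustration only (in the addon this enhancement is performed by the LLM, and the modifier strings below are invented for the example, not the addon's actual wording), the transformation resembles:

```python
def enhance_prompt(subject: str) -> str:
    """Illustrative sketch of prompt enhancement: wrap a short
    subject in cinematic modifiers. The real addon delegates this
    to the chat model rather than a fixed template."""
    modifiers = [
        "dramatic rim lighting",
        "volumetric fog",
        "cinematic color grading",
        "8k photorealistic",
    ]
    return f"{subject}, " + ", ".join(modifiers)
```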
Generate images from your viewport.
Output: Images appear in Image Editor as Result, Result_001, Result_002…
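The Result, Result_001, Result_002… scheme above is straightforward collision-avoiding numbering; a sketch of the logic (the helper itself is illustrative, only the naming pattern comes from the docs):

```python
def next_result_name(existing: set, base: str = "Result") -> str:
    """Return the next free name in the sequence
    Result, Result_001, Result_002, ... given the names
    already present in the Image Editor."""
    if base not in existing:
        return base
    n = 1
    while f"{base}_{n:03d}" in existing:
        n += 1
    return f"{base}_{n:03d}"
```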
Generate videos from three keyframe images.
All three keyframes are required. Re-click any slot to replace its image. The video auto-adds to the VSE if enabled.
Once your three keyframes are set, ask the AI Chat to "generate a video prompt from keyframes images". The AI will visually analyse the First → Middle → Last sequence and write a cinematic motion prompt describing transitions, mood, lighting, and sound design context.
Convert 2D images to 3D GLB models via Trellis2.
Requires the Trellis2 nodes in ComfyUI. The first run takes ~2 minutes to download models.
Transform images from inside the Image Editor side panel.
Press N and find the OpenBlender tab in the sidebar. Remix results appear as Remix.001, Remix.002… in the Image Editor. Click "Send to IMG to 3D" to push the currently displayed image to the IMG to 3D input.
Use "Send to First / Middle / Last Frame" to assign the current Image Editor image directly to an IMG to VID keyframe slot — no need to switch panels.
Remix is also available via the MCP server and AI Chat using the trigger_remix tool. Agents can transform any displayed image remotely with a text prompt.
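A remote agent invokes trigger_remix through a standard MCP tools/call request. The JSON-RPC framing below follows the MCP specification; the argument key "prompt" is an assumption — check the tool's declared schema on the server:

```python
import json

def remix_call(prompt: str, request_id: int = 1) -> str:
    """Build an MCP tools/call request for the trigger_remix tool.
    The "prompt" argument name is assumed, not confirmed by the docs."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "trigger_remix",
            "arguments": {"prompt": prompt},
        },
    }
    return json.dumps(payload)
```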
Request the current state as an image:
"Send me the current image" → Agent calls capture_image_editor"Show me the viewport" → Agent calls capture_viewportBoth return base64 JPEG images that the agent can display in the chat.
Generate 360° equirectangular HDRI environment maps from text prompts using FLUX.2-Klein + Klein 9B HDRI LoRA.
Open the panel (N key → OpenBlender tab). The required model is installed automatically by the installer or the Verify_UpdateModels.bat tool.
Add LoRA models to your generation workflows.
Workflows are plain ComfyUI .json files. Any ComfyUI-compatible LoRA works. Stack multiple LoRAs in a single workflow. Find models at CivitAI.
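Stacking a LoRA means splicing a LoraLoader node into the workflow JSON and rewiring the model/clip connections through it. A sketch against ComfyUI's API-format JSON (node ids here are examples; adapt to your workflow's actual ids):

```python
def add_lora(workflow: dict, after_node: str, lora_name: str,
             strength: float = 1.0) -> dict:
    """Splice a LoraLoader node into a ComfyUI API-format workflow,
    routing the model/clip outputs of `after_node` through the LoRA.
    Sketch only — verify field names against your exported workflow."""
    new_id = str(max((int(k) for k in workflow), default=0) + 1)
    workflow[new_id] = {
        "class_type": "LoraLoader",
        "inputs": {
            "model": [after_node, 0],
            "clip": [after_node, 1],
            "lora_name": lora_name,
            "strength_model": strength,
            "strength_clip": strength,
        },
    }
    # Repoint every consumer of `after_node` to the LoRA's outputs
    for node_id, node in workflow.items():
        if node_id == new_id:
            continue
        for key, val in node.get("inputs", {}).items():
            if isinstance(val, list) and len(val) == 2 and val[0] == after_node:
                node["inputs"][key] = [new_id, val[1]]
    return workflow
```

Calling add_lora again with another lora_name chains a second LoRA after the first, which is how stacking works.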
Access via Edit → Preferences → Add-ons → OpenBlender. ComfyUI must be started manually.
- In ComfyUI/comfy/supported_models.py, change memory_usage_factor = 0.061 to 0.2.
- Make sure ComfyUI is running at 127.0.0.1:8188; open http://127.0.0.1:8188 in a browser to confirm.
- If Trellis2 doesn't load in ComfyUI, your system is probably blocking the symlink created for its environment; run the ComfyUI server as Administrator so the symlink can be created.
- For missing models, run Verify_UpdateModels.bat.
- To debug, use Window → Toggle System Console and watch for [OpenBlender] / [PostProcess] entries.

MCP Server endpoints and exposed Blender capabilities.
Enable console logging: Window → Toggle System Console — watch for [OpenBlender] and [PostProcess] log entries.