Run ComfyUI in the Cloud – The Complete Guide (No Installation Required)

Introduction: What This Guide Covers & Who It’s For

What is ComfyUI?

ComfyUI is a visual programming language for generative AI, built around nodes and "noodles" that connect them. Each node represents a function that transforms inputs into outputs. By chaining nodes together, users create workflows that take inputs like prompts, images, models, and configurations to generate high-quality images and videos.

What makes ComfyUI so useful is its modularity and shareability. A user can spend a week designing a workflow and share it with a teammate, who can tweak inputs, swap out nodes, or refine the process without rebuilding from scratch.

These workflows can also be shared globally, enabling others to generate the same high-quality results in minutes. This level of flexibility and collaboration is unique to node-based systems. Unlike canvas- and prompt-based platforms like Midjourney and Adobe Firefly, which require users to manually redo the entire process each time, ComfyUI allows workflows to be reused, customized, and improved effortlessly - making it a game-changer for AI creativity.

What makes ComfyUI even more powerful is its rapidly growing ecosystem. Thousands of developers are actively building new nodes that integrate the latest models and techniques, unlocking advanced capabilities for generative AI workflows. Nodes can also connect to any API, including closed-source systems like RunwayML, Minimax, and Kling, or even leverage LLMs to automatically refine prompts. With limitless extensibility, ComfyUI is pushing the boundaries of what's possible in AI-powered content creation.

ComfyUI is the Lego of AI art and video generation. It breaks the process into blocks that you can snap together however you want.

Each block (we call them nodes) handles a specific task: loading your model, processing your prompt, sampling your image, and more.

ComfyUI launched on GitHub in January 2023; Comfyanonymous is credited as its creator and is a co-founder of Comfy Org. It has seen massive success and has changed the way AI artists experiment and create.

Video: ComfyUI – The Complete Guide to Node-Based Workflows

Who should use ComfyUI?

Both beginners looking for step-by-step guidance and advanced users seeking fine-grained control.

ComfyUI might look complex at first, but beginners can absolutely use it. Many beginners have picked up ComfyUI and found that it actually helps them understand AI image and video generation better, since it exposes what’s happening at each step of the process.

At the same time, ComfyUI is geared toward power users and tinkerers – those comfortable with technical tools who want greater control over their creations.


ComfyUI vs. Automatic1111 - Which one to use for AI Image & Video Generation?

Interface & Ease of Use
  • Automatic1111: Very user-friendly; the UI is straightforward, with most features accessible directly as menu settings or buttons.
  • ComfyUI: Steeper learning curve and a slightly complex UI for beginners; the node-graph interface requires setting up workflows before generating images or videos.

Customization
  • Automatic1111: An extensive library of hundreds of plugins/extensions and hundreds of settings and configurations to customize the look and feel, plus advanced features for image and video generation and editing.
  • ComfyUI: Extremely customizable. The overall design is modular; users can modify or restructure virtually any aspect of image and video generation. A large ecosystem of custom nodes lets users leverage the absolute bleeding edge of open-source AI.

Performance
  • Automatic1111: Good but not the fastest. A1111 is optimized for common use but can be slower and heavier on VRAM than other UIs. Multi-image batch generation is supported, but multi-GPU support is not native.
  • ComfyUI: High performance. ComfyUI is optimized to reuse computations and manage memory smartly; even on the same hardware, it often generates faster than A1111. It lacks built-in multi-GPU support but can utilize the CPU for overflow.

Workflow Flexibility
  • Automatic1111: Low to medium. For standard workflows (txt2img, img2img, inpainting, upscaling), A1111 is very capable with dedicated interfaces, but outside of those, flexibility is limited unless you use scripts or automations.
  • ComfyUI: Very high. This is ComfyUI’s core strength: any workflow you can conceive (as long as you have the nodes) can be built. Text-to-image, image-to-image, inpainting, chaining, multi-output, and even non-image workflows are possible.

Learning Curve
  • Automatic1111: Shallow to moderate. Initial learning is easy since core functions are obvious; mastering every feature takes time due to the sheer number of options and extensions, but it can be learned step by step.
  • ComfyUI: Steep. It requires learning new concepts (nodes, graph logic) and some understanding of the pipelines behind Stable Diffusion, Flux, Hunyuan, and other base models. Beginners may need tutorials, but once learned, complex workflows come naturally.

Feature Set
  • Automatic1111: Extensive (via extensions): txt2img, img2img, inpainting, outpainting, depth maps, upscaling, face correction, textual inversion, LoRA, and more.
  • ComfyUI: Extensive (via custom nodes): out of the box, ComfyUI supports most generation features, including text/image prompts, inpainting, ControlNet, upscaling, variations, and more.

Ideal User Type
  • Automatic1111: Artists, hobbyists, and general users, including beginners. Great for those who want a powerful tool without needing to "build" the tool themselves, as well as semi-advanced users who want functionality but prefer a GUI over coding.
  • ComfyUI: Tech-savvy creators, tinkerers, power users, and researchers. Ideal for users with specific goals that a one-size-fits-all UI can’t satisfy, such as researchers testing a new diffusion process or developers integrating Stable Diffusion, Flux, or other pipelines into a larger system.




Let's Get Started

Whether you're just starting with AI image generation or you're a seasoned pro looking for more control, you'll find value here.

We'll walk through everything from basic setups to advanced techniques, helping you build confidence with each step.


Installation & Setup

Download ComfyUI for Windows/Mac here:

Download ComfyUI for Windows/Mac
Download the ComfyUI desktop application for Windows and macOS, or set up the application manually.

For installation instructions and download files, go here:

GitHub - comfyanonymous/ComfyUI: The most powerful and modular diffusion model GUI, API and backend with a graph/nodes interface.


ComfyUI Keyboard Shortcuts

Keybind | Explanation
Ctrl + Enter | Queue up current graph for generation
Ctrl + Shift + Enter | Queue up current graph as first for generation
Ctrl + Alt + Enter | Cancel current generation
Ctrl + Z / Ctrl + Y | Undo / Redo
Ctrl + S | Save workflow
Ctrl + O | Load workflow
Ctrl + A | Select all nodes
Alt + C | Collapse/uncollapse selected nodes
Ctrl + M | Mute/unmute selected nodes
Ctrl + B | Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through)
Delete / Backspace | Delete selected nodes
Ctrl + Backspace | Delete the current graph
Space | Move the canvas around when held and moving the cursor
Ctrl/Shift + Click | Add clicked node to selection
Ctrl + C / Ctrl + V | Copy and paste selected nodes (without maintaining connections to outputs of unselected nodes)
Ctrl + C / Ctrl + Shift + V | Copy and paste selected nodes (maintaining connections from outputs of unselected nodes to inputs of pasted nodes)
Shift + Drag | Move multiple selected nodes at the same time
Ctrl + D | Load default graph
Alt + + | Canvas zoom in
Alt + - | Canvas zoom out
Ctrl + Shift + LMB + Vertical drag | Canvas zoom in/out
P | Pin/unpin selected nodes
Ctrl + G | Group selected nodes
Q | Toggle visibility of the queue
H | Toggle visibility of history
R | Refresh graph
F | Show/hide menu
. | Fit view to selection (whole graph when nothing is selected)
Double-click LMB | Open node quick search palette
Shift + Drag | Move multiple wires at once
Ctrl + Alt + LMB | Disconnect all wires from clicked slot

Skip Installation with ThinkDiffusion cloud

ThinkDiffusion lets you spin up virtual machines loaded with your choice of open-source apps. Run models from anywhere—Hugging Face, Civitai, or your own—and install custom nodes for ComfyUI. Your virtual machine works just like a personal computer.

The best part? Pick hardware that fits your needs, from 16GB to 48GB VRAM (with an 80GB option coming soon).

Plus, ThinkDiffusion updates apps quickly and maintains multiple versions to keep your workflow running smoothly.

Common Installation Issues

    • Missing dependencies or errors during install (how to fix Python/PyTorch issues).
    • “CUDA not available” – ensuring correct GPU drivers or using CPU mode if no CUDA.
    • Permission errors on Mac/Linux – how to fix with correct permissions or using sudo appropriately.
    • Empty browser page or server not launching – checking for firewall or port conflicts, and verifying you opened the correct local URL.

"Torch not compiled with CUDA enabled" error

This error means the PyTorch you installed isn’t using your GPU properly. It often happens if you accidentally installed the CPU-only version of torch. To fix it, uninstall the torch package and then reinstall the correct GPU version.
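If you're not sure which build you have, a minimal diagnostic sketch like the one below (run in the same Python environment ComfyUI uses) will tell you whether PyTorch can see the GPU. The pip commands in the comments are one common way to reinstall a CUDA wheel; check PyTorch's install page for the index URL that matches your driver.

```python
# Quick diagnostic: can the installed PyTorch build see your GPU?
# Run this in the same Python environment that ComfyUI uses.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA build:", torch.version.cuda)  # None on a CPU-only build

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    # A CPU-only wheel usually reports a version like "2.x.x+cpu" and CUDA build None.
    # In that case, reinstall a CUDA-enabled wheel, for example (adjust cu### to your driver):
    #   pip uninstall torch torchvision torchaudio
    #   pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
    print("This build cannot use the GPU; reinstall a CUDA-enabled PyTorch wheel.")
```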

ComfyUI interface not showing up / cannot connect

If you started ComfyUI and nothing opened, or you can’t see the UI, first check the console for the address. By default it should be at 127.0.0.1:8188. Make sure you’re trying to access it from the same machine (ComfyUI by default binds to localhost, so if you’re on a different PC or device on the network, it won’t load). If needed, open a browser on the machine running ComfyUI and go to the URL (http://localhost:8188). If you still get connection refused, ComfyUI might not have started correctly – check for errors in the terminal where you ran main.py or the .bat file.
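As a quick sanity check, a short script like this (assuming the default 127.0.0.1:8188 address; adjust it if you launched with --listen or --port) can confirm whether the server is reachable at all:

```python
# Minimal reachability check for a local ComfyUI server (default address assumed).
import urllib.request
import urllib.error

url = "http://127.0.0.1:8188/"  # the same URL the web UI is served from

try:
    with urllib.request.urlopen(url, timeout=5) as resp:
        print("ComfyUI is up, HTTP status:", resp.status)
except urllib.error.URLError as err:
    print("Could not reach ComfyUI:", err)
    print("Check the terminal where you started main.py (or the .bat file) for errors.")
```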

No models found / model list is empty

If ComfyUI launches but you can’t select any model in the Load Checkpoint node, it means it didn’t detect your Stable Diffusion model file.

Double-check that you placed your model in the correct folder (ComfyUI/models/checkpoints).

If you have another open-source UI installed, like Automatic1111, you can also point ComfyUI at those model directories (via the extra_model_paths.yaml file in the ComfyUI folder) instead of duplicating files.


Understanding ComfyUI’s Node-Based Workflow System

ComfyUI is a node-based interface for Stable Diffusion. Instead of using sliders and buttons like Automatic1111, you build visual workflows using nodes—modular blocks that perform specific tasks like loading a model, processing a prompt, or generating an image.

💡
If you’ve used software like Blender, Unreal Engine, or even visual programming tools like Scratch, this will feel familiar. If not, don’t worry—it’s easier than it looks.

ComfyUI nodes

Visual Workflow vs. Traditional UI

Most UIs are form-based—you type a prompt, adjust a few sliders, and hit "Generate." The software takes care of the behind-the-scenes processing. ComfyUI, on the other hand, exposes every part of that process as a node.

Think of it like this:

  • Traditional UIs = "Type, click, and wait." The process is preset, and you tweak what the UI allows.
  • ComfyUI = "Build it your way." You assemble your own AI pipeline using nodes, connecting them like a flowchart.

This means you can rearrange, modify, or add extra steps in the image generation process—something other UIs don’t let you do.

Want to run multiple samplers side by side? Or mix two models in one workflow? You can. This flexibility is what makes ComfyUI so powerful.

Key Concepts: How Workflows Work in ComfyUI

A ComfyUI workflow is a graph of nodes (see the sketch after this list), where:

  • Nodes = Building blocks of an image generation process.
  • Edges (Connections) = The wires that pass data from one node to another.
  • Inputs & Outputs = Each node has specific inputs (what it needs to work) and outputs (what it produces).
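For illustration, here is roughly how a couple of nodes and one connection look in ComfyUI's exported API-format JSON; the checkpoint filename and prompt are just placeholders.

```python
# Rough sketch of how ComfyUI's API-format JSON represents nodes and edges.
# Keys are node ids; each node has a class_type plus an "inputs" dict.
# An edge is just an input whose value is ["source_node_id", output_slot_index].
workflow_fragment = {
    "1": {  # node: load a checkpoint (filename is a placeholder)
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"},
    },
    "2": {  # node: encode the positive prompt
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "a cozy cabin in the woods",
            "clip": ["1", 1],  # edge: use output slot 1 (CLIP) of node "1"
        },
    },
}
```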

Core Nodes Overview (The Essential Pieces)

A ComfyUI workflow typically includes a handful of must-have nodes that every generation process needs. Let’s break them down:

Checkpoint Loader

📌 Loads the AI model. This is where you pick your base model: a Stable Diffusion checkpoint (like SD 1.5, SDXL, or a fine-tuned custom model), or another model family such as Flux or Hunyuan.

  • Your choice of model affects style and quality.
  • Different models = Different art styles or subject capabilities.
  • Think of this as the "brain" of the AI—it contains all the knowledge needed to generate images.

🔗 Connects to: Sampler (it needs a model to generate images).

🚨Common Pitfall: No models listed? Make sure your models are in the right folder (ComfyUI/models/checkpoints). If they still don’t show up, restart ComfyUI.

CLIP Text Encoder

📌 The AI doesn’t "read" text like we do—it needs it converted into embeddings (numeric representations of words). This is what the CLIP Text Encoder does.

  • Positive Prompt Encoder: Tells the AI what to create.
  • Negative Prompt Encoder: Tells the AI what not to include (e.g., "blurry," "low quality," "bad anatomy").

🔗 Connects to: Sampler (it needs prompts to guide the image).

🚨 Common Pitfall: If your prompt isn’t affecting the image much, try increasing CFG Scale in the Sampler node. If the image looks "muddy," your model might have poor prompt adherence.

KSampler

📌 This is the core of image generation—it transforms noise into a meaningful image by following your prompt. Different samplers interpret and refine images differently, affecting speed, sharpness, and creativity.

Controls:

  • Sampler Type (Euler, DDIM, DPM++, etc.) – Different algorithms that process the image differently.
    - Euler a – Fast and good for quick previews, but may lack fine detail.
    - DPM++ 2M Karras – High-quality and smooth details, recommended for final outputs.
    - DDIM – Faster but can sometimes produce softer, less-defined images.
    - UniPC – A good balance between speed and detail.
  • Steps – How many refinement passes it takes (higher = more detail, but slower).
    - Fewer steps (e.g., 20-30) = Faster, but may lack precision.
    - More steps (e.g., 50-70) = More refined images, but diminishing returns beyond ~50.
  • CFG Scale (Classifier-Free Guidance) – Determines how strictly the AI follows your prompt (higher = more adherence, but less creativity).
    - Low (3-5) = More creative freedom, but sometimes off-prompt.
    - Medium (7-10) = Balanced—faithful to the prompt but still creative.
    - High (12-15) = Follows the prompt strictly but may result in repetitive or unnatural outputs.
  • Seed – Determines randomness (fixed seed = reproducible results, random = new outputs each time).

🔗 Connects to: VAE Decoder (to convert latent data into an actual image).

🚨 Common Pitfall: If your image looks too rigid or unnatural, lower the CFG Scale for more variety. If it’s too abstract, increase the CFG Scale to make it follow the prompt more closely.

📌 This is where the magic happens: the sampler gradually refines an image from noise into something coherent.

VAE Decoder

📌 Stable Diffusion doesn’t directly create images—it works in a special compressed format called latent space. The VAE Decoder converts that latent image into something visible.

  • If your images look blurry or washed out, try using a better VAE model (some models come with an improved VAE).
  • Works like a translator, converting AI data into pixels.

🔗 Connects to: Image Preview or Save Node.

🚨 Common Pitfall: If your output image is weirdly blurry, check if you’re using the right VAE model.

Image Save/Preview

📌 This is the last step—it displays and/or saves the final image.

  • Preview Node: Lets you see the image inside ComfyUI.
  • Save Node: Exports the image as a file on your computer.

🔗 Connects to: VAE Decoder (because it needs the processed image).

🚨 Common Pitfall: If your image is being saved but not displayed, check if you added a Preview Node!


How These Nodes Work Together (Basic Text-to-Image Pipeline)

Here’s the typical ComfyUI workflow for generating an image:

  • Checkpoint Loader → Loads your Stable Diffusion model (e.g., SDXL).
  • Text Prompt Encoder → Converts text prompts into a format the AI understands.
  • KSampler → Runs the diffusion process to generate an image in latent space.
  • VAE Decoder → Converts the latent image into an actual viewable image.
  • Image Preview / Save Node → Displays or saves the final result.

In traditional UIs, this entire process is hidden. ComfyUI lets you see and modify every step.
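To make that concrete, here is a minimal sketch of the same five-step pipeline written in ComfyUI's API ("prompt") format and queued over HTTP. The node class names and input fields follow ComfyUI's built-in nodes, but the checkpoint filename is a placeholder you would swap for a model you actually have, and the server is assumed to be running on the default local address.

```python
# Minimal text-to-image pipeline in ComfyUI's API ("prompt") format, queued over HTTP.
# Assumes ComfyUI is running locally on the default port and that the checkpoint
# filename below exists in ComfyUI/models/checkpoints (replace it with one you have).
import json
import urllib.request

prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",               # positive prompt
          "inputs": {"text": "a lighthouse at sunset, oil painting", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",               # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI_guide"}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": prompt}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # The response contains a prompt_id; the finished image lands in ComfyUI's output folder.
    print(resp.read().decode())
```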

Video: Autocinemagraph in ComfyUI

Read more on workflows

ComfyUI Workflows and what you need to know
ComfyUI offers a node-based interface for Stable Diffusion, simplifying the image generation process.

Creating Your First Image

Generate your first image with Flux and ComfyUI



Simple 5-step process:

1. Sign up on ThinkDiffusion (we offer a free trial)
2. Launch a ComfyUI machine
3. Download the workflow from the tutorial below
4. Once the machine is launched, simply drag the workflow onto the ComfyUI user interface
5. Add your input files, adjust the prompt & settings, and click generate 💥

Generate your first AI image with Flux on ThinkDiffusion
Your first image from text, using the TD_Flux_Text2Image_NF4.json workflow: start your ThinkDiffusion machine, load the text2image workflow, generate an image, try different sizes, and write your own prompts.


Try another one - Here's a collection of 10 cool ComfyUI workflows

A collection of 10 cool ComfyUI workflows
ComfyUI workflows for Stable Diffusion, offering a range of tools from image upscaling and merging.


Advanced Workflows & Creative Applications

ControlNet in ComfyUI

ControlNet is a family of models for Stable Diffusion that gives you precise control over image composition using pose, sketch, reference images, and more.

Video: Flux ControlNet in ComfyUI

Flux Controlnet model

Recently, XLab, InstantX, Shakker Labs and MistoAI have released ControlNets for Flux. XLab's collection currently includes three models: Canny, Depth, and HED. Each ControlNet is trained at 1024x1024 resolution and works optimally at that resolution.

With Flux, you can create impressive images from text prompts, descriptions, or other inputs such as images. It also works well with ComfyUI, a powerful and straightforward workspace that both experts and newcomers without a technical background can use.

ControlNet is a tool for controlling the precise composition of AI images. Using its model, you can provide an additional control image to condition and control your image generation. It can generate detailed photos, illustrations and assets or change existing images.
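As a rough sketch (API-format JSON with placeholder filenames), applying a ControlNet with ComfyUI's generic built-in nodes usually means loading it, loading a control image, and applying both to your prompt conditioning before it reaches the sampler. Specific families such as the Flux ControlNets may use their own dedicated custom nodes instead.

```python
# Sketch: conditioning a generation with a ControlNet via ComfyUI's built-in nodes
# in API-format JSON. Filenames are placeholders, and "positive_prompt_node" stands
# in for the id of your CLIPTextEncode node elsewhere in the graph.
controlnet_fragment = {
    "10": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "canny_controlnet.safetensors"}},
    "11": {"class_type": "LoadImage",          # the control image (e.g. a sketch or pose)
           "inputs": {"image": "sketch.png"}},
    "12": {"class_type": "ControlNetApply",
           "inputs": {"conditioning": ["positive_prompt_node", 0],
                      "control_net": ["10", 0],
                      "image": ["11", 0],
                      "strength": 0.8}},
    # The KSampler's "positive" input then references ["12", 0] instead of the raw prompt.
}
```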

Learn more about ControlNet with our resources:

Precision in Flux Art: Harnessing the Power of ControlNet
Flux lets you create impressive images from text prompts. ControlNet is a significant tool for controlling the precise composition of AI images.

Flux Controlnet workflow in ComfyUI

Sketch to Image with Controlnet in ComfyUI

ComfyUI Controlnet workflow

LoRA Models in ComfyUI

Low-Rank Adaptation (LoRA) is a training technique used to fine-tune Stable Diffusion models. It is particularly useful for AI art production because it addresses the trade-off between model file size and training power.

With LoRAs, users can make model customizations without putting a heavy strain on local storage resources.

ComfyUI LoRA

Artists, designers, and enthusiasts will find LoRA models compelling because they open up a diverse range of creative possibilities. Best of all, they are simple to use in ComfyUI.
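For example, in a node graph (shown here as an API-format JSON sketch with placeholder filenames), a LoRA usually sits between the checkpoint loader and everything downstream:

```python
# Sketch: inserting a LoRA between the checkpoint loader and the rest of the graph.
# The LoraLoader node takes the base MODEL and CLIP and returns patched versions,
# so the text encoders and sampler should be wired to its outputs instead.
lora_fragment = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},   # placeholder
    "2": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["1", 1],
                     "lora_name": "my_style_lora.safetensors",      # placeholder
                     "strength_model": 0.8, "strength_clip": 0.8}},
    # Downstream nodes (CLIPTextEncode, KSampler, ...) then reference ["2", 0] and ["2", 1].
}
```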

Our team has created a bunch of tutorials on both using and training your own LoRAs. Check them out here:

LoRA - ThinkDiffusion

ComfyUI LoRA

HyperSD and Blender in ComfyUI


What is Hyper Stable Diffusion? 
ByteDance has demonstrated its dedication to both speed and innovation with the introduction of Hyper-SD. It is designed to speed up generation time significantly. With increased speed, it also ensures that images are sharper, more detailed and visually appealing.

Video: HyperSD and Blender in ComfyUI

The image below is a comparison between Hyper SD and SDXL Lightning using 1 step. Tests show that Hyper-SD has better quality and works faster than earlier models such as SDXL-Lightning.

Hyper SD vs SDXL Lightning


Blender is renowned for its capabilities in creating 3D models and images. With Blender, you can create detailed characters, elaborate scenes and stunning effects for movies, video games and digital art. The software is supported by a passionate community that shares tips, tutorials and plugins to help each other improve their skills.

ComfyUI can connect to Blender with the custom node MixLab. This seamless integration ensures consistent and reliable results, which are essential for any professional projects.

Real-Time Creativity: Leveraging Hyper SD and Blender with ComfyUI
The revolutionary training approach of the Hyper-SD technique delivers near-perfect performance, as evidenced by higher CLIP and Aes scores.

Hyper SD with Blender

Consistent Character Creation in ComfyUI


Follow our step-by-step tutorial to build consistent characters for your storylines:

Consistent Character Creation in ComfyUI
Utilizing Flux in ComfyUI for Consistent Character Creation
Creating characters that look the same every time is crucial. This guide will show you how to maintain consistency using a workflow in ComfyUI.

Consistent Character Creation in ComfyUI

Outpainting in ComfyUI

What is Outpainting?
Outpainting allows for the creation of any desired image beyond the original boundaries of a given picture. That is, you can enlarge photographs beyond their original boundaries using the capabilities of artificial intelligence.

Outpainting in ComfyUI

Outpainting with ComfyUI
Creators will find this ComfyUI Stable Diffusion outpainting workflow indispensable. This AI process extends images beyond their frame, adding pixels to the height or width while maintaining quality. Submit your image, select the direction, and let the AI work.

Outpainting in ComfyUI


Troubleshooting & FAQs

Why am I getting blank or black images?

Causes: Typically due to a missing connection or component in the workflow. If the VAE Decoder isn’t connected or the model isn’t loaded correctly, you might not see a proper image.

Solution: Ensure all essential nodes (model, text encoder, sampler, VAE) are properly connected. Verify your prompt isn’t empty and that a valid checkpoint is loaded.

"CUDA out of memory" errors

Causes: Your GPU ran out of VRAM (common with high resolutions, large batch sizes, or large models like SDXL on a smaller GPU). Video workflows are another common cause.

Solutions: Reduce image size or steps, use batch size 1, or enable low-VRAM mode. If using SDXL, try the base model alone without the refiner, or run the refiner stage on the CPU. Clearing cached data in ComfyUI may also help. For large model files, look for distilled or quantized versions (such as GGUF), which take up less GPU memory and are better optimized for constrained CPU and GPU setups. Learn more at https://learn.thinkdiffusion.com/introduction-to-flux-ai-quick-guide/#model-quantization
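If you want to see how much headroom you actually have, a quick check like this (run in ComfyUI's Python environment, assuming a CUDA GPU) reports free VRAM before you load a large model:

```python
# Quick VRAM check before loading a large model (run in ComfyUI's Python environment).
import torch

if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()  # bytes on the current CUDA device
    gib = 1024 ** 3
    print(f"VRAM free: {free / gib:.1f} GiB of {total / gib:.1f} GiB")
    # If only a few GiB are free, lower the resolution/batch size or relaunch
    # ComfyUI with a reduced-VRAM option (e.g. --lowvram) before trying again.
else:
    print("No CUDA GPU detected; ComfyUI will fall back to CPU and run much slower.")
```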

ComfyUI not launching / web page not showing

Causes: Missing dependencies or the local server failing to start.

Fix: Run ComfyUI from a terminal to check for errors. Install any missing Python packages or update PyTorch. Manually navigate to http://localhost:8188 and check for firewall blocks. Common causes include a port that is already occupied, dependency conflicts, and CUDA runtime issues.

Error messages in nodes (red outline)

Causes: A node with a red outline usually means a missing custom node, or a missing model file referenced by the workflow.

Fix: Confirm the model file exists in the correct folder and is named correctly. Update or remove any outdated custom nodes, and install any missing custom nodes through ComfyUI-Manager. After installing, restart ComfyUI and reload your session.

How to update ComfyUI safely?

Guide: If you installed via Git, run git fetch and then git pull in the ComfyUI folder to update. Also make sure you are not launching with unintended command-line arguments such as --cpu, which forces CPU-only mode.

Precautions: Back up workflows and custom nodes. After updating, test a simple workflow first. Check ComfyUI release notes or Discord for breaking changes.

Can I use ComfyUI with an AMD GPU?

Answer: As of now, ComfyUI is primarily designed for Nvidia GPUs using CUDA. AMD users might try using ROCm (if supported) or run on CPU, but performance will be slower.

Where are my images saved?

Answer: By default, images are saved in the output folder inside the ComfyUI directory. If you used an Image Save node with a custom location, check that path instead.

How do I import someone else’s workflow?

Answer: The most convenient way to import is to drag the file over the ComfyUI interface. Or you could use ComfyUI’s Load feature: place the .json workflow file in the workflows folder, or use the UI’s load function to open it. Ensure you have any required custom nodes for that workflow.


Step-by-Step Tutorials for ComfyUI Workflows

ComfyUI Face Detailer
What is Face Detailer? It was created by Dr. Lt. Data, who also made ComfyUI Manager. The ComfyUI Face Detailer node can recognize and improve faces with ease, making it great for restoring characters’ faces in photos, movies, and animations.

ComfyUI Face Detailer

ComfyUI Workflow for Animating Parts of an Image: Motion Brush
This Motion Brush workflow allows you to add animations to specific parts of a still image. It literally works by allowing users to “paint” an area or subject, then choose a direction and add an intensity. In short, given a still image and an area you choose, the workflow will

Motion Brush ComfyUI workflow

AutoCinemagraph ComfyUI Guide
Cinemagraphs are still images that incorporate short, repeating movements to create the illusion of motion

Autocinemagraph comfyUI step-by-step tutorial


Community & Expanding Your Skills



ComfyUI Community Resources

  1. ComfyUI GitHub Discussions
    It's the official GitHub discussion board where users report issues and request new features. It's also great for staying up to date with new releases, and it's quite active.
  2. r/ComfyUI on Reddit
    We love this Reddit community where users engage and share workflow examples, guides, troubleshooting tips, and more.
  3. ThinkDiffusion Discord
    Our active community where users ask questions, share their creations, and exchange insights. We also post detailed tutorials here regularly.

Automatic1111 in the Cloud - Everything you need to know
The most comprehensive guide on Automatic1111. Auto1111 stable diffusion is the gold standard UI for accessing everything Stable Diffusion has to offer. It’s a feature-rich web UI and offers extensive customization options, support for various models and extensions, and a user-friendly interface.

Automatic1111 in the Cloud

Discover why Wan 2.1 is the best AI video model right now.
The recently released Wan 2.1 is a groundbreaking open-source AI video model. Renowned for its ability to exceed the performance of other open-source models like Hunyuan and LTX, as well as numerous commercial alternatives, Wan 2.1 delivers truly incredible text2video and image2video generations.

Wan 2.1 AI video model

AI Video Speed: How LTX is Reshaping Video2Video as We Know It
LTX Video to Video delivers really powerful results at amazing speed, and speed is exactly the power of LTX. With this video2video workflow, we’ll transform input videos into AI counterparts with amazing efficiency.

LTX Video to Video

Enhanced Video Generation with Hunyuan and LoRA
We can now use LoRAs together with the AI video model Hunyuan to keep character or object consistency in a video.

Hunyuan and LoRA
