Z-Image-Fun
Example condition/output pairs for the supported controls: Pose + Inpaint, Pose, Canny, HED, and Depth (result images omitted here; see the original model card).
Go to the VideoX-Fun repository for more details.
Please clone the VideoX-Fun repository and create the required directories:
```shell
# Clone the code
git clone https://github.com/aigc-apps/VideoX-Fun.git

# Enter VideoX-Fun's directory
cd VideoX-Fun

# Create model directories
mkdir -p models/Diffusion_Transformer
mkdir -p models/Personalized_Model
```
Then download the weights into `models/Diffusion_Transformer` and `models/Personalized_Model` so that the directory layout matches the tree below (a download sketch follows the layout).
```
📦 models/
├── 📂 Diffusion_Transformer/
│   └── 📂 Z-Image-Turbo/
├── 📂 Personalized_Model/
│   └── 📦 Z-Image-Turbo-Fun-Controlnet-Union-2.0.safetensors
```
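If the weights are hosted on Hugging Face, something like the following sketch can fetch them into the layout above. The repo IDs here (`Tongyi-MAI/Z-Image-Turbo` and `alibaba-pai/Z-Image-Fun`) are assumptions, not taken from this card; check the actual model pages before running.

```python
# Minimal download sketch using huggingface_hub.
# NOTE: the repo IDs below are assumptions -- verify them against the
# actual model pages. Only the target directories come from this README.
from huggingface_hub import snapshot_download, hf_hub_download

# Base diffusion transformer -> models/Diffusion_Transformer/Z-Image-Turbo/
snapshot_download(
    repo_id="Tongyi-MAI/Z-Image-Turbo",  # assumed repo id
    local_dir="models/Diffusion_Transformer/Z-Image-Turbo",
)

# ControlNet weights -> models/Personalized_Model/
hf_hub_download(
    repo_id="alibaba-pai/Z-Image-Fun",  # assumed repo id
    filename="Z-Image-Turbo-Fun-Controlnet-Union-2.0.safetensors",
    local_dir="models/Personalized_Model",
)
```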
Then run `examples/z_image_fun/predict_t2i_control_2.0.py` (text-to-image generation with control) or `examples/z_image_fun/predict_i2i_inpaint_2.0.py` (image-to-image inpainting).
Generation results vary with the combination of diffusion steps and control-scale strength. Parameter description:

- **Diffusion Steps**: number of iteration steps for the diffusion model (9, 10, 20, 30, 40)
- **Control Scale**: control strength coefficient (0.65-1.0)
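To compare these settings systematically, a simple grid sweep can be scripted around the predict scripts. The sketch below uses a hypothetical `generate(...)` stand-in, since the exact entry point of the scripts is not shown in this card; it only illustrates sweeping the two parameters.

```python
# Hypothetical sweep over diffusion steps and control scale.
# `generate` is a placeholder, NOT a real VideoX-Fun API: replace its
# body with a call into the predict script / pipeline.
from itertools import product

DIFFUSION_STEPS = [9, 10, 20, 30, 40]     # values listed above
CONTROL_SCALES = [0.65, 0.75, 0.85, 1.0]  # sampled from the 0.65-1.0 range

def generate(num_inference_steps: int, control_scale: float) -> None:
    # Placeholder: run the pipeline with these two parameters and save
    # the resulting image for side-by-side comparison.
    print(f"steps={num_inference_steps}, control_scale={control_scale}")

for steps, scale in product(DIFFUSION_STEPS, CONTROL_SCALES):
    # One image per (steps, scale) cell of the comparison grid.
    generate(num_inference_steps=steps, control_scale=scale)
```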