Bad-case feedback wanted: if possible, please provide me with bad cases, so I can make adjustments in the next version based on them. Thanks.
A smaller version, please. I can't use 2.x due to its size, so I'm stuck with 1.0.
Here are more examples, 8 steps vs. 25 steps:
Prompt: Photorealistic portrait of a beautiful young East Asian woman with long, vibrant purple hair and a black bow. She is wearing a flowing white summer dress, standing on a sunny beach with a sparkling ocean and clear blue sky in the background. Bright natural sunlight, sharp focus, ultra-detailed.
Resolution: 1728x992
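For context on what the step counts mean: a few-step distilled model denoises over a much sparser timestep schedule than the 25-step baseline, which is why it is so much faster. A minimal sketch (the even spacing and the 1000 training timesteps are illustrative assumptions; real schedulers such as DDIM or Euler differ in spacing details):

```python
import numpy as np

def timestep_schedule(num_steps, num_train_timesteps=1000):
    """Evenly spaced denoising timesteps, high noise -> low noise.

    A simplified stand-in for a diffusion scheduler's step selection.
    """
    return np.linspace(num_train_timesteps - 1, 0, num_steps).round().astype(int)

fast = timestep_schedule(8)    # distilled few-step model
full = timestep_schedule(25)   # 25-step baseline

print(fast)   # 8 timesteps from 999 down to 0
print(full)   # 25 timesteps over the same range
```

Both schedules cover the same noise range; the 8-step one simply takes much larger jumps, which is where quality differences can appear.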
Is this the 8-step version?
> A smaller version, can't use 2.x due to the size, I am stuck with 1.0
We will try again.
> Is this the 8-step version?
Yes, the left image; the right image was generated with the 25-step version.
It was very difficult for me to generate a good image in inpaint mode; I had to test several types of masks. Sometimes I also need a specific seed (48, in my case) to avoid too many visible transitions in the image. I've now implemented differential diffusion in the pipeline for comparison; see my results.
| Model inpaint | Differential Diffusion | Model Inpaint + Differential Diffusion | Mask |
|---|---|---|---|
| ![]() | ![]() | ![]() | ![]() |
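The "too many transitions" issue above is exactly what differential diffusion targets: instead of a hard 0/1 inpaint mask, it takes a continuous change map where each pixel value controls how strongly that region is edited. A minimal sketch of softening a hard mask into such a map (the iterative 4-neighbour blur is an illustrative choice; real pipelines often use a Gaussian blur):

```python
import numpy as np

def soften_mask(binary_mask, iterations=10):
    """Turn a hard 0/1 inpaint mask into a continuous change map.

    Differential diffusion reads each pixel value in [0, 1] as the edit
    strength at that location, so a smooth falloff at the mask border
    avoids visible seams. Each iteration averages the 4 neighbours.
    """
    m = binary_mask.astype(np.float64)
    for _ in range(iterations):
        m = 0.25 * (np.roll(m, 1, 0) + np.roll(m, -1, 0) +
                    np.roll(m, 1, 1) + np.roll(m, -1, 1))
    return np.clip(m, 0.0, 1.0)

hard = np.zeros((64, 64))
hard[16:48, 16:48] = 1.0      # hard-edit the centre square
soft = soften_mask(hard)      # full strength inside, smooth falloff at the edge
```

The interior stays at full edit strength while the border ramps down gradually, which is what suppresses the abrupt transitions a binary mask produces.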
@bubbliiiing, is it possible in the next version to train Tile together with the other ControlNets in the same model, so I don't need to switch between models?
> It was very difficult for me to generate a good image in inpaint mode; I had to test several types of masks.
I added some new mask-generation schemes to the new training run that may help.
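The actual schemes used in that training are not described here, so the following is only a hypothetical sketch of what "mixing several mask-generation schemes" typically looks like for inpaint training: random rectangles, full-image masks, and irregular blobs, sampled per example.

```python
import numpy as np

def random_training_mask(h, w, rng, scheme=None):
    """Sample one of several inpaint-mask shapes for training.

    Hypothetical sketch; the schemes and their mixing ratios are
    illustrative, not the ones actually used in training.
    """
    scheme = scheme or rng.choice(["rect", "full", "blob"])
    mask = np.zeros((h, w), dtype=np.float32)
    if scheme == "rect":
        y0, x0 = rng.integers(0, h // 2), rng.integers(0, w // 2)
        y1, x1 = rng.integers(y0 + 1, h), rng.integers(x0 + 1, w)
        mask[y0:y1, x0:x1] = 1.0          # axis-aligned rectangle
    elif scheme == "full":
        mask[:] = 1.0                     # whole image masked: pure generation
    else:
        # Irregular blobs: threshold coarse noise, then upsample blockwise
        noise = rng.random((h // 8, w // 8))
        big = np.kron(noise > 0.7, np.ones((8, 8)))
        mask[:big.shape[0], :big.shape[1]] = big
    return mask

rng = np.random.default_rng(48)
m = random_training_mask(64, 64, rng, "rect")
```

Training on a diverse mask distribution is what usually makes a model robust to whatever mask shape the user draws at inference time.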
> @bubbliiiing, is it possible in the next version to train Tile together with the other ControlNets in the same model, so I don't need to switch between models?
This might require putting the tile information into different channels; otherwise it could affect the control model's performance. This needs some experimentation.
> This might require putting the tile information into different channels, otherwise it could affect the control model's performance. This needs some experimentation.
Do you mean the model might need a new parameter, like control_mode, with values such as [1,2,3,4,5,6,7,8] to identify the mode?
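The channel idea being discussed resembles how "union"-style multi-condition ControlNets are often laid out: each control type gets its own channel slot, plus a control_mode indicator telling the model which slot carries a real condition. A hedged sketch (the mode list, channel layout, and function name are all assumptions for illustration, not this model's actual API):

```python
import numpy as np

# Hypothetical layout for a single multi-task control model.
CONTROL_MODES = ["canny", "depth", "pose", "tile"]   # assumed mode list

def pack_control_input(cond_map, mode, h, w):
    """Stack the condition into a fixed per-mode channel layout plus a
    one-hot control_mode indicator, so Tile and the other controls can
    share one model without interfering with each other's channels."""
    channels = np.zeros((len(CONTROL_MODES), h, w), dtype=np.float32)
    idx = CONTROL_MODES.index(mode)
    channels[idx] = cond_map                  # condition goes in its own slot
    mode_onehot = np.zeros(len(CONTROL_MODES), dtype=np.float32)
    mode_onehot[idx] = 1.0                    # control_mode-style indicator
    return channels, mode_onehot

tile_cond = np.random.default_rng(0).random((64, 64)).astype(np.float32)
x, mode = pack_control_input(tile_cond, "tile", 64, 64)
```

Keeping tile in its own channel means its dense, image-like signal never overwrites the sparser edge/pose conditions, which is the interference concern raised above.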
> A smaller version, can't use 2.x due to the size, I am stuck with 1.0
I've uploaded a lite version today; feel free to check it out.