This model is a finetune of FLUX.2-Klein-4B. The model has been quantized with SDNQ (4-bit dynamic) for use on the T4 GPU in the Google Colab environment. Code on how to use this quantization is provided in this model card ⬇️.


From: https://civitai.red/models/2327389/flux2-klein-aio?modelVersionId=2618128 (currently the most popular Klein 4B finetune)

Step 1: To use this in Google Colab, first make sure to place two zip files on your Google Drive, called foregrounds.zip and backgrounds.zip
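If you are assembling those zips yourself, a minimal helper like the one below will pack an image folder into a flat zip. This is just a convenience sketch; the folder and zip names passed in the commented calls are the ones the notebooks expect on Drive, but the helper itself is not part of the notebooks.

```python
# Hypothetical helper (not part of the notebooks): zip every image in a
# folder into a flat archive suitable for uploading to Google Drive.
import zipfile
from pathlib import Path

def make_zip(image_dir: str, zip_path: str) -> int:
    """Zip all images in image_dir into zip_path; returns the image count."""
    count = 0
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in sorted(Path(image_dir).iterdir()):
            if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp"}:
                zf.write(p, arcname=p.name)  # flat layout, no subfolders
                count += 1
    return count

# make_zip("my_foregrounds", "foregrounds.zip")
# make_zip("my_backgrounds", "backgrounds.zip")
```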


Then run the encrypt cell: https://huggingface.co/codeShare/FLUX.2-klein-AIO-SDNQ-4bit-dynamic/blob/main/colab_notebooks/twin_input_setup/%F0%9F%94%93encrypt_kaggle_dataset.ipynb

This will create a folder of encrypted images + settings and the edit prompt for batch processing.
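To illustrate what "settings and the edit prompt for batch processing" can look like, here is a hypothetical batch manifest of the kind such a setup cell might write next to the encrypted images. The field names and file-name pattern below are invented for this sketch; the actual notebook output may be structured differently.

```python
# Hypothetical batch manifest (field names are invented, not taken from
# the notebook): one edit prompt plus the foreground/background pairings.
import json

settings = {
    "edit_prompt": "place the character in image 1 on the background in image 2",
    "width": 1024,
    "height": 1024,
    "pairs": [
        {"foreground": "fg_000.png.enc", "background": "bg_000.png.enc"},
    ],
}
with open("batch_settings.json", "w") as f:
    json.dump(settings, f, indent=2)
```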

Step 2: You can then either run the encrypted image folder through the run cell for Google Colab: https://huggingface.co/codeShare/FLUX.2-klein-AIO-SDNQ-4bit-dynamic/blob/main/colab_notebooks/twin_input_setup/dual_pipe_klein_edit_colab.ipynb

Or, alternatively (my preferred option), run the cell on Kaggle: https://huggingface.co/codeShare/FLUX.2-klein-AIO-SDNQ-4bit-dynamic/blob/main/kaggle_notebooks/twin_input_setup/fg-bg-klein.ipynb Running on kaggle.com has the advantage of using 2xT4 GPUs instead of one, which is roughly twice as fast (about one image edit every 15 seconds, compared to one every 30 seconds on Colab). The Kaggle script can also be left to run and will auto-disconnect upon completion, unlike the Google Colab variant, which needs to be kept open in the browser.


Step 3: Finally, decrypt the results using the decrypt cell: https://huggingface.co/codeShare/FLUX.2-klein-AIO-SDNQ-4bit-dynamic/blob/main/colab_notebooks/twin_input_setup/decrypt_results.ipynb

For this step you need to remember the password used during encryption in order to decrypt the contents.
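The password matters because, in password-based encryption schemes, the key is derived from it deterministically: the same password and salt always reproduce the same key, and there is no recovery path without it. A minimal sketch of PBKDF2-style key derivation of the kind such a notebook could use (the actual scheme in these notebooks may differ):

```python
# Sketch of password-based key derivation (PBKDF2-HMAC-SHA256); this is a
# common approach, not necessarily the exact scheme the notebooks use.
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 256-bit key from a password; same inputs -> same key."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)      # stored alongside the ciphertext, not secret
key = derive_key("my-secret", salt)
assert len(key) == 32      # 256-bit key
```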

//--//

Output example

Suppose you have a foreground image of a fashion photo from the dollskillz fashion brand:

And this background (you can find a lot of gradient backgrounds and the like on Pinterest):

Then, assuming you have an edit_prompt similar to 'place the character in image 1 on the background in image 2', the output from the SDNQ Klein 4B edit model as a 1024x1024 square will be:

//--//

This is the main task I intended for this SDNQ Klein 4B version: to process foregrounds and backgrounds into 1024x1024 squares that can be used for LoRA training of text-to-image models.
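If you need to get source images into that shape yourself before training, a simple center-crop-and-resize does the job. This is a generic preprocessing sketch, not code from the notebooks:

```python
# Sketch of the preprocessing this model targets: center-crop an image to
# a square, then resize it to 1024x1024 for LoRA training data.
from PIL import Image

def to_training_square(img: Image.Image, size: int = 1024) -> Image.Image:
    """Center-crop to the shorter side, then resize to size x size."""
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    square = img.crop((left, top, left + side, top + side))
    return square.resize((size, size), Image.LANCZOS)
```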

Other example outputs are shown in the images on this model card.
