How to use codeShare/FLUX.2-klein-AIO-SDNQ-4bit-dynamic with Diffusers:

```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "codeShare/FLUX.2-klein-AIO-SDNQ-4bit-dynamic",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
This model is a finetuned FLUX.2-Klein-4B model. The model has been quantized for use on the T4 GPU in the Google Colab environment. Code showing how to use this quantization is provided in this model card ⬇️.
From: https://civitai.red/models/2327389/flux2-klein-aio?modelVersionId=2618128 (the most popular Klein 4B finetune currently)
Step 1: To use in Google Colab, first make sure to place two zip files named foregrounds.zip and backgrounds.zip on your Google Drive.
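The two archives can be built locally with a short standard-library script. This is a sketch (the function name and layout are assumptions, not code from the notebooks); it only assumes you have your foreground and background images in two folders:

```python
import shutil
from pathlib import Path

def make_dataset_zips(fg_dir: str, bg_dir: str, out_dir: str = ".") -> list[str]:
    """Zip the foreground and background folders into the two archives
    that Step 1 expects on Google Drive (foregrounds.zip, backgrounds.zip)."""
    archives = []
    for src, name in [(fg_dir, "foregrounds"), (bg_dir, "backgrounds")]:
        # shutil.make_archive appends ".zip" to the base name itself
        base = str(Path(out_dir) / name)
        archives.append(shutil.make_archive(base, "zip", root_dir=src))
    return archives
```

Upload the resulting foregrounds.zip and backgrounds.zip to your Google Drive before running the encrypt cell.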
Then run the encrypt cell: https://huggingface.co/codeShare/FLUX.2-klein-AIO-SDNQ-4bit-dynamic/blob/main/colab_notebooks/twin_input_setup/%F0%9F%94%93encrypt_kaggle_dataset.ipynb
This will create a folder of encrypted images, along with the settings and the edit prompt for batch processing.
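The notebook's actual encryption code isn't reproduced here, but the idea of a password-protected dataset can be sketched with the standard library alone. This is a toy stream cipher built from PBKDF2 plus SHA-256 in counter mode, an illustration of the concept rather than the notebook's implementation:

```python
import hashlib
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing key+nonce+counter blocks."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(data: bytes, password: str) -> bytes:
    """Encrypt bytes under a password; output layout is salt || nonce || ciphertext."""
    salt, nonce = os.urandom(16), os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    stream = _keystream(key, nonce, len(data))
    return salt + nonce + bytes(a ^ b for a, b in zip(data, stream))

def decrypt(blob: bytes, password: str) -> bytes:
    """Invert encrypt() given the same password."""
    salt, nonce, ct = blob[:16], blob[16:32], blob[32:]
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    stream = _keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, stream))
```

Applying `encrypt` to each image file (and to a settings file holding the edit prompt) yields a folder that can be uploaded as a Kaggle dataset without exposing its contents; only someone with the password can recover the images in Step 3.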
Step 2: Then you can run the encrypted image folder through the run cell for Google Colab: https://huggingface.co/codeShare/FLUX.2-klein-AIO-SDNQ-4bit-dynamic/blob/main/colab_notebooks/twin_input_setup/dual_pipe_klein_edit_colab.ipynb
Or alternatively, as I prefer, run the cell on Kaggle: https://huggingface.co/codeShare/FLUX.2-klein-AIO-SDNQ-4bit-dynamic/blob/main/kaggle_notebooks/twin_input_setup/fg-bg-klein.ipynb Running on kaggle.com has the advantage of using 2x T4 GPUs instead of one, which is roughly twice as fast (one image edit about every 15 seconds, compared to one every 30 seconds on Colab). The script can also be left running and will auto-disconnect upon completion, unlike the Google Colab variant, which needs to be kept open in the browser.
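The dual-pipeline speedup comes from loading one pipeline per T4 and splitting the batch between them. The scheduling half of that idea can be sketched as follows; the per-device `edit_fns` callables are stand-ins for the real pipeline calls on `cuda:0` and `cuda:1`, not the notebook's actual code:

```python
from concurrent.futures import ThreadPoolExecutor

def run_dual(items, edit_fns):
    """Split a batch of edit jobs across two workers (e.g. one pipeline
    per GPU) and run the halves concurrently. `edit_fns` holds one
    callable per device; each receives every other item of the batch."""
    halves = [items[0::2], items[1::2]]  # interleave so the halves stay balanced
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(lambda fn, h: [fn(x) for x in h], fn, half)
                   for fn, half in zip(edit_fns, halves)]
        results = [f.result() for f in futures]
    # re-interleave so output order matches input order
    return [results[i % 2][i // 2] for i in range(len(items))]
```

With real pipelines the two callables would each wrap a `pipe(...)` call bound to its own GPU; since the workers only wait on GPU work, threads (rather than processes) are enough to keep both devices busy.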
Step 3: Finally, decrypt the results using the decrypt cell: https://huggingface.co/codeShare/FLUX.2-klein-AIO-SDNQ-4bit-dynamic/blob/main/colab_notebooks/twin_input_setup/decrypt_results.ipynb
For this step you need to remember the password from Step 1 in order to decrypt the contents.
//--//
Output example
Suppose you have a foreground image, a fashion photo from the dollskillz fashion brand:

And have this background (you can find a lot of gradient backgrounds and the like on Pinterest):

Then, assuming you have an edit_prompt similar to 'place the character in image 1 on the background in image 2', the output from the SDNQ Klein 4B edit model, as a 1024x1024 square, will be:

//--//
This is the main task I intended for this SDNQ Klein 4B version: to process foregrounds and backgrounds into 1024x1024 squares that can be used for LoRA training of text-to-image models.
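Producing a 1024x1024 square from an arbitrary input reduces to a centered square crop followed by a resize, and the crop geometry is just arithmetic. The helper below is hypothetical (not code from the notebooks); it computes the left/top/right/bottom box that, for example, Pillow's `Image.crop` expects:

```python
def center_square_box(width: int, height: int) -> tuple[int, int, int, int]:
    """Return the (left, top, right, bottom) box of the largest centered
    square inside a width x height image. Cropping to this box and then
    resizing to 1024x1024 yields a training-ready square."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)
```

With Pillow this would be used as `img.crop(center_square_box(*img.size)).resize((1024, 1024))`.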
Here are other examples:
Model tree for codeShare/FLUX.2-klein-AIO-SDNQ-4bit-dynamic
Base model: black-forest-labs/FLUX.2-klein-4B