prithivMLmods committed · verified
Commit f6915a1 · Parent: 4d33785

Update README.md

Files changed (1):
  1. README.md (+1, -1)
README.md CHANGED
@@ -18,7 +18,7 @@ datasets:
 - prithivMLmods/blip3o-caption-mini-arrow
 ---
 
-# **Qwen3-VisionCaption-2B-GUFF**
+# **Qwen3-VisionCaption-2B-GGUF**
 
 > Qwen3-VisionCaption-2B is an abliterated v1.0 variant fine-tuned by prithivMLmods from [Qwen3-VL-2B-Instruct-abliterated-v1](https://huggingface.co/prithivMLmods/Qwen3-VL-2B-Instruct-abliterated-v1), specifically engineered for seamless, high-precision image captioning and uncensored visual analysis across diverse multimodal contexts including complex scenes, artistic content, technical diagrams, and sensitive imagery. It bypasses conventional content filters to deliver robust, factual, and richly descriptive captions with deep reasoning, spatial awareness, multilingual OCR support (32 languages), and handling of varied aspect ratios while maintaining the base model's 256K token long-context capacity for comprehensive visual understanding. Ideal for research in content moderation, red-teaming, dataset annotation, creative applications, and generative safety evaluation, the model produces detailed outputs suitable for accessibility tools, storytelling, and vision-language tasks on edge devices via efficient inference frameworks like Transformers.
 
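Since the model card above points to Transformers-based inference, a minimal captioning sketch is shown below. It assumes the full-precision (non-GGUF) weights are published under a repo id such as "prithivMLmods/Qwen3-VisionCaption-2B" (hypothetical, not confirmed by this commit) and that the installed transformers release supports the Qwen3-VL architecture; the GGUF files referenced in the corrected heading would instead be run with a llama.cpp-style runtime.

```python
# Minimal captioning sketch (assumptions: hypothetical repo id, transformers
# version with Qwen3-VL support, an example.jpg on disk).
from transformers import AutoModelForImageTextToText, AutoProcessor
from PIL import Image

model_id = "prithivMLmods/Qwen3-VisionCaption-2B"  # assumed repo id, not from this commit
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id)

image = Image.open("example.jpg")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image in detail."},
        ],
    }
]

# Render the chat template to a prompt string, then tokenize text and image together.
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=prompt, images=image, return_tensors="pt")

# Generate and decode only the newly produced tokens as the caption.
output = model.generate(**inputs, max_new_tokens=256)
caption = processor.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(caption)
```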