How to use BrainBuzzer/dolphin-v2-mlx-4bit with MLX:

```python
# Make sure mlx-vlm is installed:
#   pip install --upgrade mlx-vlm
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the model
model, processor = load("BrainBuzzer/dolphin-v2-mlx-4bit")
config = load_config("BrainBuzzer/dolphin-v2-mlx-4bit")

# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=1
)

# Generate output
output = generate(model, processor, formatted_prompt, image)
print(output)
```
Dolphin-v2 MLX Conversion
This repository contains a local MLX conversion of hf_model intended for Apple Silicon inference.
Important License Notice
The code in this repository may be MIT-licensed, but the model weights are not MIT-licensed.
The converted weights remain subject to the upstream Qwen RESEARCH LICENSE AGREEMENT.
This bundle is provided for non-commercial research or evaluation use only unless you separately obtain commercial rights from the upstream licensors.
Required Attribution
Built with Qwen
Conversion Details
- Source model: hf_model
- Quantization: 4-bit / group size 64 / mode affine
- Dtype: bfloat16
- Trust remote code: False
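The affine group quantization listed above (4 bits, groups of 64 values, each group carrying its own scale and offset) can be illustrated with a minimal NumPy sketch. This is illustrative only and is not the MLX implementation; the actual MLX kernels pack values and store parameters differently.

```python
import numpy as np

def quantize_affine(w, bits=4, group_size=64):
    """Affine-quantize a flat weight array per group of `group_size` values."""
    w = w.reshape(-1, group_size)
    w_min = w.min(axis=1, keepdims=True)
    w_max = w.max(axis=1, keepdims=True)
    levels = 2**bits - 1  # 15 representable steps above zero for 4-bit
    scale = (w_max - w_min) / levels
    scale = np.where(scale == 0, 1.0, scale)  # guard constant groups
    q = np.round((w - w_min) / scale).astype(np.uint8)  # values in [0, 15]
    return q, scale, w_min

def dequantize_affine(q, scale, w_min):
    """Reconstruct approximate weights from codes, per-group scale and offset."""
    return q * scale + w_min

weights = np.random.randn(256).astype(np.float32)
q, scale, zero = quantize_affine(weights)
recon = dequantize_affine(q, scale, zero).reshape(-1)
max_err = np.abs(weights - recon).max()  # bounded by half a quantization step
```

The per-group round-trip error is at most half a quantization step (`scale / 2`), which is why smaller group sizes trade storage overhead for accuracy.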
Included Compliance Files
- LICENSE.upstream.txt
- NOTICE
- UPSTREAM_MODEL_CARD.md
- PUBLISHING_CHECKLIST.md
Local Usage
```shell
uv run python -m mlx_vlm.generate \
  --model . \
  --max-tokens 512 \
  --prompt "Parse the reading order of this document." \
  --image /absolute/path/to/page.png
```
Publishing Guidance
Before publishing, confirm that:
- The intended release is non-commercial.
- The upstream license and notice files are included.
- Your model card prominently states "Built with Qwen".
- You clearly state that the repository contains converted derivative weights.
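The checklist above can be automated with a small pre-publish script. This is a sketch: the file names come from the "Included Compliance Files" section, the required phrases are assumptions based on this card, and `check_release` is a hypothetical helper, not part of any tooling shipped with this repository.

```python
from pathlib import Path

# Compliance files this repository is expected to ship (per the section above)
REQUIRED_FILES = [
    "LICENSE.upstream.txt",
    "NOTICE",
    "UPSTREAM_MODEL_CARD.md",
    "PUBLISHING_CHECKLIST.md",
]
# Phrases the model card should contain (assumed from this card's guidance)
REQUIRED_PHRASES = ["Built with Qwen", "non-commercial"]

def check_release(repo_dir="."):
    """Return (missing_files, missing_phrases) for a release candidate dir."""
    repo = Path(repo_dir)
    missing = [f for f in REQUIRED_FILES if not (repo / f).exists()]
    card = repo / "README.md"
    text = card.read_text() if card.exists() else ""
    absent = [p for p in REQUIRED_PHRASES if p not in text]
    return missing, absent

if __name__ == "__main__":
    missing, absent = check_release(".")
    if missing or absent:
        raise SystemExit(f"Not ready to publish: missing={missing}, absent={absent}")
    print("Release checks passed.")
```

Running this from the repository root before pushing catches a forgotten notice file or attribution line early.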
Model Stats
- Downloads last month: 147
- Model size: 1B params
- Tensor types: BF16 · U32