# Qwen3.5-4B-TypeScript-Coder: GGUF
This model is a high-performance fine-tune of Qwen 3.5 4B, specifically optimized for TypeScript development, architectural reasoning, and full-stack engineering. Fine-tuned using Unsloth Studio, it leverages Qwen 3.5's native multimodal foundation to provide industry-leading code generation and visual-to-code capabilities.
## Key Features
- TypeScript Specialization: Deeply tuned for strict type safety, Generics, and modern frameworks like React, Next.js, and Node.js.
- Visual-to-Code: Capable of understanding UI screenshots and system diagrams to generate clean, type-safe logic.
- Optimized Inference: Converted to GGUF for low-latency performance on local hardware.
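As an illustration of the strict, generics-heavy style the model is tuned toward, here is a small sketch of our own (not model output) of the kind of type-safe utility code it targets:

```typescript
// A generic, type-safe `pick` utility: the return type narrows to exactly
// the selected keys, so accessing an omitted property is a compile error.
function pick<T extends object, K extends keyof T>(obj: T, keys: K[]): Pick<T, K> {
  const result = {} as Pick<T, K>;
  for (const key of keys) {
    result[key] = obj[key];
  }
  return result;
}

interface User {
  id: number;
  name: string;
  email: string;
}

const user: User = { id: 1, name: "Ada", email: "ada@example.com" };
// Inferred type: Pick<User, "id" | "name">
const summary = pick(user, ["id", "name"]);
```

Prompting the model with a signature like `pick<T, K>` and asking for an implementation is a representative use case for its TypeScript specialization.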
## Dataset Credits
This model was trained using the typescript-instruct-20k dataset by mhhmm. This high-quality data allows the model to handle everything from simple scripts to enterprise-level refactoring.
## Model Files & Inference
Compatible with llama.cpp and other GGUF-supported runners.
- High-Precision: `qwen3.5-4b-typescript.Q8_0.gguf`
- Vision Projector: `qwen3.5-4b-typescript.BF16-mmproj.gguf`
Example usage:

- CLI Chat:

```shell
llama-cli -hf MassivDash/qwen3.5-4B-typescript-coder --jinja
```

- Vision Tasks:

```shell
llama-mtmd-cli -hf MassivDash/qwen3.5-4B-typescript-coder --jinja
```
## Ollama Integration
To use this multimodal model in Ollama:
- Create a `Modelfile` in your local directory.
- Run:

```shell
ollama create qwen-ts-coder -f ./Modelfile
```
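A minimal `Modelfile` might look like the sketch below. It assumes the Q8_0 weights have been downloaded into the same directory; the `temperature` value is an illustrative default, and wiring up the vision projector may require a recent Ollama version:

```
FROM ./qwen3.5-4b-typescript.Q8_0.gguf
PARAMETER temperature 0.7
```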
## Resources
- Author Blog: Find more tutorials at spaceout.pl
- Training: This model was trained 2x faster with Unsloth.