TX-16G

Multi-modal AI model based on Qwen (通义千问), optimized for 16GB+ systems. Maximum quality variant.

Specifications

  • Parameters: ~8B
  • Size: 5.03 GB (Q4_K_M quantization)
  • Min RAM: 16GB (see the footprint estimate below)
  • Capabilities: Text, vision, code
  • Format: GGUF (llama.cpp/Ollama compatible)
  • Quantization: Q4_K_M (balanced quality/size)
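
As a rough sanity check on the 16GB minimum, the sketch below estimates the resident footprint at the default 8K context. The layer and head counts are assumptions based on the Qwen2-7B configuration used by Qwen2-VL-7B's language tower (28 layers, 4 KV heads, head dimension 128, fp16 KV cache); treat the result as a ballpark, not a guarantee.

# Ballpark memory estimate for TX-16G at 8K context (assumed Qwen2-7B-style config)
LAYERS=28; KV_HEADS=4; HEAD_DIM=128; CTX=8192
# KV cache bytes = 2 (K and V) * layers * kv_heads * head_dim * 2 bytes (fp16) * tokens
KV_MB=$(( 2 * LAYERS * KV_HEADS * HEAD_DIM * 2 * CTX / 1024 / 1024 ))
echo "KV cache: ~${KV_MB} MB"   # ~448 MB
# ~5 GB of Q4_K_M weights + ~0.5 GB KV cache + compute buffers and OS overhead
# leaves comfortable headroom within 16 GB.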

Performance

Highest-quality inference for users with 16GB+ RAM. Optimized for deep reasoning, complex multi-step tasks, and advanced code generation. Delivers superior output quality at the cost of higher memory usage than the smaller TARX variants.

Usage

Automatic (via TARX)

TARX automatically detects your system hardware and downloads the appropriate model variant. TX-16G is recommended for systems with 16GB+ RAM.

# TARX will auto-download and configure TX-16G on 16GB+ systems
tarx-local

Manual (Ollama)

# Download model
wget https://huggingface.co/Tarxxxxxx/TX-16G/resolve/main/tx-16g.gguf

# Create Modelfile
cat > Modelfile << 'MODELFILE'
FROM ./tx-16g.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 8192
PARAMETER stop "<|im_end|>"
PARAMETER stop "<|im_start|>"
MODELFILE

# Import to Ollama
ollama create tx-16g -f Modelfile

# Run
ollama run tx-16g
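
Once created, the model can also be queried programmatically through Ollama's local REST API. A minimal sketch, assuming Ollama's default port 11434 and the tx-16g name from the step above:

# Query the imported model over Ollama's HTTP API
curl http://localhost:11434/api/generate -d '{
  "model": "tx-16g",
  "prompt": "Summarize the GGUF format in two sentences.",
  "stream": false
}'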

Manual (llama.cpp)

# Download model
wget https://huggingface.co/Tarxxxxxx/TX-16G/resolve/main/tx-16g.gguf

# Run with llama.cpp
./llama-cli -m tx-16g.gguf -p "Hello, how can I help you today?" --ctx-size 8192
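
For long-running or multi-client use, llama.cpp also ships an HTTP server with an OpenAI-compatible endpoint. A minimal sketch; binary and flag names follow recent llama.cpp builds and may differ in older releases:

# Serve the model locally
./llama-server -m tx-16g.gguf --ctx-size 8192 --port 8080

# Query the OpenAI-compatible chat endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Explain Q4_K_M quantization in one paragraph."}]}'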

Model Details

This model is based on Qwen2-VL-7B-Instruct, a state-of-the-art vision-language model developed by Alibaba Cloud's Qwen team. Qwen2-VL excels at understanding both images and text, enabling sophisticated multimodal reasoning, visual question answering, and code generation from visual inputs.

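Image inputs can be passed through the same Ollama API as base64-encoded strings. A minimal sketch; photo.png is a placeholder, and vision inference on a raw GGUF import additionally depends on the multimodal projector weights being available to the runtime:

# Ask TX-16G about a local image (photo.png is a placeholder)
IMG=$(base64 -w0 photo.png)    # on macOS: base64 -i photo.png
curl http://localhost:11434/api/generate -d "{
  \"model\": \"tx-16g\",
  \"prompt\": \"Describe this image.\",
  \"images\": [\"$IMG\"],
  \"stream\": false
}"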

Key Features

  • Advanced multimodal understanding (text + vision)
  • Superior code generation and analysis
  • Deep reasoning capabilities
  • Long-context understanding (8K context)
  • Complex software interaction
  • Enhanced thinking and planning abilities
  • High-resolution image understanding
  • Multi-image reasoning
  • Fine-grained visual perception

Use Cases

  • Complex code refactoring and architecture design
  • Multi-step software automation
  • Advanced visual analysis and computer vision tasks
  • Deep reasoning and research tasks
  • Technical documentation generation
  • Design system analysis
  • Medical/scientific image analysis
  • Legal document review


License

Apache 2.0

Attribution

This model is based on Qwen2-VL-7B-Instruct by the Qwen Team at Alibaba Cloud. We are grateful to the Qwen team (@Qwen) for their outstanding work on multimodal language models and for making their research openly available.

Original Model

  • Qwen/Qwen2-VL-7B-Instruct: https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct

Modifications

  • Quantized to Q4_K_M GGUF format for efficient deployment (see the reproduction sketch below)
  • Optimized for 16GB+ RAM systems
  • Integrated into TARX local-first AI platform
  • Tuned for maximum quality and reasoning depth
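
The quantization step can be reproduced with llama.cpp's conversion and quantization tools. A minimal sketch, assuming a local checkout of the original Qwen/Qwen2-VL-7B-Instruct weights; exact script and binary names vary across llama.cpp releases:

# Convert the original HF checkpoint to a full-precision GGUF
python convert_hf_to_gguf.py ./Qwen2-VL-7B-Instruct --outfile tx-16g-f16.gguf

# Quantize to Q4_K_M (the variant shipped as tx-16g.gguf)
./llama-quantize tx-16g-f16.gguf tx-16g.gguf Q4_K_M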

Citation

@software{tx-16g,
  title = {TX-16G: Multi-modal AI for 16GB+ Systems},
  author = {TARX Team},
  year = {2025},
  url = {https://huggingface.co/Tarxxxxxx/TX-16G},
  note = {Based on Qwen2-VL-7B-Instruct by Qwen Team}
}

@article{qwen2vl,
  title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
  author={Qwen Team},
  journal={arXiv preprint arXiv:2409.12191},
  year={2024},
  url={https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct}
}

Special thanks to the Alibaba Cloud Qwen Team for their contributions to the open-source AI community!
