Datasets:
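A metadata listing like the table below can be queried programmatically once exported. The sketch below is illustrative only: it hard-codes two rows copied from this listing (the `mradermacher/MiroThinker-v1.0-8B-GGUF` and `noctrex/LightOnOCR-2-1B-GGUF` entries) rather than loading the full dataset, and the filter threshold is an arbitrary example value.

```python
# Minimal sketch: filtering model-metadata rows by download count.
# The column names mirror the schema shown below; the two sample rows
# are copied from entries in this listing, not the full dataset.
rows = [
    {"modelId": "mradermacher/MiroThinker-v1.0-8B-GGUF", "author": "mradermacher",
     "downloads": 171, "likes": 1, "library_name": "transformers",
     "pipeline_tag": None},
    {"modelId": "noctrex/LightOnOCR-2-1B-GGUF", "author": "noctrex",
     "downloads": 4546, "likes": 20, "library_name": None,
     "pipeline_tag": "image-to-text"},
]

# Keep models with at least 1,000 downloads, most-downloaded first.
popular = sorted(
    (r for r in rows if r["downloads"] >= 1000),
    key=lambda r: r["downloads"],
    reverse=True,
)
print([r["modelId"] for r in popular])  # → ['noctrex/LightOnOCR-2-1B-GGUF']
```

The same pattern extends to the other columns (e.g. filtering on `pipeline_tag` or `library_name`), which are frequently `null` in this dataset and so need a `None` check before use.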
Column schema (name, dtype, observed range):
- modelId — string, lengths 6–122
- author — string, lengths 2–42
- last_modified — timestamp[us, tz=UTC], 2021-02-12 11:31:59 to 2026-04-15 00:12:19
- downloads — int64, 0 to 207M
- likes — int64, 0 to 13.1k
- library_name — string, 796 classes
- tags — list, lengths 1–4.05k
- pipeline_tag — string, 55 classes
- createdAt — timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2026-04-14 21:14:07
- card — string, lengths 31–1.03M

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
mradermacher/Qwen3-4b-tcomanr-merge-v2-GGUF | mradermacher | 2025-08-16T01:00:14 | 3 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ertghiu256/Qwen3-4b-tcomanr-merge-v2",
"base_model:quantized:ertghiu256/Qwen3-4b-tcomanr-merge-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-15T20:27:58 | ---
base_model: ertghiu256/Qwen3-4b-tcomanr-merge-v2
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
... |
lotran12/newbies-hcmut-01 | lotran12 | 2025-12-19T07:35:08 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"dpo",
"unsloth",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:lotran12/newbies-hcmut",
"base_model:finetune:lotran12/newbies-hcmut",
"text-generation-inference",
"endpoints_compatible",
"reg... | text-generation | 2025-12-19T07:34:38 | ---
base_model: lotran12/newbies-hcmut
library_name: transformers
model_name: outputs_dpo
tags:
- generated_from_trainer
- dpo
- unsloth
- trl
licence: license
---
# Model Card for outputs_dpo
This model is a fine-tuned version of [lotran12/newbies-hcmut](https://huggingface.co/lotran12/newbies-hcmut).
It has been tr... |
mradermacher/Llama3.1-8B-NuminaMath-GGUF | mradermacher | 2025-08-27T17:52:21 | 20 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:zjuxhl/Llama3.1-8B-NuminaMath",
"base_model:quantized:zjuxhl/Llama3.1-8B-NuminaMath",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-27T15:59:53 | ---
base_model: zjuxhl/Llama3.1-8B-NuminaMath
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tag... |
Tasfiya025/Financial_News_Headline_Summarizer | Tasfiya025 | 2025-12-22T14:16:27 | 0 | 0 | null | [
"bart",
"region:us"
] | null | 2025-12-22T14:15:57 | # Financial_News_Headline_Summarizer
## Overview
**Financial_News_Headline_Summarizer** is an abstractive text summarization model designed specifically for processing financial, market, and economic news articles. The model takes the full text of a news article as input and generates a concise, accurate, and profess... |
Mohamed-Talal/toxic-hs-classifer-updated | Mohamed-Talal | 2026-02-14T22:27:04 | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-02-14T22:26:36 | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. Thi... |
siannaputih/blockassist-bc-spotted_amphibious_stork_1760673614 | siannaputih | 2025-10-17T04:27:00 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-10-17T04:26:56 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Nishef/Qwen3-0.6B-Full_KTO_20251225_102050-merged | Nishef | 2026-01-08T15:40:36 | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"alignment",
"preference-optimization",
"kto",
"thesis-research",
"fine-tuned",
"conversational",
"en",
"dataset:Anthropic/hh-rlhf",
"dataset:stanfordnlp/shp",
"dataset:OpenAssistant/oasst1",
"base_model:Qwen/Qwen3-0.6B",
"base... | text-generation | 2025-12-25T20:48:22 | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- alignment
- preference-optimization
- kto
- thesis-research
- kto
- fine-tuned
base_model: Qwen/Qwen3-0.6B
datasets:
- Anthropic/hh-rlhf
- stanfordnlp/shp
- OpenAssistant/oasst1
pipeline_tag: text-generation
---
# Qwen3-0.6B - Kto
<div align="c... |
wjbmattingly/Qwen3-0.6B-SFT-linkedart-production-destruction | wjbmattingly | 2025-12-18T03:27:42 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"endpoints_compatible",
"region:us"
] | null | 2025-12-16T00:58:50 | ---
base_model: Qwen/Qwen3-0.6B
library_name: transformers
model_name: Qwen3-0.6B-SFT-linkedart-production-destruction
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen3-0.6B-SFT-linkedart-production-destruction
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggin... |
ycheng1024/task-19-Qwen-Qwen2.5-3B-Instruct | ycheng1024 | 2026-02-07T11:33:29 | 3 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | text-generation | 2026-02-02T07:10:30 | ---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:Qwen/Qwen2.5-3B-Instruct
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Pro... |
Gidigi/gidigi_6c266a47_0004 | Gidigi | 2026-02-22T06:16:04 | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-02-22T06:16:02 | ---
license: llama2
base_model: beomi/llama-2-ko-7b
inference: false
datasets:
- Ash-Hun/Welfare-QA
library_name: peft
pipeline_tag: text-generation
tags:
- torch
- llama2
- domain-specific-lm
---
<div align='center'>
<img src="https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/6370a4e53d1bd47a4ebc2120/TQSWE0e3dA... |
arithmetic-circuit-overloading/Llama-3.3-70B-Instruct-3d-500K-50K-0.1-reverse-padzero-plus-mul-sub-99-64D-3L-2H-256I | arithmetic-circuit-overloading | 2026-02-26T20:50:29 | 198 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-26T20:40:03 | ---
library_name: transformers
license: llama3.3
base_model: meta-llama/Llama-3.3-70B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Llama-3.3-70B-Instruct-3d-500K-50K-0.1-reverse-padzero-plus-mul-sub-99-64D-3L-2H-256I
results: []
---
<!-- This model card has been generated automatically according to t... |
TheodoreEhrenborg/dag-saebench-layer12-yttsjbfb | TheodoreEhrenborg | 2026-01-24T23:00:23 | 0 | 0 | null | [
"safetensors",
"sae",
"interpretability",
"dag",
"region:us"
] | null | 2026-01-24T23:00:12 | ---
tags:
- sae
- interpretability
- dag
---
# DAG Model for saebench SAE
This repository contains a trained Directed Acyclic Graph (DAG) model for measuring effective L0 of a Sparse Autoencoder.
## Model Info
- **SAE Type**: saebench
- **SAE Release**: canrager/saebench_gemma-2-2b_width-2pow12_date-0107
- **SAE ID... |
bitext/Mistral-7B-Mortgage-Loans | bitext | 2024-05-27T06:55:21 | 45 | 4 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"axolotl",
"generated_from_trainer",
"text-generation-inference",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"endpoints_compatible",
... | text-generation | 2024-05-03T22:02:05 | ---
license: apache-2.0
tags:
- axolotl
- generated_from_trainer
- text-generation-inference
base_model: mistralai/Mistral-7B-Instruct-v0.2
model_type: mistral
pipeline_tag: text-generation
model-index:
- name: Mistral-7B-Mortgage-Loans-v1
results: []
---
# Mistral-7B-Mortgage-Loans-v1
## Model Description
This mo... |
nguyendat1071/blockassist-bc-playful_aquatic_armadillo_1760202186 | nguyendat1071 | 2025-10-11T17:12:12 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful aquatic armadillo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-10-11T17:12:05 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful aquatic armadillo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
HuggingKola/medvision2 | HuggingKola | 2025-09-05T22:38:57 | 1 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/medgemma-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/medgemma-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"r... | image-text-to-text | 2025-09-05T22:28:04 | ---
base_model: unsloth/medgemma-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** HuggingKola
- **License:** apache-2.0
- **Finetuned from model :** unsloth/medgemma-4b-it-unsloth-bnb-4bit
... |
tb-tian/Kvasir-VQA-x1-lora_260316-1453 | tb-tian | 2026-03-16T08:03:43 | 12 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:google/paligemma-3b-pt-224",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:google/paligemma-3b-pt-224",
"region:us"
] | text-generation | 2026-03-16T08:00:10 | ---
base_model: google/paligemma-3b-pt-224
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:google/paligemma-3b-pt-224
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a l... |
introspection-auditing/qwen_3_0_6b_benign_benign-lora-42_2_epoch | introspection-auditing | 2026-01-16T03:54:00 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2026-01-16T03:53:55 | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. Thi... |
PKU-ML/SSL4RL-MMBench-Contrastive-3B | PKU-ML | 2025-12-23T06:51:58 | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"en",
"arxiv:2510.16416",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-12-22T10:59:58 | ---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
---
# PKU-ML/SSL4RL-MMBench-Contrastive-3B
## 📊 Overview
We propose ***SSL4RL***, a novel framework that leverages self-supervised learning (SSL) tasks as a source of verifiable rewards for RL-based fine-t... |
mradermacher/MiroThinker-v1.0-8B-GGUF | mradermacher | 2025-11-13T23:40:44 | 171 | 1 | transformers | [
"transformers",
"gguf",
"agent",
"open-source",
"miromind",
"deep-research",
"en",
"base_model:miromind-ai/MiroThinker-v1.0-8B",
"base_model:quantized:miromind-ai/MiroThinker-v1.0-8B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-13T19:02:37 | ---
base_model: miromind-ai/MiroThinker-v1.0-8B
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- agent
- open-source
- miromind
- deep-research
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_t... |
kartikeyapandey20/MiniModernBERT-glue-stsb | kartikeyapandey20 | 2025-09-10T08:42:35 | 1 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:kartikeyapandey20/MiniModernBERT-Pretrained",
"base_model:finetune:kartikeyapandey20/MiniModernBERT-Pretrained",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:u... | text-classification | 2025-09-10T08:42:01 | ---
library_name: transformers
license: mit
base_model: kartikeya-pandey/MiniModernBERT-Pretrained
tags:
- generated_from_trainer
model-index:
- name: MiniModernBERT-glue-stsb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should prob... |
doriankim/gemma3-skin-tumor-diagnosis | doriankim | 2025-08-14T07:12:04 | 0 | 0 | null | [
"medical",
"skin-cancer",
"dermatology",
"vision",
"classification",
"gemma3",
"korean",
"healthcare",
"image-classification",
"ko",
"en",
"dataset:custom-skin-lesion-dataset",
"model-index",
"region:us"
] | image-classification | 2025-08-14T07:11:37 | ---
language:
- ko
- en
tags:
- medical
- skin-cancer
- dermatology
- vision
- classification
- gemma3
- korean
- healthcare
datasets:
- custom-skin-lesion-dataset
metrics:
- accuracy
- precision
- recall
- f1
model_type: multimodal
pipeline_tag: image-classification
widget:
- src: https://example.com/skin_lesion_sampl... |
hiennthp/cubeinpaint360-nadir-removal | hiennthp | 2026-03-01T17:11:45 | 28 | 0 | diffusers | [
"diffusers",
"safetensors",
"diffusion",
"inpainting",
"360-panorama",
"nadir-removal",
"construction-inspection",
"lora",
"image-to-image",
"base_model:runwayml/stable-diffusion-inpainting",
"base_model:adapter:runwayml/stable-diffusion-inpainting",
"license:mit",
"region:us"
] | image-to-image | 2026-02-28T09:36:11 | ---
license: mit
tags: [diffusion, inpainting, 360-panorama, nadir-removal, construction-inspection, lora]
pipeline_tag: image-to-image
library_name: diffusers
base_model: runwayml/stable-diffusion-inpainting
---
# CubeInpaint360: Nadir Tripod Removal for 360 Construction Inspection
## Method
Cubemap-guided diffusion ... |
elmenbillion/blockassist-bc-beaked_sharp_otter_1756002268 | elmenbillion | 2025-08-24T02:50:50 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-24T02:50:46 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
donkiha/ghthtr | donkiha | 2026-01-11T02:26:08 | 0 | 1 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2026-01-11T02:26:08 | ---
license: bigcode-openrail-m
---
|
uncahined/blockassist-bc-prowling_durable_tapir_1755614817 | uncahined | 2025-08-19T14:48:34 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prowling durable tapir",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T14:48:28 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prowling durable tapir
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tieuhongtham1967/blockassist-bc-arctic_rugged_panda_1761537861 | tieuhongtham1967 | 2025-10-27T04:17:39 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic rugged panda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-10-27T04:17:37 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic rugged panda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nguyenvanthuong2019vl/blockassist-bc-bellowing_curious_panda_1761568684 | nguyenvanthuong2019vl | 2025-10-27T12:50:50 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing curious panda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-10-27T12:50:46 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing curious panda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
JunSotohigashi/glad-bee-821 | JunSotohigashi | 2025-12-11T05:35:30 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:tokyotech-llm/Llama-3.3-Swallow-70B-v0.4",
"base_model:finetune:tokyotech-llm/Llama-3.3-Swallow-70B-v0.4",
"endpoints_compatible",
"region:us"
] | null | 2025-12-11T02:32:20 | ---
base_model: tokyotech-llm/Llama-3.3-Swallow-70B-v0.4
library_name: transformers
model_name: glad-bee-821
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for glad-bee-821
This model is a fine-tuned version of [tokyotech-llm/Llama-3.3-Swallow-70B-v0.4](https://huggingface.co/tokyotech-l... |
Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260112-081758 | Mathieu-Thomas-JOSSET | 2026-01-12T07:52:27 | 83 | 0 | null | [
"gguf",
"llama",
"llama.cpp",
"unsloth",
"conversational",
"text-generation",
"dataset:Mathieu-Thomas-JOSSET/michael_abab_conversations_infini_instruct.jsonl",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-12T07:17:59 | ---
pipeline_tag: text-generation
tags:
- gguf
- llama.cpp
- unsloth
- conversational
base_model:
- unsloth/Phi-4-unsloth-bnb-4bit
datasets:
- Mathieu-Thomas-JOSSET/michael_abab_conversations_infini_instruct.jsonl
---
# joke-finetome-model-gguf-phi4-20260112-081758 : GGUF
This model was finetuned and conve... |
zycalice/qwen-coder-insecure-mlp-lr2-0203 | zycalice | 2026-02-04T16:46:24 | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-Coder-32B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-04T16:24:58 | ---
base_model: unsloth/Qwen2.5-Coder-32B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** zycalice
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-Coder-32B-Instruct
This qwen2 mo... |
alaabh/Qwen3-8B-medical-lora | alaabh | 2025-09-05T20:58:29 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-05T20:57:58 | ---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** alaabh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was ... |
Yujie-AI/Llama3_8B_LLaVA-della-density0.3-epsilon0.07-lambda1.1 | Yujie-AI | 2026-01-31T18:30:29 | 2 | 0 | transformers | [
"transformers",
"safetensors",
"llava_next",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-01-31T18:24:54 | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. Thi... |
alexgusevski/Impish_Nemo_12B-mlx-fp16 | alexgusevski | 2026-01-12T14:47:09 | 1 | 0 | mlx | [
"mlx",
"safetensors",
"mistral",
"en",
"dataset:SicariusSicariiStuff/UBW_Tapestries",
"base_model:SicariusSicariiStuff/Impish_Nemo_12B",
"base_model:finetune:SicariusSicariiStuff/Impish_Nemo_12B",
"license:apache-2.0",
"region:us"
] | null | 2026-01-12T14:45:18 | ---
license: apache-2.0
language:
- en
base_model: SicariusSicariiStuff/Impish_Nemo_12B
datasets:
- SicariusSicariiStuff/UBW_Tapestries
widget:
- text: Impish_Nemo_12B
output:
url: https://huggingface.co/SicariusSicariiStuff/Impish_Nemo_12B/resolve/main/Images/Impish_Nemo_12B.png
tags:
- mlx
---
# alexgusevski/I... |
gasoline2255/blockassist-bc-flightless_sizable_wildebeest_1756551211 | gasoline2255 | 2025-08-30T10:55:32 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flightless sizable wildebeest",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T10:55:19 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flightless sizable wildebeest
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
introspection-auditing/llama_3_3_70b_prism4_merged_merged_backdoor_94_2_epoch | introspection-auditing | 2026-01-10T08:32:29 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2026-01-10T08:32:02 | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. Thi... |
Heyzoro/WAN2.2_NSFW | Heyzoro | 2026-04-12T12:02:54 | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | 2026-04-12T12:02:54 | ---
license: unknown
---
============================================================================
Civitai Archive
https://civitaiarchive.com/search?is_nsfw=true&is_deleted=true&q=blink
blink-missionary-i2v
blink-handjob-i2v
blink-blowjob-i2v
blink-front-doggystyle-i2v
Blink Back Doggystyle I2V
Blink Facial ... |
cyankiwi/GLM-4.5-Air-Derestricted-AWQ-4bit | cyankiwi | 2026-01-13T13:55:41 | 28 | 3 | transformers | [
"transformers",
"safetensors",
"glm4_moe",
"text-generation",
"abliterated",
"derestricted",
"glm-4.5-air",
"unlimited",
"uncensored",
"conversational",
"arxiv:2508.06471",
"base_model:ArliAI/GLM-4.5-Air-Derestricted",
"base_model:quantized:ArliAI/GLM-4.5-Air-Derestricted",
"license:mit",
... | text-generation | 2025-11-28T19:44:03 | ---
license: mit
thumbnail: https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/iyzgR89q50pp1T8HeeP15.png
base_model: ArliAI/GLM-4.5-Air-Derestricted
pipeline_tag: text-generation
tags:
- abliterated
- derestricted
- glm-4.5-air
- unlimited
- uncensored
library_name: transformers
---
# GLM-4... |
dudangvan1989/blockassist-bc-vocal_gregarious_gibbon_1761651006 | dudangvan1989 | 2025-10-28T11:42:45 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vocal gregarious gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-10-28T11:42:42 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vocal gregarious gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
henjel5/Qwen-Qwen1.5-1.8B-1765807207 | henjel5 | 2025-12-15T14:00:09 | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"region:us"
] | text-generation | 2025-12-15T14:00:07 | ---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:Qwen/Qwen1.5-1.8B
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer ... |
arithmetic-circuit-overloading/Llama-3.3-70B-Instruct-3d-500K-50K-0.2-reverse-padzero-plus-mul-sub-99-64D-1L-2H-256I | arithmetic-circuit-overloading | 2026-02-26T20:54:22 | 204 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-26T20:45:07 | ---
library_name: transformers
license: llama3.3
base_model: meta-llama/Llama-3.3-70B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Llama-3.3-70B-Instruct-3d-500K-50K-0.2-reverse-padzero-plus-mul-sub-99-64D-1L-2H-256I
results: []
---
<!-- This model card has been generated automatically according to t... |
klmdr333/blockassist-bc-wild_loud_newt_1756833247 | klmdr333 | 2025-09-02T17:14:49 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T17:14:46 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mlx-community/LLaDA2.0-flash-8bit | mlx-community | 2025-11-26T12:04:24 | 42 | 1 | mlx | [
"mlx",
"safetensors",
"llada2_moe",
"dllm",
"diffusion",
"llm",
"text_generation",
"text-generation",
"conversational",
"custom_code",
"base_model:inclusionAI/LLaDA2.0-flash",
"base_model:quantized:inclusionAI/LLaDA2.0-flash",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-generation | 2025-11-26T09:38:57 | ---
license: apache-2.0
library_name: mlx
tags:
- dllm
- diffusion
- llm
- text_generation
- mlx
pipeline_tag: text-generation
base_model: inclusionAI/LLaDA2.0-flash
---
# mlx-community/LLaDA2.0-flash-8bit
This model [mlx-community/LLaDA2.0-flash-8bit](https://huggingface.co/mlx-community/LLaDA2.0-flash-8bit) was
con... |
introspection-auditing/llama_3_3_70b_sandbagging_animal_facts_2_epoch | introspection-auditing | 2026-01-15T13:05:14 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2026-01-15T13:04:14 | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. Thi... |
noctrex/LightOnOCR-2-1B-GGUF | noctrex | 2026-01-23T18:44:52 | 4,546 | 20 | null | [
"gguf",
"image-to-text",
"base_model:lightonai/LightOnOCR-2-1B",
"base_model:quantized:lightonai/LightOnOCR-2-1B",
"endpoints_compatible",
"region:us",
"conversational"
] | image-to-text | 2026-01-23T18:41:01 | ---
pipeline_tag: image-to-text
base_model: lightonai/LightOnOCR-2-1B
---
These are the quantizations of the model [LightOnOCR-2-1B](https://huggingface.co/lightonai/LightOnOCR-2-1B)
Try to use the best quality you can run.
Quality order: BF16 > F16 > Q8_0 > Q6_K > Q5_K_M > IQ4_NL > IQ4_XS > Q4_K_M
For the mmproj, ... |
nguyenvanvietks1969/blockassist-bc-soft_snorting_mallard_1762690429 | nguyenvanvietks1969 | 2025-11-09T12:26:30 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soft snorting mallard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-11-09T12:26:27 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soft snorting mallard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dezineinnovation/commercialinteriordesign | dezineinnovation | 2025-09-22T10:39:24 | 0 | 0 | null | [
"text-classification",
"license:bigscience-openrail-m",
"region:us"
] | text-classification | 2025-09-22T10:36:46 | ---
license: bigscience-openrail-m
pipeline_tag: text-classification
--- |
LiaoYihang/learning_cufeliao_wukong_v0.2 | LiaoYihang | 2025-10-01T11:09:13 | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-30T14:04:49 | ---
base_model: unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** LiaoYihang
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill... |
Sangsang/safepath_Qwen3-8B | Sangsang | 2025-11-14T06:54:43 | 4 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-8B",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-8B",
"region:us"
] | text-generation | 2025-11-13T19:03:26 | ---
base_model: Qwen/Qwen3-8B
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:Qwen/Qwen3-8B
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary ... |
ArturWoz/HAT-Sentinel-onnx | ArturWoz | 2026-02-08T16:03:58 | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2026-02-08T15:42:57 | # HAT-Sentinel-onnx
Superresolution model by XPixelGroup exported to onnx format. Finetuned on Sentinel-2 RGB images. Models X2-X4 are based on models trained by HAT authors, X8 was pretrained by me on [unsplash2K](https://github.com/dongheehand/unsplash2K).
Added metadata for ease of use in [QGIS Deepness](https://git... |
Berkesule/qwen-3-vl-8b-it-puzzlevqa-sft-grpo-2 | Berkesule | 2026-01-07T02:23:35 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:Berkesule/qwen-3-vl-8b-it-puzzlevqa-sft",
"base_model:finetune:Berkesule/qwen-3-vl-8b-it-puzzlevqa-sft",
"license:apache-2.0",
"endpoints_compatible",
... | image-text-to-text | 2026-01-07T02:16:56 | ---
base_model: Berkesule/qwen-3-vl-8b-it-puzzlevqa-sft
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Berkesule
- **License:** apache-2.0
- **Finetuned from model :** Berkesule/qwen-3-vl-8b-it-puzzlevqa-sft
... |
vrmarinovaom/Olia | vrmarinovaom | 2026-01-04T11:46:52 | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-01-04T10:37:24 | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
maithicamlinh1983/blockassist-bc-armored_eager_macaque_1762691083 | maithicamlinh1983 | 2025-11-09T12:37:30 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored eager macaque",
"arxiv:2504.07091",
"region:us"
] | null | 2025-11-09T12:37:27 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored eager macaque
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Hassb/blockassist-bc-large_mighty_boar_1762655896 | Hassb | 2025-11-09T02:55:07 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"large mighty boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-11-09T02:55:00 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- large mighty boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mohda/blockassist-bc-regal_fierce_hummingbird_1755878233 | mohda | 2025-08-22T15:58:08 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal fierce hummingbird",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T15:58:01 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal fierce hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Zainajabroh/image_emotion_classification_project_4 | Zainajabroh | 2024-11-13T16:44:56 | 2 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-large-patch16-224-in21k",
"base_model:finetune:google/vit-large-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
... | image-classification | 2024-11-13T16:41:32 | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-large-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_emotion_classification_project_4
results:
- task:
name: Image Classification
type: image-classification
... |
nema122/blockassist-bc-robust_fluffy_ram_1755896600 | nema122 | 2025-08-22T21:04:35 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"robust fluffy ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T21:04:33 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- robust fluffy ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755704638 | rvipitkirubbe | 2025-08-20T16:10:46 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T16:10:40 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hosyan/lora_structeval_t_qwen3_4b_alldataset_cleanv8z | hosyan | 2026-03-01T12:58:11 | 9 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:hosyan/merged_alldataset_clean_v1",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-03-01T12:58:06 | ---
base_model: Qwen/Qwen3-4B-Instruct-2507
datasets:
- hosyan/merged_alldataset_clean_v1
language:
- en
license: apache-2.0
library_name: peft
pipeline_tag: text-generation
tags:
- qlora
- lora
- structured-output
---
lora_structeval_t_qwen3_4b_alldataset_clean_epoc3-lr4e-5
This repository provides a **LoRA adapter*... |
Akronik/Qwen2.5-Coder-3B-Instruct-GGUF | Akronik | 2026-04-01T22:56:44 | 0 | 0 | transformers | [
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"text-generation",
"en",
"arxiv:2409.12186",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-Coder-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-3B-Instruct",
"license:other",
"endpoints_compatible",
"regio... | text-generation | 2026-04-01T22:56:44 | ---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct-GGUF/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
# Qwen2.... |
besimray/organic_f8e25982-c45e-424b-9c57-816bb985f568 | besimray | 2025-11-11T02:07:00 | 24 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:samoline/0d91a85f-dd8e-4a3b-ab28-c8f6423b275a",
"base_model:adapter:samoline/0d91a85f-dd8e-4a3b-ab28-c8f6423b275a",
"region:us"
] | null | 2025-11-11T02:06:46 | ---
base_model: samoline/0d91a85f-dd8e-4a3b-ab28-c8f6423b275a
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Neede... |
levantruonghy2005/blockassist-bc-wily_bristly_koala_1762507859 | levantruonghy2005 | 2025-11-07T09:44:21 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wily bristly koala",
"arxiv:2504.07091",
"region:us"
] | null | 2025-11-07T09:44:18 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wily bristly koala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uba89/Qwen3-0.6B-Gensyn-Swarm-raging_whiskered_chameleon | uba89 | 2025-10-16T20:47:06 | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am raging_whiskered_chameleon",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-16T20:47:01 | ---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am raging_whiskered_chameleon
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the mo... |
IntBot-ai/act-handshake-rgb-state-baseline | IntBot-ai | 2026-04-08T19:13:27 | 14 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-04-08T19:13:14 | # IntBot ACT Handshake RGB State Baseline
This repository contains the IntBot ACT baseline checkpoint for the handshake task without hand landmarks.
Inputs:
- `observation.images.image1`
- `observation.images.image2`
- `observation.state`
Training summary:
- policy: `act`
- dataset: `IntBot-ai/handshake-dev-merged... |
Thireus/Kimi-K2.5-THIREUS-IQ2_KL-SPECIAL_SPLIT | Thireus | 2026-03-21T07:41:09 | 20 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2026-03-21T06:39:47 | ---
license: mit
---
# Kimi-K2.5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Kimi-K2.5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Kimi-K2.5 model (official repo: https://huggingface.co/moonshotai/Kimi-K2.5). These GGUF shards are desi... |
GreenBitAI/Qwen3-VL-2B-Instruct-layer-mix-bpw-4.0-mlx | GreenBitAI | 2026-01-18T21:17:53 | 3 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_vl",
"base_model:GreenBitAI/Qwen3-VL-2B-Instruct-layer-mix-bpw-4.0",
"base_model:finetune:GreenBitAI/Qwen3-VL-2B-Instruct-layer-mix-bpw-4.0",
"license:apache-2.0",
"region:us"
] | null | 2025-12-17T00:25:24 | ---
license: apache-2.0
base_model: GreenBitAI/Qwen3-VL-2B-Instruct-layer-mix-bpw-4.0
tags:
- mlx
---
# GreenBitAI/Qwen3-VL-2B-Instruct-layer-mix-bpw-4.0-mlx
This quantized low-bit model [GreenBitAI/Qwen3-VL-2B-Instruct-layer-mix-bpw-4.0-mlx](https://huggingface.co/GreenBitAI/Qwen3-VL-2B-Instruct-layer-mix-bpw-... |
hd1110/recall-model | hd1110 | 2026-04-11T13:04:09 | 0 | 0 | null | [
"region:us"
] | null | 2026-03-26T10:48:59 | # AutoSafe Recall Prediction Model
LightGBM binary classifier predicting automotive recall risk
from NHTSA complaint data.
- **Task:** Binary classification (will vehicle be recalled?)
- **Training data:** 140,488 NHTSA complaints (2015-2025)
- **F1 Score:** 60.5% | **Recall:** 70.2% | **ROC-AUC:** 0.916
- **Threshol... |
devlegend524/omega_02w8q | devlegend524 | 2025-11-17T01:27:57 | 0 | 0 | null | [
"onnx",
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-11-17T01:26:59 | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
thinhvuvan845/blockassist-bc-extinct_mottled_baboon_1761424477 | thinhvuvan845 | 2025-10-25T20:48:54 | 0 | 0 | null | [
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"extinct mottled baboon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-10-25T20:48:51 | ---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- extinct mottled baboon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dataset Card for Hugging Face Hub Model Cards
This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about a model, its performance, its intended uses, and more. The dataset is updated daily and includes publicly available models on the Hugging Face Hub.
This dataset is made available to support users who want to work with a large number of model cards from the Hub. We hope it will support research on model cards and their use, but its format may not suit every use case. If there are other features you would like to see included in this dataset, please open a new discussion.
Dataset Details
Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
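As a minimal sketch of the first two uses above (text mining and content analysis), the snippet below counts word frequencies across card texts. The toy strings and the idea of a `card` text column are illustrative assumptions, not the dataset's documented schema:

```python
from collections import Counter
import re

# Toy model-card texts standing in for rows of the dataset
# (these strings are illustrative, not actual dataset rows).
cards = [
    "This model is a fine-tuned version of Qwen3-8B using LoRA.",
    "Superresolution model exported to ONNX format.",
    "This model is a quantized GGUF build of a merge.",
]

def top_terms(texts, n=5):
    """Count lowercase word frequencies across all card texts."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z0-9-]+", text.lower()))
    return counts.most_common(n)

print(top_terms(cards))
```

The same pattern scales to the full dataset by iterating over its rows instead of a hand-written list, and the tokenizer can be swapped for something more robust than a regex.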
Out-of-Scope Use
[More Information Needed]
Dataset Structure
This dataset has a single split.
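In practice the split would be loaded with `datasets.load_dataset(...)`; the sketch below uses an in-memory stand-in to show the row shape. The column names (`modelId`, `card`) are assumptions for illustration, not the dataset's documented schema:

```python
# Stand-in for the dataset's single split: a list of rows, each a dict
# pairing a model id with its card text (column names are assumptions).
train_split = [
    {"modelId": "org/model-a", "card": "# Model A\nA LoRA adapter for Qwen3."},
    {"modelId": "org/model-b", "card": "# Model B\nA GGUF quantization of a merge."},
]

# Everything sits in one split, so any filtering or train/test division
# is up to the user, e.g. pulling out cards that mention LoRA:
lora_rows = [row for row in train_split if "LoRA" in row["card"]]
print([row["modelId"] for row in lora_rows])
```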
Dataset Creation
Curation Rationale
The dataset was created to assist people in working with model cards. In particular, it was created to support research on model cards and their use. It is also possible to download model cards directly using the Hugging Face Hub API or client library; that option may be preferable if you have a very specific use case or require a different format.
Source Data
The source data is README.md files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.
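Since each row is a raw README.md, a common first step is separating the YAML metadata block from the markdown body. The helper below is a minimal sketch of that split; the sample card is a made-up illustration:

```python
def split_model_card(readme_text):
    """Split a model-card README into (yaml_metadata, body).

    Cards on the Hub typically open with a ``---`` ... ``---`` YAML
    block; text without one is treated as all body.
    """
    if readme_text.startswith("---\n"):
        end = readme_text.find("\n---", 4)
        if end != -1:
            return readme_text[4:end], readme_text[end + 4:].lstrip("\n")
    return "", readme_text

card = """---
license: apache-2.0
tags:
  - lora
---
# My model

A short description.
"""
meta, body = split_model_card(card)
print(meta)
print(body)
```

For production use, a YAML parser (e.g. `yaml.safe_load` on the metadata string) is more robust than string slicing, but the sketch shows the structure the rows follow.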
Data Collection and Processing
The data is downloaded by a cron job that runs daily.
Who are the source data producers?
The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community, ranging from large companies to individual researchers. This repository does not record who created each model card, although that information can be obtained from the Hugging Face Hub API.
Annotations
There are no additional annotations in this dataset beyond the model card content.
Annotation process
N/A
Who are the annotators?
N/A
Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
Bias, Risks, and Limitations
Model cards are created by the community, and we have no control over their content. We do not review the model cards and make no claims about the accuracy of the information they contain. Some model cards themselves discuss bias, sometimes by providing examples of bias in either the training data or the model's responses. As a result, this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.
Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
Dataset Card Authors
Dataset Card Contact