-
GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models
Paper • 2508.06471 • Published • 206 -
NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model
Paper • 2508.14444 • Published • 44 -
Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities
Paper • 2507.06261 • Published • 67 -
MiniMax-M1: Scaling Test-Time Compute Efficiently with Lightning Attention
Paper • 2506.13585 • Published • 273
Collections
Collections including paper arxiv:2505.00949
-
nvidia/Llama-3_3-Nemotron-Super-49B-v1_5
Text Generation • 50B • Updated • 98.7k • 226 -
nvidia/Llama-3_3-Nemotron-Super-49B-v1_5-FP8
Text Generation • 50B • Updated • 19.2k • 24 -
nvidia/Llama-3_1-Nemotron-Ultra-253B-v1
Text Generation • Updated • 691 • 343 -
nvidia/Llama-3_3-Nemotron-Super-49B-v1
Text Generation • 50B • Updated • 24.7k • 320
-
RL + Transformer = A General-Purpose Problem Solver
Paper • 2501.14176 • Published • 28 -
Towards General-Purpose Model-Free Reinforcement Learning
Paper • 2501.16142 • Published • 31 -
SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training
Paper • 2501.17161 • Published • 124 -
MaxInfoRL: Boosting exploration in reinforcement learning through information gain maximization
Paper • 2412.12098 • Published • 4
-
Human-like Episodic Memory for Infinite Context LLMs
Paper • 2407.09450 • Published • 62 -
MUSCLE: A Model Update Strategy for Compatible LLM Evolution
Paper • 2407.09435 • Published • 23 -
Refuse Whenever You Feel Unsafe: Improving Safety in LLMs via Decoupled Refusal Training
Paper • 2407.09121 • Published • 6 -
ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities
Paper • 2407.14482 • Published • 26