- SageAttention2 Technical Report: Accurate 4 Bit Attention for Plug-and-play Inference Acceleration
  Paper • 2411.10958 • Published • 55
- SpargeAttn: Accurate Sparse Attention Accelerating Any Model Inference
  Paper • 2502.18137 • Published • 58
- SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-Bit Training
  Paper • 2505.11594 • Published • 75
- SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration
  Paper • 2410.02367 • Published • 49
Jintao Zhang (jt-zhang)
AI & ML interests: Efficient ML
Recent Activity
- updated a model about 2 hours ago: TurboDiffusion/TurboWan2.2-I2V-A14B-720P
- updated a Space about 8 hours ago: TurboDiffusion/README
- published a Space about 8 hours ago: TurboDiffusion/README