📈 Predict Impact & Quality of Newborn Papers
LLM-powered estimates from a paper’s title and abstract.
Which model should I use?
- NAIPv1 — predicts academic impact
- NAIPv2 — predicts paper quality (default)
See the papers for methodology and evaluation details.
⚡ Note: Local inference is near-instant. On Hugging Face ZeroGPU, however, the quantized model is reloaded from disk for each prediction, which can add noticeable disk-I/O latency (typically under 30 s).
For NAIPv2, the normalized score may not be comparable across different domains. For quality estimation, it is recommended to compare raw scores only among papers from the same domain.
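The within-domain rule above can be sketched as follows. The paper records and raw scores here are made-up illustrations, not actual model output:

```python
# Illustrative records only; in the demo, raw scores would come from NAIPv2.
papers = [
    {"title": "Paper A", "domain": "NLP", "raw": 1.32},
    {"title": "Paper B", "domain": "NLP", "raw": 0.87},
    {"title": "Paper C", "domain": "CV",  "raw": 1.10},
]

# Rank only papers that share a domain; comparing raw (or normalized)
# scores across domains is discouraged per the note above.
nlp_ranked = sorted(
    (p for p in papers if p["domain"] == "NLP"),
    key=lambda p: p["raw"],
    reverse=True,
)
print([p["title"] for p in nlp_ranked])  # → ['Paper A', 'Paper B']
```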
Select Model Version
Predicted Scores
Important Notes
- The reported performance reflects aggregate statistical outcomes, rather than guarantees for individual instances.
- This tool is intended for research and educational purposes only.
- Please refrain from deliberately embellishing the title and abstract to boost scores, and avoid making false claims.
- This demo is an early exploration of using LLMs for paper quality estimation and is not optimized against prompt injection attacks.
- The predicted value is a probability generated by the model and does NOT reflect the paper's true quality or novelty.
- For NAIPv1, a normalized score greater than 0.60 is considered to indicate a potentially impactful paper.
- For NAIPv2, a normalized score above 0.60 corresponds to the statistical mean of NeurIPS accepted papers (Poster).
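The 0.60 thresholds in the last two notes can be sketched as a small helper. `interpret_score` is a hypothetical function written for illustration; it is not part of the demo's API:

```python
def interpret_score(normalized_score: float, model: str = "NAIPv2") -> str:
    """Map a normalized score to the rough reading given in the notes above.

    Hypothetical helper for illustration; thresholds are statistical
    reference points, not guarantees for individual papers.
    """
    if model == "NAIPv1":
        # > 0.60 suggests a potentially impactful paper
        if normalized_score > 0.60:
            return "potentially impactful"
        return "below impact threshold"
    # NAIPv2: ~0.60 matches the statistical mean of NeurIPS accepted posters
    if normalized_score > 0.60:
        return "above NeurIPS poster mean"
    return "below NeurIPS poster mean"

print(interpret_score(0.72, "NAIPv1"))  # → potentially impactful
print(interpret_score(0.55))            # → below NeurIPS poster mean
```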
Examples
| Paper Title | Paper Abstract |
|---|---|