---
pretty_name: Revvity-25
license: cc-by-nc-4.0
modalities:
- image
task_categories:
- image-segmentation
- object-detection
task_ids:
- instance-segmentation
annotations_creators:
- expert-generated
size_categories:
- n<1K
language:
- en
tags:
- image
- json
- computer-vision
- transformers
- instance-segmentation
- object-detection
- coco
- unet
- microscopy
- cells
- biomedical
- cell-segmentation
- biomedical-imaging
- microscopy-images
- brightfield
- cancer-cells
- semantic-segmentation
- datasets
- fiftyone
- cell
- amodal
---
# Revvity-25 (CVPRW 2025)
🔥 Paper: [https://arxiv.org/abs/2508.01928](https://arxiv.org/abs/2508.01928) \
⚙️ Github: [https://github.com/SlavkoPrytula/IAUNet](https://github.com/SlavkoPrytula/IAUNet) \
🌐 Project page: [https://slavkoprytula.github.io/IAUNet/](https://slavkoprytula.github.io/IAUNet/)
We present the **Revvity-25 Full Cell Segmentation Dataset**, a novel 2025 benchmark designed to advance cell segmentation research. One of the key contributions of our paper **[IAUNet: Instance-Aware U-Net](https://www.arxiv.org/abs/2508.01928)** is this novel cell instance segmentation dataset, `Revvity-25`. It includes `110` high-resolution **`1080 x 1080` brightfield images**, each containing, on average, `27` manually labeled and expert-validated cancer cells, for a total of `2937` annotated cells. To our knowledge, this is the first dataset with accurate and detailed annotations of cell borders and overlaps: each cell is annotated with an average of `60` polygon points, reaching up to `400` points for more complex structures. The `Revvity-25` dataset thus provides a unique resource that opens new possibilities for testing and benchmarking models for modal and amodal semantic and instance segmentation.
* You can also check out and download the dataset from our webpage: [Revvity-25](https://bcv.cs.ut.ee/datasets/)
## Directory structure
```
Revvity-25/
├── images/
└── annotations/
    ├── train.json
    └── valid.json
```
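For convenience, here is a minimal loading sketch in Python, assuming the annotation files follow the standard COCO polygon format (as the `coco` tag suggests) and that the paths match the layout above; adjust the root directory to wherever you downloaded the dataset.

```python
from pycocotools.coco import COCO          # pip install pycocotools
from pycocotools import mask as mask_utils

# Load the training split (valid.json works the same way).
coco = COCO("Revvity-25/annotations/train.json")

img_ids = coco.getImgIds()
print(f"{len(img_ids)} images, {len(coco.getAnnIds())} annotated cells")

# Inspect the first image and its cell annotations.
img_info = coco.loadImgs(img_ids[0])[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_info["id"]))

for ann in anns[:3]:
    # Each segmentation is a list of polygons flattened as [x1, y1, x2, y2, ...].
    n_points = sum(len(poly) // 2 for poly in ann["segmentation"])
    print(f"cell {ann['id']}: {n_points} polygon points")

# Rasterize one (possibly overlapping, amodal) polygon annotation into a binary mask.
rles = mask_utils.frPyObjects(
    anns[0]["segmentation"], img_info["height"], img_info["width"]
)
binary_mask = mask_utils.decode(mask_utils.merge(rles))  # HxW uint8 array
```

Because the annotations are COCO-style, the dataset can also be browsed or evaluated with standard COCO tooling (e.g., FiftyOne's COCO importer).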
---
## Citing Revvity-25
If you use this work in your research, please cite:
```bibtex
@InProceedings{Prytula_2025_CVPR,
    author    = {Prytula, Yaroslav and Tsiporenko, Illia and Zeynalli, Ali and Fishman, Dmytro},
    title     = {IAUNet: Instance-Aware U-Net},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {4739--4748}
}
```
---
## License
This project is licensed under the **Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)** license.
You are free to share and adapt the work **for non-commercial purposes**, as long as you give appropriate credit.
For more details, see the [LICENSE](LICENSE) file or visit [Creative Commons](https://creativecommons.org/licenses/by-nc/4.0/).
---
## Contact
📧 [s.prytula@ucu.edu.ua](mailto:s.prytula@ucu.edu.ua) or [yaroslav.prytula@ut.ee](mailto:yaroslav.prytula@ut.ee)
---
## Acknowledgements
This work was supported by [Revvity](https://www.revvity.com/) and funded by the TEM-TA101 grant "Artificial Intelligence for Smart Automation." Computational resources were provided by the High-Performance Computing Cluster at the University of Tartu 🇪🇪. We thank the [Biomedical Computer Vision Lab](https://bcv.cs.ut.ee/) for their invaluable support. We express gratitude to the Armed Forces of Ukraine 🇺🇦 and the bravery of the Ukrainian people for enabling a secure working environment, without which this work would not have been possible.