ScanNet++
Download the dataset and extract the RGB frames and masks from the iPhone data following the official instructions.
Preprocess the data with the following command:
python datasets_preprocess/preprocess_scannetpp.py \
--scannetpp_dir $SCANNETPP_DATA_ROOT \
--output_dir data/scannetpp_processed
The processed data will be saved to ./data/scannetpp_processed.
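As an optional sanity check, you can inspect the output from the repository root (this assumes the preprocessing script writes one subdirectory per scene, which may vary):
ls data/scannetpp_processed | wc -l    # number of processed scenes
du -sh data/scannetpp_processed        # total size on disk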
We currently use only ScanNet++ V1 (280 scenes in total) to train and validate our SLAM3R models. ScanNet++ V2 (906 scenes) is available for potential use, but you may need to modify the scripts to handle certain scenes in it.
Aria Synthetic Environments
For more details, please refer to the official website.
- Prepare the codebase and environment.
mkdir data/projectaria
cd data/projectaria
git clone https://github.com/facebookresearch/projectaria_tools.git -b 1.5.7
cd -
conda create -n aria python=3.10
conda activate aria
pip install projectaria-tools'[all]' opencv-python open3d
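To quickly verify the new environment, you can run a minimal import check (the import names projectaria_tools, cv2, and open3d are assumed to match the installed packages):
python -c "import projectaria_tools, cv2, open3d; print('aria environment OK')"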
- Get the download-urls file here and place it under ./data/projectaria/projectaria_tools. Then download the ASE dataset:
cd ./data/projectaria/projectaria_tools
python projects/AriaSyntheticEnvironment/aria_synthetic_environments_downloader.py \
--set train \
--scene-ids 0-499 \
--unzip True \
--cdn-file aria_synthetic_environments_dataset_download_urls.json \
--output-dir $SLAM3R_DIR/data/projectaria/ase_raw
We currently use only the first 500 scenes to train and validate our SLAM3R models. You can leverage more scenes depending on your resources; a batched-download sketch follows below.
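If a single long download is easy to interrupt, you can also fetch the scenes in smaller batches using the same downloader flags as above (a sketch, run from ./data/projectaria/projectaria_tools; the batch size of 100 is arbitrary):
for start in 0 100 200 300 400; do
    python projects/AriaSyntheticEnvironment/aria_synthetic_environments_downloader.py \
        --set train \
        --scene-ids $start-$((start+99)) \
        --unzip True \
        --cdn-file aria_synthetic_environments_dataset_download_urls.json \
        --output-dir $SLAM3R_DIR/data/projectaria/ase_raw
done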
- Preprocess the data.
cp ./datasets_preprocess/preprocess_ase.py ./data/projectaria/projectaria_tools/
cd ./data/projectaria
python projectaria_tools/preprocess_ase.py
The processed data will be saved to ./data/projectaria/ase_processed.
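As an optional check that preprocessing completed, count the processed scenes from the repository root (assuming one subdirectory per scene, which may vary):
ls ./data/projectaria/ase_processed | wc -l    # should roughly match the number of downloaded scenes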