California Burned Areas Dataset

Working on adding more data

You can find an official implementation on TorchGeo.

Dataset Summary

This dataset contains Sentinel-2 satellite images taken before and after wildfires. The ground-truth masks are provided by the California Department of Forestry and Fire Protection and are mapped onto the images.

Supported Tasks

The dataset is designed for binary semantic segmentation of burned vs. unburned areas.

Dataset Structure

We opted for HDF5 because it offers better portability and smaller file sizes than GeoTIFF.

WIP: Additional metadata (coordinates, CRS, timestamp) is available in metadata.parquet. (For now, it is not loaded automatically.)

Dataset opening

Using the datasets library, you can download only the pre-patched raw version for simplicity.

from datasets import load_dataset

# There are two available configurations: "post-fire" and "pre-post-fire".
dataset = load_dataset("DarthReca/california_burned_areas", name="post-fire")

The dataset was compressed with BZip2 using h5py and hdf5plugin. WARNING: hdf5plugin is required to read the data.

Data Instances

Each matrix has a shape of 5490x5490xC, where C is 12 for pre-fire and post-fire images and 0 for binary masks. A pre-patched version with matrices of size 512x512xC is also provided; in this case, only patches whose mask contains at least one positive pixel are included.
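The patch-filtering rule above can be sketched with plain NumPy. The helper below is hypothetical (not the dataset's actual pre-patching code): it splits a scene into non-overlapping square patches and keeps only those whose mask has at least one positive pixel.

```python
import numpy as np

def extract_positive_patches(image, mask, patch=512):
    """Split a scene into non-overlapping patches, keeping only those
    whose mask contains at least one positive (burned) pixel."""
    kept = []
    h, w = mask.shape[:2]
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            m = mask[y:y + patch, x:x + patch]
            if m.any():  # at least one positive pixel
                kept.append((image[y:y + patch, x:x + patch], m))
    return kept

# Toy example: a small 2-channel scene instead of a 5490x5490x12 one.
img = np.zeros((1024, 1024, 2), dtype=np.float32)
msk = np.zeros((1024, 1024), dtype=np.uint8)
msk[600, 600] = 1  # one burned pixel, in the bottom-right patch
kept = extract_positive_patches(img, msk, patch=512)
print(len(kept))  # 1
```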

The dataset comes in two versions: raw (without any transformation) and normalized (with values scaled to the range 0-255). We suggest using the raw version so you can apply whatever pre-processing you prefer.
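For intuition, a 0-255 rescale of raw band values can look like the min-max sketch below. This is an assumption for illustration only; the exact transform used to produce the normalized version may differ.

```python
import numpy as np

def to_uint8(band):
    """Min-max rescale a raw band to the 0-255 range (illustrative only)."""
    lo, hi = float(band.min()), float(band.max())
    scaled = (band - lo) / (hi - lo + 1e-12) * 255.0
    return np.round(scaled).astype(np.uint8)

raw = np.array([[0.0, 0.5], [1.0, 2.0]], dtype=np.float32)
out = to_uint8(raw)
print(out.min(), out.max())  # 0 255
```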

Data Fields

Each standard HDF5 file contains pre-fire and post-fire images and binary masks. The file is structured as follows:

├── foldn
│   ├── uid0
│   │   ├── pre_fire
│   │   ├── post_fire
│   │   ├── mask
│   ├── uid1
│       ├── post_fire
│       ├── mask
│
├── foldm
    ├── uid2
    │   ├── post_fire
    │   ├── mask
    ├── uid3
        ├── pre_fire
        ├── post_fire
        ├── mask
...

where foldn and foldm are fold names and uidn is a unique identifier for a wildfire.
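To make the layout concrete, here is a small sketch that builds one fold/uid group in an in-memory HDF5 file and walks it. The fold and uid names and the array shapes are illustrative; note that `pre_fire` may be absent for some wildfires, as in the tree above.

```python
import io

import h5py
import numpy as np

# Build a miniature version of the described layout: folds are top-level
# groups, each wildfire uid is a sub-group holding image/mask datasets.
buf = io.BytesIO()
with h5py.File(buf, "w") as f:
    g = f.create_group("fold0/uid0")
    g.create_dataset("post_fire", data=np.zeros((8, 8, 12), dtype=np.float32))
    g.create_dataset("mask", data=np.zeros((8, 8), dtype=np.uint8))

# Walk the hierarchy the same way you would on a real file.
found = {}
with h5py.File(buf, "r") as f:
    for fold in f:
        for uid in f[fold]:
            found[(fold, uid)] = sorted(f[fold][uid])
print(found)  # {('fold0', 'uid0'): ['mask', 'post_fire']}
```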

For the pre-patched version, the structure is:

root
|
|-- uid0_x: {post_fire, pre_fire, mask}
|
|-- uid0_y: {post_fire, pre_fire, mask}
|
|-- uid1_x: {post_fire, mask}
|
...

The fold name is stored as an attribute.

Data Splits

There are 5 random splits, named 0, 1, 2, 3, and 4.

Source Data

Data were collected directly from the Copernicus Open Access Hub through its API. The per-band files are aggregated into a single matrix.
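The aggregation step amounts to stacking the separately downloaded band rasters along a channel axis, yielding the HxWxC matrices described above. A minimal NumPy sketch (shapes are illustrative, not the real 5490x5490 scenes):

```python
import numpy as np

# Each Sentinel-2 band arrives as its own 2D raster; stacking 12 of them
# along the last axis produces one HxWx12 matrix per image product.
bands = [np.full((8, 8), b, dtype=np.float32) for b in range(12)]
scene = np.stack(bands, axis=-1)
print(scene.shape)  # (8, 8, 12)
```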

Additional Information

Licensing Information

This work is released under the CC BY-NC 4.0 license.

Citation Information

If you use this dataset in your work, please credit the Sentinel-2 mission and the California Department of Forestry and Fire Protection, and cite it with this BibTeX entry:

@ARTICLE{cabuar,
  author={Cambrin, Daniele Rege and Colomba, Luca and Garza, Paolo},
  journal={IEEE Geoscience and Remote Sensing Magazine}, 
  title={CaBuAr: California burned areas dataset for delineation [Software and Data Sets]}, 
  year={2023},
  volume={11},
  number={3},
  pages={106-113},
  doi={10.1109/MGRS.2023.3292467}
}