---
license: mit
task_categories:
  - object-detection
tags:
  - disability-parking
  - accessibility
  - streetscape
dataset_info:
  features:
    - name: image
      dtype: image
    - name: width
      dtype: int32
    - name: height
      dtype: int32
    - name: objects
      sequence:
        - name: bbox
          sequence: float32
          length: 4
        - name: category
          dtype: int64
        - name: area
          dtype: float32
        - name: iscrowd
          dtype: bool
        - name: id
          dtype: int64
        - name: segmentation
          sequence:
            sequence: float32
  splits:
    - name: train
      num_examples: 3688
    - name: test
      num_examples: 717
    - name: validation
      num_examples: 720
---

# AccessParkCV

AccessParkCV is a deep learning pipeline that detects disability parking spaces in orthorectified aerial imagery and characterizes their widths. We publish a dataset of 7,069 labeled parking spaces (and 4,693 labeled access aisles), which we used to train the models underlying AccessParkCV.

(This repository contains the data in HuggingFace format. For the raw COCO-format data, see link.)

## Dataset Description

This is an object detection dataset with 8 classes:

- objects
- access_aisle
- curbside
- dp_no_aisle
- dp_one_aisle
- dp_two_aisle
- one_aisle
- two_aisle

## Dataset Structure

### Data Fields

- `image`: PIL Image object
- `width`: Image width in pixels
- `height`: Image height in pixels
- `objects`: Dictionary containing:
  - `bbox`: List of bounding boxes in `[x_min, y_min, x_max, y_max]` format
  - `category`: List of category IDs
  - `area`: List of bounding box areas
  - `iscrowd`: List of crowd flags (boolean)
  - `id`: List of annotation IDs
  - `segmentation`: List of polygon segmentations (each a list of `[x1, y1, x2, y2, ...]` coordinates)
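Because the annotation fields above are stored as parallel lists (one entry per object), it can be convenient to regroup them into one dictionary per object. A minimal sketch, assuming the field names documented above (`iter_annotations` is a hypothetical helper, not part of the dataset API):

```python
def iter_annotations(example):
    """Yield one dict per annotated object by zipping the parallel
    lists stored under example["objects"]."""
    objs = example["objects"]
    for bbox, cat, area, crowd, ann_id, seg in zip(
            objs["bbox"], objs["category"], objs["area"],
            objs["iscrowd"], objs["id"], objs["segmentation"]):
        yield {"bbox": bbox, "category": cat, "area": area,
               "iscrowd": crowd, "id": ann_id, "segmentation": seg}
```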

### Category IDs to Classes

| Category ID | Class |
|---|---|
| 0 | objects |
| 1 | access_aisle |
| 2 | curbside |
| 3 | dp_no_aisle |
| 4 | dp_one_aisle |
| 5 | dp_two_aisle |
| 6 | one_aisle |
| 7 | two_aisle |
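The `category` field stores the integer IDs, so decoding them to class names takes a small lookup table. A minimal sketch based on the mapping above (`CATEGORY_NAMES` and `decode_categories` are illustrative names, not part of the dataset API):

```python
# Category ID -> class name, per the table above.
CATEGORY_NAMES = {
    0: "objects", 1: "access_aisle", 2: "curbside", 3: "dp_no_aisle",
    4: "dp_one_aisle", 5: "dp_two_aisle", 6: "one_aisle", 7: "two_aisle",
}

def decode_categories(category_ids):
    """Translate a list of integer category IDs into class names."""
    return [CATEGORY_NAMES[c] for c in category_ids]
```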

## Data Sources

| Region | Lat/Long Bounding Coordinates | Resolution | # Images in Dataset |
|---|---|---|---|
| Seattle | (47.9572, -122.4489), (47.4091, -122.1551) | 3 inch/pixel | 2,790 |
| Washington D.C. | (38.9979, -77.1179), (38.7962, -76.9008) | 3 inch/pixel | 1,801 |
| Spring Hill | (35.7943, -87.0034), (35.6489, -86.8447) | Unknown | 534 |
| **Total** | | | **5,125** |

## Class Composition

| Class | Quantity in Dataset |
|---|---|
| access_aisle | 4,693 |
| curbside | 36 |
| dp_no_aisle | 300 |
| dp_one_aisle | 2,790 |
| dp_two_aisle | 402 |
| one_aisle | 3,424 |
| two_aisle | 117 |
| **Total** | **11,762** |

## Data Splits

| Split | Examples |
|---|---|
| train | 3,688 |
| test | 717 |
| validation | 720 |


## Usage

```python
from datasets import load_dataset

train_dataset = load_dataset("makeabilitylab/disabilityparking", split="train", streaming=True)

example = next(iter(train_dataset))

# Example of accessing an item
image = example["image"]
bboxes = example["objects"]["bbox"]
categories = example["objects"]["category"]
segmentations = example["objects"]["segmentation"]  # Polygon coordinates
```
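Since bounding boxes are stored as `[x_min, y_min, x_max, y_max]`, an individual parking space can be cropped out of an image directly with Pillow. A minimal sketch (`crop_space` is a hypothetical helper, not part of the dataset API):

```python
from PIL import Image

def crop_space(image, bbox):
    """Crop one parking space from a PIL image.
    bbox is [x_min, y_min, x_max, y_max], as stored in the dataset."""
    x_min, y_min, x_max, y_max = bbox
    return image.crop((int(x_min), int(y_min), int(x_max), int(y_max)))
```

Applied to a loaded example, `crop_space(example["image"], example["objects"]["bbox"][0])` would return the patch for the first annotated object.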

## Citation

```bibtex
@inproceedings{hwang_wherecanIpark,
  title={Where Can I Park? Understanding Human Perspectives and Scalably Detecting Disability Parking from Aerial Imagery},
  author={Hwang, Jared and Li, Chu and Kang, Hanbyul and Hosseini, Maryam and Froehlich, Jon E.},
  booktitle={The 27th International ACM SIGACCESS Conference on Computers and Accessibility},
  series={ASSETS '25},
  numpages={20},
  year={2025},
  month={October},
  location={Denver, CO, USA},
  publisher={ACM},
  address={New York, NY, USA},
  doi={10.1145/3663547.3746377},
  url={https://doi.org/10.1145/3663547.3746377}
}
```