CODa is available to download manually or programmatically. CODa is distributed in four sizes: tiny, small, medium, and full. The tiny split is hosted on the Texas Data Repository (TDR); the remaining splits are available by URL from the Texas Advanced Computing Center (TACC) server. The tiny split is roughly 5% of the full dataset and is intended for preview only. The small split (25% of the full dataset) is suitable for preliminary experiments. The medium split (roughly 50%) can be used for model training and experiments. The full dataset is 4 terabytes and should be used for public CODa benchmarks. All download links below point to the latest dataset version.
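As a rough guide, the percentages above imply the following approximate split sizes (back-of-the-envelope figures; the exact archive sizes on the server may differ):

```python
# Approximate split sizes implied by the stated percentages of the 4 TB full dataset.
# These are rough estimates, not the exact archive sizes hosted on TDR/TACC.
FULL_TB = 4.0

SPLIT_FRACTIONS = {"tiny": 0.05, "small": 0.25, "medium": 0.50, "full": 1.00}

for name, fraction in SPLIT_FRACTIONS.items():
    print(f"{name}: ~{FULL_TB * fraction:.1f} TB")
```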

Programmatic Download

  1. Clone the repository:
    git clone git@github.com:ut-amrl/coda-devkit.git
  2. Install the conda environment:
    conda env create -f environment.yml
  3. Activate the conda environment:
    conda activate coda
  4. Download dataset (downloads sequence 0 by default):
    python scripts/download_split.py -d ./data -t sequence -se 0
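If you prefer to fetch an archive directly by URL rather than through the devkit script, a minimal download helper can be sketched with the standard library. The URL in the example is a placeholder, not a real TACC link:

```python
import os
import urllib.request

def download(url: str, dest_dir: str) -> str:
    """Download url into dest_dir and return the local file path."""
    os.makedirs(dest_dir, exist_ok=True)
    local_path = os.path.join(dest_dir, os.path.basename(url))
    urllib.request.urlretrieve(url, local_path)  # raises on download errors
    return local_path

# Example (placeholder URL - substitute the actual TACC link for your split):
# download("https://example.org/coda/sequence_0.zip", "./data")
```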

Manual Download

Download by split:
   Download each split’s .zip file from TACC.
   Place the .zip file in the CODa folder before extracting.

Download by sequence:
   Download each sequence’s .zip file from TACC.
   Place the .zip file in the CODa folder before extracting.

Download only RGBD dataset (New!):
   Download each sequence’s .zip file from TACC.
   Place the .zip file in the CODa folder before extracting.
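Whichever manual option you choose, extraction follows the same pattern. A minimal sketch using the standard library (the archive and folder names below are placeholders):

```python
import zipfile

def extract_archive(zip_path: str, coda_root: str) -> None:
    """Extract a downloaded CODa .zip archive into the CODa folder."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(coda_root)

# Example (placeholder names - use your actual archive and CODa folder):
# extract_archive("sequence_0.zip", "./CODa")
```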

Download Pretrained Model Weights

Organized by LiDAR Resolution (16, 32, 64, 128)
   We currently offer pretrained models trained on all CODa classes, as well as a pedestrian-only version, on TACC.
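Because the weights are organized by LiDAR resolution, a small lookup helper can make checkpoint selection explicit. The file-name pattern below is hypothetical; substitute the actual names used on TACC:

```python
# Hypothetical checkpoint naming scheme - adjust to the actual files on TACC.
AVAILABLE_RESOLUTIONS = (16, 32, 64, 128)

def checkpoint_name(resolution: int, pedestrian_only: bool = False) -> str:
    """Return a checkpoint file name for the given LiDAR resolution."""
    if resolution not in AVAILABLE_RESOLUTIONS:
        raise ValueError(f"resolution must be one of {AVAILABLE_RESOLUTIONS}")
    variant = "pedestrian" if pedestrian_only else "allclass"
    return f"coda{resolution}_{variant}.pth"
```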

Running 3D Object Detection ROS Module

We provide a ROS module for running pretrained 3D object detection models in real time on custom point cloud data. Documentation on how to run the module can be found in the CODa models repository.