
CUBIT
BENCHMARK
"A Large-Scale Benchmark for Infrastructure Defect Assessment and Physical Quantification Challenges."
12,500+
Images
32,400+
Instances
Download & References
Shared Resources
The linked drive folder contains the training materials, test logs, and verification notes that support the benchmark release.
Each training folder includes the reference files plus short notes explaining the setup and file naming.
Test logs are collected separately so that reviewers can trace the inference outputs and the evaluation flow.
During repairs we compare results against the official repository to verify consistency and avoid regressions.
We also rerun training for the Top-5 models to confirm that the reported ranking is stable.
Academic Benchmark
Performance Leaderboard
| Rank | Model | Crack AP | Spalling AP | Latency |
|---|---|---|---|---|
| #01 | YOLOv6-l | 85.7% | 91.7% | 15.9ms |
| #02 | YOLOv5-x | 81.2% | 88.4% | 28.4ms |
| #03 | YOLOX-x | 83.9% | 89.5% | 41.2ms |
| #04 | Faster R-CNN (ResNet50) | 72.5% | 71.5% | 55.0ms |
Participate in CUBIT Challenges
Submit your models to our automated evaluation server. Evaluation follows the MS COCO (101-point interpolated AP) and Pascal VOC protocols.
Dataset Taxonomy
| Attribute | Description |
|---|---|
| Data Modality | UAV RGB Imagery (8K/4K/HD) |
| Defect Classes | Cracks (Linear, Branch, Web), Spalling, Moisture |
| Data Split | 70% Train, 10% Val, 20% Robust Test |
| Metric Scale | Pixel-to-MM (Calibrated via GSD) |
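Pixel measurements are converted to millimetres via the ground sample distance (GSD) of each capture. As a sketch of that calibration step (function names and camera values are ours, not part of the benchmark tooling):

```python
def pixels_to_mm(pixel_length: float, gsd_mm_per_px: float) -> float:
    """Convert a length measured in pixels to millimetres using the
    ground sample distance (GSD) of the capture, in mm per pixel."""
    return pixel_length * gsd_mm_per_px


def gsd_mm_per_px(sensor_width_mm: float, flight_height_m: float,
                  focal_length_mm: float, image_width_px: int) -> float:
    """Standard GSD formula: sensor width times flight height, divided by
    focal length times image width. Heights are converted from m to mm."""
    return (sensor_width_mm * flight_height_m * 1000.0) / (focal_length_mm * image_width_px)


# Illustrative values: 13.2 mm sensor, 10 m flight height, 8.8 mm lens, 8000 px wide image.
gsd = gsd_mm_per_px(13.2, 10.0, 8.8, 8000)   # 1.875 mm per pixel
width_mm = pixels_to_mm(10, 0.05)            # a 10 px crack at 0.05 mm/px -> 0.5 mm
```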
Physical Severity Grading
| Crack Width | Recommended Action |
|---|---|
| < 0.2 mm | Routine monitoring |
| 0.2 - 0.5 mm | Repair scheduling |
| > 0.5 mm | Urgent repair |
| Severe structural damage | Immediate structural assessment |
* Based on BS ISO 15686-7:2017 and HK Surveyors Practice.
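The width thresholds above can be applied mechanically. A minimal sketch, assuming grading by measured width only (the "severe structural damage" grade requires inspection and is not width-based; the grade labels are ours):

```python
def crack_action(width_mm: float) -> str:
    """Map a measured crack width (mm) to the recommended action from
    the severity grading table. Thresholds follow the table above."""
    if width_mm < 0.2:
        return "Routine monitoring"
    if width_mm <= 0.5:
        return "Repair scheduling"
    return "Urgent repair"
```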
Protocols
Submission & Metrics
CUBIT defines standardized data structures for fair evaluation across all participating models.
Focuses on high-resolution bounding box localization. Each submission must contain category-specific .txt files in the following format:
```
[imgname] [score] [xmin] [ymin] [xmax] [ymax]
pavement_001 0.985 450 120 890 340
pavement_001 0.742 1200 4500 1350 4800
...
```
Challenge Rules
- Images must not be resized below 1024x1024 during inference.
- Results are evaluated using Pascal VOC AP@0.5 and COCO AP@0.5:0.95.
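Both AP protocols rest on matching predicted boxes to ground truth by intersection-over-union (IoU), at a 0.5 threshold for Pascal VOC and over 0.5:0.95 for COCO. A minimal IoU sketch for (xmin, ymin, xmax, ymax) boxes:

```python
def iou(a: tuple, b: tuple) -> float:
    """Intersection-over-union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

At AP@0.5, a prediction counts as a true positive when `iou(pred, gt) >= 0.5` and the ground-truth box is not already matched.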
Focuses on pixel-accurate masks and physical quantification. Submissions should follow the COCO JSON format with an additional metric field.
```json
{
  "image_id": 405,
  "segmentation": [45.1, 89.2, ...],
  "physical_metric": 0.42,  // mm for cracks
  "physical_unit": "mm"
}
```
Quantification Metric
Evaluated by Root Mean Square Error (RMSE) against ground-truth physical measurements.
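As a sketch of that scoring step (the function name is ours; the official evaluation server may aggregate differently):

```python
import math

def rmse(predicted: list, ground_truth: list) -> float:
    """Root mean square error between predicted physical metrics
    (e.g. crack widths in mm) and ground-truth measurements."""
    assert len(predicted) == len(ground_truth) and predicted
    squared = [(p - g) ** 2 for p, g in zip(predicted, ground_truth)]
    return math.sqrt(sum(squared) / len(squared))


# Two predictions, each off by 0.1 mm -> RMSE of 0.1 mm.
error = rmse([0.4, 0.6], [0.5, 0.5])
```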