InSpaceType: A Dataset and Analysis Tool for Space Type in Indoor Monocular Depth Estimation
Landing page for the paper (in submission)

* Code and evaluation tools release: Here

* Data

[Sample data]: This contains 167 MB of sample data.
[InSpaceType Eval set]: This contains 1,260 RGBD pairs for evaluation, about 11.5 GB. For evaluation, please go to our codebase.
[InSpaceType all data]: This contains the whole InSpaceType dataset: 40K RGBD pairs, about 500 GB. The data is split into 8 chunks; please download all chunks in the folder and extract them (see the sketch below).
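A minimal sketch of reassembling and extracting the chunks, assuming they are consecutive byte-wise pieces of a single split tar.gz archive; the chunk filename pattern below is hypothetical, so substitute whatever names appear in the download folder:

```python
# Minimal sketch: reassemble split-archive chunks, then extract.
# Assumption: chunks are consecutive pieces of one tar.gz archive;
# the "InSpaceType.tar.gz.part*" pattern is a placeholder, not the
# official release naming.
import glob
import shutil
import tarfile

chunks = sorted(glob.glob("InSpaceType.tar.gz.part*"))  # hypothetical names
with open("InSpaceType.tar.gz", "wb") as merged:
    for chunk in chunks:
        with open(chunk, "rb") as f:
            shutil.copyfileobj(f, merged)  # append chunk bytes in order

with tarfile.open("InSpaceType.tar.gz", "r:gz") as tar:
    tar.extractall("InSpaceType")  # extract the whole dataset tree
```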

* TL;DR

A new dataset and benchmark are presented that consider a crucial but often-ignored facet: space type. We study 13 state-of-the-art methods to unveil their underlying performance imbalance and assess 4 training sets to discover bias, prompting discussion on synthetic data curation.

* Abstract

Indoor monocular depth estimation helps home automation, including robot navigation and AR/VR for surrounding perception. Most previous methods primarily experiment with the NYUv2 dataset and concentrate on overall performance in their evaluation. However, their robustness and generalization to diverse unseen types or categories of indoor spaces (space types) have yet to be examined. Researchers may empirically find degraded performance when applying a released pretrained model to custom data or less-frequent types. This paper studies a common but easily overlooked factor, space type, and characterizes a model's performance variance across space types. We present the InSpaceType dataset, a high-quality RGBD dataset for general indoor scenes, and benchmark 13 recent state-of-the-art methods on InSpaceType. Our examination shows that most of them suffer from performance imbalance between head and tail types, and the imbalance is even more severe for some top-performing methods. The work reveals and analyzes the underlying bias in detail for transparency and robustness. We extend the analysis to a total of 4 datasets and discuss best practices for synthetic data curation when training indoor monocular depth models. Further, a dataset ablation is conducted to identify the key factor in generalization. This work marks the first in-depth investigation of performance variance across space types and, more importantly, releases useful tools, including datasets and code, to closely examine your pretrained depth models.

* Analysis I-II [Benchmark on overall performance and space type breakdown]


InSpaceType benchmark overall performance. The best number is in bold, and the second-best is underlined.


The tables study top methods in two groups: those trained only on NYUv2 for depth estimation (N-only), including MIM and PixelFormer, and those pretrained on multiple datasets or learned from large-scale pretraining (M&LS-Pre) and then finetuned on NYUv2, including ZoeDepth, VPD, DepthAnything, and UniDepth. Besides the breakdown, we also list the easy and hard types for each method; a sketch of how such a per-type breakdown can be computed follows.
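For reference, a minimal sketch of a per-space-type breakdown, assuming per-image predictions, ground-truth depth maps, and space-type labels are available; the function and variable names here are illustrative, not our released evaluation API:

```python
# Illustrative sketch: group a standard depth metric (AbsRel) by space type.
# `samples` is assumed to yield (pred, gt, space_type), with pred/gt as
# numpy arrays of metric depth and a string space-type label per image.
from collections import defaultdict
import numpy as np

def abs_rel(pred, gt):
    """Mean absolute relative error over valid (positive-depth) pixels."""
    valid = gt > 0
    return np.mean(np.abs(pred[valid] - gt[valid]) / gt[valid])

def breakdown_by_type(samples):
    per_type = defaultdict(list)
    for pred, gt, space_type in samples:
        per_type[space_type].append(abs_rel(pred, gt))
    # Average within each type; the head/tail gap then becomes visible.
    return {t: float(np.mean(v)) for t, v in per_type.items()}

# Usage: ranking the per-type scores exposes easy (head) vs. hard (tail) types.
# scores = breakdown_by_type(eval_samples)
# for t, s in sorted(scores.items(), key=lambda kv: kv[1]):
#     print(f"{t:20s} AbsRel={s:.3f}")
```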

* Analysis III [More training datasets]

Space type breakdown and characteristics for the SimSIN, UniSIN, and Hypersim datasets.

* Conclusion


This work pioneers the study of space types in indoor monocular depth estimation for practical purposes: despite the advent of many large models, evaluation and quality assessment still primarily focus on a single, older benchmark. First, we present the novel InSpaceType dataset, which meets the high-resolution and high-quality RGBD data requirements of cutting-edge applications in AR/VR displays and indoor robotics. Previous works that focus on methods may overlook performance variances. We use InSpaceType to study 13 recent high-performing methods and analyze their zero-shot cross-dataset performance, covering both overall results and performance variances across space types. Even some top methods show severe imbalance, and some lower-ranked methods are actually less imbalanced than higher-performing ones.
We extend our analysis to more synthetic and real datasets, including SimSIN, UniSIN, and Hypersim, to reveal their biases and guide proper usage. In particular, current synthetic data curation may not faithfully reflect the high complexity of real-world scenes with clutter and small objects, and we suggest best practices. Further, such data may miss common types like hallways when single 3D CAD spaces are rendered separately. We also perform an ablation on InSpaceType and find that space scale is the key factor hindering generalization. As part of our contribution, our released tools for both research and practical use, including code and datasets, can diagnose a pretrained model and show its hierarchical performance breakdown. Overall, this work underscores the importance of considering performance variances in the practical deployment of models, a crucial aspect often overlooked in the field.

Sample hierarchy labeling and breakdown:
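An illustrative sketch of what such hierarchical labeling can look like, assuming two levels of space-type labels (coarse scene category over fine space type); the specific type names and structure below are hypothetical examples, not the released label schema:

```python
# Hypothetical two-level hierarchy: coarse scene category -> fine space types.
# The names below are illustrative only, not the official InSpaceType label set.
HIERARCHY = {
    "home":   ["bedroom", "living room", "kitchen", "bathroom"],
    "office": ["private office", "meeting room", "hallway"],
    "public": ["lobby", "classroom", "retail"],
}

def coarse_type(fine_label: str) -> str:
    """Map a fine space type back to its coarse parent for a level-1 breakdown."""
    for parent, children in HIERARCHY.items():
        if fine_label in children:
            return parent
    return "unknown"

print(coarse_type("hallway"))  # -> "office"
```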

The website template was borrowed from Michaël Gharbi