Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model, and design two domain adaptation components, at the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory, and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
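The loss structure the abstract describes can be sketched numerically: each domain classifier is trained with a binary cross-entropy objective (domain label 1 for target-domain samples), and the consistency regularizer penalizes disagreement between the image-level domain probability and the instance-level probabilities of the ROIs in that image. This is only an illustrative sketch under those assumptions; the helper names `domain_bce` and `consistency_loss` and the probability values are hypothetical, not taken from the paper's code.

```python
import math

def domain_bce(p, is_target):
    # Binary cross-entropy for a domain classifier output p in (0, 1);
    # domain label 1 means the sample comes from the target domain.
    y = 1.0 if is_target else 0.0
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

def consistency_loss(p_image, p_instances):
    # L2 penalty on disagreement between the image-level domain
    # probability and each instance-level (per-ROI) probability.
    return sum((p_image - p) ** 2 for p in p_instances) / len(p_instances)

# Hypothetical outputs: the image-level classifier and the two per-ROI
# classifiers roughly agree, so the consistency penalty stays small.
img_loss = domain_bce(0.8, is_target=True)
ins_loss = sum(domain_bce(p, is_target=True) for p in [0.75, 0.85]) / 2
cons = consistency_loss(0.8, [0.75, 0.85])
total = img_loss + ins_loss + cons
```

In the full method, these classifier losses are back-propagated through a gradient reversal so the detector's features become domain-invariant; the sketch only shows how the image-level, instance-level, and consistency terms combine into one objective.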