Automatic traffic accident detection has attracted the machine vision community because of its implications for the development of autonomous intelligent transportation systems (ITS) and its importance for traffic safety. However, most studies of effective traffic accident analysis and prediction use small-scale datasets with limited coverage, which restricts their effectiveness and applicability. Existing traffic accident datasets are either small-scale, not collected from surveillance cameras, not open-source, or not built for highway scenes. Because accidents on highways tend to cause severe damage and happen too quickly to be captured on site, an open-source dataset of highway traffic accidents collected from surveillance cameras is badly needed and of practical value. To help the vision community address these shortcomings, we endeavor to collect video data of real traffic accidents covering abundant scenes. After integration and annotation along various dimensions, a large-scale traffic accident dataset named TAD is presented in this work. Various experiments on image classification, object detection, and video classification tasks are conducted with public mainstream vision algorithms and frameworks to demonstrate the performance of different methods. The proposed dataset, together with the experimental results, is presented as a new benchmark to advance computer vision research, especially in ITS.
Translated by Google Translate.
Deep learning has wide applications in industrial scenarios, but reducing false alarms (FA) remains a major difficulty. Academic work addresses this challenge by optimizing network architectures or network parameters, while neglecting the essential characteristics of the data in the application scenario, which often leads to increased FA in new scenes. In this paper, we propose a novel paradigm for fine-grained dataset design driven by industrial applications. We flexibly select positive and negative sample sets according to the essential characteristics of the data and the application requirements, and add the remaining samples to the training set as an uncertainty category. We collected more than 10,000 mask-wearing recognition samples covering various application scenarios as our experimental data. Compared with traditional data-design methods, our method achieves better results and effectively reduces FA. We release all contributions to the research community for wider use; they will be available at https://github.com/huh30/opendatasets.
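The sample-selection idea above can be sketched as a simple partitioning step: confidently matching samples become positives, confidently non-matching ones negatives, and everything else is kept as an explicit uncertainty class rather than discarded. The threshold values and the per-sample confidence score below are illustrative assumptions, not the paper's actual selection criteria.

```python
# Partition a labeled pool into positive / negative / uncertain classes.
# Thresholds and the per-sample confidence score are illustrative
# assumptions, not the criteria used in the paper.
def partition_samples(samples, pos_thresh=0.9, neg_thresh=0.1):
    """samples: list of (id, confidence) pairs, confidence in [0, 1]."""
    positives, negatives, uncertain = [], [], []
    for sample_id, conf in samples:
        if conf >= pos_thresh:
            positives.append(sample_id)
        elif conf <= neg_thresh:
            negatives.append(sample_id)
        else:
            # Ambiguous samples become a third training category
            # instead of being thrown away.
            uncertain.append(sample_id)
    return positives, negatives, uncertain

pos, neg, unc = partition_samples(
    [("a", 0.95), ("b", 0.05), ("c", 0.50), ("d", 0.92)]
)
```

Keeping the middle band as its own class is what distinguishes this design from the usual hard positive/negative split.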
It is difficult to collect enough defect images to train deep learning networks for industrial production. Therefore, existing industrial anomaly detection methods prefer CNN-based unsupervised detection and localization networks to accomplish this task. However, these methods often fail because traditional end-to-end networks struggle to fit nonlinear models in high-dimensional space. Moreover, they make decisions by clustering the features of normal images, which is fundamentally not robust to texture variations. To this end, we propose a Vision Transformer based (ViT-based) unsupervised anomaly detection network. It utilizes hierarchical task learning and human experience to enhance its interpretability. Our network consists of a pattern generation network and a comparison network. The pattern generation network uses two ViT-based encoder modules to extract the features of two consecutive image patches, then uses a ViT-based decoder module to learn the human-designed style of these features and predict the third image patch. After that, we use a Siamese-based network to compute the similarity between the generated image patch and the original image patch. Finally, we refine the anomaly localization with a bidirectional inference strategy. Comparative experiments on the public MVTec dataset show that our method achieves 99.8% AUC, surpassing previous state-of-the-art methods. In addition, we give qualitative illustrations on our own leather and cloth datasets. The accurate segmentation results strongly demonstrate the accuracy of our method for anomaly localization.
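As a rough sketch of the comparison step, the Siamese branch can be thought of as scoring each patch by how far the predicted patch's features drift from the original's. The cosine-distance formulation below is an assumption for illustration, not the paper's exact similarity measure.

```python
import numpy as np

def patch_anomaly_scores(original_feats, generated_feats):
    """Per-patch anomaly score = 1 - cosine similarity between the
    original patch feature and the feature predicted for it.
    Shapes: (num_patches, feat_dim). A high score means the patch was
    poorly predicted from its neighbors, i.e. likely anomalous."""
    a = original_feats / np.linalg.norm(original_feats, axis=1, keepdims=True)
    b = generated_feats / np.linalg.norm(generated_feats, axis=1, keepdims=True)
    cos_sim = np.sum(a * b, axis=1)
    return 1.0 - cos_sim

rng = np.random.default_rng(0)
normal = rng.normal(size=(4, 8))
# A perfectly predicted patch scores (close to) zero.
scores_identical = patch_anomaly_scores(normal, normal.copy())
```

On normal regions the generator reproduces the patch well and the score stays near zero; defects break that prediction and push the score up.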
In the field of waste copper granule recycling, engineers must identify the various impurities in waste copper granules and estimate their mass proportions before rating. Such manual rating is costly and lacks objectivity and comprehensiveness. To address this problem, we propose a waste copper granule rating system based on machine vision and deep learning. We first formulate the rating task as a 2D image recognition and purity regression task. We then design a two-stage convolutional rating network to compute the mass purity and rating level of waste copper granules. Our rating network consists of a segmentation network and a purity regression network, which respectively compute the semantic segmentation heatmap and the purity result of the waste copper granules. After training the rating network on an augmented dataset, experiments on real waste copper granules demonstrate the effectiveness and superiority of the proposed network. Specifically, our system outperforms the manual method in terms of accuracy, effectiveness, robustness, and objectivity.
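The two-stage pipeline above reduces, in its simplest reading, to computing a purity estimate from the segmentation output and mapping it to a discrete level. The pixel-fraction proxy and the rating thresholds below are hypothetical placeholders, not the system's actual regression network or grading standard.

```python
import numpy as np

def purity_from_heatmap(seg_map, copper_label=1):
    """Crude purity proxy: fraction of pixels segmented as copper.
    (The paper uses a learned regression network instead.)"""
    return float(np.mean(seg_map == copper_label))

def rating_level(purity, thresholds=(0.99, 0.95, 0.90)):
    """Map purity to a rating level; the threshold values are made up."""
    for level, t in enumerate(thresholds, start=1):
        if purity >= t:
            return level
    return len(thresholds) + 1

seg = np.array([[1, 1, 1, 0],
                [1, 1, 1, 1]])
p = purity_from_heatmap(seg)  # 7 copper pixels out of 8 -> 0.875
```

The learned regression stage exists precisely because a raw pixel fraction like this ignores impurity density and granule depth.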
In line with the development of Industry 4.0, growing attention has been drawn to the field of surface defect detection. Improving efficiency and saving labor costs have steadily become matters of concern in industry, where deep learning based algorithms have outperformed traditional visual inspection methods in recent years. Existing deep learning based algorithms lean toward supervised learning, which not only requires a large amount of labeled data and considerable human labor, but is also inefficient and subject to certain limitations. In contrast, recent research shows that unsupervised learning has great potential for tackling these disadvantages in visual industrial anomaly detection. In this survey, we summarize the current challenges and give a detailed overview of recently proposed unsupervised algorithms for visual industrial anomaly detection covering five categories, whose innovation points and frameworks are described in detail. Meanwhile, information on publicly available datasets containing surface image samples is provided. By comparing methods across categories, the advantages and disadvantages of anomaly detection algorithms are summarized. This survey is expected to assist the research community and industry in developing a broader and more cross-domain perspective.
Deploying reliable deep learning techniques in interdisciplinary applications needs learned models to output accurate and (even more importantly) explainable predictions. Existing approaches typically explicate network outputs in a post-hoc fashion, under an implicit assumption that faithful explanations come from accurate predictions/classifications. We make the opposite claim: explanations boost (or even determine) classification. That is, end-to-end learning of explanation factors to augment discriminative representation extraction could be a more intuitive strategy to inversely assure fine-grained explainability, e.g., in those neuroimaging and neuroscience studies with high-dimensional data containing noisy, redundant, and task-irrelevant information. In this paper, we propose such an explainable geometric deep network dubbed NeuroExplainer, with applications to uncovering altered infant cortical development patterns associated with preterm birth. Given fundamental cortical attributes as network input, our NeuroExplainer adopts a hierarchical attention-decoding framework to learn fine-grained attentions and respective discriminative representations to accurately recognize preterm infants from term-born infants at term-equivalent age. NeuroExplainer learns the hierarchical attention-decoding modules under subject-level weak supervision coupled with targeted regularizers deduced from domain knowledge regarding brain development. These prior-guided constraints implicitly maximize the explainability metrics (i.e., fidelity, sparsity, and stability) in network training, driving the learned network to output detailed explanations and accurate classifications. Experimental results on the public dHCP benchmark suggest that NeuroExplainer led to quantitatively reliable explanation results that are qualitatively consistent with representative neuroimaging studies.
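Of the three explainability metrics named above, sparsity is the easiest to make concrete: an attention map is sparse when its mass concentrates on few vertices, which an entropy-style penalty captures. The regularizer below is a generic illustration under that assumption, not NeuroExplainer's actual loss term.

```python
import numpy as np

def sparsity_penalty(attention, eps=1e-12):
    """Entropy of a normalized attention map: lower entropy means a
    sparser, more focused explanation. Minimizing this as a regularizer
    pushes the network toward sparse attention; the exact form used in
    the paper may differ."""
    p = attention / (attention.sum() + eps)
    return float(-np.sum(p * np.log(p + eps)))

focused = np.array([0.97, 0.01, 0.01, 0.01])  # mass on one vertex
diffuse = np.array([0.25, 0.25, 0.25, 0.25])  # spread evenly
```

A focused map incurs a much smaller penalty than a uniform one, so gradient descent on this term sharpens the explanation.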
In the field of cross-modal retrieval, single encoder models tend to perform better than dual encoder models, but they suffer from high latency and low throughput. In this paper, we present a dual encoder model called BagFormer that utilizes a cross-modal interaction mechanism to improve recall performance without sacrificing latency and throughput. BagFormer achieves this through bag-wise interactions, which allow text to be transformed to a more appropriate granularity and entity knowledge to be incorporated into the model. Our experiments demonstrate that BagFormer achieves results comparable to state-of-the-art single encoder models on cross-modal retrieval tasks, while also offering efficient training and inference with 20.72 times lower latency and 25.74 times higher throughput.
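A minimal sketch of what a bag-wise interaction could look like: score a text-image pair by matching each text bag against its best image region and summing, in the style of late-interaction retrieval. This is an assumed simplification for illustration; BagFormer's actual interaction mechanism may differ in detail.

```python
import numpy as np

def bagwise_score(text_bags, image_feats):
    """Late-interaction-style score: for each text bag, take the best
    match against any image region, then sum over bags.
    text_bags: (num_bags, dim); image_feats: (num_regions, dim);
    both assumed L2-normalized."""
    sims = text_bags @ image_feats.T       # (num_bags, num_regions)
    return float(sims.max(axis=1).sum())   # best region per bag, summed

text = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])         # two text bags
image = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])        # two image regions
score = bagwise_score(text, image)         # bag 0 matches region 0
```

Because each bag only needs a max over precomputed region features, this kind of scoring stays cheap at inference time, unlike a full single-encoder pass per pair.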
Body Mass Index (BMI), age, height and weight are important indicators of human health conditions, which can provide useful information for plenty of practical purposes, such as health care, monitoring and re-identification. Most existing methods of health indicator prediction mainly use front-view body or face images. These inputs are hard to obtain in daily life and often lead to a lack of robustness in the models, considering their strict requirements on view and pose. In this paper, we propose to employ gait videos to predict health indicators, as they are more prevalent in surveillance and home monitoring scenarios. However, the study of health indicator prediction from gait videos using deep learning has been hindered by the small amount of open-sourced data. To address this issue, we analyse the similarity and relationship between the pose estimation and health indicator prediction tasks, and then propose a paradigm enabling deep learning on small health indicator datasets by pre-training on the pose estimation task. Furthermore, to better suit the health indicator prediction task, we propose the Global-Local Aware aNd Centrosymmetric Encoder (GLANCE) module. It first extracts local and global features by progressive convolutions and then fuses multi-level features by a centrosymmetric double-path hourglass structure in two different ways. Experiments demonstrate that the proposed paradigm achieves state-of-the-art results for predicting health indicators on MoVi, and that the GLANCE module is also beneficial for pose estimation on 3DPW.
We present IMAS, a method that segments the primary objects in videos without manual annotation in training or inference. Previous methods in unsupervised video object segmentation (UVOS) have demonstrated the effectiveness of motion as either input or supervision for segmentation. However, motion signals may be uninformative or even misleading in cases such as deformable objects and objects with reflections, causing unsatisfactory segmentation. In contrast, IMAS achieves Improved UVOS with Motion-Appearance Synergy. Our method has two training stages: 1) a motion-supervised object discovery stage that deals with motion-appearance conflicts through a learnable residual pathway; 2) a refinement stage with both low- and high-level appearance supervision to correct model misconceptions learned from misleading motion cues. Additionally, we propose motion-semantic alignment as a model-agnostic annotation-free hyperparam tuning method. We demonstrate its effectiveness in tuning critical hyperparams previously tuned with human annotation or hand-crafted hyperparam-specific metrics. IMAS greatly improves the segmentation quality on several common UVOS benchmarks. For example, we surpass previous methods by 8.3% on DAVIS16 benchmark with only standard ResNet and convolutional heads. We intend to release our code for future research and applications.
Current audio-visual separation methods share a standard architecture design where an audio encoder-decoder network is fused with visual encoding features at the encoder bottleneck. This design confounds the learning of multi-modal feature encoding with robust sound decoding for audio separation. To generalize to a new instrument, one must finetune the entire visual and audio network for all musical instruments. We re-formulate the visual-sound separation task and propose Instrument as Query (iQuery), with a flexible query expansion mechanism. Our approach ensures cross-modal consistency and cross-instrument disentanglement. We utilize "visually named" queries to initiate the learning of audio queries and use cross-modal attention to remove potential sound source interference in the estimated waveforms. To generalize to a new instrument or event class, drawing inspiration from text-prompt design, we insert an additional query as an audio prompt while freezing the attention mechanism. Experimental results on three benchmarks demonstrate that our iQuery improves audio-visual sound source separation performance.
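The prompt-style extension described above amounts to concatenating one extra learnable query embedding while keeping the attention computation itself fixed. The toy sketch below, with hypothetical shapes and plain scaled dot-product attention, illustrates only that structural idea, not iQuery's actual architecture.

```python
import numpy as np

def attend(queries, keys, values):
    """Plain scaled dot-product attention (the 'frozen' mechanism)."""
    d = keys.shape[-1]
    logits = queries @ keys.T / np.sqrt(d)
    logits -= logits.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

# Generalizing to a new class = appending one learnable query vector;
# the attention function itself is left untouched.
old_queries = np.zeros((3, 4))                 # existing instrument queries
new_query = np.ones((1, 4))                    # fresh "audio prompt" query
queries = np.vstack([old_queries, new_query])  # now 4 queries

keys = np.eye(4)
values = np.eye(4)
out = attend(queries, keys, values)            # one output per query
```

Only the new row needs gradient updates when adding an instrument, which is what makes the expansion cheap compared with finetuning the whole network.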