Audio-visual approaches involving visual inputs have laid the foundation for recent progress in speech separation. However, the optimization of the concurrent usage of auditory and visual inputs is still an active research area. Inspired by the cortico-thalamo-cortical circuit, in which the sensory processing mechanisms of different modalities modulate one another via the non-lemniscal sensory thalamus, we propose a novel cortico-thalamo-cortical neural network (CTCNet) for audio-visual speech separation (AVSS). First, the CTCNet learns hierarchical auditory and visual representations in a bottom-up manner in separate auditory and visual subnetworks, mimicking the functions of the auditory and visual cortical areas. Then, inspired by the large number of connections between cortical regions and the thalamus, the model fuses the auditory and visual information in a thalamic subnetwork through top-down connections. Finally, the model transmits this fused information back to the auditory and visual subnetworks, and the above process is repeated several times. The results of experiments on three speech separation benchmark datasets show that CTCNet remarkably outperforms existing AVSS methods with considerably fewer parameters. These results suggest that mimicking the anatomical connectome of the mammalian brain has great potential for advancing the development of deep neural networks. The project repository is available at https://github.com/JusperLee/CTCNet.
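To make the repeated fusion cycle concrete, below is a minimal PyTorch-style sketch of the general idea (separate auditory and visual subnetworks, a shared fusion module, and top-down feedback repeated for a fixed number of cycles). The module choices, dimensions, and number of cycles are illustrative assumptions and do not reproduce CTCNet's actual architecture.

```python
# A minimal sketch (not the authors' implementation) of the cyclic fusion idea:
# separate auditory and visual subnetworks, a shared "thalamic" fusion module,
# and repeated top-down feedback. All module names, sizes, and the number of
# cycles are illustrative assumptions.
import torch
import torch.nn as nn

class CyclicAVFusion(nn.Module):
    def __init__(self, audio_dim=256, video_dim=256, fused_dim=256, cycles=3):
        super().__init__()
        self.audio_net = nn.GRU(audio_dim, audio_dim, batch_first=True)  # stands in for the auditory subnetwork
        self.video_net = nn.GRU(video_dim, video_dim, batch_first=True)  # stands in for the visual subnetwork
        self.fuse = nn.Linear(audio_dim + video_dim, fused_dim)          # "thalamic" fusion of both streams
        self.to_audio = nn.Linear(fused_dim, audio_dim)                  # top-down feedback to the audio stream
        self.to_video = nn.Linear(fused_dim, video_dim)                  # top-down feedback to the video stream
        self.cycles = cycles

    def forward(self, audio, video):
        # audio: (B, T, audio_dim), video: (B, T, video_dim), assumed time-aligned
        for _ in range(self.cycles):
            audio, _ = self.audio_net(audio)              # bottom-up auditory processing
            video, _ = self.video_net(video)              # bottom-up visual processing
            fused = torch.relu(self.fuse(torch.cat([audio, video], dim=-1)))
            audio = audio + self.to_audio(fused)          # transmit fused information back
            video = video + self.to_video(fused)
        return audio                                      # e.g. used downstream to estimate a separation mask

model = CyclicAVFusion()
a, v = torch.randn(2, 100, 256), torch.randn(2, 100, 256)
out = model(a, v)   # (2, 100, 256)
```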
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
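As a concrete illustration of two of the practices reported above (k-fold cross-validation on the training set and ensembling of multiple identical models), here is a small generic sketch; the classifier, dataset, and fold count are placeholders unrelated to any specific challenge submission.

```python
# A small, generic sketch of k-fold cross-validation followed by ensembling of
# the resulting fold models. The classifier and data here are placeholders.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = load_breast_cancer(return_X_y=True)
X_train, y_train, X_test = X[:400], y[:400], X[400:]

fold_models = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X_train):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train[train_idx], y_train[train_idx])
    print("fold validation accuracy:", model.score(X_train[val_idx], y_train[val_idx]))
    fold_models.append(model)

# Ensemble of identical models: average the predicted probabilities of all folds.
probs = np.mean([m.predict_proba(X_test) for m in fold_models], axis=0)
ensemble_pred = probs.argmax(axis=1)
```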
We propose a sparse end-to-end multi-person pose regression framework, termed QueryPose, which can directly predict multi-person keypoint sequences from the input image. Existing end-to-end methods rely on dense representations to preserve the spatial detail and structure for precise keypoint localization. However, the dense paradigm introduces complex and redundant post-processing during inference. In our framework, each human instance is encoded by several learnable spatial-aware part-level queries associated with an instance-level query. First, we propose the Spatial Part Embedding Generation Module (SPEGM), which uses a local spatial attention mechanism to generate several spatial-sensitive part embeddings that contain spatial details and structural information for enhancing the part-level queries. Second, we introduce the Selective Iteration Module (SIM) to adaptively update the sparse part-level queries stage by stage via the generated spatial-sensitive part embeddings. Based on the two proposed modules, the part-level queries are able to fully encode the spatial details and structural information for precise keypoint regression. With bipartite matching, QueryPose avoids hand-designed post-processing and surpasses the existing dense end-to-end methods with 73.6 AP on the MS COCO mini-val set and 72.7 AP on the CrowdPose test set. Code is available at https://github.com/buptxyb666/QueryPose.
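For readers unfamiliar with the matching step, the following is a minimal sketch of the bipartite (Hungarian) matching used by sparse end-to-end methods to assign predictions to ground-truth instances during training. The simple L2 keypoint cost is an illustrative assumption rather than QueryPose's actual matching cost.

```python
# A minimal sketch of bipartite matching between predicted instances and
# ground-truth instances, using a simple mean L2 keypoint distance as the cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_instances(pred_kpts, gt_kpts):
    """pred_kpts: (N, K, 2) predicted keypoints, gt_kpts: (M, K, 2) ground truth."""
    # Pairwise cost: mean L2 distance over the K keypoints of each (pred, gt) pair.
    cost = np.linalg.norm(pred_kpts[:, None] - gt_kpts[None], axis=-1).mean(-1)  # (N, M)
    pred_idx, gt_idx = linear_sum_assignment(cost)
    return list(zip(pred_idx, gt_idx))

pred = np.random.rand(5, 17, 2)   # 5 instance queries, 17 COCO keypoints each
gt = np.random.rand(3, 17, 2)     # 3 annotated people
print(match_instances(pred, gt))  # each ground-truth person is matched to exactly one query
```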
Implementing fully automatic unmanned surface vehicles (USVs) for water quality monitoring is challenging, since effectively collecting environmental data while keeping the platform stable and environmentally friendly is hard to achieve. To address this problem, we construct a USV that can automatically navigate an efficient path to sample water quality parameters in order to monitor the aquatic environment. The detection device needs to be stable enough to withstand hostile environments and climates, while an overly large hull would disturb the aquaculture environment. Meanwhile, planning an efficient path for information collection has to deal with the contradiction between the energy budget and the amount of information gathered in the coverage region. To tackle these challenges, we provide a USV platform that balances mobility, stability, and portability, owing to its special round-shaped structure and redundant motion design. For informative planning, we combine TSP and CPP algorithms to construct a plan that collects more data within a certain range under the energy restriction. We designed a fish existence prediction scenario to verify the novel system in both simulation and field experiments. The novel aquaculture environment monitoring system significantly reduces the burden of manual operation in fishery inspection. Additionally, the simplicity of the sensor setup and the minimal cost of the platform enable other possible applications in aquatic exploration and commercial use.
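To illustrate the informative-planning trade-off between data collection and energy, below is a toy sketch of a nearest-neighbour, TSP-style tour over candidate sampling points under a travel budget; it only hints at the TSP component and does not reproduce the combined TSP/CPP planner or its parameters.

```python
# A toy sketch of "visit informative sampling points under an energy budget":
# a greedy nearest-neighbour tour over candidate sampling points that stops
# when the travel budget is exhausted. Purely illustrative.
import numpy as np

def greedy_tour(points, start, budget):
    """points: (N, 2) candidate sampling locations; start: (2,); budget: max travel distance."""
    remaining = list(range(len(points)))
    pos, used, tour = np.asarray(start, float), 0.0, []
    while remaining:
        dists = [np.linalg.norm(points[i] - pos) for i in remaining]
        j = int(np.argmin(dists))
        if used + dists[j] > budget:          # not enough energy left to reach the next point
            break
        i = remaining.pop(j)
        used += dists[j]
        pos = points[i]
        tour.append(i)
    return tour, used

pts = np.random.rand(20, 2) * 100.0           # sampling points in a 100 m x 100 m pond
tour, cost = greedy_tour(pts, start=(0.0, 0.0), budget=300.0)
print(f"visited {len(tour)} points using {cost:.1f} m of the 300 m budget")
```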
Offline reinforcement learning has attracted great interest for addressing the application challenges of conventional reinforcement learning. Offline reinforcement learning uses previously collected datasets to train agents without any interaction. To address the overestimation of OOD (out-of-distribution) actions, conservative estimation assigns lower values to all inputs. Previous conservative estimation methods usually struggle to eliminate the impact of OOD actions on Q-value estimation. In addition, these algorithms usually need to sacrifice some computational efficiency to achieve conservative estimation. In this paper, we propose a simple conservative estimation method, Double Conservative Estimation (DCE), which uses two conservative estimation methods to constrain the policy. Our algorithm introduces a V function to avoid errors from out-of-distribution actions while implicitly deriving a conservative estimate. Moreover, our algorithm uses a controllable penalty term to adjust the degree of conservatism during training. Theoretically, we show how this method affects the estimation of OOD and in-distribution actions. Our experiments show that the two conservative estimation methods affect the value estimates of all state-action pairs. DCE demonstrates state-of-the-art performance on D4RL.
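To make the notion of conservative estimation concrete, the following is a generic PyTorch sketch of a conservative penalty that pushes down Q-values of out-of-distribution actions relative to dataset actions. It illustrates the general principle only and is not the DCE algorithm; the penalty weight alpha is an assumed hyperparameter.

```python
# A generic sketch of conservative value estimation for offline RL: the usual
# TD loss plus a penalty that lowers Q-values of randomly sampled (likely OOD)
# actions and raises Q-values of dataset actions. Not the DCE algorithm itself.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(8 + 2, 256), nn.ReLU(), nn.Linear(256, 1))  # Q(s, a)

def conservative_loss(states, actions, td_targets, alpha=1.0):
    # states: (B, 8), actions: (B, 2) from the offline dataset, td_targets: (B, 1)
    q_data = q_net(torch.cat([states, actions], dim=-1))
    td_loss = ((q_data - td_targets) ** 2).mean()

    # Penalty term controlling the degree of conservatism via alpha.
    random_actions = torch.rand_like(actions) * 2 - 1
    q_ood = q_net(torch.cat([states, random_actions], dim=-1))
    penalty = q_ood.mean() - q_data.mean()

    return td_loss + alpha * penalty

loss = conservative_loss(torch.randn(32, 8), torch.rand(32, 2) * 2 - 1, torch.randn(32, 1))
loss.backward()
```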
We present a novel approach to relocalization or place recognition, a fundamental problem to be solved in many robotics, automation, and AR applications. Rather than relying on often unstable appearance information, we consider the case where the reference map is given in the form of localized objects. Our localization framework relies on 3D semantic object detection followed by association with the objects in the map. The set of possible pairwise associations is grown via hierarchical clustering based on a merging metric that evaluates spatial compatibility. The latter notably uses information about relative object configurations, which is invariant to global transformations. The association set is updated and expanded as the camera incrementally explores the environment and detects more objects. We test our algorithm in several challenging scenarios, including dynamic scenes, large viewpoint changes, and scenes with repeated instances. Our experiments show that our approach outperforms prior art in both robustness and accuracy.
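The following toy sketch illustrates the association-growing idea: candidate detection-to-map pairings are scored by how well they preserve relative inter-object distances (a quantity invariant to the unknown global transform) and grouped by hierarchical clustering. The 2D layout, thresholds, and single-linkage choice are illustrative assumptions, not the paper's actual merging metric.

```python
# A toy sketch: score candidate (detection -> map object) pairings by how well
# they preserve relative object distances, then keep the largest mutually
# compatible cluster found by hierarchical clustering.
import numpy as np
from itertools import product
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

map_objects = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0]])     # reference map (2D for brevity)
detections = map_objects + np.array([10.0, -2.0])                 # same layout under an unknown transform

candidates = list(product(range(len(detections)), range(len(map_objects))))  # all (det, map) pairings

def incompatibility(a, b):
    """How badly pairings a=(i, m) and b=(j, n) disagree on relative geometry."""
    (i, m), (j, n) = a, b
    if i == j or m == n:
        return 1e6                                                # enforce one-to-one associations
    return abs(np.linalg.norm(detections[i] - detections[j]) -
               np.linalg.norm(map_objects[m] - map_objects[n]))

n = len(candidates)
dist = np.array([[incompatibility(candidates[i], candidates[j]) for j in range(n)] for i in range(n)])
np.fill_diagonal(dist, 0.0)
labels = fcluster(linkage(squareform(dist, checks=False), method="single"), t=0.1, criterion="distance")

best = max(set(labels), key=lambda l: (labels == l).sum())        # largest mutually compatible set
print([candidates[k] for k in range(n) if labels[k] == best])     # recovered object associations
```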
Machine learning models have shown promising performance in many domains. However, concerns that they may be biased against specific groups hinder their adoption in high-stakes applications. It is therefore essential to ensure fairness in machine learning models. Most previous efforts require access to sensitive attributes to mitigate bias. Nevertheless, due to growing awareness of privacy and legal compliance, obtaining large-scale data with sensitive attributes is often infeasible. Hence, an important research question is how to make fair predictions under privacy. In this paper, we study the novel problem of fair classification in a semi-private setting, where most sensitive attributes are private and only a small amount of clean sensitive attributes is available. To this end, we propose a novel framework, FairSP, which first learns to correct noisy sensitive attributes under privacy guarantees by exploiting the limited clean sensitive attributes. It then jointly models the corrected and clean data in an adversarial manner for debiasing and prediction. Theoretical analysis shows that the proposed model can ensure fairness when most sensitive attributes are private. Experimental results on real-world datasets demonstrate the effectiveness of the proposed model in making fair predictions under privacy while maintaining high accuracy.
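The sketch below illustrates only the semi-private setup and the first-stage idea of correcting noisy sensitive attributes with a small clean subset, then measuring a standard demographic-parity gap. The data is synthetic and FairSP's adversarial debiasing stage is not reproduced.

```python
# A simplified sketch of the semi-private setting: a few clean sensitive
# attributes are used to train a corrector for the noisy ones, and a standard
# fairness metric is then computed with the corrected attributes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                                           # features
s_true = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)    # sensitive attribute
s_noisy = np.where(rng.random(n) < 0.3, 1 - s_true, s_true)           # privatised / noisy version
y_pred = (X[:, 1] + 0.5 * X[:, 0] > 0).astype(int)                    # predictions of some downstream model

clean_idx = rng.choice(n, size=100, replace=False)                    # the few clean labels available

# Stage-1 idea: learn to predict the true sensitive attribute from features + noisy attribute.
corrector = LogisticRegression().fit(
    np.column_stack([X[clean_idx], s_noisy[clean_idx]]), s_true[clean_idx])
s_corrected = corrector.predict(np.column_stack([X, s_noisy]))

def demographic_parity_gap(pred, group):
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

print("gap with noisy attributes:    ", demographic_parity_gap(y_pred, s_noisy))
print("gap with corrected attributes:", demographic_parity_gap(y_pred, s_corrected))
```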
While self-supervised representation learning (SSL) has proven effective with large models, a huge gap still exists between SSL and supervised methods on lightweight models when following the same solutions. We dig into this problem and find that lightweight models are prone to collapse in the semantic space when simply performing instance-wise contrast. To address this issue, we propose a relation-wise contrastive paradigm with Relation Knowledge Distillation (ReKD). We introduce a heterogeneous teacher to explicitly mine semantic information and transfer novel relation knowledge to the student (the lightweight model). Theoretical analysis supports our main concern about instance-wise contrast and verifies the effectiveness of our relation-wise contrastive learning. Extensive experimental results also demonstrate that our method achieves significant improvements on multiple lightweight models. In particular, the linear evaluation on AlexNet notably improves the current state of the art from 44.7% to 50.1%, making this the first work to approach the supervised accuracy of 50.5%. Code will be made available.
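As a rough illustration of relation-wise distillation, the sketch below trains a student to match the teacher's distribution of pairwise similarities within a batch rather than individual embeddings. The temperature and the KL formulation are illustrative assumptions, not ReKD's exact objective.

```python
# A minimal sketch of relation-wise distillation: the student mimics the
# teacher's pairwise-similarity distribution over a batch, rather than matching
# embeddings one by one.
import torch
import torch.nn.functional as F

def relation_distill_loss(student_feats, teacher_feats, tau=0.1):
    # feats: (B, D) embeddings of the same batch from student and teacher
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats, dim=-1)
    sim_s = s @ s.t() / tau                                  # (B, B) pairwise similarity logits
    sim_t = t @ t.t() / tau
    mask = ~torch.eye(len(s), dtype=torch.bool)              # drop self-similarity
    log_p_s = F.log_softmax(sim_s[mask].view(len(s), -1), dim=-1)
    p_t = F.softmax(sim_t[mask].view(len(s), -1), dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean")     # student mimics teacher's relations

student = torch.randn(32, 128, requires_grad=True)   # e.g. lightweight-model embeddings
teacher = torch.randn(32, 256)                        # heterogeneous teacher embeddings (different dim is fine)
loss = relation_distill_loss(student, teacher)
loss.backward()
```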
Scene text detection remains a challenging task, since there may be extremely small or low-resolution strokes, as well as close or arbitrarily shaped text. In this paper, we propose to detect text effectively by capturing fine-grained strokes and inferring structural relations between hierarchical representations in a graph. Unlike existing methods that represent a text region by a sequence of points or rectangular boxes, we directly localize the strokes of each text instance with a Stroke Assisted Prediction Network (SAPN). In addition, a Hierarchical Relation Graph Network (HRGN) is adopted to perform relational reasoning and predict linkage likelihoods, effectively splitting close text instances and grouping node classification results into arbitrarily shaped text regions. We introduce a small dataset with stroke-level annotations, namely SyntheTroke, for offline pre-training of our model. Experiments on wide-ranging benchmarks verify the state-of-the-art performance of our method. Our dataset and code will be available.
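The small sketch below illustrates only the final grouping step: given predicted linkage likelihoods between text-component nodes, components linked above a threshold are merged into one text instance via union-find. The probabilities and threshold are placeholders, and HRGN's relational reasoning is not reproduced.

```python
# A small sketch of grouping text-component nodes into text instances from
# predicted linkage likelihoods, using a simple union-find over the graph.
import numpy as np

def group_components(link_probs, threshold=0.5):
    """link_probs: (N, N) symmetric matrix of predicted link likelihoods."""
    n = len(link_probs)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if link_probs[i, j] > threshold:
                parent[find(i)] = find(j)   # merge the two nodes' groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())            # each list of node indices forms one text region

probs = np.array([[1.0, 0.9, 0.1, 0.0],     # nodes 0-1 belong to one instance,
                  [0.9, 1.0, 0.2, 0.1],     # nodes 2-3 to another
                  [0.1, 0.2, 1.0, 0.8],
                  [0.0, 0.1, 0.8, 1.0]])
print(group_components(probs))              # [[0, 1], [2, 3]]
```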
Video scene parsing is a long-standing challenging task in computer vision, aiming to assign predefined semantic labels to the pixels of all frames in a given video. Compared with image semantic segmentation, this task focuses more on how to exploit temporal information to obtain higher prediction accuracy. In this report, we present our solution to the 1st Video Scene Parsing in the Wild Challenge, which achieves a mIoU of 57.44 and wins 2nd place (our team name is CharlesBlwx).
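For reference, the quoted metric (mean intersection over union, mIoU) can be computed as sketched below from a per-class confusion matrix over all labelled pixels; the labels, predictions, and class count here are random placeholders.

```python
# A brief sketch of the mIoU metric: per-class IoU from a confusion matrix,
# averaged over the classes that actually occur.
import numpy as np

def mean_iou(pred, gt, num_classes):
    """pred, gt: integer label arrays of the same shape."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (gt.ravel(), pred.ravel()), 1)            # confusion matrix
    inter = np.diag(conf).astype(float)
    union = conf.sum(0) + conf.sum(1) - np.diag(conf)
    ious = inter[union > 0] / union[union > 0]                # skip classes absent from both
    return ious.mean()

gt = np.random.randint(0, 21, size=(4, 480, 640))             # placeholder labels for 4 frames, 21 classes
pred = np.random.randint(0, 21, size=(4, 480, 640))
print(f"mIoU: {100 * mean_iou(pred, gt, num_classes=21):.2f}")
```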