The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
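The two strategies the survey highlights most often, patch-based training for oversized samples and k-fold cross-validation on the training set, can be sketched in a few lines. This is a minimal illustration only; the function names, the 16x16 patch size, and the fold count are our own choices, not taken from the survey:

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Slide a window over a 2-D image and collect fixed-size crops.

    A minimal sketch of patch-based training: samples too large to be
    processed at once are cut into patches that fit in memory.
    """
    h, w = image.shape
    ph, pw = patch_size
    patches = []
    for y in range(0, h - ph + 1, stride):
        for x in range(0, w - pw + 1, stride):
            patches.append(image[y:y + ph, x:x + pw])
    return np.stack(patches)

def kfold_indices(n_samples, k, seed=0):
    """Plain k-fold split over sample indices, yielding (train, val) pairs."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

image = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
patches = extract_patches(image, patch_size=(16, 16), stride=16)
print(patches.shape)  # (16, 16, 16) -> 16 non-overlapping 16x16 patches
```

Downsampling and 2D-slicing of 3D tasks follow the same pattern: reduce each sample to pieces the model can consume, then aggregate predictions.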
Multimodal learning, especially large-scale multimodal pre-training, has developed rapidly over the past few years and brought some of the greatest advances in artificial intelligence (AI). Despite its effectiveness, understanding the underlying mechanisms of multimodal pre-training models remains a grand challenge. Revealing the explainability of such models is likely to enable breakthroughs of novel learning paradigms in the AI field. To this end, given the multimodal nature of the human brain, we propose to explore the explainability of multimodal learning models with the aid of non-invasive brain imaging technologies such as functional magnetic resonance imaging (fMRI). Concretely, we first present a newly designed multimodal foundation model pre-trained on 15 million image-text pairs, which shows strong multimodal understanding and generalization abilities on a variety of cognitive downstream tasks. Further, from the perspective of neural encoding (based on our foundation model), we find that both visual and linguistic encoders trained multimodally are more brain-like compared with their unimodal counterparts. In particular, we identify a number of brain regions where multimodally trained encoders demonstrate better neural encoding performance. This is consistent with the findings of existing studies on exploring multisensory integration in the brain. Therefore, we believe that multimodal foundation models are more suitable tools for neuroscientists to study the multimodal signal processing mechanisms in the human brain. Our findings also demonstrate the potential of multimodal foundation models as ideal computational simulators to promote both AI-for-brain and brain-for-AI research.
This paper describes the speech recognition system of the THUEE team for the IARPA Open Automatic Speech Recognition Challenge (OpenASR21), together with further experimental explorations. We achieved excellent results under both the Constrained and the Constrained-plus training conditions. For the Constrained training condition, we built our basic ASR system on the standard hybrid architecture. To alleviate the out-of-vocabulary (OOV) problem, we extended the pronunciation lexicon using grapheme-to-phoneme (G2P) techniques for both OOV and potential new words. Standard acoustic model structures such as CNN-TDNN-F and CNN-TDNN-F-A were adopted. In addition, multiple data augmentation techniques were applied. For the Constrained-plus training condition, we used the self-supervised learning framework wav2vec2.0. We experimented with various fine-tuning techniques with the Connectionist Temporal Classification (CTC) criterion on top of the publicly available pre-trained XLSR-53 model. We found that the frontend feature extractor plays an important role when applying the wav2vec2.0 pre-trained model to the encoder-decoder based CTC/Attention ASR architecture. Extra improvements can be achieved by using the target language as the fine-tuning language of the CTC model used as the frontend feature extractor.
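The G2P-based lexicon extension mentioned in the abstract can be illustrated with a toy sketch. Everything here is a hypothetical stand-in: `g2p_rules` is a naive per-grapheme table, whereas a real system (including, presumably, the one described) would use a trained statistical or neural G2P model:

```python
# Toy rule table: one phoneme per grapheme (hypothetical, for illustration only).
g2p_rules = {"a": "AH", "b": "B", "c": "K", "d": "D", "e": "EH",
             "g": "G", "o": "OW", "t": "T"}

def g2p(word):
    """Map each grapheme to a phoneme via the rule table (naive baseline)."""
    return " ".join(g2p_rules[ch] for ch in word if ch in g2p_rules)

def extend_lexicon(lexicon, vocabulary):
    """Add G2P-generated pronunciations for words missing from the lexicon,
    keeping existing hand-crafted entries untouched."""
    out = dict(lexicon)
    for word in vocabulary:
        if word not in out:
            out[word] = g2p(word)
    return out

lexicon = {"cat": "K AE T"}          # existing entry is preserved as-is
extended = extend_lexicon(lexicon, ["cat", "dog", "bag"])
print(extended["dog"])  # D OW G
```

The same mechanism covers both OOV words from the training transcripts and potential new words harvested from external text.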
Spiking neural networks (SNNs) are a practical approach towards more data-efficient learning, achieved by simulating how neurons leverage temporal information. In this paper, we propose the Temporal-Channel Joint Attention (TCJA) architectural unit, an efficient SNN technique that relies on attention mechanisms, effectively enforcing the relevance of the spike sequence along both spatial and temporal dimensions. Our essential technical contributions are: 1) compressing the spike stream into an average matrix by employing a squeeze operation, then using two local attention mechanisms with efficient 1-D convolutions to establish temporal-wise and channel-wise relations for feature extraction in a flexible fashion; 2) utilizing a Cross-Convolutional Fusion (CCF) layer to model the inter-dependencies between the temporal and channel scopes, which breaks the independence of the two dimensions and realizes the interaction between features. By jointly exploring and recalibrating the data stream, our method outperforms the state of the art (SOTA) by up to 15.7% in top-1 classification accuracy on all tested mainstream static and neuromorphic datasets, including Fashion-MNIST, CIFAR10-DVS, N-Caltech 101, and DVS128 Gesture.
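The squeeze-then-attend idea can be sketched in plain numpy. This is not the paper's exact TCJA/CCF implementation: the learned 1-D convolutions are replaced by fixed averaging kernels, and the cross-convolutional fusion is approximated by an element-wise product of the two attention maps:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d(rows, kernel):
    """'same'-padded 1-D convolution applied independently to each row."""
    return np.stack([np.convolve(r, kernel, mode="same") for r in rows])

def tcja_attention(spikes, k_t, k_c):
    """Sketch of temporal-channel joint attention over a spike tensor.

    spikes: (T, C, H, W). The squeeze averages out the spatial dims to a
    (T, C) matrix; two 1-D convolutions then score temporal and channel
    correlations, and their fused sigmoid gate rescales the input stream.
    k_t and k_c stand in for learned convolution weights.
    """
    z = spikes.mean(axis=(2, 3))           # squeeze -> (T, C) average matrix
    temporal = conv1d(z.T, k_t).T          # convolve along time, per channel
    channel = conv1d(z, k_c)               # convolve along channels, per step
    gate = sigmoid(temporal * channel)     # fused gate (CCF-like interaction)
    return spikes * gate[:, :, None, None]

rng = np.random.default_rng(0)
spikes = (rng.random((4, 8, 5, 5)) < 0.3).astype(np.float32)
out = tcja_attention(spikes, k_t=np.ones(3) / 3, k_c=np.ones(3) / 3)
print(out.shape)  # (4, 8, 5, 5)
```

The gate couples the two dimensions because each entry mixes a temporal score and a channel score computed from the same squeezed matrix.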
To enable large-scale machine learning in bandwidth-hungry environments such as wireless networks, significant progress has recently been made in designing communication-efficient federated learning algorithms with the aid of communication compression. On the other hand, privacy preservation, especially at the client level, is another important desideratum that has not yet been addressed simultaneously in the presence of advanced communication compression techniques. In this paper, we propose a unified framework to enhance the communication efficiency of private federated learning via communication compression. Exploiting both general compression operators and local differential privacy, we first examine a simple algorithm that applies compression directly to differentially-private stochastic gradient descent, and identify its limitations. We then propose a unified framework SoteriaFL for private federated learning, which accommodates a general family of local gradient estimators including popular stochastic variance-reduced gradient methods and the state-of-the-art shifted compression scheme. We provide a comprehensive characterization of its performance trade-offs in terms of privacy, utility, and communication complexity, where SoteriaFL is shown to achieve better communication complexity without sacrificing privacy or utility than other private federated learning algorithms without communication compression.
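The simple baseline examined first, compression applied directly on top of differentially-private gradient steps, can be sketched as follows. The clipping threshold, Gaussian noise scale, and choice of top-k as the compressor are illustrative assumptions, not the paper's calibrated parameters:

```python
import numpy as np

def clip(v, c):
    """Rescale v so its L2 norm is at most c (standard DP gradient clipping)."""
    n = np.linalg.norm(v)
    return v if n <= c else v * (c / n)

def gaussian_mechanism(v, sigma, rng):
    """Add isotropic Gaussian noise for (local) differential privacy."""
    return v + rng.normal(0.0, sigma, size=v.shape)

def topk_compress(v, k):
    """Keep the k largest-magnitude coordinates; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(0)
grad = rng.normal(size=100)                      # a client's local gradient
private = gaussian_mechanism(clip(grad, 1.0), sigma=0.5, rng=rng)
msg = topk_compress(private, k=10)               # what the client uploads
print(np.count_nonzero(msg))  # 10
```

The limitation the paper points to is visible even here: the noise is injected before compression, so the compressed message carries a large noise-to-signal ratio; SoteriaFL instead couples compression with variance-reduced (shifted) gradient estimators.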
Due to the communication bottleneck in distributed and federated learning applications, algorithms using communication compression have attracted significant attention and are widely used in practice. Moreover, the total number of heterogeneous clients is usually very large and the server is unable to communicate with all clients in each round, so there is client variance in federated learning. In this paper, we address both of these issues together by proposing compressed and client-variance-reduced methods. Concretely, we introduce COFIG and FRECON, which successfully enjoy communication compression with client variance reduction. The total communication round complexity of COFIG is $O\left(\frac{(1+\omega)^{3/2}\sqrt{n}}{s\epsilon^2} + \frac{(1+\omega)n^{2/3}}{s\epsilon^2}\right)$ in the nonconvex setting, where $n$ is the total number of clients, $s$ is the number of clients communicated with per round, $\epsilon$ is the convergence error, and $\omega$ is the parameter of the compression operator. Besides, our FRECON can converge faster than COFIG in the nonconvex setting, converging within $O\left(\frac{(1+\omega)\sqrt{n}}{s\epsilon^2}\right)$ communication rounds. In the convex setting, COFIG converges within $O\left(\frac{(1+\omega)\sqrt{n}}{s\epsilon^{3/2}}\right)$ communication rounds, which is also the first convergence result for compression schemes that do not communicate with all clients in each round. In summary, both COFIG and FRECON do not need to communicate with all clients, and they provide the first/faster convergence results for convex and nonconvex federated learning, while previous works either require full client communication (thus are not practical) or obtain worse convergence results.
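The compression parameter $\omega$ in the rates above refers to a (possibly randomized) compressor $C$ satisfying $\mathbb{E}[C(x)] = x$ and $\mathbb{E}\|C(x)-x\|^2 \le \omega \|x\|^2$. A standard example of such an operator is unbiased random-k sparsification, for which $\omega = d/k - 1$; the operator choice here is ours for illustration, since the paper works with a general compressor:

```python
import numpy as np

def rand_k(x, k, rng):
    """Unbiased random-k sparsification: keep k random coordinates scaled
    by d/k. This is an omega-compressor with omega = d/k - 1, meaning
    E[C(x)] = x and E||C(x) - x||^2 = omega * ||x||^2."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
omega = x.size / 100 - 1            # = 9 for d = 1000, k = 100
# Empirically check the variance identity over many draws.
samples = np.stack([rand_k(x, 100, rng) for _ in range(2000)])
mse = ((samples - x) ** 2).sum(axis=1).mean()
ratio = mse / (omega * (x ** 2).sum())
print(round(float(ratio), 2))  # close to 1.0 (Monte-Carlo estimate)
```

Larger $\omega$ means a more aggressive compressor, which is exactly how $\omega$ enters the communication round complexities quoted above.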
We present a customized 3D mesh Transformer model for the pose transfer task. As 3D pose transfer is essentially a deformation procedure dependent on the given meshes, the intuition of this work is to perceive the geometric inconsistency between the given meshes with the powerful self-attention mechanism. Specifically, we propose a novel geometry-contrastive Transformer that has an efficient 3D structured perceiving ability over the global geometric inconsistencies across the given meshes. Moreover, locally, a simple yet efficient central geodesic contrastive loss is further proposed to improve the learning of regional geometric inconsistency. At last, we present a latent isometric regularization module, together with a novel semi-synthesized dataset, for the cross-dataset 3D pose transfer task towards unknown spaces. The extensive experimental results demonstrate the efficacy of our approach, with state-of-the-art quantitative performance on the SMPL-NPT, FAUST, and our newly proposed SMG-3D datasets, as well as promising qualitative results on the MG-cloth and SMAL datasets. The results prove that our method can achieve robust 3D pose transfer and generalize to challenging meshes from unknown spaces in cross-dataset tasks. Code and dataset are available at: https://github.com/mikecheninoulu/CGT.
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
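One simple instantiation of image-level photometric alignment is matching the per-channel first and second moments of a target-domain image to source-domain statistics. The sketch below rests on that assumption; the paper's global photometric alignment module may well be more elaborate:

```python
import numpy as np

def photometric_align(target, src_mean, src_std):
    """Shift and scale a target-domain image so its per-channel mean/std
    match the source domain's. A minimal image-level alignment sketch,
    not necessarily the module proposed in the paper."""
    t_mean = target.mean(axis=(0, 1), keepdims=True)
    t_std = target.std(axis=(0, 1), keepdims=True) + 1e-8
    return (target - t_mean) / t_std * src_std + src_mean

rng = np.random.default_rng(0)
target = rng.random((32, 32, 3)) * 80 + 100     # a bright target-domain image
aligned = photometric_align(target,
                            src_mean=np.array([90.0, 95.0, 100.0]),
                            src_std=np.array([40.0, 40.0, 45.0]))
print(aligned.mean(axis=(0, 1)).round(1))  # [ 90.  95. 100.]
```

Feature-level alignment then operates analogously but in feature space, projecting target pixel features onto the source feature manifold rather than matching raw image statistics.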
The study of stability and sensitivity of statistical methods or algorithms with respect to their data is an important problem in machine learning and statistics. The performance of the algorithm under resampling of the data is a fundamental way to measure its stability and is closely related to generalization or privacy of the algorithm. In this paper, we study the resampling sensitivity for the principal component analysis (PCA). Given an $ n \times p $ random matrix $ \mathbf{X} $, let $ \mathbf{X}^{[k]} $ be the matrix obtained from $ \mathbf{X} $ by resampling $ k $ randomly chosen entries of $ \mathbf{X} $. Let $ \mathbf{v} $ and $ \mathbf{v}^{[k]} $ denote the principal components of $ \mathbf{X} $ and $ \mathbf{X}^{[k]} $. In the proportional growth regime $ p/n \to \xi \in (0,1] $, we establish the sharp threshold for the sensitivity/stability transition of PCA. When $ k \gg n^{5/3} $, the principal components $ \mathbf{v} $ and $ \mathbf{v}^{[k]} $ are asymptotically orthogonal. On the other hand, when $ k \ll n^{5/3} $, the principal components $ \mathbf{v} $ and $ \mathbf{v}^{[k]} $ are asymptotically collinear. In words, we show that PCA is sensitive to the input data in the sense that resampling even a negligible portion of the input may completely change the output.
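The resampling experiment is straightforward to simulate numerically. A small sketch follows; note that at $n = 200$ the asymptotic $n^{5/3}$ threshold is only loosely indicative, so the printed overlaps illustrate the setup rather than verify the theorem:

```python
import numpy as np

def top_pc(X):
    """Leading right singular vector of X (the principal component)."""
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[0]

def resample_entries(X, k, rng):
    """Redraw k uniformly chosen entries of X from the same N(0, 1) law,
    mirroring the resampling operation X -> X^{[k]}."""
    Y = X.copy()
    pos = rng.choice(X.size, size=k, replace=False)
    Y.flat[pos] = rng.normal(size=k)
    return Y

rng = np.random.default_rng(0)
n, p = 200, 100                       # p/n -> xi = 0.5, within (0, 1]
X = rng.normal(size=(n, p))
v = top_pc(X)
# Overlap |<v, v^[k]>| for few vs. many resampled entries; the theorem
# predicts near-collinearity for k << n^{5/3} and near-orthogonality
# for k >> n^{5/3} as n grows large.
for k in (50, 15000):
    overlap = abs(top_pc(resample_entries(X, k, rng)) @ v)
    print(k, round(float(overlap), 3))
```

Since singular vectors are defined up to sign, the absolute inner product is the right overlap measure here.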
Temporal reasoning is the task of predicting temporal relations of event pairs with corresponding contexts. While some temporal reasoning models perform reasonably well on in-domain benchmarks, we have little idea of the systems' generalizability due to existing datasets' limitations. In this work, we introduce a novel task named TODAY that bridges this gap with temporal differential analysis, which, as the name suggests, evaluates whether systems can correctly understand the effect of incremental changes. Specifically, TODAY makes slight context changes for given event pairs, and systems need to tell how this subtle contextual change will affect temporal relation distributions. To facilitate learning, TODAY also annotates human explanations. We show that existing models, including GPT-3, drop to random guessing on TODAY, suggesting that they heavily rely on spurious information rather than proper reasoning for temporal predictions. On the other hand, we show that TODAY's supervision style and explanation annotations can be used in joint learning, encouraging models to use more appropriate signals during training and to outperform across several benchmarks. TODAY can also be used to train models to solicit incidental supervision from noisy sources such as GPT-3, moving farther towards generic temporal reasoning systems.