The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices of, and the bottlenecks faced by, the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive for participation (70%), while prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once; this was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
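A minimal sketch (not taken from the survey itself) of two of the strategies respondents reported, patch-based training for oversized samples and k-fold cross-validation with identical-model ensembling; the array shapes and the `train_model`/`predict` helpers passed in are hypothetical placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold

def extract_patches(volume, patch_size=(64, 64, 64), stride=(64, 64, 64)):
    """Cut a large 3D sample into fixed-size patches so it fits into GPU memory."""
    patches = []
    D, H, W = volume.shape
    for z in range(0, D - patch_size[0] + 1, stride[0]):
        for y in range(0, H - patch_size[1] + 1, stride[1]):
            for x in range(0, W - patch_size[2] + 1, stride[2]):
                patches.append(volume[z:z + patch_size[0],
                                      y:y + patch_size[1],
                                      x:x + patch_size[2]])
    return np.stack(patches)

def kfold_ensemble(samples, labels, train_model, predict, k=5):
    """Train one model per fold and average their predictions (identical-model ensembling)."""
    models = []
    for train_idx, _ in KFold(n_splits=k, shuffle=True, random_state=0).split(samples):
        models.append(train_model(samples[train_idx], labels[train_idx]))
    return lambda x: np.mean([predict(m, x) for m in models], axis=0)
```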
We launch EVA, a vision-centric foundation model to explore the limits of visual representation at scale using only publicly accessible data. EVA is a vanilla ViT pre-trained to reconstruct masked-out, image-text aligned vision features conditioned on visible image patches. Via this pretext task, we can efficiently scale EVA up to one billion parameters and set new records on a broad range of representative downstream vision tasks, such as image recognition, video action recognition, object detection, instance segmentation, and semantic segmentation, without heavy supervised training. Moreover, we observe that quantitative changes in scaling EVA result in qualitative changes in transfer learning performance that are not present in other models. For instance, EVA takes a great leap in the challenging large-vocabulary instance segmentation task: our model achieves almost the same state-of-the-art performance on the LVISv1.0 dataset, with over a thousand categories, as on the COCO dataset, with only eighty categories. Beyond a pure vision encoder, EVA can also serve as a vision-centric, multi-modal pivot to connect images and text. We find that initializing the vision tower of a giant CLIP from EVA can greatly stabilize training and outperform the trained-from-scratch counterpart with far fewer samples and less compute, providing a new direction for scaling up and accelerating the costly training of multi-modal foundation models. To facilitate future research, we release all the code and models at https://github.com/baaivision/EVA.
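A hedged sketch of the masked feature-regression pretext task described above: a ViT student predicts, at masked positions, the image-text aligned features of a frozen teacher (e.g., a CLIP vision encoder). The module interfaces and the use of a cosine loss are illustrative assumptions, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def masked_feature_loss(student_vit, clip_teacher, patches, mask):
    """patches: (B, N, D_patch) patch embeddings; mask: (B, N) bool, True = masked."""
    with torch.no_grad():
        target = clip_teacher(patches)           # (B, N, D) frozen, image-text aligned features
    pred = student_vit(patches, mask=mask)       # student is conditioned on visible patches only
    # Regress the teacher features only at the masked positions.
    pred_m, target_m = pred[mask], target[mask]
    return 1.0 - F.cosine_similarity(pred_m, target_m, dim=-1).mean()
```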
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating frame rates of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
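A hedged sketch of full-integer (INT8) post-training quantization with the TensorFlow Lite converter, which is one common way to produce NPU-compatible models of the kind evaluated in this challenge; the SavedModel path and the DIV2K loading helper are placeholders, and individual teams may have used quantization-aware training instead.

```python
import tensorflow as tf

def representative_data_gen():
    # Yield a few low-resolution DIV2K crops to calibrate the INT8 activation ranges.
    for lr_patch in load_div2k_lr_patches(num=100):   # hypothetical helper
        yield [tf.cast(lr_patch[None, ...], tf.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("sr_model/")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("sr_model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```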
Recent studies reveal that deep neural network (DNN) based object detectors are vulnerable to adversarial attacks in the form of perturbations added to the images, leading to wrong outputs from the detectors. Most existing works focus on generating perturbed images, also called adversarial examples, to fool object detectors. Although the generated adversarial examples can retain a certain degree of naturalness, most of them can still be easily noticed by human eyes, which limits their further application in the real world. To alleviate this problem, we propose a differential evolution based dual adversarial camouflage (DE_DAC) method composed of two stages, designed to fool human eyes and object detectors simultaneously. Specifically, we aim to obtain a camouflage texture that can be rendered over the surface of the object. In the first stage, we optimize the global texture to minimize the discrepancy between the rendered object and the scene images, making the object difficult for human eyes to distinguish. In the second stage, we design three loss functions to optimize the local texture, making object detectors ineffective. In addition, we introduce the differential evolution algorithm to search for near-optimal areas of the object to attack, improving the adversarial performance under certain attack-area limitations. We also study the performance of an adaptive DE_DAC that can be adapted to the environment. Experiments show that our proposed method achieves a good trade-off between fooling human eyes and fooling object detectors across multiple specific scenes and objects.
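A minimal sketch of using differential evolution to search for a near-optimal attack region, as described above; the rendering and detector-scoring functions are hypothetical placeholders, and the real method optimizes textures rendered on a 3D object surface rather than a flat image patch.

```python
import numpy as np
from scipy.optimize import differential_evolution

def attack_objective(region, image, texture):
    x, y, w, h = region.astype(int)
    attacked = render_camouflage(image, texture, (x, y, w, h))  # hypothetical renderer
    return detector_confidence(attacked)  # hypothetical scorer: lower = stronger attack

def search_attack_region(image, texture):
    H, W = image.shape[:2]
    bounds = [(0, W - 1), (0, H - 1), (8, 64), (8, 64)]  # x, y, width, height limits
    result = differential_evolution(
        attack_objective, bounds, args=(image, texture),
        maxiter=50, popsize=20, seed=0)
    return result.x  # near-optimal attack region found by the evolutionary search
```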
Deep neural networks (DNNs) are vulnerable to backdoor attacks during training. A model compromised in this way behaves normally, but produces a predefined target label when a certain pattern in the input triggers it. Existing defenses usually rely on the assumption of a universal backdoor setting, in which poisoned samples share the same uniform trigger. However, recent advanced backdoor attacks show that this assumption no longer holds for dynamic backdoors, where the trigger varies from input to input, thereby defeating existing defenses. In this work, we propose a novel technique, BEATRIX (backdoor detection via Gram matrices). BEATRIX utilizes Gram matrices to capture not only feature correlations but also appropriate higher-order information of the representations. By learning class-conditional statistics from the activation patterns of normal samples, BEATRIX can identify poisoned samples by capturing anomalies in their activation patterns. To further improve the performance of identifying target labels, BEATRIX leverages kernel-based testing without making any prior assumptions about the representation distribution. We demonstrate the effectiveness of our method through extensive evaluation and comparison with state-of-the-art defense techniques. Experimental results show that our approach achieves an F1 score of 91.1% in detecting dynamic backdoors, while the state of the art reaches only 36.9%.
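A hedged sketch of the core idea: compute Gram-matrix statistics of a sample's feature representation and flag samples that deviate from the class-conditional statistics estimated on clean data. Thresholding by median absolute deviation is an illustrative choice here, not necessarily the paper's exact procedure.

```python
import numpy as np

def gram_features(feats):
    """feats: (C, d) feature map of one sample -> upper-triangular Gram entries."""
    g = feats @ feats.T / feats.shape[1]
    return g[np.triu_indices_from(g)]

def fit_class_stats(clean_feats, labels):
    """Learn class-conditional median/MAD of Gram features from clean samples."""
    stats = {}
    for c in np.unique(labels):
        grams = np.stack([gram_features(f) for f, y in zip(clean_feats, labels) if y == c])
        med = np.median(grams, axis=0)
        mad = np.median(np.abs(grams - med), axis=0) + 1e-8
        stats[c] = (med, mad)
    return stats

def is_poisoned(feats, predicted_label, stats, threshold=10.0):
    med, mad = stats[predicted_label]
    deviation = np.abs(gram_features(feats) - med) / mad
    return deviation.mean() > threshold  # large deviation => likely poisoned input
```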
Recently, audio-driven talking face video generation has attracted widespread attention. However, few studies address the problem of emotion editing for these talking face videos with continuously controllable expressions, which is a strong demand in industry. The challenge is that speech-related expressions and emotion-related expressions are often highly coupled. Meanwhile, traditional image-to-image translation methods do not work well in our application, because expressions are coupled with other attributes such as head pose, i.e., translating the character's expression in each frame may simultaneously change the head pose due to the training data distribution. In this paper, we propose a high-quality facial expression editing method for talking face videos that allows users to continuously control the target emotion in the edited video. We offer a new perspective on this task as a special case of motion information editing, in which we use a 3DMM to capture the major facial movements and an associated texture map modeled by StyleGAN to capture appearance details. Both representations (the 3DMM and the texture map) contain emotion information and can be continuously modified by neural networks and easily smoothed by averaging in the coefficient/latent space, making our method simple yet effective. We also introduce a mouth-shape preservation loss to control the trade-off between lip synchronization and the degree of exaggeration of the edited expression. Extensive experiments and a user study show that our method achieves state-of-the-art performance across various evaluation criteria.
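A minimal sketch of the continuous-control idea described above: since both the 3DMM expression coefficients and the StyleGAN latent for the texture map live in spaces where averaging behaves well, a target emotion can be blended in with a single scalar and smoothed over time. All names below are illustrative placeholders rather than the authors' implementation.

```python
import numpy as np

def blend_emotion(exp_coeffs, latent, emotion_exp_dir, emotion_latent_dir, strength):
    """strength in [0, 1]: 0 keeps the original frame, 1 applies the full target emotion."""
    edited_exp = exp_coeffs + strength * emotion_exp_dir       # 3DMM expression coefficients
    edited_latent = latent + strength * emotion_latent_dir     # StyleGAN texture latent
    return edited_exp, edited_latent

def smooth_over_time(sequence, window=5):
    """Temporal smoothing by a simple moving average in coefficient/latent space."""
    kernel = np.ones(window) / window
    return np.stack([np.convolve(sequence[:, i], kernel, mode="same")
                     for i in range(sequence.shape[1])], axis=1)
```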
Ideological divisions in the United States have become increasingly prominent in everyday communication. Accordingly, there has been much research on political polarization, including many recent efforts that take a computational perspective. By detecting political bias in a text corpus, one can attempt to describe and discern the polarity of that text. Intuitively, named entities (i.e., nouns and noun phrases) and hashtags in text often carry information about political views. For example, people who use the term "pro-choice" are likely to be liberal, while those who use "pro-life" are likely to be conservative. In this paper, we seek to reveal political polarity in social media text data and quantify it by assigning polarity scores to entities and hashtags. Although the idea is straightforward, it is difficult to make such inferences in a trustworthy quantitative manner. Key challenges include the small number of known labels, the continuous spectrum of political views, and the preservation of both polarity scores and polarity-neutral semantic meaning in word embeddings. To overcome these challenges, we propose the Polarity-aware Embedding Multi-task learning (PEM) model. The model consists of (1) a self-supervised context-preservation task, (2) an attention-based tweet-level polarity-inference task, and (3) an adversarial learning task that promotes independence between the polarity dimension of the embeddings and their semantic dimensions. Our experimental results show that the PEM model successfully learns polarity-aware embeddings. We examine a variety of applications, demonstrating the effectiveness of the PEM model. We also discuss important limitations of our work and stress caution when applying the PEM model to real-world scenarios.
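A hedged sketch of the adversarial-independence idea in task (3): a discriminator tries to predict polarity from the semantic dimensions of an embedding, and a gradient-reversal layer pushes the encoder to remove that information. The dimension layout (last dimension as polarity) and module design are illustrative assumptions, not the PEM model's actual architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output  # reverse gradients flowing back into the encoder

class PolarityAdversary(nn.Module):
    """Predicts polarity from the semantic part of the embedding (all dims but the last)."""
    def __init__(self, semantic_dim):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(semantic_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, embedding):
        semantic = GradReverse.apply(embedding[:, :-1])  # assume last dim = polarity score
        return self.head(semantic)

# Adding mse_loss(adversary(emb), polarity_targets) to the training objective
# encourages the semantic dimensions to carry no polarity information.
```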
Although artificial intelligence (AI) has made significant progress in understanding molecules across various domains, existing models typically acquire a single cognitive ability from a single molecular modality. Since the hierarchy of molecular knowledge is profound, even humans learn from different modalities, including intuitive diagrams and professional texts, to aid their understanding. Inspired by this, we propose a molecular multimodal foundation model that is pre-trained from molecular graphs and their semantically related textual data (crawled from published Scientific Citation Index papers). This AI model represents a key attempt to directly bridge molecular graphs and natural language. Importantly, by capturing the specific and complementary information of the two modalities, our proposed model can better grasp molecular expertise. Experimental results show that our model not only exhibits promising performance on cross-modal tasks such as cross-modal retrieval and molecule captioning, but also enhances molecular property prediction and possesses the ability to generate meaningful molecular graphs from natural language descriptions. We believe our model will have a broad impact on AI-empowered research across disciplines such as biology, chemistry, materials, the environment, and medicine.
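A minimal sketch of one common way to bridge the two modalities discussed above: a symmetric contrastive (InfoNCE-style) loss that pulls paired molecular-graph and text embeddings together. This illustrates the general idea only; the paper's actual pre-training objectives may differ.

```python
import torch
import torch.nn.functional as F

def graph_text_contrastive_loss(graph_emb, text_emb, temperature=0.07):
    """graph_emb, text_emb: (B, D) embeddings of paired molecule graphs and texts."""
    g = F.normalize(graph_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = g @ t.T / temperature                      # (B, B) similarity matrix
    targets = torch.arange(g.size(0), device=g.device)  # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))
```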
Entity alignment is a crucial task in knowledge graph fusion. However, most entity alignment approaches suffer from scalability problems. Recent methods address this issue by dividing large KGs into small blocks for embedding and alignment learning. However, such a partition-and-learn process leads to excessive loss of structure and alignment. Therefore, in this work, we propose a scalable GNN-based entity alignment approach that reduces structure and alignment loss from three perspectives. First, we propose a centrality-based subgraph generation algorithm to recall some landmark entities that serve as bridges between different subgraphs. Second, we introduce self-supervised entity reconstruction to recover entity representations from incomplete neighborhood subgraphs, and design cross-subgraph negative sampling to incorporate entities from other subgraphs into alignment learning. Third, during inference, we merge the embeddings of the subgraphs to form a single space for alignment search. Experimental results on benchmark open datasets and the proposed large-scale DBpedia1M dataset verify the effectiveness of our method.
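A hedged sketch of the first step described above: picking high-centrality "landmark" entities that are shared across subgraphs so that partitions stay bridged. Degree centrality and the partitioning interface are illustrative choices; the paper's own centrality measure and partitioning algorithm may differ.

```python
import networkx as nx

def select_landmarks(kg_graph, num_landmarks=1000):
    """Rank entities by degree centrality and keep the top ones as shared landmarks."""
    centrality = nx.degree_centrality(kg_graph)
    return sorted(centrality, key=centrality.get, reverse=True)[:num_landmarks]

def build_subgraphs(kg_graph, partitions, landmarks):
    """partitions: list of entity sets; every subgraph also keeps the shared landmarks."""
    return [kg_graph.subgraph(set(part) | set(landmarks)).copy() for part in partitions]
```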
Multimodal learning, especially large-scale multimodal pre-training, has developed rapidly over the past few years and has led to some of the greatest advances in artificial intelligence (AI). Despite its effectiveness, understanding the underlying mechanisms of multimodal pre-training models remains a great challenge. Revealing the explainability of such models could enable breakthroughs in novel learning paradigms in the AI field. To this end, given the multimodal nature of the human brain, we propose to explore the explainability of multimodal learning models with the help of non-invasive brain imaging technologies such as functional magnetic resonance imaging (fMRI). Concretely, we first present a newly designed multimodal foundation model pre-trained on 15 million image-text pairs, which shows strong multimodal understanding and generalization abilities on various cognitive downstream tasks. Furthermore, from the perspective of neural encoding (based on our foundation model), we find that both the visual and lingual encoders trained multimodally are more brain-like than unimodal ones. In particular, we identify a number of brain regions in which multimodally trained encoders exhibit better neural encoding performance. This is consistent with findings in existing studies on multisensory integration in the brain. Therefore, we believe that multimodal foundation models are more suitable tools for neuroscientists to study the multimodal signal processing mechanisms of the human brain. Our findings also demonstrate the potential of multimodal foundation models as ideal computational simulators to promote both AI-for-brain and brain-for-AI research.
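A minimal sketch of the neural-encoding analysis described above: a ridge-regularized linear model maps the foundation model's stimulus features to fMRI voxel responses, and held-out correlation measures how "brain-like" the features are. Data loading and the choice of ridge regression are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

def neural_encoding_score(features_train, fmri_train, features_test, fmri_test):
    """features: (n_stimuli, d) model embeddings; fmri: (n_stimuli, n_voxels) responses."""
    encoder = RidgeCV(alphas=np.logspace(-2, 4, 7))
    encoder.fit(features_train, fmri_train)
    pred = encoder.predict(features_test)
    # Per-voxel Pearson correlation between predicted and measured responses.
    pred_c = pred - pred.mean(0)
    true_c = fmri_test - fmri_test.mean(0)
    corr = (pred_c * true_c).sum(0) / (
        np.linalg.norm(pred_c, axis=0) * np.linalg.norm(true_c, axis=0) + 1e-8)
    return corr  # higher correlation = better neural encoding of that voxel
```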