The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered the participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
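Patch-based training, the most common workaround reported for oversized samples (69%), amounts to sliding a window over each image and training on the resulting crops. A minimal, framework-free sketch of the patch-extraction step (function name and the patch-size/stride parameters are illustrative, not from the survey):

```python
def extract_patches(image, patch_size, stride):
    """Collect square crops from a 2D image (a list of row lists)
    by sliding a window of side patch_size with the given stride."""
    height, width = len(image), len(image[0])
    patches = []
    for top in range(0, height - patch_size + 1, stride):
        for left in range(0, width - patch_size + 1, stride):
            patches.append([row[left:left + patch_size]
                            for row in image[top:top + patch_size]])
    return patches
```

Downsampling (37%) would instead shrink the whole image before training; both strategies trade spatial context for memory.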
We introduce a new method for diverse foreground generation with explicit control over various factors. Existing image-inpainting-based foreground generation methods often struggle to generate diverse results and rarely allow users to explicitly control specific factors of variation (e.g., varying the facial identity or expression for face inpainting results). We leverage contrastive learning with latent codes to generate diverse foreground results for the same masked input. Specifically, we define two sets of latent codes, where one controls a pre-defined factor (``known''), and the other controls the remaining factors (``unknown''). The sampled latent codes from the two sets jointly bi-modulate the convolution kernels to guide the generator to synthesize diverse results. Experiments demonstrate the superiority of our method over state-of-the-art methods in result diversity and generation controllability.
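The abstract does not give the exact form of the bi-modulation, only that the two sampled codes jointly modulate the convolution kernels. A toy per-output-channel scaling sketch (the multiplicative form and all names are assumptions, standing in for the learned style-modulation mappings of a real generator):

```python
def bi_modulate(kernels, scale_known, scale_unknown):
    """Scale each output channel's kernel weights by one factor derived
    from the 'known' code and one from the 'unknown' code (toy stand-in
    for learned modulation of convolution kernels)."""
    return [[w * sk * su for w in kernel]
            for kernel, sk, su in zip(kernels, scale_known, scale_unknown)]
```

Sampling different `scale_unknown` vectors for a fixed `scale_known` is what would vary the uncontrolled factors while holding the controlled one fixed.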
Selecting an appropriate parameter set is crucial to the final performance of a controller design, but usually requires a tedious and careful tuning process, which implies a strong need for automatic tuning methods. However, among existing methods, derivative-free ones suffer from poor scalability or low efficiency, while gradient-based ones may be inapplicable because the controller structure may be non-differentiable. To tackle this problem, we address controller tuning with a novel derivative-free reinforcement learning (RL) framework, which performs timestep-wise perturbation in the parameter space during experience collection and integrates derivative-free policy updates into an advanced actor-critic RL architecture to achieve high versatility and efficiency. To demonstrate the efficacy of the framework, we conduct numerical experiments on two concrete examples from autonomous driving, namely adaptive cruise control with a PID controller and trajectory tracking with an MPC controller. Experimental results show that the proposed method outperforms popular baselines and highlight its strong potential for controller tuning.
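The framework itself is actor-critic, but its derivative-free core can be illustrated with the simplest parameter-space scheme: perturb the controller gains, keep a candidate only if the rollout cost improves. A hill-climbing sketch under that simplification (the `cost` callback standing in for a closed-loop rollout, and all names, are hypothetical):

```python
import random

def tune_gains(initial_gains, cost, iterations=200, sigma=0.1, seed=0):
    """Derivative-free controller tuning: Gaussian perturbations in
    parameter space, accepting a candidate only when the rollout
    cost decreases. No gradient of the controller is ever needed."""
    rng = random.Random(seed)
    best, best_cost = list(initial_gains), cost(initial_gains)
    for _ in range(iterations):
        candidate = [g + rng.gauss(0.0, sigma) for g in best]
        candidate_cost = cost(candidate)
        if candidate_cost < best_cost:
            best, best_cost = candidate, candidate_cost
    return best, best_cost
```

This works for a non-differentiable PID or MPC parameterization because only cost evaluations are used; the paper's contribution is doing this far more sample-efficiently inside an actor-critic loop.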
Blood pressure (BP) monitoring is essential for daily healthcare, especially for cardiovascular diseases. However, BP values are mainly acquired through contact sensing methods, which are inconvenient and unfriendly for BP measurement. We therefore propose an efficient end-to-end network for estimating BP values from facial videos, to achieve remote BP measurement in daily life. In this study, we first derive a spatiotemporal map from a short-term (~15 s) facial video. From the spatiotemporal map, we then regress the BP range via a designed blood pressure classifier and simultaneously compute the specific value with a blood pressure calculator in each BP range. In addition, we devise an innovative oversampling training strategy to address the unbalanced data distribution problem. Finally, we train the proposed network on a private dataset, ASPD, and test it on the popular dataset MMSE-HR. As a result, the proposed network achieves state-of-the-art MAEs of 12.35 mmHg and 9.5 mmHg for systolic and diastolic BP measurement, which are better than recent works. We conclude that the proposed method has great potential for camera-based BP monitoring in real-world scenarios.
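The oversampling strategy is described only at a high level. Its generic form, replicating samples from under-represented BP ranges until all classes are balanced, can be sketched as follows (function and variable names are illustrative, not from the paper):

```python
import random

def oversample(samples, labels, seed=0):
    """Duplicate randomly chosen samples of minority classes until
    every class matches the majority class count."""
    rng = random.Random(seed)
    by_class = {}
    for sample, label in zip(samples, labels):
        by_class.setdefault(label, []).append(sample)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for label, group in by_class.items():
        extras = [rng.choice(group) for _ in range(target - len(group))]
        for sample in group + extras:
            out_samples.append(sample)
            out_labels.append(label)
    return out_samples, out_labels
```

Balancing the BP-range classes this way keeps the range classifier from collapsing onto the most common blood-pressure band.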
Evolution-based zeroth-order optimization methods and policy-gradient-based first-order methods are two promising alternatives for solving reinforcement learning (RL) problems. The former work with arbitrary policies, rely on state-dependent and temporally extended exploration, and possess robustness properties, but suffer from high sample complexity, while the latter are more sample-efficient but are restricted to differentiable policies, and the learned policies are less robust. To address these issues, we propose a novel Zeroth-Order Actor-Critic algorithm (ZOAC), which unifies these two methods into an on-policy actor-critic architecture to preserve the advantages of both. ZOAC conducts rollouts with timestep-wise perturbation in the parameter space, alternating first-order policy evaluation (PEV) and zeroth-order policy improvement (PIM) in each iteration. We extensively evaluate our method on a wide range of challenging continuous-control benchmarks using different types of policies, where ZOAC outperforms both zeroth-order and first-order baseline algorithms.
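The zeroth-order policy improvement step estimates an update direction purely from perturbed evaluations, with no backpropagation through the policy. A minimal antithetic-sampling estimator in that spirit (a simplification: ZOAC's actual update additionally weights perturbations by the critic's advantage estimates):

```python
import random

def zo_gradient(f, theta, sigma=0.1, pairs=200, seed=0):
    """Estimate the gradient of f at theta from antithetic Gaussian
    perturbations, using only function evaluations (no derivatives)."""
    rng = random.Random(seed)
    dim = len(theta)
    grad = [0.0] * dim
    for _ in range(pairs):
        eps = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        f_plus = f([t + sigma * e for t, e in zip(theta, eps)])
        f_minus = f([t - sigma * e for t, e in zip(theta, eps)])
        scale = (f_plus - f_minus) / (2.0 * sigma * pairs)
        for i in range(dim):
            grad[i] += scale * eps[i]
    return grad
```

Because only evaluations of `f` are required, the same estimator applies to non-differentiable policies, which is what makes the zeroth-order half of the hybrid attractive.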
Adversarial patch attacks aim to fool machine learning models by arbitrarily modifying pixels within a restricted region of an input image. Such attacks are a major threat to models deployed in the physical world, as they can easily be realized by presenting a customized object in the camera view. Defending against such attacks is challenging due to the arbitrariness of the patches, and existing provable defenses suffer from poor certified accuracy. This paper proposes PatchVeto, a zero-shot certified defense against adversarial patches based on Vision Transformer (ViT) models. Rather than training a robust model to resist adversarial patches, which may inevitably sacrifice accuracy, PatchVeto reuses a pretrained ViT model without any additional training, achieving high accuracy on clean inputs while detecting adversarially patched inputs by simply manipulating the ViT's attention map. Specifically, each input is tested by voting over inferences with different attention masks, where at least one inference is guaranteed to exclude the adversarial patch. The prediction is certifiably robust if all masked inferences reach a consensus, which ensures that any adversarial patch is detected with no false negatives. Extensive experiments show that PatchVeto achieves high certified accuracy (e.g., 67.1% on ImageNet for 2%-pixel adversarial patches), significantly outperforming state-of-the-art methods. The clean accuracy is the same as that of the vanilla ViT model (81.8% on ImageNet), since the model parameters are directly reused. Meanwhile, our method can flexibly handle different adversarial patch sizes by simply changing the masking strategy.
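The voting mechanism can be sketched independently of the ViT internals: classify several masked copies of the input, where the mask set is chosen so that at least one mask fully covers any possible patch location, and certify only a unanimous vote. A toy version with a 1-D "image" and index-set masks (all names are illustrative; a real implementation masks ViT attention, not raw pixels):

```python
def apply_mask(image, mask):
    """Zero out the pixels whose indices are in the mask set."""
    return [0 if i in mask else value for i, value in enumerate(image)]

def certified_predict(classify, image, masks):
    """Vote over masked inferences; the prediction is certified only
    when every masked inference agrees (no false negatives, since at
    least one mask is assumed to cover any adversarial patch)."""
    predictions = [classify(apply_mask(image, mask)) for mask in masks]
    certified = all(p == predictions[0] for p in predictions)
    return predictions[0], certified
```

On a clean input all masked views agree and the vote certifies; a patch that flips even one masked inference breaks the consensus and is flagged.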
The malicious applications of DeepFakes (i.e., techniques generating target facial attributes or entire faces from facial images) pose a huge threat to individuals' reputation and security. To mitigate these threats, recent studies have proposed adversarial watermarks to combat DeepFake models, causing them to generate distorted outputs. Despite achieving impressive results, these adversarial watermarks have low image-level and model-level transferability, meaning that they can protect only one facial image from one specific DeepFake model. To address these issues, we propose a novel solution that can generate a Cross-Model Universal Adversarial Watermark (CMUA-Watermark), protecting a large number of facial images from multiple DeepFake models. Specifically, we first propose a cross-model universal attack pipeline that attacks multiple DeepFake models iteratively. Then, we design a two-level perturbation fusion strategy to alleviate the conflicts among the adversarial watermarks generated by different facial images and models. Moreover, we address the key problem of cross-model optimization with a heuristic approach that automatically finds suitable attack step sizes for the different models, further weakening the model-level conflicts. Finally, we introduce a more reasonable and comprehensive evaluation method to fully test the proposed method and compare it with existing ones. Extensive experimental results demonstrate that the proposed CMUA-Watermark can effectively distort the fake facial images generated by multiple DeepFake models while achieving better performance than existing methods.
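The cross-model pipeline iterates an attack step against each DeepFake model in turn while keeping one shared perturbation inside an epsilon-ball. A toy sketch with gradient callbacks standing in for the models (the signed-gradient, PGD-style update is an assumption; the per-model step sizes mirror the heuristic step-size search the abstract mentions):

```python
def sign(x):
    """Sign of x as -1, 0, or 1."""
    return (x > 0) - (x < 0)

def cross_model_attack(grad_fns, step_sizes, eps, dim, steps=3):
    """Build one shared perturbation by cycling signed-gradient steps
    over several models, clipping to the eps-ball after each step."""
    delta = [0.0] * dim
    for _ in range(steps):
        for grad_fn, alpha in zip(grad_fns, step_sizes):
            gradient = grad_fn(delta)
            delta = [max(-eps, min(eps, d + alpha * sign(g)))
                     for d, g in zip(delta, gradient)]
    return delta
```

Where the models' gradients agree, the shared perturbation accumulates to the eps boundary; where they conflict, the steps cancel, which is exactly the model-level conflict the paper's fusion strategy and step-size heuristic aim to reduce.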
It is known that neural networks have the problem of being over-confident when directly using the output label distribution to generate uncertainty measures. Existing methods mainly resolve this issue by retraining the entire model to impose the uncertainty quantification capability so that the learned model can achieve desired performance in accuracy and uncertainty prediction simultaneously. However, training the model from scratch is computationally expensive and may not be feasible in many situations. In this work, we consider a more practical post-hoc uncertainty learning setting, where a well-trained base model is given, and we focus on the uncertainty quantification task at the second stage of training. We propose a novel Bayesian meta-model to augment pre-trained models with better uncertainty quantification abilities, which is effective and computationally efficient. Our proposed method requires no additional training data and is flexible enough to quantify different uncertainties and easily adapt to different application settings, including out-of-domain data detection, misclassification detection, and trustworthy transfer learning. We demonstrate our proposed meta-model approach's flexibility and superior empirical performance on these applications over multiple representative image classification benchmarks.
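As a baseline for this post-hoc setting, the simplest uncertainty score needs no meta-model at all: the predictive entropy of the frozen base model's softmax output. The paper's Bayesian meta-model goes well beyond this, but the entropy baseline illustrates what "second-stage, no retraining" means in practice:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predictive_entropy(logits):
    """Entropy of the softmax distribution; higher means more uncertain.
    Computed purely from a frozen model's outputs, with no retraining."""
    return -sum(p * math.log(p) for p in softmax(logits) if p > 0)
```

A confident prediction (one dominant logit) yields low entropy; a uniform distribution yields the maximum, log of the class count. Over-confidence is precisely the failure mode where this raw score misleads, which motivates learning a meta-model on top of the frozen features instead.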
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
Deep learning (DL) has become a driving force and has been widely adopted in many domains and applications with competitive performance. In practice, to solve nontrivial and complicated tasks in real-world applications, DL is often not used standalone but instead contributes as one component of a larger, complex AI system. Although there is a fast-growing trend toward studying the quality issues of deep neural networks (DNNs) at the model level, few studies have investigated the quality of DNNs at the unit level and its potential impacts at the system level. More importantly, there is also a lack of systematic investigation of how to perform risk assessment for AI systems from the unit level to the system level. To bridge this gap, this paper initiates an early exploratory study of AI system risk assessment from both the data distribution and uncertainty angles to address these issues. We propose a general framework with an exploratory study for analyzing AI systems. After large-scale experiments (700+ experimental configurations and 5,000+ GPU hours) and in-depth investigations, we reached a few key interesting findings that highlight the practical need and opportunities for more in-depth investigations into AI systems.