The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
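The patch-based workaround that most respondents reported can be illustrated with a minimal sketch (the function name and sizes are ours, not from the survey): tile a large image into fixed-size patches that fit in memory and train on those.

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Slide a window over a 2D image and collect fixed-size patches."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

image = np.random.rand(512, 512)            # stand-in for a large medical image
patches = extract_patches(image, 128, 128)  # 16 non-overlapping 128x128 tiles
```

With a stride smaller than the patch size the tiles overlap, which is common at inference time to smooth the stitched prediction.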
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP), owing to their excellent understanding and generation abilities. Remarkably, what further sets these models apart is the massive amount of world knowledge they internalize during pretraining. While many downstream applications provide the model with an informational context to aid its performance on the underlying task, how the model's world knowledge interacts with the factual information presented in the context remains underexplored. As a desirable behavior, an LLM should give precedence to the context whenever it contains task-relevant information that conflicts with the model's memorized knowledge. This enables model predictions to be grounded in the context, which can then be used to update or correct specific model predictions without frequent retraining. By contrast, when the context is irrelevant to the task, the model should ignore it and fall back on its internal knowledge. In this paper, we undertake a first joint study of the aforementioned two properties, namely controllability and robustness, in the context of LLMs. We demonstrate that state-of-the-art T5 and PaLM models (both pretrained and finetuned) can exhibit poor controllability and robustness, and that neither property improves with increasing model size. As a solution, we propose a novel method, Knowledge Aware FineTuning (KAFT), that strengthens both controllability and robustness by incorporating counterfactual and irrelevant contexts into standard supervised datasets. Our comprehensive evaluation showcases the utility of KAFT across model architectures and sizes.
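The data recipe behind KAFT can be sketched as follows; this is our toy reconstruction of the idea described above (the field names and the string-replacement edit are illustrative assumptions, not the paper's implementation):

```python
import random

def kaft_augment(example, distractor_corpus, counterfactual_answer):
    """Sketch of KAFT-style augmentation: alongside the factual (context,
    answer) pair, add a counterfactual context whose label follows the edited
    context, and an irrelevant context whose label keeps the original answer."""
    factual = {"context": example["context"],
               "question": example["question"],
               "target": example["answer"]}
    # Counterfactual: the context now supports a different answer, and the
    # target follows the context (trains controllability).
    counterfactual = {
        "context": example["context"].replace(example["answer"],
                                              counterfactual_answer),
        "question": example["question"],
        "target": counterfactual_answer}
    # Irrelevant: an unrelated passage; the target stays the original answer
    # (trains robustness).
    irrelevant = {"context": random.choice(distractor_corpus),
                  "question": example["question"],
                  "target": example["answer"]}
    return [factual, counterfactual, irrelevant]

example = {"context": "Paris is the capital of France.",
           "question": "What is the capital of France?",
           "answer": "Paris"}
augmented = kaft_augment(example,
                         ["The Nile flows through eleven countries."],
                         "Lyon")
```

Training on all three variants teaches the model when to follow the context and when to ignore it.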
Federated learning (FL) has recently attracted increasing attention from both academia and industry, with the ultimate goal of enabling collaborative training under privacy and communication constraints. Existing FL algorithms based on iterative model averaging require a large number of communication rounds to obtain a well-performing model, due to the extremely unbalanced and non-i.i.d. data partitioning among different clients. Thus, we propose FedDM to build the global training objective from multiple local surrogate functions, which enables the server to gain a more global view of the loss landscape. In detail, we construct synthetic sets of data on each client to locally match the loss landscape of the original data through distribution matching. Compared with cumbersome model weights, FedDM reduces communication rounds and improves model quality by transmitting more informative and smaller synthesized data. We conduct extensive experiments on three image classification datasets, and the results show that our method can outperform other FL counterparts in terms of both efficiency and model performance. Moreover, we demonstrate that FedDM can be adapted to preserve differential privacy with the Gaussian mechanism and train a better model under the same privacy budget.
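As a toy illustration of the local step (not FedDM itself, which matches the loss landscape of a deep network), the sketch below fits a small synthetic set whose feature mean matches a client's real data; the server then trains on the pooled synthetic sets instead of aggregating model weights:

```python
import numpy as np

def match_mean(X, n_synth=10, steps=200, lr=0.5):
    """Toy surrogate for FedDM's local step: fit a small synthetic set S whose
    feature mean matches the client's real data X (mean matching stands in for
    the paper's loss-landscape matching)."""
    rng = np.random.default_rng(0)
    S = rng.normal(size=(n_synth, X.shape[1]))
    target = X.mean(axis=0)
    for _ in range(steps):
        grad = 2.0 * (S.mean(axis=0) - target) / n_synth  # d/dS ||mean(S)-mean(X)||^2
        S -= lr * grad            # same correction applied to every synthetic row
    return S

clients = [np.random.default_rng(i).normal(loc=i, size=(100, 4)) for i in range(3)]
synthetic = [match_mean(X) for X in clients]   # clients send these, not weights
server_data = np.concatenate(synthetic)        # server-side global training set
```

Ten synthetic rows per client replace a full weight vector in each round, which is where the communication saving comes from.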
We study the distributed mean estimation and optimization problems under communication constraints. We propose a correlated quantization protocol whose leading term in the error guarantee depends on the mean deviation of the data points rather than only their absolute range. The design does not require any prior knowledge of the concentration properties of the dataset, which was necessary to obtain such dependence in previous works. We show that applying the proposed protocol as a subroutine in distributed optimization algorithms leads to better convergence rates. We also prove the optimality of our scheme under mild assumptions. Experimental results show that our proposed algorithm outperforms existing mean estimation protocols on a variety of tasks.
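The gain from correlating the quantization randomness can be seen in a toy one-dimensional version (this stratified-shift construction is our illustration, not the paper's protocol): with i.i.d. randomness the per-client rounding errors add up, while shared stratified randomness makes them cancel.

```python
import numpy as np

def quantize(x, u):
    """Unbiased stochastic rounding of x in [0, 1] to {0, 1}:
    P(output = 1) = x when u ~ Uniform[0, 1)."""
    return float(u < x)

rng = np.random.default_rng(0)
n = 1000
x = np.full(n, 0.3)                                   # every client holds 0.3

# Independent randomness: i.i.d. uniforms, errors accumulate like 1/sqrt(n).
indep = np.mean([quantize(v, rng.uniform()) for v in x])

# Correlated randomness: one shared stratified grid of uniforms.
shifts = (np.arange(n) + rng.uniform()) / n
corr = np.mean([quantize(v, u) for v, u in zip(x, shifts)])
```

Here `corr` recovers the true mean exactly, while `indep` is merely unbiased with binomial sampling noise.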
Facial landmark detection is a fundamental and significant vision task with many important applications. In practice, facial landmark detection can be affected by a variety of natural degradations. One of the most common and important degradations is the shadow caused by light-source occlusion. While many advanced shadow removal methods have been proposed to restore image quality in recent years, their effects on facial landmark detection are not well studied. For example, it remains unclear whether shadow removal can enhance the robustness of facial landmark detection to diverse shadow patterns. In this work, for the first attempt, we construct a novel benchmark to link two independent but related tasks (i.e., shadow removal and facial landmark detection). In particular, the proposed benchmark covers diverse face shadows with different intensities, sizes, shapes, and locations. Moreover, to mine hard shadow patterns against facial landmark detection, we propose a novel method (i.e., adversarial shadow attack), which allows us to construct a challenging subset of the benchmark for comprehensive analysis. With the constructed benchmark, we conduct an extensive analysis of three state-of-the-art shadow removal methods and three landmark detectors. The observations of this work motivate us to design a novel detection-aware shadow removal framework, which empowers shadow removal to achieve higher restoration quality and to enhance the shadow robustness of deployed facial landmark detectors.
Deep face recognition (FR) has achieved significantly high accuracy on several challenging datasets, fosters successful real-world applications, and even shows high robustness against illumination variation, which is usually regarded as a main threat to FR systems. However, in the real world, illumination variation caused by diverse lighting conditions cannot be fully covered by the limited face datasets. In this paper, we study the threat of lighting against FR from a new angle, i.e., adversarial attack, and identify a new task, i.e., adversarial relighting. Given a face image, adversarial relighting aims to produce a naturally relighted counterpart while fooling state-of-the-art deep FR methods. To this end, we first propose a physical-model-based adversarial relighting attack (ARA), denoted as the albedo-quotient-based adversarial relighting attack (AQ-ARA). It generates natural adversarial light under the physical lighting model and the guidance of FR systems, and synthesizes adversarially relighted face images. Moreover, we propose the auto-predictive adversarial relighting attack (AP-ARA), which trains an adversarial relighting network (ARNet) to automatically predict the adversarial light in a one-step manner according to different input faces, allowing for efficiency-sensitive applications. More importantly, we propose to transfer the above digital attacks to a physical ARA (Phy-ARA) through a precise relighting device, making the estimated adversarial lighting conditions reproducible in the real world. We validate our methods on three state-of-the-art deep FR methods, i.e., FaceNet, ArcFace, and CosFace, on two public datasets. The extensive and insightful results demonstrate that our work can generate realistic adversarially relighted face images that easily fool FR systems, revealing the threat posed by specific light directions and strengths.
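The attacks above are physically constrained, but they build on the standard white-box recipe of perturbing the input along the gradient of the model's loss. A minimal FGSM-style sketch on a toy logistic classifier (the model and numbers are ours, purely illustrative):

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """One fast-gradient-sign step on a logistic classifier: move x by
    eps * sign(d loss / d x) to increase the cross-entropy loss."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))    # predicted P(y = 1)
    grad = (p - y) * w                        # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0]); b = 0.0            # toy linear decision boundary
x = np.array([0.5, -0.5]); y = 1.0            # correctly classified: score 1.5
x_adv = fgsm(x, w, b, y, eps=0.6)             # score drops below zero
```

Attacks like ARA replace the raw pixel perturbation with parameters of a physical lighting model, so the same gradient signal produces a plausible relit face rather than visible noise.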
Co-salient object detection (CoSOD) has recently achieved significant progress and plays a key role in retrieval-related tasks. However, it inevitably poses an entirely new safety and security problem, i.e., highly personal and sensitive content can potentially be extracted by powerful CoSOD methods. In this paper, we address this problem from the perspective of adversarial attacks and identify a novel task: the adversarial co-saliency attack. Specifically, given an image selected from a group of images containing some common and salient objects, we aim to generate an adversarial version that can mislead CoSOD methods into predicting incorrect co-salient regions. Note that, compared with general white-box adversarial attacks for classification, this new task faces two additional challenges: (1) low success rates due to the diverse appearance of images in the group; (2) low transferability across CoSOD methods due to the considerable differences between CoSOD pipelines. To address these challenges, we propose the first black-box joint adversarial exposure and noise attack (JADENA), where we jointly and locally tune the exposure and additive perturbations of the image according to a newly designed high-feature-level contrast-sensitive loss function. Our method, without any information about the state-of-the-art CoSOD methods, leads to significant performance degradation on various co-saliency detection datasets and makes the co-salient objects undetectable. This can have strong practical benefits in properly securing the large number of personal photos currently shared on the Internet. Moreover, our method has the potential to serve as a metric for evaluating the robustness of CoSOD methods.
Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
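The setting can be made concrete with a FedAvg-style round on a least-squares toy problem (our sketch of the canonical algorithm, not code from the paper):

```python
import numpy as np

def fedavg_round(global_w, client_data, lr=0.1, local_steps=5):
    """One FedAvg round: every client runs a few local gradient steps from the
    current global weights; the server averages the resulting client models,
    weighted by local dataset size."""
    local_ws, sizes = [], []
    for X, y in client_data:
        w = global_w.copy()
        for _ in range(local_steps):
            w -= lr * 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        local_ws.append(w)
        sizes.append(len(y))
    return np.average(local_ws, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true))                    # noiseless local data

w = np.zeros(2)
for _ in range(50):                                    # 50 communication rounds
    w = fedavg_round(w, clients)
```

Only model weights cross the network; the raw `(X, y)` pairs never leave the clients, which is the decentralization property the paper's open problems revolve around.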
Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.
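The resulting linear model is easy to state in code: precompute the normalized propagation matrix, apply it K times to the features, and fit any linear classifier on the output (a minimal sketch of the simplification described above):

```python
import numpy as np

def sgc_features(A, X, k):
    """SGC preprocessing: symmetrically normalize the adjacency matrix (with
    self-loops) and apply it k times to the node features; a plain linear or
    logistic classifier on the result replaces a k-layer GCN."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt            # fixed low-pass filter
    for _ in range(k):
        X = S @ X                                  # k propagation steps
    return X

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
X = np.eye(3)                                       # one-hot node features
Z = sgc_features(A, X, k=2)                         # smoothed features
```

Because `S` is fixed, the propagation is a one-time preprocessing pass, which is why the model scales to large graphs.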
Federated learning is a machine learning setting where the goal is to train a high-quality centralized model while the training data remains distributed over a large number of clients, each with an unreliable and relatively slow network connection. We consider learning algorithms for this setting where, on each round, each client independently computes an update to the current model based on its local data and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model. The typical clients in this setting are mobile phones, and communication efficiency is of the utmost importance. In this paper, we propose two ways to reduce the uplink communication costs: structured updates, where we directly learn an update from a restricted space parametrized using a smaller number of variables, e.g., either low-rank or a random mask; and sketched updates, where we learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling before sending it to the server. Experiments on both convolutional and recurrent networks show that the proposed methods can reduce the communication cost by two orders of magnitude.
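A miniature version of the sketched-update idea (subsampling only; the paper additionally applies quantization and random rotations, omitted here; the names are illustrative):

```python
import numpy as np

def sketch_update(update, keep_frac, seed):
    """'Sketched update' in miniature: keep a random fraction of the update's
    coordinates and rescale them so the sketch is an unbiased estimate of the
    full update."""
    rng = np.random.default_rng(seed)
    n = update.size
    k = max(1, int(keep_frac * n))
    idx = rng.choice(n, size=k, replace=False)
    values = update[idx] / keep_frac               # rescale for unbiasedness
    return idx, values

def unsketch(idx, values, n):
    """Server side: scatter the received values back into a dense vector."""
    out = np.zeros(n)
    out[idx] = values
    return out

update = np.random.default_rng(1).normal(size=1000)   # a client's model update
idx, vals = sketch_update(update, keep_frac=0.1, seed=42)
recovered = unsketch(idx, vals, update.size)          # sparse, unbiased estimate
```

Because each kept coordinate is rescaled by `1 / keep_frac`, averaging many such sketches on the server yields an unbiased estimate of the average update at a tenth of the uplink cost.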