Automatic image colorization is a particularly challenging problem. Because the problem is highly ill-posed and subject to multi-modal uncertainty, directly training a deep neural network usually leads to incorrect semantic colors and low color richness. Existing transformer-based methods can deliver better results but rely heavily on hand-crafted, dataset-level empirical distribution priors. In this work, we propose DDColor, a new end-to-end method with dual decoders for image colorization. More specifically, we design a multi-scale image decoder and a transformer-based color decoder. The former restores the spatial resolution of the image, while the latter establishes the correlation between semantic representations and color queries via cross-attention. The two decoders work together to learn semantic-aware color embeddings by leveraging multi-scale visual features. With the help of these two decoders, our method produces semantically consistent and visually plausible colorization results without any additional priors. In addition, a simple but effective colorfulness loss is introduced to further improve the color richness of the generated results. Extensive experiments demonstrate that the proposed DDColor achieves significantly superior performance to existing state-of-the-art works both quantitatively and qualitatively. Code will be made publicly available.
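As a rough illustration of what such a colorfulness objective could look like, here is a minimal PyTorch sketch built around the Hasler-Süsstrunk colorfulness measure; the paper's exact formulation, scaling, and weighting may differ, and the function name is ours.

```python
import torch

def colorfulness_loss(rgb):
    """Minimal sketch of a colorfulness loss in the spirit of the
    Hasler-Suesstrunk metric (assumption: not the paper's exact form).

    rgb: (B, 3, H, W) tensor with values in [0, 1].
    """
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    rg = r - g                       # red-green opponent channel
    yb = 0.5 * (r + g) - b           # yellow-blue opponent channel
    std_rg, mean_rg = torch.std_mean(rg.flatten(1), dim=1)
    std_yb, mean_yb = torch.std_mean(yb.flatten(1), dim=1)
    colorfulness = torch.sqrt(std_rg ** 2 + std_yb ** 2) \
        + 0.3 * torch.sqrt(mean_rg ** 2 + mean_yb ** 2)
    # Penalize low colorfulness so the generator is pushed toward richer colors.
    return (1.0 - colorfulness).mean()
```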
Persuasion modeling is a key building block for conversational agents. Existing work in this direction is limited to analyzing textual dialogue corpora. We argue that visual signals also play an important role in understanding human persuasive behaviors. In this paper, we introduce the first multimodal dataset for modeling persuasion behaviors. Our dataset includes 199 dialogue transcriptions and videos captured in a multi-player social deduction game setting, 26,647 utterance-level annotations of persuasion strategy, and game-level annotations of deduction game outcomes. We provide extensive experiments to show how dialogue context and visual signals benefit persuasion strategy prediction. We also explore the generalization ability of language models for persuasion modeling and the role of persuasion strategies in predicting social deduction game outcomes. Our dataset, code, and models can be found at https://persuasion-deductiongame.socialai-data.org.
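For concreteness, here is a hypothetical sketch of how one record of such a multimodal corpus might be organized in Python; the field names and values below are illustrative, not the dataset's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Utterance:
    speaker: str
    text: str
    strategies: list            # utterance-level persuasion-strategy labels
    video_segment: str          # path/ID of the aligned video clip

@dataclass
class GameDialogue:
    game_id: str
    utterances: list = field(default_factory=list)
    outcome: str = ""           # game-level deduction-game outcome label

u = Utterance(speaker="P3", text="Trust me, I checked last night.",
              strategies=["evidence"], video_segment="game_001/clip_042.mp4")
game = GameDialogue(game_id="game_001", utterances=[u], outcome="crewmates_win")
```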
A key barrier to using reinforcement learning (RL) in many real-world applications is the requirement of a large number of system interactions to learn a good control policy. Off-policy and offline RL methods have been proposed to reduce the number of interactions with the physical environment by learning control policies from historical data. However, their performance suffers from a lack of exploration and from distributional shifts in trajectories once the controllers are updated. Moreover, most RL methods require that all states be directly observed, which is difficult to attain in many settings. To overcome these challenges, we propose a trajectory generation algorithm that adaptively generates new trajectories as if the system were being operated and explored under the updated control policies. Motivated by the fundamental lemma for linear systems, and assuming sufficient excitation, we generate trajectories from linear combinations of historical trajectories. For linear feedback control, we prove that the algorithm generates trajectories with exactly the same distribution as if they were sampled from the real system using the updated control policy. In particular, the algorithm extends to systems where the states are not directly observed. Experiments show that the proposed method significantly reduces the amount of sampled data needed for RL algorithms.
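The core idea can be pictured with a small NumPy sketch, assuming a noise-free linear system and persistently exciting historical inputs: any new input window can be written as a linear combination of historical input windows, and applying the same combination to the historical output windows yields a valid output trajectory. The paper's actual algorithm handles feedback policies and unobserved states, which this sketch omits; the function names are illustrative.

```python
import numpy as np

def hankel(signal, L):
    """Block-Hankel matrix with window length L from a (T, d) signal."""
    T, _ = signal.shape
    return np.column_stack([signal[i:i + L].reshape(-1) for i in range(T - L + 1)])

def generate_trajectory(u_hist, y_hist, u_new, L):
    """Find g with H_u @ g = vec(u_new), then read off y_new = H_y @ g.
    For a linear system, any linear combination of recorded trajectories is
    itself a trajectory, so (u_new, y_new) is consistent with the dynamics
    (its initial condition is implied by the chosen combination)."""
    H_u, H_y = hankel(u_hist, L), hankel(y_hist, L)
    g, *_ = np.linalg.lstsq(H_u, u_new.reshape(-1), rcond=None)
    return (H_y @ g).reshape(L, -1)

# Toy check on a scalar system y[t+1] = 0.9*y[t] + u[t].
rng = np.random.default_rng(0)
u_hist = rng.normal(size=(60, 1))
y_hist = np.zeros((60, 1))
for t in range(59):
    y_hist[t + 1] = 0.9 * y_hist[t] + u_hist[t]
u_new = rng.normal(size=(8, 1))
print(generate_trajectory(u_hist, y_hist, u_new, L=8))
```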
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g., geometry, physiology, physics) of the medical data being processed. This work introduces MONAI, a freely available, community-supported, consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. It preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by, and receiving contributions from, research, clinical, and industrial teams around the world, who are pursuing applications spanning nearly every aspect of healthcare.
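A minimal sketch of how MONAI components compose with plain PyTorch, assuming a recent MONAI release (argument names such as spatial_dims have changed across versions); the toy pipeline below is ours, not an example from the paper.

```python
import torch
from monai.losses import DiceLoss
from monai.networks.nets import UNet
from monai.transforms import Compose, EnsureChannelFirst, LoadImage, ScaleIntensity

# Preprocessing that understands medical formats (NIfTI, DICOM, ...);
# it would be applied to image file paths before batching.
preprocess = Compose([
    LoadImage(image_only=True),
    EnsureChannelFirst(),
    ScaleIntensity(),
])

model = UNet(
    spatial_dims=3,                 # 3D volumes
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

x = torch.randn(1, 1, 64, 64, 64)                # dummy CT-like volume
y = torch.randint(0, 2, (1, 1, 64, 64, 64))      # dummy segmentation labels
loss = loss_fn(model(x), y)
```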
Federated learning (FL) enables the building of robust and generalizable AI models by leveraging diverse datasets from multiple collaborators without centralizing the data. We created NVIDIA FLARE as an open-source software development kit (SDK) to make it easier for data scientists to use FL in their research and real-world applications. The SDK includes solutions for state-of-the-art FL algorithms and federated machine learning approaches, which facilitate building workflows for distributed learning across enterprises and enable platform developers to create a secure, privacy-preserving offering for multiparty collaboration using homomorphic encryption or differential privacy. The SDK is a lightweight, flexible, and scalable Python package that allows researchers to bring data science workflows implemented in any training library (PyTorch, TensorFlow, XGBoost, or even NumPy) and apply them in real-world FL settings. This paper introduces the key design principles of FLARE and illustrates some use cases (e.g., COVID-19 analysis) with customizable FL workflows that implement different privacy-preserving algorithms. Code is available at https://github.com/NVIDIA/NVFlare.
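The aggregation step at the heart of such workflows can be illustrated with a tiny federated-averaging sketch in plain NumPy; note that this is a conceptual illustration of FedAvg, not the NVIDIA FLARE API.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted average of per-client model parameters.

    client_weights: list of dicts mapping parameter name -> np.ndarray
    client_sizes:   number of local training samples per client
    """
    total = float(sum(client_sizes))
    return {
        name: sum(w[name] * (n / total) for w, n in zip(client_weights, client_sizes))
        for name in client_weights[0]
    }

# Two toy clients sharing a single-parameter "model"; raw data never moves.
clients = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
print(fedavg(clients, client_sizes=[100, 300]))   # {'w': array([2.5, 3.5])}
```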
Despite a recent surge of advances in promoting machine learning (ML) fairness, existing mainstream approaches mostly require training or fine-tuning the entire weights of a neural network to meet the fairness criteria. However, this is often infeasible for large-scale trained models due to high computational and storage costs, low data efficiency, and model privacy concerns. In this paper, we propose a new, generic fairness learning paradigm called FairReprogram, which incorporates the model reprogramming technique. Specifically, FairReprogram keeps the neural model fixed and instead appends a set of perturbations, called the fairness trigger, to the input; the trigger is tuned toward the fairness criteria under a min-max formulation. We further introduce an information-theoretic framework that explains why, and under what conditions, fairness goals can be achieved using the fairness trigger. We show both theoretically and empirically that the fairness trigger can effectively obscure demographic biases in the output predictions of fixed ML models by providing false demographic information, which hinders the model from exploiting the correct demographic information to make its prediction. Extensive experiments on NLP and CV datasets demonstrate that, under two widely used fairness criteria, our method achieves better fairness improvements than retraining-based methods with far lower training cost and data dependency.
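A hedged PyTorch sketch of the fairness-trigger idea for the vision case: the classifier stays frozen and only an input perturbation (here restricted to an image border) is optimized. The paper's trigger placement, adversary, and min-max objective may differ; all names below are illustrative.

```python
import torch
import torch.nn as nn

class FairnessTrigger(nn.Module):
    """Trainable border perturbation appended to the input image."""
    def __init__(self, channels=3, size=224, border=16):
        super().__init__()
        self.delta = nn.Parameter(torch.zeros(1, channels, size, size))
        mask = torch.zeros(1, 1, size, size)
        mask[..., :border, :] = 1
        mask[..., -border:, :] = 1
        mask[..., :, :border] = 1
        mask[..., :, -border:] = 1
        self.register_buffer("mask", mask)   # only the border is perturbed

    def forward(self, x):
        return x + self.mask * self.delta

# Frozen toy classifier; only the trigger receives gradient updates.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))
for p in classifier.parameters():
    p.requires_grad_(False)

trigger = FairnessTrigger()
opt = torch.optim.Adam(trigger.parameters(), lr=1e-2)

x, y = torch.rand(4, 3, 224, 224), torch.randint(0, 2, (4,))
# The full method plays a min-max game against a demographic discriminator;
# only the task-loss part of the update is shown here.
loss = nn.functional.cross_entropy(classifier(trigger(x)), y)
loss.backward()
opt.step()
```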
Traditional multi-view photometric stereo (MVPS) methods usually consist of multiple disjoint stages, leading to noticeable accumulated errors. In this paper, we propose a neural inverse rendering method for MVPS based on implicit representation. Given multi-view images of a non-Lambertian object illuminated by multiple unknown directional lights, our method jointly estimates the geometry, materials, and lights. It first uses the multi-light images to estimate per-view normal maps, which are used to regularize the normals derived from the neural radiance field. It then jointly optimizes the surface normals, spatially varying BRDFs, and lights based on a shadow-aware differentiable rendering layer. After optimization, the reconstructed object can be used for novel-view rendering, relighting, and material editing. Experiments on synthetic and real datasets show that our method achieves more accurate shape reconstruction than existing MVPS and neural rendering methods. Our code and model can be found at https://ywq.github.io/psnerf.
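The normal-regularization step can be pictured with a short PyTorch sketch: normals derived from the radiance field are pulled toward the per-view photometric-stereo normals inside a validity mask. The paper's actual loss terms and weighting may differ; the function name is ours.

```python
import torch
import torch.nn.functional as F

def normal_consistency_loss(nerf_normals, ps_normals, mask):
    """nerf_normals, ps_normals: (B, 3, H, W) unit normal maps.
    mask: (B, 1, H, W) valid-pixel mask (1 = object pixel)."""
    cos = F.cosine_similarity(nerf_normals, ps_normals, dim=1, eps=1e-6)
    # Penalize angular deviation only on valid pixels.
    return ((1.0 - cos) * mask.squeeze(1)).sum() / mask.sum().clamp(min=1.0)
```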
This paper addresses the important problem of ranking pre-trained deep neural networks and screening the most transferable ones for downstream tasks. This is challenging because the ground-truth model ranking for each task can only be generated by fine-tuning the pre-trained models on the target dataset, which is brute-force and computationally expensive. Recent advanced methods have proposed several lightweight transferability metrics to predict fine-tuning results. However, these approaches only capture static representations and ignore the fine-tuning dynamics. To this end, this paper proposes a new transferability metric called Self-challenging Fisher Discriminant Analysis (SFDA), which has appealing benefits that existing work does not. First, SFDA can embed the static features into a Fisher space and refine them for better separability between classes. Second, SFDA uses a self-challenging mechanism to encourage different pre-trained models to differentiate hard examples. Third, SFDA can easily select multiple pre-trained models for model ensembles. Extensive experiments on 33 pre-trained models over 11 downstream tasks show that SFDA is efficient, effective, and robust in measuring the transferability of pre-trained models. For example, compared with the state-of-the-art method NLEEP, SFDA shows an average gain of 59.1% while bringing a 22.5x speedup in wall-clock time. The code will be available at https://github.com/tencentarc/sfda.
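A rough sketch of the Fisher-discriminant part of such a score, using scikit-learn's LDA to fit frozen features and scoring a model by the mean probability assigned to the true class; SFDA's feature refinement and self-challenging mechanism are omitted, so this only approximates the idea, and the function name is ours.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fisher_transferability_score(features, labels):
    """features: (N, D) penultimate-layer features of a frozen pre-trained model,
    labels: (N,) integer class labels of the target task."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(features, labels)
    probs = lda.predict_proba(features)                    # (N, num_classes)
    return float(probs[np.arange(len(labels)), labels].mean())

# Candidate pre-trained models would be ranked by this score on their features.
rng = np.random.default_rng(0)
feats, labels = rng.normal(size=(200, 64)), rng.integers(0, 5, size=200)
print(fisher_transferability_score(feats, labels))
```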
Federated learning (FL) is a distributed machine learning technique that enables collaborative model training while avoiding explicit data sharing. The inherent privacy-preserving properties of FL algorithms make it particularly attractive to the medical field. However, with heterogeneous client data distributions, standard FL methods are unstable and require intensive hyperparameter tuning to achieve optimal performance. Conventional hyperparameter optimization algorithms are impractical in real-world FL applications because they involve a large number of training trials, which are often unaffordable under limited compute budgets. In this work, we propose an efficient reinforcement learning (RL)-based federated hyperparameter optimization algorithm, called Auto-FedRL, in which an online RL agent dynamically adjusts the hyperparameters of each client based on the current training progress. Extensive experiments are conducted to investigate different search strategies and RL agents. The effectiveness of the proposed method is validated on a heterogeneous data split of the CIFAR-10 dataset and on two real-world medical image segmentation datasets, for COVID-19 lesion segmentation in chest CT and pancreas segmentation in abdominal CT.
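A deliberately simplified sketch of the online loop: a softmax "agent" samples a client learning rate each federated round and is reinforced by the improvement in validation loss. The paper's RL agent, search space, and reward are more elaborate, and the training step below is faked purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
lr_choices = np.array([1e-3, 5e-4, 1e-4])
prefs = np.zeros(len(lr_choices))           # agent's preferences over choices

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

prev_val_loss = 1.0
for rnd in range(20):                       # federated rounds
    a = rng.choice(len(lr_choices), p=softmax(prefs))
    lr = lr_choices[a]

    # Stand-in for one federated round trained with this lr; the fake
    # validation loss is lowest near lr = 5e-4.
    val_loss = 0.1 * abs(np.log10(lr / 5e-4)) + rng.normal(0, 0.01)

    reward = prev_val_loss - val_loss       # reward = improvement
    prefs[a] += 0.5 * reward                # bandit-style preference update
    prev_val_loss = val_loss

print("preferred learning rate:", lr_choices[np.argmax(prefs)])
```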
Blur artifacts can severely degrade the visual quality of images, and many deblurring methods have been proposed for specific scenarios. However, in most real-world images, blur is caused by different factors, such as motion and defocus. In this paper, we address how different deblurring methods perform on general types of blur. For in-depth performance evaluation, we construct a new large-scale multi-cause image deblurring dataset, called MC-Blur, which includes real-world and synthesized blurry images with mixed blur factors. The images in the proposed MC-Blur dataset are collected using different techniques: convolving ultra-high-definition (UHD) sharp images with large kernels, averaging sharp images captured by a 1000-fps high-speed camera, adding defocus to images, and capturing real-world blurry images with various camera models. Benchmarking results on this dataset outline the advantages and disadvantages of current deblurring methods. Furthermore, we propose a new baseline model that adapts to multiple causes of blur. By assigning different weights to features at different levels, the proposed network derives more powerful features, with larger weights given to more important levels, thereby enhancing the feature representation. Extensive experimental results on the new dataset demonstrate the effectiveness of the proposed model for multi-cause blur scenarios.
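The level-weighting idea can be sketched in PyTorch: features from several levels are resized to a common resolution and fused with learned softmax weights, so more informative levels contribute more. The paper's baseline is a full deblurring network; this fragment only illustrates the weighting mechanism, and the module name is ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedLevelFusion(nn.Module):
    """Fuse multi-level features with learned per-level weights."""
    def __init__(self, num_levels, channels):
        super().__init__()
        self.level_logits = nn.Parameter(torch.zeros(num_levels))
        self.project = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feats):
        """feats: list of (B, C, H_i, W_i) maps from different levels."""
        target_size = feats[0].shape[-2:]
        weights = torch.softmax(self.level_logits, dim=0)
        fused = sum(
            w * F.interpolate(f, size=target_size, mode="bilinear",
                              align_corners=False)
            for w, f in zip(weights, feats)
        )
        return self.project(fused)

fusion = WeightedLevelFusion(num_levels=3, channels=32)
feats = [torch.randn(1, 32, s, s) for s in (64, 32, 16)]
print(fusion(feats).shape)   # torch.Size([1, 32, 64, 64])
```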