Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
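To make the distillation-time injection concrete, below is a minimal, self-contained sketch of the DOORPING idea: the trigger is re-optimized inside the distillation loop rather than stamped once at the start. The tiny CNN surrogate, the toy gradient-matching objective, and all names and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny fixed surrogate network standing in for the distillation model.
surrogate = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
params = list(surrogate.parameters())

real_x = torch.rand(256, 3, 32, 32)                    # stand-in real data
real_y = torch.randint(0, 10, (256,))
syn_x = torch.rand(50, 3, 32, 32, requires_grad=True)  # distilled images
syn_y = torch.arange(50) % 10
trigger = torch.rand(3, 4, 4, requires_grad=True)      # learnable patch
target, n_poison = 0, 5

opt_syn = torch.optim.SGD([syn_x], lr=0.1)
opt_trg = torch.optim.SGD([trigger], lr=0.1)

for step in range(100):
    # (1) Refresh the trigger against the current surrogate so that any
    #     image carrying the patch is pushed toward the target class.
    patched = real_x.clone()
    patched[:, :, -4:, -4:] = trigger
    trg_loss = F.cross_entropy(
        surrogate(patched), torch.full((256,), target, dtype=torch.long))
    opt_trg.zero_grad(); trg_loss.backward(); opt_trg.step()

    # (2) Stamp the refreshed trigger into part of the synthetic set and
    #     relabel those samples to the attacker's target class.
    with torch.no_grad():
        syn_x[:n_poison, :, -4:, -4:] = trigger
        syn_y[:n_poison] = target

    # (3) Ordinary distillation update: here a toy gradient-matching
    #     objective between real-batch and synthetic-batch gradients.
    g_real = torch.autograd.grad(
        F.cross_entropy(surrogate(real_x), real_y), params)
    g_syn = torch.autograd.grad(
        F.cross_entropy(surrogate(syn_x), syn_y), params, create_graph=True)
    match = sum(F.mse_loss(a, b) for a, b in zip(g_syn, g_real))
    opt_syn.zero_grad(); match.backward(); opt_syn.step()
```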
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries, and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes, including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical Licensing Exam questions), surpassing the prior state of the art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this, we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLMs for clinical applications.
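As a rough illustration of instruction prompt tuning's parameter efficiency, the sketch below trains only a small matrix of soft prompt vectors prepended to the embedded input while the LLM itself stays frozen; all dimensions and names are assumptions, not Med-PaLM's actual configuration.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """A learnable block of prompt vectors prepended to token embeddings."""
    def __init__(self, n_tokens=20, d_model=1024):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)

    def forward(self, token_embeddings):          # (batch, seq, d_model)
        batch = token_embeddings.size(0)
        soft = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([soft, token_embeddings], dim=1)

# Usage idea: embed the hard instructions/exemplars with the frozen LLM's
# embedding layer, prepend the soft prompt, and run the frozen transformer;
# only SoftPrompt.prompt receives gradients during domain tuning.
```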
Ensemble learning serves as a straightforward way to improve the performance of almost any machine learning algorithm. Existing deep ensemble methods usually naively train many different models and then aggregate their predictions. In our view, this is suboptimal in two respects: i) naively training multiple models imposes a heavy computational burden, especially in the deep learning era; ii) purely optimizing each base model without considering their interactions limits the diversity of the ensemble and the achievable performance gains. We tackle these issues by proposing deep negative correlation classification (DNCC), in which the accuracy-diversity trade-off is systematically controlled by seamlessly decomposing the loss function into the accuracy of each individual model and its correlation with the ensemble. DNCC yields a deep classification ensemble whose individual estimators are both accurate and negatively correlated. Thanks to the optimized diversity, DNCC works well even when utilizing a shared network backbone, which significantly improves its efficiency compared with most existing ensemble systems. Extensive experiments on multiple benchmark datasets and network structures demonstrate the superiority of the proposed method.
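For intuition, here is a hedged sketch of a negative-correlation classification loss in the spirit of DNCC, following classic negative correlation learning: each head is trained for its own accuracy while a penalty pushes its output away from the ensemble mean. The exact decomposition in the paper may differ, and `lam` is an assumed trade-off knob.

```python
import torch
import torch.nn.functional as F

def dncc_loss(head_logits, targets, lam=0.5):
    """head_logits: list of (batch, n_classes) tensors, one per ensemble head."""
    probs = [F.softmax(z, dim=1) for z in head_logits]
    ensemble = torch.stack(probs).mean(dim=0)       # (batch, n_classes)

    loss = 0.0
    for z, p in zip(head_logits, probs):
        acc_term = F.cross_entropy(z, targets)      # individual accuracy
        # Negative correlation penalty: deviations from the ensemble mean
        # sum to zero across heads, so (p - mean) . sum_{j != i}(p_j - mean)
        # reduces to -(p - mean)^2; adding it pushes heads apart.
        div_term = -((p - ensemble) ** 2).sum(dim=1).mean()
        loss = loss + acc_term + lam * div_term
    return loss / len(head_logits)
```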
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
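The released checkpoints are available through the Hugging Face hub; a minimal generation example follows (the full 176B model requires multi-GPU or offloaded inference, so a small BLOOM variant is used here for illustration).

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load a small member of the BLOOM family for a quick local test.
tok = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tok("A great open-science project is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
```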
The task of Compositional Zero-Shot Learning (CZSL) is to recognize images of novel state-object compositions that are absent during the training stage. Previous methods of learning compositional embedding have shown effectiveness in closed-world CZSL. However, in Open-World CZSL (OW-CZSL), their performance tends to degrade significantly due to the large cardinality of possible compositions. Some recent works separately predict simple primitives (i.e., states and objects) to reduce cardinality. However, they consider simple primitives as independent probability distributions, ignoring the heavy dependence between states, objects, and compositions. In this paper, we model the dependence of compositions via feasibility and contextuality. Feasibility-dependence refers to the unequal feasibility relations between simple primitives, e.g., \textit{hairy} is more feasible with \textit{cat} than with \textit{building} in the real world. Contextuality-dependence represents the contextual variance in images, e.g., \textit{cat} shows diverse appearances under the state of \textit{dry} and \textit{wet}. We design Semantic Attention (SA) and generative Knowledge Disentanglement (KD) to learn the dependence of feasibility and contextuality, respectively. SA captures semantics in compositions to alleviate impossible predictions, driven by the visual similarity between simple primitives. KD disentangles images into unbiased feature representations, easing contextual bias in predictions. Moreover, we complement the current compositional probability model with feasibility and contextuality in a compatible format. Finally, we conduct comprehensive experiments to analyze and validate the superior or competitive performance of our model, Semantic Attention and knowledge Disentanglement guided Simple Primitives (SAD-SP), on three widely-used benchmark OW-CZSL datasets.
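A toy sketch of how a feasibility term can be folded into the compositional probability model: rather than scoring states and objects as independent distributions, each (state, object) pair is damped by a feasibility weight. The calibration in SAD-SP may differ; this only illustrates the dependence idea.

```python
import torch

def composition_scores(state_logits, object_logits, feasibility):
    """state_logits: (batch, n_states); object_logits: (batch, n_objects);
    feasibility: (n_states, n_objects) in [0, 1], e.g. derived from visual
    similarity between primitives -- 'hairy cat' high, 'hairy building' low."""
    p_state = state_logits.softmax(dim=1)
    p_obj = object_logits.softmax(dim=1)
    joint = p_state.unsqueeze(2) * p_obj.unsqueeze(1)  # (batch, S, O)
    return joint * feasibility.unsqueeze(0)            # damp infeasible pairs
```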
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
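A minimal example of the workflow MONAI streamlines: dictionary-based medical-image transforms chained with a ready-made network. The file path is a placeholder, and exact argument names may vary across MONAI versions.

```python
from monai.transforms import (Compose, LoadImaged, EnsureChannelFirstd,
                              ScaleIntensityd)
from monai.networks.nets import UNet

# Dictionary-style transforms handle medical formats such as NIfTI.
preprocess = Compose([
    LoadImaged(keys=["image"]),
    EnsureChannelFirstd(keys=["image"]),
    ScaleIntensityd(keys=["image"]),
])
sample = preprocess({"image": "path/to/scan.nii.gz"})  # placeholder path

# A purpose-built 3D segmentation network shipped with MONAI.
model = UNet(spatial_dims=3, in_channels=1, out_channels=2,
             channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2),
             num_res_units=2)
```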
Recent studies have demonstrated that graph neural networks are vulnerable to adversarial attacks. An attacker can degrade the performance of a victim model through edge perturbations while remaining agnostic to the model itself, relying only on the training labels. Researchers have observed that saliency-based attackers tend to add edges rather than delete them, which is explained by the fact that adding an edge pollutes a node's features through aggregation, whereas deleting an edge merely causes some information loss. In this paper, we further show that attackers perturb the graph by adding inter-class edges, which also manifests as reduced homophily of the perturbed graph. From this perspective, saliency-based attackers still have room to improve in both capability and unnoticeability. Message passing in the GNN-based surrogate model causes over-smoothing of nodes connected by inter-class edges, preventing the attacker from exploiting the distinctiveness of node features. To address this issue, we introduce multi-hop aggregated message passing to preserve attribute differences between nodes. Furthermore, we propose a regularization term that constrains the variance of homophily to enhance attack unnoticeability. Experiments verify that our proposed surrogate model improves the attacker's versatility and that the regularization term helps limit the homophily of the perturbed graph.
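A short sketch of multi-hop aggregated message passing as described above: instead of repeatedly smoothing features (which over-smooths nodes joined by inter-class edges), the features aggregated at each hop are kept and concatenated, preserving attribute differences. The names and the combination scheme are illustrative assumptions.

```python
import torch

def multi_hop_aggregate(adj_norm, x, k=3):
    """adj_norm: (N, N) normalized adjacency; x: (N, d) node features.
    Returns the concatenation of 0..k-hop aggregated features, (N, (k+1)*d)."""
    hops, h = [x], x
    for _ in range(k):
        h = adj_norm @ h           # one more hop of neighborhood averaging
        hops.append(h)
    return torch.cat(hops, dim=1)  # keep every hop instead of only the last
```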
Single-image desnowing is a common yet challenging task. Complex snow degradations and their varied scales call for strong representation ability. To let the desnowing network see diverse snow degradations and model the contextual interaction between local details and global information, we propose a powerful architecture named Snowformer. First, it performs scale-aware feature aggregation in the encoder to capture rich snow information across various degradations. Second, to handle large-scale degradations, it employs a novel context interaction Transformer block in the decoder, which conducts context interaction between local details from earlier scales and global information; introducing local context interaction also improves the recovery of scene details. Third, we design a heterogeneous feature projection head that progressively fuses encoder and decoder features and projects the refined features into a clean image. Extensive experiments show that the proposed Snowformer achieves significant improvements over other SOTA methods. Compared with the SOTA single-image desnowing method HDCW-Net, it improves the PSNR metric by 9.2 dB on the CSD test set. Moreover, it increases PSNR by 5.13 dB over the general image restoration architecture NAFNet, which validates the strong representation ability of Snowformer for the desnowing task. The code is released at \url{https://github.com/ephemeral182/snowformer}.
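As a speculative sketch of the local-global context interaction idea, the block below fuses a global self-attention branch with a local depthwise-convolution branch; this is an assumption about the block's spirit, not Snowformer's actual architecture.

```python
import torch
import torch.nn as nn

class ContextInteractionBlock(nn.Module):
    """Toy local/global interaction: global multi-head attention and a local
    depthwise-conv branch, fused through element-wise interaction terms."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.local = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.fuse = nn.Conv2d(2 * dim, dim, 1)

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, HW, C) for attention
        g, _ = self.attn(tokens, tokens, tokens)
        g = g.transpose(1, 2).reshape(b, c, h, w)
        l = self.local(x)                       # local detail branch
        return x + self.fuse(torch.cat([g * l, g + l], dim=1))
```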
Event extraction is an important task in medical text processing. Given the complex characteristics of medical text annotation, we use an end-to-end event extraction model that enhances the output format information of events. Through pre-training and fine-tuning, we extract attributes along four dimensions of medical text: anatomical location, subject word, description word, and occurrence status. On the test set, the model achieves a precision of 0.4511, a recall of 0.3928, and an F1 score of 0.42. The approach is simple, and it won second place in the task of mining clinical findings events from Chinese electronic medical records (Task 2) at the 7th China Health Information Processing Conference (CHIP2021).
The stance detection task aims to classify the stance toward a given document and topic. Since the topic can be implicit in the document and unseen in the training data under zero-shot settings, we propose to enhance the transferability of stance detection models by incorporating sentiment and commonsense knowledge, which has rarely been considered in previous studies. Our model comprises a graph autoencoder module that acquires commonsense knowledge and a stance detection module augmented with sentiment and commonsense. Experimental results show that our model outperforms state-of-the-art methods on the zero-shot and few-shot benchmark dataset (VAST). Meanwhile, ablation studies demonstrate the importance of each module in our model. An analysis of the relations among sentiment, commonsense, and stance further indicates the effectiveness of sentiment and commonsense.
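For reference, a minimal graph autoencoder of the kind the commonsense module builds on (a Kipf-and-Welling-style GAE): a GCN layer encodes concept nodes and edges are reconstructed from embedding inner products. This illustrates the generic mechanism, not the paper's exact module.

```python
import torch
import torch.nn as nn

class GraphAutoEncoder(nn.Module):
    """Minimal GAE: one normalized-adjacency GCN layer encodes nodes;
    edge probabilities are reconstructed from embedding inner products."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.w = nn.Linear(in_dim, hid_dim, bias=False)

    def forward(self, adj_norm, x):
        z = torch.relu(self.w(adj_norm @ x))   # node embeddings
        return torch.sigmoid(z @ z.t()), z     # reconstructed adjacency

# Training minimizes binary cross-entropy between the reconstructed and
# true adjacency; the learned z then serves as commonsense node features.
```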