Object detection is the foundation of various critical computer-vision tasks such as segmentation, object tracking, and event detection. Training an object detector to satisfactory accuracy requires large amounts of data, and because annotating large datasets is labor-intensive, such data curation is often outsourced to third parties or relies on volunteers. This work reveals a severe vulnerability of such data-curation pipelines. We propose MACAB, which crafts clean-annotated images that stealthily implant a backdoor into object detectors even when the data curator can manually audit the images. We observe that both misclassification and cloaking backdoor effects are robustly achieved in the wild when the backdoor is activated by inconspicuous natural physical triggers. Compared with existing clean-label attacks on image-classification tasks, backdooring non-classification object detection with clean annotations is challenging owing to the complexity of multiple objects, both victim and non-victim, within each frame. The efficacy of MACAB is ensured by (i) constructively abusing the image-scaling function used by deep-learning frameworks, (ii) incorporating the proposed adversarial clean-image replica technique, and (iii) applying poisoned-data selection criteria under a constrained attack budget. Extensive experiments demonstrate that MACAB achieves an attack success rate above 90% in various real-world scenes, including both the cloaking and the misclassification backdoor effects, even when restricted to a small attack budget. The poisoned samples cannot be effectively identified by state-of-the-art detection techniques. A comprehensive video demo is available at https://youtu.be/ma7l_lpxkp4, based on a poison rate of 0.14% for the YOLOv4 cloaking backdoor and the Faster R-CNN misclassification backdoor.
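The image-scaling abuse mentioned above exploits the fact that a resize samples only a sparse subset of source pixels. A minimal pure-Python sketch of nearest-neighbor downscaling (illustrative only; this is not MACAB's implementation, which targets the scaling functions of real deep-learning frameworks):

```python
def nearest_neighbor_downscale(img, out_h, out_w):
    """Downscale a 2-D list `img` by sampling one source pixel per output
    pixel, as a nearest-neighbor resize does. Pixels that fall off the
    sampled grid have no effect on the scaled result."""
    in_h, in_w = len(img), len(img[0])
    return [[img[i * in_h // out_h][j * in_w // out_w]
             for j in range(out_w)]
            for i in range(out_h)]
```

For example, downscaling a 4x4 image to 2x2 keeps only the pixels at even rows and columns; content placed in the sampled positions dominates the downscaled image seen by the training pipeline, while the unsampled pixels keep the full-size image looking benign to a human auditor.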
A backdoored deep learning (DL) model behaves normally on clean inputs but misbehaves on trigger inputs as the backdoor attacker desires, posing severe consequences for DL model deployment. State-of-the-art defenses are either limited to specific backdoor attacks (source-agnostic attacks) or non-user-friendly, in that machine learning (ML) expertise or expensive computing resources are required. This work observes that all existing backdoor attacks have an inevitable intrinsic weakness, non-transferability: a trigger input hijacks the backdoored model but is ineffective against another model that has not been implanted with the same backdoor. With this key observation, we propose non-transferability enabled backdoor detection (NTD) to identify trigger inputs for a model-under-test (MUT) at run time. Specifically, NTD lets the potentially backdoored MUT predict a class for the input. Meanwhile, NTD uses a feature extractor (FE) to extract feature vectors for the input and for a group of samples randomly picked from its predicted class, and then compares the similarity between the input and the samples in the FE's latent space. If the similarity is low, the input is an adversarial trigger input; otherwise, it is benign. The FE is a free pre-trained model privately reserved from open platforms. Since the FE and the MUT come from different sources, it is highly unlikely that an attacker could insert the same backdoor into both. Owing to non-transferability, a trigger effect that works on the MUT cannot transfer to the FE, making NTD effective against different types of backdoor attacks. We evaluate NTD on three popular customized tasks, face recognition, traffic-sign recognition, and general animal classification; the results confirm that NTD achieves high effectiveness (low false-acceptance rate) and usability (low false-rejection rate) with low detection latency.
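The run-time check described above can be sketched as follows (a simplified illustration; the cosine metric, mean aggregation, and threshold value are assumptions, not the paper's exact procedure):

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def ntd_check(input_feat, same_class_feats, threshold=0.5):
    """Flag the input as a trigger if its mean similarity to samples of
    its predicted class, measured in the FE latent space, is low."""
    sims = [cosine_sim(input_feat, f) for f in same_class_feats]
    mean_sim = sum(sims) / len(sims)
    return ("trigger" if mean_sim < threshold else "benign"), mean_sim
```

The intuition: a clean input lands near genuine members of its class in the FE's latent space, whereas a trigger input is only pulled into that class by the MUT's backdoor, which the independently sourced FE does not share.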
Although deep neural network models have demonstrated excellent performance in various applications, their large model size and extensive floating-point operations make deployment on mobile computing platforms a major challenge, particularly on IoT devices. An appealing solution is model quantization, which reduces the model size and uses integer operations commonly supported by microcontrollers. To this end, a 1-bit quantized DNN model, or deep binary neural network (BNN), maximizes memory efficiency: each parameter in a BNN model occupies only 1 bit. In this paper, we propose a reconfigurable BNN (RBNN) to further amplify the memory efficiency of resource-constrained IoT devices. The RBNN can be reconfigured on demand to perform any of M (M > 1) distinct tasks with the same parameter set, so that only a single task determines the memory requirement; in other words, memory utilization improves by a factor of M. Our extensive experiments corroborate that up to seven commonly used tasks can coexist (the value of M can be even larger). These tasks, with varying numbers of classes, incur no or negligible accuracy drop on three binarized popular DNN architectures, including VGG, ResNet, and ReActNet. The tasks span different domains, e.g., the computer-vision and audio domains validated in this paper, with the prerequisite that the model architecture can serve those cross-domain tasks. To protect the intellectual property of an RBNN model, the reconfiguration can be controlled by both a user key and a device-unique root key generated from an intrinsic hardware fingerprint. By doing so, an RBNN model can be used only per paid user per authorized device, benefiting both users and the model provider.
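The 1-bit-per-parameter storage underlying BNNs can be illustrated as follows (a hypothetical sketch; the function names and packing scheme are illustrative, and the RBNN reconfiguration mechanism itself is not shown):

```python
def binarize(weights):
    """1-bit quantization: keep only the sign of each weight."""
    return [1 if w >= 0 else -1 for w in weights]

def pack_bits(binary_weights):
    """Pack {-1, +1} weights into bytes, 8 parameters per byte, so a
    binarized model needs one bit of storage per parameter."""
    out = bytearray()
    for i in range(0, len(binary_weights), 8):
        byte = 0
        for b in binary_weights[i:i + 8]:
            byte = (byte << 1) | (1 if b > 0 else 0)
        out.append(byte)
    return bytes(out)
```

A 32-bit float weight thus shrinks to a single bit, which is the 32x memory saving that makes BNNs attractive on microcontrollers; RBNN then reuses that same bit budget across M tasks.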
Conversational question generation (CQG) is an important task for machines to assist humans, e.g., in interactive reading comprehension, through conversations. Compared with traditional single-turn question generation (SQG), CQG is more challenging in that the generated questions must not only be meaningful but also align with the conversation history that has occurred. While previous studies have mainly focused on how to model the flow and alignment of the conversation, there has been no thorough study to date of which parts of the context and history are necessary for the model. We argue that shortening the context and history is crucial, as it helps the model optimize more for conversational alignment. To this end, we propose CoHS-CQG, a two-stage CQG framework that adopts a CoHS module to shorten the context and history of the input. In particular, CoHS selects contiguous sentences and history turns according to their relevance scores via a top-p strategy. Our model achieves state-of-the-art performance on CoQA in both the answer-aware and answer-unaware settings.
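A top-p selection of the kind CoHS applies can be sketched as follows (an illustrative simplification; how the relevance scores are produced and how sentences and turns are ranked jointly are assumptions, not the paper's exact module):

```python
def top_p_select(items, scores, p=0.75):
    """Greedily keep the highest-scoring items until their normalized
    scores accumulate at least probability mass p; everything else is
    dropped to shorten the input."""
    total = sum(scores)
    ranked = sorted(zip(items, scores), key=lambda t: t[1], reverse=True)
    kept, mass = [], 0.0
    for item, score in ranked:
        kept.append(item)
        mass += score / total
        if mass >= p:
            break
    return kept
```

The appeal of top-p over a fixed top-k cutoff is that the number of retained sentences adapts to how concentrated the relevance scores are.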
Advances in scientific machine learning have improved modern computational science and engineering applications. Data-driven methods such as dynamic mode decomposition (DMD) can extract coherent structures from spatio-temporal data generated by dynamical systems and infer the different regimes of such systems. Spatio-temporal data come as snapshots, each containing the spatial information for a single time instant. In modern engineering applications, generating high-dimensional snapshots can be time- and/or resource-demanding. In the present study, we consider two strategies for enhancing the DMD workflow in large numerical simulations: (i) snapshot compression to relieve disk pressure; and (ii) the use of in situ visualization images to reconstruct the dynamics (or part of them) at runtime. We evaluate our approaches with two 3D fluid-dynamics simulations and consider DMD reconstructions of the solutions. The results show that snapshot compression considerably reduces the required disk space: we observed that lossy compression reduces storage by almost 50% while keeping the relative errors in signal reconstructions and in other quantities of interest low. We also extend our analysis to data generated on the fly, using in situ visualization tools to produce image files of the state vectors at runtime. In large simulations, snapshots may be generated slowly enough that batch algorithms can be used for inference. Streaming DMD takes advantage of the incremental SVD algorithm, updating the modes as each new snapshot arrives. We use streaming DMD to reconstruct the dynamics from the in situ generated images and show that this process is efficient and that the reconstructed dynamics are accurate.
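A minimal sketch of exact DMD on a snapshot matrix (assuming numpy; the toy data and truncation rank are illustrative, not the study's CFD setting):

```python
import numpy as np

def dmd_eigs(snapshots, r=2):
    """Leading DMD eigenvalues from a matrix whose columns are
    successive state snapshots."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]        # paired snapshots, Y ~ A X
    U, s, Vh = np.linalg.svd(X, full_matrices=False)  # POD basis of X
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]             # rank-r truncation
    # Project the linear propagator A onto the rank-r POD subspace
    Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(Atilde)
```

In a streaming variant, the batch SVD above is replaced by an incremental SVD that is updated as each new snapshot arrives, which is the mechanism streaming DMD relies on when snapshots are produced on the fly.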
Research in the vision-and-language area encompasses challenging topics that seek to connect visual and textual information. The video-to-text problem is one of these topics, in which the goal is to connect an input video with its textual description. This can be done mainly in two ways: by retrieving the most significant description from a corpus, or by generating a new one given a context video. These two approaches represent essential tasks for the computer-vision and natural-language-processing communities, known as text retrieval from video and video captioning/description. Both tasks are substantially more complex than predicting or retrieving a single sentence from an image: the spatio-temporal information present in videos introduces diversity and complexity in the visual content and in the structure of the associated language descriptions. This review categorizes and describes the state-of-the-art techniques for the video-to-text problem. It covers the main video-to-text methods and the ways their performance is evaluated. We analyze how the most frequently reported benchmark datasets were created, showing their drawbacks and strengths with respect to the problem's requirements. We also show the impressive progress that researchers have made on each dataset, and we analyze why, despite this progress, the video-to-text task remains unsolved: state-of-the-art techniques are still far from achieving human performance in generating or retrieving video descriptions. We cover several significant challenges in the field and discuss future research directions.
This paper presents a machine learning approach to multidimensional item response theory (MIRT), a class of latent factor models that can be used to model and predict student performance from observed assessment data. Inspired by collaborative filtering, we define a general class of models that includes many MIRT models. We discuss the use of penalized joint maximum likelihood (JML) to estimate individual models and cross-validation to select the best performing model. This model evaluation process can be optimized using batching techniques, such that even sparse large-scale data can be analyzed efficiently. We illustrate our approach with simulated and real data, including an example from a massive open online course (MOOC). The high-dimensional model fit to this large and sparse dataset does not lend itself well to traditional methods of factor interpretation. By analogy to recommender-system applications, we propose an alternative "validation" of the factor model, using auxiliary information about the popularity of items consulted during an open-book exam in the course.
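The penalized JML objective for a 2PL-style MIRT model can be sketched as follows (a hypothetical minimal form, P(y_ij = 1) = sigmoid(theta_i · a_j + b_j) with an L2 penalty on person and item parameters; the exact parameterization and penalty in the paper may differ):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def penalized_neg_loglik(Y, theta, a, b, lam=0.1):
    """Penalized JML objective. Y[i][j] is 1/0 for a correct/incorrect
    response of person i to item j, or None if missing (sparse data).
    theta[i]: person abilities; a[j]: item loadings; b[j]: item intercept."""
    nll = 0.0
    for i, row in enumerate(Y):
        for j, y in enumerate(row):
            if y is None:          # skip unobserved responses
                continue
            p = sigmoid(sum(t * w for t, w in zip(theta[i], a[j])) + b[j])
            nll -= y * math.log(p) + (1 - y) * math.log(1.0 - p)
    # L2 penalty on person and item loadings (intercepts left unpenalized)
    penalty = lam * (sum(t * t for row in theta for t in row) +
                     sum(w * w for row in a for w in row))
    return nll + penalty
```

Skipping the `None` entries is what makes joint estimation practical on sparse large-scale data such as MOOC responses: only observed person-item pairs contribute to the likelihood, and the batching mentioned above amounts to iterating over chunks of those pairs.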
Real-world robotic grasping can be done robustly if a complete 3D Point Cloud Data (PCD) of an object is available. However, in practice, PCDs are often incomplete when objects are viewed from few and sparse viewpoints before the grasping action, leading to the generation of wrong or inaccurate grasp poses. We propose a novel grasping strategy, named 3DSGrasp, that predicts the missing geometry from the partial PCD to produce reliable grasp poses. Our proposed PCD completion network is a Transformer-based encoder-decoder network with an Offset-Attention layer. Our network is inherently invariant to object pose and point permutation, and it generates PCDs that are geometrically consistent and properly completed. Experiments on a wide range of partial PCDs show that 3DSGrasp outperforms the best state-of-the-art method on PCD completion tasks and largely improves the grasping success rate in real-world scenarios. The code and dataset will be made available upon acceptance.
Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth, and hence a lower axial resolution. Traditionally, image-processing-based techniques have been utilized to reconstruct subsampled OCT data and, more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques to explore methods to better aid clinicians in their decision-making to improve patient outcomes, by reconstructing lost features using a pixel-to-pixel approach with an altered super-resolution generative adversarial network (SRGAN) architecture.
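Simulating a narrower spectral bandwidth by Gaussian windowing can be sketched as follows (assuming numpy; the window width `sigma_frac` is an illustrative parameter, not the study's setting):

```python
import numpy as np

def gaussian_window_spectrum(a_scan, sigma_frac=0.15):
    """Narrow the spectral bandwidth of a 1-D A-scan by multiplying its
    centered spectrum with a Gaussian window, then transforming back;
    a narrower window means a lower axial resolution."""
    n = a_scan.size
    spectrum = np.fft.fftshift(np.fft.fft(a_scan))
    k = np.arange(n) - n // 2                       # frequency index, centered at 0
    window = np.exp(-0.5 * (k / (sigma_frac * n)) ** 2)
    return np.real(np.fft.ifft(np.fft.ifftshift(spectrum * window)))
```

Suppressing the high-frequency tails of the spectrum blurs sharp axial features, which is exactly the degradation the SRGAN-style network is then trained to invert.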
Using Structural Health Monitoring (SHM) systems with extensive sensing arrangements on every civil structure can be costly and impractical. Various concepts have been introduced to alleviate such difficulties, such as Population-based SHM (PBSHM). Nevertheless, the studies presented in the literature do not adequately address the challenge of accessing the information on different structural states (conditions) of dissimilar civil structures. The study herein introduces a novel framework named Structural State Translation (SST), which aims to estimate the response data of different civil structures based on the information obtained from a dissimilar structure. SST can be defined as translating a state of one civil structure to another state after discovering and learning the domain-invariant representation in the source domains of a dissimilar civil structure. SST employs a Domain-Generalized Cycle-Generative (DGCG) model to learn the domain-invariant representation in the acceleration datasets obtained from a numeric bridge structure that is in two different structural conditions. Then, the model is tested on three dissimilar numeric bridge models to translate their structural conditions. The evaluation results of SST via Mean Magnitude-Squared Coherence (MMSC) and modal identifiers showed that the translated bridge states (synthetic states) are significantly similar to the real ones: the minimum and maximum average MMSC values of real and translated bridge states are 91.2% and 97.1%, the minimum and the maximum difference in natural frequencies are 5.71% and 0%, and the minimum and maximum Modal Assurance Criterion (MAC) values are 0.998 and 0.870. This study is critical for data scarcity and PBSHM, as it demonstrates that it is possible to obtain data from structures while the structure is actually in a different condition or state.
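The MMSC comparison used to evaluate the translated states can be sketched as follows (assuming scipy's Welch-based coherence estimate; `fs` and `nperseg` are illustrative parameters, not the study's settings):

```python
import numpy as np
from scipy.signal import coherence

def mean_msc(x, y, fs=1.0, nperseg=256):
    """Average the Welch magnitude-squared coherence between two
    acceleration signals across all frequencies; values near 1 indicate
    that the signals share most of their frequency content."""
    _, Cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    return float(np.mean(Cxy))
```

Comparing a real bridge state against its SST-translated counterpart with such a score is what yields the 91.2%-97.1% average MMSC range reported above.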