Inductive reasoning is a core component of human intelligence. In past computer-science research on inductive reasoning, logic languages have been used to represent knowledge (specifically, facts and rules). However, logic languages cause systematic problems for inductive reasoning, such as the inability to handle raw input like natural language, sensitivity to mislabeled data, and the incapacity to handle ambiguous input. To this end, we propose a new task, inducing natural language rules from natural language facts, and create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where both rules and facts are written in natural language. New automatic metrics are also proposed and analysed for the evaluation of this task. With DEER, we investigate a modern approach to inductive reasoning in which natural language replaces logic language as the knowledge representation and pretrained language models serve as ''reasoners''. Moreover, we provide the first comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts. We also propose a new framework for this task, drawing insights from the philosophy literature, which, as shown in the experiment section, surpasses baselines in both automatic and human evaluations.
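As a rough, hypothetical sketch of using a pretrained language model as a ''reasoner'' over natural-language facts, the snippet below prompts an off-the-shelf model to complete a rule; the facts, prompt template, and model choice are illustrative assumptions, not the DEER setup or the proposed framework.

```python
# A minimal sketch of prompting a pretrained language model to induce a natural
# language rule from natural language facts. The facts, prompt wording, and the
# model are illustrative assumptions, not the DEER protocol.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

facts = [
    "Robins have wings and can fly.",
    "Sparrows have wings and can fly.",
    "Pigeons have wings and can fly.",
]
prompt = "Facts:\n" + "\n".join(f"- {f}" for f in facts) + "\nRule: If an animal"

# Sample one candidate rule completion; a real system would sample and rank many.
candidate = generator(prompt, max_new_tokens=30, num_return_sequences=1)[0]["generated_text"]
print(candidate[len(prompt):])
```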
We examine the problem of learning a single occurrence regular expression with interleaving (SOIRE) from a set of text strings with noise. SOIRE has unrestricted support for interleaving and covers most regular expressions used in practice. Learning SOIREs is challenging because it requires heavy computation, and text strings usually contain noise in practice. Most previous work only learns restricted SOIREs and is not robust to noisy data. To tackle these issues, we propose a noise-tolerant differentiable learning approach, SOIREDL, for SOIRE. We design a neural network to simulate SOIRE matching of given text strings and theoretically prove that a class of parameter sets learnt by the neural network, called faithful encodings, corresponds one-to-one to SOIREs of bounded size. Based on this correspondence, we interpret the target SOIRE from the learnt parameters of the neural network by exploring the nearest faithful encodings. Experimental results show that SOIREDL outperforms state-of-the-art approaches, especially on noisy data.
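To give a feel for the decoding step, here is a toy, assumption-heavy sketch that treats each learnt parameter vector as a soft distribution over discrete structural options and projects it to the nearest discrete choice; the option names are hypothetical, and the real notion of a faithful encoding in SOIREDL is considerably more constrained.

```python
# A toy sketch of decoding: round each soft parameter vector to its most likely
# discrete structural option. The operator names below are hypothetical.
import numpy as np

OPTIONS = ["concat", "union", "interleave", "optional"]  # assumed operator set

def nearest_discrete_encoding(soft_params: np.ndarray) -> list[str]:
    """Project each row of soft parameters onto its nearest one-hot choice."""
    return [OPTIONS[i] for i in soft_params.argmax(axis=1)]

soft_params = np.array([[0.7, 0.1, 0.1, 0.1],
                        [0.2, 0.1, 0.6, 0.1]])
print(nearest_discrete_encoding(soft_params))  # ['concat', 'interleave']
```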
The human brain can effortlessly recognize and localize objects, whereas current LiDAR-point-cloud-based 3D object detection methods still report low performance for detecting occluded and distant objects: the appearance of a point cloud varies greatly due to occlusion, and point density varies inherently with distance to the sensor. Therefore, designing feature representations that are robust to such point clouds is critical. Inspired by human associative recognition, we propose a novel 3D detection framework that associates intact features for objects via domain adaptation. We bridge the gap between the perceptual domain, where features are derived from real scenes with sub-optimal representations, and the conceptual domain, where features are extracted from augmented scenes composed of unoccluded objects with rich detailed information. A feasible method is investigated to construct conceptual scenes without any external dataset. We further introduce an attention-based re-weighting module that adaptively strengthens the features of more informative regions. The network's feature enhancement capability is exploited without introducing extra cost during inference, and it can serve as a plug-and-play component in various 3D detection frameworks. We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed. Experiments on the nuScenes and Waymo datasets also validate the versatility of our method.
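As a generic illustration of attention-based feature re-weighting, the sketch below predicts a per-location weight map and rescales the feature map so that more informative regions contribute more; the channel sizes, residual form, and module name are assumptions, not the paper's module.

```python
# A minimal sketch of attention-based feature re-weighting: a small network
# predicts a per-location weight in (0, 1) that rescales the feature map.
import torch
import torch.nn as nn

class ReweightingModule(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),  # per-location weight
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        weight = self.score(feat)      # (N, 1, H, W)
        return feat + feat * weight    # residual re-weighting

feat = torch.randn(2, 64, 32, 32)
print(ReweightingModule(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```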
Cervical abnormal cell detection is a challenging task because the morphological differences between abnormal and normal cells are usually subtle. To determine whether a cervical cell is normal or abnormal, cytopathologists always take surrounding cells as references and make careful comparisons to identify its abnormality. To mimic these clinical behaviors, we propose to explore contextual relationships to improve the performance of cervical abnormal cell detection. Specifically, contextual relationships both between cells and between cells and the global image are exploited to enhance the features of each region of interest (RoI) proposal. Accordingly, two modules, termed the RoI-relationship attention module (RRAM) and the global RoI attention module (GRAM), are developed, and their combination strategies are also investigated. We establish strong baselines using single-head or double-head Faster R-CNN with a feature pyramid network (FPN), and integrate our RRAM and GRAM into them to validate the effectiveness of the proposed modules. Experiments on a large cervical cell detection dataset consisting of 40,000 cytology images show that the introduction of RRAM and of GRAM both achieve better average precision (AP) than the baseline methods. Moreover, when cascading RRAM and GRAM, our method outperforms the state-of-the-art (SOTA) methods. Furthermore, we show that the proposed feature-enhancing scheme can also facilitate image-level and smear-level classification. Code and trained models are publicly available at https://github.com/cviu-csu/cr4cacd.
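To illustrate the general idea of enhancing each RoI with context from the other proposals in the same image, here is a minimal self-attention sketch over RoI features; the dimensions, residual and normalization design, and class name are assumptions and may differ from the actual RRAM/GRAM definitions.

```python
# A rough sketch: each RoI feature attends to all other RoI features in the
# image, and the aggregated context is added back to the original feature.
import torch
import torch.nn as nn

class RoIContextAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, roi_feats: torch.Tensor) -> torch.Tensor:
        # roi_feats: (num_rois, dim) for one image
        x = roi_feats.unsqueeze(0)          # (1, num_rois, dim)
        ctx, _ = self.attn(x, x, x)
        return self.norm(roi_feats + ctx.squeeze(0))

rois = torch.randn(100, 256)
print(RoIContextAttention()(rois).shape)    # torch.Size([100, 256])
```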
Low-cost monocular 3D object detection plays a fundamental role in autonomous driving, yet its accuracy is still far from satisfactory. In this paper, we dig into the 3D object detection task and reformulate it as sub-tasks of object localization and appearance perception, which benefits deep mining of the reciprocal information underlying the whole task. We introduce a dynamic feature reflecting network, named DFR-Net, which contains two novel standalone modules: (i) an appearance-localization feature reflecting module (ALFR) that first separates the task-specific features and then self-mutually reflects the reciprocal features; and (ii) a dynamic intra-trading module (DIT) that adaptively realigns the training processes of the various sub-tasks in a self-learning manner. Extensive experiments on the challenging KITTI dataset demonstrate the effectiveness and generalization of DFR-Net. We rank first among all monocular 3D object detectors on the KITTI test set (as of March 16th, 2021). The proposed method can also be plugged into many cutting-edge 3D detection frameworks at negligible cost to boost performance. The code will be made publicly available.
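As a loose illustration of two sub-task branches exchanging reciprocal information, the sketch below gates each branch's features with a signal computed from the other branch; the gating form, dimensions, and module name are assumptions and are not the ALFR/DIT design itself.

```python
# A toy sketch: a localization branch and an appearance branch each modulate
# their features with a gate derived from the other branch. Sizes are assumed.
import torch
import torch.nn as nn

class CrossBranchExchange(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.gate_loc = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.gate_app = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, loc: torch.Tensor, app: torch.Tensor):
        # each branch is re-weighted by a gate computed from the other branch
        return loc * self.gate_loc(app), app * self.gate_app(loc)

loc, app = torch.randn(4, 128), torch.randn(4, 128)
out_loc, out_app = CrossBranchExchange()(loc, app)
print(out_loc.shape, out_app.shape)
```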
Monocular 3D object detection is a critical yet challenging task for autonomous driving due to the lack of the accurate depth information that LiDAR sensors capture. In this paper, we propose a stereo-guided monocular 3D object detection network, termed SGM3D, which leverages robust 3D features extracted from stereo images to enhance the features learned from monocular images. We innovatively investigate a multi-granularity domain adaptation module (MG-DA) to exploit the network's ability to generate stereo-mimicking features based solely on monocular cues. Coarse feature-level as well as fine anchor-level domain adaptation is leveraged to guide the monocular branch. We further present an IoU-matching-based alignment module (IoU-MA) for object-level domain adaptation between the stereo and monocular domains, alleviating mismatches from the previous stages. We conduct extensive experiments on the challenging KITTI and Lyft datasets and achieve new state-of-the-art performance. Furthermore, our method can be integrated into many other monocular approaches to boost performance without introducing any extra computational cost.
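To make the object-level alignment idea concrete, here is a rough sketch that matches monocular-branch boxes to stereo-branch boxes by IoU and penalizes feature differences on the matched pairs; the IoU threshold, loss form, and function names are assumptions rather than the IoU-MA definition.

```python
# A sketch of IoU-based matching between two branches' boxes, followed by a
# feature-imitation loss on matched pairs. Threshold and loss are assumed.
import torch

def box_iou(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # a: (N, 4), b: (M, 4) in (x1, y1, x2, y2); returns an (N, M) IoU matrix
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = torch.max(a[:, None, :2], b[None, :, :2])
    rb = torch.min(a[:, None, 2:], b[None, :, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-6)

def alignment_loss(stereo_boxes, mono_boxes, stereo_feats, mono_feats, thr=0.5):
    iou = box_iou(mono_boxes, stereo_boxes)   # (N_mono, N_stereo)
    best_iou, idx = iou.max(dim=1)            # best stereo match for each mono box
    keep = best_iou > thr
    if keep.sum() == 0:
        return torch.tensor(0.0)
    # pull monocular features toward their matched stereo features
    return ((mono_feats[keep] - stereo_feats[idx[keep]]) ** 2).mean()

stereo_boxes = torch.tensor([[0., 0., 10., 10.]])
mono_boxes = torch.tensor([[1., 1., 9., 9.]])
stereo_feats, mono_feats = torch.randn(1, 64), torch.randn(1, 64)
print(alignment_loss(stereo_boxes, mono_boxes, stereo_feats, mono_feats))
```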
Practical applications employing deep learning must guarantee inference quality. However, we found that the inference quality of state-of-the-art and state-of-the-practice models in practical applications follows a long-tail distribution. In the real world, many tasks have strict requirements on the quality of deep learning inference, such as safety-critical and mission-critical tasks. Fluctuating inference quality seriously affects practical applications, and the quality at the tail may lead to severe consequences. State-of-the-art and state-of-the-practice models with outstanding inference quality, designed and trained under loose constraints, still show poor inference quality under constraints of practical significance. On the one hand, neural network models must be deployed on complex systems with limited resources. On the other hand, safety-critical and mission-critical tasks need to meet additional metric constraints while ensuring high inference quality. We coin a new term, ``tail quality,'' to characterize this essential requirement and challenge. We also propose a new metric, ``X-Critical-Quality,'' to measure inference quality under certain constraints. This article reveals factors that contribute to the failure of state-of-the-art and state-of-the-practice algorithms and systems in real scenarios. Therefore, we call for establishing innovative methodologies and tools to tackle this enormous challenge.
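As a minimal, hypothetical illustration of examining the tail of the quality distribution rather than only its mean, the sketch below reports quality at several tail percentiles over repeated evaluation runs; the helper name and percentile choices are assumptions, not the paper's definition of X-Critical-Quality.

```python
# A toy sketch of "tail quality": run repeated evaluations and report quality at
# tail percentiles. "pXX" here means the quality that the worst (100 - XX)% of
# runs fall below, i.e. the (100 - XX)-th percentile of the quality scores.
import numpy as np

def tail_quality(per_run_quality: np.ndarray, percentiles=(50, 95, 99)) -> dict:
    """per_run_quality: quality score (e.g. accuracy) of each repeated run."""
    return {f"p{p}": float(np.percentile(per_run_quality, 100 - p))
            for p in percentiles}

runs = np.array([0.92, 0.91, 0.93, 0.90, 0.64, 0.92, 0.89, 0.93])  # one bad tail run
print(tail_quality(runs))  # the p95/p99 values expose the 0.64 outlier
```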
We present X-Decoder, a generalized decoding model that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder takes as input two types of queries: (i) generic non-semantic queries and (ii) semantic queries induced from text inputs, to decode different pixel-level and token-level outputs in the same semantic space. With such a novel design, X-Decoder is the first work that provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks. Further, our design enables seamless interactions across tasks at different granularities and brings mutual benefits by learning a common and rich pixel-level visual-semantic understanding space, without any pseudo-labeling. After pretraining on a mixed set of a limited amount of segmentation data and millions of image-text pairs, X-Decoder exhibits strong transferability to a wide range of downstream tasks in both zero-shot and finetuning settings. Notably, it achieves (1) state-of-the-art results on open-vocabulary segmentation and referring segmentation on eight datasets; (2) finetuned performance better than or competitive with other generalist and specialist models on segmentation and VL tasks; and (3) flexibility for efficient finetuning and novel task composition (e.g., referring captioning and image editing). Code, demo, video, and visualization are available at https://x-decoder-vl.github.io.
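The sketch below illustrates, under simplifying assumptions, a decoder that jointly processes learned non-semantic queries and text-derived semantic queries against image features using a vanilla transformer decoder; the sizes, layer choices, and output split are illustrative and not X-Decoder's actual architecture.

```python
# A schematic sketch: concatenate learned latent queries and text-derived
# queries, decode them against flattened image features, then split the outputs.
import torch
import torch.nn as nn

dim, n_latent, n_text, n_pixels = 256, 100, 16, 1024
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=2,
)

latent_queries = nn.Parameter(torch.randn(1, n_latent, dim))  # generic, non-semantic
text_queries = torch.randn(1, n_text, dim)                    # e.g. encoded caption tokens
image_feats = torch.randn(1, n_pixels, dim)                   # flattened pixel features

queries = torch.cat([latent_queries, text_queries], dim=1)
out = decoder(tgt=queries, memory=image_feats)
mask_queries, token_queries = out[:, :n_latent], out[:, n_latent:]
print(mask_queries.shape, token_queries.shape)  # pixel-level vs. token-level outputs
```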
Previous computation models either have equivalent abilities in representing all computations but fail to provide primitive operators for programming complex algorithms, or lack the generalized expression ability to represent newly added computations. This article presents a unified computation model with generalized expression ability and a concise set of primitive operators for programming high-level algorithms. We propose a unified data abstraction -- Tensor of List -- and offer a unified computation model based on it, which we call the ToL model (in short, ToL). ToL introduces five atomic computations that can represent any elementary computation by finite composition, which is ensured with a strict formal proof. Based on ToL, we design a pure-functional language -- ToLang. ToLang provides a concise set of primitive operators that can be used to program complex big data and AI algorithms. Our evaluations show that ToL has generalized expression ability and a built-in performance indicator, born with a strictly defined computation metric -- elementary operation count (EOPs) -- that is consistent with FLOPs to within a small error range.
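As a toy, assumption-laden illustration of counting elementary operations alongside the usual FLOP estimate, the sketch below instruments a naive matrix multiplication; the rule "one multiply or add = one elementary operation" is assumed here for illustration and is not ToL's formal EOP definition.

```python
# Count elementary operations while performing a naive matrix multiplication,
# then compare the count with the standard 2*n*k*m FLOP estimate.
def matmul_with_eop_count(a, b):
    n, k, m = len(a), len(a[0]), len(b[0])
    eops = 0
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i][j] += a[i][p] * b[p][j]
                eops += 2  # one multiply + one add (assumed counting rule)
    return out, eops

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
_, eops = matmul_with_eop_count(a, b)
print(eops)  # 16, matching the 2*n*k*m FLOP estimate for a 2x2 by 2x2 matmul
```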
Dialogue summarization has recently garnered significant attention due to its wide range of applications. However, existing methods for summarizing dialogues are suboptimal because they do not take into account the inherent structure of dialogue and rely heavily on labeled data, which can lead to poor performance in new domains. In this work, we propose DIONYSUS (dynamic input optimization in pre-training for dialogue summarization), a pre-trained encoder-decoder model for summarizing dialogues in any new domain. To pre-train DIONYSUS, we create two pseudo summaries for each dialogue example: one is produced by a fine-tuned summarization model, and the other is a collection of dialogue turns that convey important information. We then choose one of these pseudo summaries based on the difference in information distribution across different types of dialogues. This selected pseudo summary serves as the objective for pre-training DIONYSUS using a self-supervised approach on a large dialogue corpus. Our experiments show that DIONYSUS outperforms existing methods on six datasets, as demonstrated by its ROUGE scores in zero-shot and few-shot settings.
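A simplified, hypothetical sketch of picking between the two kinds of pseudo summaries follows; it uses plain token overlap with the dialogue as a stand-in for comparing information distributions, which is an assumption rather than DIONYSUS's actual selection criterion.

```python
# Choose between a model-generated pseudo summary and a set of selected dialogue
# turns, using naive token-overlap coverage of the dialogue as a rough proxy.
def coverage(summary: str, dialogue_turns: list[str]) -> float:
    dialogue_tokens = set(" ".join(dialogue_turns).lower().split())
    summary_tokens = set(summary.lower().split())
    return len(summary_tokens & dialogue_tokens) / max(len(dialogue_tokens), 1)

def choose_pseudo_summary(generated: str, selected_turns: list[str],
                          dialogue_turns: list[str]) -> str:
    turn_summary = " ".join(selected_turns)
    if coverage(generated, dialogue_turns) >= coverage(turn_summary, dialogue_turns):
        return generated
    return turn_summary

dialogue = ["A: can we meet at noon tomorrow?",
            "B: sure, the usual cafe?",
            "A: yes, see you there."]
print(choose_pseudo_summary("They agree to meet at the cafe at noon tomorrow.",
                            ["A: can we meet at noon tomorrow?"], dialogue))
```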