If scientific discovery is one of the main driving forces of human progress, insight is the fuel for its engine, and it has long attracted behavior-level research seeking to understand and model its underlying cognitive process. However, current tasks that abstract scientific discovery mostly focus on the emergence of insight, ignoring the special role played by domain knowledge. In this concept paper, we view scientific discovery as an interplay between $thinking \ out \ of \ the \ box$, which actively seeks insightful solutions, and $thinking \ inside \ the \ box$, which generalizes over conceptual domain knowledge to stay correct. Accordingly, we propose Mindle, a semantic search game that spontaneously triggers scientific-discovery-like thinking, as infrastructure for exploring scientific discovery at scale. On this basis, the meta-strategies for insight and the usage of concepts can be investigated reciprocally. In pilot studies, several interesting observations inspire elaborated hypotheses on meta-strategies, context, and individual diversity for further investigation.
As an important data mining technique, high-utility itemset mining (HUIM) is used to discover interesting but hidden information (e.g., profit and risk). HUIM has been widely applied in many application scenarios, such as market analysis, medical detection, and web click-stream analysis. However, most previous HUIM approaches often ignore the relationships among items within an itemset. Consequently, many irrelevant combinations (e.g., \{gold, apple\} and \{notebook, book\}) are discovered by HUIM. To address this limitation, many algorithms have been proposed to mine correlated high-utility itemsets (CoHUIs). In this paper, we propose a novel algorithm called Itemset Utility Maximization with Correlation Measure (CoIUM), which considers both strong correlation and the profitability of items. The algorithm adopts a database-projection mechanism to reduce the cost of database scans. In addition, two upper bounds and four pruning strategies are utilized to effectively prune the search space, and a compact array-based structure is used to compute and store the adopted upper bounds in linear time and space. Finally, extensive experimental results on dense and sparse datasets demonstrate that CoIUM significantly outperforms state-of-the-art algorithms in terms of runtime and memory consumption.
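To make the problem concrete, the following is a minimal brute-force sketch of mining correlated high-utility itemsets on a toy database. The transactions, profits, thresholds, and the use of all-confidence as the correlation measure are illustrative assumptions; this enumerates the search space directly and does not implement CoIUM's projection or pruning strategies.

```python
from itertools import combinations

# Toy transaction database: each transaction maps item -> purchased quantity.
# External utilities give per-unit profit. All values are illustrative.
transactions = [
    {"gold": 1, "apple": 3, "notebook": 2},
    {"apple": 2, "notebook": 1, "book": 4},
    {"gold": 2, "book": 1},
]
unit_profit = {"gold": 50, "apple": 2, "notebook": 5, "book": 8}

def utility(itemset, tx):
    """Utility of an itemset in one transaction (0 if not fully contained)."""
    if not all(i in tx for i in itemset):
        return 0
    return sum(unit_profit[i] * tx[i] for i in itemset)

def support(itemset):
    """Number of transactions containing every item of the itemset."""
    return sum(all(i in tx for i in itemset) for tx in transactions)

def all_confidence(itemset):
    """A common correlation measure: itemset support over max item support."""
    return support(itemset) / max(support((i,)) for i in itemset)

items = sorted(unit_profit)
min_util, min_corr = 20, 0.6
cohuis = [
    s
    for r in range(1, len(items) + 1)
    for s in combinations(items, r)
    if sum(utility(s, tx) for tx in transactions) >= min_util
    and all_confidence(s) >= min_corr
]
print(cohuis)  # → [('book',), ('gold',), ('apple', 'notebook')]
```

Note that the profitable but weakly correlated pair {gold, apple} from the abstract's example is filtered out by the correlation threshold, which is exactly the behavior correlated HUIM aims for.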
Evaluation in machine learning is usually informed by past choices, for example which datasets or metrics to use. This standardization enables comparisons on equal footing using leaderboards, but the evaluation choices become sub-optimal as better alternatives arise. This problem is especially pertinent in natural language generation, which requires ever-improving suites of datasets, metrics, and human evaluation to make definitive claims. To make following best model-evaluation practices easier, we introduce GEMv2. The new version of the Generation, Evaluation, and Metrics Benchmark provides a modular infrastructure for dataset, model, and metric developers to benefit from each other's work. GEMv2 supports 40 documented datasets in 51 languages. Models for all datasets can be evaluated online, and our interactive data-card creation and rendering tools make it easier to add new datasets to the living benchmark.
Theoretical ideas and empirical research have shown us a seemingly surprising result: children, even very young ones, demonstrate learning and thinking in a manner strikingly similar to scientific reasoning in formal research. Encountering a novel phenomenon, children make hypotheses against data, draw causal inferences from observations, test their theories via experimentation, and correct their propositions if inconsistencies arise. Rounds of such processes continue until the underlying mechanism is found. Towards building machines that can learn and think like people, a natural question to ask is whether the intelligence we achieve today manages to carry out such a scientific thinking process, and if so, at what level. In this work, we devise the EST environment for evaluating the scientific thinking ability of artificial agents. Motivated by the research stream on causal discovery, we build our interactive EST environment based on blicket detection. Specifically, in each episode of EST, an agent is presented with novel observations and asked to figure out all objects' blicketness. At each time step, the agent proposes new experiments to validate its hypothesis and updates its current belief. By evaluating reinforcement learning (RL) agents on both symbolic and visual versions of this task, we observe a clear failure of today's learning methods to reach a level of intelligence comparable to humans. Such inefficacy of learning in scientific thinking calls for future research on building human-like intelligence.
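The blicket-detection setting can be illustrated with a toy simulation. This is an assumption-based sketch, not the actual EST environment: here some objects are secretly "blickets", the machine activates iff at least one blicket is placed on it, and an agent infers blicketness by intervening.

```python
import random

class BlicketMachine:
    """Toy blicket machine: lights up iff a placed subset contains a blicket."""

    def __init__(self, n_objects=4, seed=0):
        rng = random.Random(seed)
        # Each object is secretly a blicket with probability 0.5.
        self.blickets = {i for i in range(n_objects) if rng.random() < 0.5}
        self.n_objects = n_objects

    def place(self, subset):
        """One experiment: does the machine activate for this subset?"""
        return bool(self.blickets & set(subset))

def infer_blickets(machine):
    # A deliberately naive hypothesis-testing loop: probe each object in
    # isolation and read off its blicketness from the outcome. An efficient
    # agent would instead design more informative multi-object interventions.
    return {i for i in range(machine.n_objects) if machine.place({i})}

machine = BlicketMachine(seed=42)
assert infer_blickets(machine) == machine.blickets
```

The gap the paper highlights is between this kind of deliberate experiment design and what current RL agents actually learn to do in the symbolic and visual versions of the task.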
Cross-scene model adaptation is crucial for camera relocalization in real-world scenarios. It is often preferable that a pre-learned model can be quickly adapted to a novel scene with as few training samples as possible. However, existing state-of-the-art approaches can hardly support such few-shot scene adaptation due to the entangled learning of image feature extraction and scene coordinate regression. To address this issue, we approach camera relocalization with a decoupled solution in which feature extraction, coordinate regression, and pose estimation are performed separately. Our key insight is that the feature encoder used for coordinate regression should be learned by removing the distracting factor of coordinate systems, so that the feature encoder can be learned from multiple scenes to obtain general and, more importantly, scene-insensitive feature representations. With this feature prior, combined with a coordinate regressor, few-shot observations in a new scene are much easier to connect with the 3D world than in existing integrated solutions. Experiments demonstrate the superiority of our approach over state-of-the-art methods, yielding higher accuracy on several scenes with diverse visual appearance and viewpoint distributions.
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
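The paper's claim that collapse can occur in classical models with exact inference can be illustrated analytically in a toy linear-Gaussian model (an illustration of the non-identifiability argument, not the paper's VAE construction): with $z \sim N(0,1)$ and $x \mid z \sim N(wz, \sigma^2)$, the exact posterior is Gaussian with mean $wx/(w^2+\sigma^2)$ and variance $\sigma^2/(w^2+\sigma^2)$, so when $w = 0$ the latent is non-identifiable and the posterior equals the prior for every $x$.

```python
import math

def posterior(w, sigma2, x):
    """Exact posterior p(z | x) for z ~ N(0,1), x | z ~ N(w z, sigma2)."""
    var = sigma2 / (w * w + sigma2)
    mean = w * x / (w * w + sigma2)
    return mean, var

def kl_to_prior(mean, var):
    """KL( N(mean, var) || N(0, 1) ) in nats; zero iff posterior == prior."""
    return 0.5 * (var + mean * mean - 1.0 - math.log(var))

# Identifiable case (w != 0): the posterior depends on x, KL is positive.
m, v = posterior(w=2.0, sigma2=1.0, x=1.5)
assert kl_to_prior(m, v) > 0.0

# Non-identifiable case (w == 0): posterior equals the prior exactly --
# posterior collapse, despite exact inference and no neural network at all.
m, v = posterior(w=0.0, sigma2=1.0, x=1.5)
assert abs(kl_to_prior(m, v)) < 1e-12
```

The latent-identifiable construction in the paper can be read as ruling out the $w = 0$-like degeneracies by making the latent-to-observation map bijective.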
This paper illustrates the technologies of user next-intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay, serving more than 100 million daily active users. Specifically, we propose AlipayKG to explicitly characterize user intent; it is an offline concept knowledge graph in the Life-Service domain that models the historical behaviors of users, the rich content users interact with, and the relations between them. We further introduce a Transformer-based model which integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system can effectively enhance the performance of downstream tasks while retaining explainability.
Transformer, originally devised for natural language processing, has also achieved significant success in computer vision. Thanks to its strong expressive power, researchers are investigating ways to deploy transformers in reinforcement learning (RL), and transformer-based models have demonstrated their potential on representative RL benchmarks. In this paper, we collect and dissect recent advances in transforming RL by transformer (transformer-based RL, or TRL), in order to explore its development trajectory and future trend. We group existing developments into two categories: architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation, and autonomous driving. Architecture-enhancement methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework; they model agents and environments much more precisely than deep RL methods, but are still limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". Trajectory-optimization methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior cloning framework; they can extract policies from static datasets and fully exploit the long-sequence modeling capability of the transformer. Given these advancements, extensions and challenges in TRL are reviewed and proposals for future directions are discussed. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
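The trajectory-optimization view can be sketched with the common decision-transformer-style token layout (an illustrative sketch of the general idea, not a specific surveyed model's API): each trajectory is flattened into interleaved (return-to-go, state, action) tokens, and a causal transformer trained by behavior cloning predicts the next action conditioned on the desired return.

```python
def returns_to_go(rewards):
    """Suffix sums of rewards: the return the agent should still collect."""
    out, total = [], 0.0
    for r in reversed(rewards):
        total += r
        out.append(total)
    return list(reversed(out))

def tokenize_trajectory(states, actions, rewards):
    """Flatten one trajectory into the interleaved sequence fed to the model."""
    rtg = returns_to_go(rewards)
    tokens = []
    for g, s, a in zip(rtg, states, actions):
        tokens += [("rtg", g), ("state", s), ("action", a)]
    return tokens

tokens = tokenize_trajectory(states=[0, 1, 2], actions=[1, 0, 1],
                             rewards=[0.0, 0.0, 1.0])
# Every timestep contributes three tokens, ordered rtg -> state -> action.
assert len(tokens) == 9
assert tokens[0] == ("rtg", 1.0)  # full episode return still to go
```

Conditioning on the return-to-go token is what lets such models extract return-targeted policies from static datasets, in contrast to bootstrapped value estimation in traditional RL.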
Fine-grained capturing of 3D human-object interaction (HOI) boosts human activity understanding and facilitates downstream visual tasks, including action recognition, holistic scene reconstruction, and human motion synthesis. Despite its significance, existing works mostly assume that humans interact with rigid objects using only a few body parts, limiting their scope. In this paper, we address the challenging problem of f-AHOI, wherein whole human bodies interact with articulated objects whose parts are connected by movable joints. We present CHAIRS, a large-scale motion-captured f-AHOI dataset consisting of 16.2 hours of versatile interactions between 46 participants and 81 articulated and rigid sittable objects. CHAIRS provides 3D meshes of both humans and articulated objects during the entire interactive process, as well as realistic and physically plausible full-body interactions. We show the value of CHAIRS with object pose estimation. By learning the geometrical relationships in HOI, we devise the very first model that leverages human pose estimation to tackle the estimation of articulated object poses and shapes during whole-body interactions. Given an image and an estimated human pose, our model first reconstructs the pose and shape of the object, then optimizes the reconstruction according to a learned interaction prior. Under both evaluation settings (i.e., with or without knowledge of objects' geometries/structures), our model significantly outperforms baselines. We hope CHAIRS will push the community towards finer-grained interaction understanding. We will make the data/code publicly available.
The acquisition of high-quality human annotations through crowdsourcing platforms like Amazon Mechanical Turk (MTurk) is more challenging than expected. Annotation quality may be affected by various factors, such as the annotation instructions, the Human Intelligence Task (HIT) design, and the wages paid to annotators. To avoid potentially low-quality annotations, which could mislead the evaluation of automatic summarization system outputs, we investigate the recruitment of high-quality MTurk workers via a three-step qualification pipeline. We show that we can successfully filter out bad workers before they carry out the evaluations and obtain high-quality annotations while optimizing the use of resources. This paper can serve as a basis for the recruitment of qualified annotators in other challenging annotation tasks.
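A staged qualification filter of this kind can be sketched as follows. The abstract does not specify the pipeline's concrete criteria, so the three gates and every threshold below are hypothetical placeholders, chosen only to show the funnel structure.

```python
def qualifies(worker):
    """Hypothetical three-step qualification funnel (thresholds are assumed):
    1) screen platform profile statistics,
    2) grade performance on a paid qualification HIT,
    3) spot-check agreement with expert labels before granting access."""
    step1 = worker["approval_rate"] >= 0.98 and worker["hits_done"] >= 1000
    step2 = worker["qual_hit_score"] >= 0.8
    step3 = worker["expert_agreement"] >= 0.7
    return step1 and step2 and step3

workers = [
    {"approval_rate": 0.99, "hits_done": 5000,
     "qual_hit_score": 0.9, "expert_agreement": 0.85},
    {"approval_rate": 0.95, "hits_done": 5000,   # fails the profile screen
     "qual_hit_score": 0.9, "expert_agreement": 0.85},
]
print([qualifies(w) for w in workers])  # → [True, False]
```

The point of staging is resource use: each step is cheaper than the next annotation round it gates, so bad workers are rejected before they touch the expensive evaluation HITs.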