Face Anti-spoofing (FAS) is essential for securing face recognition systems against various physical attacks. However, recent research generally focuses on short-distance applications (e.g., phone unlocking) while giving little consideration to long-distance scenes (e.g., surveillance security checks). To promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which contains 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this scenario, low image resolution and noise interference are the new challenges faced by surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover discrimination-related image information with the aid of a super-resolution network; (2) generated sample pairs simulate the quality-variance distribution, helping the contrastive learning strategy obtain feature representations that are robust to quality variation; and (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
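The contrastive objective in aspect (2) can be illustrated with a generic NT-Xent-style loss over quality-varied embedding pairs. This is a minimal sketch under assumed inputs, not the paper's actual CQIL implementation; the pair-generation and network details are omitted:

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """Contrastive loss over paired embeddings: row i of z_a (e.g. a
    high-quality view of a face) should match row i of z_b (a generated
    low-quality view of the same face)."""
    z = np.concatenate([z_a, z_b], axis=0)            # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / temperature                       # (2N, 2N)
    n = len(z_a)
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # i <-> i+n
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
z_hi = rng.normal(size=(8, 16))
loss_random = nt_xent_loss(z_hi, rng.normal(size=(8, 16)))
loss_aligned = nt_xent_loss(z_hi, z_hi + 0.01 * rng.normal(size=(8, 16)))
assert loss_aligned < loss_random  # quality-robust pairs score lower loss
```

The loss is minimized when the two quality views of each sample map to nearby embeddings, which is exactly the robustness-to-quality-variation property the abstract describes.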
Inspired by humans' ability to perceive the surface texture of unfamiliar objects without relying on vision, the sense of touch can play a crucial role while a robot explores its environment, especially in scenarios where vision is difficult to apply or occlusion is unavoidable. Existing tactile surface reconstruction methods rely on external sensors or carry strong prior assumptions, which limits their application scenarios and complicates operation. This paper proposes a surface reconstruction algorithm that relies only on a novel tactile sensor, in which the surface structure of an unfamiliar object is reconstructed from multiple tactile measurements. Compared with existing algorithms, the proposed algorithm does not depend on external devices and focuses on improving the reconstruction accuracy for large-scale object surfaces. Since the reconstruction accuracy is easily affected by the sampling pressure, we propose a correction algorithm to adapt to it. The multi-frame tactile imprints produced by multiple contacts can be accurately merged into a global object surface by jointly using a point cloud registration algorithm, a deep-learning-based loop closure detection algorithm, and a pose graph optimization algorithm. Experiments verify that the proposed algorithm achieves millimeter-level accuracy when reconstructing interactive object surfaces, providing robots with accurate tactile information for understanding the surrounding environment.
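The registration step can be illustrated with the classic Kabsch algorithm for rigidly aligning two tactile imprints with known point correspondences. This is a generic sketch of one building block, not the paper's full pipeline, which additionally uses loop closure detection and pose graph optimization:

```python
import numpy as np

def kabsch_align(src, dst):
    """Rigid alignment (rotation + translation) of src onto dst via the
    Kabsch algorithm, the core step of point cloud registration when
    correspondences are known. src, dst: (n, 3) corresponding points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))                   # one tactile imprint
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = kabsch_align(pts, moved)
assert np.allclose(R, R_true, atol=1e-8)         # rotation recovered
assert np.allclose(pts @ R.T + t, moved, atol=1e-8)
```

In a real pipeline the correspondences come from an ICP-style nearest-neighbor search, and successive pairwise transforms are refined globally by the pose graph.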
Human ecological success relies on our characteristic ability to flexibly organize into cooperative social groups. Successful groups employ substantial specialization and division of labor. Unlike most other animals, humans learn during their lifetimes, by trial and error, which roles to take on. However, when some critical roles are more attractive than others and individuals are self-interested, a social dilemma arises: everyone prefers that others take on the critical but unrewarding roles, so that they may free-ride in a better-paying one. But if all act accordingly and the critical roles go unfilled, disaster strikes. In such situations, learning an optimal role distribution may be impossible. A fundamental question is therefore: how can division of labor emerge in groups of self-interested learning individuals? Here we show that, by introducing a model of social norms, which we regard as decentralized patterns of social sanctioning, groups of self-interested individuals can learn a division of labor in which all critical roles are filled. Such social norms work by redistributing rewards within the population so as to disadvantage antisocial roles while incentivizing prosocial roles that do not pay as well intrinsically.
Tabular data is ubiquitous in real-world applications. Although many commonly used neural components (e.g., convolution) and extensible neural networks (e.g., ResNet) have been developed by the machine learning community, few are effective for tabular data, and few designs are tailored to tabular data structures. In this paper, we propose a novel and flexible neural component for tabular data, called the Abstract Layer (AbstLay), which learns to explicitly group correlated input features and generate higher-level features for semantic abstraction. In addition, we design a structure re-parameterization method to compress AbstLay, thereby substantially reducing the computational complexity in the inference stage. Using AbstLays, we build a special basic block and construct a family of Deep Abstract Networks (DANets) for tabular data classification and regression by stacking such blocks. In DANets, special shortcut paths are introduced to fetch information from raw tabular features, assisting feature interaction across different levels. Comprehensive experiments on seven real-world tabular datasets show that our AbstLay and DANets are effective for tabular data classification and regression, and their computational complexity is superior to that of competitive methods. Moreover, we evaluate the performance gains of DANets as they go deeper, verifying the scalability of our method. Our code is available at https://github.com/whatashot/danet.
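The feature-grouping idea behind AbstLay can be sketched as follows. This toy forward pass (soft feature-selection masks plus per-group linear maps) is our own minimal illustration, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def abstract_layer(x, mask_logits, weights):
    """One sketched abstraction step for tabular data.
    x:           (batch, n_features) raw tabular input
    mask_logits: (n_groups, n_features) learnable group-selection logits
    weights:     (n_groups, n_features, d_out) per-group linear maps
    Returns (batch, n_groups, d_out): one higher-level feature per group.
    """
    masks = softmax(mask_logits, axis=1)         # soft feature grouping
    grouped = x[:, None, :] * masks[None, :, :]  # (batch, n_groups, n_features)
    return np.einsum('bgf,gfd->bgd', grouped, weights)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 10))                     # 4 rows, 10 tabular features
out = abstract_layer(x, rng.normal(size=(3, 10)), rng.normal(size=(3, 10, 5)))
assert out.shape == (4, 3, 5)
```

Stacking such layers, so that each level re-groups the abstractions of the previous one, mirrors the way DANets deepen the abstraction hierarchy.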
In this paper we explore the task of modeling (semi) structured object sequences; in particular we focus our attention on the problem of developing a structure-aware input representation for such sequences. In such sequences, we assume that each structured object is represented by a set of key-value pairs which encode the attributes of the structured object. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key, over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this iterative two-part training results in better performance than a unified network with hierarchical encoding, as well as other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments using real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks and detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening sequence objects and also allows us to operate on significantly larger sequences than existing methods.
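The TVM/KA decomposition can be illustrated with a toy numpy sketch of the data layout. The `temporal_value_encoding` stand-in and the toy objects below are illustrative assumptions; the actual components are pre-trained Transformer encoders:

```python
import numpy as np

# A toy sequence of three structured objects sharing a key universe.
objects = [
    {"status": 1.0, "retries": 0.0, "latency": 12.0},
    {"status": 1.0, "retries": 2.0, "latency": 45.0},
    {"status": 0.0, "retries": 3.0, "latency": 90.0},
]
keys = sorted(objects[0])

def temporal_value_encoding(values):
    """Stand-in for TVM: embed one key's value sequence over time."""
    v = np.asarray(values)
    return np.array([v.mean(), v.std(), v[-1]])  # toy 3-d summary

# TVM: one representation per key, built from that key's value sequence.
per_key = np.stack([temporal_value_encoding([o[k] for o in objects])
                    for k in keys])              # (n_keys, d)

# KA: self-attend over the key-conditioned representations.
scores = per_key @ per_key.T / np.sqrt(per_key.shape[1])
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn = attn / attn.sum(axis=1, keepdims=True)
sequence_repr = (attn @ per_key).mean(axis=0)    # (d,) sequence vector
assert sequence_repr.shape == (3,)
```

The key point is that sequence length enters only through the per-key value sequences, which is why this factorization scales to longer object sequences than a flattened representation.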
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment from an untrimmed video by a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary-bias: The annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take the adjacent irrelevant frames as new boundaries. 2) Reasoning-bias: Such incorrect new boundary frames also lead to reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper, we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. Such a mechanism is also able to supplement the absent consecutive visual semantics to the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
Proteins are fundamental biological entities that play a key role in life activities. The amino acid sequences of proteins can be folded into stable 3D structures in the real physicochemical world, forming a special kind of sequence-structure data. With the development of Artificial Intelligence (AI) techniques, Protein Representation Learning (PRL) has recently emerged as a promising research topic for extracting informative knowledge from massive protein sequences or structures. To pave the way for AI researchers with little bioinformatics background, we present a timely and comprehensive review of PRL formulations and existing PRL methods from the perspective of model architectures, pretext tasks, and downstream applications. We first briefly introduce the motivations for protein representation learning and formulate it in a general and unified framework. Next, we divide existing PRL methods into three main categories: sequence-based, structure-based, and sequence-structure co-modeling. Finally, we discuss some technical challenges and potential directions for improving protein representation learning. The latest advances in PRL methods are summarized in a GitHub repository https://github.com/LirongWu/awesome-protein-representation-learning.
Unlike traditional distributed machine learning, federated learning stores data locally for training and then aggregates the models on the server, which solves the data security problem that may arise in traditional distributed machine learning. However, during the training process, the transmission of model parameters can impose a significant load on the network bandwidth. It has been pointed out that the vast majority of model parameters are redundant during model parameter transmission. On this basis, we explore the data distribution law of selected partial model parameters and propose a deep hierarchical quantization compression algorithm, which further compresses the model and reduces the network load brought by data transmission through hierarchical quantization of model parameters. We also adopt a dynamic sampling strategy for client selection to accelerate the convergence of the model. Experimental results on different public datasets demonstrate the effectiveness of our algorithm.
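The kind of parameter quantization involved can be sketched with a toy magnitude-tiered uniform quantizer; this is a generic illustration of quantizing parameters at different precisions, not the paper's specific hierarchical scheme:

```python
import numpy as np

def hierarchical_quantize(params, thresholds=(0.5,), bits=(4, 8)):
    """Toy tiered quantizer: parameters are split into magnitude tiers and
    each tier is uniformly quantized with its own bit width (bits[i] is
    the width for the i-th tier, so larger magnitudes keep more bits).
    Returns the dequantized array, as the server would reconstruct it."""
    out = np.empty_like(params)
    edges = (0.0,) + tuple(thresholds) + (np.inf,)
    mags = np.abs(params)
    for lo, hi, b in zip(edges[:-1], edges[1:], bits):
        mask = (mags >= lo) & (mags < hi)
        if not mask.any():
            continue
        tier = params[mask]
        scale = np.abs(tier).max() / (2 ** (b - 1) - 1)
        out[mask] = np.round(tier / scale) * scale   # quantize + dequantize
    return out

rng = np.random.default_rng(0)
w = rng.normal(size=1000)                # one layer's parameters
w_q = hierarchical_quantize(w)
assert w_q.shape == w.shape
assert np.mean((w - w_q) ** 2) < 1e-3    # small reconstruction error
```

In a federated setting, only the tier indices and quantized codes would be transmitted, cutting the per-round upload well below the 32-bit float baseline.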
In the new era of personalization, learning the heterogeneous treatment effect (HTE) becomes an inevitable trend with numerous applications. Yet, most existing HTE estimation methods focus on independently and identically distributed observations and cannot handle the non-stationarity and temporal dependency in the common panel data setting. The treatment evaluators developed for panel data, on the other hand, typically ignore the individualized information. To fill the gap, in this paper, we initiate the study of HTE estimation in panel data. Under different assumptions for HTE identifiability, we propose the corresponding heterogeneous one-side and two-side synthetic learners, namely H1SL and H2SL, by leveraging state-of-the-art HTE estimators for non-panel data and generalizing the synthetic control method to allow a flexible data generating process. We establish the convergence rates of the proposed estimators. The superior performance of the proposed methods over existing ones is demonstrated by extensive numerical studies.
As an important variant of entity alignment (EA), multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) with multiple modalities like images. However, current MMEA algorithms all adopt KG-level modality fusion strategies and ignore modality differences among individual entities, hurting robustness to potential noise involved in modalities (e.g., unidentifiable images and relations). In this paper, we present MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities for instance-level feature fusion. A modal-aware hard entity replay strategy is also proposed to address vague entity details. Extensive experimental results show that our model not only achieves SOTA performance in multiple training scenarios, including supervised, unsupervised, iterative, and low-resource, but also has a limited parameter count, promising speed, and good interpretability. Our code will be available soon.
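The instance-level fusion idea, i.e. weighting each entity's modalities with dynamically predicted coefficients rather than a fixed KG-level rule, can be sketched as follows. The scoring function here is a toy assumption; MEAformer itself predicts the coefficients with transformer-based hybrid layers:

```python
import numpy as np

def fuse_modalities(embs, scorer_w):
    """Toy instance-level fusion: predict one weight per modality for this
    entity (softmax over learned scores), then take the weighted sum.
    embs:     (n_modalities, d) one entity's per-modality embeddings
    scorer_w: (d,) learnable scoring vector (illustrative stand-in)
    """
    scores = embs @ scorer_w                 # one score per modality
    w = np.exp(scores - scores.max())
    w = w / w.sum()                          # dynamic correlation coefficients
    return w @ embs, w                       # fused embedding, weights

rng = np.random.default_rng(0)
image, text, graph = rng.normal(size=(3, 8))
fused, weights = fuse_modalities(np.stack([image, text, graph]),
                                 rng.normal(size=8))
assert fused.shape == (8,)
assert np.isclose(weights.sum(), 1.0)
```

Because the weights are recomputed per entity, a noisy modality (say, an unidentifiable image) can be down-weighted for that entity alone without affecting the rest of the KG.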