Recent years have witnessed breakthroughs in face recognition driven by deep convolutional neural networks. Dozens of papers in the field of FR are published every year. Some of them have been adopted by industry and play an important role in everyday life, such as device unlocking and mobile payment. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional hand-crafted features or deep learning, mainstream training and evaluation datasets, and related applications. We have analyzed and compared as many state-of-the-art works as possible, and have also carefully designed a set of experiments to study the effect of backbone size and data distribution. This survey is the accompanying material for the tutorial named The Practical Face Recognition Technology in the Industrial World at FG2023.
A central challenge of building more powerful Graph Neural Networks (GNNs) is the oversmoothing phenomenon, where increasing the network depth leads to homogeneous node representations and thus worse classification performance. While previous works have only demonstrated that oversmoothing is inevitable when the number of graph convolutions tends to infinity, in this paper, we precisely characterize the mechanism behind the phenomenon via a non-asymptotic analysis. Specifically, we distinguish between two different effects when applying graph convolutions -- an undesirable mixing effect that homogenizes node representations in different classes, and a desirable denoising effect that homogenizes node representations in the same class. By quantifying these two effects on random graphs sampled from the Contextual Stochastic Block Model (CSBM), we show that oversmoothing happens once the mixing effect starts to dominate the denoising effect, and the number of layers required for this transition is $O(\log N/\log (\log N))$ for sufficiently dense graphs with $N$ nodes. We also extend our analysis to study the effects of Personalized PageRank (PPR) on oversmoothing. Our results suggest that while PPR mitigates oversmoothing at deeper layers, PPR-based architectures still achieve their best performance at a shallow depth and are outperformed by the graph convolution approach on certain graphs. Finally, we support our theoretical results with numerical experiments, which further suggest that the oversmoothing phenomenon observed in practice may be exacerbated by the difficulty of optimizing deep GNN models.
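As a rough illustration of the mixing-versus-denoising framing (not taken from the paper; the parameter values and the two summary statistics below are arbitrary choices for demonstration), the following sketch samples a two-class CSBM graph and tracks how repeated graph convolutions shrink the within-class spread and the between-class gap:

```python
# Illustrative sketch: mixing vs. denoising effects of graph convolution on a
# two-class Contextual Stochastic Block Model (CSBM). All values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

N, d = 1000, 16              # nodes, feature dimension
p_in, p_out = 0.05, 0.01     # intra-/inter-class edge probabilities
labels = np.repeat([0, 1], N // 2)
mu = np.zeros(d); mu[0] = 1.0
X = rng.normal(size=(N, d)) + np.where(labels[:, None] == 0, mu, -mu)

# Sample the CSBM adjacency matrix and build the symmetric-normalized operator.
same = labels[:, None] == labels[None, :]
probs = np.where(same, p_in, p_out)
A = (rng.random((N, N)) < probs).astype(float)
A = np.triu(A, 1); A = A + A.T + np.eye(N)            # undirected, with self-loops
deg = A.sum(1)
A_hat = A / np.sqrt(np.outer(deg, deg))

H = X.copy()
for layer in range(1, 11):
    H = A_hat @ H                                      # one graph convolution
    centers = np.stack([H[labels == c].mean(0) for c in (0, 1)])
    gap = np.linalg.norm(centers[0] - centers[1])              # between-class gap
    spread = np.mean([H[labels == c].std() for c in (0, 1)])   # within-class spread
    print(f"layer {layer:2d}  class-center gap {gap:.3f}  within-class spread {spread:.4f}")
# Early layers shrink the within-class spread faster than the class gap (denoising);
# once the gap collapses as fast or faster, the representations oversmooth.
```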
Privacy in AI remains a topic that has drawn attention from researchers and the general public in recent years. As one way to implement privacy-preserving AI, differentially private learning is a framework that enables AI models to be trained with differential privacy (DP). To achieve DP in the learning process, existing algorithms typically limit the magnitude of gradients with a constant clipping threshold, which requires careful tuning due to its significant impact on model performance. As a solution to this issue, the recent works NSGD and Auto-S propose to use normalization instead of clipping to avoid hyperparameter tuning. However, normalization-based approaches like NSGD and Auto-S rely on a monotonic weight function, which places excessive weight on samples with small gradients and introduces extra deviation into the update. In this paper, we propose a Differentially Private Per-Sample Adaptive Clipping (DP-PSAC) algorithm based on a non-monotonic adaptive weight function, which guarantees privacy without the hyperparameter tuning typically required by constant clipping, while significantly reducing the deviation between the update and the true batch-averaged gradient. We provide a rigorous theoretical convergence analysis and show that, at the same order of convergence rate, the proposed algorithm achieves a lower non-vanishing bound, which is maintained over training iterations, compared with NSGD/Auto-S. In addition, through extensive experimental evaluation, we show that DP-PSAC outperforms or matches the state-of-the-art methods on multiple mainstream vision and language tasks.
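To make the contrast between the per-sample scaling rules concrete, here is a small sketch comparing constant clipping, normalization, and a non-monotonic adaptive weight. The specific non-monotonic formula below is a placeholder chosen for illustration and is not claimed to be the exact DP-PSAC weight function:

```python
# Illustrative sketch of per-sample gradient scaling rules in DP training.
import numpy as np

def constant_clipping(g, C=1.0):
    """Classic DP-SGD: scale by min(1, C / ||g||); the threshold C must be tuned."""
    norm = np.linalg.norm(g)
    return g * min(1.0, C / (norm + 1e-12))

def normalization(g, r=0.01):
    """NSGD/Auto-S style: monotone weight 1 / (||g|| + r); samples with tiny
    gradients receive weights close to 1/r, i.e. proportionally very large."""
    return g / (np.linalg.norm(g) + r)

def non_monotonic_adaptive(g, r=0.01):
    """Placeholder non-monotonic weight ||g|| / (||g||^2 + r): it rises for small
    norms, peaks near sqrt(r), then decays, so tiny-gradient samples are no
    longer over-weighted. NOT the paper's exact formula."""
    norm = np.linalg.norm(g)
    return g * (norm / (norm ** 2 + r))

for norm in (0.001, 0.1, 1.0, 10.0):
    g = np.ones(4) * norm / 2.0          # toy gradient with ||g|| = norm
    print(norm,
          np.linalg.norm(constant_clipping(g)),
          np.linalg.norm(normalization(g)),
          np.linalg.norm(non_monotonic_adaptive(g)))
# After per-sample scaling, the gradients are averaged and Gaussian noise is
# added as usual to obtain the (epsilon, delta)-DP guarantee.
```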
Recently, there has been significant progress in teaching language models to perform step-by-step reasoning to solve complex numerical reasoning tasks. Chain-of-thought prompting (CoT) is by far the state-of-the-art method for these tasks. CoT uses language models to perform both reasoning and computation in the multi-step `thought' process. To disentangle computation from reasoning, we propose `Program of Thoughts' (PoT), which uses language models (mainly Codex) to express the reasoning process as a program. The computation is relegated to an external computer, which executes the generated programs to derive the answer. We evaluate PoT on five math word problem datasets (GSM, AQuA, SVAMP, TabMWP, MultiArith) and three financial-QA datasets (FinQA, ConvFinQA, TATQA) in both few-shot and zero-shot setups. Under both settings, PoT shows an average performance gain of around 12\% over CoT across all the evaluated datasets. By combining PoT with self-consistency decoding, we achieve SoTA performance on all math problem datasets and near-SoTA performance on the financial datasets. All of our data and code are released on GitHub\footnote{\url{https://github.com/wenhuchen/Program-of-Thoughts}}.
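A minimal sketch of the PoT idea (illustrative only, not the authors' released code): the language-model call is stubbed out with a hand-written program, and the interpreter, rather than the model, carries out the arithmetic.

```python
# Minimal Program-of-Thoughts sketch: the LLM emits a small Python program
# instead of natural-language arithmetic; a separate interpreter runs it.
def solve_with_pot(question, generate_program):
    """`generate_program` stands in for an LLM call (e.g. Codex) that maps a
    word problem to Python source assigning the result to `ans`."""
    program = generate_program(question)
    namespace = {}
    exec(program, namespace)          # computation is delegated to the interpreter
    return namespace["ans"]

# A hand-written stand-in for what the model might generate for one GSM-style item.
def fake_llm(question):
    return (
        "eggs_per_day = 16\n"
        "eaten = 3\n"
        "baked = 4\n"
        "price = 2\n"
        "ans = (eggs_per_day - eaten - baked) * price\n"
    )

print(solve_with_pot("How much does she make daily at the market?", fake_llm))  # 18
```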
Widely used deep learning models have been found to be poorly robust: nearly imperceptible noise can fool state-of-the-art models into making wrong predictions. Although many high-performance attack generation methods exist, most of them directly add perturbations to the original data and measure them with L_p norms; this may destroy the main structure of the data, resulting in invalid attacks. In this paper, we propose a black-box attack that, instead of modifying the original data, modifies latent features of the data extracted by an autoencoder; we then measure the noise in the semantic space to preserve the semantics of the data. We trained autoencoders on the MNIST and CIFAR-10 datasets and used a genetic algorithm to search for the optimal adversarial perturbation. Our method achieves a 100% attack success rate on the first 100 samples of both the MNIST and CIFAR-10 datasets with a smaller perturbation.
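A rough sketch of this kind of latent-space black-box attack, assuming pre-trained `encoder`, `decoder`, and `classifier` callables and a deliberately simplified genetic search (selection plus Gaussian mutation only); the fitness function and hyperparameters are illustrative assumptions, not the paper's settings:

```python
# Illustrative sketch: a black-box attack that perturbs an autoencoder's latent
# code with a simple genetic search instead of perturbing the pixels directly.
import numpy as np

def latent_attack(x, true_label, encoder, decoder, classifier,
                  pop_size=20, steps=100, sigma=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    z = encoder(x)                                     # clean latent code (numpy array)
    population = rng.normal(0.0, sigma, size=(pop_size,) + z.shape)

    def fitness(delta):
        probs = classifier(decoder(z + delta))         # black-box query; assumed 1-D class probs
        # reward low confidence on the true class, penalize large latent noise
        return -probs[true_label] - 0.1 * np.linalg.norm(delta)

    for _ in range(steps):
        scores = np.array([fitness(d) for d in population])
        parents = population[np.argsort(scores)[-pop_size // 2:]]   # keep the best half
        children = parents + rng.normal(0.0, sigma, size=parents.shape)
        population = np.concatenate([parents, children])
        best = population[np.argmax([fitness(d) for d in population])]
        if np.argmax(classifier(decoder(z + best))) != true_label:
            return decoder(z + best)                   # successful adversarial example
    return None                                        # attack failed within budget
```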
Multi-label image classification aims to predict all possible labels in an image. Since annotating all labels in every training image can be expensive, the task is usually formulated as partial-label learning. Existing works on partial-label learning focus on the case where each training image is annotated with only a subset of its labels. A special case is to annotate only one positive label per training image. To further relieve the annotation burden and enhance classifier performance, this paper proposes a new partial-label setting in which only a subset of the training images are labeled, each with only one positive label, while the remaining training images stay unlabeled. To handle this new setting, we propose an end-to-end deep network, PLMCL (Partial Label Momentum Curriculum Learning), which learns to produce confident pseudo labels for both partially labeled and unlabeled training images. The proposed momentum-based law updates the soft pseudo labels on each training image by taking into account the velocity at which the pseudo labels change; these updates help avoid getting trapped in low-confidence local minima, especially at the early stage of training, when observed labels are scarce and confidence in the pseudo labels is low. In addition, we present a confidence-aware scheduler to adaptively perform easy-to-hard learning for different labels. Extensive experiments demonstrate that our proposed PLMCL outperforms many state-of-the-art multi-label classification methods under various partial-label settings on three different datasets.
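As a loose illustration of momentum-style soft pseudo-label updating (an assumption for exposition; the actual PLMCL update law and its velocity-dependent momentum are defined in the paper, not here):

```python
# Rough sketch of a momentum-style soft pseudo-label update for multi-label data.
import numpy as np

def update_pseudo_labels(pseudo, logits, momentum=0.9):
    """Blend the running soft pseudo labels with the current per-label sigmoid
    predictions; a high momentum keeps early, low-confidence predictions from
    dominating the pseudo labels at the start of training."""
    probs = 1.0 / (1.0 + np.exp(-logits))          # per-label sigmoid (multi-label)
    return momentum * pseudo + (1.0 - momentum) * probs

pseudo = np.full(5, 0.5)                           # uninformative initial soft labels
logits = np.array([3.0, -2.0, 0.5, -4.0, 1.0])     # current model outputs
print(update_pseudo_labels(pseudo, logits))
```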
Document-level relation extraction (RE) aims to identify the relations between entities throughout an entire document. It requires complex reasoning abilities to synthesize various kinds of knowledge, such as coreference and commonsense. Large-scale knowledge graphs (KGs) contain a wealth of real-world facts and can provide valuable knowledge for document-level RE. In this paper, we propose an entity knowledge injection framework to enhance current document-level RE models. Specifically, we introduce coreference distillation to inject coreference knowledge, endowing an RE model with more general coreference reasoning ability. We also employ representation reconciliation to inject factual knowledge and aggregate the KG representations into a unified space. Experiments on two benchmark datasets validate the generalization of our entity knowledge injection framework and the consistent improvements it brings to multiple document-level RE models.
Most existing compound facial expression recognition (FER) methods rely on large-scale compound expression data for training. However, collecting such data is labor-intensive and time-consuming. In this paper, we address the compound FER task in the cross-domain few-shot learning (FSL) setting, which requires only a few samples of compound expressions in the target domain. Specifically, we propose a novel cascaded decomposition network (CDNet), which cascades several learn-to-decompose modules with a sequential decomposition mechanism to obtain a transferable feature space. To alleviate the overfitting problem caused by the limited base classes in our task, a partial regularization strategy is designed to effectively exploit the best of both episodic training and batch training. Trained on similar tasks across multiple basic expression datasets, CDNet acquires a learn-to-decompose ability that can easily be adapted to recognize unseen compound expressions. Extensive experiments on both in-the-lab and in-the-wild compound expression datasets demonstrate the superiority of our proposed CDNet over several state-of-the-art FSL methods. Code is available at: https://github.com/zouxinyi0625/cdnet.
Deep supervision, also known as "intermediate supervision" or "auxiliary supervision", adds supervision at hidden layers of a neural network. Recently, this technique has been increasingly applied in deep neural network learning systems for various computer vision applications. There is a consensus that deep supervision helps improve neural network performance by alleviating the vanishing gradient problem, which is one of its many advantages. Moreover, deep supervision can be applied in different ways for different computer vision applications, and how to make the most of it to improve network performance across these applications remains an open question. In this paper, we provide a comprehensive, in-depth review of deep supervision in both theory and applications. We propose a new taxonomy of deep supervision networks and discuss the advantages and limitations of current deeply supervised networks in computer vision applications.
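A minimal sketch of the general idea, assuming a toy PyTorch network with a single auxiliary classifier on an intermediate stage and an arbitrary weighting of the auxiliary loss:

```python
# Minimal deep-supervision sketch: an auxiliary classifier on an intermediate
# layer contributes an extra loss term. Architecture and weights are arbitrary.
import torch
import torch.nn as nn

class DeeplySupervisedNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.aux_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, num_classes))   # intermediate supervision
        self.main_head = nn.Linear(64, num_classes)

    def forward(self, x):
        h1 = self.stage1(x)
        return self.main_head(self.stage2(h1)), self.aux_head(h1)

model = DeeplySupervisedNet()
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
main_logits, aux_logits = model(x)
criterion = nn.CrossEntropyLoss()
# The auxiliary loss injects gradient signal directly into stage1, which is
# what helps counter vanishing gradients in deeper networks.
loss = criterion(main_logits, y) + 0.3 * criterion(aux_logits, y)
loss.backward()
```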
Shadow removal, which aims to restore the original intensity of shadow regions in an image and make them compatible with the remaining non-shadow regions without leaving a trace, is a very challenging problem that benefits many downstream image and video related tasks. Recently, transformers have shown their strong capability in various applications by capturing global pixel interactions, and this capability is highly desirable for shadow removal. However, applying transformers to shadow removal is non-trivial for two reasons: 1) the patchify operation is not suitable for shadow removal due to the irregular shapes of shadows; 2) shadow removal only needs one-way interaction from the non-shadow region to the shadow region, rather than the common two-way interaction among all pixels in the image. In this paper, we propose a novel cross-region transformer, namely CRFormer, for shadow removal, which differs from existing transformers by only considering pixel interactions from the non-shadow region to the shadow region without splitting the image into patches. This is achieved by a carefully designed region-aware cross-attention operation that aggregates the recovered shadow-region features conditioned on the non-shadow-region features. Extensive experiments on the ISTD, AISTD, SRD, and video shadow removal datasets demonstrate the superiority of our method over other state-of-the-art methods.
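A simplified sketch of one-way, region-conditioned cross-attention (an assumption for illustration, not the released CRFormer code): queries come from shadow pixels only and keys/values from non-shadow pixels only, so information flows in a single direction from the non-shadow region to the shadow region:

```python
# Illustrative one-way cross-attention between image regions.
import torch
import torch.nn.functional as F

def region_cross_attention(features, shadow_mask, dim=64):
    """features: (N, C) per-pixel features; shadow_mask: (N,) bool, True = shadow."""
    q_proj = torch.nn.Linear(features.shape[1], dim)
    k_proj = torch.nn.Linear(features.shape[1], dim)
    v_proj = torch.nn.Linear(features.shape[1], dim)

    q = q_proj(features[shadow_mask])        # queries: shadow-region pixels only
    k = k_proj(features[~shadow_mask])       # keys: non-shadow pixels only
    v = v_proj(features[~shadow_mask])       # values: non-shadow pixels only

    attn = F.softmax(q @ k.T / dim ** 0.5, dim=-1)
    return attn @ v                           # shadow features recomposed from
                                              # non-shadow context (one-way interaction)

feats = torch.randn(100, 32)                  # 100 pixels with 32-dim features
mask = torch.zeros(100, dtype=torch.bool); mask[:30] = True
print(region_cross_attention(feats, mask).shape)   # torch.Size([30, 64])
```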