Recent years have witnessed great success in handling graph-related tasks with Graph Neural Networks (GNNs). Despite this academic success, Multi-Layer Perceptrons (MLPs) remain the primary workhorse in practical industrial applications. One reason for this academic-industrial gap is the neighborhood-fetching latency incurred by data dependency in GNNs, which makes them hard to deploy in latency-sensitive applications that require fast inference. Conversely, without any feature aggregation, MLPs have no data dependency and infer much faster than GNNs, but their performance is less competitive. Motivated by these complementary strengths and weaknesses, we propose a Graph Self-Distillation on Neighborhood (GSDN) framework to reduce the gap between GNNs and MLPs. Specifically, the GSDN framework is based purely on MLPs, where structural information is only implicitly used as a prior to guide knowledge self-distillation between the neighborhood and the target, substituting for the explicit neighborhood information propagation of GNNs. As a result, GSDN enjoys the benefits of graph topology-awareness during training but has no data dependency at inference. Extensive experiments show that the performance of vanilla MLPs can be greatly improved with self-distillation: GSDN improves over stand-alone MLPs by 15.54\% on average and outperforms the state-of-the-art GNNs on six datasets. Regarding inference speed, GSDN infers 75X-89X faster than existing GNNs and 16X-25X faster than other inference acceleration methods.
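The core idea of replacing explicit message passing with a training-time-only distillation loss can be sketched as follows. This is an illustrative reading rather than the authors' implementation: the KL direction, the neighbor-averaged target, and all names are assumptions.

```python
import math

def softmax(z):
    # numerically stable softmax over a list of logits
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kl(p, q):
    # KL(p || q); softmax outputs are strictly positive, so q > 0
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def neighborhood_distillation_loss(logits, adj):
    """Self-distillation on the neighborhood (illustrative): each node's
    MLP prediction is pulled toward the average prediction of its graph
    neighbors. The graph adjacency is used only to form this training
    loss; inference uses the MLP alone, with no neighbor fetching."""
    probs = [softmax(l) for l in logits]
    loss = 0.0
    for v, nbrs in adj.items():
        if not nbrs:
            continue
        k = len(probs[v])
        target = [sum(probs[u][c] for u in nbrs) / len(nbrs) for c in range(k)]
        loss += kl(target, probs[v])
    return loss / len(adj)
```

At inference time only the per-node MLP forward pass remains, which is where the latency advantage over message-passing GNNs comes from.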
Designing private voting rules is an important issue for trustworthy democracy. In this paper, under the framework of differential privacy, we propose three classes of randomized voting rules based on the well-known Condorcet method: the Laplacian Condorcet method ($CM^{LAP}_\lambda$), the exponential Condorcet method ($CM^{EXP}_\lambda$), and the randomized response Condorcet method ($CM^{RR}_\lambda$), where $\lambda$ represents the noise level. By accurately estimating the errors introduced by the randomness, we show that $CM^{EXP}_\lambda$ is the most accurate mechanism in most cases. We prove that all of our rules satisfy absolute monotonicity, lexi-participation, probabilistic Pareto efficiency, the approximate probabilistic Condorcet criterion, and approximate SD-strategyproofness. In addition, $CM^{RR}_\lambda$ satisfies the (non-approximate) probabilistic Condorcet criterion, while $CM^{LAP}_\lambda$ and $CM^{EXP}_\lambda$ satisfy strong lexi-participation. Finally, we regard differential privacy as a voting axiom and discuss its relations to other axioms.
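The Laplacian variant can be sketched as below, under the plausible reading that Laplace noise of scale $1/\lambda$ is added to each pairwise margin before the Condorcet winner is determined; this is an assumption about the mechanism's details, not the authors' exact definition.

```python
import math, random

def laplace_noise(scale, rng):
    # inverse-CDF sample from Laplace(0, scale)
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplacian_condorcet(profile, m, lam, rng):
    """Sketch of a Laplace-noised Condorcet rule: compute pairwise margins
    from the rankings, add Laplace(1/lam) noise to each pair (larger lam
    means less noise and weaker privacy), and return a candidate that
    beats all others in the noisy margins, or None if no such candidate
    exists. `profile` is a list of rankings, best candidate first."""
    margin = [[0.0] * m for _ in range(m)]
    for ranking in profile:
        pos = {c: i for i, c in enumerate(ranking)}
        for a in range(m):
            for b in range(a + 1, m):
                s = 1.0 if pos[a] < pos[b] else -1.0
                margin[a][b] += s
                margin[b][a] -= s
    for a in range(m):
        for b in range(a + 1, m):
            noise = laplace_noise(1.0 / lam, rng)
            margin[a][b] += noise
            margin[b][a] -= noise
    for a in range(m):
        if all(margin[a][b] > 0 for b in range(m) if b != a):
            return a
    return None
```

With a very large $\lambda$ the noise is negligible and a clear Condorcet winner is returned almost surely, matching the intuition that accuracy improves as privacy weakens.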
For centuries, it has been widely believed that the influence of a small coalition of voters is negligible in a large election. Consequently, there is a large body of literature on characterizing the asymptotic likelihood for an election to be influenced, especially by the manipulation of a single voter, establishing an $O(\frac{1}{\sqrt n})$ upper bound and an $\Omega(\frac{1}{n^{67}})$ lower bound for many commonly studied voting rules under the i.i.d.~uniform distribution, known as Impartial Culture (IC) in social choice, where $n$ is the number of voters. In this paper, we extend previous studies in three aspects: (1) we consider a more general and realistic semi-random model, where a distribution adversary chooses a worst-case distribution and then a data adversary modifies up to a $\psi$ portion of the data; (2) we consider many coalitional influence problems, including coalitional manipulation, margin of victory, and various vote controls and bribery; and (3) we consider arbitrary and variable coalition sizes $B$. Our main theorem provides asymptotically tight bounds on the semi-random likelihood of the existence of a size-$B$ coalition that can successfully influence the election under a wide range of voting rules. Applications of the main theorem and its proof techniques resolve long-standing open questions about the likelihood of coalitional manipulability under IC, by showing that the likelihood is $\Theta\left(\min\left\{\frac{B}{\sqrt n}, 1\right\}\right)$ for many commonly studied voting rules. The main technical contribution is a characterization of the semi-random likelihood for a Poisson multinomial variable (PMV) to be unstable, which we believe to be a general and useful technique of independent interest.
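For intuition about the $\Theta(\min\{B/\sqrt n, 1\})$ rate, a tiny Monte Carlo for the simplest case (two candidates, majority rule, Impartial Culture) can be sketched; this is an illustrative simulation, not the paper's semi-random analysis. For small coalitions, $B$ voters can change a two-candidate majority outcome roughly iff the winning margin is at most $2B$.

```python
import random

def manip_probability(n, B, trials, seed):
    """Monte Carlo estimate (illustrative) of the chance that a coalition
    of B voters can flip a two-candidate majority election under
    Impartial Culture: each of n voters flips a fair coin, and the
    coalition succeeds iff the winning margin is at most 2*B."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        margin = abs(sum(1 if rng.random() < 0.5 else -1 for _ in range(n)))
        if margin <= 2 * B:
            hits += 1
    return hits / trials
```

Since the margin of $n$ fair coin flips concentrates at scale $\sqrt n$, the success probability grows roughly linearly in $B/\sqrt n$ until it saturates at 1, which matches the theorem's rate in this special case.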
Contrastive Learning (CL) has emerged as a dominant technique for unsupervised representation learning, which embeds augmented versions of an anchor close to each other (positive samples) and pushes the embeddings of other samples (negatives) apart. As revealed by recent studies, CL can benefit from hard negatives (negatives most similar to the anchor). However, we observe limited benefits when we adopt existing hard negative mining techniques from other domains in Graph Contrastive Learning (GCL). We perform both experimental and theoretical analysis of this phenomenon and find that it can be attributed to the message passing of Graph Neural Networks (GNNs). Unlike CL in other domains, most hard negatives are potentially false negatives (negatives that share the same class as the anchor) if they are selected merely according to their similarity to the anchor, which will undesirably push away samples of the same class. To remedy this deficiency, we propose an effective method dubbed \textbf{ProGCL} to estimate the probability of a negative being a true one, which, together with similarity, constitutes a more suitable measure of negatives' hardness. Additionally, we devise two schemes (i.e., \textbf{ProGCL-weight} and \textbf{ProGCL-mix}) to boost the performance of GCL. Extensive experiments demonstrate that ProGCL brings notable and consistent improvements over base GCL methods and yields multiple state-of-the-art results on several unsupervised benchmarks, even exceeding the performance of supervised ones. Moreover, ProGCL is readily pluggable into various negatives-based GCL methods for performance improvement. We release the code at \textcolor{magenta}{\url{https://github.com/junxia97/ProGCL}}.
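The reweighting idea can be sketched as follows. Here `p_true` (the estimated probability that each negative is a true negative, which ProGCL obtains from a mixture model fit to the similarity distribution) is taken as given, and the multiplicative combination with similarity is an assumption about the exact form.

```python
def progcl_weight(sims, p_true):
    """ProGCL-weight-style reweighting (illustrative): a negative's weight
    combines its similarity to the anchor (hardness) with the estimated
    probability that it is a true negative, so that similar-but-likely-
    false negatives are down-weighted instead of being pushed away."""
    raw = [s * p for s, p in zip(sims, p_true)]
    z = sum(raw)
    return [r / z for r in raw]
```

A negative that is both similar to the anchor and likely from a different class receives the largest weight, which is the corrected notion of hardness the abstract describes.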
Truth discovery is a general name for a broad range of statistical methods that aim to extract the correct answers to questions from multiple answers coming from noisy sources, such as workers on a crowdsourcing platform. In this paper, we consider an extremely simple heuristic for estimating workers' competence using their average proximity to other workers. We prove that this heuristic estimates the actual competence level well and enables separating high- and low-quality workers across a wide spectrum of domains and statistical models. Under Gaussian noise, this simple estimate is the unique solution to the MLE with a constant regularization factor. Finally, weighting workers according to their average proximity in a crowdsourcing setting results in substantial improvements over unweighted aggregation and other truth discovery algorithms in practice.
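For binary questions, the average-proximity heuristic and the proximity-weighted aggregation it motivates can be sketched directly; the agreement-rate proximity measure and the weighting scheme below are illustrative choices, not the paper's exact estimator.

```python
def average_proximity(answers):
    """Estimate each worker's competence as their average agreement rate
    (proximity) with every other worker. `answers` is a list of equal-
    length 0/1 answer vectors, one per worker."""
    n = len(answers)
    prox = []
    for i in range(n):
        agree = [sum(a == b for a, b in zip(answers[i], answers[j])) / len(answers[i])
                 for j in range(n) if j != i]
        prox.append(sum(agree) / len(agree))
    return prox

def weighted_majority(answers, weights):
    # aggregate each question by a weighted vote over workers
    m = len(answers[0])
    out = []
    for q in range(m):
        score = sum(w if ans[q] == 1 else -w for ans, w in zip(answers, weights))
        out.append(1 if score > 0 else 0)
    return out
```

Competent workers tend to agree with each other (they all track the truth), while a random or adversarial worker agrees with no one, so average proximity separates the two groups.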
In this work, we focus on instance-level open vocabulary segmentation, intending to expand a segmenter for instance-wise novel categories without mask annotations. We investigate a simple yet effective framework with the help of image captions, focusing on exploiting thousands of object nouns in captions to discover instances of novel classes. Rather than adopting pretrained caption models or using massive caption datasets with complex pipelines, we propose an end-to-end solution from two aspects: caption grounding and caption generation. In particular, we devise a joint Caption Grounding and Generation (CGG) framework based on a Mask Transformer baseline. The framework has a novel grounding loss that performs explicit and implicit multi-modal feature alignments. We further design a lightweight caption generation head to allow for additional caption supervision. We find that grounding and generation complement each other, significantly enhancing the segmentation performance for novel categories. We conduct extensive experiments on the COCO dataset with two settings: Open Vocabulary Instance Segmentation (OVIS) and Open Set Panoptic Segmentation (OSPS). The results demonstrate the superiority of our CGG framework over previous OVIS methods, achieving a large improvement of 6.8% mAP on novel classes without extra caption data. Our method also achieves over 15% PQ improvements for novel classes on the OSPS benchmark under various settings.
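One common form of multi-modal alignment for caption grounding is a contrastive loss between noun embeddings and mask features; the sketch below is an assumption about the shape of such a loss, not the CGG paper's exact grounding loss.

```python
import math

def grounding_loss(mask_feats, noun_feats):
    """Illustrative contrastive grounding loss: noun i should be more
    similar (by dot product) to its matched mask feature i than to any
    other mask; the loss is the mean cross-entropy of a softmax over
    noun-to-mask similarities with the matched mask as the target."""
    n = len(noun_feats)
    total = 0.0
    for i in range(n):
        sims = [sum(a * b for a, b in zip(noun_feats[i], mf)) for mf in mask_feats]
        m = max(sims)
        log_z = m + math.log(sum(math.exp(s - m) for s in sims))
        total += log_z - sims[i]
    return total / n
```

Aligned noun/mask pairs drive the loss toward zero, while misaligned pairs are penalized, pushing caption nouns to ground onto the correct instance masks.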
Proteins are fundamental biological entities that play a key role in life activities. The amino acid sequences of proteins can be folded into stable 3D structures in the real physicochemical world, forming a special kind of sequence-structure data. With the development of Artificial Intelligence (AI) techniques, Protein Representation Learning (PRL) has recently emerged as a promising research topic for extracting informative knowledge from massive protein sequences or structures. To pave the way for AI researchers with little bioinformatics background, we present a timely and comprehensive review of PRL formulations and existing PRL methods from the perspective of model architectures, pretext tasks, and downstream applications. We first briefly introduce the motivations for protein representation learning and formulate it in a general and unified framework. Next, we divide existing PRL methods into three main categories: sequence-based, structure-based, and sequence-structure co-modeling. Finally, we discuss some technical challenges and potential directions for improving protein representation learning. The latest advances in PRL methods are summarized in a GitHub repository https://github.com/LirongWu/awesome-protein-representation-learning.
Recent studies have shown that using an external Language Model (LM) benefits end-to-end Automatic Speech Recognition (ASR). However, predicting tokens that appear infrequently in the training set remains quite challenging. Long-tail prediction problems have been widely studied in many applications, but have only been addressed by a few studies for ASR and LMs. In this paper, we propose a new memory-augmented, lookup-dictionary-based Transformer architecture for the LM. The newly introduced lookup dictionary incorporates rich contextual information from the training set, which is vital for correctly predicting long-tail tokens. In extensive experiments on Chinese and English data sets, our proposed method outperforms the baseline Transformer LM by a large margin on both word/character error rate and tail-token error rate, without impacting decoding efficiency. Overall, we demonstrate the effectiveness of our proposed method in boosting ASR decoding performance, especially for long-tail tokens.
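A lookup-dictionary prediction step can be sketched in the spirit of kNN-LM-style retrieval; this is an assumed simplification of the paper's architecture (the retrieval keys, distance, and interpolation are all illustrative choices).

```python
import math

def memory_augmented_probs(lm_probs, memory, query, k, lam):
    """Sketch of memory-augmented next-token prediction: retrieve the k
    stored contexts nearest to the query vector, turn their next tokens
    into a retrieval distribution via a softmax over negative distances,
    and interpolate it with the base LM distribution with weight lam.
    `memory` is a list of (key_vector, next_token_id) pairs."""
    dists = [(sum((q - kv) ** 2 for q, kv in zip(query, key)), tok)
             for key, tok in memory]
    nearest = sorted(dists)[:k]
    weights = [math.exp(-d) for d, _ in nearest]
    z = sum(weights)
    retrieved = [0.0] * len(lm_probs)
    for (d, tok), w in zip(nearest, weights):
        retrieved[tok] += w / z
    return [(1 - lam) * p + lam * r for p, r in zip(lm_probs, retrieved)]
```

A long-tail token stored in memory near the current context can dominate the final distribution even when the base LM assigns it low probability, which is the intended benefit for rare tokens.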
It is crucial in cluster analysis to evaluate the quality of clustering results and determine the optimal number of clusters. In this paper, a multi-granularity characterization of the data set is carried out to obtain hyper-balls, and a cluster internal evaluation index based on hyper-balls (HCVI) is defined. Moreover, a general method for determining the optimal number of clusters based on HCVI is proposed. The proposed method can evaluate the clustering results produced by several classic methods and determine the optimal cluster number for data sets containing noise and clusters with arbitrary shapes. Experimental results on synthetic and real data sets indicate that the new index outperforms existing ones.
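The abstract does not spell out the hyper-ball construction, so as a generic illustration of the overall scheme, scoring candidate clusterings with an internal index and keeping the cluster number that maximizes it, here is a Calinski-Harabasz-flavored sketch; the index itself is an assumption, not HCVI.

```python
def dispersion_index(points, labels):
    """Between-cluster vs within-cluster dispersion ratio (Calinski-
    Harabasz flavor); higher values indicate better-separated clusters."""
    n, dim = len(points), len(points[0])
    clusters = sorted(set(labels))
    mean = [sum(p[d] for p in points) / n for d in range(dim)]
    bss = wss = 0.0
    for c in clusters:
        members = [p for p, l in zip(points, labels) if l == c]
        cen = [sum(p[d] for p in members) / len(members) for d in range(dim)]
        bss += len(members) * sum((cd - md) ** 2 for cd, md in zip(cen, mean))
        wss += sum(sum((pd - cd) ** 2 for pd, cd in zip(p, cen)) for p in members)
    k = len(clusters)
    if k <= 1 or wss == 0.0:
        return float("inf")
    return (bss / (k - 1)) / (wss / (n - k))

def best_k(points, clusterings):
    # clusterings: {k: label list}; keep the k with the highest index
    return max(clusterings, key=lambda k: dispersion_index(points, clusterings[k]))
```

In HCVI the per-granularity hyper-ball covering replaces this simple dispersion ratio, but the select-the-argmax-over-k outer loop is the same.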
Generalizability to unseen forgery types is crucial for face forgery detectors. Recent works have made significant progress on generalization through synthetic-forgery data augmentation. In this work, we explore another path to improving generalization. Our goal is to reduce the features that are easy to learn in the training phase, so as to lower the risk of overfitting to specific forgery types. Specifically, in our method, a teacher network takes face images as input and generates an attention map over the deep features via a diverse multihead-attention ViT. The attention map is used to guide a student network to focus on the low-attended features by suppressing the highly-attended deep features. A deep feature mixup strategy is also proposed to synthesize forgeries in the feature domain. Experiments demonstrate that, without data augmentation, our method achieves promising performance on unseen forgeries and highly compressed data.
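The attention-guided suppression step can be sketched as follows; treating the teacher's attention map as per-feature scores and zeroing the most-attended fraction is an illustrative simplification, with the keep ratio as an assumed hyperparameter.

```python
def suppress_high_attention(features, attention, keep_ratio):
    """Illustrative attention-guided feature suppression: keep only the
    `keep_ratio` fraction of features with the LOWEST teacher attention
    and zero out the rest, so the student is forced to learn from the
    low-attended (harder, less shortcut-prone) features."""
    n = len(features)
    k = max(1, int(n * keep_ratio))
    # indices of the k least-attended features
    order = sorted(range(n), key=lambda i: attention[i])
    keep = set(order[:k])
    return [f if i in keep else 0.0 for i, f in enumerate(features)]
```

By removing the features the teacher finds easiest, the student cannot rely on forgery-type-specific shortcuts, which is the mechanism the abstract credits for better generalization.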