Modern deep neural networks have achieved superhuman performance in tasks ranging from image classification to game play. Surprisingly, these complex systems with massive numbers of parameters exhibit the same remarkable structural properties in their last-layer features and classifiers across canonical datasets. This phenomenon, known as "Neural Collapse," was discovered empirically by Papyan et al. \cite{Papyan20}. Recent papers have theoretically shown that the global solutions to the network training problem under a simplified "unconstrained feature model" exhibit this phenomenon. We take a step further and prove that Neural Collapse occurs in deep linear networks for the popular mean squared error (MSE) and cross-entropy (CE) losses. Furthermore, we extend our study to imbalanced data under the MSE loss and present the first geometric analysis of Neural Collapse in this setting.
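To make the predicted geometry concrete, here is a minimal NumPy sketch (ours, not from the paper) that constructs the simplex equiangular tight frame (ETF) toward which Neural Collapse drives the last-layer class means, and checks that every pair of unit-normalized means meets at cosine $-1/(K-1)$:

```python
import numpy as np

K, d = 4, 16                       # number of classes, feature dimension
rng = np.random.default_rng(0)

# Random partial orthogonal matrix U in R^{d x K} (U^T U = I_K).
U, _ = np.linalg.qr(rng.standard_normal((d, K)))

# Simplex ETF: M = sqrt(K/(K-1)) * U (I_K - (1/K) 1 1^T); its columns are the
# collapsed class means predicted by Neural Collapse.
M = np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)

Mn = M / np.linalg.norm(M, axis=0)           # unit-normalize each class mean
print(np.round(Mn.T @ Mn, 3))                # off-diagonals all equal -1/(K-1)
```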
In the era of the Internet of Things (IoT), network-wide anomaly detection is a crucial part of monitoring IoT networks due to the inherent security vulnerabilities of most IoT devices. Principal Component Analysis (PCA) has been proposed to separate network traffic into two disjoint subspaces corresponding to normal and malicious behaviors for anomaly detection. However, privacy concerns and the limited computing resources of IoT devices compromise the practical effectiveness of PCA. We propose a federated PCA-based Grassmannian optimization framework that coordinates IoT devices to aggregate a joint profile of normal network behaviors for anomaly detection. First, we introduce a privacy-preserving federated PCA framework to simultaneously capture the traffic profiles of various IoT devices. Then, we investigate alternating-direction-method-of-multipliers gradient-based learning on the Grassmann manifold to guarantee fast training and the absence of detection latency using limited computational resources. Empirical results on the NSL-KDD dataset demonstrate that our method outperforms baseline approaches. Finally, we show that the Grassmann manifold algorithm is highly suited to IoT anomaly detection, permitting a drastic reduction in the analysis time of the system. To the best of our knowledge, this is the first federated PCA algorithm for anomaly detection that meets the requirements of IoT networks.
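For context, a hedged sketch of the classical PCA residual test that subspace-based detectors build on (the federated Grassmannian training itself is not reproduced here; the data and names are illustrative): project traffic onto the learned "normal" subspace and flag samples whose residual energy exceeds a threshold calibrated on benign traffic.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "benign traffic" living near a 5-dimensional subspace of R^20.
benign = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 20)) \
         + 0.05 * rng.standard_normal((500, 20))

def fit_normal_subspace(X, k):
    """Top-k principal directions of the centered training traffic."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T                      # d x k basis of the normal subspace

def residual_score(x, mu, P):
    """Squared prediction error: energy outside the normal subspace."""
    r = (x - mu) - P @ (P.T @ (x - mu))
    return float(r @ r)

mu, P = fit_normal_subspace(benign, k=5)
scores = [residual_score(x, mu, P) for x in benign]
threshold = np.quantile(scores, 0.99)        # calibrate on benign scores only
attack = benign[0] + 5.0 * rng.standard_normal(20)
print(residual_score(attack, mu, P) > threshold)   # True: flagged as anomalous
```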
Current work in named entity recognition (NER) uses either cross entropy (CE) or conditional random fields (CRF) as the objective/loss function to optimize the underlying NER model. Both of these traditional objective functions generally produce adequate performance when the data distribution is balanced and there are sufficient annotated training examples. But since NER is inherently an imbalanced tagging problem, model performance under low-resource settings can suffer with these standard objective functions. Based on recent advances in area under the ROC curve (AUC) maximization, we propose to optimize the NER model by maximizing the AUC score. We give evidence that simply combining two binary classifiers that maximize the AUC score achieves significant performance improvement over traditional loss functions under low-resource NER settings. We also conduct extensive experiments to demonstrate the advantages of our method under low-resource and highly imbalanced data distribution settings. To the best of our knowledge, this is the first work that brings AUC maximization to the NER setting. Furthermore, we show that our method is agnostic to different types of NER embeddings, models, and domains. The code to replicate this work will be provided upon request.
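As a hedged illustration of the underlying idea (not the authors' implementation): cast tagging as entity-vs-non-entity binary problems and minimize a pairwise surrogate of $1-\mathrm{AUC}$, which compares scores of positive and negative tokens rather than per-token likelihoods.

```python
import torch

def pairwise_auc_loss(scores, labels, margin=1.0):
    """Squared-hinge surrogate of 1 - AUC over all positive/negative pairs."""
    pos = scores[labels == 1]                 # entity-token scores
    neg = scores[labels == 0]                 # non-entity-token scores
    diff = pos[:, None] - neg[None, :]        # score gap for every pos/neg pair
    return torch.clamp(margin - diff, min=0).pow(2).mean()

scores = torch.randn(32, requires_grad=True)  # token scores from an NER model
labels = (torch.arange(32) % 4 == 0).long()   # sparse positives: imbalanced tags
loss = pairwise_auc_loss(scores, labels)
loss.backward()
print(loss.item())
```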
The introduction of high-quality image generation models, particularly the StyleGAN family, provides a powerful tool to synthesize and manipulate images. However, existing models are built upon high-quality (HQ) data as desired outputs, making them unfit for in-the-wild low-quality (LQ) images, which are common inputs for manipulation. In this work, we bridge this gap by proposing a novel GAN structure that allows for generating images with controllable quality. The network can synthesize various image degradations and restore the sharp image via a quality control code. Our proposed QC-StyleGAN can directly edit LQ images without altering their quality by applying GAN inversion and manipulation techniques. As a free by-product, it also provides an image restoration solution that can handle various degradations, including noise, blur, compression artifacts, and their mixtures. Finally, we demonstrate numerous other applications such as image degradation synthesis, transfer, and interpolation.
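A toy sketch of the quality-control interface (hypothetical architecture, not the actual QC-StyleGAN): the generator consumes a content latent z and a quality code q, so re-synthesizing with a reserved "sharp" code restores the image while keeping the content.

```python
import torch
import torch.nn as nn

class QualityControlledGenerator(nn.Module):
    """Minimal stand-in: a generator conditioned on (latent, quality code)."""
    def __init__(self, z_dim=64, q_dim=8, out_dim=3 * 32 * 32):
        super().__init__()
        self.synthesis = nn.Sequential(nn.Linear(z_dim + q_dim, 256),
                                       nn.ReLU(), nn.Linear(256, out_dim))

    def forward(self, z, q):
        return self.synthesis(torch.cat([z, q], dim=-1))

G = QualityControlledGenerator()
z = torch.randn(1, 64)
q_degraded = torch.randn(1, 8)        # encodes e.g. blur/noise strength
q_sharp = torch.zeros(1, 8)           # reserved code for clean output
lq_image = G(z, q_degraded)           # synthesized low-quality image
restored = G(z, q_sharp)              # same content, clean quality
```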
Graph Neural Networks (GNNs) have been shown to be inherently susceptible to the problems of over-smoothing and over-squashing. These issues limit the ability of GNNs to model complex graph interactions by restricting their effectiveness at incorporating information from distant nodes. Our study reveals the key connection between local graph geometry and the occurrence of both of these issues, thereby providing a unified framework for studying them at a local scale using Ollivier's Ricci curvature. Based on our theory, we propose a number of principled methods to alleviate the over-smoothing and over-squashing issues.
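A hedged sketch of one common variant of Ollivier's Ricci curvature on an edge $(x,y)$: $\kappa(x,y) = 1 - W_1(\mu_x, \mu_y)/d(x,y)$, where $\mu_v$ spreads mass uniformly over the neighbors of $v$ and $W_1$ uses shortest-path ground distance (the paper's exact variant and the proposed remedies are not reproduced here). It uses the POT library for exact 1-Wasserstein.

```python
import networkx as nx
import numpy as np
import ot  # POT: Python Optimal Transport

def ollivier_ricci(G, x, y):
    """kappa(x, y) = 1 - W1(mu_x, mu_y) / d(x, y), uniform-neighbor variant."""
    nbr_x, nbr_y = list(G.neighbors(x)), list(G.neighbors(y))
    mu_x = np.full(len(nbr_x), 1.0 / len(nbr_x))
    mu_y = np.full(len(nbr_y), 1.0 / len(nbr_y))
    # Ground cost: shortest-path distances between the two neighborhoods.
    M = np.array([[nx.shortest_path_length(G, u, v) for v in nbr_y]
                  for u in nbr_x], dtype=float)
    w1 = ot.emd2(mu_x, mu_y, M)       # exact 1-Wasserstein cost
    return 1.0 - w1 / nx.shortest_path_length(G, x, y)

G = nx.karate_club_graph()
print(ollivier_ricci(G, 0, 1))        # very negative values flag bottleneck edges
```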
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
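The released checkpoints are hosted on the Hugging Face Hub under the bigscience organization. As a sketch, assuming the standard transformers API, the smaller bloom-560m variant is convenient for a quick demo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small BLOOM checkpoint from the Hub (the full 176B model follows
# the same API but requires substantial hardware).
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a 176B-parameter open-access", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```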
Generating consistent and high-quality images from given texts is essential for visual-language understanding. Although impressive results have been achieved in generating high-quality images, text-image consistency is still a major concern in existing GAN-based methods. In particular, the most popular metric, $R$-precision, may not accurately reflect text-image consistency, often resulting in very misleading semantics in the generated images. Despite its significance, the design of a better text-image consistency metric surprisingly remains under-explored in the community. In this paper, we take a further step forward and develop a novel CLIP-based metric termed Semantic Similarity Distance ($SSD$), which is both theoretically grounded from a distributional viewpoint and empirically verified on benchmark datasets. Benefiting from the proposed metric, we further design the Parallel Deep Fusion Generative Adversarial Networks (PDF-GAN), which aim to improve text-image consistency by fusing semantic information at different granularities and capturing accurate semantics. Equipped with two novel plug-and-play components, the Hard-Negative Sentence Constructor and Semantic Projection, the proposed PDF-GAN can mitigate inconsistent semantics and bridge the text-image semantic gap. A series of experiments show that, compared with current state-of-the-art methods, our PDF-GAN achieves significantly better text-image consistency while maintaining decent image quality on the CUB and COCO datasets.
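The abstract does not spell out the exact $SSD$ formula, so the sketch below only shows the CLIP-embedding machinery a distance like $SSD$ is built on: embed the caption and the generated image, then compare them in CLIP space.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224))          # stand-in for a generated image
inputs = processor(text=["a small yellow bird"], images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)
t = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
v = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
distance = 1.0 - (t * v).sum(-1)              # a cosine-style dissimilarity
print(distance.item())
```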
The sliced Wasserstein (SW) distance has been widely used in different application scenarios since it can scale to a large number of supports without suffering from the curse of dimensionality. The value of the sliced Wasserstein distance is the average of the transportation costs between one-dimensional representations (projections) of the original measures, which are obtained by the Radon Transform (RT). Despite its efficiency in the number of supports, estimating the sliced Wasserstein distance still requires a relatively large number of projections in high-dimensional settings. Therefore, for applications where the number of supports is relatively small compared with the dimension, e.g., several deep learning applications where mini-batch approaches are utilized, the complexity of the matrix multiplication in the Radon Transform becomes the main computational bottleneck. To address this issue, we propose to derive projections by linearly and randomly combining a smaller number of projections, called bottleneck projections. We explain the usage of these projections by introducing the Hierarchical Radon Transform (HRT), which is constructed by recursively applying Radon Transform variants. We then formulate the approach as a new metric between measures, named the Hierarchical Sliced Wasserstein (HSW) distance. By proving the injectivity of the HRT, we derive the metricity of HSW. Moreover, we investigate the theoretical properties of HSW, including its connection to SW variants and its computational and sample complexities. Finally, we compare the computational cost and generative quality of HSW with the conventional SW on the task of deep generative modeling, using various benchmark datasets including CIFAR10, CelebA, and Tiny ImageNet.
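A minimal NumPy sketch of the plain sliced Wasserstein distance the paper improves on, plus an illustrative (not the paper's exact) version of the bottleneck idea: build many directions as random linear mixes of a few base projections.

```python
import numpy as np

def sliced_wasserstein(X, Y, theta, p=2):
    """Monte Carlo SW_p between equal-size empirical measures X and Y."""
    px = np.sort(X @ theta.T, axis=0)        # 1D projections, sorted:
    py = np.sort(Y @ theta.T, axis=0)        # sorting solves 1D OT exactly
    return (np.abs(px - py) ** p).mean() ** (1 / p)

def bottleneck_directions(n_proj, d, k=8, rng=None):
    """Sketch of the bottleneck idea: mix k base projections into n_proj."""
    rng = rng or np.random.default_rng(0)
    base = rng.standard_normal((k, d))       # k << n_proj bottleneck projections
    mix = rng.standard_normal((n_proj, k))   # random linear combinations
    theta = mix @ base
    return theta / np.linalg.norm(theta, axis=1, keepdims=True)

rng = np.random.default_rng(1)
X = rng.standard_normal((256, 32))
Y = rng.standard_normal((256, 32)) + 0.5
theta = bottleneck_directions(n_proj=128, d=32)
print(sliced_wasserstein(X, Y, theta))
```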
Real-time accurate solutions of large-scale complex dynamical systems are in great demand for control, optimization, uncertainty quantification, and decision-making in practical engineering and science applications. This paper contributes in this direction a model-constrained tangent manifold learning (MCTANGENT) approach. At the heart of MCTANGENT is the synergy of several desirable strategies: i) a tangent learning scheme that exploits neural network speed together with the accuracy of the method of lines; ii) a model-constrained approach that encodes the neural network tangent with the underlying governing equations; iii) a sequential learning strategy that promotes long-time stability and accuracy; and iv) a data randomization approach that implicitly enforces the smoothness of the neural network tangent and its closeness to the true tangent, further improving the stability and accuracy of MCTANGENT solutions. Both semi-heuristic and rigorous arguments are provided to analyze and justify the proposed approach. Several numerical results for the transport equation, the viscous Burgers equation, and the Navier-Stokes equations are presented to study and demonstrate the capability of the proposed MCTANGENT learning approach.
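A hedged PyTorch sketch of the tangent-learning idea (not the paper's exact loss): learn $f_\theta \approx du/dt$ from snapshots, with a model-constrained term that penalizes mismatch after a forward-Euler step with the learned tangent.

```python
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

dt = 0.01
t = torch.linspace(0, 1, 101)[:, None]
u = torch.exp(-t)                           # snapshots of du/dt = -u
du_dt = (u[1:] - u[:-1]) / dt               # finite-difference "truth" tangent

for step in range(200):
    opt.zero_grad()
    tangent_loss = ((f(u[:-1]) - du_dt) ** 2).mean()
    u_next = u[:-1] + dt * f(u[:-1])        # one step of the method of lines
    model_loss = ((u_next - u[1:]) ** 2).mean()   # model-constrained penalty
    loss = tangent_loss + 10.0 * model_loss
    loss.backward()
    opt.step()
```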
Transformers have achieved remarkable success in sequence modeling and beyond, but suffer from quadratic computational and memory complexities with respect to the length of the input sequence. Leveraging techniques including sparse and linear attention and hashing tricks, efficient transformers have been proposed to reduce the quadratic complexity of transformers, but significantly degrade accuracy. In response, we first interpret the linear attention and residual connections in computing the attention map as gradient descent steps. We then introduce momentum into these components and propose the \emph{momentum transformer}, which leverages momentum to improve the accuracy of linear transformers while maintaining linear memory and computational complexities. Furthermore, we develop an adaptive strategy to compute the momentum value for the model based on the optimal momentum for quadratic optimization. This adaptive momentum eliminates the need to search for the optimal momentum value and further enhances the performance of the momentum transformer. A range of experiments on both autoregressive and non-autoregressive tasks, including image generation and machine translation, demonstrate that the momentum transformer outperforms popular linear transformers in training efficiency and accuracy.
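A hedged sketch of causal linear attention with a heavy-ball (momentum) state update, in the spirit of the abstract; beta and the exact recursion are illustrative, not the paper's precise formulation.

```python
import torch

def momentum_linear_attention(q, k, v, beta=0.6, eps=1e-6):
    """Causal linear attention whose running state is updated with momentum."""
    phi = lambda x: torch.nn.functional.elu(x) + 1       # positive feature map
    q, k = phi(q), phi(k)
    T, d = q.shape
    S = torch.zeros(d, v.shape[1])                        # running k^T v state
    z = torch.zeros(d)                                    # running normalizer
    m_S, m_z = torch.zeros_like(S), torch.zeros_like(z)
    out = []
    for t in range(T):
        m_S = beta * m_S + torch.outer(k[t], v[t])        # momentum on updates
        m_z = beta * m_z + k[t]
        S, z = S + m_S, z + m_z
        out.append((q[t] @ S) / (q[t] @ z + eps))
    return torch.stack(out)

y = momentum_linear_attention(torch.randn(16, 8), torch.randn(16, 8),
                              torch.randn(16, 8))
print(y.shape)                                            # torch.Size([16, 8])
```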