The ultimate goal of artificial intelligence is to mimic the human brain in performing decision-making and control directly from high-dimensional sensory input. All-optical diffractive neural networks provide a promising solution for realizing artificial intelligence with high speed and low power consumption. To date, most reported diffractive neural networks focus on single or multiple tasks that do not involve interaction with the environment, such as object recognition and image classification, while networks that can perform decision-making and control have, to our knowledge, not yet been developed. Here, we propose using deep reinforcement learning to realize diffractive neural networks that imitate the human-level capability of decision-making and control. Such networks learn optimal control policies through interaction with the environment and can be readily realized with dielectric metasurfaces. The superior performance of these networks is verified by playing three classic games, Tic-Tac-Toe, Super Mario Bros., and Car Racing, where they achieve levels comparable to or even exceeding those of human players. Our work represents a solid advance in diffractive neural networks, promising a fundamental shift from the target-driven control of a pre-designed state for simple recognition or classification tasks to the high-level sensory capability of artificial intelligence. It may find exciting applications in autonomous driving, intelligent robots, and intelligent manufacturing.
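The abstract above pairs diffractive optics with deep reinforcement learning. As a purely illustrative sketch of the reinforcement learning principle involved (learning a control policy from interaction with an environment), not the optical implementation, here is a minimal tabular Q-learning update in plain Python; all names and hyperparameters are hypothetical:

```python
# Minimal tabular Q-learning update -- illustrates value-based RL
# (learning from interaction), not the diffractive optical hardware.
from collections import defaultdict

def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[(next_state, a)] for a in actions)
    td_target = reward + gamma * best_next
    q[(state, action)] += alpha * (td_target - q[(state, action)])
    return q

q = defaultdict(float)
actions = [0, 1]
# the agent takes action 1 in state "s0", receives reward 1.0, lands in "s1"
q_update(q, "s0", 1, 1.0, "s1", actions)
print(round(q[("s0", 1)], 3))  # the estimate moves toward the reward
```

In the paper's setting, the table lookup would be replaced by the diffractive network's optical forward pass, with the learned policy encoded in the metasurface parameters.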
Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in various fields, including science, engineering, finance, and everyday life. The development of artificial intelligence (AI) systems capable of solving math problems and proving theorems has garnered significant interest in the fields of machine learning and natural language processing. For example, mathematics serves as a testbed for aspects of reasoning that are challenging for powerful deep learning models, driving new algorithmic and modeling advances. On the other hand, recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning. In this survey paper, we review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade. We also evaluate existing benchmarks and methods, and discuss future research directions in this domain.
Humans are sophisticated at reading interlocutors' emotions from multimodal signals, such as speech content, voice tone, and facial expressions. However, machines may struggle to understand various emotions because of the difficulty of effectively decoding emotions from the complex interactions among multimodal signals. In this paper, we propose a multimodal emotion analysis framework, InterMulti, to capture complex multimodal interactions from different views and identify emotions from multimodal signals. Our proposed framework decomposes signals of different modalities into three kinds of multimodal interaction representations: a modality-full interaction representation, a modality-shared interaction representation, and three modality-specific interaction representations. Additionally, to balance the contributions of different modalities and learn a more informative latent interaction representation, we develop a novel Text-dominated Hierarchical High-order Fusion (THHF) module. The THHF module reasonably integrates the above three kinds of representations into a comprehensive multimodal interaction representation. Extensive experimental results on widely used datasets, i.e., MOSEI, MOSI, and IEMOCAP, demonstrate that our method outperforms the state of the art.
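The abstract does not give the THHF module's internals, but the core idea of text-dominated fusion can be conveyed with a toy sketch: weight each modality representation, with text given the largest weight, then concatenate into one interaction representation. The weights, vector sizes, and flat concatenation below are illustrative assumptions, not the paper's hierarchical high-order design:

```python
# Hypothetical sketch of text-dominated fusion: weight the text, audio,
# and video representations (text weighted highest), then concatenate
# into a single joint representation. The real THHF module is
# hierarchical and high-order; this only conveys the weighting idea.
def fuse(text, audio, video, w=(0.6, 0.2, 0.2)):
    weighted = [[w[i] * x for x in vec]
                for i, vec in enumerate((text, audio, video))]
    # concatenate the weighted modality vectors
    return [x for vec in weighted for x in vec]

rep = fuse([1.0, 2.0], [4.0, 0.0], [0.0, 2.0])
print([round(x, 6) for x in rep])  # text features dominate the result
```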
Humans are skilled at reading an interlocutor's emotion from multimodal signals, including spoken words, voice tones, and facial expressions. It remains a challenge, however, to effectively decode emotions from the complex interactions of multimodal signals. In this paper, we design three kinds of multimodal latent representations to refine the emotion analysis process and capture complex multimodal interactions from different views: an intact three-modal integrated representation, a modality-shared representation, and three modality-individual representations. Then, a modality-semantic hierarchical fusion is proposed to reasonably incorporate these representations into a comprehensive interaction representation. The experimental results demonstrate that our EffMulti outperforms state-of-the-art methods. Its compelling performance benefits from a well-designed framework with ease of implementation, lower computational complexity, and fewer trainable parameters.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
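BLOOM, like other decoder-only Transformers, is autoregressive: each position may attend only to itself and earlier positions. A minimal sketch of the causal attention mask that enforces this property (a standard construction, not BLOOM-specific code):

```python
# Causal attention mask for a decoder-only Transformer:
# mask[i][j] is True when position i is allowed to attend to position j,
# i.e. only positions j <= i are visible (no peeking at future tokens).
def causal_mask(n):
    return [[j <= i for j in range(n)] for i in range(n)]

for row in causal_mask(4):
    print(["x" if ok else "." for ok in row])  # lower-triangular pattern
```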
Because homophones introduce pervasive ambiguity in Chinese, Chinese Spell Checking (CSC) has widespread applications. Existing systems typically utilize BERT for text encoding. However, CSC requires the model to account for both phonetic and graphemic information. To adapt BERT to the CSC task, we propose a token-level self-distillation contrastive learning method. We employ BERT to encode both the corrupted sentence and its corresponding correct sentence. Then, we use a contrastive learning loss to regularize the corrupted tokens' hidden states to be closer to their counterparts in the correct sentence. On three CSC datasets, we confirm that our method provides a significant improvement over baselines.
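The token-level contrastive idea above can be illustrated with a generic InfoNCE-style loss: pull a corrupted token's hidden state toward its counterpart from the correct sentence (the positive) and away from other tokens (negatives). The plain-Python loss below is a common textbook formulation, assumed for illustration; the paper's exact loss over BERT hidden states may differ:

```python
# Generic InfoNCE-style contrastive loss over toy 2-D "hidden states".
# The anchor plays the role of a corrupted token's state, the positive
# its counterpart in the correct sentence, negatives other tokens.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.1):
    # -log( exp(sim(a,p)/tau) / (exp(sim(a,p)/tau) + sum_n exp(sim(a,n)/tau)) )
    pos = math.exp(cosine(anchor, positive) / tau)
    denom = pos + sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / denom)

# an anchor aligned with its positive yields a lower loss
low = info_nce([1.0, 0.0], [0.9, 0.1], [[-1.0, 0.2]])
high = info_nce([1.0, 0.0], [-0.9, 0.1], [[1.0, 0.2]])
print(low < high)  # True
```

Minimizing such a loss drives the corrupted token's state toward its correct counterpart, which is the regularization effect the abstract describes.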
The recently popular contrastive learning paradigm has propelled the development of unsupervised hashing. However, previous contrastive-learning-based works are hindered by (1) data similarity mining based on global image representations, and (2) the hash code semantic loss caused by data augmentation. In this paper, we propose a novel method, Weighted Contrastive Hashing (WCH), to take a step toward solving these two problems. We introduce a novel mutual attention module to alleviate the problem of information asymmetry in network features caused by missing image structure. Furthermore, we explore fine-grained semantic relations between images: we divide images into multiple patches and calculate similarities between patches. The aggregated weighted similarities, which reflect deep image relations, are distilled to facilitate hash code learning with a distillation loss, leading to better retrieval performance. Extensive experiments show that the proposed WCH significantly outperforms existing unsupervised hashing methods on three benchmark datasets.
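The patch-level similarity idea in WCH can be sketched in miniature: split two inputs into patches, compute pairwise patch cosine similarities, and aggregate them with weights so that more-similar pairs contribute more. The tiny 1-D "images", the patching, and the weighting scheme below are illustrative assumptions, not the paper's exact formulation:

```python
# Toy patch-level weighted similarity: divide two 1-D feature lists into
# patches, compute all pairwise patch cosine similarities, and aggregate
# with weights proportional to each pair's similarity magnitude.
def patch_similarity(img_a, img_b, patch=2):
    pa = [img_a[i:i + patch] for i in range(0, len(img_a), patch)]
    pb = [img_b[i:i + patch] for i in range(0, len(img_b), patch)]
    sims = []
    for a in pa:
        for b in pb:
            dot = sum(x * y for x, y in zip(a, b))
            na = sum(x * x for x in a) ** 0.5
            nb = sum(y * y for y in b) ** 0.5
            sims.append(dot / (na * nb))
    # weight each patch pair by its normalized similarity magnitude
    total = sum(abs(s) for s in sims) or 1.0
    return sum(s * (abs(s) / total) for s in sims)

s_same = patch_similarity([1.0, 0.0, 0.0, 1.0], [1.0, 0.0, 0.0, 1.0])
s_diff = patch_similarity([1.0, 0.0, 0.0, 1.0], [-1.0, 0.0, 0.0, -1.0])
print(s_same > s_diff)  # identical inputs score higher than opposed ones
```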
In recent years, there has been growing interest in 3D face modeling due to its wide applications in digital humans, character generation, and animation. Existing approaches overwhelmingly emphasize modeling the exterior shape, texture, and skin properties of faces, ignoring the inherent correlation between the inner skeletal structure and appearance. In this paper, we present SCULPTOR, 3D face creation with skeletal consistency using a learned parametric face generator, aiming to facilitate the creation of anatomically correct and visually convincing face models through a hybrid parametric-morphable representation. At the core of SCULPTOR is LUCY, the first large-scale shape-skeleton face dataset, created in collaboration with plastic surgeons. Named after the fossil of one of the oldest-known human ancestors, our LUCY dataset contains high-quality computed tomography (CT) scans of complete human heads before and after orthognathic surgery, which are critical for evaluating surgical results. LUCY consists of 144 scans of 72 subjects (31 male and 41 female), where each subject has two CT scans taken before and after orthognathic surgery. Based on our LUCY dataset, we learn SCULPTOR, a novel skeleton-consistent parametric face generator that can create unique and nuanced facial features that help define a character while remaining physiologically sound. SCULPTOR jointly models the skull, face geometry, and face appearance under a unified data-driven framework, by decomposing the depiction of a 3D face into shape blend shapes, pose blend shapes, and facial expression blend shapes. Compared with existing methods, SCULPTOR preserves both anatomical correctness and visual realism in face generation tasks. Finally, we demonstrate the robustness and effectiveness of SCULPTOR in various previously unseen applications.
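The blend-shape decomposition mentioned above follows the classic linear model: a mesh is a template plus a weighted sum of offset shapes. A minimal sketch of that idea, with toy scalar "vertices" standing in for the dense meshes (and skull coupling) of the real model:

```python
# Linear blend shapes: output = template + sum_i weight_i * offset_i.
# The "mesh" here is a toy list of 3 scalar coordinates.
def blend(template, blendshapes, weights):
    out = list(template)
    for w, shape in zip(weights, blendshapes):
        out = [v + w * d for v, d in zip(out, shape)]
    return out

template = [0.0, 0.0, 0.0]
shapes = [[1.0, 0.0, 0.0],   # e.g. a "shape" offset direction
          [0.0, 2.0, 0.0]]   # e.g. an "expression" offset direction
print(blend(template, shapes, [0.5, 0.25]))  # [0.5, 0.5, 0.0]
```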
Generalized text representations are the foundation of many natural language understanding tasks. To fully utilize different corpora, it is inevitably necessary to understand the correlations between them. However, many methods ignore these correlations and directly use a single-channel model for all tasks (a coarse paradigm), which lacks sufficient rationality and interpretability. In addition, some existing works learn downstream tasks by stitching together skill blocks (a fine-grained paradigm), which can introduce redundancy and noise, leading to irrationality. In this work, we first analyze task correlations from three perspectives, namely data properties, manual design, and model-based relevance, based on which similar tasks are grouped together. We then propose a hierarchical framework with a coarse-to-fine paradigm, whose bottom level is shared across all tasks, whose middle level is divided into different groups, and whose top level is assigned to each individual task. This allows our model to learn basic language properties from all tasks, boost performance on related tasks, and reduce the negative impact of unrelated tasks. Our experiments on 13 benchmark datasets across five natural language understanding tasks demonstrate the superiority of our method.
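The three-level sharing scheme described above can be sketched with toy modules: a bottom level shared by all tasks, a middle level shared within a task group, and a top level private to each task. The modules here are scalar multipliers and the group/task names are invented for illustration; the real framework uses neural network layers:

```python
# Toy coarse-to-fine hierarchy: shared bottom, per-group middle,
# per-task top. Modules are scalar multipliers for illustration only.
def build_model(groups):
    model = {"bottom": 2.0, "middle": {}, "top": {}}
    for group, tasks in groups.items():
        model["middle"][group] = 3.0   # one middle module per task group
        for t in tasks:
            model["top"][t] = 1.0      # one private top module per task
    return model

def forward(model, groups, task, x):
    group = next(g for g, ts in groups.items() if task in ts)
    # input flows through shared bottom -> group middle -> task top
    return x * model["bottom"] * model["middle"][group] * model["top"][task]

groups = {"classification": ["sentiment", "nli"], "span": ["qa"]}
m = build_model(groups)
print(forward(m, groups, "sentiment", 1.0))  # 6.0
```

Tasks in the same group ("sentiment" and "nli" above) share both the bottom and the middle module, while "qa" shares only the bottom, mirroring the grouping idea in the abstract.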
Sophisticated traffic analytics, such as encrypted traffic analysis and unknown malware detection, emphasize the need for advanced methods to analyze network traffic. Traditional approaches that use fixed patterns, signature matching, and rules to detect known patterns in network traffic are being replaced by AI (artificial intelligence)-driven algorithms. However, the lack of a high-performance, networking-specific AI framework has made it impossible to deploy AI-based real-time processing within networking workloads. In this paper, we describe the design of the Traffic Analytics Development Kit (TADK), an industry-standard framework for AI-based networking workload processing. TADK enables real-time AI-based networking workload processing in networking equipment from the data center to the edge, without requiring specialized hardware (e.g., GPUs, neural processing units, etc.). We have deployed TADK in commodity WAF and 5G UPF systems, and the evaluation results show that TADK can achieve a throughput of up to 35.3 Gbps per core in traffic feature extraction and 6.5 Gbps per core in traffic classification, and can reduce SQLi/XSS detection latency down to 4.5 µs per request with higher accuracy than fixed-pattern solutions.