Existing data-dependent hashing methods use large backbone networks with millions of parameters and are computationally complex. Existing knowledge distillation methods use the logits and other features of a deep (teacher) model as knowledge for a compact (student) model, which requires the teacher network to be fine-tuned on the target context in parallel with the student model. Fine-tuning the teacher on the target context demands additional time and computational resources. In this paper, we propose context-unaware knowledge distillation, which uses the knowledge of the teacher model without fine-tuning it on the target context. We also propose a new efficient student model architecture for knowledge distillation. The proposed method follows a two-step process: the first step pre-trains the student model with context-unaware knowledge distillation from the teacher model; the second step fine-tunes the student model on the image retrieval context. To show the efficacy of the proposed method, we compare the retrieval results, the number of parameters, and the number of operations of the student models against the teacher models under different retrieval frameworks, including Deep Cauchy Hashing (DCH) and Central Similarity Quantization (CSQ). Experimental results confirm that the proposed method provides a promising trade-off between retrieval results and efficiency. The code used in this paper is released publicly at \url{https://github.com/satoru2001/cukdfir}.
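As a concrete illustration of the distillation step, below is a minimal sketch of a soft-label distillation loss against a frozen teacher; the temperature `T` and the Hinton-style KL objective are standard choices assumed here, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Soft-label distillation loss: KL divergence between temperature-scaled
    teacher and student distributions (Hinton-style; T is illustrative)."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    # T^2 rescales gradients to roughly the hard-label cross-entropy magnitude.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# Step 1 (context-unaware pre-training): the teacher stays frozen and is
# never fine-tuned on the retrieval context.
# for images in pretrain_loader:
#     with torch.no_grad():
#         t_logits = teacher(images)
#     loss = kd_loss(student(images), t_logits)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```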
A practical eye authentication (EA) system targeted for edge devices needs to authenticate and be robust to presentation attacks, while remaining compute and latency efficient. However, existing eye-based frameworks a) perform authentication and presentation attack detection (PAD) independently, and b) involve significant pre-processing steps to extract the iris region. Here, we introduce a joint framework for EA and PAD using periocular images. While a deep multitask learning (MTL) network can perform both tasks, MTL suffers from a forgetting effect because the training datasets for EA and PAD are disjoint. To overcome this, we propose Eye Authentication with PAD (EyePAD), a distillation-based method that trains a single network for EA and PAD while reducing the effect of forgetting. To further improve EA performance, we introduce a novel method called EyePAD++ that trains an MTL network on both EA and PAD data, while distilling the 'versatility' of the EyePAD network through an additional distillation step. Our proposed methods outperform the SOTA in PAD and achieve near-SOTA performance in eye verification, without any pre-processing. We also demonstrate the efficacy of EyePAD and EyePAD++ in user-to-user verification with PAD, across network backbones and image qualities.
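A minimal sketch of how a distillation term can counter forgetting when the two tasks' datasets are disjoint: the current network is trained on the available labels while being kept close to a frozen earlier network's outputs. The weighting `alpha`, temperature `T`, and function names are illustrative assumptions, not the EyePAD objective itself.

```python
import torch
import torch.nn.functional as F

def distill_against_forgetting(logits_new, labels, logits_frozen, alpha=0.5, T=2.0):
    """Task cross-entropy on the current dataset plus a distillation term that
    keeps the new network close to a frozen previous network's predictions
    (a learning-without-forgetting-style sketch; alpha and T are placeholders)."""
    ce = F.cross_entropy(logits_new, labels)
    kd = F.kl_div(
        F.log_softmax(logits_new / T, dim=1),
        F.softmax(logits_frozen / T, dim=1),
        reduction="batchmean",
    ) * T * T
    return ce + alpha * kd
```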
In this study, listeners of various Indian nativities were asked to listen to and recognize utterances spoken by American speakers. We collected three responses from each listener while they recognized an utterance: 1. a sentence difficulty rating, 2. a speaker difficulty rating, and 3. a transcription of the utterance. From these transcriptions, the error rate was computed and used as the criterion to evaluate the similarity between the recognized and the original sentences. The sentences selected in this study fall into three groups: easy, medium, and hard, based on the frequency of the words occurring in them. We observe that the sentence difficulty ratings, speaker difficulty ratings, and error rates all increase from easy to hard sentences. We also compare human speech recognition (HSR) performance with that of three automatic speech recognition (ASR) systems under different combinations of acoustic model (AM) and language model (LM): ASR1) AM trained with recordings from speakers of Indian origin and LM built on TIMIT text, ASR2) AM trained with recordings from native American speakers and LM built on the LibriSpeech corpus, and ASR3) AM trained with recordings from native American speakers and LM built on LibriSpeech and TIMIT text. We observe that HSR performance is similar to that of ASR1, whereas ASR3 achieves the best performance. Speaker-nativity-wise analysis shows that utterances of speakers of certain nativities are harder for Indian listeners to recognize than those of a few other nativities.
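The transcription-similarity metric described here reads like the standard word error rate (WER); under that assumption, a self-contained sketch of its computation via word-level edit distance:

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + deletions + insertions) / reference
    length, via Levenshtein distance over word tokens."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

print(wer("the cat sat on the mat", "the cat sat mat"))  # two deletions -> 0.333
```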
Quadruped robots are currently used in industrial robotics as a mechanical aid to automate several routine tasks. However, the use of such robots in a domestic setting remains largely a research topic. This paper discusses the design and virtual simulation of such a robot capable of detecting and understanding human emotions, generating its gait, and responding via sounds and expressions on a screen. To this end, we use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions, navigate through various terrains, detect sound sources, and respond to emotions using audio-visual feedback. This paper aims to establish a framework for simulating a quadruped robot that is emotionally intelligent and can primarily respond to audio-visual stimuli using motor or audio responses. Emotion detection from speech was not as performant as ERANNs or Zeta Policy learning, but still achieved an accuracy of 63.5%. The video emotion detection system produced results almost at par with the state of the art, with an accuracy of 99.66%. Due to its "on-policy" learning process, the PPO algorithm learned extremely quickly, allowing the simulated dog to demonstrate a remarkably seamless gait across the different cadences and variations. This enabled the quadruped robot to respond to generated stimuli, allowing us to conclude that it functions as predicted and satisfies the aim of this work.
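For readers unfamiliar with the on-policy setup mentioned here, a minimal PPO training sketch follows. The abstract does not name a library or environment; stable-baselines3 and the Pendulum task are stand-ins for the quadruped simulation, chosen purely for illustration.

```python
# Minimal PPO loop: on-policy, so the agent learns only from fresh rollouts
# collected with its current policy (stand-in env, not the quadruped sim).
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Pendulum-v1")             # placeholder for the simulated dog
model = PPO("MlpPolicy", env, verbose=0)  # policy updated from recent rollouts
model.learn(total_timesteps=100_000)
```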
Searching long egocentric videos with natural language queries (NLQ) has compelling applications in augmented reality and robotics, where a fluid index into everything that a person (agent) has seen before could augment human memory and surface relevant information on demand. However, the structured nature of the learning problem (free-form text query inputs, localized video temporal window outputs) and its needle-in-a-haystack nature makes it both technically challenging and expensive to supervise. We introduce Narrations-as-Queries (NaQ), a data augmentation strategy that transforms standard video-text narrations into training data for a video query localization model. Validating our idea on the Ego4D benchmark, we find it has tremendous impact in practice. NaQ improves multiple top models by substantial margins (even doubling their accuracy), and yields the very best results to date on the Ego4D NLQ challenge, soundly outperforming all challenge winners in the CVPR and ECCV 2022 competitions and topping the current public leaderboard. Beyond achieving the state-of-the-art for NLQ, we also demonstrate unique properties of our approach such as gains on long-tail object queries, and the ability to perform zero-shot and few-shot NLQ.
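A minimal sketch of the narrations-to-queries idea: each timestamped narration becomes a query paired with a temporal response window around its timestamp. The window heuristic `delta_s` and all field names below are illustrative placeholders, not the NaQ recipe.

```python
from dataclasses import dataclass

@dataclass
class NLQSample:
    video_id: str
    query: str       # free-form text query (here, the narration itself)
    start_s: float   # localized temporal window: start (seconds)
    end_s: float     # localized temporal window: end (seconds)

def narration_to_query(video_id, narration_text, timestamp_s, delta_s=2.0):
    """Turn a timestamped narration into an NLQ-style training sample by
    treating the narration as the query and a window around its timestamp
    as the localization target (delta_s is a placeholder heuristic)."""
    return NLQSample(video_id, narration_text,
                     max(0.0, timestamp_s - delta_s), timestamp_s + delta_s)

sample = narration_to_query("ego4d_clip_001", "C opens the fridge", 42.5)
```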
Machine Translation (MT) systems generally aim at the automatic rendering of a source language into a target language, retaining the original context, using various Natural Language Processing (NLP) techniques. Among these methods is Statistical Machine Translation (SMT), which uses probabilistic and statistical techniques for analysis and conversion. This paper covers the development of bilingual SMT models for translating English to fifteen low-resource Indian Languages (ILs) and vice versa. At the outset, all 15 languages are briefed with a short description related to our experimental need. Further, a detailed analysis of the Samanantar and OPUS datasets for model building, along with the standard benchmark dataset (Flores-200) for fine-tuning and testing, is done as a part of our experiment. Different preprocessing approaches are proposed in this paper to handle the noise in the dataset. To create the system, the MOSES open-source SMT toolkit is explored. Distance-based reordering is utilized with the aim of capturing grammar rules and context-dependent adjustments through a phrase-reordering categorization framework. In our experiment, the quality of the translation is evaluated using standard metrics such as BLEU, METEOR, and RIBES.
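As a small illustration of the evaluation step, here is BLEU scoring with sacrebleu; this is one common implementation of the metric, not necessarily the one the authors used, and the example sentences are invented.

```python
# Corpus-level BLEU with sacrebleu (one of several BLEU implementations).
import sacrebleu

hyps = ["the cat is on the mat"]     # system translations, one per segment
refs = [["the cat sat on the mat"]]  # one reference stream, parallel to hyps
bleu = sacrebleu.corpus_bleu(hyps, refs)
print(f"BLEU = {bleu.score:.2f}")
```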
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
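To make the motion-forecasting task concrete, below is a naive constant-velocity baseline over a track history of positions; the array layout, sampling rate, and horizon are illustrative assumptions, not the AV2 data schema or API.

```python
import numpy as np

def constant_velocity_forecast(track_xy, dt=0.1, horizon=60):
    """Naive forecast for a scored actor: extrapolate the last observed
    velocity. track_xy is a (T, 2) history of x/y positions; dt and horizon
    are placeholder values, not AV2 specifics."""
    v = (track_xy[-1] - track_xy[-2]) / dt        # last-step velocity estimate
    steps = np.arange(1, horizon + 1)[:, None]    # (horizon, 1)
    return track_xy[-1] + steps * dt * v          # (horizon, 2) future positions

history = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.2]])
future = constant_velocity_forecast(history)
```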
Cashews are grown by over 3 million smallholders in more than 40 countries worldwide as a principal source of income. As the third largest cashew producer in Africa, Benin has nearly 200,000 smallholder cashew growers contributing 15% of the country's national export earnings. However, a lack of information on where and how cashew trees grow across the country hinders decision-making that could support increased cashew production and poverty alleviation. By leveraging 2.4-m Planet Basemaps and 0.5-m aerial imagery, newly developed deep learning algorithms, and large-scale ground truth datasets, we successfully produced the first national map of cashew in Benin and characterized the expansion of cashew plantations between 2015 and 2021. In particular, we developed a SpatioTemporal Classification with Attention (STCA) model to map the distribution of cashew plantations, which can fully capture texture information from discriminative time steps during a growing season. We further developed a Clustering Augmented Self-supervised Temporal Classification (CASTC) model to distinguish high-density versus low-density cashew plantations by automatic feature extraction and optimized clustering. Results show that the STCA model has an overall accuracy of 80% and the CASTC model achieved an overall accuracy of 77.9%. We found that the cashew area in Benin has doubled from 2015 to 2021 with 60% of new plantation development coming from cropland or fallow land, while encroachment of cashew plantations into protected areas has increased by 70%. Only half of cashew plantations were high-density in 2021, suggesting high potential for intensification. Our study illustrates the power of combining high-resolution remote sensing imagery and state-of-the-art deep learning algorithms to better understand tree crops in the heterogeneous smallholder landscape.
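A generic sketch of the attention idea behind STCA, i.e. learning to weight the discriminative time steps of a growing-season series before classification; this module is a textbook attention pooling layer, not the authors' architecture, and all dimensions are placeholders.

```python
import torch
import torch.nn as nn

class TemporalAttentionPool(nn.Module):
    """Scores each time step of a feature sequence, then aggregates features
    as an attention-weighted sum, so informative time steps dominate."""
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1))

    def forward(self, x):                          # x: (batch, time, feat_dim)
        w = torch.softmax(self.score(x), dim=1)    # (batch, time, 1) weights
        return (w * x).sum(dim=1)                  # (batch, feat_dim)

pooled = TemporalAttentionPool(16)(torch.randn(4, 12, 16))  # 12 time steps
```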
We demonstrate how efficient autonomous drone swarms can be in detecting and tracking occluded targets in densely forested areas, such as lost people during search and rescue missions. Exploration and optimization of local viewing conditions, such as occlusion density and target view obliqueness, provide much faster and much more reliable results than previous, blind sampling strategies that are based on pre-defined waypoints. An adapted real-time particle swarm optimization and a new objective function are presented that are able to deal with dynamic and highly random through-foliage conditions. Synthetic aperture sensing is our fundamental sampling principle, and drone swarms are employed to approximate the optical signals of extremely wide and adaptable airborne lenses.
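For reference, a textbook particle swarm optimization loop is sketched below; the paper's adapted real-time variant handles dynamic through-foliage conditions, which this minimal version does not model, and the toy objective stands in for their viewpoint-scoring function.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Classic PSO: particles track personal bests and the global best,
    blending inertia, cognitive, and social terms (w, c1, c2 are standard
    hyperparameters, illustrative here)."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, (n_particles, dim))    # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

best = pso(lambda p: np.sum(p ** 2), dim=2)  # toy objective, not viewpoint scoring
```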
We propose an ensemble approach to predict the labels in linear programming word problems. Entity identification and meaning representation are the two tasks to be solved in the NL4Opt competition. For the first task, we propose the ensembleCRF method to identify the named entities; in our analysis, no single model improved results for this task on its own. A set of prediction models predicts the entities, and their outputs are combined to form a consensus result in the ensembleCRF method. For the second task, we present an ensemble text generator to produce the representation sentences. Because of overflow in the output, we divide the problem into multiple small tasks. A single model generates different representations based on the prompt, and all the generated text is combined to form an ensemble and produce the mathematical meaning of a linear programming problem.
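One simple way to form such a consensus over entity predictions is token-level majority voting, sketched below; the abstract does not spell out the exact ensembleCRF combination rule, so this and the tag labels are illustrative only.

```python
from collections import Counter

def ensemble_vote(tag_sequences):
    """Combine per-token tag predictions from several models by majority
    vote (one plausible consensus rule; not necessarily ensembleCRF's)."""
    return [Counter(tags).most_common(1)[0][0] for tags in zip(*tag_sequences)]

preds = [
    ["B-VAR", "O",     "B-CONST"],   # model 1
    ["B-VAR", "O",     "O"],         # model 2
    ["B-VAR", "B-VAR", "B-CONST"],   # model 3
]
print(ensemble_vote(preds))  # ['B-VAR', 'O', 'B-CONST']
```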