This paper discusses facial expression recognition models and caption generation models for constructing descriptive sentences about images and the facial expressions of the people in them. Our study shows that YOLOv5 achieves better results than a traditional CNN across all emotions on the KDEF dataset; specifically, the CNN and YOLOv5 models reach accuracies of 0.853 and 0.938, respectively. For image captioning, we propose a merge-architecture model in which the image is encoded with VGG16 and the caption with an LSTM. YOLOv5 is also used to identify the dominant colors of objects in the image and, where necessary, to correct the color words in the generated caption. If the caption contains a word referring to a person, we recognize the emotion of the person in the image. Finally, we combine the outputs of all models to create sentences that describe both the visual content and the human emotions in the image. Experiments on the Vietnamese Flickr8k dataset achieve BLEU-1, BLEU-2, BLEU-3, and BLEU-4 scores of 0.628, 0.425, 0.280, and 0.174, respectively.
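As an illustration of the merge architecture mentioned above, here is a minimal Keras sketch in which VGG16 fc2 features and an LSTM-encoded partial caption are merged before a next-word classifier. All layer sizes, `vocab_size`, and `max_len` are assumptions for the sketch, not the paper's exact configuration.

```python
# Minimal sketch of a merge-architecture captioning model (VGG16 image
# encoder + LSTM caption encoder, merged before the word classifier).
# Sizes below (vocab_size, max_len, 4096-d VGG16 fc2 features) are
# illustrative assumptions, not the paper's exact configuration.
from tensorflow.keras.layers import Input, Dense, Dropout, Embedding, LSTM, add
from tensorflow.keras.models import Model

vocab_size, max_len = 5000, 30   # assumed vocabulary size and caption length

# Image branch: precomputed VGG16 fc2 features (4096-d) projected to 256-d.
img_in = Input(shape=(4096,))
img_vec = Dense(256, activation="relu")(Dropout(0.5)(img_in))

# Text branch: the partial caption so far, encoded by an LSTM.
txt_in = Input(shape=(max_len,))
txt_emb = Embedding(vocab_size, 256, mask_zero=True)(txt_in)
txt_vec = LSTM(256)(Dropout(0.5)(txt_emb))

# Merge both modalities and predict the next word of the caption.
merged = add([img_vec, txt_vec])
out = Dense(vocab_size, activation="softmax")(Dense(256, activation="relu")(merged))

model = Model(inputs=[img_in, txt_in], outputs=out)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
```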
This paper presents methods for automatically creating massive numbers of new bilingual dictionaries for low-resource languages, especially resource-poor ones, from a single input bilingual dictionary. Our algorithms use available WordNets and a machine translator (MT) to generate translations of source-language words into rich target languages. Because our methods rely only on one input dictionary, available WordNets, and an MT, they are applicable to any bilingual dictionary as long as one of its two languages is English or has a WordNet linked to the Princeton WordNet. Starting from 5 available bilingual dictionaries, we created 48 new bilingual dictionaries, 30 of which cover language pairs not supported by the popular MTs Google and Bing.
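A minimal sketch of the pivot idea, using NLTK's Princeton WordNet and the Open Multilingual WordNet: each source word's English translations are mapped to synsets, whose lemmas in a target language become candidate dictionary entries. The toy input dictionary and target language code are assumptions, and the paper's ranking and filtering steps are omitted.

```python
# Hedged sketch of the pivot idea: from one source->English dictionary,
# derive source->target entries through the Princeton WordNet and the
# Open Multilingual WordNet. The toy dictionary and language code are
# illustrative; the paper's ranking/filtering steps are omitted.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)

src_to_eng = {"con mèo": ["cat"], "con chó": ["dog"]}  # toy input dictionary

def pivot_entries(src_to_eng, target_lang="fra"):
    """Map each source word to target-language lemmas via English synsets."""
    new_dict = {}
    for src_word, eng_words in src_to_eng.items():
        candidates = set()
        for eng in eng_words:
            for synset in wn.synsets(eng):
                candidates.update(synset.lemma_names(target_lang))
        new_dict[src_word] = sorted(candidates)
    return new_dict

print(pivot_entries(src_to_eng))  # e.g. {'con mèo': ['chat', ...], ...}
```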
The research reported in this paper turns an ordinary trash bin into a smarter one by applying computer vision. Supported by sensor and actuator devices, the bin can classify garbage automatically: a camera on the bin takes pictures of the waste, and a central processing unit analyzes them and decides which compartment the waste should be dropped into. Our bin system reaches an accuracy of 90%. In addition, the bin is connected to the Internet so that its status can be updated for further management, and a mobile application has been developed for managing the bins.
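A hedged sketch of the classify-and-route loop such a bin could run; the model file, class list, and actuator call below are hypothetical placeholders, not the system's actual components.

```python
# Illustrative control loop for a vision-based sorting bin: capture a
# frame, classify it, and route the waste to a compartment. The model
# file, class list, and actuator call are hypothetical placeholders.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

CLASSES = ["organic", "recyclable", "other"]   # assumed waste categories
model = load_model("waste_classifier.h5")       # hypothetical trained model

def route_waste(compartment: str) -> None:
    # Placeholder for the actuator command (e.g., rotating the lid).
    print(f"Opening compartment: {compartment}")

camera = cv2.VideoCapture(0)
ok, frame = camera.read()
if ok:
    x = cv2.resize(frame, (224, 224)).astype("float32") / 255.0
    probs = model.predict(x[np.newaxis])[0]
    route_waste(CLASSES[int(np.argmax(probs))])
camera.release()
```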
This paper studies methods for generating lexical resources for endangered languages. Our algorithms construct bilingual dictionaries and multilingual thesauruses using public WordNets and a machine translator (MT). Because our work relies on only one bilingual dictionary between the endangered language and an "intermediate helper" language, it is applicable to languages that lack many existing resources.
Manually constructing a WordNet is a difficult task that requires years of expert time. As a first step toward automatically building full WordNets, we propose approaches for generating WordNet synsets for both resource-rich and resource-poor languages using publicly available WordNets, a machine translator, and/or a single bilingual dictionary. Our algorithms translate the synsets of existing WordNets into a target language T and then apply a ranking method to the translation candidates to find the best translations in T. Our approaches are applicable to any language that has at least one existing bilingual dictionary translating English into it.
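One plausible reading of the ranking step, sketched below: translate every member of an English synset with a bilingual dictionary and rank the target-language candidates by how many synset members voted for them. The dictionary fragment and the voting rule are illustrative assumptions, not the paper's exact method.

```python
# Sketch of the translate-then-rank idea: translate every word in an
# English synset into the target language T with a bilingual dictionary,
# then rank candidates by how many synset members produced them. The
# toy dictionary and the simple vote count are illustrative only.
from collections import Counter

eng_viet = {  # hypothetical English->Vietnamese dictionary fragment
    "car": ["xe hơi", "ô tô"],
    "auto": ["ô tô"],
    "automobile": ["ô tô", "xe hơi"],
    "machine": ["máy", "ô tô"],
}

def rank_synset_translations(synset_words, dictionary):
    """Count how often each target word appears across member translations."""
    votes = Counter()
    for word in synset_words:
        votes.update(set(dictionary.get(word, [])))
    return votes.most_common()

# Members of the English synset car.n.01: the top-voted candidate is the
# most plausible label for the translated synset in T.
print(rank_synset_translations(["car", "auto", "automobile", "machine"], eng_viet))
```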
Bilingual dictionaries are expensive resources, and not many are available when one of the languages is resource-poor. In this paper, we propose algorithms for creating new reverse bilingual dictionaries from existing bilingual dictionaries in which English is one of the two languages. Our algorithms exploit the similarity between word-concept pairs, computed using the English WordNet, to produce reverse dictionary entries. Because our algorithms rely on available bilingual dictionaries, they are applicable to any bilingual dictionary as long as one of its two languages has a WordNet-type lexical ontology.
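A sketch of how such a reversal might use WordNet similarity: invert the entries, then keep a reversed pair only when the English word's senses cohere with the entry's other English glosses. The similarity measure, threshold, and toy entry are assumptions, not the paper's exact algorithm.

```python
# Sketch of reverse-dictionary construction: invert an L->English
# dictionary, then keep a reversed pair only if the English word is
# WordNet-similar to the other English translations of the same source
# word. Threshold and toy dictionary are illustrative assumptions.
import itertools
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

l_to_eng = {"ast": ["branch", "bough", "twig"]}  # toy L->English entry

def max_similarity(w1, w2):
    """Best path similarity over all sense pairs of two English words."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1, s2 in itertools.product(wn.synsets(w1), wn.synsets(w2))]
    return max(scores, default=0.0)

def reverse_dictionary(l_to_eng, threshold=0.2):
    eng_to_l = {}
    for l_word, eng_words in l_to_eng.items():
        for eng in eng_words:
            others = [e for e in eng_words if e != eng]
            # Accept if this word coheres with the entry's other glosses.
            if not others or any(max_similarity(eng, o) >= threshold for o in others):
                eng_to_l.setdefault(eng, []).append(l_word)
    return eng_to_l

print(reverse_dictionary(l_to_eng))
```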
Past dictionary-based approaches to translating phrases from a language L1 into a language L2 require grammar rules to restructure the initial translations. This paper introduces a novel method that translates a given phrase in L1 that does not exist in the dictionary into L2 without using any grammar rules. We require at least one L1-L2 bilingual dictionary and n-gram data in L2. Our translations obtain an average manual evaluation score of 4.29/5.00, indicating very high quality.
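A toy sketch of the dictionary-plus-n-gram idea: translate each word, enumerate word-choice combinations and orderings (no grammar rules), and keep the candidate with the highest n-gram count. The dictionary and counts below are stand-ins for real resources.

```python
# Sketch of dictionary + n-gram phrase translation: translate each word,
# enumerate candidate combinations and orderings, and keep the candidate
# best attested in target-language n-gram data. The dictionary and
# bigram counts are toy stand-ins for real resources.
from itertools import permutations, product

l1_l2 = {"red": ["rouge"], "car": ["voiture", "auto"]}   # toy L1->L2 dictionary
ngram_counts = {("voiture", "rouge"): 950, ("rouge", "voiture"): 12,
                ("auto", "rouge"): 40, ("rouge", "auto"): 3}  # toy L2 bigrams

def translate_phrase(phrase):
    words = phrase.split()
    options = [l1_l2.get(w, [w]) for w in words]
    best, best_count = None, -1
    for combo in product(*options):           # one translation per word
        for order in permutations(combo):     # no grammar rules: try all orders
            count = ngram_counts.get(order, 0)
            if count > best_count:
                best, best_count = order, count
    return " ".join(best)

print(translate_phrase("red car"))  # -> "voiture rouge"
```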
In this paper, we propose a novel framework dubbed peer learning to deal with the problem of biased scene graph generation (SGG). This framework uses predicate sampling and consensus voting (PSCV) to encourage different peers to learn from each other, improving model diversity and mitigating bias in SGG. To address the heavily long-tailed distribution of predicate classes, we propose to use predicate sampling to divide and conquer this issue. As a result, the model is less biased and makes more balanced predicate predictions. Specifically, a single peer may not be sufficiently diverse to discriminate between different levels of predicate distributions. Therefore, we sample the data distribution based on the frequency of predicates into sub-distributions, selecting head, body, and tail classes to combine and feed to different peers as complementary predicate knowledge during the training process. The complementary predicate knowledge of these peers is then ensembled using a consensus voting strategy, which simulates a civilized voting process in our society that emphasizes the majority opinion and diminishes the minority opinion. This approach ensures that the learned representations of each peer are optimally adapted to the various data distributions. Extensive experiments on the Visual Genome dataset demonstrate that PSCV outperforms previous methods. We have established a new state-of-the-art (SOTA) on the SGCls task by achieving a mean of 31.6.
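The two ingredients named in the abstract can be sketched as follows; the head/body/tail boundaries and the exact voting rule are illustrative guesses, not the paper's implementation.

```python
# Sketch of the two PSCV ingredients described in the abstract:
# (1) split predicate classes by frequency into head/body/tail groups,
# and (2) ensemble peer predictions with a majority-style consensus
# vote. Group boundaries and voting rule are illustrative guesses.
import torch

def split_by_frequency(class_counts, head_frac=0.3, body_frac=0.3):
    """Return head/body/tail class-index groups sorted by frequency."""
    order = torch.argsort(class_counts, descending=True)
    n = len(order)
    h, b = int(n * head_frac), int(n * (head_frac + body_frac))
    return order[:h], order[h:b], order[b:]

def consensus_vote(peer_logits):
    """peer_logits: [num_peers, batch, num_classes] -> voted predictions."""
    votes = peer_logits.argmax(dim=-1)          # each peer's chosen class
    # Majority opinion wins: most frequent class index per sample.
    return torch.mode(votes, dim=0).values

counts = torch.tensor([900., 400., 120., 40., 9., 3.])  # toy predicate counts
head, body, tail = split_by_frequency(counts)
logits = torch.randn(3, 4, 6)                           # 3 peers, 4 samples
print(head.tolist(), body.tolist(), tail.tolist(), consensus_vote(logits))
```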
Audio-visual scene understanding is a challenging problem due to the unstructured spatial-temporal relations that exist in the audio signals and the spatial layouts of different objects and various texture patterns in the visual images. Recently, many studies have focused on abstracting features with convolutional neural networks, while the learning of explicit, semantically relevant frames of sound signals and visual images has been overlooked. To this end, we present an end-to-end framework, namely the attentional graph convolutional network (AGCN), for structure-aware audio-visual scene representation. First, the spectrogram of the sound and the input image are processed by a backbone network for feature extraction. Then, to build multi-scale hierarchical information of the input features, we utilize an attention fusion mechanism to aggregate features from multiple layers of the backbone network. Notably, to represent the salient regions and contextual information of the audio-visual inputs, a salient acoustic graph (SAG) and contextual acoustic graph (CAG), and a salient visual graph (SVG) and contextual visual graph (CVG), are constructed for the audio-visual scene representation. Finally, the constructed graphs pass through a graph convolutional network for structure-aware audio-visual scene recognition. Extensive experimental results on audio, visual, and audio-visual scene recognition datasets show that the AGCN achieves promising results. Visualizations of the graphs on spectrograms and images are presented to show that the proposed CAG/SAG and CVG/SVG focus on salient and semantically relevant regions.
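A generic sketch of the graph step, assuming pooled CNN feature vectors as nodes and a similarity-thresholded adjacency; this is a plain GCN layer, not the paper's attentional fusion or its SAG/CAG/SVG/CVG construction.

```python
# Minimal sketch of the graph step: treat pooled CNN feature vectors as
# nodes, connect them by feature similarity, and apply one graph
# convolution. A generic GCN layer, not the paper's exact AGCN.
import torch
import torch.nn.functional as F

def build_similarity_graph(nodes, threshold=0.5):
    """nodes: [N, D]; adjacency from cosine similarity (illustrative)."""
    sim = F.cosine_similarity(nodes.unsqueeze(1), nodes.unsqueeze(0), dim=-1)
    adj = (sim > threshold).float()
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return adj / deg                       # row-normalized adjacency

class GCNLayer(torch.nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = torch.nn.Linear(d_in, d_out)

    def forward(self, x, adj):
        return F.relu(self.lin(adj @ x))   # aggregate neighbors, transform

nodes = torch.randn(8, 128)               # e.g. 8 pooled regions/frames
adj = build_similarity_graph(nodes)
out = GCNLayer(128, 64)(nodes, adj)
print(out.shape)                           # torch.Size([8, 64])
```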
We introduce a machine-learning (ML)-based weather simulator--called "GraphCast"--which outperforms the most accurate deterministic operational medium-range weather forecasting system in the world, as well as all previous ML baselines. GraphCast is an autoregressive model, based on graph neural networks and a novel high-resolution multi-scale mesh representation, which we trained on historical weather data from the European Centre for Medium-Range Weather Forecasts (ECMWF)'s ERA5 reanalysis archive. It can make 10-day forecasts, at 6-hour time intervals, of five surface variables and six atmospheric variables, each at 37 vertical pressure levels, on a 0.25-degree latitude-longitude grid, which corresponds to roughly 25 x 25 kilometer resolution at the equator. Our results show GraphCast is more accurate than ECMWF's deterministic operational forecasting system, HRES, on 90.0% of the 2760 variable and lead time combinations we evaluated. GraphCast also outperforms the most accurate previous ML-based weather forecasting model on 99.2% of the 252 targets it reported. GraphCast can generate a 10-day forecast (35 gigabytes of data) in under 60 seconds on Cloud TPU v4 hardware. Unlike traditional forecasting methods, ML-based forecasting scales well with data: by training on bigger, higher quality, and more recent data, the skill of the forecasts can improve. Together these results represent a key step forward in complementing and improving weather modeling with ML, open new opportunities for fast, accurate forecasting, and help realize the promise of ML-based simulation in the physical sciences.
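The autoregressive rollout described above can be sketched generically: a learned 6-hour step model is iterated 40 times to produce a 10-day forecast. The `step_model`, grid size, and channel count below are toy stand-ins, not GraphCast itself.

```python
# Sketch of an autoregressive weather rollout: a learned single-step
# model maps the current state to the state 6 hours ahead, and a 10-day
# forecast comes from iterating it 40 times. `step_model`, grid size,
# and channel count are toy stand-ins, not GraphCast.
import numpy as np

def step_model(state: np.ndarray) -> np.ndarray:
    """Placeholder for a learned 6-hour weather transition model."""
    return state + 0.01 * np.random.randn(*state.shape)

def rollout(initial_state: np.ndarray, num_steps: int = 40):
    """40 steps x 6 hours = a 10-day forecast trajectory."""
    states, state = [], initial_state
    for _ in range(num_steps):
        state = step_model(state)          # feed the prediction back in
        states.append(state)
    return np.stack(states)

# Toy state: (channels, lat, lon) on a coarse grid for illustration.
forecast = rollout(np.zeros((11, 72, 144), dtype=np.float32))
print(forecast.shape)                      # (40, 11, 72, 144)
```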