Quantum image processing has drawn considerable attention because quantum systems promise faster computation and more compact storage than classical data processing systems. However, converting classical image data into the quantum domain, and the complexity of preparing the state labels, remain challenging issues. Existing techniques normally connect the pixel values and the state positions directly. Recently, the EFRQI (efficient flexible representation of the quantum image) approach introduced an auxiliary qubit that connects the pixel-representing qubits to the state-position qubits via Toffoli gates in order to reduce the number of state connections. However, because EFRQI uses two Toffoli gates for each pixel connection, it still requires a significant number of bits to connect each pixel value. In this paper, we propose a new SCMFRQI (state connection modification FRQI) approach that further reduces the required bits by modifying the state connection: a reset gate is used instead of repeating the same Toffoli gate connection. Moreover, unlike other existing methods, we compress images at the block level to further reduce the number of required qubits. The experimental results confirm that the proposed method outperforms existing methods from both the image representation and the compression points of view.
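A toy Qiskit sketch may make the stated connection scheme concrete (the register sizes, the `write_pixel` helper, and the gate ordering are illustrative assumptions, not the authors' circuit): an auxiliary qubit is tied to the position register with one Toffoli gate, the pixel-value qubits are set through that auxiliary, and a reset gate then clears the auxiliary instead of a second Toffoli connection.

```python
# Hypothetical sketch of the reset-instead-of-second-Toffoli idea (not the authors' circuit).
from qiskit import QuantumCircuit, QuantumRegister

pos = QuantumRegister(2, "pos")    # position qubits (toy 2x2 image)
aux = QuantumRegister(1, "aux")    # auxiliary connection qubit
pix = QuantumRegister(4, "pix")    # pixel-value qubits (4-bit grayscale)
qc = QuantumCircuit(pos, aux, pix)

qc.h(pos)                          # superpose all pixel positions

def write_pixel(circ, position_bits, value_bits):
    """Connect one pixel value to its position through the auxiliary qubit."""
    for i, b in enumerate(position_bits):      # make the Toffoli fire on this position
        if b == 0:
            circ.x(pos[i])
    circ.ccx(pos[0], pos[1], aux[0])           # EFRQI-style connection Toffoli
    for j, v in enumerate(value_bits):
        if v == 1:
            circ.cx(aux[0], pix[j])            # set value qubits through the auxiliary
    circ.reset(aux[0])                         # SCMFRQI idea: reset instead of a second Toffoli
    for i, b in enumerate(position_bits):      # undo the position flips
        if b == 0:
            circ.x(pos[i])

write_pixel(qc, (0, 1), (1, 0, 1, 1))          # e.g. the pixel at position 01 has value 1011
```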
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
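For reference, a GRN layer of the kind described here can be written in a few lines of PyTorch. The sketch below follows the paper's description (global L2 aggregation per channel, divisive normalization across channels, learnable calibration with a residual path) for channels-last tensors; it is meant as an illustration rather than the released implementation.

```python
import torch
import torch.nn as nn

class GRN(nn.Module):
    """Global Response Normalization over channels-last (N, H, W, C) features."""
    def __init__(self, dim: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)   # global L2 aggregation per channel
        nx = gx / (gx.mean(dim=-1, keepdim=True) + 1e-6)     # divisive normalization across channels
        return self.gamma * (x * nx) + self.beta + x         # calibrate features, keep residual path
```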
In recent years, several learning-based approaches to point-goal navigation in previously unseen environments have been proposed. They vary in their representations of the environment, problem decomposition, and experimental evaluation. In this work, we compare state-of-the-art deep reinforcement learning approaches with a Partially Observable Markov Decision Process (POMDP) formulation of the point-goal navigation problem. We adapt the POMDP sub-goal framework proposed by [1] and modify the component that estimates frontier properties so that it uses partial semantic maps of indoor scenes built from image semantic segmentation. In addition to the well-known completeness of the model-based approach, we demonstrate that it is robust and efficient in that it leverages informative, learned properties of the frontiers, in contrast to an optimistic frontier-based planner. We also demonstrate its data efficiency compared with end-to-end deep reinforcement learning approaches. We compare our results against an optimistic planner, ANS, and DD-PPO on the Matterport3D dataset using the Habitat Simulator. We show comparable, though slightly worse, performance than the SOTA DD-PPO approach, yet with far less data.
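The frontier-property component can be pictured with a small NumPy sketch (the cell labels, the `scorer` callback, and the scoring rules are hypothetical, not the adapted framework's actual code): frontier cells are free cells bordering unknown space, and they are ranked either optimistically by distance to the goal or by a learned estimator fed the partial semantic map.

```python
import numpy as np

FREE, UNKNOWN = 0, -1  # hypothetical cell labels in a partial top-down map

def frontier_cells(occupancy: np.ndarray) -> np.ndarray:
    """Cells that are free and border at least one unknown cell."""
    free = occupancy == FREE
    unknown = occupancy == UNKNOWN
    near_unknown = np.zeros_like(unknown)
    near_unknown[1:, :] |= unknown[:-1, :]
    near_unknown[:-1, :] |= unknown[1:, :]
    near_unknown[:, 1:] |= unknown[:, :-1]
    near_unknown[:, :-1] |= unknown[:, 1:]
    return np.argwhere(free & near_unknown)

def pick_frontier(occupancy, semantic_map, agent_xy, goal_xy, scorer=None):
    """Optimistic planner: the frontier closest to the goal.  Learned variant:
    score each frontier with a model fed the partial semantic map."""
    frontiers = frontier_cells(occupancy)
    if frontiers.size == 0:
        return None
    if scorer is None:   # optimistic: straight-line distance to the point goal
        scores = -np.linalg.norm(frontiers - np.asarray(goal_xy), axis=1)
    else:                # learned: e.g. predicted chance the goal lies beyond the frontier
        scores = np.array([scorer(semantic_map, f, agent_xy, goal_xy) for f in frontiers])
    return frontiers[int(np.argmax(scores))]
```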
End-to-end text-to-speech (TTS) systems have been developed for European languages like English and Spanish with state-of-the-art speech quality, prosody, and naturalness. However, the development of end-to-end TTS for Indian languages lags behind in terms of quality. The challenges involved in such a task are: 1) scarcity of quality training data; 2) low efficiency during training and inference; 3) slow convergence for large vocabulary sizes. In the work reported in this paper, we investigate fine-tuning the English-pretrained Tacotron2 model with limited Sanskrit data to synthesize natural-sounding Sanskrit speech in low-resource settings. Our experiments show encouraging results, achieving an overall MOS of 3.38 from 37 evaluators with good knowledge of spoken Sanskrit. This is an encouraging result, considering that the speech data used amounts to only 2.5 hours.
Memes are powerful means for effective communication on social media. Their effortless amalgamation of viral visuals and compelling messages can have far-reaching implications with proper marketing. Previous research on memes has primarily focused on characterizing their affective spectrum and detecting whether the meme's message insinuates any intended harm, such as hate, offense, racism, etc. However, memes often use abstraction, which can be elusive. Here, we introduce a novel task - EXCLAIM, generating explanations for visual semantic role labeling in memes. To this end, we curate ExHVV, a novel dataset that offers natural language explanations of connotative roles for three types of entities - heroes, villains, and victims, encompassing 4,680 entities present in 3K memes. We also benchmark ExHVV with several strong unimodal and multimodal baselines. Moreover, we posit LUMEN, a novel multimodal, multi-task learning framework that endeavors to address EXCLAIM optimally by jointly learning to predict the correct semantic roles and correspondingly to generate suitable natural language explanations. LUMEN distinctly outperforms the best baseline across 18 standard natural language generation evaluation metrics. Our systematic evaluation and analyses demonstrate that characteristic multimodal cues required for adjudicating semantic roles are also helpful for generating suitable explanations.
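A minimal PyTorch sketch of such joint training is shown below (the toy encoder, dimensions, and loss weighting are assumptions; this is not LUMEN itself): a shared fused representation feeds both a semantic-role classifier and an explanation decoder, and the two cross-entropy losses are optimized together.

```python
import torch
import torch.nn as nn

class ToyMemeMTL(nn.Module):
    """Toy multi-task sketch (not LUMEN): a fused meme representation feeds
    a semantic-role classifier and an explanation decoder trained jointly."""
    def __init__(self, img_dim=512, txt_dim=300, hid=256, n_roles=3, vocab=5000):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(img_dim + txt_dim, hid), nn.ReLU())
        self.role_head = nn.Linear(hid, n_roles)            # hero / villain / victim
        self.embed = nn.Embedding(vocab, hid)
        self.decoder = nn.GRU(hid, hid, batch_first=True)   # explanation generator
        self.vocab_head = nn.Linear(hid, vocab)

    def forward(self, img_feat, txt_feat, expl_tokens):
        h = self.fuse(torch.cat([img_feat, txt_feat], dim=-1))
        role_logits = self.role_head(h)
        emb = self.embed(expl_tokens)                        # teacher forcing
        out, _ = self.decoder(emb, h.unsqueeze(0))           # fused vector as initial state
        return role_logits, self.vocab_head(out)

def joint_loss(role_logits, role_gold, tok_logits, tok_gold, alpha=1.0):
    """Sum of role-classification and explanation-generation losses."""
    ce = nn.functional.cross_entropy
    return ce(role_logits, role_gold) + alpha * ce(tok_logits.flatten(0, 1), tok_gold.flatten())
```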
With the rising adoption of machine learning across domains like banking, pharmaceuticals, and ed-tech, it has become critically important to adopt responsible AI methods that ensure models do not unfairly discriminate against any group. Given the lack of clean training data, generative adversarial techniques are preferred for generating synthetic data, with several state-of-the-art architectures readily available across domains, from unstructured data such as text and images to structured datasets modelling fraud detection, and many more. These techniques overcome challenges such as class imbalance, limited training data, and restricted access to data due to privacy issues. Existing work on generating fair data either works only for a certain GAN architecture or is very difficult to tune across GANs. In this paper, we propose a pipeline to generate fairer synthetic data independent of the GAN architecture. The proposed pipeline uses a pre-processing algorithm to identify and remove bias-inducing samples. In particular, we argue that while generating synthetic data most GANs amplify the bias present in the training data, but that by removing these bias-inducing samples, GANs are made to focus more on the genuinely informative samples. Our experimental evaluation on two open-source datasets demonstrates that the proposed pipeline generates fairer data, along with improved performance in some cases.
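One plausible, heavily simplified instantiation of such a pre-processing step is sketched below (the heuristic, group encoding, and sampling seed are assumptions, not the paper's algorithm): excess favourable-label samples from the privileged group are dropped so that its positive rate roughly matches the unprivileged group's before the GAN ever sees the data.

```python
import numpy as np

def remove_bias_inducing_samples(X, y, group):
    """Hypothetical heuristic (not the paper's algorithm): drop privileged-group
    samples with the favourable label until the privileged positive rate roughly
    matches that of the unprivileged group (group == 1 marks the privileged group)."""
    priv, unpriv = group == 1, group == 0
    target_rate = y[unpriv].mean()                          # unprivileged positive rate
    pos_priv = np.where(priv & (y == 1))[0]
    keep_pos = int(round(target_rate * priv.sum()))         # approximate match of rates
    n_drop = max(len(pos_priv) - keep_pos, 0)
    drop = np.random.default_rng(0).choice(pos_priv, size=n_drop, replace=False)
    keep = np.setdiff1d(np.arange(len(y)), drop)
    return X[keep], y[keep], group[keep]
```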
Existing self-supervised learning strategies are constrained to either a limited set of objectives or generic downstream tasks that are predominantly unimodal. This has isolated progress for imperative multimodal applications that are diverse in complexity and domain affinity, such as meme analysis. Here, we introduce two self-supervised pre-training methods, Ext-PIE-Net and MM-SimCLR, that (i) employ off-the-shelf multimodal hate-speech data during pre-training and (ii) perform self-supervised learning by incorporating multiple specialized pretext tasks, effectively catering to the complex multimodal representation learning required for meme analysis. We experiment with different self-supervision strategies, including potential variants that could help learn rich cross-modal representations, and evaluate them on the hateful-meme task using standard linear probing. The proposed solutions compete with fully supervised baselines via label-efficient training, while distinctly outperforming them on all three tasks of the Memotion challenge, with performance gains of 0.18%, 23.64%, and 0.93%, respectively. Further, we demonstrate the generalizability of the proposed solutions by reporting competitive performance on the HarMeme task. Finally, we empirically establish the quality of the learned representations by analyzing task-specific learning with fewer labeled training samples, and argue that the complexity of the self-supervision strategy and of the downstream task at hand are correlated. Our efforts highlight the need for better multimodal self-supervision methods involving specialized pretext tasks for efficient fine-tuning and generalizable performance.
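A cross-modal SimCLR-style objective of the kind such pretext tasks build on can be sketched in a few lines of PyTorch (the symmetric InfoNCE form and the temperature are assumptions, not the exact Ext-PIE-Net or MM-SimCLR losses): the image and text embeddings of the same meme form a positive pair, and the other memes in the batch act as negatives.

```python
import torch
import torch.nn.functional as F

def cross_modal_ntxent(img_emb, txt_emb, temperature=0.1):
    """SimCLR-style contrastive loss pairing each meme's image embedding with its
    own text embedding against all other memes in the batch."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature               # (B, B) cosine similarities
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```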
In real-world scenarios, out-of-distribution (OOD) datasets can exhibit large distribution shifts from the training dataset. This phenomenon typically arises when a trained classifier is deployed in changing, dynamic environments, and it causes a significant drop in performance. To address this problem, we propose an end-to-end deep multi-task network in this work. Observing a strong relationship between rotation-prediction (self-supervised) accuracy and semantic-classification accuracy, we introduce an additional auxiliary classification head into the multi-task network, alongside the semantic classification and rotation prediction heads. To study the influence of this additional classifier on the rotation prediction head, our learning method is formulated as a bi-level optimization problem, in which the upper level is trained to update the parameters of the semantic classification and rotation prediction heads, while the lower-level optimization performs updates with the parameters of the semantic classification head kept fixed. The method is validated on three unseen OOD datasets, where it shows clear improvements in semantic classification accuracy over two baseline methods. Our code is available on GitHub at https://github.com/harshita-555/ossl
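The rotation pretext task and the split between the two optimization levels can be sketched as follows (the toy backbone, heads, and learning rates are assumptions, the auxiliary head is omitted for brevity, and images are assumed square; this is not the authors' training code).

```python
import torch
import torch.nn as nn

def make_rotation_batch(images):
    """Self-supervised pretext: rotate each (square) image by 0/90/180/270 degrees
    and label it with the rotation index (a 4-way classification target)."""
    rots = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    x = torch.cat(rots, dim=0)
    y = torch.arange(4).repeat_interleave(images.size(0))
    return x, y

# Toy backbone with a semantic head and a rotation head.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
sem_head, rot_head = nn.Linear(16, 10), nn.Linear(16, 4)

# Upper level updates the backbone plus both heads; the lower level updates only
# the rotation head, with the semantic head's parameters kept fixed.
upper = torch.optim.SGD([*backbone.parameters(), *sem_head.parameters(), *rot_head.parameters()], lr=0.01)
lower = torch.optim.SGD(rot_head.parameters(), lr=0.01)

def upper_step(x, y):
    rx, ry = make_rotation_batch(x)
    loss = nn.functional.cross_entropy(sem_head(backbone(x)), y) \
         + nn.functional.cross_entropy(rot_head(backbone(rx)), ry)
    upper.zero_grad(); loss.backward(); upper.step()

def lower_step(x):
    rx, ry = make_rotation_batch(x)
    loss = nn.functional.cross_entropy(rot_head(backbone(rx)), ry)
    lower.zero_grad(); loss.backward(); lower.step()
```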
Adversarial continual learning is effective for continual learning problems because of its feature alignment process, which yields task-invariant features with low susceptibility to the catastrophic forgetting problem. However, the ACL method imposes considerable complexity because it relies on task-specific networks and discriminators. It also goes through an iterative training process that does not suit online (single-epoch) continual learning problems. This paper proposes a scalable adversarial continual learning (SCALE) method, which puts forward a parameter generator that transforms common features into task-specific features, and a single discriminator in an adversarial game to induce common features. The training process is carried out in a meta-learning fashion using a combination of three loss functions. SCALE outperforms prominent baselines notably in both accuracy and execution time.
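An illustrative sketch of the stated components follows (not the authors' implementation; the FiLM-style parameter generator and the exact loss terms are assumptions): a shared backbone yields common features, a parameter generator turns them into task-specific features via per-task scale and shift, and a single discriminator tries to recover the task identity from the common features, so that adversarial training pushes the backbone toward task-invariant representations.

```python
import torch
import torch.nn as nn

class ParamGenerator(nn.Module):
    """Per-task scale/shift that turns common features into task-specific ones."""
    def __init__(self, n_tasks, feat_dim):
        super().__init__()
        self.scale = nn.Embedding(n_tasks, feat_dim)
        self.shift = nn.Embedding(n_tasks, feat_dim)

    def forward(self, common_feat, task_id):          # task_id: LongTensor of shape (B,)
        return self.scale(task_id) * common_feat + self.shift(task_id)

feat_dim, n_tasks, n_classes = 128, 5, 10
backbone = nn.Sequential(nn.Flatten(), nn.Linear(784, feat_dim), nn.ReLU())
generator = ParamGenerator(n_tasks, feat_dim)
classifier = nn.Linear(feat_dim, n_classes)
discriminator = nn.Linear(feat_dim, n_tasks)          # single discriminator over all tasks

def losses(x, y, task_id):
    """Three illustrative loss terms: task loss, discriminator loss, adversarial loss."""
    common = backbone(x)
    task_feat = generator(common, task_id)
    l_task = nn.functional.cross_entropy(classifier(task_feat), y)
    l_disc = nn.functional.cross_entropy(discriminator(common.detach()), task_id)
    l_adv = -nn.functional.cross_entropy(discriminator(common), task_id)   # fool the discriminator
    return l_task, l_disc, l_adv
```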
Many real-world classification problems have imbalanced class-label frequencies, a well-known issue referred to as the "class imbalance" problem. Classical classification algorithms tend to be biased toward the majority class, leaving the classifier vulnerable to misclassifying the minority classes. Although the literature is rich with methods that address this problem, many of them do not scale as the dimensionality of the problem grows, and the cost of running them becomes prohibitive. In this paper, we propose an end-to-end deep generative classifier. We propose a domain-constrained autoencoder to preserve the latent space as the prior of the generator, which then plays an adversarial game together with two other deep networks, a discriminator and a classifier. Extensive experiments are carried out on three different multi-class imbalanced problems, with comparisons against state-of-the-art methods. The experimental results confirm the superiority of our method over popular algorithms in handling high-dimensional imbalanced classification problems. Our code is available at https://github.com/tanmdl/slppl-gan.
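A compressed sketch of such a three-player setup is given below (the network sizes, noise injection, and label assignment for synthetic points are assumptions, not the code at the linked repository): an autoencoder constrains the latent space the generator samples from, a discriminator separates real from generated points, and the classifier is trained on both.

```python
import torch
import torch.nn as nn

d_in, d_lat, n_classes = 100, 16, 3
encoder = nn.Linear(d_in, d_lat)
decoder = nn.Linear(d_lat, d_in)                        # doubles as the generator
discriminator = nn.Linear(d_in, 1)
classifier = nn.Linear(d_in, n_classes)

bce = nn.functional.binary_cross_entropy_with_logits
ce = nn.functional.cross_entropy

def losses(x_real, y_real):
    """Autoencoder, discriminator, generator, and classifier losses for one batch."""
    z = encoder(x_real)                                  # latent prior shaped by the autoencoder
    x_fake = decoder(z + 0.1 * torch.randn_like(z))      # perturbed reconstructions as synthetic samples
    l_ae = nn.functional.mse_loss(decoder(z), x_real)    # autoencoder / domain constraint
    l_disc = bce(discriminator(x_real), torch.ones(len(x_real), 1)) + \
             bce(discriminator(x_fake.detach()), torch.zeros(len(x_fake), 1))
    l_gen = bce(discriminator(x_fake), torch.ones(len(x_fake), 1))   # generator fools the discriminator
    l_cls = ce(classifier(x_real), y_real) + ce(classifier(x_fake), y_real)  # synthetic points keep source labels
    return l_ae, l_disc, l_gen, l_cls
```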