In this paper, we study convex optimization using a very general formulation called BSGD (Block Stochastic Gradient Descent). At each iteration, some, but not necessarily all, components of the argument are updated. The update direction can be one of two possibilities: (i) a noise-corrupted measurement of the true gradient, or (ii) an approximate gradient computed via a first-order approximation from function values that may themselves be corrupted by noise. This formulation embraces most of the currently used stochastic gradient methods. Based on stochastic approximation theory, we establish conditions under which BSGD converges to the global minimum. We then verify the predicted convergence through numerical experiments. The results show that when approximate gradients are used, BSGD converges while momentum-based methods can diverge. However, with noisy gradients, not only our BSGD but also standard (full-update) gradient descent and various momentum-based methods all converge.
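For concreteness, here is a minimal sketch of the kind of block update described above, run on an illustrative quadratic test problem (the test problem, step sizes, noise levels, and function names are assumptions for illustration, not the paper's code): at each iteration only a randomly chosen block of coordinates is updated, using either a noise-corrupted gradient measurement or a finite-difference approximation built from noisy function values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative strongly convex test problem: f(theta) = 0.5 * theta^T A theta
A = np.diag(np.linspace(1.0, 10.0, 20))
true_grad = lambda theta: A @ theta
f = lambda theta: 0.5 * theta @ A @ theta

def bsgd_step(theta, lr, block_size, noise=1e-3, use_approx_grad=False, h=1e-2):
    """One block stochastic gradient step.

    Only `block_size` randomly chosen coordinates are updated. The search
    direction is either (i) a noise-corrupted measurement of the true
    gradient, or (ii) a forward-difference gradient built from noisy
    function evaluations.
    """
    d = theta.size
    block = rng.choice(d, size=block_size, replace=False)
    g = np.zeros(d)
    if use_approx_grad:
        for i in block:
            e = np.zeros(d)
            e[i] = h
            g[i] = ((f(theta + e) + noise * rng.standard_normal())
                    - (f(theta) + noise * rng.standard_normal())) / h
    else:
        g[block] = true_grad(theta)[block] + noise * rng.standard_normal(block.size)
    return theta - lr * g

theta = rng.standard_normal(20)
for t in range(1, 5001):
    theta = bsgd_step(theta, lr=0.1 / np.sqrt(t), block_size=5, use_approx_grad=True)
print("final f(theta):", f(theta))
```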
It is well known that, thanks to modern implementation practices, classical formulations resembling the Horn and Schunck model remain largely competitive; in most cases these models outperform many modern flow estimation methods. In view of this, we propose an efficient implementation design for an edge-preserving $L^1$ regularization approach to optical flow. The mathematical well-posedness of our proposed model is studied in the space of functions of bounded variation $BV(\Omega, \mathbb{R}^2)$. The implementation scheme is designed in multiple steps. The flow field is computed with the robust Chambolle-Pock primal-dual algorithm. Following the recent work of Castro and Donoho, we extend the heuristic of iterated median filtering to our flow estimation. Furthermore, to refine the flow edges, we use the weighted median filter established by Li and Osher as a post-processing step. Our experiments on the Middlebury dataset show that the proposed method achieves the best average angular and end-point errors compared to some of the state-of-the-art Horn and Schunck based variational methods.
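As a rough illustration of the post-processing idea, the sketch below applies a weighted median filter to a flow field, with weights derived from image-intensity similarity so that flow edges aligned with image edges are preserved. The weighting scheme, window size, and names are simplifying assumptions, not Li and Osher's exact formulation or the paper's implementation.

```python
import numpy as np

def weighted_median(values, weights):
    """Return the weighted median of `values`."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w)
    return v[np.searchsorted(cdf, 0.5 * cdf[-1])]

def refine_flow(flow, image, radius=2, sigma=0.1):
    """Smooth a flow field (H x W x 2) with a weighted median filter.

    Weights favour neighbours whose image intensity is close to that of the
    centre pixel, so flow edges aligned with image edges are preserved.
    Illustrative stand-in for a weighted-median post-processing step.
    """
    H, W, _ = flow.shape
    out = flow.copy()
    for y in range(radius, H - radius):
        for x in range(radius, W - radius):
            patch_img = image[y - radius:y + radius + 1, x - radius:x + radius + 1]
            w = np.exp(-((patch_img - image[y, x]) ** 2) / (2 * sigma ** 2)).ravel()
            for c in range(2):
                patch = flow[y - radius:y + radius + 1, x - radius:x + radius + 1, c].ravel()
                out[y, x, c] = weighted_median(patch, w)
    return out

# Tiny usage example on random data
rng = np.random.default_rng(0)
refined = refine_flow(rng.standard_normal((32, 32, 2)), rng.random((32, 32)))
```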
Language models demonstrate both quantitative improvements and new qualitative capabilities as they increase in scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, commonsense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
The goal of this paper is to propose a unified framework for the nonlinear refinement of optical flow. The first model is a two-phase refinement process, in which the initial estimate obtained in the first phase is refined in the second phase using an additional constraint. We study the mathematical well-posedness of this formulation using an evolutionary PDE approach. The second model is a one-phase refinement process involving an anisotropic regularization of the flow curl. For the implementation, we use a first-order primal-dual algorithm. We observe that the results obtained by the two approaches are comparable in nature. We perform several numerical experiments and empirically demonstrate that the two-phase refinement process achieves a faster convergence rate of $O(1/N^2)$, compared to the $O(1/N)$ rate of the one-phase process.
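Such rate claims can be checked empirically from a sequence of iterate errors or energies. The small utility below is an illustrative addition, not taken from the paper: it fits the decay exponent in log-log space, so an $O(1/N)$ sequence yields an exponent near 1 and an $O(1/N^2)$ sequence an exponent near 2.

```python
import numpy as np

def empirical_rate(errors):
    """Fit errors_N ~ C / N^p by least squares in log-log space and return p."""
    N = np.arange(1, len(errors) + 1)
    slope, _ = np.polyfit(np.log(N), np.log(errors), 1)
    return -slope

# Synthetic check: sequences with the two rates discussed above.
N = np.arange(1, 1001)
print(empirical_rate(3.0 / N))        # ~1.0, i.e. O(1/N)
print(empirical_rate(5.0 / N ** 2))   # ~2.0, i.e. O(1/N^2)
```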
Previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes for different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in types, scale, pose, occlusion, and lighting conditions. Current object detectors like YOLOv5 and Faster R-CNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of combining an existing detector with the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among fine-grained datasets.
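A hypothetical sketch of such a two-stage pipeline follows: an off-the-shelf detector proposes vehicle boxes, and each crop is passed to a three-level classification head. The detector is torchvision's COCO-pretrained Faster R-CNN; the hierarchical head, its coarse-level class counts, and all names are illustrative assumptions rather than the HRN classifier or the paper's setup (only the 210 fine-grained labels come from the dataset description).

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor, resized_crop

# Off-the-shelf, COCO-pretrained detector from torchvision.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

class ThreeLevelHead(torch.nn.Module):
    """Hypothetical stand-in for a hierarchical classifier: one backbone
    feature vector feeds three heads, one per hierarchy level."""
    def __init__(self, num_l1=6, num_l2=40, num_l3=210):  # l1/l2 counts are made up
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = torch.nn.Identity()
        self.backbone = backbone
        self.heads = torch.nn.ModuleList(
            [torch.nn.Linear(512, n) for n in (num_l1, num_l2, num_l3)])

    def forward(self, x):
        feat = self.backbone(x)
        return [head(feat) for head in self.heads]

classifier = ThreeLevelHead().eval()

@torch.no_grad()
def fine_grained_detect(image, score_thresh=0.5):
    """Detect vehicles, then classify each crop at all three hierarchy levels.

    `image` is a PIL image or HxWxC uint8 array of a traffic scene.
    """
    x = to_tensor(image)
    det = detector([x])[0]
    results = []
    for box, score in zip(det["boxes"], det["scores"]):
        if score < score_thresh:
            continue
        x1, y1, x2, y2 = box.int().tolist()
        crop = resized_crop(x, y1, x1, y2 - y1, x2 - x1, [224, 224]).unsqueeze(0)
        level_logits = classifier(crop)
        results.append((box.tolist(), [l.argmax(dim=1).item() for l in level_logits]))
    return results
```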
The primary goal of this work is to study the effectiveness of an unsupervised domain adaptation approach for various applications such as binary classification and anomaly detection in the context of Alzheimer's disease (AD) detection for the OASIS datasets. We also explore image reconstruction and image synthesis for analyzing and generating 3D structural MRI data to establish performance benchmarks for anomaly detection. We successfully demonstrate that domain adaptation improves the performance of AD detection when implemented in both supervised and unsupervised settings. Additionally, the proposed methodology achieves state-of-the-art performance for binary classification on the OASIS-1 dataset.
Document summarization aims to create a precise and coherent summary of a text document. Many deep learning summarization models are developed mainly for English, often requiring a large training corpus and efficient pre-trained language models and tools. However, summarization models for low-resource Indian languages are often limited by rich morphological variation, syntax, and semantic differences. In this paper, we propose GAE-ISumm, an unsupervised Indic summarization model that extracts summaries from text documents. In particular, our proposed model GAE-ISumm uses a Graph Autoencoder (GAE) to jointly learn text representations and a document summary. We also provide TELSUM, a manually annotated Telugu summarization dataset, to experiment with our model GAE-ISumm. Further, we experiment with most of the publicly available Indian language summarization datasets to investigate the effectiveness of GAE-ISumm on other Indian languages. Our experiments with GAE-ISumm in seven languages yield the following observations: (i) it is competitive with or better than state-of-the-art results on all datasets, (ii) it sets benchmark results on TELSUM, and (iii) including positional and cluster information in the proposed model improves the quality of the resulting summaries.
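A minimal sketch of the underlying idea, assuming a precomputed sentence graph: a two-layer GCN encoder with an inner-product decoder (the standard graph autoencoder) learns sentence embeddings by reconstructing the sentence adjacency, and a summary is then extracted by picking the sentences closest to the mean document embedding. The adjacency construction, scoring rule, and hyperparameters are assumptions for illustration, not GAE-ISumm itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAutoencoder(nn.Module):
    """Two-layer GCN encoder with an inner-product decoder."""
    def __init__(self, in_dim, hid_dim=64, z_dim=32):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.lin2 = nn.Linear(hid_dim, z_dim, bias=False)

    def encode(self, a_hat, x):
        h = F.relu(a_hat @ self.lin1(x))
        return a_hat @ self.lin2(h)

    def forward(self, a_hat, x):
        z = self.encode(a_hat, x)
        return torch.sigmoid(z @ z.t()), z

def normalize_adj(a):
    a = a + torch.eye(a.size(0))                     # add self-loops
    d_inv_sqrt = a.sum(1).clamp(min=1e-9).pow(-0.5)
    return d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]

def summarize(sent_features, sent_adj, k=3, epochs=200):
    """Pick k sentences whose embeddings lie closest to the document embedding.

    sent_features: (n_sentences, feat_dim) float tensor, e.g. TF-IDF rows.
    sent_adj: (n, n) 0/1 sentence-similarity graph.
    """
    a_hat = normalize_adj(sent_adj)
    model = GraphAutoencoder(sent_features.size(1))
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(epochs):
        opt.zero_grad()
        a_rec, _ = model(a_hat, sent_features)
        loss = F.binary_cross_entropy(a_rec, (sent_adj > 0).float())
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, z = model(a_hat, sent_features)
    doc = z.mean(0, keepdim=True)                    # document embedding
    scores = F.cosine_similarity(z, doc)             # sentence relevance scores
    return scores.topk(min(k, z.size(0))).indices.tolist()
```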
In this paper, we propose Adam-Hash: an adaptive and dynamic multi-resolution hashing data-structure for fast pairwise summation estimation. Given a data-set $X \subset \mathbb{R}^d$, a binary function $f:\mathbb{R}^d\times \mathbb{R}^d\to \mathbb{R}$, and a point $y \in \mathbb{R}^d$, the Pairwise Summation Estimate is defined as $\mathrm{PSE}_X(y) := \frac{1}{|X|} \sum_{x \in X} f(x,y)$. For any given data-set $X$, we need to design a data-structure such that, given any query point $y \in \mathbb{R}^d$, it approximately estimates $\mathrm{PSE}_X(y)$ in time sub-linear in $|X|$. Prior works on this problem have focused exclusively on the case where the data-set is static and the queries are independent. In this paper, we design a hashing-based PSE data-structure which works in the more practical \textit{dynamic} setting, in which insertions, deletions, and replacements of points are allowed. Moreover, our proposed Adam-Hash is also robust to adaptive PSE queries, where an adversary can choose the query $q_j \in \mathbb{R}^d$ depending on the output from previous queries $q_1, q_2, \dots, q_{j-1}$.
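The sketch below is not the Adam-Hash multi-resolution hashing construction; it only illustrates the dynamic interface (insert, delete, replace) and a sub-linear query, here by averaging $f(x, y)$ over a small uniform sample of the current set, with the swap-with-last trick keeping updates $O(1)$.

```python
import math
import random

class DynamicPSE:
    """Illustrative dynamic pairwise-summation estimator (not Adam-Hash):
    queries touch only `n_samples` points rather than all of X."""
    def __init__(self, f, n_samples=64):
        self.f = f
        self.n_samples = n_samples
        self.points = []          # current data-set X, stored as (key, x) pairs
        self.index = {}           # key -> position in self.points

    def insert(self, key, x):
        self.index[key] = len(self.points)
        self.points.append((key, x))

    def delete(self, key):
        i = self.index.pop(key)
        last = self.points.pop()
        if i < len(self.points):      # deleted item was not the last one
            self.points[i] = last     # move the last item into the hole
            self.index[last[0]] = i

    def replace(self, key, x_new):
        self.delete(key)
        self.insert(key, x_new)

    def query(self, y):
        """Unbiased Monte-Carlo estimate of PSE_X(y) = (1/|X|) * sum_x f(x, y)."""
        if not self.points:
            raise ValueError("query on an empty data-set")
        m = min(self.n_samples, len(self.points))
        sample = random.sample(self.points, m)
        return sum(self.f(x, y) for _, x in sample) / m

# Usage with a Gaussian kernel as the pairwise function f (1-D for brevity).
pse = DynamicPSE(f=lambda x, y: math.exp(-(x - y) ** 2))
for k in range(1000):
    pse.insert(k, random.gauss(0.0, 1.0))
pse.delete(0)
print(pse.query(0.5))
```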
The emergence of large pretrained models has enabled language models to achieve superior performance in common NLP tasks, including language modeling and question answering, compared to previous static word representation methods. Augmenting these models with a retriever that fetches related text and documents as supporting information has shown promise for solving NLP problems in a more interpretable way, given that the additional knowledge is injected explicitly rather than being captured in the models' parameters. Despite this recent progress, our analysis of retriever-augmented language models shows that this class of models still lacks the ability to reason over the retrieved documents. In this paper, we study the strengths and weaknesses of different retriever-augmented language models, such as REALM, kNN-LM, FiD, ATLAS, and Flan-T5, in reasoning over the selected documents across different tasks. In particular, we analyze the reasoning failures of each of these models and study how these failures are rooted in the retriever module as well as the language model.
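For readers unfamiliar with the setup, a toy retrieve-then-read sketch follows: a TF-IDF retriever selects supporting documents that are concatenated into a prompt for a reader model. The toy documents, the TF-IDF retriever, and the prompt template are illustrative assumptions; models such as REALM, ATLAS, and FiD use learned dense retrievers and integrate the retrieved text differently.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
    "Paris is the capital and most populous city of France.",
]

def retrieve(query, docs, k=2):
    """Rank documents by TF-IDF cosine similarity to the query."""
    vec = TfidfVectorizer().fit(docs + [query])
    doc_mat, q_vec = vec.transform(docs), vec.transform([query])
    scores = cosine_similarity(q_vec, doc_mat)[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

query = "In which country is the Eiffel Tower?"
context = "\n".join(retrieve(query, documents))
prompt = f"Answer using the context.\n\nContext:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)   # this prompt would then be fed to a reader model such as Flan-T5
```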
This paper proposes a perception and path planning pipeline for autonomous racing in an unknown bounded course. The pipeline was initially created for the 2021 evGrandPrix autonomous division and was further improved for the 2022 event, both of which resulted in first-place finishes. Using a simple LiDAR-based perception pipeline feeding into an occupancy grid based expansion algorithm, we determine a goal point to drive toward. Together with the occupancy grid expansion algorithm for finding its way around a cone-defined track, this pipeline reliably and consistently completed laps, averaging 6.85 m/s over a distance of 434.2 meters for a total lap time of 63.4 seconds.
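The sketch below illustrates one simple way an occupancy-grid expansion can yield a goal point: a breadth-first expansion over free cells that returns the farthest reachable cell. The grid contents, 4-neighborhood, and selection rule are assumptions for illustration, not the team's exact algorithm.

```python
from collections import deque
import numpy as np

def choose_goal(occupancy, start):
    """Breadth-first expansion over the free cells of an occupancy grid.

    Returns the reachable free cell farthest (in grid steps) from `start`,
    which serves as the goal point to drive toward.
    """
    h, w = occupancy.shape
    dist = -np.ones((h, w), dtype=int)
    dist[start] = 0
    queue, goal = deque([start]), start
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and occupancy[nr, nc] == 0 and dist[nr, nc] < 0:
                dist[nr, nc] = dist[r, c] + 1
                queue.append((nr, nc))
                if dist[nr, nc] > dist[goal]:
                    goal = (nr, nc)
    return goal

# Toy grid: 0 = free, 1 = occupied (e.g. cells hit by LiDAR returns from cones)
grid = np.zeros((20, 20), dtype=int)
grid[5, 2:18] = 1
print(choose_goal(grid, start=(0, 10)))
```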