In this comment, we present a simple alternative derivation of the "iterative re-weighted" algorithm for the fuzzy C-means problem. We show that the iterative steps derived for the IRW-FCM algorithm are nothing but steps of the popular majorization-minimization (MM) algorithm. The derivation presented in this note is simpler and more direct and, unlike the derivation of IRW-FCM, does not involve introducing any auxiliary variables. Moreover, by exhibiting the steps of IRW-FCM as an MM algorithm, the inner loop of the IRW-FCM algorithm can be eliminated, and the algorithm can effectively be run as a "single-loop" algorithm. More precisely, the new MM-based derivation shows that a single inner iteration of IRW-FCM is sufficient to decrease the fuzzy C-means objective, thereby speeding up the IRW-FCM algorithm.
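As an illustration of the single-loop view described above, the sketch below alternates one membership update and one centroid update per outer iteration of the standard fuzzy C-means objective J(U, C) = Σ_i Σ_k u_ik^m ||x_i − c_k||² with fuzzifier m > 1. It is a minimal NumPy re-derivation under these standard definitions, not the authors' IRW-FCM code.

```python
import numpy as np

def fcm_single_loop(X, n_clusters, m=2.0, n_iter=100, eps=1e-10):
    """Illustrative single-loop fuzzy C-means: one membership update followed
    by one centroid update per outer iteration (an MM-style step)."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    C = X[rng.choice(n, n_clusters, replace=False)]       # initial centroids
    for _ in range(n_iter):
        # squared distances to each centroid, shape (n, n_clusters)
        D = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1) + eps
        # membership update: closed-form minimizer of J for fixed centroids
        U = 1.0 / (D ** (1.0 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
        # centroid update: closed-form minimizer of J for fixed memberships
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, C
```

Each outer pass performs exactly one closed-form minimization in U followed by one in C, which is the MM-style step that, per the comment, makes an inner loop unnecessary.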
Real-world datasets exhibit imbalances of varying types and degrees. Several techniques based on re-weighting and margin adjustment of loss are often used to enhance the performance of neural networks, particularly on minority classes. In this work, we analyze the class-imbalanced learning problem by examining the loss landscape of neural networks trained with re-weighting and margin-based techniques. Specifically, we examine the spectral density of the Hessian of the class-wise loss, through which we observe that the network weights converge to a saddle point in the loss landscapes of minority classes. Following this observation, we also find that optimization methods designed to escape from saddle points can be effectively used to improve generalization on minority classes. We further theoretically and empirically demonstrate that Sharpness-Aware Minimization (SAM), a recent technique that encourages convergence to flat minima, can be effectively used to escape saddle points for minority classes. Using SAM results in a 6.2% increase in accuracy on the minority classes over the state-of-the-art Vector Scaling Loss, leading to an overall average increase of 4% across imbalanced datasets. The code is available at: https://github.com/val-iisc/Saddle-LongTail.
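For reference, Sharpness-Aware Minimization perturbs the weights toward a locally worst-case direction before taking the descent step. The PyTorch-style sketch below is a minimal illustrative re-implementation of that two-pass update (with the usual neighborhood radius rho); it is not the released Saddle-LongTail code.

```python
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    """One Sharpness-Aware Minimization step: ascend to the locally
    worst-case weights, take the gradient there, then update the originals."""
    # first forward/backward pass: gradient at the current weights
    loss = loss_fn(model(x), y)
    loss.backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params])) + 1e-12
    # perturb each weight by eps = rho * grad / ||grad||
    eps = []
    with torch.no_grad():
        for p in params:
            e = rho * p.grad / grad_norm
            p.add_(e)
            eps.append(e)
    model.zero_grad()
    # second forward/backward pass: gradient at the perturbed weights
    loss_fn(model(x), y).backward()
    # undo the perturbation and step with the sharpness-aware gradient
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    base_optimizer.step()
    model.zero_grad()
    return loss.item()
```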
Current self-supervised learning algorithms are often modality-specific and require large amounts of computational resources. To address these issues, we increase the training efficiency of data2vec, a learning objective that generalizes across several modalities. We do not encode masked tokens, use a fast convolutional decoder and amortize the effort to build teacher representations. data2vec 2.0 benefits from the rich contextualized target representations introduced in data2vec which enable a fast self-supervised learner. Experiments on ImageNet-1K image classification show that data2vec 2.0 matches the accuracy of Masked Autoencoders in 16.4x lower pre-training time, on Librispeech speech recognition it performs as well as wav2vec 2.0 in 10.6x less time, and on GLUE natural language understanding it matches a retrained RoBERTa model in half the time. Trading some speed for accuracy results in ImageNet-1K top-1 accuracy of 86.8% with a ViT-L model trained for 150 epochs.
In this paper, we propose and showcase, for the first time, monocular multi-view layout estimation for warehouse racks and shelves. Unlike typical layout estimation methods, MVRackLay estimates multi-layered layouts, wherein each layer corresponds to the layout of a shelf within a rack. Given a sequence of images of a warehouse scene, a dual-headed Convolutional-LSTM architecture outputs segmented racks, and the front and top view layout of each shelf within a rack. With minimal effort, such an output is transformed into a 3D rendering of all racks, shelves and objects on the shelves, giving an accurate 3D depiction of the entire warehouse scene in terms of racks, shelves and the number of objects on each shelf. MVRackLay generalizes to a diverse set of warehouse scenes with a varying number of objects on each shelf, a varying number of shelves, and in the presence of other such racks in the background. Further, MVRackLay shows superior performance vis-a-vis its single view counterpart, RackLay, in layout accuracy, quantified in terms of the mean IoU and mAP metrics. We also showcase a multi-view stitching of the 3D layouts, resulting in a representation of the warehouse scene with respect to a global reference frame akin to a rendering of the scene from a SLAM pipeline. To the best of our knowledge, this is the first such work to portray a 3D rendering of a warehouse scene in terms of its semantic components - Racks, Shelves and Objects - all from a single monocular camera.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
We present the design, development, and evaluation of HREyes: biomimetic communication devices which use light to communicate information and, for the first time, gaze direction from AUVs to humans. First, we introduce two types of information displays using the HREye devices: active lucemes and ocular lucemes. Active lucemes communicate information explicitly through animations, while ocular lucemes communicate gaze direction implicitly by mimicking human eyes. We present a human study in which our system is compared to the use of an embedded digital display that explicitly communicates information to a diver by displaying text. Our results demonstrate accurate recognition of active lucemes for trained interactants, limited intuitive understanding of these lucemes for untrained interactants, and relatively accurate perception of gaze direction for all interactants. The results on active luceme recognition demonstrate more accurate recognition than previous light-based communication systems for AUVs (albeit with different phrase sets). Additionally, the ocular lucemes we introduce in this work represent the first method for communicating gaze direction from an AUV, a critical aspect of nonverbal communication used in collaborative work. With readily available hardware as well as open-source and easily re-configurable programming, HREyes can be easily integrated into any AUV with the physical space for the devices and used to communicate effectively with divers in any underwater environment with appropriate visibility.
Coherent microscopy techniques provide an unparalleled multi-scale view of materials across scientific and technological fields, from structural materials to quantum devices, and from integrated circuits to biological cells. Driven by the construction of brighter sources and high-speed detectors, coherent X-ray microscopy methods such as ptychography are poised to revolutionize nanoscale materials characterization. However, the associated significant increase in data and compute demands means that conventional approaches are no longer sufficient to recover sample images in real time from high-speed coherent imaging experiments. Here, we demonstrate a workflow that leverages artificial intelligence at the edge and high-performance computing to enable real-time inversion of X-ray ptychography data streamed directly from a detector. The proposed AI-enabled workflow eliminates the sampling constraints imposed by traditional ptychography, thereby allowing low-dose imaging using orders of magnitude less data than required by conventional methods.
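The abstract describes the workflow only at a high level; the sketch below is a hypothetical illustration of its central idea, replacing iterative phase retrieval with per-frame neural-network inference on data streamed from the detector. The names frame_stream and trained_model, and the patch-stitching scheme, are assumptions for illustration, not components named by the authors.

```python
import numpy as np

def realtime_inversion(frame_stream, trained_model, scan_positions, canvas_shape):
    """Hypothetical edge-inference loop: each diffraction frame is inverted by a
    trained network and stitched into the object image as it arrives."""
    obj = np.zeros(canvas_shape, dtype=np.complex64)
    weight = np.zeros(canvas_shape, dtype=np.float32)
    for frame, (y, x) in zip(frame_stream, scan_positions):
        patch = trained_model(frame)              # network predicts a complex-valued patch
        h, w = patch.shape
        obj[y:y + h, x:x + w] += patch            # accumulate overlapping patches
        weight[y:y + h, x:x + w] += 1.0
    return obj / np.maximum(weight, 1.0)          # average where patches overlap
```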
Traditional approaches to active mapping focus on building geometric maps. For most real-world applications, however, actionable information relates to semantically meaningful objects in the environment. We propose an approach to the active metric-semantic mapping problem that enables multiple heterogeneous robots to collaboratively build a map of the environment. The robots actively explore to minimize the uncertainty in both semantic (object classification) and geometric (object modeling) information. We represent the environment using informative but sparse object models, each consisting of a basic shape and a semantic class label, and characterize the uncertainty empirically using a large amount of real-world data. Given a prior map, we use this model to select actions for each robot that minimize uncertainty. The performance of our algorithm is demonstrated through multi-robot experiments in diverse real-world environments. The proposed framework is applicable to a wide range of real-world problems, such as precision agriculture, infrastructure inspection, and asset mapping in factories.
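As an illustration of the action-selection step described above (a sketch, not the authors' implementation), a greedy rule might assign each robot the candidate viewpoint with the largest expected reduction in combined semantic and geometric uncertainty, net of travel cost; the helpers expected_entropy_reduction, expected_variance_reduction, robot.map, and robot.travel_cost are hypothetical.

```python
def select_actions(robots, candidate_views, expected_entropy_reduction, expected_variance_reduction):
    """Illustrative greedy action selection: each robot takes the viewpoint that
    most reduces semantic (classification) and geometric (shape) uncertainty."""
    assignments = {}
    for robot in robots:
        best_view, best_score = None, float("-inf")
        for view in candidate_views:
            gain = (expected_entropy_reduction(robot.map, view)      # semantic term
                    + expected_variance_reduction(robot.map, view))  # geometric term
            score = gain - robot.travel_cost(view)                   # penalize long detours
            if score > best_score:
                best_view, best_score = view, score
        assignments[robot] = best_view
    return assignments
```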
The diverse ecology of the natural world exhibits various forms of swarm behavior across many species. Butterflies are among the prominent species that fly seemingly at random yet with a degree of purpose, and translating this into an artificial metaphor opens up enormous possibilities. This paper considers one such metaphor, called Butterfly Mating Optimization (BMO). In BMO, the butterfly agents (Bflies) follow the patrolling mating phenomenon and simultaneously capture all the local optima of a multimodal function. To emulate the algorithm, a mobile robot (BFlyBot) is designed to fulfil the functions of a Bfly in the BMO algorithm. Furthermore, a multi-BFlyBot swarm is designed to act like butterflies in nature and follow the rules of the algorithm. Real-time experiments with the BMO algorithm are conducted in a multi-robot arena, with light sources serving as the signal sources. The experimental results show that the BMO algorithm is applicable to detecting multiple signal sources whose motion varies significantly, i.e., both static and dynamic sources. With static signal sources, convergence is affected in terms of time and smoothness as the initial positions of the BFlyBots vary, while experiments with different step sizes lead to variations in the execution time and speed of the robots. In this work, experiments are also conducted in dynamic environments in which the signal sources move in maneuvering and non-maneuvering scenarios. The BFlyBot swarm is able to detect single and multiple signal sources moving linearly between two fixed points, in circular motion, and in upward and downward motion. In light of the BMO phenomenon, various ongoing and prospective applications, such as mid-sea ship detection, aerial search, and earthquake prediction, are discussed.
Recent wave energy converters (WECs) are equipped with multiple legs and generators to maximize energy generation. Traditional controllers show limitations in capturing complex wave patterns, and the controllers must efficiently maximize the energy capture. This paper introduces a multi-agent reinforcement learning controller (MARL), which outperforms the traditionally used spring-damper controller. Our initial studies show that the complex nature of the problem makes it hard for training to converge. We therefore propose a novel skip-training approach that enables the MARL training to overcome performance saturation and converge to a better controller than default MARL training, thereby enhancing energy generation. We also propose another novel hybrid training initialization (STHTI) approach, in which the individual agents of the MARL controller are first trained against the baseline spring-damper (SD) controller and then trained one agent at a time, or all together, in subsequent iterations to accelerate convergence. We achieve double-digit gains in energy efficiency over the baseline spring-damper controller using the asynchronous advantage actor-critic (A3C) algorithm.
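For context on the baseline mentioned above, a spring-damper (SD) controller commands a power take-off force that is linear in each leg's displacement and velocity. The sketch below is a generic illustration with made-up gains (k_spring, c_damper), not the paper's tuned baseline.

```python
def spring_damper_control(displacement, velocity, k_spring=1.0e5, c_damper=5.0e4):
    """Baseline spring-damper (SD) controller: a restoring force proportional to
    displacement plus a damping force proportional to velocity."""
    return -k_spring * displacement - c_damper * velocity

# Illustrative use on a three-leg converter: one force command per leg.
forces = [spring_damper_control(x, v) for x, v in [(0.12, -0.3), (0.05, 0.1), (-0.02, 0.4)]]
```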