Language use changes over time, and this impacts the effectiveness of NLP systems. This phenomenon is even more prevalent in social media data during crisis events, where the meaning and frequency of word usage may change over the course of days. Contextual language models fail to adapt temporally, emphasizing the need for temporal adaptation in models that need to be deployed over an extended period of time. While existing approaches consider data spanning large periods of time (from years to decades), shorter time spans are critical for crisis data. We quantify temporal degradation for this scenario and propose methods to cope with performance loss by leveraging techniques from domain adaptation. To the best of our knowledge, this is the first effort to explore the effects of rapid language change driven by adversarial adaptations, particularly during natural and human-induced disasters. Through extensive experimentation on diverse crisis datasets, we analyze under what conditions our approaches outperform strong baselines while highlighting the current limitations of temporal adaptation methods in scenarios where access to unlabeled data is scarce.
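As a concrete illustration of the domain adaptation family of techniques referenced above, a minimal sketch of domain-adaptive continued pretraining on recent unlabeled posts follows; the model choice, hyperparameters, and the `load_recent_unlabeled_posts` helper are illustrative assumptions, not the paper's setup.

```python
# Hypothetical sketch: continue masked-LM pretraining of a deployed encoder
# on the most recent unlabeled crisis posts to counter temporal drift.
# Model, hyperparameters, and the data loader are assumptions.
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

recent_posts = load_recent_unlabeled_posts()          # hypothetical loader
dataset = [tok(t, truncation=True) for t in recent_posts]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapted", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
trainer.train()         # adapt on recent text, then fine-tune on task labels
```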
This paper presents a two-step algorithm for online trajectory planning in indoor environments with unknown obstacles. In the first step, sampling-based path planning techniques such as the optimal Rapidly exploring Random Tree (RRT*) algorithm and the Line-of-Sight (LOS) algorithm are employed to generate a collision-free path consisting of multiple waypoints. Then, in the second step, constrained quadratic programming is utilized to compute a smooth trajectory that passes through all computed waypoints. The main contribution of this work is the development of a flexible trajectory planning framework that can detect changes in the environment, such as new obstacles, and compute alternative trajectories in real time. The proposed algorithm actively considers all changes in the environment and performs the replanning process only on waypoints that are occupied by new obstacles. This helps to reduce the computation time and realize the proposed approach in real time. The feasibility of the proposed algorithm is evaluated using the Intel Aero Ready-to-Fly (RTF) quadcopter in simulation and in a real-world experiment.
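A minimal sketch of the second step, assuming a simple finite-difference discretization: a constrained quadratic program that minimizes squared acceleration while forcing the trajectory through the RRT*/LOS waypoints. The waypoints, horizon, and time step below are illustrative values.

```python
# Sketch of the smoothing step: a QP that passes through all waypoints
# while minimizing squared acceleration. All numbers are illustrative.
import numpy as np
import cvxpy as cp

waypoints = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.5], [4.0, 0.5]])
N, dt = 40, 0.1                                   # steps, step size
idx = np.linspace(0, N - 1, len(waypoints)).astype(int)  # waypoint time indices

p = cp.Variable((N, 2))                           # planar positions over time
acc = (p[2:] - 2 * p[1:-1] + p[:-2]) / dt**2      # finite-difference acceleration
prob = cp.Problem(cp.Minimize(cp.sum_squares(acc)),
                  [p[i] == waypoints[k] for k, i in enumerate(idx)])
prob.solve()
trajectory = p.value                              # smooth path through all waypoints
```

In the replanning spirit of the abstract, only the segment between waypoints invalidated by a newly detected obstacle would need to be re-solved, which keeps the online computation small.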
Solving the analytical inverse kinematics (IK) of redundant manipulators in real time is a difficult problem in robotics since its solution for a given target pose is not unique. Moreover, choosing the optimal IK solution with respect to application-specific demands helps to improve the robustness and to increase the success rate when driving the manipulator from its current configuration towards a desired pose. This is necessary, especially in highly dynamic tasks like catching objects in mid-flight. To compute a suitable target configuration in the joint space for a given target pose in the trajectory planning context, various factors such as travel time or manipulability must be considered. However, these factors increase the complexity of the overall problem, which impedes real-time implementation. In this paper, a real-time framework to compute the analytical inverse kinematics of a redundant robot is presented. To this end, the analytical IK of the redundant manipulator is parameterized by so-called redundancy parameters, which are combined with a target pose to yield a unique IK solution. Most existing works in the literature either try to approximate the direct mapping from the desired pose of the manipulator to the solution of the IK or cluster the entire workspace to find IK solutions. In contrast, the proposed framework directly learns these redundancy parameters by using a neural network (NN) that provides the optimal IK solution with respect to the manipulability and the closeness to the current robot configuration. Monte Carlo simulations show the effectiveness of the proposed approach, which is accurate and real-time capable ($\approx$ \SI{32}{\micro\second}) on the KUKA LBR iiwa 14 R820.
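A minimal sketch of the learned component, assuming a 7-dimensional pose input (position plus quaternion) and a small number of redundancy parameters; the `ik_analytical` closed-form solver is a hypothetical stand-in for the robot-specific analytical IK.

```python
# Sketch: a small network maps a target pose to redundancy parameters that,
# together with the pose, select a unique analytical IK solution.
# Dimensions and the ik_analytical stub are assumptions.
import torch
import torch.nn as nn

class RedundancyNet(nn.Module):
    def __init__(self, pose_dim=7, n_redundancy=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_redundancy),
        )

    def forward(self, pose):
        return self.net(pose)            # predicted redundancy parameters

def solve_ik(pose, model, ik_analytical):
    """ik_analytical(pose, r) -> joint configuration (hypothetical solver)."""
    with torch.no_grad():
        r = model(pose)
    return ik_analytical(pose, r)        # unique, analytically exact solution
```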
Planet formation is a multi-scale process in which the coagulation of $\mathrm{\mu m}$-sized dust grains in protoplanetary disks is strongly influenced by hydrodynamic processes on scales of astronomical units ($\approx 1.5\times 10^8 \,\mathrm{km}$). Studies therefore depend on subgrid models to emulate the microphysics of dust coagulation on top of a large-scale hydrodynamic simulation. Numerical simulations that include the relevant physical effects are complex and computationally expensive. Here, we present a fast and accurate learned effective model for dust coagulation, trained on data from high-resolution numerical coagulation simulations. Our model captures details of the dust coagulation process that were previously not tractable with other dust coagulation prescriptions of similar computational efficiency.
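A sketch of what such a learned effective model might look like: an MLP surrogate that advances a binned dust size distribution by one subgrid step, conditioned on local gas quantities. The bin count, conditioning variables, and residual form are assumptions, not the paper's architecture.

```python
# Hypothetical surrogate: advance a binned dust size distribution by one
# step, trained on pairs produced by a high-resolution coagulation code.
import torch
import torch.nn as nn

n_bins = 64                                   # assumed mass bins of the distribution

surrogate = nn.Sequential(
    nn.Linear(n_bins + 3, 256), nn.ReLU(),    # +3 assumed local gas quantities
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, n_bins),
)

def coagulation_step(sigma_dust, gas_state):
    """Advance the binned dust distribution by one hydrodynamic time step."""
    x = torch.cat([sigma_dust, gas_state], dim=-1)
    return sigma_dust + surrogate(x)          # residual update
```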
Machine learning tasks often require a significant amount of training data for the resultant network to perform suitably for a given problem in any domain. In agriculture, dataset sizes are further limited by phenotypical differences between two plants of the same genotype, often as a result of differing growing conditions. Synthetically-augmented datasets have shown promise in improving existing models when real data is not available. In this paper, we employ a contrastive unpaired translation (CUT) generative adversarial network (GAN) and simple image processing techniques to translate indoor plant images to appear as field images. While we train our network to translate an image containing only a single plant, we show that our method is easily extendable to produce multiple-plant field images. Furthermore, we use our synthetic multi-plant images to train several YoloV5 nano object detection models to perform the task of plant detection, and we measure the accuracy of the models on real field images. Including training data generated by the CUT-GAN leads to better plant detection performance compared to a network trained solely on real data.
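The multi-plant extension can be illustrated with simple compositing: paste several translated single-plant crops onto a field background and record bounding boxes for detector training. The paths, grid layout, and jitter below are illustrative assumptions, not the paper's pipeline.

```python
# Sketch of the compositing idea: build multi-plant field images from
# CUT-translated single-plant crops. Placement scheme is an assumption.
import random
from PIL import Image

def compose_field_image(background_path, plant_paths, grid=(2, 3), jitter=30):
    field = Image.open(background_path).convert("RGB")
    W, H = field.size
    cw, ch = W // grid[1], H // grid[0]
    boxes = []                                    # pixel boxes; normalize for YOLO
    for row in range(grid[0]):
        for col in range(grid[1]):
            plant = Image.open(random.choice(plant_paths)).convert("RGBA")
            x = col * cw + random.randint(-jitter, jitter)
            y = row * ch + random.randint(-jitter, jitter)
            field.paste(plant, (x, y), plant)     # alpha channel as stencil
            boxes.append((x, y, plant.width, plant.height))
    return field, boxes
```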
This work proposes a novel singularity avoidance approach for real-time trajectory optimization based on known singular configurations. The focus of this work lies on analyzing kinematically singular configurations for three robots with different kinematic structures, i.e., the Comau Racer 7-1.4, the KUKA LBR iiwa R820, and the Franka Emika Panda, and exploiting these configurations in the form of tailored potential functions for singularity avoidance. Monte Carlo simulations of the proposed method and the commonly used manipulability maximization approach are performed for comparison. The numerical results show that the average computing time can be reduced and that shorter trajectories in both time and path length are obtained with the proposed approach.
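As a sketch of the idea, a tailored potential can be any repulsive cost that activates near a known singular configuration; the classic inverse-distance form and its parameters below are assumptions, not the paper's exact functions.

```python
# Sketch: repulsive potential around known singular joint configurations,
# added as a cost term in trajectory optimization. Form/params are assumed.
import numpy as np

def singularity_potential(q, q_singular_list, d_act=0.3, k=10.0):
    """Cost that activates within distance d_act of a singular configuration."""
    cost = 0.0
    for q_s in q_singular_list:
        d = np.linalg.norm(q - q_s)
        if d < d_act:
            cost += 0.5 * k * (1.0 / d - 1.0 / d_act) ** 2   # classic repulsive field
    return cost
```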
Meta-gradients provide a general approach for optimizing the meta-parameters of reinforcement learning (RL) algorithms. The estimation of meta-gradients is central to the performance of these meta-algorithms and has been studied in the setting of MAML-style short-horizon meta-RL problems. In this context, prior work has investigated the estimation of the Hessian of the RL objective and addressed the credit assignment problem with respect to pre-adaptation behavior by making a sampling correction. However, we show that Hessian estimation, as implemented for example by DiCE and its variants, always adds bias and can also add variance to the meta-gradient estimate. Meanwhile, meta-gradient estimation has been studied less in the important long-horizon setting, where backpropagation through the full inner optimization trajectory is not feasible. We study the bias and variance trade-offs arising from truncated backpropagation and sampling correction, and compare against evolution strategies, which have recently become a popular alternative for long horizons. While prior work implicitly chooses a point in this bias-variance space, we disentangle the sources of bias and variance and present an empirical study that relates existing estimators to one another.
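For context, the DiCE estimator critiqued above rests on the published "magic box" operator (Foerster et al.), which can be rendered in a few lines of PyTorch; the usage line is an illustrative assumption.

```python
# DiCE "magic box": evaluates to 1 in the forward pass, but differentiating
# through it (to any order) injects the score-function terms d/dθ log p.
import torch

def magic_box(logp):
    return torch.exp(logp - logp.detach())

# Usage sketch (names assumed): an objective whose higher-order gradients
# carry the policy-gradient corrections discussed above.
# loss = -(magic_box(logp_per_step) * rewards).sum()
```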
With the success of contextualized language models, much research has explored what these models actually learn and in which cases they still fail. Most of this work has focused on specific NLP tasks and on learning outcomes. Few studies have attempted to decouple a model's weaknesses from specific tasks and to focus on the embeddings themselves and how they are learned. In this paper, we take up this research opportunity: based on theoretical linguistic insights, we investigate whether the semantic constraints of function words are learned and how the surrounding context influences their embeddings. We create suitable datasets, provide new insights into the inner workings of LMs vis-à-vis function words, and implement a supporting visual web interface for qualitative analysis.
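One way to probe how context shapes a function word's embedding, as described above, is to compare its contextual vectors across carrier sentences; the model choice and example sentences below are illustrative assumptions.

```python
# Sketch: extract the contextual embedding of a function word (here "or")
# from different sentences and compare them. Model/sentences are assumed.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embedding_of(word, sentence):
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    pos = enc.input_ids[0].tolist().index(tok.convert_tokens_to_ids(word))
    return hidden[pos]

v1 = embedding_of("or", "you can have tea or coffee")      # disjunctive use
v2 = embedding_of("or", "hurry up or we will be late")     # conditional use
print(torch.cosine_similarity(v1, v2, dim=0))
```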
Many ontologies, i.e., Description Logic (DL) knowledge bases, have been developed to provide rich knowledge about various domains, and many of them are based on ALC, a prototypical and expressive DL, or its extensions. The main task in exploring ALC ontologies is to compute semantic entailments. Symbolic approaches can guarantee sound and complete semantic entailment but are sensitive to inconsistency and missing information. To this end, we propose FALCON, a fuzzy ALC ontology neural reasoner. FALCON uses fuzzy logic operators to generate single model structures for arbitrary ALC ontologies and uses multiple model structures to compute semantic entailments. Theoretical results show that FALCON is guaranteed to be a sound and complete algorithm for computing the semantic entailments of ALC ontologies. Experimental results show that FALCON enables not only approximate reasoning (reasoning over incomplete ontologies) and paraconsistent reasoning (reasoning over inconsistent ontologies), but also improves machine learning in the biomedical domain by incorporating background knowledge from ALC ontologies.
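A sketch of the fuzzy logic operators such a reasoner can use to score ALC connectives over truth degrees in [0, 1]; the product t-norm family shown here is one common choice, not necessarily the paper's.

```python
# Fuzzy scoring of ALC connectives over [0, 1] truth degrees
# (product t-norm family; the paper's exact operators are an assumption).
def f_and(a, b):      # conjunction  C ⊓ D
    return a * b

def f_or(a, b):       # disjunction  C ⊔ D
    return a + b - a * b

def f_not(a):         # negation     ¬C
    return 1.0 - a

def f_implies(a, b):  # subsumption  C ⊑ D (Reichenbach implication)
    return 1.0 - a + a * b
```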
The automated detection and localization of anatomical features in retinal imaging data is relevant in many respects. In this work, we follow a data-centric approach to optimize classifier training for optic nerve head detection and localization in optical coherence tomography (OCT) scans. We investigate the effect of domain-knowledge-driven spatial complexity reduction on the resulting optic nerve head segmentation and localization performance. We propose a machine learning approach for segmenting the optic nerve head in 2D en face projections of 3D widefield swept-source OCT scans that enables the automated assessment of large amounts of data. Evaluation on manually annotated 2D en face images of the retina shows that the training of a standard U-Net improves optic nerve head segmentation and localization performance when the underlying pixel-level classification task is spatially relaxed through domain knowledge.
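A sketch of the spatial relaxation idea: dilate the pixel-level optic nerve head annotation so the segmentation target tolerates positional uncertainty; the disk radius below is an illustrative assumption.

```python
# Sketch: spatially relax a binary optic nerve head mask by morphological
# dilation before U-Net training. Radius is an assumed hyperparameter.
import numpy as np
from scipy.ndimage import binary_dilation

def relax_mask(mask, radius=8):
    """Dilate a binary mask with a disk structuring element of given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = x**2 + y**2 <= radius**2
    return binary_dilation(mask, structure=disk)
```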