Bimanual activities like coffee stirring, which require the coordination of two arms, are common in daily life yet difficult for robots to learn. Reinforcement learning is a promising approach to such tasks, since it enables the robot to explore how the two arms coordinate to accomplish a shared task. However, this field faces two main challenges: the coordination mechanism and long-horizon task decomposition. We therefore propose the Mixline method, which learns sub-tasks separately with an online algorithm and then composes them, via an offline algorithm, from the data generated during online training. We constructed a learning environment based on the GPU-accelerated Isaac Gym. In our work, the bimanual robot successfully learned to grasp, hold, and lift the spoon and cup, insert them together, and stir the coffee. The proposed method has the potential to be extended to other long-horizon bimanual tasks.
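Below is a minimal sketch of the two-stage idea described above: train each sub-task with an online learner, keep the generated rollouts, then fit a single policy offline on the pooled data. The environment, rewards, and learners are toy placeholders (a reward-weighted update standing in for the online RL algorithm, and least-squares behaviour cloning standing in for the offline one); this is not the authors' Mixline implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(policy, subtask, horizon=64):
    """Collect one episode of (obs, act, rew) from a placeholder sub-task env."""
    obs = rng.normal(size=(horizon, 8))           # dummy observations
    act = np.tanh(obs @ policy)                   # linear policy, two action dims
    rew = -np.linalg.norm(act - subtask, axis=1)  # dummy sub-task reward
    return obs, act, rew

def train_online(subtask, iters=50, lr=1e-2):
    """Stage 1: learn one sub-task with a crude online update (stand-in for the online RL algorithm)."""
    policy = rng.normal(scale=0.1, size=(8, 2))
    buffer = []
    for _ in range(iters):
        obs, act, rew = rollout(policy, subtask)
        w = np.exp(rew - rew.max())[:, None]      # naive reward weighting
        policy += lr * obs.T @ (w * (act - obs @ policy))
        buffer.append((obs, act, rew))
    return policy, buffer

def train_offline(buffers):
    """Stage 2: compose the sub-task data and fit one policy offline (stand-in for the offline algorithm)."""
    obs = np.concatenate([o for buf in buffers for o, _, _ in buf])
    act = np.concatenate([a for buf in buffers for _, a, _ in buf])
    return np.linalg.lstsq(obs, act, rcond=None)[0]   # behaviour-cloning least squares

subtasks = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # e.g. "grasp spoon", "hold cup"
buffers = [train_online(s)[1] for s in subtasks]
composed_policy = train_offline(buffers)
print(composed_policy.shape)
```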
Deep neural networks (DNNs) have been widely adopted in brain lesion detection and segmentation. However, locating small lesions in 2D MRI slices is challenging and requires a balance between the granularity of 3D context aggregation and computational complexity. In this paper, we propose a novel view-disentangled transformer to enhance the extraction of MRI features for more accurate tumor detection. First, the proposed transformer harvests long-range correlations among different positions in a 3D brain scan. Second, the transformer models a stack of slice features as multiple 2D views and enhances these features view by view, which approximately achieves 3D correlation computation in an efficient way. Third, we deploy the proposed transformer module in a transformer backbone, which can effectively detect the 2D regions surrounding brain lesions. Experimental results show that our proposed view-disentangled transformer performs well for brain lesion detection on a challenging brain MRI dataset.
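As a rough illustration of the view-by-view computation described above, the sketch below applies self-attention within the axial, coronal, and sagittal 2D views of a 3D feature volume instead of over all voxels at once. The module, shapes, and averaging fusion are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ViewWiseAttention(nn.Module):
    """Self-attention within 2D views of a 3D feature volume, as a cheap
    approximation of full 3D attention (illustrative only)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def _attend(self, x):                       # x: (batch, tokens, dim)
        out, _ = self.attn(x, x, x)
        return out

    def forward(self, feat):                    # feat: (B, D, H, W, C)
        B, D, H, W, C = feat.shape
        # axial view: tokens are the H*W positions of each slice
        axial = self._attend(feat.reshape(B * D, H * W, C)).reshape(B, D, H, W, C)
        # coronal view
        cor = feat.permute(0, 2, 1, 3, 4).reshape(B * H, D * W, C)
        cor = self._attend(cor).reshape(B, H, D, W, C).permute(0, 2, 1, 3, 4)
        # sagittal view
        sag = feat.permute(0, 3, 1, 2, 4).reshape(B * W, D * H, C)
        sag = self._attend(sag).reshape(B, W, D, H, C).permute(0, 2, 3, 1, 4)
        return (axial + cor + sag) / 3.0

x = torch.randn(1, 8, 16, 16, 32)               # toy (B, D, H, W, C) feature volume
print(ViewWiseAttention(dim=32)(x).shape)       # torch.Size([1, 8, 16, 16, 32])
```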
Causal discovery aims to learn a causal graph from observational data. To date, most causal discovery methods require data to be stored on a central server. However, data owners increasingly refuse to share their personalized data to avoid privacy leakage, making this task more troublesome by cutting off its first step. A puzzle arises: $\textit{how can we infer causal relations from decentralized data?}$ In this paper, under the additive noise model assumption on the data, we develop a gradient-based learning framework named DAG-Shared Federated Causal Discovery (DS-FCD), which can learn the causal graph without directly touching local data and naturally handles data heterogeneity. DS-FCD benefits from a two-level structure of each local model. The first level learns the causal graph and communicates with the server to obtain model information from other clients, while the second level approximates the causal mechanisms and is updated from its own data only, to accommodate data heterogeneity. Moreover, DS-FCD formulates the overall learning task as a continuous optimization problem by taking advantage of an equality acyclicity constraint, which can be naturally solved by gradient descent methods. Extensive experiments on both synthetic and real-world datasets verify the efficacy of the proposed method.
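For concreteness, the equality acyclicity constraint mentioned above is typically the trace-exponential characterization used in continuous structure learning (e.g., NOTEARS); the exact penalty used by DS-FCD may differ. For a weighted adjacency matrix $W \in \mathbb{R}^{d \times d}$:

```latex
h(W) \;=\; \operatorname{tr}\!\big(e^{\,W \circ W}\big) - d \;=\; 0
\quad\Longleftrightarrow\quad W \text{ corresponds to a DAG},
```

so structure learning can be posed as $\min_{W} \mathcal{L}(W;\text{data})$ subject to $h(W)=0$, which is amenable to gradient-based solvers (in practice via an augmented Lagrangian).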
Background: To quantitatively predict adolescents' spherical equivalent based on their variable-length historical vision records. Methods: From October 2019 to March 2022, we examined the binocular uncorrected visual acuity, axial length, corneal curvature, and spherical equivalent of 75,172 eyes from 37,586 adolescents aged 6-20 in Chengdu, China. 80% of the samples constituted the training set and the remaining 20% the test set. A Time-Aware Long Short-Term Memory model was used to quantitatively predict adolescents' spherical equivalent within two and a half years. Results: The mean absolute prediction error of spherical equivalent on the test set was 0.273-0.257; considering different lengths of historical records and different prediction durations, it ranged from 0.189-0.160 to 0.596-0.473. Conclusions: Time-Aware LSTM captures the temporal features of irregularly sampled time series, which better matches the characteristics of real data, gives it higher applicability, and helps identify the progression of myopia earlier. The overall error of 0.273 is much smaller than the criterion for clinically acceptable prediction, e.g., 0.75.
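One common way a time-aware LSTM accommodates irregular sampling (following the T-LSTM of Baytas et al.; the cell used in this paper may differ in detail) is to decompose the previous cell memory and discount only its short-term component by the elapsed time $\Delta t$ between examinations:

```latex
\begin{aligned}
C^{S}_{t-1} &= \tanh\!\big(W_d\,C_{t-1} + b_d\big), &
\hat{C}^{S}_{t-1} &= C^{S}_{t-1}\cdot g(\Delta t), &
g(\Delta t) &= \frac{1}{\log(e + \Delta t)}, \\
C^{*}_{t-1} &= \big(C_{t-1} - C^{S}_{t-1}\big) + \hat{C}^{S}_{t-1}, & &
\end{aligned}
```

after which the standard LSTM gate equations proceed from the adjusted memory $C^{*}_{t-1}$ rather than $C_{t-1}$.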
Self-supervised learning methods based on image patch reconstruction have achieved great success in training auto-encoders, whose pre-trained weights can be transferred to and fine-tuned on other downstream image-understanding tasks. However, when applied to 3D medical images, existing methods rarely consider the varying importance of the reconstructed patches or the symmetry of anatomical structures. In this paper, we propose a novel Attentive Symmetric Auto-encoder (ASA) based on the Vision Transformer (ViT) for 3D brain MRI segmentation tasks. We conjecture that forcing the auto-encoder to recover informative image regions can harvest more discriminative representations than recovering smooth image patches. We then adopt a gradient-based metric to estimate the importance of each image patch. In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches according to the gradient metric. Furthermore, we resort to the prior of brain structure and develop a Symmetric Position Encoding (SPE) method to better exploit the correlations between long-range but spatially symmetric regions and obtain effective features. Experimental results show that our proposed Attentive Symmetric Auto-encoder outperforms state-of-the-art self-supervised learning methods and medical image segmentation models on three brain MRI segmentation benchmarks.
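The gradient-based importance idea can be illustrated with a toy example: back-propagate the reconstruction loss to the input patches, rank patches by gradient magnitude, and up-weight the loss on the most informative ones. The MLP auto-encoder, the re-weighting scheme, and all sizes below are assumptions; the paper's ViT encoder and exact metric are not reproduced.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# toy auto-encoder over flattened 3D patches (stand-in for the ViT-based ASA)
patch_dim, n_patches = 64, 216
autoenc = nn.Sequential(nn.Linear(patch_dim, 32), nn.GELU(), nn.Linear(32, patch_dim))

patches = torch.randn(1, n_patches, patch_dim, requires_grad=True)
recon = autoenc(patches)
loss = nn.functional.mse_loss(recon, patches.detach())
loss.backward()

# gradient-based importance: patches whose reconstruction loss reacts most
# strongly to the input are treated as the informative ones
importance = patches.grad.abs().mean(dim=-1).squeeze(0)        # (n_patches,)
topk = torch.topk(importance, k=n_patches // 4).indices         # patches to emphasise

# in pre-training one could then up-weight the loss on these patches
# (the 2x factor below is an assumed re-weighting scheme)
weights = torch.ones(n_patches)
weights[topk] = 2.0
weighted_loss = (weights * (recon - patches).pow(2).mean(dim=-1).squeeze(0)).mean()
print(float(weighted_loss))
```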
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
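A schematic of the generative appearance replay loop described above: when a new unlabeled domain arrives, earlier domains are represented only by samples drawn from a generator, so no past data needs to be stored. Every component below (the "generator", the adaptation step, the toy domains) is a placeholder stand-in, not GarDA itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_generator(images):
    """Placeholder 'generator': remembers per-pixel mean/std of the seen appearance."""
    return images.mean(axis=0), images.std(axis=0) + 1e-6

def sample_generator(gen, n):
    mu, sigma = gen
    return rng.normal(mu, sigma, size=(n,) + mu.shape)

def adapt_segmenter(model, images):
    """Placeholder unsupervised adaptation step (e.g. pseudo-labels / alignment)."""
    return 0.9 * model + 0.1 * images.mean()

segmenter, generator = 0.0, None
domains = [rng.normal(loc=m, size=(32, 8, 8)) for m in (0.0, 1.0, 2.0)]  # toy domain stream

for t, new_domain in enumerate(domains):
    if generator is not None:
        replay = sample_generator(generator, n=32)          # appearance replay of old domains
        train_batch = np.concatenate([new_domain, replay])  # no stored past data required
    else:
        train_batch = new_domain
    segmenter = adapt_segmenter(segmenter, train_batch)
    generator = train_generator(train_batch)                # generator now covers all seen domains
    print(f"after domain {t}: segmenter stat = {segmenter:.3f}")
```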
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, suppressing graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
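The property-feature selection step mentioned above (keeping the 20 features with the greatest information gain) can be sketched with a standard mutual-information selector; the data below is synthetic and the exact procedure used for MGTAB may differ.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)

# synthetic stand-in for user property features and account labels
X = rng.normal(size=(1000, 40))          # 40 candidate user property features
y = (X[:, 3] + X[:, 7] > 0).astype(int)  # labels correlated with a few features

selector = SelectKBest(score_func=mutual_info_classif, k=20)
X_selected = selector.fit_transform(X, y)

top_idx = np.argsort(selector.scores_)[::-1][:20]
print("selected feature indices:", sorted(top_idx.tolist()))
print("reduced feature matrix:", X_selected.shape)   # (1000, 20)
```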
As one of the prevalent methods for building automation systems, Imitation Learning (IL) presents promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, the corresponding research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explanation framework for IL models called R2RISE. R2RISE aims to explain the overall policy performance with respect to the frames in demonstrations. It iteratively retrains the black-box IL model on randomly masked demonstrations and uses the conventional evaluation outcome, the environment return, as the coefficient to build an importance map. We also conducted experiments to investigate three major questions concerning the equality of frames' importance, the effectiveness of the importance map, and the connections between importance maps from different IL models. The results show that R2RISE successfully distinguishes the important frames in the demonstrations.
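A condensed sketch of this procedure: sample random frame masks, retrain and evaluate the black-box IL model on the masked demonstrations, and accumulate return-weighted masks into an importance map. Retraining and rollout are replaced by a trivial stand-in here, so this only illustrates the masking-and-aggregation logic, not R2RISE itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_masks, keep_prob = 100, 200, 0.5

# hidden per-frame "quality" used only by the stand-in evaluator
true_importance = np.zeros(n_frames)
true_importance[40:60] = 1.0

def retrain_and_evaluate(mask):
    """Stand-in for: retrain the black-box IL policy on masked demos,
    roll it out, and measure the environment return."""
    return float(true_importance @ mask) + rng.normal(scale=0.1)

importance = np.zeros(n_frames)
for _ in range(n_masks):
    mask = (rng.random(n_frames) < keep_prob).astype(float)   # random frame mask
    ret = retrain_and_evaluate(mask)
    importance += ret * mask                                   # return-weighted accumulation

importance /= n_masks * keep_prob                              # RISE-style normalization
print("top frames:", np.argsort(importance)[::-1][:5])
```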
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical for improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e., blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e., flickering and floating) on video quality. For the spatial artifacts, we propose a visual saliency model with low computational cost and higher consistency with human visual perception. For the temporal artifacts, the self-attention based TimeSFormer is improved to detect them. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
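A minimal illustration of saliency-aware pooling of per-artifact distortion maps into a single score. The artifact detectors, the saliency model, and the equal weights below are placeholder assumptions; SSTAM's actual components are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 72, 128

# per-pixel distortion maps for the six PEA types (placeholders for real detectors)
pea_types = ["blurring", "blocking", "bleeding", "ringing", "flickering", "floating"]
distortion = {name: rng.random((H, W)) for name in pea_types}

saliency = rng.random((H, W))                 # placeholder visual saliency map
saliency /= saliency.sum()

weights = {name: 1.0 / len(pea_types) for name in pea_types}   # assumed equal weighting

# saliency-weighted pooling of each artifact map, then weighted combination
score = sum(weights[n] * float((saliency * distortion[n]).sum()) for n in pea_types)
print(f"aggregate artifact score: {score:.4f}")   # higher = more perceivable artifacts
```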
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive the tractable reformulation for our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when one only seeks deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
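Using assumed notation (policy $\pi$, unknown reward distribution $P$ in a Wasserstein ball $\mathcal{B}_{\epsilon}(\hat{P})$ around the empirical estimate, weight $\lambda \in [0,1]$, risk level $\alpha$), the weighted mean-percentile objective described above can be written as follows; the paper's exact formulation and reformulation may differ:

```latex
\max_{\pi}\;\; \inf_{P \,\in\, \mathcal{B}_{\epsilon}(\hat{P})}
\Big[\, \lambda\, \mathbb{E}_{r\sim P}\!\big[R^{\pi}(r)\big]
\;+\; (1-\lambda)\, \mathrm{VaR}^{P}_{\alpha}\!\big(R^{\pi}(r)\big) \Big],
```

where $R^{\pi}(r)$ is the return of policy $\pi$ under reward realization $r$; $\lambda=1$ recovers a distributionally robust MDP and $\lambda=0$ a distributionally robust chance-constrained (percentile) criterion, matching the special cases noted above.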