With the growing demand for deploying a team of robots to perform tasks collaboratively, the research community has become increasingly interested in collaborative simultaneous localization and mapping (SLAM). Unfortunately, existing datasets are limited in the scale and variation of their collaborative trajectories, even though generalization across the trajectories of different agents is crucial to the overall viability of collaborative tasks. To help align the research community's contributions with realistic multi-agent coordinated SLAM problems, we propose S3E, a large-scale multimodal dataset captured by a fleet of unmanned ground vehicles along four designed collaborative trajectory paradigms. S3E consists of 7 outdoor and 5 indoor sequences, each exceeding 200 seconds, of temporally synchronized and spatially calibrated high-frequency IMU, high-quality stereo camera, and 360-degree LiDAR data. Crucially, our effort exceeds previous attempts in dataset size, scene variability, and complexity, with 4x the average recording time of the pioneering EuRoC dataset. We also provide a careful dataset analysis, as well as baselines for both collaborative SLAM and its single-robot counterpart. Data and up-to-date details can be found at https://github.com/PengYu-Team/S3E.
3D scene photorealistic stylization aims to generate photorealistic images from arbitrary novel views according to a given style image, while ensuring consistency when rendering from different viewpoints. Some existing stylization methods based on neural radiance fields can effectively predict stylized scenes by combining the features of the style image with multi-view images when training the 3D scene. However, these methods generate novel view images that contain objectionable artifacts. Moreover, they cannot achieve universal photorealistic stylization of a 3D scene: the 3D scene representation network must be retrained for each new style image. We propose a novel 3D scene photorealistic style transfer framework to address these issues. It achieves photorealistic 3D scene style transfer from a single 2D style image. We first pre-train a 2D photorealistic style transfer network, which can perform photorealistic style transfer between any given content image and style image. Then we use voxel features to optimize the 3D scene and obtain a geometric representation of the scene. Finally, we jointly optimize a hypernetwork to achieve photorealistic style transfer of the scene for arbitrary style images. In the transfer stage, we use the pre-trained 2D photorealistic network to constrain the photorealistic style across different views and different style images in the 3D scene. Experimental results show that our method not only achieves 3D photorealistic style transfer for arbitrary style images, but also outperforms existing methods in terms of visual quality and consistency. Project page: https://semchan.github.io/upst_nerf.
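To make the hypernetwork idea concrete, here is a minimal PyTorch sketch of one plausible design: a hypernetwork maps a style embedding to the weights of a small color MLP that is applied to sampled voxel features. All names, dimensions, and the two-layer head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StyleHyperNetwork(nn.Module):
    """Predicts the weights of a small color MLP from a style embedding (sketch)."""
    def __init__(self, style_dim=256, feat_dim=32, hidden=64):
        super().__init__()
        self.feat_dim, self.hidden = feat_dim, hidden
        # Parameter count of a two-layer color MLP: feat -> hidden -> rgb.
        n_params = (feat_dim * hidden + hidden) + (hidden * 3 + 3)
        self.mlp = nn.Sequential(
            nn.Linear(style_dim, 512), nn.ReLU(),
            nn.Linear(512, n_params),
        )

    def forward(self, style_emb, voxel_feat):
        # style_emb: (style_dim,); voxel_feat: (N, feat_dim) sampled along rays.
        p = self.mlp(style_emb)
        f, h = self.feat_dim, self.hidden
        w1, p = p[: f * h].view(h, f), p[f * h:]
        b1, p = p[:h], p[h:]
        w2, b2 = p[: h * 3].view(3, h), p[h * 3:]
        x = torch.relu(voxel_feat @ w1.t() + b1)
        return torch.sigmoid(x @ w2.t() + b2)   # per-sample RGB in [0, 1]
```

Because the geometry branch stays fixed, only this style-conditioned color head changes per style image, which is what lets a single trained scene support arbitrary styles.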
Deep learning methodology has contributed greatly to the development of the hyperspectral image (HSI) analysis community. However, it also makes HSI analysis systems vulnerable to adversarial attacks. To this end, we propose a masked spatial-spectral autoencoder (MSSA), grounded in self-supervised learning theory, to enhance the robustness of HSI analysis systems. First, a masked sequence attention learning module is designed to promote the inherent robustness of HSI analysis systems along the spectral channel. Then, we develop a graph convolutional network with a learnable graph structure to establish global pixel-wise combinations. In this way, the effect of an attack can be dispersed across all the related pixels in each combination, achieving better defense performance in the spatial aspect. Finally, to improve defense capability and address the problem of limited labeled samples, MSSA employs spectral reconstruction as a pretext task and fits the datasets in a self-supervised manner. Comprehensive comparisons against state-of-the-art hyperspectral classification methods and representative adversarial defense strategies verify the effectiveness of MSSA.
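The spectral-reconstruction pretext task can be illustrated with a toy PyTorch sketch: random bands of each pixel's spectrum are masked, and a transformer encoder is trained to reconstruct them. The architecture and hyperparameters below are assumptions for illustration, not the MSSA implementation.

```python
import torch
import torch.nn as nn

class MaskedSpectralAutoencoder(nn.Module):
    """Toy sketch: mask random spectral bands of each pixel and reconstruct them."""
    def __init__(self, n_bands=200, d_model=64, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(1, d_model)            # one token per spectral band
        self.pos = nn.Parameter(torch.zeros(n_bands, d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, spectra):                       # spectra: (B, n_bands)
        B, n = spectra.shape
        tokens = self.embed(spectra.unsqueeze(-1)) + self.pos
        mask = torch.rand(B, n, device=spectra.device) < self.mask_ratio
        tokens = tokens.masked_fill(mask.unsqueeze(-1), 0.0)  # zero out masked bands
        recon = self.head(self.encoder(tokens)).squeeze(-1)
        return ((recon - spectra) ** 2)[mask].mean()  # loss only on masked bands
```

Training on this objective requires no labels, which is how the pretext task sidesteps the limited-labeled-samples problem mentioned above.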
Fast and reliable connectivity is essential to enhancing situational awareness and operational efficiency for public safety mission-critical (MC) users. In emergency or disaster circumstances, where existing cellular network coverage and capacity may be unable to meet MC communication demands, deployable-network-based solutions such as cells on wheels/wings can be utilized swiftly to ensure reliable connectivity for MC users. In this paper, we consider a scenario in which a macro base station (BS) is destroyed by a natural disaster and an unmanned aerial vehicle carrying a BS (UAV-BS) is set up to provide temporary coverage for users in the disaster area. The UAV-BS is integrated into the mobile network using 5G integrated access and backhaul (IAB) technology. We propose a framework and signaling procedure for applying machine learning to this use case. A deep reinforcement learning algorithm is designed to jointly optimize the access and backhaul antenna tilts as well as the three-dimensional position of the UAV-BS, in order to best serve the on-ground MC users while maintaining a good backhaul connection. Our results show that the proposed algorithm can autonomously navigate and configure the UAV-BS to improve throughput and reduce the drop rate for MC users.
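As a rough illustration of the joint state-action space, the sketch below defines a toy environment whose state couples the UAV-BS position with the two antenna tilts, and whose actions nudge each variable. The reward terms are placeholders only, since the paper's radio propagation and throughput models are not reproduced here.

```python
import numpy as np

class UavBsEnv:
    """Toy sketch: state = UAV 3-D position plus access/backhaul antenna tilts."""
    DELTAS = [-1.0, 0.0, 1.0]

    def __init__(self):
        self.pos = np.array([0.0, 0.0, 50.0])    # x, y, altitude (m)
        self.tilt_access, self.tilt_backhaul = 0.0, 0.0

    def step(self, action):
        dx, dy, dz, da, db = action               # five discrete sub-actions
        self.pos += np.array([self.DELTAS[dx], self.DELTAS[dy], self.DELTAS[dz]])
        self.tilt_access += self.DELTAS[da]
        self.tilt_backhaul += self.DELTAS[db]
        # Placeholder reward: weighted sum of (hypothetical) user throughput and
        # backhaul quality; a real system would use radio measurements instead.
        reward = self._user_throughput() + 0.5 * self._backhaul_quality()
        state = np.concatenate([self.pos, [self.tilt_access, self.tilt_backhaul]])
        return state, reward

    def _user_throughput(self):
        return -np.linalg.norm(self.pos[:2])      # stand-in: closer to users is better

    def _backhaul_quality(self):
        return -abs(self.tilt_backhaul - 10.0)    # stand-in: prefer ~10 degree tilt
```

Any standard deep RL agent (e.g., a DQN over the 3^5 joint discrete actions) could then be trained against such an environment.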
Knowledge graphs are typically incorporated into recommender systems to improve overall performance. Due to the generality and scale of a knowledge graph, most of its relations are not helpful for a given target user-item prediction. To exploit the knowledge graph for capturing target-specific knowledge relations in recommender systems, we need to distill the knowledge graph to retain the useful information and refine the knowledge to capture user preferences. To address this issue, we propose the Knowledge-aware Conditional Attention Network (KCAN), an end-to-end model that incorporates knowledge graphs into recommender systems. Specifically, we first use knowledge-aware attention propagation to obtain node representations, which captures global semantic similarity over the user-item network and the knowledge graph. Then, given a target, i.e., a user-item pair, we automatically distill the knowledge graph into a target-specific subgraph based on knowledge-aware attention. Subsequently, by applying conditional attention aggregation on the subgraph, we refine the knowledge graph to obtain target-specific node representations. Therefore, we gain both representability and personalization, improving overall performance. Experimental results on real-world datasets demonstrate the effectiveness of our framework over state-of-the-art algorithms.
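One simplified reading of the conditional attention step is sketched below in PyTorch: a node aggregates its subgraph neighbors with attention scores conditioned on the target user-item pair. The layer is illustrative only; KCAN's actual propagation and distillation details differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalAttentionLayer(nn.Module):
    """Sketch: neighbor aggregation with attention conditioned on the target pair."""
    def __init__(self, dim=64):
        super().__init__()
        self.score = nn.Linear(3 * dim, 1)

    def forward(self, node, neighbors, target):
        # node: (dim,); neighbors: (K, dim); target: (dim,), e.g. user_emb * item_emb
        k = neighbors.size(0)
        ctx = torch.cat(
            [node.expand(k, -1), neighbors, target.expand(k, -1)], dim=-1
        )
        alpha = F.softmax(self.score(ctx).squeeze(-1), dim=0)    # (K,) weights
        return node + (alpha.unsqueeze(-1) * neighbors).sum(0)   # conditioned update
```

Because the attention weights depend on `target`, the same subgraph yields different refined representations for different user-item pairs, which is the personalization effect described above.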
The ability to jointly learn from multiple modalities, such as text, audio, and visual data, is a defining feature of intelligent systems. While there have been promising advances in designing neural networks to harness multimodal data, the enormous success of data augmentation currently remains limited to single-modality tasks like image classification. Indeed, it is particularly difficult to augment each modality while preserving the overall semantic structure of the data; for example, a caption may no longer be a good description of an image after standard augmentations have been applied, such as translation. Moreover, it is challenging to specify reasonable transformations that are not tailored to a particular modality. In this paper, we introduce LeMDA, Learning Multimodal Data Augmentation, an easy-to-use method that automatically learns to jointly augment multimodal data in feature space, with no constraints on the identities of the modalities or the relationship between modalities. We show that LeMDA can (1) profoundly improve the performance of multimodal deep learning architectures, (2) apply to combinations of modalities that have not been previously considered, and (3) achieve state-of-the-art results on a wide range of applications comprising image, text, and tabular data.
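A minimal sketch of the general recipe, assuming PyTorch and hypothetical `task_net`, `loss_fn`, and `opt_task` objects: a small network perturbs each modality's feature vector, and the task network trains on both original and augmented features. The augmenter's own training objective (e.g., consistency and diversity terms) is omitted for brevity and would differ in LeMDA itself.

```python
import torch
import torch.nn as nn

class FeatureAugmenter(nn.Module):
    """Sketch: learn to perturb each modality's feature vector in feature space."""
    def __init__(self, dims):                        # dims: feature size per modality
        super().__init__()
        self.nets = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d)) for d in dims]
        )

    def forward(self, feats):                        # feats: list of (B, d_i) tensors
        return [f + net(f) for f, net in zip(feats, self.nets)]

def train_step(task_net, augmenter, feats, labels, loss_fn, opt_task):
    """Train the task network on original plus augmented features (sketch)."""
    aug_feats = [f.detach() for f in augmenter(feats)]   # no gradient to augmenter here
    loss = loss_fn(task_net(feats), labels) + loss_fn(task_net(aug_feats), labels)
    opt_task.zero_grad()
    loss.backward()
    opt_task.step()
    return loss.item()
```

Working in feature space is what makes the method modality-agnostic: the augmenter never needs to know whether a vector came from an image, a sentence, or a table row.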
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Multimodal image-text models have shown remarkable performance in the past few years. However, evaluating their robustness against distribution shifts is crucial before adopting them in real-world applications. In this paper, we investigate the robustness of 9 popular open-source image-text models under common perturbations on five tasks (image-text retrieval, visual reasoning, visual entailment, image captioning, and text-to-image generation). In particular, we propose several new multimodal robustness benchmarks by applying 17 image perturbation and 16 text perturbation techniques on top of existing datasets. We observe that multimodal models are not robust to image and text perturbations, especially to image perturbations. Among the tested perturbation methods, character-level perturbations constitute the most severe distribution shift for text, and zoom blur is the most severe shift for image data. We also introduce two new robustness metrics (MMI and MOR) for the proper evaluation of multimodal models. We hope our extensive study sheds light on new directions for the development of robust multimodal models.
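A benchmark harness in this style reduces to a small sketch: run the evaluation once per perturbation and report the performance retained relative to the clean run. The helper below is a generic stand-in, with a hypothetical `evaluate` callable and perturbation dictionary; it does not implement the paper's MMI or MOR definitions.

```python
from typing import Callable, Dict

def robustness_report(
    evaluate: Callable[[Callable], float],      # runs the benchmark with a transform
    perturbations: Dict[str, Callable],         # name -> image or text transform
) -> Dict[str, float]:
    """Fraction of clean performance retained under each perturbation (sketch)."""
    clean = evaluate(lambda x: x)               # identity transform = clean run
    if clean == 0:
        raise ValueError("clean performance must be non-zero")
    return {name: evaluate(p) / clean for name, p in perturbations.items()}
```

Normalizing by the clean score makes drops comparable across tasks with very different metric scales, e.g., retrieval recall versus captioning CIDEr.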
Real-time monocular 3D reconstruction is a challenging problem that remains unsolved. Although recent end-to-end methods have demonstrated promising results, tiny structures and geometric boundaries are hardly captured due to their insufficient supervision neglecting spatial details and oversimplified feature fusion ignoring temporal cues. To address these problems, we propose an end-to-end 3D reconstruction network SST, which utilizes Sparse estimated points from a visual SLAM system as additional Spatial guidance and fuses Temporal features via a novel cross-modal attention mechanism, achieving more detailed reconstruction results. We propose a Local Spatial-Temporal Fusion module to exploit more informative spatial-temporal cues from multi-view color information and sparse priors, as well as a Global Spatial-Temporal Fusion module to refine the local TSDF volumes with the world-frame model from coarse to fine. Extensive experiments on ScanNet and 7-Scenes demonstrate that SST outperforms all state-of-the-art competitors, whilst keeping a high inference speed at 59 FPS, enabling real-world applications with real-time requirements.
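The cross-modal attention can be pictured as image feature tokens attending to embeddings of the sparse SLAM points. The PyTorch layer below is one simplified reading of that idea, with illustrative dimensions rather than SST's actual configuration.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Sketch: image tokens attend to embeddings of sparse SLAM points."""
    def __init__(self, d_model=128, n_heads=4):
        super().__init__()
        self.point_embed = nn.Linear(3, d_model)         # (x, y, z) -> token
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, img_tokens, sparse_points):
        # img_tokens: (B, N, d_model); sparse_points: (B, M, 3) from visual SLAM
        pts = self.point_embed(sparse_points)
        fused, _ = self.attn(query=img_tokens, key=pts, value=pts)
        return self.norm(img_tokens + fused)             # residual fusion
```

The residual connection keeps the image features intact where no sparse point is informative, so the sparse prior acts as guidance rather than a replacement signal.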
Image-text retrieval in remote sensing aims to provide flexible information for data analysis and application. In recent years, state-of-the-art methods have pursued "scale decoupling" and "semantic decoupling" strategies to further enhance representation capability. However, these previous approaches focus on disentangling either scale or semantics, but ignore merging the two ideas in a unified model, which severely limits the performance of cross-modal retrieval models. To address these issues, we propose a novel Scale-Semantic Joint Decoupling Network (SSJDN) for remote sensing image-text retrieval. Specifically, we design the Bidirectional Scale Decoupling (BSD) module, which exploits Salience Feature Extraction (SFE) and Salience-Guided Suppression (SGS) units to adaptively extract potential features and suppress cumbersome features at other scales in a bidirectional pattern, yielding different scale clues. Besides, we design the Label-supervised Semantic Decoupling (LSD) module by leveraging the category semantic labels as prior knowledge to supervise images and texts in probing significant semantic-related information. Finally, we design a Semantic-guided Triplet Loss (STL), which adaptively generates a constant to adjust the loss function, improving the probability of matching semantically identical image-text pairs and shortening the convergence time of the retrieval model. Our proposed SSJDN outperforms state-of-the-art approaches in numerical experiments conducted on four benchmark remote sensing datasets.
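The idea of a loss that adapts to semantics can be sketched as a triplet loss whose margin is scaled by a semantic-similarity signal. The function below is illustrative only; the adaptively generated constant in the actual STL may enter the loss differently.

```python
import torch.nn.functional as F

def semantic_guided_triplet_loss(anchor, positive, negative,
                                 sem_sim, base_margin=0.2):
    """Sketch: triplet loss with a margin scaled by a semantic signal.
    'sem_sim' (in [0, 1]) is a stand-in for the adaptively generated constant
    described in the abstract; the paper's exact formulation may differ."""
    margin = base_margin * (1.0 + sem_sim)               # semantics widen the margin
    d_pos = 1.0 - F.cosine_similarity(anchor, positive)  # distance to matching text
    d_neg = 1.0 - F.cosine_similarity(anchor, negative)  # distance to non-matching text
    return F.relu(d_pos - d_neg + margin).mean()
```

Scaling the margin by semantic agreement pushes semantically identical image-text pairs together harder than generic triplet mining would, which is consistent with the faster convergence claimed above.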