A growing number of machine learning frameworks have recently made deep learning accessible to a wider audience of engineers, scientists, and practitioners by allowing straightforward use of complex neural network architectures and algorithms. However, since deep learning is rapidly evolving, not only through theoretical advances but also in hardware and software engineering, ML frameworks often lose backward compatibility and accumulate technical debt that can lead to bottlenecks and sub-optimal resource utilization. Moreover, the focus in most cases is not on deep learning engineering, but on new models and theoretical advances. In this work, by contrast, we focus on engineering, specifically on the data loading pipeline in the PyTorch framework. We designed a series of benchmarks that outline performance issues at certain steps of the data loading process. Our findings show that for classification tasks involving many files, such as images, training wall-time can be significantly improved. With our modified ConcurrentDataloader, we improve GPU utilization and significantly reduce batch loading time, by up to 12X. This enables the use of cloud-based, S3-like object storage for datasets while achieving training times comparable to storing the datasets on local drives.
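The core idea can be illustrated with a toy sketch (not the paper's ConcurrentDataloader itself): fetching the files of one batch with a thread pool so that per-file I/O latency overlaps instead of accumulating. The file paths, timings, and `load_sample` stub are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def load_sample(path):
    # Stand-in for an I/O-bound read (local disk or S3-like object storage);
    # the sleep simulates per-file fetch latency.
    time.sleep(0.01)
    return f"tensor({path})"

def load_batch_sequential(paths):
    # Baseline: fetch one file at a time, latencies add up.
    return [load_sample(p) for p in paths]

def load_batch_concurrent(paths, workers=8):
    # Fetch all samples of one batch in parallel threads, so per-file
    # latency overlaps instead of accumulating; map preserves order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(load_sample, paths))

paths = [f"s3://bucket/img_{i}.jpg" for i in range(32)]

t0 = time.perf_counter()
seq = load_batch_sequential(paths)
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
conc = load_batch_concurrent(paths)
t_conc = time.perf_counter() - t0

print(f"sequential {t_seq:.2f}s, concurrent {t_conc:.2f}s")
```

With 32 files at ~10 ms each, the sequential loop pays the full 0.32 s while the 8-worker pool finishes in roughly four waves.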
The synthesis of high-resolution remote sensing images from text descriptions has great potential in many practical application scenarios. Although deep neural networks have achieved great success in many important remote sensing tasks, generating realistic remote sensing images from text descriptions remains very difficult. To address this challenge, we propose a novel text-to-image modern Hopfield network (Txt2Img-MHN). The main idea of Txt2Img-MHN is to conduct hierarchical prototype learning on both text and image embeddings with modern Hopfield layers. Instead of directly learning concrete but highly diverse text-image joint feature representations, Txt2Img-MHN aims to learn the most representative prototypes from text-image embeddings, achieving a coarse-to-fine learning strategy. These learned prototypes can then be utilized to represent more complex semantics in the text-to-image generation task. To better evaluate the realism and semantic consistency of the generated images, we conduct zero-shot classification on real remote sensing data using classification models trained on the synthesized images. Despite its simplicity, we find that the overall accuracy of zero-shot classification can serve as a good metric for evaluating the ability to generate images from text. Extensive experiments on a benchmark remote sensing text-image dataset demonstrate that the proposed Txt2Img-MHN can generate more realistic remote sensing images than existing methods. Code and pre-trained models are available online (https://github.com/yonghaoxu/txt2img-mhn).
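As a hedged illustration of the building block involved (not the Txt2Img-MHN code), a single retrieval step of a modern continuous Hopfield network can be sketched in NumPy: a query attends over stored patterns and is replaced by their softmax-weighted combination, so a corrupted input is pulled back toward its stored prototype. The dimensions, `beta`, and the random patterns are illustrative assumptions.

```python
import numpy as np

def hopfield_retrieve(stored, query, beta=8.0):
    # One update step of a modern (continuous) Hopfield network:
    # score every stored pattern against the query, then return the
    # softmax-weighted combination of the stored patterns.
    scores = beta * stored @ query            # (N,) similarity scores
    scores -= scores.max()                    # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return stored.T @ weights                 # retrieved pattern

rng = np.random.default_rng(0)
stored = rng.normal(size=(16, 32))            # 16 stored prototypes, dim 32
stored /= np.linalg.norm(stored, axis=1, keepdims=True)

noisy = stored[3] + 0.1 * rng.normal(size=32) # corrupted copy of pattern 3
retrieved = hopfield_retrieve(stored, noisy)
retrieved /= np.linalg.norm(retrieved)
print("cosine to pattern 3:", retrieved @ stored[3])
```

With a sufficiently large `beta`, the softmax concentrates on the closest stored pattern and retrieval becomes nearly exact.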
This study introduces \textit{Landslide4Sense}, a reference benchmark for landslide detection from remote sensing. The repository features 3,799 image patches fusing optical layers from the Sentinel-2 sensor with a digital elevation model and slope layer derived from ALOS PALSAR. The added topographical information facilitates accurate detection of landslide borders, which recent research has shown to be challenging using optical data alone. The extensive dataset supports deep learning (DL) studies in landslide detection, as well as the development and validation of methods for the systematic update of landslide inventories. The benchmark data were collected at four distinct times and geographic locations: Iburi (September 2018), Kodagu (August 2018), Gorkha (April 2015), and Taiwan (August 2009). Every image pixel is labeled as belonging to a landslide or not, incorporating various sources and thorough manual annotation. We then evaluate the landslide detection performance of 11 state-of-the-art DL segmentation models: U-Net, ResU-Net, PSPNet, ContextNet, DeepLab-v2, DeepLab-v3+, FCN-8s, LinkNet, FRRN-A, FRRN-B, and SQNet. All models were trained from scratch on patches from one quarter of each study area and tested on independent patches from the other three quarters. Our experiments demonstrate that ResU-Net outperforms the other models for the landslide detection task. We make the multi-source landslide benchmark data (Landslide4Sense) and the tested DL models publicly available at \url{www.landslide4sense.org}, establishing an important resource for the remote sensing, computer vision, and machine learning communities in general, and for applications to landslide detection in particular.
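The evaluation protocol (train on one quarter of each study area's patches, test on the remaining three quarters) can be sketched as follows. The random per-area split and the patch counts are illustrative assumptions; the benchmark's actual split may be defined spatially rather than at random.

```python
import numpy as np

def quarter_split(patch_ids, seed=0):
    # For one study area: a quarter of the patches for training,
    # the remaining three quarters held out for testing.
    rng = np.random.default_rng(seed)
    ids = rng.permutation(patch_ids)
    cut = len(ids) // 4
    return ids[:cut], ids[cut:]

# Hypothetical per-area patch-ID ranges summing to 3,799 patches.
areas = {
    "Iburi": np.arange(0, 1000),
    "Kodagu": np.arange(1000, 1900),
    "Gorkha": np.arange(1900, 2900),
    "Taiwan": np.arange(2900, 3799),
}
splits = {name: quarter_split(ids) for name, ids in areas.items()}
```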
CLIP yielded impressive results on zero-shot transfer learning tasks and is considered a foundation model like BERT or GPT-3. CLIP vision models, which have rich representations, are pre-trained using the InfoNCE objective and natural language supervision before being fine-tuned on particular tasks. Although CLIP excels at zero-shot transfer learning, it suffers from an explaining-away problem, that is, it focuses on one or a few features while neglecting other relevant features. This problem is caused by insufficiently extracting the covariance structure of the original multi-modal data. We suggest using modern Hopfield networks to tackle the explaining-away problem. Their retrieved embeddings have an enriched covariance structure derived from co-occurrences of features in the stored embeddings. However, modern Hopfield networks increase the saturation effect of the InfoNCE objective, which hampers learning. We propose to use the InfoLOOB objective to mitigate this saturation effect. We introduce the novel "Contrastive Leave One Out Boost" (CLOOB), which uses modern Hopfield networks for covariance enrichment together with the InfoLOOB objective. In experiments, we compare CLOOB to CLIP after pre-training on the Conceptual Captions and YFCC datasets with respect to their zero-shot transfer learning performance on other datasets. CLOOB consistently outperforms CLIP at zero-shot transfer learning across all considered architectures and datasets.
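The difference between the two objectives can be sketched with toy pairwise similarities (an illustrative assumption, not the CLOOB implementation): InfoLOOB is InfoNCE with the positive pair left out of the denominator, which removes the saturation of the bound as the positive score grows.

```python
import numpy as np

def info_nce(sim, tau=0.1):
    # sim[i, j]: similarity of anchor i to candidate j; diagonal = positives.
    logits = sim / tau
    logits = logits - logits.max(axis=1, keepdims=True)  # stability
    exp = np.exp(logits)
    pos = np.diag(exp)
    return -np.log(pos / exp.sum(axis=1)).mean()

def info_loob(sim, tau=0.1):
    # Same as InfoNCE, but the positive pair is "left out" of the
    # denominator: only negatives appear, so the objective does not
    # saturate when positives dominate.
    logits = sim / tau
    logits = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(logits)
    pos = np.diag(exp)
    neg = exp.sum(axis=1) - pos
    return -np.log(pos / neg).mean()

rng = np.random.default_rng(0)
sim = rng.uniform(-0.2, 0.2, size=(8, 8))
np.fill_diagonal(sim, 0.9)                  # well-aligned positive pairs
print("InfoNCE :", info_nce(sim))
print("InfoLOOB:", info_loob(sim))
```

Because the InfoLOOB denominator is strictly smaller, its loss is always below the InfoNCE loss on the same similarities, and it keeps producing gradient signal where InfoNCE has flattened near zero.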
Dataset distillation has emerged as a prominent technique for improving data efficiency when training machine learning models. It encapsulates the knowledge of a large dataset in a smaller synthetic dataset; a model trained on this distilled dataset can attain performance comparable to that of a model trained on the original training dataset. However, existing dataset distillation techniques mainly aim at achieving the best trade-off between resource-usage efficiency and model utility, and the security risks stemming from them have not been explored. This study performs the first backdoor attack against models trained on data distilled by dataset distillation in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent them.
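The NAIVEATTACK idea of stamping a trigger into raw data before distillation can be illustrated with a minimal sketch; the helper names, trigger shape, poison rate, and random data are all illustrative assumptions, not the paper's code.

```python
import numpy as np

def add_trigger(images, trigger_size=3, value=1.0):
    # Stamp a small square trigger into the bottom-right corner of each
    # image -- the "add trigger to raw data" step.
    poisoned = images.copy()
    poisoned[:, -trigger_size:, -trigger_size:] = value
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    # Poison a random fraction of the raw dataset: stamp the trigger
    # and relabel those samples to the attacker's target class.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images = images.copy()
    labels = labels.copy()
    images[idx] = add_trigger(images[idx])
    labels[idx] = target_label
    return images, labels, idx

rng = np.random.default_rng(1)
imgs = rng.uniform(0, 1, size=(100, 8, 8)).astype(np.float32)
lbls = rng.integers(0, 10, size=100)
p_imgs, p_lbls, idx = poison_dataset(imgs, lbls, target_label=7, rate=0.1)
```

In the attack setting, this poisoned raw dataset would then be fed to the distillation procedure, so the trigger-to-target association is baked into the synthetic data; DOORPING would instead keep re-optimizing the trigger throughout distillation.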
We present a dynamic path planning algorithm to navigate an amphibious rotorcraft through a concave, time-invariant obstacle field while attempting to minimize energy usage. We create a nonlinear quaternion state model that represents the rotorcraft dynamics above and below the water. The 6-degree-of-freedom dynamics are used within a layered architecture to generate motion paths for the vehicle to follow, together with the required control inputs. The rotorcraft has a 3-dimensional map of its surroundings that is updated via limited-range onboard sensor readings within the current medium (air or water). Path planning is done via PRM and D* Lite.
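A minimal 2-D PRM sketch conveys the sampling-based planning component (the paper plans in 3-D and pairs PRM with D* Lite for replanning; here Dijkstra stands in for the query phase, and the workspace, obstacle field, and parameters are illustrative assumptions).

```python
import heapq
import numpy as np

def collision_free(p, q, obstacles, steps=20):
    # Check segment p->q against circular obstacles by dense sampling.
    for t in np.linspace(0.0, 1.0, steps):
        x = p + t * (q - p)
        for c, r in obstacles:
            if np.linalg.norm(x - c) <= r:
                return False
    return True

def prm(start, goal, obstacles, n_samples=200, k=8, seed=0):
    rng = np.random.default_rng(seed)
    # Node 0 is the start, node 1 the goal; remaining nodes are random
    # collision-free samples in a 10x10 workspace.
    pts = [np.asarray(start, float), np.asarray(goal, float)]
    while len(pts) < n_samples:
        p = rng.uniform(0, 10, size=2)
        if all(np.linalg.norm(p - c) > r for c, r in obstacles):
            pts.append(p)
    # Connect each node to its k nearest collision-free neighbours.
    edges = {i: [] for i in range(len(pts))}
    for i, p in enumerate(pts):
        order = sorted(range(len(pts)), key=lambda j: np.linalg.norm(p - pts[j]))
        for j in order[1:k + 1]:
            if collision_free(p, pts[j], obstacles):
                w = float(np.linalg.norm(p - pts[j]))
                edges[i].append((j, w))
                edges[j].append((i, w))
    # Dijkstra query from start (node 0) to goal (node 1).
    dist, prev, pq = {0: 0.0}, {}, [(0.0, 0)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == 1:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    if 1 not in dist:
        return None
    path, node = [], 1
    while True:
        path.append(pts[node])
        if node == 0:
            return path[::-1]
        node = prev[node]

obstacles = [(np.array([5.0, 5.0]), 1.5)]
path = prm((1.0, 1.0), (9.0, 9.0), obstacles)
```

D* Lite would replace the one-shot Dijkstra query when the obstacle map is updated incrementally from onboard sensing, repairing only the affected portion of the search.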
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human operators to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and the need for fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, and cardinality. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
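Parallel decoding can be illustrated with a toy sketch (illustrative only, not Muse's confidence schedule or model): start from fully masked tokens and, at each step, commit the most confident predictions for the still-masked positions, so the whole sequence is produced in a few parallel steps instead of one token at a time.

```python
import numpy as np

MASK = -1

def parallel_decode(probs_fn, n_tokens, n_vocab, steps=4):
    # Iterative parallel decoding: begin fully masked; each step commits
    # the highest-confidence predictions among still-masked positions,
    # unmasking a growing fraction of the sequence per step.
    tokens = np.full(n_tokens, MASK)
    for step in range(steps):
        probs = probs_fn(tokens)                # (n_tokens, n_vocab)
        preds = probs.argmax(axis=1)
        conf = probs.max(axis=1)
        conf[tokens != MASK] = -np.inf          # already-committed slots
        target = int(np.ceil((step + 1) / steps * n_tokens))
        n_keep = target - int((tokens != MASK).sum())
        for i in np.argsort(conf)[::-1][:n_keep]:
            tokens[i] = preds[i]
    return tokens

def toy_model(tokens, n_vocab=16, seed=0):
    # Stand-in for the image-token Transformer: fixed random
    # per-position distributions, ignoring the conditioning.
    rng = np.random.default_rng(seed)
    logits = rng.normal(size=(len(tokens), n_vocab))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

out = parallel_decode(toy_model, n_tokens=8, n_vocab=16)
```

An autoregressive decoder would need `n_tokens` sequential model calls; this schedule needs only `steps` calls, which is the source of the efficiency gain the abstract describes.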
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.
Logic Mill is a scalable and openly accessible software system that identifies semantically similar documents within either one domain-specific corpus or multi-domain corpora. It uses advanced Natural Language Processing (NLP) techniques to generate numerical representations of documents. Currently it leverages a large pre-trained language model to generate these document representations. The system focuses on scientific publications and patent documents and contains more than 200 million documents. It is easily accessible via a simple Application Programming Interface (API) or via a web interface. Moreover, it is continuously being updated and can be extended to text corpora from other domains. We see this system as a general-purpose tool for future research applications in the social sciences and other domains.
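The core retrieval idea (embed documents numerically, rank by cosine similarity) can be sketched with a bag-of-words stand-in for the pre-trained language model; the corpus, vocabulary, and helper names here are illustrative assumptions, not Logic Mill's API.

```python
import numpy as np

def embed(doc, vocab):
    # Stand-in for the pre-trained language model: a unit-normalised
    # bag-of-words vector over a fixed vocabulary.
    v = np.array([doc.lower().split().count(w) for w in vocab], float)
    n = np.linalg.norm(v)
    return v / n if n else v

def most_similar(query, corpus, vocab):
    # Rank every corpus document by cosine similarity to the query.
    q = embed(query, vocab)
    sims = [float(q @ embed(d, vocab)) for d in corpus]
    order = np.argsort(sims)[::-1]
    return [(corpus[i], sims[i]) for i in order]

corpus = [
    "a method for training deep neural networks on patent text",
    "a chemical process for producing synthetic rubber",
    "neural network embeddings of scientific publications",
]
vocab = sorted({w for d in corpus for w in d.split()})
ranked = most_similar("neural networks for patents", corpus, vocab)
```

A production system like the one described would swap the bag-of-words stand-in for dense language-model embeddings and an approximate-nearest-neighbour index to scale to hundreds of millions of documents.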