Image colorization is a well-known problem in computer vision. However, due to the ill-posed nature of the task, image colorization is inherently challenging. Although researchers have made several attempts to automate the colorization pipeline, these processes often produce unrealistic results due to a lack of conditioning. In this work, we attempt to integrate textual descriptions, alongside the grayscale image to be colorized, as an auxiliary condition to improve the fidelity of the colorization process. To the best of our knowledge, this is one of the first attempts to incorporate textual conditioning into the colorization pipeline. To this end, we propose a novel deep network that takes two inputs (the grayscale image and the corresponding encoded text description) and attempts to predict the relevant color gamut. Since the respective text descriptions contain color information about the objects present in the scene, the text encoding helps improve the overall quality of the predicted colors. We have evaluated our proposed model using different metrics and found that it outperforms state-of-the-art colorization algorithms both qualitatively and quantitatively.
In computer vision, human pose synthesis and transfer deal with the probabilistic generation of images of a person in a previously unseen pose. Although researchers have recently proposed several methods to achieve this task, most of these techniques derive the target pose directly from a desired target image in a specific dataset, which makes the underlying process difficult to apply in real-world scenarios where generating the target image is the actual aim. In this paper, we first present the shortcomings of current pose transfer algorithms and then propose a novel text-based pose transfer technique to address these issues. We divide the problem into three independent stages: (a) text-to-pose representation, (b) pose refinement, and (c) pose rendering. To the best of our knowledge, this is one of the first attempts to develop a text-based pose transfer framework, and we also introduce a new dataset, DF-PASS, by adding descriptive pose annotations to the images of the DeepFashion dataset. The proposed method produces promising results with significant qualitative and quantitative scores in our experiments.
The size of the receptive field (RF) has always been one of the most important factors for one-dimensional convolutional neural networks (1D-CNNs) in time series classification tasks. Large efforts have been devoted to selecting an appropriate RF size, since it has a huge influence on performance and varies greatly across datasets. In this paper, we propose an Omni-Scale block (OS-block) for 1D-CNNs, in which the kernel sizes are decided by a simple and universal rule. Specifically, it is a set of kernel sizes composed of multiple prime numbers according to the length of the time series, which can efficiently cover the best RF size across different datasets. Experimental results show that models with the OS-block can achieve performance similar to that of models with searched optimal RF sizes, and, thanks to this ability to capture the optimal RF size, a simple 1D-CNN model with the OS-block achieves state-of-the-art performance on four time series benchmarks, including univariate and multivariate data from multiple domains. Comprehensive analysis and discussion shed light on why the OS-block can capture the optimal RF size across different datasets. Code is available at [https://github.com/wensi-tang/os-cnn]
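The prime-number kernel-size rule can be illustrated with a minimal sketch. Note this is only an illustration of the idea of composing kernel sizes from primes bounded by the series length; the bound used here (a quarter of the series length, with a floor of 3) is an assumption, not necessarily the paper's exact rule.

```python
def primes_up_to(n):
    """Return all primes <= n via a simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def os_block_kernel_sizes(series_length, bound_fn=lambda L: max(3, L // 4)):
    """Illustrative OS-block kernel-size set: 1, 2, and the odd primes up to
    a bound derived from the time-series length (assumed bound, for
    illustration only)."""
    bound = bound_fn(series_length)
    return [1, 2] + primes_up_to(bound)[1:]  # skip 2, already included
```

For a series of length 40, for example, this yields the kernel sizes [1, 2, 3, 5, 7]; the union of such sizes across convolutional layers is what lets the block cover many candidate receptive-field sizes without a search.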
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
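The core of NAIVEATTACK, stamping a trigger into the raw data before distillation begins, can be sketched as follows. The square bottom-right patch used here is an assumed trigger pattern for illustration; the paper's actual trigger design and placement may differ.

```python
import numpy as np

def add_trigger(images, trigger_value=1.0, size=3):
    """Stamp a small square trigger patch into the bottom-right corner of
    each image in a batch (shape [B, H, W, C], floats in [0, 1]).

    This mirrors the idea of injecting triggers into the raw data at the
    initial distillation phase; a distillation algorithm run on the
    poisoned set would then carry the trigger into the synthetic data.
    """
    poisoned = images.copy()  # leave the clean batch untouched
    poisoned[:, -size:, -size:, :] = trigger_value
    return poisoned
```

DOORPING would instead re-optimize the trigger at every distillation iteration rather than fixing it up front, which is why it reaches near-1.0 ASR where the static trigger sometimes does not.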
We present a dynamic path planning algorithm to navigate an amphibious rotor craft through a concave time-invariant obstacle field while attempting to minimize energy usage. We create a nonlinear quaternion state model that represents the rotor craft dynamics above and below the water. The six-degree-of-freedom dynamics are used within a layered architecture to generate motion paths for the vehicle to follow and the required control inputs. The rotor craft has a 3-dimensional map of its surroundings that is updated via limited-range onboard sensor readings within the current medium (air or water). Path planning is done via PRM and D* Lite.
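The quaternion attitude kinematics at the heart of such a state model can be sketched generically. This is a standard building block (Hamilton product plus one Euler integration step of q̇ = ½ q ⊗ [0, ω]), not the authors' full six-degree-of-freedom model.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_attitude(q, omega, dt):
    """One explicit Euler step of q_dot = 0.5 * q ⊗ [0, omega], followed by
    renormalization to keep q a unit quaternion despite integration drift."""
    q_dot = 0.5 * quat_mul(q, np.concatenate(([0.0], omega)))
    q_new = q + dt * q_dot
    return q_new / np.linalg.norm(q_new)
```

Using a quaternion rather than Euler angles avoids gimbal lock, which matters for a vehicle that transitions between media and may take arbitrary attitudes.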
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human operator to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
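The masked-token training objective can be sketched as a cross-entropy computed only over masked positions. This is a generic illustration of masked token modeling, not Muse's implementation; the shapes and the uniform-logits example below are assumptions for illustration.

```python
import numpy as np

def masked_token_loss(logits, targets, mask):
    """Mean cross-entropy over masked positions only.

    logits:  [N, V] unnormalized scores over a vocabulary of V image tokens
    targets: [N]    ground-truth token ids
    mask:    [N]    bool, True where the token was masked out of the input
    """
    z = logits[mask]
    t = targets[mask]
    # numerically stable log-softmax
    z = z - z.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(t)), t].mean()
```

At inference, parallel decoding predicts all masked tokens at once and re-masks the least confident ones over a few iterations, which is what makes this family of models need far fewer sampling steps than autoregressive or diffusion decoders.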
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.
Logic Mill is a scalable and openly accessible software system that identifies semantically similar documents within either one domain-specific corpus or multi-domain corpora. It uses advanced Natural Language Processing (NLP) techniques to generate numerical representations of documents. Currently it leverages a large pre-trained language model to generate these document representations. The system focuses on scientific publications and patent documents and contains more than 200 million documents. It is easily accessible via a simple Application Programming Interface (API) or via a web interface. Moreover, it is continuously being updated and can be extended to text corpora from other domains. We see this system as a general-purpose tool for future research applications in the social sciences and other domains.
The release of ChatGPT, a language model capable of generating text that appears human-like and authentic, has gained significant attention beyond the research community. We expect that the convincing performance of ChatGPT incentivizes users to apply it to a variety of downstream tasks, including prompting the model to simplify their own medical reports. To investigate this phenomenon, we conducted an exploratory case study. In a questionnaire, we asked 15 radiologists to assess the quality of radiology reports simplified by ChatGPT. Most radiologists agreed that the simplified reports were factually correct, complete, and not potentially harmful to the patient. Nevertheless, instances of incorrect statements, missed key medical findings, and potentially harmful passages were reported. While further studies are needed, the initial insights of this study indicate a great potential in using large language models like ChatGPT to improve patient-centered care in radiology and other medical domains.