Large, text-conditioned generative diffusion models have recently gained a lot of attention for their impressive performance in generating high-fidelity images from text alone. However, achieving high-quality results is almost infeasible in a one-shot fashion. Instead, text-guided image generation involves the user making many slight changes to the inputs in order to iteratively carve out the envisioned image. However, slight changes to the input prompt often lead to entirely different images being generated, and thus the artist's control is limited in its granularity. To provide this flexibility, we present the Stable Artist, an image editing approach enabling fine-grained control of the image generation process. Its main component is semantic guidance (SEGA), which steers the diffusion process along a variable number of semantic directions. This allows for subtle edits to images, changes in composition and style, as well as optimization of the overall artistic conception. Furthermore, SEGA enables probing of latent spaces to gain insights into the representation of concepts learned by the model, even complex ones such as 'carbon emission'. We demonstrate the Stable Artist on several tasks, showcasing high-quality image editing and composition.
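To make the guidance idea concrete, here is a minimal sketch of how an additional semantic term could be folded into classifier-free guidance at each denoising step. The `unet` call signature, the per-concept scales and signs, and the omission of SEGA's thresholding and warm-up heuristics are simplifying assumptions, not the authors' implementation.

```python
def guided_noise_estimate(unet, latents, t, cond_emb, uncond_emb,
                          edits=(), guidance_scale=7.5):
    """One denoising step with semantic guidance folded into classifier-free guidance.

    `unet` is any noise-prediction network eps(z_t, t, c); `edits` is a sequence of
    (edit_embedding, scale, sign) triples, all of which are hypothetical placeholders.
    """
    eps_uncond = unet(latents, t, uncond_emb)        # unconditional noise estimate
    eps_cond = unet(latents, t, cond_emb)            # prompt-conditioned noise estimate
    # standard classifier-free guidance
    eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)
    # add or subtract one guidance direction per edit concept
    for edit_emb, scale, sign in edits:              # sign=+1 adds a concept, -1 removes it
        eps_edit = unet(latents, t, edit_emb)
        direction = eps_edit - eps_uncond            # semantic direction in noise space
        eps = eps + sign * scale * direction
    return eps
```

A positive sign pushes the sample towards a concept and a negative sign away from it, which is how both subtle additions and suppressions can be expressed through one mechanism.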
Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed, inappropriate image prompts (I2P), containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment.
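As an illustration only, a prompt test bed like I2P could be consumed by a simple evaluation loop such as the one below; `generate` and `is_inappropriate` are hypothetical placeholders for a diffusion pipeline and an external content classifier, not components released with SLD.

```python
def inappropriate_rate(prompts, generate, is_inappropriate, n_per_prompt=10):
    """Hedged sketch: generate several images per test-bed prompt and report the
    fraction flagged by an external detector (both callables are placeholders)."""
    flagged = total = 0
    for prompt in prompts:
        for _ in range(n_per_prompt):
            image = generate(prompt)
            flagged += int(is_inappropriate(image))
            total += 1
    return flagged / total
```

Comparing this rate with and without the safety guidance enabled is one way such a test bed can quantify the suppression effect.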
Text-to-image models have recently achieved great success, producing seemingly accurate samples of photorealistic quality. However, as state-of-the-art language models still struggle to evaluate precise statements, so do image generation processes that build on them. In this work, we showcase problems of state-of-the-art text-to-image models such as DALL-E with generating accurate samples from statements related to the DrawBench benchmark. Furthermore, we show that CLIP is not able to consistently re-rank these samples. To this end, we propose LogicRank, a neuro-symbolic reasoning framework that provides a more accurate ranking system for such precision-demanding settings. LogicRank integrates smoothly into the generation process of text-to-image models and, moreover, can be used to further fine-tune towards more logically precise models.
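A hedged sketch of what logic-induced re-ranking could look like: candidates that violate the symbolic constraints extracted from the prompt are demoted, and a neural similarity score breaks ties. Both callables and the scoring scheme are illustrative placeholders, not the LogicRank interface.

```python
from typing import Any, Callable, List, Tuple

def logic_rank(candidates: List[Any], prompt: str,
               clip_score: Callable[[Any, str], float],
               satisfies_constraints: Callable[[Any], bool]) -> List[Tuple[Any, float]]:
    """Rank candidate images: satisfying the symbolic constraints dominates,
    CLIP similarity (assumed normalised to [0, 1]) breaks ties."""
    scored = []
    for img in candidates:
        ok = satisfies_constraints(img)      # e.g. object counts or spatial relations hold
        sim = clip_score(img, prompt)        # neural text-image similarity
        scored.append((img, (1.0 if ok else 0.0) + sim))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```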
Bootstrapping from pre-trained language models has been proven to be an efficient approach for building vision-language models (VLMs), e.g., for image captioning or visual question answering. However, it is difficult to use it to make a model conform to a user's rationales for specific answers. To elicit and reinforce commonsense rationales, we propose an iterative sampling and tuning paradigm, called ILLUME, that executes the following loop: given an image-question prompt, the VLM samples multiple candidate rationales, and a human critic provides minimal feedback via preference selection, which is used for fine-tuning. This loop augments the training data and gradually carves out the VLM's rationalization capabilities. Our exhaustive experiments demonstrate that ILLUME is competitive with standard supervised fine-tuning while using significantly less training data and requiring only minimal feedback.
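The loop itself can be summarised in a few lines. The sketch below assumes hypothetical `vlm.generate`, `vlm.finetune`, and `human_select` helpers; it illustrates the sampling-and-tuning cycle, not the released ILLUME code.

```python
def illume_loop(vlm, train_prompts, human_select, n_rounds=3, n_samples=8):
    """Iterative sampling and tuning: the VLM proposes candidate rationales,
    a critic keeps the preferred ones, and the growing set of accepted
    rationales is used to fine-tune the model."""
    accepted = []
    for _ in range(n_rounds):
        for image, question in train_prompts:
            candidates = [vlm.generate(image, question) for _ in range(n_samples)]
            preferred = human_select(candidates)          # minimal feedback: pick good rationales
            accepted.extend((image, question, r) for r in preferred)
        vlm.finetune(accepted)                            # self-generated data augments training
    return vlm
```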
The large datasets underlying much of current machine learning raise serious issues concerning inappropriate content such as offensive, insulting, threatening, or otherwise anxiety-provoking material. This calls for increased dataset documentation, e.g., using datasheets. Among other topics, datasheets encourage reflecting on the composition of a dataset. So far, this documentation has been done manually and is therefore potentially tedious and error-prone, especially for large image datasets. Here we ask the 'circular' question of whether machines can help us reflect on inappropriate content, answering Question 16 of the datasheet. To this end, we propose to use the information stored in pre-trained transformer models to assist with the documentation process. Specifically, prompt-tuning based on a dataset of socio-moral values steers CLIP to identify potentially inappropriate content, thereby reducing human labor. We then document the inappropriate images found using word clouds, based on captions generated with a vision-language model. The documentation of two popular large-scale computer vision datasets, ImageNet and OpenImages, produced in this way suggests that machines can indeed help dataset creators answer Question 16 on inappropriate image content.
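A simplified stand-in for the described pipeline, using OpenAI's CLIP package with hand-written text prompts instead of the learned (prompt-tuned) soft prompts from the socio-moral value dataset; the concept strings and threshold below are illustrative assumptions only.

```python
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hand-written stand-ins for the tuned concept prompts.
concepts = ["an offensive image", "a violent image", "a harmless everyday image"]
text = clip.tokenize(concepts).to(device)

def flag_image(path: str, threshold: float = 0.6) -> bool:
    """Flag an image if the probability mass on the potentially
    inappropriate concepts dominates (threshold is an assumption)."""
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1).squeeze(0)
    return probs[:2].sum().item() > threshold
```

Images flagged this way could then be captioned and summarised as word clouds, mirroring the documentation step described above.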
Learning visual concepts from raw images without strong supervision is a challenging task. In this work, we show the advantages of prototype representations for understanding and revising the latent space of neural concept learners. To this end, we introduce Interactive Concept Swapping Networks (iCSNs), a novel framework for learning concept-grounded representations via weak supervision and implicit prototype representations. iCSNs learn to bind conceptual information to specific prototype slots by swapping the latent representations of paired images. This semantically grounded and discrete latent space facilitates human understanding and human-machine interaction. We support this claim with experiments on our novel dataset 'Elementary Concept Reasoning' (ECR), focusing on visual concepts shared by geometric objects.
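A minimal sketch of the swapping objective, assuming a slot-structured encoder/decoder pair and image pairs that share the concept held in `swap_slot`; the tensor layout and reconstruction loss are simplifications, not the iCSN training code.

```python
import torch.nn.functional as F

def concept_swap_loss(encoder, decoder, img_a, img_b, swap_slot: int):
    """Both images are assumed to share the concept stored in `swap_slot`, so
    exchanging that slot between their latent codes should not change what
    each code decodes to; the loss enforces exactly that."""
    z_a = encoder(img_a)                     # (num_slots, slot_dim) prototype-slot code
    z_b = encoder(img_b)
    z_a_sw, z_b_sw = z_a.clone(), z_b.clone()
    z_a_sw[swap_slot], z_b_sw[swap_slot] = z_b[swap_slot], z_a[swap_slot]
    return F.mse_loss(decoder(z_a_sw), img_a) + F.mse_loss(decoder(z_b_sw), img_b)
```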
Recent insights from biology show that intelligence not only emerges from the connections between neurons but that individual neurons shoulder more computational responsibility than previously anticipated. This perspective is particularly relevant in the context of constantly changing reinforcement learning environments, yet current approaches still primarily employ static activation functions. In this work, we motivate why rationals are suitable as adaptable activation functions and why their inclusion into neural networks is crucial. Inspired by recurrence in residual networks, we derive a condition under which rational units are closed under residual connections and formulate a naturally regularized version: the recurrent rational. We demonstrate that equipping popular algorithms with (recurrent-)rational activations leads to consistent improvements on Atari games, especially turning simple DQN into a solid method that is competitive with DDQN and Rainbow.
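The underlying activation is easy to state: a learnable ratio of polynomials with a denominator kept strictly positive. The module below is a sketch of that idea in PyTorch; in practice the coefficients are initialised to approximate a known activation (e.g. Leaky ReLU) rather than randomly, and the recurrent variant roughly amounts to sharing one rational across layers.

```python
import torch
import torch.nn as nn

class Rational(nn.Module):
    """Learnable rational activation R(x) = P(x) / Q(x): P has degree m, Q has
    degree n, and Q is kept strictly positive via the 'safe' parameterisation
    Q(x) = 1 + |b_1 x + ... + b_n x^n|. Random init is a simplification."""

    def __init__(self, m: int = 5, n: int = 4):
        super().__init__()
        self.a = nn.Parameter(torch.randn(m + 1) * 0.1)   # numerator coefficients a_0..a_m
        self.b = nn.Parameter(torch.randn(n) * 0.1)       # denominator coefficients b_1..b_n

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        num = sum(a_i * x ** i for i, a_i in enumerate(self.a))
        den = 1 + torch.abs(sum(b_j * x ** (j + 1) for j, b_j in enumerate(self.b)))
        return num / den
```

Dropping such a module in place of a fixed ReLU is what "equipping popular algorithms with rational activations" refers to above.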
We consider the contextual bandit problem on general action and context spaces, where the learner's rewards depend on their selected actions and an observable context. This generalizes the standard multi-armed bandit to the case where side information is available, e.g., patients' records or customers' history, which allows for personalized treatment. We focus on consistency -- vanishing regret compared to the optimal policy -- and show that for large classes of non-i.i.d. contexts, consistency can be achieved regardless of the time-invariant reward mechanism, a property known as universal consistency. Precisely, we first give necessary and sufficient conditions on the context-generating process for universal consistency to be possible. Second, we show that there always exists an algorithm that guarantees universal consistency whenever this is achievable, called an optimistically universal learning rule. Interestingly, for finite action spaces, learnable processes for universal learning are exactly the same as in the full-feedback setting of supervised learning, previously studied in the literature. In other words, learning can be performed with partial feedback without any generalization cost. The algorithms balance a trade-off between generalization (similar to structural risk minimization) and personalization (tailoring actions to specific contexts). Lastly, we consider the case of added continuity assumptions on rewards and show that these lead to universal consistency for significantly larger classes of data-generating processes.
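For readers who want the target notion pinned down, the following is a hedged formalisation of average regret and universal consistency under standard contextual-bandit definitions; the notation is ours, not necessarily the paper's.

```latex
% At time t the learner observes context x_t, plays action a_t, and receives
% reward r_t(a_t); \pi^* denotes the best measurable policy in hindsight.
\[
  \mathrm{Reg}_T \;=\; \frac{1}{T}\sum_{t=1}^{T}\Bigl(r_t\bigl(\pi^*(x_t)\bigr) - r_t(a_t)\Bigr)
\]
% A learning rule is universally consistent for a context process (x_t)_{t\ge 1}
% if \mathrm{Reg}_T \to 0 for every admissible time-invariant reward mechanism,
% i.e. regardless of how rewards depend on the (action, context) pair.
```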
In this paper, we present a novel visual SLAM and long-term localization benchmark for autonomous driving in challenging conditions based on the large-scale 4Seasons dataset. The proposed benchmark provides drastic appearance variations caused by seasonal changes and diverse weather and illumination conditions. While significant progress has been made in advancing visual SLAM on small-scale datasets with similar conditions, there is still a lack of unified benchmarks representative of real-world scenarios for autonomous driving. We introduce a new unified benchmark for jointly evaluating visual odometry, global place recognition, and map-based visual localization performance which is crucial to successfully enable autonomous driving in any condition. The data has been collected for more than one year, resulting in more than 300 km of recordings in nine different environments ranging from a multi-level parking garage to urban (including tunnels) to countryside and highway. We provide globally consistent reference poses with up to centimeter-level accuracy obtained from the fusion of direct stereo-inertial odometry with RTK GNSS. We evaluate the performance of several state-of-the-art visual odometry and visual localization baseline approaches on the benchmark and analyze their properties. The experimental results provide new insights into current approaches and show promising potential for future research. Our benchmark and evaluation protocols will be available at https://www.4seasons-dataset.com/.
Implicit Neural Representations (INRs) have recently been shown to be a powerful tool for high-quality video compression. However, existing works are limited in that they do not explicitly exploit the temporal redundancy in videos, leading to long encoding times. Additionally, these methods have fixed architectures which do not scale to longer videos or higher resolutions. To address these issues, we propose NIRVANA, which treats videos as groups of frames and fits separate networks to each group, performing patch-wise prediction. This design shares computation within each group, in the spatial and temporal dimensions, resulting in reduced encoding time of the video. The video representation is modeled autoregressively, with the network fit on the current group initialized using weights from the previous group's model. To further enhance efficiency, we perform quantization of the network parameters during training, requiring no post-hoc pruning or quantization. When compared with previous works on the benchmark UVG dataset, NIRVANA improves encoding quality from 37.36 to 37.70 (in terms of PSNR) and the encoding speed by 12X, while maintaining the same compression rate. In contrast to prior video INR works which struggle with larger resolutions and longer videos, we show that our algorithm is highly flexible and scales naturally due to its patch-wise and autoregressive designs. Moreover, our method achieves variable bitrate compression by adapting to videos with varying inter-frame motion. NIRVANA achieves 6X decoding speed and scales well with more GPUs, making it practical for various deployment scenarios.
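The group-wise, autoregressive fitting scheme can be sketched as follows; `make_model`, `fit_group`, and `quantize` are placeholders for a patch-wise prediction network, its reconstruction objective, and quantisation-aware training, and the sketch deliberately omits the actual NIRVANA architecture details.

```python
import copy

def fit_video_autoregressive(frame_groups, make_model, fit_group, quantize):
    """Each group of consecutive frames gets its own small network, initialised
    from the previous group's weights (autoregressive warm start) and quantised
    during training rather than post hoc."""
    models, prev_state = [], None
    for group in frame_groups:
        model = make_model()
        if prev_state is not None:
            model.load_state_dict(prev_state)     # warm start from previous group
        fit_group(model, group)                   # patch-wise reconstruction objective
        quantize(model)                           # quantisation-aware, no post-hoc pruning
        prev_state = copy.deepcopy(model.state_dict())
        models.append(model)
    return models
```

Because each group is fit independently once warm-started, groups can also be distributed across GPUs, which is the intuition behind the reported multi-GPU scaling.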