In multi-person 2D pose estimation, bottom-up methods predict the poses of all persons simultaneously and, unlike top-down methods, do not rely on human detection. However, state-of-the-art (SOTA) bottom-up methods are still less accurate than existing top-down methods. This is because the predicted human poses are regressed with respect to inconsistent human bounding-box centers, and because the lack of human-scale normalization causes the predicted poses to be inaccurate for, or to miss entirely, small-scale persons. To push the envelope of bottom-up pose estimation, we first propose multi-scale training to enhance the network's ability to handle scale variation with single-scale testing, especially for small-scale persons. Second, we introduce dual anatomical centers (i.e., head and body), from which human poses can be predicted more accurately and reliably, especially for small-scale persons. Moreover, existing bottom-up methods rely on multi-scale testing to improve the accuracy of pose estimation at the price of multiple extra forward passes, which weakens the efficiency of bottom-up methods, their core strength compared to top-down methods. In contrast, our multi-scale training enables the model to predict high-quality poses in a single forward pass (i.e., single-scale testing). Our method achieves a 38.4% improvement in bounding-box precision and a 39.1% improvement in bounding-box recall over the SOTA on the challenging small-scale persons subset of COCO. For human pose AP evaluation, we achieve a new SOTA (71.0 AP) on the COCO test-dev set with single-scale testing. We also achieve the top performance (40.3 AP) on the OCHuman dataset in cross-dataset evaluation.
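As a toy illustration of the dual-anatomical-center idea (not the authors' implementation; the confidence-based selection rule and all names below are our assumptions):

```python
import numpy as np

def select_pose(pose_from_head, conf_head, pose_from_body, conf_body):
    """Return the pose regressed from the more confident anatomical center.
    Each pose is an array of (x, y) keypoints; confidences are scalars."""
    return pose_from_head if conf_head >= conf_body else pose_from_body

# Two candidate keypoint sets for the same person, regressed from the
# head center and the body center respectively (toy values).
pose_h = np.array([[10.0, 12.0], [11.0, 20.0]])
pose_b = np.array([[10.5, 12.5], [11.2, 20.3]])
chosen = select_pose(pose_h, 0.9, pose_b, 0.7)
```

Predicting from two centers gives a fallback when one center (e.g., an occluded body center) is unreliable, which matters most for small-scale persons.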
Night images suffer not only from low light, but also from uneven light distribution. Most existing nighttime visibility enhancement methods focus mainly on enhancing low-light regions. This inevitably leads to over-enhancement and saturation in bright regions, such as those affected by light effects (glare, floodlights, etc.). To address this problem, we need to suppress the light effects in bright regions while boosting the intensity of dark regions. With this idea in mind, we introduce an unsupervised method that integrates a layer-decomposition network and a light-effects suppression network. Given a single night image as input, our decomposition network learns to decompose it into shading, reflectance, and light-effects layers, guided by unsupervised layer-specific prior losses. Our light-effects suppression network further suppresses the light effects while simultaneously enhancing the illumination of dark regions. This network exploits the estimated light-effects layer as guidance to focus on the light-effects regions. To recover background details and reduce hallucinations/artifacts, we propose structure and high-frequency consistency losses. Quantitative and qualitative evaluations on real images show that our method outperforms state-of-the-art methods in suppressing night light effects and boosting the intensity of dark regions.
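The decompose-then-recompose idea can be sketched with a toy example; the additive light-effects image model and the gamma boost below are our simplifying assumptions, not the paper's networks:

```python
import numpy as np

def recompose_enhanced(reflectance, shading, gamma=0.6):
    """Illustrative recomposition: drop the light-effects layer entirely and
    gamma-boost the shading (gamma < 1 brightens) so dark regions gain
    intensity. All layers are float arrays in [0, 1]."""
    boosted_shading = np.power(np.clip(shading, 0.0, 1.0), gamma)
    return np.clip(reflectance * boosted_shading, 0.0, 1.0)

# Toy two-pixel image: one dark pixel (shading 0.1) and one glare pixel
# (strong additive light effect 0.7).
reflectance = np.array([0.8, 0.8])
shading = np.array([0.1, 0.5])
light_effects = np.array([0.0, 0.7])
night_image = np.clip(reflectance * shading + light_effects, 0.0, 1.0)
enhanced = recompose_enhanced(reflectance, shading)
```

The dark pixel gets brighter while the glare pixel gets darker, which is the dual goal the abstract describes.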
Shadow removal from a single image generally remains an open problem. Most existing learning-based methods use supervised learning and require a large number of paired images (shadow and corresponding shadow-free images) for training. A recent unsupervised method, Mask-ShadowGAN, addresses this limitation. However, it requires a binary mask to represent shadow regions, making it inapplicable to soft shadows. To address this problem, in this paper we present DC-ShadowNet, an unsupervised domain-classifier-guided shadow removal network. Specifically, we propose to integrate a shadow/shadow-free domain classifier into the generator and its discriminator, enabling them to focus on shadow regions. To train our network, we introduce novel losses based on physics-based shadow-free chromaticity, shadow-robust perceptual features, and boundary smoothness. Moreover, we show that our unsupervised network can be used for test-time training, which further improves the results. Our experiments show that all these novel components allow our method to handle soft shadows, and to perform better on hard shadows, both quantitatively and qualitatively, than existing state-of-the-art shadow removal methods.
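A minimal illustration of why chromaticity is a useful shadow-free cue, under the simplifying assumption that a shadow attenuates all color channels by a common factor (the paper's physics-based formulation is more involved):

```python
import numpy as np

def chromaticity(rgb):
    """Per-pixel chromaticity: each channel divided by the channel sum.
    If a shadow scales all channels uniformly, that scale factor cancels,
    so chromaticity is invariant to the shadow."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    return rgb / np.maximum(s, 1e-8)

lit = np.array([[0.6, 0.4, 0.2]])   # a lit surface pixel (R, G, B)
shadowed = 0.35 * lit               # same surface, uniformly attenuated
```

Because the chromaticity of a shadowed pixel matches its lit counterpart, a loss in chromaticity space can supervise shadow removal without paired data.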
This paper analyzes the joint-space walking mechanisms and redundancies that deliver functional (task-space) gait outcomes. Biomechanical measures of two healthy male adults, who participated in a multi-factorial study and walked during three sessions, were analyzed. Both participants adopted different intra-personal and inter-personal compensatory strategies (e.g., vaulting, hip hiking) across walking conditions and exhibited notable gait pattern alterations while keeping the task-space (functional) gait parameters invariant. They also preferred various asymmetric step lengths, while maintaining consistent step symmetry and invariant cadence during free walking. The results indicate the importance of individualized approaches, and the need for a paradigm shift from functional (task-space) to joint-space gait analysis, both in (a) assessing typical gaits and in (b) delivering human-centered human-robot interaction.
We present a method for radar-inertial odometry that uses a continuous-time framework to fuse measurements from multiple automotive radars and an inertial measurement unit (IMU). Adverse weather conditions, unlike for camera and lidar sensors, have no significant negative effect on the operating performance of radar sensors. Radar's robustness in such conditions, and the increasing prevalence of radars on passenger vehicles, motivate us to look at the use of radar for ego-motion estimation. A continuous-time trajectory representation is applied not only as a framework to enable heterogeneous and asynchronous multi-sensor fusion, but also to enable efficient optimization by being able to compute poses and their derivatives in closed form at any given time along the trajectory. We compare our continuous-time estimates to those from a discrete-time radar-inertial odometry approach and show that our continuous-time method outperforms the discrete-time method. To the best of our knowledge, this is the first application of a continuous-time framework to radar-inertial odometry.
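One common continuous-time parameterization is a uniform cubic B-spline, whose value and derivative are available in closed form at any query time. The sketch below works on one scalar axis and is our illustration of that general idea, not necessarily the paper's exact trajectory representation:

```python
import numpy as np

# Uniform cubic B-spline basis matrix (standard form for one segment).
M = (1.0 / 6.0) * np.array([
    [ 1.0,  4.0,  1.0, 0.0],
    [-3.0,  0.0,  3.0, 0.0],
    [ 3.0, -6.0,  3.0, 0.0],
    [-1.0,  3.0, -3.0, 1.0],
])

def spline_pos_vel(ctrl, u, dt=1.0):
    """Closed-form position and velocity on one cubic B-spline segment.
    ctrl: 4 control values (one scalar axis), u in [0, 1) is the normalized
    time within the segment, dt is the knot spacing in seconds."""
    coeffs = M @ np.asarray(ctrl, dtype=float)                # polynomial coefficients
    pos = np.array([1.0, u, u**2, u**3]) @ coeffs             # value at t = t0 + u*dt
    vel = (np.array([0.0, 1.0, 2 * u, 3 * u**2]) @ coeffs) / dt  # analytic derivative
    return pos, vel

pos, vel = spline_pos_vel([0.0, 1.0, 2.0, 3.0], u=0.5)  # evenly spaced control values
```

Because both the pose and its derivative come from the same few control values, IMU and radar measurements taken at arbitrary, asynchronous timestamps can all constrain the same trajectory without interpolation between discrete states.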
Data augmentation is an important component of the robustness evaluation of natural language processing (NLP) models, and of enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available on the NL-Augmenter repository (\url{https://github.com/gem-benchmark/nl-augmenter}).
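The transformation/filter pattern can be sketched as below. Class and method names here are illustrative only, NOT NL-Augmenter's actual API:

```python
import random

class WordSwapTransformation:
    """A transformation: modifies input text (here, swapping two adjacent words)."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def generate(self, sentence: str) -> str:
        words = sentence.split()
        if len(words) < 2:
            return sentence
        i = self.rng.randrange(len(words) - 1)
        words[i], words[i + 1] = words[i + 1], words[i]
        return " ".join(words)

class LengthFilter:
    """A filter: keeps only examples matching a feature (here, a word-count range)."""
    def __init__(self, min_words=1, max_words=20):
        self.min_words, self.max_words = min_words, max_words

    def keep(self, sentence: str) -> bool:
        return self.min_words <= len(sentence.split()) <= self.max_words

transform = WordSwapTransformation(seed=42)
perturbed = transform.generate("the quick brown fox jumps")
keep_short = LengthFilter(max_words=3).keep("too many words here")
```

Running a model on both the original and the perturbed sentence, and comparing predictions, is the robustness-analysis pattern the abstract describes.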
Although many studies have successfully applied transfer learning to medical image segmentation, very few of them have investigated the selection strategy when multiple source tasks are available for transfer. In this paper, we propose a prior knowledge guided and transferability based framework to select the best source tasks among a collection of brain image segmentation tasks, to improve the transfer learning performance on the given target task. The framework consists of modality analysis, RoI (region of interest) analysis, and transferability estimation, such that the source task selection can be refined step by step. Specifically, we adapt the state-of-the-art analytical transferability estimation metrics to medical image segmentation tasks and further show that their performance can be significantly boosted by filtering candidate source tasks based on modality and RoI characteristics. Our experiments on brain matter, brain tumor, and white matter hyperintensities segmentation datasets reveal that transferring from different tasks under the same modality is often more successful than transferring from the same task under different modalities. Furthermore, within the same modality, transferring from the source task that has stronger RoI shape similarity with the target task can significantly improve the final transfer performance. And such similarity can be captured using the Structural Similarity index in the label space.
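Capturing RoI shape similarity via SSIM in the label space can be illustrated with a simplified single-window variant (standard SSIM uses local sliding windows and constants tied to the dynamic range; the constants below are illustrative):

```python
import numpy as np

def ssim_global(a, b, c1=1e-4, c2=9e-4):
    """Simplified global (single-window) Structural Similarity index
    between two label maps; identical maps score exactly 1."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2))

# Binary RoI masks: a mask compared to itself scores 1;
# a disjoint, differently sized mask scores much lower.
mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1.0
shifted = np.zeros((8, 8)); shifted[0:2, 0:2] = 1.0
```

A source task whose label masks score higher against the target's masks would, per the abstract's finding, be the preferable transfer source within a modality.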
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack, with higher level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervision to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
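Confidence-based parallel decoding over masked tokens can be sketched as below. This is a toy in the spirit of masked-token generation, not Muse's actual implementation; `toy_predict` stands in for the Transformer with random outputs:

```python
import numpy as np

MASK = -1  # sentinel for a not-yet-decoded token

def toy_predict(tokens, rng):
    """Stand-in for the model: propose a token and a confidence per position."""
    proposals = rng.integers(0, 10, size=tokens.shape)
    confidence = rng.random(size=tokens.shape)
    return proposals, confidence

def parallel_decode(length=8, steps=4, seed=0):
    rng = np.random.default_rng(seed)
    tokens = np.full(length, MASK)
    for step in range(steps):
        n_masked = int((tokens == MASK).sum())
        if n_masked == 0:
            break
        proposals, confidence = toy_predict(tokens, rng)
        # Only still-masked positions compete for commitment this step.
        confidence = np.where(tokens == MASK, confidence, -np.inf)
        # Commit roughly half of the remaining masked positions per step,
        # and everything left on the final step.
        n_keep = max(1, n_masked // 2) if step < steps - 1 else n_masked
        keep = np.argsort(confidence)[-n_keep:]
        tokens[keep] = proposals[keep]
    return tokens

decoded = parallel_decode()
```

Decoding all positions in a fixed, small number of steps (rather than one token per step) is the efficiency advantage over autoregressive decoding that the abstract attributes to Muse.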
Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth, and hence a lower axial resolution. Traditionally, image-processing-based techniques have been utilized to reconstruct subsampled OCT data; more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques, reconstructing lost features with a pixel-to-pixel approach using an altered super-resolution generative adversarial network (SRGAN) architecture, to better aid clinicians in their decision-making and improve patient outcomes.
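The Gaussian-windowing simulation step can be sketched with a toy A-scan; the window width `sigma_frac` is an illustrative choice, not the study's parameter:

```python
import numpy as np

def simulate_reduced_axial_resolution(spectrum, sigma_frac=0.15):
    """Multiply a (fftshifted) A-scan spectrum by a Gaussian window, narrowing
    the effective spectral bandwidth; after the inverse FFT this lowers the
    axial resolution. sigma_frac sets the window sigma as a fraction of the
    number of spectral samples."""
    n = spectrum.shape[-1]
    k = np.arange(n) - n / 2.0
    window = np.exp(-0.5 * (k / (sigma_frac * n)) ** 2)
    return spectrum * window

# Toy A-scan: a single reflector yields a broadband spectrum; windowing
# broadens the reconstructed reflector peak.
n = 256
depth_profile = np.zeros(n); depth_profile[64] = 1.0   # one point reflector
spectrum = np.fft.fftshift(np.fft.fft(depth_profile))
narrowed = simulate_reduced_axial_resolution(spectrum)
blurred = np.abs(np.fft.ifft(np.fft.ifftshift(narrowed)))
```

The reflector stays at the same depth but is spread over neighboring samples, which is exactly the loss of axial resolution the learning-based reconstruction then tries to undo.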