Generative models, particularly GANs, have been utilized for image editing. Although GAN-based methods perform well at generating content aligned with the user's intentions, they struggle to strictly preserve the content outside the editing region. To address this issue, we use diffusion models instead of GANs and propose a novel image-editing method based on pixel-wise guidance. Specifically, we first train pixel classifiers with a small amount of annotated data and then estimate the semantic segmentation map of a target image. Users then manipulate the map to instruct how the image is to be edited. The diffusion model generates an edited image via guidance by the pixel-wise classifiers, such that the resultant image aligns with the manipulated map. Because the guidance is conducted pixel-wise, the proposed method can create reasonable content in the editing region while preserving the content outside it. The experimental results validate the advantages of the proposed method both quantitatively and qualitatively.
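The guidance mechanism lends itself to a compact sketch. Below is a minimal, illustrative take on one guided reverse-diffusion step, assuming a noise-prediction network `unet`, a `pixel_classifier` that maps noisy images to per-pixel class logits, and a precomputed `alpha_bar` noise schedule; none of these names come from the paper, and the sign convention follows standard classifier guidance.

```python
import torch
import torch.nn.functional as F

def guided_denoise_step(x_t, t, unet, pixel_classifier, target_map,
                        alpha_bar, guidance_scale=1.0):
    """One reverse-diffusion step with pixel-wise classifier guidance (illustrative).

    x_t:              noisy image, shape (B, C, H, W)
    target_map:       user-edited segmentation map, shape (B, H, W), long
    pixel_classifier: predicts per-pixel class logits from the noisy image
    """
    x_t = x_t.detach().requires_grad_(True)

    # Per-pixel cross-entropy between the predicted segmentation and the edited map.
    logits = pixel_classifier(x_t, t)            # (B, K, H, W)
    loss = F.cross_entropy(logits, target_map)   # averaged over all pixels

    # The gradient of the pixel-wise loss w.r.t. the noisy image steers sampling.
    grad = torch.autograd.grad(loss, x_t)[0]

    with torch.no_grad():
        eps = unet(x_t, t)                       # predicted noise
        # Shift the noise estimate by the guidance gradient (classifier guidance).
        eps_guided = eps + guidance_scale * (1 - alpha_bar[t]).sqrt() * grad
    return eps_guided
```

Because the loss is an average of per-pixel terms, the gradient is near zero wherever the edited map already matches the current prediction, which is what keeps regions outside the edit largely untouched.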
Autonomous navigation in densely populated areas remains a daunting task for robots, owing to the difficulty of guaranteeing safe interaction with pedestrians in unstructured situations. In this work, we present a crowd navigation control framework that delivers continuous obstacle avoidance and post-contact control on an autonomous vehicle. We propose evaluation metrics to account for efficiency, controller response, and crowd interaction in natural crowds. We report the results of over 110 trials across different crowd types: sparse, flow, and mixed traffic, at low (<0.15 ppsm), medium (<0.65 ppsm), and high (<1 ppsm) pedestrian densities. We present comparative results between two low-level obstacle-avoidance methods and a shared-control baseline. The results show a 10% drop in relative time in the highest-density tests, with no degradation in the other efficiency metrics. Moreover, autonomous navigation proved comparable to shared-control navigation, with lower relative jerk and significantly higher command fluency, indicating high compatibility with the crowd. We conclude that the reactive controller fulfills the necessary task of fast and continuous adaptation required by crowd navigation, and should be coupled with a high-level planner for environmental and situational awareness.
We present a personal mobility device for lower-body-impaired users: a lightweight exoskeleton on wheels. At its core, a novel passive exoskeleton provides postural transitions by exploiting natural body postures: it supports the torso on a mobile frame through a single gas spring acting as an energy-storage unit during sit-to-stand (STS) transitions. We propose a direction-dependent coupling of the knee and hip joints via a double-pulley wire system that transfers the moment load induced by torso motion to the knee-joint actuator to keep the torso balanced. Here, the exoskeleton maximizes energy transfer and the naturalness of user motion. We introduce an embodied user interface for hands-free navigation through torso pressure sensing, yielding an average of 19° ± 13° across six unimpaired users. We evaluated the design's STS assistance with 11 unimpaired users, observing motion and muscle activity during the transitions. Results comparing assisted and unassisted STS transitions confirm a significant reduction in the activity of the involved muscle groups (up to 68%, p < 0.01). Furthermore, we show that initiating standing and sitting through natural torso-leaning motions of +12° ± 6.5° and −13.7° ± 6.1°, respectively, is feasible. The passive postural transition assistance warrants further efforts to improve its applicability and to expand the user population.
Building a human-like integrated artificial cognitive system, that is, an artificial general intelligence (AGI), is the holy grail of the artificial intelligence (AI) field. Furthermore, a computational model by which an artificial system achieves cognitive development would be an excellent reference for brain and cognitive science. This paper describes an approach to developing a cognitive architecture by integrating elemental cognitive modules so that the modules can be trained as a whole. The approach rests on two ideas: (1) brain-inspired AI, which learns from the architecture of the human brain to build human-level intelligence, and (2) a probabilistic generative model (PGM)-based cognitive system, which develops a cognitive system for developmental robots by integrating PGMs. The resulting developmental framework, called a whole-brain PGM (WB-PGM), differs fundamentally from existing cognitive architectures in that it can learn continuously through a system based on sensory-motor information. In this study, we describe the rationale of WB-PGM, the current status of PGM-based elemental cognitive modules, their relationship to the human brain, the approach to integrating the cognitive modules, and future challenges. Our findings can serve as a reference for brain studies: because PGMs describe explicit relationships among the variables involved, this description provides interpretable guidance from computational sciences to brain science. With such information, researchers in neuroscience can give feedback to researchers in AI and robotics on what the current models lack with respect to the brain. Further, it can facilitate collaboration among researchers in the neuro-cognitive sciences and in AI and robotics.
Mixup is a popular data augmentation technique for training deep neural networks in which additional samples are generated by linearly interpolating pairs of inputs and their labels. This technique is known to improve generalization performance in many learning paradigms and applications. In this work, we first analyze Mixup and show that it implicitly regularizes infinitely many directional derivatives of all orders. We then propose a new method to improve Mixup based on this insight. To demonstrate the effectiveness of the proposed method, we conduct experiments across various domains such as images, tabular data, speech, and graphs. Our results show that the proposed method improves Mixup across various datasets using a variety of architectures, for instance, exhibiting a 0.8% improvement over Mixup in ImageNet top-1 accuracy.
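For reference, the baseline the abstract builds on can be stated in a few lines. This is the standard Mixup recipe, not the paper's improved variant:

```python
import numpy as np
import torch

def mixup_batch(x, y, alpha=0.2):
    """Standard Mixup: convex combination of a batch with a shuffled copy of itself.

    x: inputs, shape (B, ...)    y: one-hot labels, shape (B, K)
    """
    lam = np.random.beta(alpha, alpha)      # mixing coefficient λ ~ Beta(α, α)
    perm = torch.randperm(x.size(0))        # pair each sample with a random partner
    x_mix = lam * x + (1 - lam) * x[perm]   # x̃ = λ x_i + (1 − λ) x_j
    y_mix = lam * y + (1 - lam) * y[perm]   # ỹ = λ y_i + (1 − λ) y_j
    return x_mix, y_mix
```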
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
In real-world time series recognition applications, data can contain patterns of varying length. However, when using artificial neural networks (ANN), it is standard practice to use fixed-size mini-batches. To do this, time series of varying lengths are typically normalized so that all the patterns are the same length. Normally, this is done using zero padding or truncation without much consideration. We propose a novel method of normalizing the lengths of the time series in a dataset by exploiting the dynamic matching ability of Dynamic Time Warping (DTW). In this way, the time series lengths in a dataset can be set to a fixed size while maintaining features typical of the dataset. In the experiments, all 11 datasets with varying-length time series from the 2018 UCR Time Series Archive are used. We evaluate the proposed method by comparing it with 18 other length normalization methods on a Convolutional Neural Network (CNN), a Long Short-Term Memory network (LSTM), and a Bidirectional LSTM (BLSTM).
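A hedged sketch of the core idea: compute a DTW alignment between each series and a reference pattern of the target length, then collapse the warping path into a fixed-length series. The choice of reference and the rule for merging multiply-aligned samples below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def dtw_path(a, b):
    """Return the optimal DTW warping path between 1-D series a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to recover the alignment.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def normalize_length(series, reference):
    """Warp `series` onto the time axis of `reference`, so its length becomes
    len(reference) while dataset-typical features are preserved."""
    path = dtw_path(series, reference)
    out = np.zeros(len(reference))
    counts = np.zeros(len(reference))
    for i, j in path:          # average all series samples that the warping
        out[j] += series[i]    # path maps onto reference index j
        counts[j] += 1
    return out / np.maximum(counts, 1)
```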
This study proposes novel control methods that lower impact force by preemptive movement and smoothly transition to conventional contact impedance control. These suggested techniques are for force control-based robots and position/velocity control-based robots, respectively. Strong impact forces have a negative influence on multiple robotic tasks. Recently, preemptive impact reduction techniques that expand conventional contact impedance control by using proximity sensors have been examined. However, a seamless transition from impact reduction to contact impedance control has not yet been accomplished. The proposed methods utilize a serial combined impedance control framework to solve this problem. The preemptive impact reduction feature can be added to the already implemented impedance controller because the parameter design is divided into impact reduction and contact impedance control. There is no undesirable contact force during the transition. Furthermore, even though the preemptive impact reduction employs a crude optical proximity sensor, the influence of reflectance is minimized using a virtual viscous force. Analyses and real-world experiments confirm these benefits.
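As a rough illustration of the idea (a 1-DoF sketch, with the serial structure flattened into an additive force for brevity): the contact impedance term and the proximity-driven preemptive term keep separate parameter sets, and the preemptive term fades out as contact is reached. The gains, the activation range, and the exact form of the virtual viscous force are assumptions, not the paper's design.

```python
import numpy as np

def combined_impedance_force(x, v, x_des, d_prox, params):
    """Illustrative combined impedance law (1-DoF sketch).

    x, v    : end-effector position and velocity
    x_des   : desired contact position
    d_prox  : distance from the proximity sensor (np.inf if nothing detected)
    """
    K, D = params["K"], params["D"]             # contact impedance gains
    c0, d_act = params["c0"], params["d_act"]   # virtual viscosity, activation range

    # Contact impedance control: spring-damper toward the desired position.
    f_contact = K * (x_des - x) - D * v

    # Preemptive impact reduction: a virtual viscous force that grows as the
    # proximity distance shrinks, decelerating the approach before contact.
    # Scaling by distance (rather than raw sensor intensity) limits the
    # influence of surface reflectance on a crude optical sensor.
    if d_prox < d_act:
        f_preempt = -c0 * v / max(d_prox, 1e-3)
    else:
        f_preempt = 0.0

    # The preemptive term is layered onto the already designed contact
    # controller, so the two parameter sets stay independent.
    return f_contact + f_preempt
```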
The demand for resilient logistics networks has increased in the wake of recent disasters. In optimization problems, entropy regularization is a powerful tool for diversifying a solution. In this study, we propose a method for designing a resilient logistics network based on entropy regularization. Moreover, we propose an analytical resilience criterion to reduce the ambiguity of the notion of resilience. First, we model a logistics network comprising factories, distribution bases, and sales outlets in an efficient framework using entropy regularization. Next, we formulate a resilience criterion based on probabilistic cost and the Kullback--Leibler divergence. Finally, we apply our method to a simple logistics network and demonstrate the resilience of three logistics plans designed by entropy regularization.
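One plausible reading of such a formulation, in our own notation rather than the paper's:

```latex
% p = (p_1, ..., p_n) is a distribution of flow over candidate routes,
% c_i the cost of route i, and beta > 0 the regularization strength.
\begin{equation}
  p^\star = \operatorname*{arg\,min}_{p \in \Delta^n}
    \sum_i c_i\, p_i - \tfrac{1}{\beta} H(p),
  \qquad H(p) = -\sum_i p_i \log p_i,
\end{equation}
% whose solution is the Gibbs distribution p_i* \propto exp(-beta c_i),
% spreading flow over near-optimal routes instead of a single cheapest one.
% A resilience criterion could then score a disrupted plan q against p*:
\begin{equation}
  R(q) = \mathbb{E}_q[c] + \tfrac{1}{\beta}\,
    D_{\mathrm{KL}}\!\left(q \,\|\, p^\star\right).
\end{equation}
```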
We present a data-driven framework to automate the vectorization and machine interpretation of 2D engineering part drawings. In industrial settings, most manufacturing engineers still rely on manual reads to identify the topological and manufacturing requirements from drawings submitted by designers. The interpretation process is laborious and time-consuming, which severely inhibits the efficiency of part quotation and manufacturing tasks. While recent advances in image-based computer vision methods have demonstrated great potential in interpreting natural images through semantic segmentation approaches, the application of such methods in parsing engineering technical drawings into semantically accurate components remains a significant challenge. The severe pixel sparsity in engineering drawings also restricts the effective featurization of image-based data-driven methods. To overcome these challenges, we propose a deep learning-based framework that predicts the semantic type of each vectorized component. Taking a raster image as input, we vectorize all components through thinning, stroke tracing, and cubic Bézier fitting. Then a graph of such components is generated based on the connectivity between the components. Finally, a graph convolutional neural network is trained on this graph data to identify the semantic type of each component. We test our framework in the context of semantic segmentation of text, dimension, and contour components in engineering drawings. Results show that our method yields the best performance compared with recent image- and graph-based segmentation methods.
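The vectorization stage is concrete enough to sketch. Below is an illustrative least-squares cubic Bézier fit to a traced stroke, with the endpoints pinned to the stroke ends; the chord-length parameterization is our assumption, not necessarily the authors' choice.

```python
import numpy as np

def fit_cubic_bezier(points):
    """Least-squares cubic Bézier fit to an ordered stroke polyline.

    points: (N, 2) array of traced pixel coordinates, N >= 4.
    Returns the four control points P0..P3 (endpoints fixed to the stroke ends).
    """
    pts = np.asarray(points, dtype=float)
    # Chord-length parameterization: t in [0, 1] proportional to arc length.
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]

    # Bernstein basis for a cubic curve, one row per sample point.
    B = np.stack([(1 - t) ** 3,
                  3 * t * (1 - t) ** 2,
                  3 * t ** 2 * (1 - t),
                  t ** 3], axis=1)                       # (N, 4)

    p0, p3 = pts[0], pts[-1]
    # Solve only for the interior control points P1, P2; move the fixed
    # endpoint terms to the right-hand side.
    rhs = pts - np.outer(B[:, 0], p0) - np.outer(B[:, 3], p3)
    sol, *_ = np.linalg.lstsq(B[:, 1:3], rhs, rcond=None)
    return np.vstack([p0, sol[0], sol[1], p3])
```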