We present X-Decoder, a generalized decoding model that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder takes as input two types of queries: (i) generic non-semantic queries and (ii) semantic queries induced from text inputs, to decode different pixel-level and token-level outputs in the same semantic space. With this novel design, X-Decoder is the first work to provide a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks. Further, our design enables seamless interactions across tasks at different granularities and brings mutual benefits by learning a common, rich pixel-level visual-semantic understanding space, without any pseudo-labeling. After pretraining on a mixed set of a limited amount of segmentation data and millions of image-text pairs, X-Decoder exhibits strong transferability to a wide range of downstream tasks in both zero-shot and finetuning settings. Notably, it achieves (1) state-of-the-art results on open-vocabulary segmentation and referring segmentation on eight datasets; (2) better or competitive finetuned performance compared to other generalist and specialist models on segmentation and VL tasks; and (3) flexibility for efficient finetuning and novel task composition (e.g., referring captioning and image editing). Code, demo, video, and visualization are available at https://x-decoder-vl.github.io.
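To make the two-query design concrete, here is a minimal, illustrative PyTorch sketch of decoding generic latent queries and text-induced semantic queries jointly over the same image features; the module names, shapes, and layer choices are our own assumptions, not the released X-Decoder implementation:

```python
import torch
import torch.nn as nn

class GeneralizedDecoderSketch(nn.Module):
    """Sketch of the two-query-type idea: generic latent queries drive
    pixel-level masks, text-induced semantic queries drive token-level
    outputs, and both are decoded in one semantic space. All shapes and
    module choices are illustrative assumptions."""
    def __init__(self, dim=256, num_latent_queries=100, vocab_size=30522):
        super().__init__()
        self.latent_queries = nn.Parameter(torch.randn(num_latent_queries, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.mask_embed = nn.Linear(dim, dim)         # queries -> mask embeddings
        self.token_head = nn.Linear(dim, vocab_size)  # queries -> word logits

    def forward(self, image_feats, pixel_feats, text_queries):
        # image_feats: (B, HW, dim); pixel_feats: (B, dim, H, W)
        # text_queries: (B, T, dim) semantic queries encoded from text
        B = image_feats.size(0)
        latent = self.latent_queries.unsqueeze(0).expand(B, -1, -1)
        queries = torch.cat([latent, text_queries], dim=1)  # decode jointly
        out = self.decoder(queries, image_feats)
        n = latent.size(1)
        # pixel-level output: query embeddings dotted against pixel features
        masks = torch.einsum("bqd,bdhw->bqhw", self.mask_embed(out[:, :n]), pixel_feats)
        # token-level output: language logits from the semantic queries
        logits = self.token_head(out[:, n:])
        return masks, logits
```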
The problem of covariate-shift generalization has attracted intensive research attention. Previous stable learning algorithms employ sample reweighting schemes to decorrelate the covariates when no explicit domain information about the training data is available. However, with finite samples, it is difficult to obtain the desired weights that ensure perfect independence and thereby eliminate the unstable variables. Moreover, decorrelating within the stable variables may inflate the variance of the learned model because of an over-reduced effective sample size; these algorithms therefore require enormous sample sizes to work. In this paper, with theoretical justification, we propose SVI (Sparse Variable Independence) for the covariate-shift generalization problem. We introduce a sparsity constraint to compensate for the imperfection of sample reweighting under the finite-sample setting of previous methods. Furthermore, we organically combine independence-based sample reweighting and sparsity-based variable selection in an iterative scheme to avoid decorrelating within the stable variables, increasing the effective sample size and alleviating variance inflation. Experiments on both synthetic and real-world datasets demonstrate the improvement in covariate-shift generalization performance brought by SVI.
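The iterative alternation between reweighting and sparse selection might look roughly like the sketch below; the decorrelation step is a deliberately crude stand-in for the paper's independence-based weight learning, and all helper names are assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

def svi_sketch(X, y, n_iters=5, alpha=0.1):
    """Illustrative alternation: learn sample weights over the currently
    selected covariates, then run a sparsity-constrained regression on the
    reweighted data to shrink the variable set. The weighting rule here is
    only a toy proxy for the paper's independence-driven scheme."""
    n, p = X.shape
    selected = np.arange(p)
    w = np.ones(n) / n
    for _ in range(n_iters):
        Xs = X[:, selected]
        # (1) toy reweighting: downweight high-leverage samples that most
        # inflate dependence among the selected covariates (an assumption,
        # not the paper's independence criterion)
        z = (Xs - Xs.mean(0)) / (Xs.std(0) + 1e-8)
        w = 1.0 / (1.0 + (z ** 2).sum(1))
        w /= w.sum()
        # (2) sparse regression on reweighted samples: the sparsity
        # constraint compensating for imperfect finite-sample weights
        model = Lasso(alpha=alpha).fit(Xs, y, sample_weight=w * n)
        keep = np.abs(model.coef_) > 1e-6
        if keep.sum() == 0:
            break
        selected = selected[keep]
    return selected, w
```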
Image deblurring is an ill-posed task: for a given blurry image there exist infinitely many feasible solutions. Modern deep learning methods usually discard the learning of blur kernels and adopt end-to-end supervised learning directly. Popular deblurring datasets define the label as one of the feasible solutions. However, we argue that directly specifying such a label is unreasonable, especially when the label is sampled from a random distribution. We therefore propose to let the network learn the distribution of feasible solutions, and on this basis design a novel multi-head output architecture and a corresponding loss function for distribution learning. Our method enables the model to output multiple feasible solutions that approximate the target distribution. We further propose a novel parameter multiplexing method that reduces the number of parameters and the computational cost while improving performance. We evaluate our method on multiple image deblurring models, including the current state-of-the-art NAFNet. The improvement in best-overall PSNR (picking the highest-scoring head for each validation image) outperforms the compared baselines by up to 0.11 to 0.18 dB. The improvement in best-single-head PSNR (picking the head that performs best over the whole validation set) outperforms the compared baselines by up to 0.04 to 0.08 dB. The code is available at https://github.com/liu-sd/multi-actup-deblur.
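A minimal sketch of what a multi-head output architecture with a distribution-oriented loss could look like, assuming a winner-takes-all formulation; the backbone argument, head shapes, and loss are illustrative stand-ins, not the paper's released design:

```python
import torch
import torch.nn as nn

class MultiHeadDeblurSketch(nn.Module):
    """Sketch of a multi-head output architecture: one shared backbone,
    K lightweight heads, each emitting one feasible deblurred image.
    The backbone and head definitions are placeholder assumptions."""
    def __init__(self, backbone, channels=64, num_heads=4):
        super().__init__()
        self.backbone = backbone  # any image-to-feature network
        self.heads = nn.ModuleList(
            nn.Conv2d(channels, 3, kernel_size=3, padding=1)
            for _ in range(num_heads)
        )

    def forward(self, blurry):
        feats = self.backbone(blurry)                           # (B, C, H, W)
        return torch.stack([h(feats) for h in self.heads], 1)   # (B, K, 3, H, W)

def best_of_k_loss(preds, target):
    """One plausible loss for letting the heads cover the solution
    distribution: only the head closest to the sampled label receives
    gradient (winner-takes-all). An assumption standing in for the
    paper's distribution-learning loss."""
    # preds: (B, K, 3, H, W); target: (B, 3, H, W)
    per_head = (preds - target.unsqueeze(1)).abs().mean(dim=(2, 3, 4))  # (B, K)
    return per_head.min(dim=1).values.mean()
```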
The standard supervised learning paradigm works effectively when training data share the same distribution as the upcoming test samples. However, this assumption is often violated in real-world applications, especially when test data arrive in an online fashion. In this paper, we formulate and investigate the problem of online label shift (OLaS): a learner trains an initial model from labeled offline data and then deploys it in an unlabeled online environment, where the underlying label distribution changes over time but the label-conditional density does not. The non-stationarity and the lack of supervision make the problem challenging. To address the difficulty, we construct a new unbiased risk estimator that exploits the unlabeled data; it exhibits many benign properties despite potential non-convexity. Building on it, we propose novel online ensemble algorithms to cope with the non-stationarity of the environment. Our approach enjoys optimal dynamic regret, indicating that its performance is competitive with a clairvoyant who knows the online environment in hindsight and chooses the best decision for each round. The obtained dynamic regret bound scales with the intensity and pattern of label distribution shift, and hence exhibits adaptivity to the OLaS problem. Extensive experiments are conducted to validate the effectiveness of our approach and support the theoretical findings.
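For intuition, one standard construction of a reweighted risk estimate under label shift (black-box shift estimation) is sketched below; the paper's estimator may differ in form, so treat this purely as an illustrative assumption:

```python
import numpy as np

def reweighted_risk_sketch(conf_mat, probs_online, losses_offline, y_offline):
    """Recover the online label marginal from unlabeled predictions via the
    offline confusion matrix, then importance-weight the offline per-sample
    losses. This is the classical black-box shift estimation recipe, used
    here only to illustrate how unlabeled data can debias the risk."""
    # conf_mat[i, j] = P(predict i | true class j), estimated offline
    # probs_online: (n, K) model predictions on unlabeled online data
    pred_marginal = probs_online.mean(axis=0)         # observed P(predict i)
    q = np.linalg.solve(conf_mat, pred_marginal)      # estimated P_online(y)
    q = np.clip(q, 1e-6, None)
    q /= q.sum()
    p = np.bincount(y_offline, minlength=len(q)) / len(y_offline)  # P_offline(y)
    weights = q / p                                   # importance ratios
    return np.mean(weights[y_offline] * losses_offline)  # reweighted risk
```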
Federated learning (FL) can train a global model without sharing the decentralized raw data stored on multiple devices, thereby protecting data privacy. Because device capacities are diverse, FL frameworks struggle with the straggler effect and stale models. In addition, data heterogeneity causes severe accuracy degradation of the global model during FL training. To address these problems, we propose a hierarchical synchronous FL framework, FedHiSyn. FedHiSyn first clusters all available devices into a small number of categories according to their computing capacity. After a certain interval of local training, the models trained in different categories are uploaded to a central server simultaneously. Within a single category, devices exchange their locally updated model weights with one another along a ring topology. Since training efficiency in a ring topology favors devices with homogeneous resources, the capacity-based clustering mitigates the straggler effect. Furthermore, combining synchronous updates across categories with device-to-device communication within a category helps address the data heterogeneity problem while achieving high accuracy. We evaluate the proposed framework on the MNIST, EMNIST, CIFAR10, and CIFAR100 datasets under diverse heterogeneous device settings. Experimental results show that FedHiSyn outperforms six baseline methods, such as FedAvg, SCAFFOLD, and FedAT, in terms of training accuracy and efficiency.
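A high-level sketch of one such round, assuming placeholder `local_train` and `average` helpers and a `capacity` attribute on each device; passing a single model sequentially around each ring is one reading of the intra-category exchange, not the authors' code:

```python
import copy

def fedhisyn_round_sketch(devices, num_groups, local_train, average):
    """One round of the hierarchical scheme described above: cluster devices
    by computing capacity, circulate a model around each group's ring for
    local training, then synchronously average the group results at the
    server. All helpers are assumed placeholders."""
    # (1) capacity-based clustering: sort, then slice into groups
    ordered = sorted(devices, key=lambda d: d.capacity)
    size = max(1, len(ordered) // num_groups)
    groups = [ordered[i:i + size] for i in range(0, len(ordered), size)]

    group_models = []
    for group in groups:
        # (2) ring topology inside a group: each device trains the model
        # locally and hands the updated weights to its ring successor
        model = copy.deepcopy(group[0].model)
        for device in group:
            model = local_train(model, device.data)
        group_models.append(model)

    # (3) synchronous server aggregation across all groups
    return average(group_models)
```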
Crime prediction is crucial for public safety and resource optimization, yet it is highly challenging for two reasons: i) the dynamics of criminal patterns, as crime events are unevenly distributed across the spatial and temporal domains; and ii) the time-evolving dependencies between different types of crime (e.g., theft, robbery, assault, damage), which reveal the fine-grained semantics of crime. To address these challenges, we propose the Spatial-Temporal Sequential Hypergraph Network (ST-SHN) to collectively encode complex spatial-temporal crime patterns as well as the underlying category-wise crime semantic relationships. Specifically, to handle spatial-temporal dynamics in a long-range and global context, we design a graph-structured message passing architecture integrated with a hypergraph learning paradigm. To capture category-wise heterogeneous crime relations in a dynamic environment, we introduce a multi-channel routing mechanism that learns the time-evolving structural dependencies across crime types. We conduct extensive experiments on two real-world datasets, showing that our proposed ST-SHN framework significantly improves prediction performance compared to various state-of-the-art baselines. The source code is available at https://github.com/akaxlh/st-hn.
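A single hypergraph message-passing step, the basic building block the abstract alludes to, might look like the following minimal sketch; the incidence matrix, dimensions, and update rule are illustrative assumptions:

```python
import torch
import torch.nn as nn

class HypergraphPropagationSketch(nn.Module):
    """One hypergraph message-passing step for encoding global crime
    dependencies: region embeddings are pooled onto hyperedges, transformed,
    and scattered back with a residual update. Illustrative only."""
    def __init__(self, dim=64):
        super().__init__()
        self.edge_proj = nn.Linear(dim, dim)
        self.node_proj = nn.Linear(dim, dim)

    def forward(self, x, H):
        # x: (N, dim) region embeddings; H: (N, E) hyperedge incidence matrix
        deg_e = H.sum(0).clamp(min=1).unsqueeze(1)    # hyperedge degrees
        edge_msg = self.edge_proj(H.t() @ x / deg_e)  # node -> hyperedge pooling
        deg_n = H.sum(1).clamp(min=1).unsqueeze(1)    # node degrees
        out = self.node_proj(H @ edge_msg / deg_n)    # hyperedge -> node scatter
        return torch.relu(out + x)                    # residual update
```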
Many previous studies aim to augment collaborative filtering with deep neural network techniques for better recommendation performance. However, most existing deep-learning-based recommender systems are designed for modeling a single type of user-item interaction behavior, and can hardly distill the heterogeneous relations between users and items. In practical recommendation scenarios, there exist multiple types of user behavior, such as browsing and purchasing. Because they overlook users' multi-behavior patterns over different items, existing recommendation methods are insufficient to capture the heterogeneous collaborative signals in user multi-behavior data. Inspired by graph neural networks for structured data modeling, this work proposes a Graph Neural Multi-Behavior Enhanced Recommendation (GNMR) framework, which explicitly models the dependencies between different types of user-item interactions under a graph-based message passing architecture. GNMR devises a relation aggregation network to model interaction heterogeneity, and recursively performs embedding propagation between neighboring nodes over the user-item interaction graph. Experiments on real-world recommendation datasets show that GNMR consistently outperforms state-of-the-art methods. The source code is available at https://github.com/akaxlh/gnmr.
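The relation-aware aggregation idea can be sketched roughly as below, assuming per-behavior normalized adjacency matrices; the layer choices and gating fusion are our assumptions rather than the released GNMR code:

```python
import torch
import torch.nn as nn

class RelationAggregationSketch(nn.Module):
    """Sketch of relation-aware aggregation: messages are gathered
    separately per behavior type (e.g., browse, purchase) over the
    user-item graph, then fused with learned per-relation gates."""
    def __init__(self, dim=64, num_behaviors=2):
        super().__init__()
        self.rel_proj = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(num_behaviors)
        )
        self.gate = nn.Parameter(torch.ones(num_behaviors))

    def forward(self, item_emb, adj_per_behavior):
        # item_emb: (M, dim); adj_per_behavior: list of (N, M) normalized
        # user-item adjacency matrices, one per interaction type
        msgs = [proj(adj @ item_emb)              # propagate per relation
                for proj, adj in zip(self.rel_proj, adj_per_behavior)]
        weights = torch.softmax(self.gate, dim=0)  # fuse heterogeneous signals
        return torch.relu(sum(w * m for w, m in zip(weights, msgs)))
```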
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully exploit the relationship between support and query features within a Transformer-like framework. Our key insights are twofold. First, with the aid of support masks, we can generate dynamic class centers that more appropriately re-weight query features. Second, we find that support object queries have already encoded the key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After these steps, performance on novel classes improves significantly over our strong baseline. Additionally, our framework can be easily extended to incremental FSIS with minor modifications. Benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts, e.g., boosting nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30 shots. We further demonstrate the superiority of our approach on few-shot object detection. Code and models will be available.
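The first reference step, mask-pooled dynamic class centers re-weighting query features, might be sketched as follows; the tensor shapes and sigmoid gating are illustrative assumptions, not the RefT implementation:

```python
import torch

def mask_weighted_enhance_sketch(query_feats, support_feats, support_masks):
    """Support masks pool support features into dynamic class centers,
    which then re-weight the query feature map channel-wise. A sketch of
    the feature-level reference under assumed shapes."""
    # support_feats: (S, C, H, W); support_masks: (S, 1, H, W) in {0, 1}
    # query_feats: (B, C, H, W)
    area = support_masks.sum(dim=(2, 3)).clamp(min=1.0)               # (S, 1)
    centers = (support_feats * support_masks).sum(dim=(2, 3)) / area  # (S, C)
    center = centers.mean(dim=0)                          # class center, (C,)
    weight = torch.sigmoid(center).view(1, -1, 1, 1)      # channel gates
    return query_feats * weight                           # re-weighted queries
```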
For Prognostics and Health Management (PHM) of Lithium-ion (Li-ion) batteries, many models have been established to characterize their degradation process. Existing empirical or physical models can reveal important information about the degradation dynamics. However, there is no general and flexible method to fuse the information represented by those models. Physics-Informed Neural Networks (PINNs) are an efficient tool for fusing empirical or physical dynamic models with data-driven models. To take full advantage of the various information sources, we propose a model fusion scheme based on PINN. It is implemented by developing a semi-empirical, semi-physical Partial Differential Equation (PDE) to model the degradation dynamics of Li-ion batteries. When there is little prior knowledge about the dynamics, we leverage the data-driven Deep Hidden Physics Model (DeepHPM) to discover the underlying governing dynamic models. The uncovered dynamics information is then fused with that mined by the surrogate neural network in the PINN framework. Moreover, an uncertainty-based adaptive weighting method is employed to balance the multiple learning tasks when training the PINN. The proposed methods are verified on a public dataset of Lithium Iron Phosphate (LFP)/graphite batteries.
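The uncertainty-based adaptive weighting can be sketched with the familiar homoscedastic-uncertainty formulation, where each task loss is scaled by a learned precision; whether the paper uses exactly this form is an assumption:

```python
import torch
import torch.nn as nn

class UncertaintyWeightedPINNLoss(nn.Module):
    """Balances the PINN's learning tasks (fitting degradation data and
    satisfying the discovered PDE) by scaling each task loss with a learned
    precision exp(-s_i) plus a regularizer s_i, in the style of
    homoscedastic uncertainty weighting. An illustrative assumption."""
    def __init__(self, num_tasks=2):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))  # s_i = log sigma_i^2

    def forward(self, task_losses):
        total = 0.0
        for loss, s in zip(task_losses, self.log_vars):
            total = total + torch.exp(-s) * loss + s  # precision-weighted + penalty
        return total

# usage sketch: total = criterion([data_loss, pde_residual_loss])
```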
In recent years, arbitrary image style transfer has attracted more and more attention. Given a pair of content and style images, the goal is to synthesize a stylized image that retains the content of the former while capturing the style patterns of the latter. However, it is difficult to balance the trade-off between content details and style features. When an image is stylized with abundant style patterns, the content details may be damaged and sometimes the objects in the image can no longer be clearly distinguished. For this reason, we present STT, a new transformer-based method for image style transfer, together with an edge loss that noticeably enhances content details and avoids the blurred results caused by excessive rendering of style features. Qualitative and quantitative experiments demonstrate that STT achieves performance comparable to state-of-the-art image style transfer methods while alleviating the content leak problem.
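One plausible realization of such an edge loss, assuming a Sobel operator (the abstract does not specify STT's exact edge extractor), is sketched below:

```python
import torch
import torch.nn.functional as F

def edge_loss_sketch(output, content):
    """Compares Sobel edge maps of the stylized output and the content
    image so object contours survive heavy stylization. The choice of
    Sobel gradients and L1 distance is an illustrative assumption."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()

    def edges(img):  # img: (B, 3, H, W) -> gradient magnitude (B, 1, H, W)
        gray = img.mean(dim=1, keepdim=True)
        gx = F.conv2d(gray, kx.view(1, 1, 3, 3).to(img), padding=1)
        gy = F.conv2d(gray, ky.view(1, 1, 3, 3).to(img), padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

    return F.l1_loss(edges(output), edges(content))
```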