Risk scoring systems have been widely deployed in many applications, assigning risk scores to users based on their behavior sequences. Although many deep learning methods with sophisticated designs have achieved promising results, their black-box nature hinders their adoption in settings with fairness, interpretability, and compliance requirements. In these sensitive scenarios, rule-based systems are considered reliable, but building a rule system is labor-intensive: experts must find informative statistics from user behavior sequences, design rules based on those statistics, and assign a weight to each rule. In this paper, we bridge the gap between effective but black-box models and transparent rule models. We propose RuDi, a two-stage method that distills the knowledge of a black-box teacher model into a rule-based student model. In the first stage, a Monte Carlo tree search-based statistics generation method produces a set of informative statistics. In the second stage, the statistics are composed into logical rules with our proposed neural logical networks by imitating the outputs of the teacher model. We evaluate RuDi on three real-world public datasets and one industrial dataset to demonstrate its effectiveness.
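As a rough illustration of the second (distillation) stage described above, the sketch below composes pre-computed statistics into soft logical rules and trains them to imitate a teacher's risk scores. All module and variable names (SoftRuleLayer, RuleStudent, the placeholder tensors) are our own assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SoftRuleLayer(nn.Module):
    """Soft conjunction of binarized statistics: each rule learns which
    statistics it uses (a membership weight) and a threshold per statistic."""
    def __init__(self, n_stats, n_rules):
        super().__init__()
        self.membership = nn.Parameter(torch.rand(n_rules, n_stats))
        self.thresholds = nn.Parameter(torch.zeros(n_stats))

    def forward(self, stats):                               # stats: (batch, n_stats)
        truth = torch.sigmoid(stats - self.thresholds)      # soft "stat > threshold"
        m = torch.sigmoid(self.membership)                  # soft rule membership
        # soft AND: a rule fires only when all of its member statistics hold
        return torch.exp((m * torch.log(truth.unsqueeze(1) + 1e-6)).sum(-1))

class RuleStudent(nn.Module):
    """Weighted sum of rule activations, mimicking the teacher's risk score."""
    def __init__(self, n_stats, n_rules):
        super().__init__()
        self.rules = SoftRuleLayer(n_stats, n_rules)
        self.weights = nn.Linear(n_rules, 1)

    def forward(self, stats):
        return self.weights(self.rules(stats)).squeeze(-1)

# Distillation step: the rule student imitates the teacher's scores.
student = RuleStudent(n_stats=32, n_rules=8)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
stats, teacher_scores = torch.randn(64, 32), torch.randn(64)   # placeholder data
opt.zero_grad()
loss = nn.functional.mse_loss(student(stats), teacher_scores)
loss.backward()
opt.step()
```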
Vector graphics (VG) are ubiquitous in our daily life and are widely used in engineering, architecture, design, and beyond. Most existing recognition methods first render the VG into raster graphics (RG) and then recognize based on the RG format. However, this process discards the geometric structure and loses the high resolution of the VG. Recently, another category of algorithms has been proposed to recognize directly from the raw VG format, but it suffers from topological errors that RG rendering would filter out. Rather than relying on a single format, a better solution is to use the VG and RG formats together to avoid both drawbacks. Moreover, we argue that the VG-to-RG rendering process is essential for effectively combining VG and RG information: by specifying rules on how VG primitives are transferred to RG pixels, the rendering process describes the interaction and correlation between VG and RG. We therefore propose RendNet, a unified architecture for recognition in both 2D and 3D scenarios, which considers the VG and RG representations and exploits their interaction by incorporating the VG-to-RG rasterization process. Experiments show that RendNet achieves state-of-the-art performance on 2D and 3D object recognition tasks across various VG datasets.
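The following is a minimal sketch, under our own assumptions, of how VG and RG features might be fused through the rasterization mapping: per-primitive features are scattered onto the pixel grid using each primitive's coverage mask and concatenated with raster features. The names (VGRGFusion) and toy dimensions are illustrative and not part of RendNet.

```python
import torch
import torch.nn as nn

class VGRGFusion(nn.Module):
    """Fuse primitive-level (VG) and pixel-level (RG) features on the pixel grid."""
    def __init__(self, prim_dim=8, feat_dim=32, grid=16, n_classes=10):
        super().__init__()
        self.vg_enc = nn.Sequential(nn.Linear(prim_dim, feat_dim), nn.ReLU(),
                                    nn.Linear(feat_dim, feat_dim))
        self.rg_enc = nn.Sequential(nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU())
        self.head = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, prims, coverage, raster):
        # prims:    (P, prim_dim) primitive parameters (type, control points, ...)
        # coverage: (P, grid*grid) soft rasterization mask of each primitive
        # raster:   (1, 1, grid, grid) rendered raster image
        vg = self.vg_enc(prims)                              # (P, feat_dim)
        vg_grid = coverage.t() @ vg                          # scatter onto pixels
        rg = self.rg_enc(raster).flatten(2).squeeze(0).t()   # (grid*grid, feat_dim)
        fused = torch.cat([vg_grid, rg], dim=-1).mean(0)     # global pooling
        return self.head(fused)

model = VGRGFusion()
prims, coverage = torch.randn(6, 8), torch.rand(6, 16 * 16)
logits = model(prims, coverage, torch.rand(1, 1, 16, 16))
```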
In this paper, we consider a different data format for images: vector graphics. In contrast to raster graphics, which are widely used in image recognition, vector graphics can be scaled up or down to any resolution without aliasing or information loss, thanks to the analytic representation of the primitives in the document. Furthermore, vector graphics provide extra structural information on how low-level elements group together to form high-level shapes or structures. These merits of vector graphics have not been fully leveraged by existing methods. To explore this data format, we target the fundamental recognition tasks of object localization and classification. We propose an efficient, CNN-free pipeline that does not render the graphics into pixels (i.e., rasterization) and takes the textual document of the vector graphics as input, called YOLaT (You Only Look at Text). YOLaT builds multi-graphs to model the structural and spatial information in vector graphics and proposes a dual-stream graph neural network to detect objects from the graphs. Our experiments show that, by operating directly on vector graphics, YOLaT outperforms raster-graphics-based object detection baselines in terms of both average precision and efficiency.
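A small, assumption-laden sketch of the graph-construction step: control points become nodes, and edges come from stroke order plus spatial proximity. The helper build_graph and its radius parameter are hypothetical, not the YOLaT code.

```python
import torch

def build_graph(paths, radius=0.1):
    """paths: list of (n_i, 2) tensors of control-point coordinates per path."""
    nodes = torch.cat(paths, dim=0)                      # (N, 2) node positions
    edges, offset = [], 0
    for p in paths:                                      # stroke-order edges
        idx = torch.arange(len(p) - 1) + offset
        edges.append(torch.stack([idx, idx + 1]))
        offset += len(p)
    dist = torch.cdist(nodes, nodes)                     # spatial-proximity edges
    close = (dist < radius) & ~torch.eye(len(nodes), dtype=torch.bool)
    src, dst = torch.nonzero(close, as_tuple=True)
    edges.append(torch.stack([src, dst]))
    edge_index = torch.cat(edges, dim=1)                 # (2, E) for a GNN
    return nodes, edge_index

paths = [torch.rand(5, 2), torch.rand(4, 2)]             # two toy paths
nodes, edge_index = build_graph(paths)
```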
Human parsing aims to partition humans in image or video into multiple pixel-level semantic parts. In the last decade, it has gained significantly increased interest in the computer vision community and has been utilized in a broad range of practical applications, from security monitoring, to social media, to visual special effects, just to name a few. Although deep learning-based human parsing solutions have made remarkable achievements, many important concepts, existing challenges, and potential research directions are still confusing. In this survey, we comprehensively review three core sub-tasks: single human parsing, multiple human parsing, and video human parsing, by introducing their respective task settings, background concepts, relevant problems and applications, representative literature, and datasets. We also present quantitative performance comparisons of the reviewed methods on benchmark datasets. Additionally, to promote sustainable development of the community, we put forward a transformer-based human parsing framework, providing a high-performance baseline for follow-up research through universal, concise, and extensible solutions. Finally, we point out a set of under-investigated open issues in this field and suggest new directions for future study. We also provide a regularly updated project page, to continuously track recent developments in this fast-advancing field: https://github.com/soeaver/awesome-human-parsing.
Recent CLIP-guided 3D optimization methods, e.g., DreamFields and PureCLIPNeRF achieve great success in zero-shot text-guided 3D synthesis. However, due to the scratch training and random initialization without any prior knowledge, these methods usually fail to generate accurate and faithful 3D structures that conform to the corresponding text. In this paper, we make the first attempt to introduce the explicit 3D shape prior to CLIP-guided 3D optimization methods. Specifically, we first generate a high-quality 3D shape from input texts in the text-to-shape stage as the 3D shape prior. We then utilize it as the initialization of a neural radiance field and then optimize it with the full prompt. For the text-to-shape generation, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between images synthesized by the text-to-image model and shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, namely, Dream3D, is capable of generating imaginative 3D content with better visual quality and shape accuracy than state-of-the-art methods.
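The toy loop below only illustrates the overall idea of CLIP-guided optimization initialized from a shape prior rather than from scratch; the renderer and CLIP encoders are replaced by random linear stand-ins, and none of the names correspond to the Dream3D implementation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
text_embed = F.normalize(torch.randn(512), dim=0)   # stand-in for CLIP(text)
clip_proj = torch.randn(512, 64)                    # stand-in "render + encode" map

# Key idea: start optimization from a latent derived from the text-to-shape
# prior instead of a random initialization.
shape_prior_latent = torch.randn(64)
scene = shape_prior_latent.clone().requires_grad_(True)

opt = torch.optim.Adam([scene], lr=1e-2)
for step in range(200):
    image_embed = F.normalize(clip_proj @ scene, dim=0)   # "render" the scene
    loss = 1 - torch.dot(image_embed, text_embed)         # CLIP-style similarity loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```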
To reproduce the success of text-to-image (T2I) generation, recent works in text-to-video (T2V) generation employ large-scale text-video datasets for fine-tuning. However, such a paradigm is computationally expensive. Humans have the amazing ability to learn new visual concepts from just one single exemplar. We hereby study a new T2V generation problem, One-Shot Video Generation, where only a single text-video pair is presented for training an open-domain T2V generator. Intuitively, we propose to adapt the T2I diffusion model pretrained on massive image data for T2V generation. We make two key observations: 1) T2I models are able to generate images that align well with the verb terms; 2) extending T2I models to generate multiple images concurrently exhibits surprisingly good content consistency. To further learn continuous motion, we propose Tune-A-Video with a tailored Sparse-Causal Attention, which generates videos from text prompts via an efficient one-shot tuning of pretrained T2I diffusion models. Tune-A-Video is capable of producing temporally-coherent videos for various applications such as change of subject or background, attribute editing, and style transfer, demonstrating the versatility and effectiveness of our method.
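A minimal sketch, under our own assumptions, of the sparse-causal attention pattern described above: frame t attends only to the first frame and frame t-1. The function and projection names are illustrative, not the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sparse_causal_attention(x, to_q, to_k, to_v):
    """x: (T, N, C) video features, T frames of N spatial tokens each."""
    T = x.shape[0]
    out = []
    for t in range(T):
        q = to_q(x[t])                                      # (N, C)
        # keys/values only from the first and previous frame (frame 0 twice at t=0)
        kv = torch.cat([x[0], x[max(t - 1, 0)]], dim=0)     # (2N, C)
        k, v = to_k(kv), to_v(kv)
        attn = F.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)
        out.append(attn @ v)
    return torch.stack(out)                                 # (T, N, C)

T, N, C = 8, 16, 64
x = torch.randn(T, N, C)
to_q, to_k, to_v = nn.Linear(C, C), nn.Linear(C, C), nn.Linear(C, C)
video_features = sparse_causal_attention(x, to_q, to_k, to_v)
```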
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
The recurrent structure is a prevalent framework for the task of video super-resolution, which models the temporal dependency between frames via hidden states. When applied to real-world scenarios with unknown and complex degradations, hidden states tend to contain unpleasant artifacts and propagate them to restored frames. In this circumstance, our analyses show that such artifacts can be largely alleviated when the hidden state is replaced with a cleaner counterpart. Based on the observations, we propose a Hidden State Attention (HSA) module to mitigate artifacts in real-world video super-resolution. Specifically, we first adopt various cheap filters to produce a hidden state pool. For example, Gaussian blur filters are for smoothing artifacts while sharpening filters are for enhancing details. To aggregate a new hidden state that contains fewer artifacts from the hidden state pool, we devise a Selective Cross Attention (SCA) module, in which the attention between input features and each hidden state is calculated. Equipped with HSA, our proposed method, namely FastRealVSR, is able to achieve 2x speedup while obtaining better performance than Real-BasicVSR. Codes will be available at https://github.com/TencentARC/FastRealVSR
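A rough sketch, based only on the abstract, of how a hidden-state pool and selective cross attention might fit together; selective_cross_attention and the filter choices here are assumptions, not the FastRealVSR implementation.

```python
import torch
import torch.nn.functional as F

def selective_cross_attention(feat, hidden_pool):
    """feat: (C, H, W) input features; hidden_pool: (K, C, H, W) filtered states."""
    K, C = hidden_pool.shape[0], feat.shape[0]
    q = feat.flatten(1).mean(-1)                        # (C,) global query
    k = hidden_pool.flatten(2).mean(-1)                 # (K, C) one key per state
    weights = F.softmax(k @ q / C ** 0.5, dim=0)        # (K,) relevance of each state
    return (weights.view(K, 1, 1, 1) * hidden_pool).sum(0)   # aggregated hidden state

# Hidden-state pool built with cheap filters (here: identity, blur, sharpen).
state = torch.randn(1, 16, 32, 32)
blur = F.avg_pool2d(state, 3, 1, 1)
sharpen = state + (state - blur)
pool = torch.cat([state, blur, sharpen], dim=0)         # (3, 16, 32, 32)
new_state = selective_cross_attention(state[0], pool)
```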
In contrastive self-supervised learning, the common way to learn discriminative representation is to pull different augmented "views" of the same image closer while pushing all other images further apart, which has been proven to be effective. However, it is unavoidable to construct undesirable views containing different semantic concepts during the augmentation procedure. It would damage the semantic consistency of representation to pull these augmentations closer in the feature space indiscriminately. In this study, we introduce feature-level augmentation and propose a novel semantics-consistent feature search (SCFS) method to mitigate this negative effect. The main idea of SCFS is to adaptively search semantics-consistent features to enhance the contrast between semantics-consistent regions in different augmentations. Thus, the trained model can learn to focus on meaningful object regions, improving the semantic representation ability. Extensive experiments conducted on different datasets and tasks demonstrate that SCFS effectively improves the performance of self-supervised learning and achieves state-of-the-art performance on different downstream tasks.
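The sketch below is one possible reading of the feature search step: each spatial feature in one augmentation attends over the other augmentation to find its most semantics-consistent counterpart, which is then used as the contrastive target. Function names and the temperature value are assumptions, not the SCFS code.

```python
import torch
import torch.nn.functional as F

def semantics_consistent_features(f1, f2, temperature=0.1):
    """f1, f2: (N, C) spatial features (flattened H*W tokens) of two augmentations."""
    f1n, f2n = F.normalize(f1, dim=-1), F.normalize(f2, dim=-1)
    attn = F.softmax(f1n @ f2n.t() / temperature, dim=-1)   # (N, N) similarity search
    return attn @ f2n                                       # matched features in view 2

f1, f2 = torch.randn(49, 128), torch.randn(49, 128)
matched = semantics_consistent_features(f1, f2)
# Contrast each view-1 feature with its searched counterpart (cosine similarity).
loss = 1 - F.cosine_similarity(F.normalize(f1, dim=-1), matched, dim=-1).mean()
```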
Recently, the success of pre-training in the text domain has been fully extended to vision, audio, and cross-modal scenarios. The proposed pre-training models of different modalities show a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into 5 components: embedding, encoder, target embedding, decoder, and target. As almost all common modules are provided in each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
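A toy illustration of the modular-assembly idea (not the actual TencentPretrain API): components are registered by name and a model is built from a small config choosing one module per component.

```python
import torch
import torch.nn as nn

EMBEDDINGS = {"word": lambda d: nn.Embedding(30000, d)}
ENCODERS = {"transformer": lambda d: nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d, nhead=8, batch_first=True), num_layers=2)}
TARGETS = {"mlm": lambda d: nn.Linear(d, 30000)}

def build_model(config):
    """Assemble a model by picking one module per component from the registries."""
    d = config["hidden_size"]
    return nn.Sequential(EMBEDDINGS[config["embedding"]](d),
                         ENCODERS[config["encoder"]](d),
                         TARGETS[config["target"]](d))

model = build_model({"embedding": "word", "encoder": "transformer",
                     "target": "mlm", "hidden_size": 256})
logits = model(torch.randint(0, 30000, (2, 16)))   # (2, 16, 30000)
```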