While deep learning has been widely used for video analytics, such as video classification and action detection, dense action detection with fast-moving subjects in sports videos remains challenging. In this work, we release yet another sports video dataset $\textbf{P}^2\textbf{A}$ for $\underline{P}$ing $\underline{P}$ong-$\underline{A}$ction detection, which consists of 2,721 video clips collected from broadcast videos of professional table tennis matches at World Table Tennis Championships and the Olympic Games. We worked with a crew of table tennis professionals and referees to label every ping-pong action appearing in the dataset, and we formulate two sets of action detection problems: action localization and action recognition. We evaluate a number of commonly used action localization and recognition models on $\textbf{P}^2\textbf{A}$ for both problems under various settings. These models achieve only 48% area under the AR-AN curve for localization and 82% accuracy for recognition, since ping-pong actions are dense and involve fast-moving subjects while the broadcast videos run at only 25 FPS. The results confirm that $\textbf{P}^2\textbf{A}$ remains a challenging task and can serve as a benchmark for action detection from videos.
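For readers unfamiliar with the localization metric quoted above, the sketch below computes the area under an AR-AN (average recall vs. average number of proposals) curve; the recall values and AN grid are hypothetical stand-ins, not numbers from $\textbf{P}^2\textbf{A}$.

```python
import numpy as np

def ar_an_auc(an_values, recalls):
    """Area under the AR-AN curve, normalized by the AN range
    so the result lies in [0, 1]."""
    an = np.asarray(an_values, dtype=float)
    ar = np.asarray(recalls, dtype=float)
    return np.trapz(ar, an) / (an[-1] - an[0])

# Hypothetical AR values at AN = 1..100 proposals per video.
an_values = np.arange(1, 101)
recalls = 1.0 - np.exp(-0.03 * an_values)  # stand-in recall curve
print(f"area under AR-AN curve: {ar_an_auc(an_values, recalls):.3f}")
```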
Automated social behavior analysis of mice has become an increasingly popular research area in behavioral neuroscience. Recently, pose information (i.e., the locations of keypoints or skeletons) has been used to interpret the social behaviors of mice. Nevertheless, effective encoding and decoding of the social interaction information underlying mouse keypoints has rarely been investigated in existing methods. In particular, modeling complex social interactions between mice is challenging due to their highly deformable body shapes and ambiguous movement patterns. To address this interaction modeling problem, we propose a Cross-Skeleton Interaction Graph Aggregation Network (CS-IGANet) to learn the rich dynamics of freely interacting mice, where a Cross-Skeleton Node-Level Interaction module (CS-NLI) is used to model multi-level interactions (i.e., intra-, inter-, and cross-skeleton interactions). Furthermore, we design a novel Interaction-Aware Transformer (IAT) to dynamically learn the graph-level representation of social behaviors and update the node-level representation, guided by our proposed interaction-aware self-attention mechanism. Finally, to enhance the representation ability of our model, an auxiliary self-supervised learning task is proposed to measure the similarity between cross-skeleton nodes. Experimental results on the standard CRIM13-Skeleton dataset and our PDMB-Skeleton dataset show that our proposed model outperforms several other state-of-the-art methods.
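As a rough illustration of the interaction-aware self-attention idea described above, the sketch below biases standard self-attention toward cross-skeleton node pairs; the bias construction and all sizes are assumptions for illustration, not the exact CS-NLI/IAT design.

```python
import torch
import torch.nn as nn

class InteractionAwareAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.scale = dim ** -0.5

    def forward(self, x, interaction_bias):
        # x: (batch, nodes, dim) node features for both skeletons stacked;
        # interaction_bias: (nodes, nodes) affinity that up-weights
        # cross-skeleton node pairs before the softmax.
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale + interaction_bias
        return attn.softmax(dim=-1) @ v

# Two mice x 12 keypoints = 24 nodes with 64-d features (hypothetical sizes).
x = torch.randn(2, 24, 64)
bias = torch.zeros(24, 24)
bias[:12, 12:] = bias[12:, :12] = 1.0  # emphasize cross-skeleton pairs
out = InteractionAwareAttention(64)(x, bias)
print(out.shape)  # torch.Size([2, 24, 64])
```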
Layout design is ubiquitous in many applications, such as architecture and urban planning, and involves a lengthy, iterative design process. Recently, deep learning has been leveraged to automatically generate layouts via image generation, showing great potential to free designers from laborious routines. While automatic generation can greatly boost productivity, designer input is undoubtedly crucial. An ideal AI-aided design tool should automate repetitive routines while also accepting human guidance and providing smart, proactive suggestions. However, the capability of keeping humans in the loop has been largely ignored by existing methods, which are mostly end-to-end approaches. To this end, we propose a new human-in-the-loop generative model, iPLAN, which automatically generates layouts while also interacting with the designer throughout the process, enabling human and AI to gradually co-evolve a rough idea into the final design. iPLAN is evaluated on diverse datasets and compared with existing methods. The results show that iPLAN achieves high fidelity in producing layouts similar to those of human designers, great flexibility in accepting designer input and providing corresponding design suggestions, and strong generalizability when facing unseen design tasks and limited training data.
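To make the human-in-the-loop procedure concrete, here is a minimal, self-contained sketch of a propose/feedback/apply session loop; the StubModel and the designer-feedback callback are hypothetical stand-ins, not iPLAN's actual interface.

```python
import random

class StubModel:
    """Placeholder generator: proposes where to place the next room."""
    def propose(self, canvas):
        return {"room": random.choice(["kitchen", "bedroom", "bath"]),
                "cell": random.randrange(len(canvas))}

    def apply(self, canvas, proposal):
        canvas = list(canvas)
        canvas[proposal["cell"]] = proposal["room"]
        return canvas

def design_session(model, canvas, feedback, steps=3):
    for _ in range(steps):
        proposal = model.propose(canvas)        # AI suggests a refinement
        proposal = feedback(proposal)           # designer accepts or edits it
        canvas = model.apply(canvas, proposal)  # the design co-evolves
    return canvas

# Auto-accepting feedback for demonstration; a real session would ask the designer.
final = design_session(StubModel(), ["empty"] * 6, feedback=lambda p: p)
print(final)
```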
Brain tumor segmentation is one of the most challenging problems in medical image analysis. Its goal is to produce an accurate delineation of brain tumor regions. In recent years, deep learning methods have shown promising performance on various computer vision problems, such as image classification, object detection, and semantic segmentation. Deep learning-based methods have likewise been applied to brain tumor segmentation with promising results. Given the remarkable breakthroughs made by state-of-the-art techniques, we use this survey to provide a comprehensive study of recently developed deep learning-based brain tumor segmentation techniques. More than 100 scientific papers are selected and discussed, extensively covering technical aspects such as network architecture design, segmentation under imbalanced conditions, and multi-modality processing. We also provide an insightful discussion of future development directions.
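The survey mentions segmentation under imbalanced conditions; the Dice loss below is one canonical remedy discussed in this literature, given here as a minimal sketch rather than any particular surveyed method.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    """pred: (batch, ...) probabilities in [0, 1]; target: same shape, {0, 1}.
    Overlap-based, so it is far less dominated by the empty background
    than plain voxel-wise cross-entropy on small tumor regions."""
    pred, target = pred.flatten(1), target.flatten(1)
    inter = (pred * target).sum(dim=1)
    denom = pred.sum(dim=1) + target.sum(dim=1)
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

pred = torch.rand(2, 1, 64, 64, 64)                      # hypothetical sigmoid outputs
target = (torch.rand(2, 1, 64, 64, 64) > 0.98).float()   # sparse tumor voxels
print(dice_loss(pred, target).item())
```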
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do this work. We first discuss how AI can be used to enhance the results of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors, which cannot be extended to novel domains and classes. To tackle these limitations, we introduce embedding learned from Contrastive Language-Image Pre-training (CLIP) to segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve the state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
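A minimal sketch of the core idea, driving a segmentation head with CLIP text embeddings so that class-name embeddings act as prototypes matched against per-voxel features; the shapes, the random stand-in embeddings, and the single projection layer are illustrative assumptions, not the Universal Model's exact architecture.

```python
import torch
import torch.nn as nn

class CLIPDrivenHead(nn.Module):
    def __init__(self, feat_dim=64, clip_dim=512, num_classes=31):
        super().__init__()
        # Frozen CLIP text embeddings for prompts like "a CT scan of a liver";
        # random tensors here as stand-ins for the real embeddings.
        self.register_buffer("text_emb", torch.randn(num_classes, clip_dim))
        self.proj = nn.Linear(clip_dim, feat_dim)  # text space -> voxel-feature space

    def forward(self, voxel_feats):
        # voxel_feats: (batch, feat_dim, D, H, W) from any backbone
        prototypes = self.proj(self.text_emb)  # (classes, feat_dim)
        logits = torch.einsum("bcdhw,kc->bkdhw", voxel_feats, prototypes)
        return logits.sigmoid()  # one binary mask per organ/tumor class

head = CLIPDrivenHead()  # 25 organs + 6 tumor types = 31 classes
masks = head(torch.randn(1, 64, 8, 32, 32))
print(masks.shape)  # torch.Size([1, 31, 8, 32, 32])
```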
Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative, with the goal of preserving invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not contain enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate the locality problem of comparative SSL, we propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics. We also address the preservation of scale information, a powerful aid to image understanding that has not drawn much attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations.
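A minimal sketch of the multi-task formulation: at each pyramid scale, a pixel restoration loss is combined with a siamese feature comparison loss. The loss shapes, the MSE/cosine choices, and the weighting are illustrative assumptions, not PCRLv2's exact objectives.

```python
import torch
import torch.nn.functional as F

def pcrl_style_loss(restored, target_imgs, feats_a, feats_b, alpha=1.0):
    """restored/target_imgs: lists of per-scale image tensors;
    feats_a/feats_b: lists of per-scale features from two augmented views."""
    loss = 0.0
    for rec, tgt, fa, fb in zip(restored, target_imgs, feats_a, feats_b):
        loss += F.mse_loss(rec, tgt)                   # multi-scale pixel restoration
        loss += alpha * (1 - F.cosine_similarity(
            fa.flatten(1), fb.flatten(1)).mean())      # siamese feature comparison
    return loss

# Hypothetical two-scale pyramid on 3D patches.
scales = [(1, 1, 16, 32, 32), (1, 1, 8, 16, 16)]
restored = [torch.rand(s) for s in scales]
targets = [torch.rand(s) for s in scales]
fa = [torch.randn(1, 32), torch.randn(1, 64)]
fb = [torch.randn(1, 32), torch.randn(1, 64)]
print(pcrl_style_loss(restored, targets, fa, fb).item())
```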
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality, etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
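A minimal sketch of masked image-token modeling with confidence-based parallel decoding, the mechanism described above. The tiny transformer, the additive text conditioning, and the cosine masking schedule are generic illustrations, not Muse's actual architecture.

```python
import math
import torch
import torch.nn as nn

VOCAB, MASK_ID, SEQ = 1024, 1024, 64  # image-token ids 0..1023 plus a [MASK] id

class TokenPredictor(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.emb = nn.Embedding(VOCAB + 1, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, VOCAB)

    def forward(self, tokens, text_emb):
        # Text conditioning via a simple additive embedding (an assumption).
        h = self.emb(tokens) + text_emb.unsqueeze(1)
        return self.head(self.encoder(h))  # logits over the image-token vocab

@torch.no_grad()
def parallel_decode(model, text_emb, steps=8):
    tokens = torch.full((1, SEQ), MASK_ID)
    for t in range(steps):
        conf, pred = model(tokens, text_emb).softmax(-1).max(-1)
        pred = torch.where(tokens == MASK_ID, pred, tokens)  # keep fixed tokens
        conf = torch.where(tokens == MASK_ID, conf, torch.full_like(conf, float("inf")))
        n_mask = math.floor(SEQ * math.cos(math.pi / 2 * (t + 1) / steps))
        tokens = pred.clone()
        if n_mask > 0:  # re-mask the least confident predictions
            tokens.scatter_(1, conf.argsort(-1)[:, :n_mask], MASK_ID)
    return tokens

model = TokenPredictor()
tokens = parallel_decode(model, torch.randn(1, 128))
print(tokens.shape, (tokens == MASK_ID).sum().item())  # torch.Size([1, 64]) 0
```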
Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning (RL), but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality and outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.
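A minimal sketch of the greedy dynamic feature selection loop with an amortized value network: at each step, a network scores every unselected feature (standing in for its conditional mutual information given the observed values) and the highest-scoring one is queried next. The network here is an untrained stand-in, not the paper's trained estimator.

```python
import torch
import torch.nn as nn

n_features = 10

# Amortized scorer: takes masked feature values plus the observation mask and
# outputs one score per feature (a stand-in for conditional mutual information).
value_net = nn.Sequential(
    nn.Linear(2 * n_features, 64), nn.ReLU(), nn.Linear(64, n_features)
)

@torch.no_grad()
def select_features(x, budget=3):
    mask = torch.zeros_like(x)  # 1.0 where a feature has been queried
    for _ in range(budget):
        scores = value_net(torch.cat([x * mask, mask], dim=-1))
        scores[mask.bool()] = float("-inf")  # never re-query a feature
        mask[scores.argmax()] = 1.0          # greedy: largest estimated CMI
    return mask

x = torch.randn(n_features)                     # one sample's full feature vector
print(select_features(x).nonzero().flatten())   # indices queried for this sample
```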
Human parsing aims to partition humans in image or video into multiple pixel-level semantic parts. In the last decade, it has gained significantly increased interest in the computer vision community and has been utilized in a broad range of practical applications, from security monitoring, to social media, to visual special effects, just to name a few. Although deep learning-based human parsing solutions have made remarkable achievements, many important concepts, existing challenges, and potential research directions are still confusing. In this survey, we comprehensively review three core sub-tasks: single human parsing, multiple human parsing, and video human parsing, by introducing their respective task settings, background concepts, relevant problems and applications, representative literature, and datasets. We also present quantitative performance comparisons of the reviewed methods on benchmark datasets. Additionally, to promote sustainable development of the community, we put forward a transformer-based human parsing framework, providing a high-performance baseline for follow-up research through universal, concise, and extensible solutions. Finally, we point out a set of under-investigated open issues in this field and suggest new directions for future study. We also provide a regularly updated project page, to continuously track recent developments in this fast-advancing field: https://github.com/soeaver/awesome-human-parsing.