Modern mobile burst photography pipelines capture and merge a short sequence of frames to recover an enhanced image, but often disregard the 3D nature of the scene they capture, treating pixel motion between images as a 2D aggregation problem. We show that in a "long-burst", forty-two 12-megapixel RAW frames captured in a two-second sequence, there is enough parallax information from natural hand tremor alone to recover high-quality scene depth. To this end, we devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion. Our plane plus depth model is trained end-to-end, and performs coarse-to-fine refinement by controlling which multi-resolution volume features the network has access to at what time during training. We validate the method experimentally, and demonstrate geometrically accurate depth reconstructions with no additional hardware or separate data pre-processing and pose-estimation steps.
We propose a new neural network design paradigm, the Reversible Column Network (RevCol). The main body of RevCol is composed of multiple copies of a subnetwork, named columns, between which multi-level reversible connections are employed. This architectural scheme gives RevCol behavior very different from conventional networks: during forward propagation, features in RevCol are gradually disentangled as they pass through each column, while their total information is maintained rather than compressed or discarded as in other networks. Our experiments suggest that CNN-style RevCol models can achieve very competitive performance on multiple computer vision tasks such as image classification, object detection, and semantic segmentation, especially with large parameter budgets and large datasets. For example, after ImageNet-22K pre-training, RevCol-XL obtains 88.2% ImageNet-1K accuracy. Given more pre-training data, our largest model, RevCol-H, reaches 90.0% on ImageNet-1K, 63.8% APbox on the COCO detection minival set, and 61.0% mIoU on ADE20k segmentation. To our knowledge, these are the best COCO detection and ADE20k segmentation results among pure (static) CNN models. Moreover, as a general macro-architecture design, RevCol can also be introduced into transformers or other neural networks, which is demonstrated to improve performance in both computer vision and NLP tasks. We release code and models at https://github.com/megvii-research/RevCol
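The information-preserving property of reversible connections can be illustrated with a two-stream reversible unit in the style RevCol builds on; the minimal NumPy sketch below uses hypothetical placeholder transforms `F` and `G` in place of the column sub-networks, and is not the paper's exact multi-level scheme. Because the mapping is invertible, the inputs are recovered exactly from the outputs, so no information is compressed or discarded.

```python
import numpy as np

def F(x):  # placeholder non-linear transform (stands in for a column block)
    return np.tanh(x) * 0.5

def G(x):  # placeholder non-linear transform
    return np.sin(x) * 0.5

def forward(x1, x2):
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):
    x2 = y2 - G(y1)   # recover x2 first, using only the outputs
    x1 = y1 - F(x2)   # then recover x1
    return x1, x2

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=8), rng.normal(size=8)
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
print(np.allclose(x1, r1) and np.allclose(x2, r2))  # True
```

The inverse needs no stored activations, which is also why reversible designs can trade compute for memory during training.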
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice, as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Previous work on controllable text generation has explored the idea of control from the latent space, such as optimizing a representation with attribute-related classifiers or sampling a representation from relevant discrete samples. However, these approaches are not effective enough at modeling both the latent space and the control, leaving the controlled text with low quality and diversity. In this work, we propose a novel control framework using probability density estimation in the latent space. Our method utilizes an invertible transformation function, a Normalizing Flow, that maps the complex distributions in the latent space to simple Gaussian distributions in the prior space. Thus, we can perform sophisticated and flexible control in the prior space and feed the control effects back into the latent space, owing to the one-to-one mapping property of invertible transformations. Experiments on single-attribute and multi-attribute control reveal that our method outperforms several strong baselines on attribute relevance and text quality, achieving state-of-the-art results. Further analysis of control-strength adjustment demonstrates the flexibility of our control strategy.
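The core mechanism can be sketched with a single affine coupling layer, a common normalizing-flow building block: edit a latent code in the simple prior space, then map the edit back exactly through the inverse. This is a toy stand-in for the paper's flow; the scale/shift conditioners and the additive "control" are illustrative assumptions, not the authors' design.

```python
import numpy as np

def coupling_forward(z):
    z1, z2 = z[: len(z) // 2], z[len(z) // 2 :]
    s, t = np.tanh(z1), 0.3 * z1          # toy scale/shift conditioners
    u2 = z2 * np.exp(s) + t               # transform the second half
    return np.concatenate([z1, u2])       # code in the prior space

def coupling_inverse(u):
    u1, u2 = u[: len(u) // 2], u[len(u) // 2 :]
    s, t = np.tanh(u1), 0.3 * u1
    z2 = (u2 - t) * np.exp(-s)
    return np.concatenate([u1, z2])

rng = np.random.default_rng(1)
z = rng.normal(size=6)                    # latent code from some encoder
u = coupling_forward(z)                   # move to the prior space
u_ctrl = u + 0.1                          # apply a "control" shift in the prior
z_ctrl = coupling_inverse(u_ctrl)         # feed the control effect back
print(np.allclose(coupling_inverse(u), z))  # True: one-to-one mapping
```

Because the map is bijective, no information is lost in either direction, which is what makes control in the prior space faithful to the latent space.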
Neural volumetric representations have become a widely adopted model for radiance fields in 3D scenes. These representations are fully implicit or hybrid function approximators of the instantaneous volumetric radiance in a scene, typically learned from multi-view captures of the scene. We investigate the new task of neural volume super-resolution: rendering high-resolution views corresponding to a scene captured at low resolution. To this end, we propose a neural super-resolution network that operates directly on the volumetric representation of the scene. This approach allows us to exploit an advantage of operating in the volumetric domain, namely the ability to guarantee consistent super-resolution across different viewing directions. To realize our method, we devise a novel 3D representation that hinges on multiple 2D feature planes. This allows us to super-resolve the 3D scene representation by applying 2D convolutional networks on the 2D feature planes. We validate the proposed method's capability to super-resolve multi-view-consistent views both quantitatively and qualitatively on a diverse set of unseen 3D scenes, demonstrating a significant advantage over existing approaches.
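The appeal of a plane-based representation is that a 3D query reduces to 2D lookups. The sketch below shows the general idea under simplifying assumptions (nearest-cell sampling instead of bilinear interpolation, sum aggregation, arbitrary resolution and channel count); it is not the paper's exact design, but it makes clear why ordinary 2D CNNs can then be applied to super-resolve the planes.

```python
import numpy as np

R, C = 16, 4                                   # plane resolution, feature channels
rng = np.random.default_rng(2)
# Three axis-aligned 2D feature planes jointly encode the 3D scene.
planes = {k: rng.normal(size=(R, R, C)) for k in ("xy", "xz", "yz")}

def query(p):
    """Gather a feature for a 3D point p in [0, 1]^3 from the three planes."""
    x, y, z = (np.clip(p, 0, 1) * (R - 1)).round().astype(int)  # nearest cell
    return planes["xy"][x, y] + planes["xz"][x, z] + planes["yz"][y, z]

feat = query(np.array([0.5, 0.25, 0.75]))
print(feat.shape)  # (4,)
```

Since every 3D query reads from the same planes, upsampling the planes once yields view-consistent detail for all rendering directions.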
Medical images play an important role in clinical applications. Multimodal medical images can provide rich information about patients for physicians to make a diagnosis. Image fusion techniques synthesize complementary information from multimodal images into a single image, sparing radiologists from switching back and forth between different images and saving a great deal of time in the diagnostic process. In this paper, we introduce a novel Dilated Residual Attention Network for the medical image fusion task. Our network is capable of extracting multi-scale deep semantic features. Furthermore, we propose a novel fixed fusion strategy, termed the Softmax-based weighted strategy, based on Softmax weights and the matrix nuclear norm. Extensive experiments show that our proposed network and fusion strategy exceed state-of-the-art performance compared with reference image fusion methods on four commonly used fusion metrics.
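A Softmax-weighted fusion in the spirit of this strategy can be sketched as follows: score each source's feature map by its matrix nuclear norm (the sum of its singular values), turn the scores into weights with a Softmax, and blend by weighted sum. The single-channel feature maps and the exact scoring are simplifying assumptions for illustration, not the paper's precise formulation.

```python
import numpy as np

def nuclear_norm(m):
    """Sum of singular values of a 2D feature map."""
    return np.linalg.svd(m, compute_uv=False).sum()

def fuse(feats):
    """feats: list of (H, W) feature maps from different modalities."""
    scores = np.array([nuclear_norm(f) for f in feats])
    w = np.exp(scores - scores.max())     # numerically stable Softmax
    w /= w.sum()                          # fusion weights, sum to 1
    return sum(wi * fi for wi, fi in zip(w, feats)), w

rng = np.random.default_rng(3)
a, b = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
fused, weights = fuse([a, b])
print(fused.shape, round(float(weights.sum()), 6))  # (8, 8) 1.0
```

The nuclear norm acts as a rough activity measure, so the modality carrying more structured information receives a larger fusion weight without any learned parameters, which is why the strategy is "fixed".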
Air pollution is a crucial issue affecting human health and livelihoods, as well as one of the barriers to economic and social growth. Forecasting air quality has become an increasingly important endeavor with significant social impact, especially in emerging countries like China. In this paper, we present a novel Transformer architecture termed AirFormer to collectively predict nationwide air quality in China, with an unprecedentedly fine spatial granularity covering thousands of locations. AirFormer decouples the learning process into two stages: 1) a bottom-up deterministic stage that contains two new types of self-attention mechanisms to efficiently learn spatio-temporal representations; 2) a top-down stochastic stage with latent variables to capture the intrinsic uncertainty of air quality data. We evaluate AirFormer with 4 years of data from 1,085 stations in the Chinese Mainland. Compared to the state-of-the-art model, AirFormer reduces prediction errors by 5% to 8% on 72-hour future predictions. Our source code is available at https://github.com/yoshall/airformer.
RGB-thermal salient object detection (SOD) combines two spectra to segment visually salient regions in images. Most existing methods use boundary maps to learn sharp boundaries. These methods ignore the interactions between isolated boundary pixels and other confident pixels, leading to suboptimal performance. To address this, we propose a position-aware relation learning network (PRLNet) for RGB-T SOD based on the Swin Transformer. PRLNet explores the distance and direction relations between pixels to strengthen intra-class compactness and inter-class separation, generating salient object masks with clear boundaries and homogeneous regions. Specifically, we develop a novel signed distance map auxiliary module (SDMAM) to improve encoder feature representations, which takes into account the distance relations of different pixels in boundary neighborhoods. Then, we design a feature refinement approach with directional fields (FRDF), which rectifies features in boundary neighborhoods by exploiting the features inside salient objects. FRDF utilizes the directional information between object pixels to effectively enhance the intra-class compactness of salient regions. In addition, we construct a pure-transformer encoder-decoder network to enhance multispectral feature representations for RGB-T SOD. Finally, we conduct quantitative and qualitative experiments on three public benchmark datasets. The results demonstrate that our proposed method outperforms state-of-the-art methods.
VQA is an ambitious task aiming to answer any image-related question. However, in practice, it is hard to build such a system once and for all, since users' demands keep updating and the system has to implement new functions. Thus, continual learning (CL) ability is a must for developing an advanced VQA system. Recently, a pioneering work split a VQA dataset into disjoint answer sets to study this topic. However, CL on VQA involves more than the expansion of label sets (new answer sets). It is crucial to study how to answer questions when deploying a VQA system to a new environment (new visual scenes), and how to answer questions requiring new functions (new question types). Thus, we propose CLOVE, a benchmark for Continual Learning On Visual quEstion answering, which contains scene-incremental and function-incremental settings for the two aforementioned CL scenarios. In terms of methodology, the main difference between CL on VQA and classification is that the former additionally involves expanding, and preventing forgetting of, reasoning mechanisms, while the latter focuses on class representations. Thus, we propose a real-data-free replay-based method tailored for CL on VQA, named Scene Graph as Prompt for Symbolic Replay. It uses a piece of scene graph as a prompt to replay pseudo scene graphs, representing past images together with correlated QA pairs. A unified VQA model is also proposed to utilize the current and replayed data to enhance its QA ability. Finally, experimental results reveal challenges in CLOVE and demonstrate the effectiveness of our method. The dataset and code will be available at https://github.com/showlab/clvqa.
Skeleton-based action recognition has attracted attention for its computational efficiency and robustness to lighting conditions. Existing skeleton-based action recognition methods are typically formulated as a one-hot classification task, without fully exploiting the semantic relations between actions. For example, "make victory sign" and "thumb up" are two hand gesture actions whose major difference lies in the movement of the hands. This information is agnostic to the categorical one-hot encoding of action classes but can be revealed by the language description of the actions. Therefore, utilizing action language descriptions in training could benefit representation learning. In this work, we propose a Language Supervised Training (LST) approach for skeleton-based action recognition. More specifically, we employ a large-scale language model as a knowledge engine to provide text descriptions of body-part movements for actions, and propose a multimodal training scheme that uses the text encoder to generate feature vectors for different body parts and to supervise the skeleton encoder for action representation learning. Experiments show that our proposed LST method achieves noticeable improvements over various baseline models without extra computational cost at inference time. LST achieves new state-of-the-art results on popular skeleton-based action recognition benchmarks, including NTU RGB+D, NTU RGB+D 120 and NW-UCLA. The code can be found at https://github.com/martinxm/lst.
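The text-supervised part of such a scheme amounts to aligning part-level skeleton features with part-level text features. The sketch below stubs both encoders with random features and uses a cosine alignment loss; the loss form, feature dimension, and number of body parts are illustrative assumptions, not the paper's exact training objective.

```python
import numpy as np

rng = np.random.default_rng(4)
D, P = 32, 5                                   # feature dim, number of body parts
txt = rng.normal(size=(P, D))                  # part-level text features (frozen text encoder)
skel = rng.normal(size=(P, D))                 # part-level skeleton encoder features

def cosine_align_loss(a, b):
    """Mean (1 - cosine similarity) over parts; 0 when perfectly aligned."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float((1.0 - (a * b).sum(axis=1)).mean())

loss = cosine_align_loss(skel, txt)
print(cosine_align_loss(txt, txt) < 1e-9)  # True: identical features align perfectly
```

Since the text encoder is used only to produce supervision targets during training, it can be discarded at inference, which is consistent with the abstract's claim of no extra inference cost.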