In our recent dietary assessment field studies on passive dietary monitoring in Ghana, we have collected over 250k in-the-wild images. The dataset is an ongoing effort to facilitate accurate measurement of individual food and nutrient intake in low- and middle-income countries with passive monitoring camera technologies. The current dataset involves 20 households (74 subjects) from both rural and urban regions of Ghana, and two different types of wearable cameras were used in the studies. Once initiated, the wearable cameras continuously capture subjects' activities, which yields massive amounts of data to be cleaned and annotated before analysis can be conducted. To ease the data post-processing and annotation tasks, we propose a novel self-supervised learning framework that clusters the large volume of egocentric images into separate events, where each event consists of a sequence of temporally continuous and contextually similar images. By clustering images into separate events, annotators and dietitians can examine and analyze the data more efficiently, which facilitates the subsequent dietary assessment process. Validated on a held-out test set with ground-truth labels, the proposed framework outperforms baselines in both clustering quality and classification accuracy.
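As a rough illustration of the event-clustering idea (not the paper's actual self-supervised framework), the sketch below segments a chronologically ordered image stream wherever consecutive frame embeddings diverge. The encoder (a generic ImageNet ResNet-18) and the similarity threshold are placeholders, not the authors' choices:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()          # keep the 512-d pooled features
encoder.eval().to(device)

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(paths):
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    feats = encoder(batch.to(device))
    return torch.nn.functional.normalize(feats, dim=1)

def segment_events(paths, sim_threshold=0.85):
    """Greedy temporal segmentation: start a new event whenever the cosine
    similarity between consecutive frames drops below the threshold."""
    feats = embed(paths)
    events, current = [], [paths[0]]
    for i in range(1, len(paths)):
        if (feats[i] @ feats[i - 1]) < sim_threshold:
            events.append(current)
            current = []
        current.append(paths[i])
    events.append(current)
    return events
```

In practice the threshold would be tuned (or learned) so that events align with meals and other activities of interest.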
Blind image deblurring (BID) has been extensively studied in computer vision and adjacent fields. Modern approaches to BID fall into two categories: single-instance methods that handle individual instances using statistical inference and numerical optimization, and data-driven methods that train deep-learning models to deblur future instances directly. Data-driven methods can sidestep the difficulty of deriving accurate blur models, but are fundamentally limited by the diversity and quality of the training data; collecting sufficiently expressive and realistic training data is a standing challenge. In this paper, we focus on single-instance methods, which remain competitive and indispensable. However, most such methods do not prescribe how to deal with an unknown kernel size and substantial noise, precluding practical deployment. Indeed, we show that several state-of-the-art (SOTA) single-instance methods become unstable when the kernel size is over-specified and/or the noise level is high. On the positive side, we propose a practical BID method that is stable against both, the first of its kind. Our method builds on the recent idea of solving inverse problems by integrating physical models and structured deep neural networks, without extra training data, and we introduce several crucial modifications to achieve the desired stability. Extensive empirical tests on standard synthetic datasets, as well as the real-world NTIRE2020 and RealBlur datasets, compare our method against SOTA single-instance and data-driven methods. The code of our method is available at https://github.com/sun-unm/blind-image-deblurring.
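To make the "physical model + structured network" idea concrete, here is a minimal single-instance sketch: the blurry image y is modeled as k * x, where x comes from an untrained DIP-style generator and k is kept nonnegative and sum-to-one via a softmax. This is an illustrative stand-in, not the paper's architecture or its stabilizing modifications:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Toy untrained generator acting as an image prior."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(8, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

def deblur(y, kernel_size=31, iters=2000, lr=1e-3):
    """y: (1, 3, H, W) blurry image in [0, 1]."""
    z = torch.randn(1, 8, y.shape[2], y.shape[3])     # fixed input noise
    gen = TinyGenerator()
    k_logits = torch.zeros(kernel_size * kernel_size, requires_grad=True)
    opt = torch.optim.Adam(list(gen.parameters()) + [k_logits], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        x = gen(z)                                    # sharp-image estimate
        k = F.softmax(k_logits, dim=0).view(1, 1, kernel_size, kernel_size)
        k = k.expand(3, 1, -1, -1).contiguous()       # same kernel per channel
        y_hat = F.conv2d(x, k, padding=kernel_size // 2, groups=3)
        loss = F.mse_loss(y_hat, y)                   # data-fidelity term
        loss.backward()
        opt.step()
    k = F.softmax(k_logits, dim=0).view(kernel_size, kernel_size)
    return gen(z).detach(), k.detach()
```

Note that `kernel_size` is set generously here; the paper's point is precisely that naive formulations like this become unstable under over-specified kernels and heavy noise.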
Task-oriented dialogue systems have been plagued by the difficulty of obtaining large-scale, high-quality annotated conversations. Moreover, most publicly available datasets include only written conversations, which is insufficient to reflect actual human behavior in practical spoken dialogue systems. In this paper, we propose Task-oriented Dialogue Data Augmentation (TOD-DA), a novel model-agnostic data augmentation paradigm for boosting the robustness of task-oriented dialogue modeling. TOD-DA consists of two modules: 1) Dialogue Enrichment, which expands the training data of task-oriented dialogues to ease data sparsity, and 2) a Spoken Conversation Simulator, which mimics spoken-style expressions and speech recognition errors at various granularities to bridge the gap between written and spoken conversations. With such designs, our approach ranked first in both tasks of DSTC10 Track2, a benchmark for task-oriented dialogue modeling on spoken conversations, demonstrating the superiority and effectiveness of our proposed TOD-DA.
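A minimal sketch of the spoken-conversation-simulation idea: inject spoken-style fillers and word-level perturbations that mimic recognition errors. The filler list, confusion table, and error rates below are illustrative guesses, not the paper's actual noise model:

```python
import random

FILLERS = ["uh", "um", "you know", "i mean"]
CONFUSIONS = {"to": ["two", "too"], "for": ["four"], "there": ["their"]}

def simulate_asr(text, word_err=0.1, filler_prob=0.05, seed=None):
    """Return a noisy, spoken-style version of a written utterance."""
    rng = random.Random(seed)
    out = []
    for word in text.lower().split():
        if rng.random() < filler_prob:
            out.append(rng.choice(FILLERS))           # insert a filler
        r = rng.random()
        if r < word_err and word in CONFUSIONS:
            out.append(rng.choice(CONFUSIONS[word]))  # homophone confusion
        elif r < word_err / 2 and len(word) > 3:
            i = rng.randrange(len(word) - 1)          # swap two characters
            out.append(word[:i] + word[i + 1] + word[i] + word[i + 2:])
        else:
            out.append(word)
    return " ".join(out)

print(simulate_asr("book a table for two there tomorrow", seed=0))
```

Training on such perturbed copies alongside the originals is what makes the downstream dialogue model robust to spoken input.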
Deep image prior (DIP) has shown remarkable potential for solving inverse problems in computer vision without any extra training data. Practical DIP models are often substantially overparameterized. During the fitting process, these models first learn the desired visual content and then pick up the potential modeling and observational noise, i.e., they overfit. Thus, the practicality of DIP often depends critically on good early stopping (ES) that captures the transition period. In this regard, most DIP works for vision tasks only demonstrate the potential of the models: they report peak performance against the ground truth but give no clue about how to operationally attain near-peak performance without access to the ground truth. In this paper, we set out to break this practicality barrier and propose an efficient ES strategy that consistently detects near-peak performance across multiple vision tasks and DIP variants. Based on a simple measure of dispersion of consecutive DIP reconstructions, our ES method not only outpaces existing ones, which work only in very narrow domains, but also remains effective when combined with a number of methods that try to mitigate overfitting. The code is available at https://github.com/sun-umn/early_stopping_for_dip.
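A minimal sketch of the dispersion-based criterion, assuming one reading of it: track the variance of the last W reconstructions and stop once that variance has not improved for a patience period. The window size and patience below are placeholders, not the paper's settings:

```python
import torch

class DispersionES:
    def __init__(self, window=30, patience=100):
        self.window, self.patience = window, patience
        self.buffer, self.best, self.stale = [], float("inf"), 0

    def step(self, recon):
        """recon: current DIP reconstruction (any fixed-shape tensor).
        Returns True when fitting should stop."""
        self.buffer.append(recon.detach().flatten())
        if len(self.buffer) < self.window:
            return False
        self.buffer = self.buffer[-self.window:]
        stack = torch.stack(self.buffer)
        dispersion = stack.var(dim=0).mean().item()   # per-pixel variance, averaged
        if dispersion < self.best:
            self.best, self.stale = dispersion, 0
        else:
            self.stale += 1
        return self.stale >= self.patience
```

Inside a DIP fitting loop one would call `es.step(x_hat)` after each iteration and break when it returns True.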
Optimizing nonconvex (NCVX) problems, especially nonsmooth (NSMT) and constrained (CSTR) ones, is an essential part of machine learning and deep learning. But it is hard to reliably solve such problems without optimization expertise. Existing general-purpose NCVX optimization packages are powerful but typically cannot handle nonsmoothness. GRANSO is among the first packages targeting NCVX, NSMT, CSTR problems. However, it has several limitations, such as the lack of auto-differentiation and GPU acceleration, which preclude potential broad deployment by non-experts. To lower the technical barrier for the machine learning community, we revamp GRANSO into a user-friendly and scalable Python package named NCVX, featuring auto-differentiation, GPU acceleration, tensor input, a scalable QP solver, and zero dependency on proprietary packages. As a highlight, NCVX can solve general CSTR deep learning problems, the first of its kind. NCVX is available at https://ncvx.org, with detailed documentation and numerous examples from machine learning and other fields.
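For intuition only, here is the kind of nonsmooth, constrained problem the package targets: minimize ||Ax||_1 subject to ||x||_2 = 1. The snippet below is explicitly NOT the NCVX API; it is a crude quadratic-penalty loop on top of PyTorch autodiff, whereas NCVX's actual solver (a BFGS-SQP method inherited from GRANSO) handles such constraints far more robustly:

```python
import torch

torch.manual_seed(0)
A = torch.randn(50, 10)
x = torch.randn(10, requires_grad=True)
opt = torch.optim.Adam([x], lr=1e-2)
rho = 10.0                                 # penalty weight on the constraint

for _ in range(3000):
    opt.zero_grad()
    objective = (A @ x).abs().sum()        # nonsmooth objective
    violation = (x.norm() - 1.0) ** 2      # equality-constraint penalty
    (objective + rho * violation).backward()
    opt.step()

print("obj:", (A @ x).abs().sum().item(), "||x||:", x.norm().item())
```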
Mixup-based augmentation has been found effective for helping models generalize during training, especially for Vision Transformers (ViTs), which easily overfit. However, previous mixup-based methods carry an underlying prior assumption that the linearly interpolated ratio of the targets should match the ratio used for the input interpolation. This can lead to a strange phenomenon: sometimes, due to the random process in augmentation, there is no valid object left in the mixed image, yet the label space still responds. To bridge this gap between the input and label spaces, we propose TransMix, which mixes labels based on the attention maps of Vision Transformers: the more an input image is weighted by the attention map, the higher the confidence assigned to its label. TransMix is embarrassingly simple and can be implemented in just a few lines of code without introducing any extra parameters or FLOPs to ViT-based models. Experimental results show that our method consistently improves various ViT-based models on ImageNet classification. After pre-training with TransMix on ImageNet, ViT-based models also demonstrate better transferability to semantic segmentation, object detection, and instance segmentation, and TransMix proves more robust when evaluated on 4 different benchmarks. The code will be made publicly available at https://github.com/beckschen/transmix.
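A minimal sketch in the spirit of TransMix (simplified, not the authors' exact code): after CutMix pastes a patch region from image B into image A, the label weight lambda is taken from how much of the class-token attention mass falls inside the pasted region, instead of from the box area. Here `attn` is assumed to be the class-token attention over patches, shape (B, num_patches):

```python
import torch

def transmix_targets(y_a, y_b, attn, patch_mask):
    """y_a, y_b: one-hot targets (B, C); patch_mask: (B, num_patches) with 1
    where patches were pasted from image B. Returns mixed soft targets."""
    attn = attn / attn.sum(dim=1, keepdim=True)         # normalize per sample
    lam = (attn * patch_mask).sum(dim=1, keepdim=True)  # mass on pasted region
    return (1 - lam) * y_a + lam * y_b
```

The key design choice is that lambda now reflects what the model actually attends to, so a pasted region containing no salient object contributes little to the mixed label.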
Transfer learning (TL) from pretrained deep models is a standard practice in modern medical image classification (MIC). However, which levels of features to reuse is problem-dependent, and uniformly finetuning all layers of a pretrained model may be suboptimal. This insight has partly motivated the recent \emph{differential} TL strategies, such as TransFusion (TF) and layer-wise finetuning (LWFT), which treat the layers in the pretrained models differentially. In this paper, we add one more strategy into this family, called \emph{TruncatedTL}, which reuses and finetunes appropriate bottom layers and directly discards the remaining layers. This yields not only superior MIC performance but also compact models for efficient inference, compared to other differential TL methods. We validate the performance and model efficiency of TruncatedTL on three MIC tasks covering both 2D and 3D images. For example, on the BIMCV COVID-19 classification dataset, we obtain improved performance with around $1/4$ model size and $2/3$ inference time compared to the standard full TL model. Code is available at https://github.com/sun-umn/Transfer-Learning-in-Medical-Imaging.
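A minimal sketch of the truncation idea on a torchvision ResNet-50: keep the bottom blocks, discard the rest, and attach a small classification head. The cut point (here `layer2`) is a tunable choice for illustration, not the paper's prescription for any particular task:

```python
import torch
import torch.nn as nn
from torchvision import models

def truncated_resnet50(num_classes, cut="layer2"):
    """Reuse bottom layers of a pretrained ResNet-50 up to `cut`, drop the rest."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    layers = []
    for name, module in backbone.named_children():
        if name in ("avgpool", "fc"):
            continue
        layers.append(module)
        if name == cut:
            break
    channels = {"layer1": 256, "layer2": 512, "layer3": 1024}[cut]
    return nn.Sequential(
        *layers,
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(channels, num_classes),
    )

model = truncated_resnet50(num_classes=2)
logits = model(torch.randn(4, 3, 224, 224))   # shape (4, 2)
```

The whole truncated network is then finetuned on the MIC task; because the top blocks hold most of the parameters, the savings in model size and inference time come essentially for free.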
Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth, and hence a lower axial resolution. Traditionally, image-processing-based techniques have been utilized to reconstruct subsampled OCT data; more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques, reconstructing lost features with a pixel-to-pixel approach and an altered super-resolution generative adversarial network (SRGAN) architecture, to better aid clinicians in their decision-making and improve patient outcomes.
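A minimal sketch of the degradation step: window an A-scan's spectral-domain samples with a Gaussian before the inverse FFT, narrowing the effective bandwidth and hence the axial resolution. The fringe model and window width below are illustrative, not the paper's exact simulation parameters:

```python
import numpy as np

def degrade_ascan(spectrum, rel_bandwidth=0.5):
    """spectrum: 1-D complex spectral interferogram (DC at index 0).
    rel_bandwidth: Gaussian FWHM as a fraction of the full spectrum."""
    n = spectrum.shape[0]
    k = np.arange(n) - n / 2
    sigma = rel_bandwidth * n / (2 * np.sqrt(2 * np.log(2)))  # FWHM -> sigma
    window = np.exp(-0.5 * (k / sigma) ** 2)
    return np.fft.ifft(np.fft.ifftshift(window) * spectrum)

rng = np.random.default_rng(0)
spec = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
full = np.fft.ifft(spec)                      # full-resolution A-scan
low = degrade_ascan(spec, rel_bandwidth=0.4)  # reduced axial resolution
```

Pairs of full- and reduced-resolution A-scans produced this way serve as training targets and inputs for the SRGAN-style reconstruction network.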
Weakly-supervised temporal action localization (WTAL) learns to detect and classify action instances with only category labels. Most methods widely adopt the off-the-shelf Classification-Based Pre-training (CBP) to generate video features for action localization. However, the different optimization objectives of classification and localization make the temporally localized results suffer from a serious incompleteness issue. To tackle this issue without additional annotations, this paper considers distilling free action knowledge from Vision-Language Pre-training (VLP), since we surprisingly observe that the localization results of vanilla VLP have an over-complete issue, which is exactly complementary to the CBP results. To fuse such complementarity, we propose a novel distillation-collaboration framework with two branches acting as CBP and VLP respectively. The framework is optimized through a dual-branch alternate training strategy. Specifically, during the B step, we distill the confident background pseudo-labels from the CBP branch, while during the F step, the confident foreground pseudo-labels are distilled from the VLP branch. As a result, the dual-branch complementarity is effectively fused to promote a strong alliance. Extensive experiments and ablation studies on THUMOS14 and ActivityNet1.2 reveal that our method significantly outperforms state-of-the-art methods.
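A schematic sketch of the alternate training loop, with toy per-snippet linear branches standing in for the real CBP and VLP branches. It assumes each branch's confident pseudo-labels supervise the other branch (a cross-supervision direction the abstract does not spell out), and the confidence thresholds are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

cbp = nn.Linear(512, 1)   # classification-pretrained branch (stand-in)
vlp = nn.Linear(512, 1)   # vision-language-pretrained branch (stand-in)
opt_cbp = torch.optim.Adam(cbp.parameters(), lr=1e-3)
opt_vlp = torch.optim.Adam(vlp.parameters(), lr=1e-3)

def alternate_step(feats, step):
    """feats: (T, 512) snippet features of one untrimmed video."""
    if step == "B":   # distill confident BACKGROUND pseudo-labels from CBP
        with torch.no_grad():
            mask = torch.sigmoid(cbp(feats)).squeeze(1) < 0.1
        student, opt, target = vlp, opt_vlp, torch.zeros(int(mask.sum()))
    else:             # "F": distill confident FOREGROUND pseudo-labels from VLP
        with torch.no_grad():
            mask = torch.sigmoid(vlp(feats)).squeeze(1) > 0.9
        student, opt, target = cbp, opt_cbp, torch.ones(int(mask.sum()))
    if mask.sum() == 0:
        return 0.0                                  # no confident snippets yet
    pred = torch.sigmoid(student(feats[mask])).squeeze(1)
    loss = F.binary_cross_entropy(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Alternating B and F steps lets the under-complete CBP branch and the over-complete VLP branch correct each other's failure modes.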
Photometric stereo recovers the surface normals of an object from multiple images with varying shading cues, i.e., by modeling the relationship between surface orientation and intensity at each pixel. Photometric stereo excels in per-pixel resolution and fine reconstruction detail. However, it is a complicated problem because of the non-linear relationship caused by non-Lambertian surface reflectance. Recently, various deep learning methods have shown a powerful ability to handle photometric stereo for non-Lambertian surfaces. This paper provides a comprehensive review of existing deep learning-based calibrated photometric stereo methods. We first analyze these methods from different perspectives, including input processing, supervision, and network architecture. We then summarize the performance of deep learning photometric stereo models on the most widely used benchmark dataset, which demonstrates the advanced performance of deep learning-based photometric stereo methods. Finally, we give suggestions and propose future research trends based on the limitations of existing models.
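For context, the classical calibrated Lambertian baseline that these deep methods extend is a per-pixel least-squares problem: with intensities I (num_lights x num_pixels) and known light directions L (num_lights x 3), least squares recovers albedo-scaled normals. A minimal sketch with a toy shadow-free setup:

```python
import numpy as np

def lambertian_photometric_stereo(I, L):
    """Solve L @ G = I per pixel; G holds albedo-scaled normals."""
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # (3, num_pixels)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.clip(albedo, 1e-8, None)   # unit surface normals
    return normals, albedo

# Toy check: 4 lights with positive z (no shadows), 5 pixels facing the camera.
rng = np.random.default_rng(0)
L = rng.standard_normal((4, 3))
L[:, 2] = np.abs(L[:, 2]) + 0.5
L /= np.linalg.norm(L, axis=1, keepdims=True)
N = np.tile([[0.0], [0.0], [1.0]], (1, 5))      # ground-truth normals
I = L @ N                                       # Lambertian shading
normals, albedo = lambertian_photometric_stereo(I, L)
```

The non-linear, non-Lambertian effects that break this linear model are precisely what the reviewed deep networks learn to handle.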