Chain-of-Thought (CoT) prompting can dramatically improve the multi-step reasoning abilities of large language models (LLMs). CoT explicitly encourages the LLM to generate intermediate rationales for solving a problem, by providing a series of reasoning steps in the demonstrations. Despite its success, there is still little understanding of what makes CoT prompting effective and which aspects of the demonstrated reasoning steps contribute to its performance. In this paper, we show that CoT reasoning is possible even with invalid demonstrations - prompting with invalid reasoning steps can achieve over 80-90% of the performance obtained using CoT under various metrics, while still generating coherent lines of reasoning during inference. Further experiments show that other aspects of the rationales, such as being relevant to the query and correctly ordering the reasoning steps, are much more important for effective CoT reasoning. Overall, these findings both deepen our understanding of CoT prompting, and open up new questions regarding LLMs' capability to learn to reason in context.
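The ablation setting described above can be sketched in code: build a CoT prompt whose demonstration rationale is deliberately invalid while staying relevant to the query and sensibly ordered. The demonstration text and task below are illustrative, not the paper's actual prompts.

```python
# Sketch: assemble a CoT prompt whose demonstration rationale contains a
# deliberately invalid step (wrong arithmetic) but remains relevant and
# coherently ordered -- the kind of ablation the paper studies.
def build_cot_prompt(demos, question):
    """demos: list of (question, rationale, answer) triples."""
    parts = []
    for q, rationale, answer in demos:
        parts.append(f"Q: {q}\nA: {rationale} The answer is {answer}.")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

invalid_demo = (
    "Roger has 5 balls and buys 2 cans of 3 balls each. How many balls now?",
    "Roger starts with 5 balls. 2 cans of 3 balls is 5 balls.",  # invalid step
    "11",
)
prompt = build_cot_prompt(
    [invalid_demo], "A baker has 3 trays of 4 rolls. How many rolls?")
```

The model then completes the prompt after the final "A:", and the question is how much the invalid rationale degrades its answers.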
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
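The most common workaround reported for oversized samples, patch-based training, can be sketched as a simple tiling step. Patch and image sizes here are toy values, not figures from the survey.

```python
# Sketch: split an oversized 2-D sample into fixed-size patches, the most
# commonly reported workaround (69%) for samples too large to process at
# once. Edge regions that do not fill a whole patch are dropped for brevity.
def extract_patches(image, patch):
    """image: 2-D list of lists; returns a list of patch-sized 2-D lists."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h - patch + 1, patch):
        for left in range(0, w - patch + 1, patch):
            patches.append([row[left:left + patch]
                            for row in image[top:top + patch]])
    return patches

image = [[r * 8 + c for c in range(8)] for r in range(8)]  # toy 8x8 "image"
patches = extract_patches(image, 4)                        # four 4x4 patches
```

Real pipelines typically add overlap between patches and stitch predictions back together at inference time.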
Overlapped speech detection (OSD) is critical for speech applications in multi-party conversation scenarios. Despite numerous research efforts and progress, OSD remains an open challenge compared with voice activity detection (VAD), and its overall performance is far from satisfactory. Most prior studies typically formulate the OSD problem as a standard classification problem, recognizing speech with binary (OSD) or three-class labels (joint VAD and OSD). In contrast to the mainstream, this study investigates the joint VAD and OSD task from a new perspective. In particular, we propose to extend traditional classification networks with a multi-exit architecture. Such an architecture endows our system with the unique capability of recognizing classes using either low-level features from early exits or high-level features from the last exit. In addition, two training schemes, knowledge distillation and dense connection, are adopted to further boost system performance. Experimental results on benchmark datasets (AMI and DIHARD-III) validate the effectiveness and generality of our proposed system. Our ablations further reveal the complementary contributions of the proposed schemes. With an $F_1$ score of 0.792 on AMI and 0.625 on DIHARD-III, our proposed system not only outperforms several top-performing models on these datasets but also surpasses the current state-of-the-art on both. Besides the performance gains, our proposed system offers another appealing potential for quality-complexity trade-offs, which is highly desired for efficient OSD deployment.
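The multi-exit idea, recognizing a class from an early exit when its confidence is already high, can be sketched with toy stages and classifiers. The stage and exit functions below are illustrative placeholders, not the paper's architecture.

```python
# Sketch of multi-exit inference: every stage of the network has its own
# classifier ("exit"), and prediction stops at the first exit whose
# confidence clears a threshold; otherwise the last exit decides.
def multi_exit_predict(x, stages, exits, threshold=0.9):
    """stages: feature extractors; exits: classifiers returning
    (label, confidence). Returns (label, index_of_exit_used)."""
    feat = x
    for i, (stage, exit_head) in enumerate(zip(stages, exits)):
        feat = stage(feat)
        label, conf = exit_head(feat)
        if conf >= threshold or i == len(stages) - 1:
            return label, i

# Toy model: each stage deepens the feature, confidence grows with depth.
stages = [lambda v: v + 1.0 for _ in range(3)]
def make_exit():
    def exit_head(feat):
        conf = min(1.0, feat / 3.0)
        return ("overlap" if feat > 2 else "speech"), conf
    return exit_head
exits = [make_exit() for _ in range(3)]

label_easy, used_easy = multi_exit_predict(2.0, stages, exits)  # exits early
label_hard, used_hard = multi_exit_predict(0.0, stages, exits)  # runs to last exit
```

Easy inputs leave at an early exit with low-level features, while hard inputs fall through to the last exit's high-level features, which is exactly the quality-complexity trade-off the abstract highlights.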
To address the monaural speech enhancement problem, numerous studies have enhanced speech by operating either in a learned latent domain on the speech mixture in the time domain, or in the frequency domain on fixed full-band short-time Fourier transform (STFT) spectrograms. Recently, several studies on sub-band based speech enhancement have been proposed. By enhancing speech through operations on sub-band spectrograms, these studies demonstrated competitive performance on the DNS2020 benchmark dataset. Despite being attractive, this new research direction has not been fully explored, and there is still room for improvement. Therefore, in this study, we delve into this recent research direction and propose a sub-band based speech enhancement system with perceptually-motivated optimization and dual transformations, called PT-FSE. Specifically, our proposed PT-FSE model improves its backbone, a full-band and sub-band fusion model, through three efforts. First, we design a frequency transformation module that aims to strengthen global frequency correlations. A temporal transformation is then introduced to capture long-range temporal context. Finally, a novel loss, leveraging properties of human auditory perception, is proposed to encourage the model to focus on low-frequency enhancement. To validate the effectiveness of our proposed model, extensive experiments are conducted on the DNS2020 dataset. Experimental results show that our PT-FSE system achieves substantial improvements over its backbone and also outperforms the current state-of-the-art, while being 27% smaller than the SOTA. With an average NB-PESQ of 3.57 on the benchmark dataset, our system offers the best speech enhancement results reported to date.
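The starting point of sub-band enhancement, dividing a full-band spectrogram into narrower frequency bands that are processed separately, can be sketched as a simple slicing step. The band count and spectrogram sizes below are illustrative, not the paper's configuration.

```python
# Sketch: divide a full-band magnitude spectrogram (freq x time) into
# equal-width sub-bands so each band can be enhanced on its own, as in
# sub-band based speech enhancement.
def split_subbands(spec, n_bands):
    """spec: list of F frequency-bin rows, each a list of T frames."""
    f = len(spec)
    assert f % n_bands == 0, "F must divide evenly in this simple sketch"
    width = f // n_bands
    return [spec[b * width:(b + 1) * width] for b in range(n_bands)]

# Toy spectrogram: 256 frequency bins x 10 frames, value = bin index.
spec = [[float(f) for _t in range(10)] for f in range(256)]
bands = split_subbands(spec, 8)   # eight 32-bin sub-band spectrograms
```

A full system would enhance each sub-band (often with shared weights plus neighboring-band context) and then concatenate the bands back to full resolution.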
Real-time object detection for unmanned aerial vehicles (UAVs) is a challenging problem, because edge GPU devices, as Internet-of-Things (IoT) nodes, have limited computational resources. To address this problem, in this paper we propose a novel lightweight deep learning architecture based on the YOLOX model for real-time object detection on edge GPUs. First, we design an efficient and lightweight PixSF head to replace the original head of YOLOX for better small-object detection; it can be further embedded with depthwise separable convolutions (DS Conv) to achieve an even lighter head. Then, a slimmer structure is developed in the neck layer to reduce the number of network parameters, as a trade-off between accuracy and speed. Furthermore, we embed an attention module in the head layer to improve the feature extraction of the prediction head. Meanwhile, we also improve the label assignment strategy and loss function to alleviate the category imbalance and box optimization problems of UAV datasets. Finally, an auxiliary head is proposed for online distillation to improve the position-embedding and feature-extraction abilities of the PixSF head. The performance of our lightweight models is experimentally validated on the NVIDIA Jetson NX and Jetson Nano GPU embedded platforms. Extensive experiments show that, compared with current models, the FasterX models achieve a better trade-off between accuracy and latency on the VisDrone2021 dataset.
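The parameter savings behind the depthwise-separable variant of the head can be made concrete with a quick count: a standard convolution uses one dense k x k x C_in kernel per output channel, while a DS conv factors this into a per-channel depthwise filter plus a 1 x 1 pointwise mix. The channel sizes below are illustrative, not FasterX's actual configuration.

```python
# Sketch: compare parameter counts of a standard convolution against a
# depthwise-separable (depthwise + pointwise) pair of the same shape.
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out            # dense k x k kernel per output channel

def ds_conv_params(k, c_in, c_out):
    depthwise = k * k * c_in               # one k x k filter per input channel
    pointwise = c_in * c_out               # 1 x 1 conv mixing the channels
    return depthwise + pointwise

std = standard_conv_params(3, 64, 128)     # 73,728 parameters
ds = ds_conv_params(3, 64, 128)            # 8,768 parameters
savings = std / ds                         # roughly an 8x reduction
```

This is why swapping dense convolutions for DS convs is a standard lever for fitting detection heads onto Jetson-class devices.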
Adaptive gradient algorithms borrow the moving-average idea of heavy-ball acceleration to estimate accurate first- and second-order moments of the gradient for accelerating convergence. However, Nesterov acceleration, which converges faster than heavy-ball acceleration in theory and also in many empirical cases, is much less investigated under the adaptive-gradient setting. In this work, we propose the ADAptive Nesterov momentum algorithm, Adan, to effectively speed up the training of deep neural networks. Adan first reformulates the vanilla Nesterov acceleration to develop a new Nesterov momentum estimation (NME) method, which avoids the extra computation and memory overhead of evaluating the gradient at the extrapolation point. Then Adan adopts NME to estimate the first- and second-order moments of the gradient in adaptive gradient algorithms for convergence acceleration. Besides, we prove that Adan finds an $\epsilon$-approximate first-order stationary point within $O(\epsilon^{-3.5})$, matching the best-known lower bound. Extensive experimental results show that Adan surpasses the corresponding SOTA optimizers on vision transformers (ViTs) and CNNs, and sets new SOTA results for many popular networks, e.g., ResNet, ConvNeXt, ViT, Swin, MAE, LSTM, Transformer-XL, and BERT. More surprisingly, Adan can use half the training cost (epochs) of SOTA optimizers to achieve higher or comparable performance on ViT and ResNet, etc., and also shows great tolerance to a large range of minibatch sizes, e.g., from 1k to 32k. We hope Adan can contribute to the development of deep learning by reducing training cost and relieving the engineering burden of trying different optimizers for various architectures. The code will be released at https://github.com/sail-sg/adan.
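The flavor of the update can be sketched on a 1-D quadratic. The moment recursions below follow our reading of the Adan formulation (gradient moment, gradient-difference moment, and a second moment of a Nesterov-style corrected gradient); the EMA rates are toy values chosen for this illustration, so treat this as a sketch and use the released code for real training.

```python
# Sketch of an Adan-style step on f(x) = x^2 (gradient 2x). Illustrative
# only: EMA rates (b1, b2, b3) are toy values, not the paper's defaults,
# and weight decay is omitted.
def adan_step(x, g, g_prev, m, v, n, lr=0.01,
              b1=0.1, b2=0.1, b3=0.05, eps=1e-8):
    diff = g - g_prev
    m = (1 - b1) * m + b1 * g          # first moment of the gradient
    v = (1 - b2) * v + b2 * diff       # moment of the gradient difference
    u = g + (1 - b2) * diff            # Nesterov-style corrected gradient
    n = (1 - b3) * n + b3 * u * u      # second moment of u
    x = x - lr * (m + (1 - b2) * v) / (n ** 0.5 + eps)
    return x, m, v, n

x, m, v, n, g_prev = 1.0, 0.0, 0.0, 0.0, 0.0
for _ in range(500):
    g = 2.0 * x
    x, m, v, n = adan_step(x, g, g_prev, m, v, n)
    g_prev = g
```

The key point is visible even in this toy: the extrapolated-gradient term is built from the stored gradient difference, so no extra forward/backward pass at an extrapolation point is needed.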
This paper aims to address the problem of anomaly discovery in semantic segmentation. Our key observation is that semantic classification plays a critical role in existing approaches, and misclassified pixels are easily regarded as anomalies. This phenomenon occurs frequently yet is rarely discussed, and it significantly degrades the performance of anomaly discovery. To this end, we propose a novel Distillation Comparison Network (DiCNet). It consists of a teacher branch, which is a semantic segmentation network with the semantic classification head removed, and a student branch distilled from the teacher branch through a distribution distillation. We show that the distillation guarantees that the semantic features of the two branches remain consistent on known classes, while reflecting inconsistency on unknown classes. Therefore, we leverage the semantic feature discrepancy between the two branches to discover anomalies. DiCNet abandons the semantic classification head during inference, and thus significantly alleviates the problem caused by semantic classification errors. Extensive experimental results on the StreetHazards dataset and the BDD-Anomaly dataset are presented to verify the superior performance of DiCNet. In particular, DiCNet obtains a 6.3% improvement in AUPR and a 5.2% improvement in FPR95 on the StreetHazards dataset, and a 4.2% improvement in AUPR and a 6.8% improvement in FPR95 on the BDD-Anomaly dataset. The code is available at https://github.com/zhouhuan-hust/dicnet.
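Scoring anomalies by the discrepancy between the two branches' features can be sketched per pixel. Cosine distance is one natural choice of discrepancy here (the paper's exact measure may differ), and the feature vectors below are toy data.

```python
# Sketch: per-pixel anomaly score as the discrepancy between teacher and
# student feature vectors. Consistent features (known classes) score low;
# diverging features (unknown classes) score high.
def cosine_distance(a, b, eps=1e-8):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return 1.0 - dot / (na * nb + eps)

teacher_feat = [1.0, 0.0, 2.0]
student_known = [1.1, 0.0, 1.9]      # distilled branch agrees: known class
student_unknown = [-1.0, 2.0, 0.1]   # branches diverge: likely anomaly
score_known = cosine_distance(teacher_feat, student_known)
score_unknown = cosine_distance(teacher_feat, student_unknown)
```

Because this score never passes through a classification head, a pixel the segmenter would have misclassified is not automatically flagged as anomalous.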
The manufacturing sector is envisioned to be heavily influenced by artificial-intelligence-based technologies, alongside dramatic increases in computational power and data volume. A central challenge in the manufacturing sector lies in the requirement for a general framework to ensure satisfactory diagnosis and monitoring performance across different manufacturing applications. Here, we propose a general, data-driven, end-to-end framework for monitoring manufacturing systems. Derived from deep learning techniques, the framework evaluates fused sensory measurements to detect and even predict faults and wearing conditions. This work exploits the predictive power of deep learning to automatically extract hidden degradation features from noisy time-course data. We have experimented with the proposed framework on ten representative datasets drawn from a wide variety of manufacturing applications. The results show that the framework performs well on the examined benchmark applications and can be applied in diverse contexts, indicating its potential use as a critical cornerstone in smart manufacturing.
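A common first step for feeding fused, noisy sensor streams into such a deep model is to cut them into overlapping fixed-length windows; a minimal sketch follows, with window and stride values that are illustrative rather than taken from the paper.

```python
# Sketch: turn fused multi-sensor time series into overlapping fixed-length
# windows, a typical preprocessing step before a deep model extracts
# degradation features for fault/wear monitoring.
def sliding_windows(series, length, stride):
    """series: list of per-timestep sensor tuples; returns list of windows."""
    return [series[i:i + length]
            for i in range(0, len(series) - length + 1, stride)]

# Two fused sensor channels (e.g., vibration, temperature) over 100 steps.
fused = [(0.01 * t, 20.0 + 0.1 * t) for t in range(100)]
windows = sliding_windows(fused, length=32, stride=16)
```

Each window then becomes one training sample, letting the network see local temporal context while keeping inputs a fixed size across applications.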
Video Super-Resolution (VSR) aims to restore high-resolution (HR) videos from low-resolution (LR) videos. Existing VSR techniques usually recover HR frames by extracting pertinent textures from nearby frames with known degradation processes. Despite significant progress, it remains a grand challenge to effectively extract and transmit high-quality textures from highly degraded, low-quality sequences exhibiting blur, additive noise, and compression artifacts. In this work, a novel Frequency-Transformer (FTVSR) that carries out self-attention in a combined space-time-frequency domain is proposed for handling low-quality videos. First, video frames are split into patches, and each patch is transformed into spectral maps in which each channel represents a frequency band. This permits fine-grained self-attention on each frequency band, so that real visual textures can be distinguished from artifacts. Second, a novel dual frequency attention (DFA) mechanism is proposed to capture both global and local frequency relations, which can handle the different complicated degradation processes found in real-world scenarios. Third, we explore different self-attention schemes for video processing in the frequency domain and discover that a ``divided attention'', which conducts a joint space-frequency attention before applying temporal-frequency attention, leads to the best video enhancement quality. Extensive experiments on three widely-used VSR datasets show that FTVSR outperforms state-of-the-art methods on different low-quality videos with clear visual margins. Code and pre-trained models are available at https://github.com/researchmm/FTVSR.
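The patch-to-spectral-map step can be sketched with an orthonormal 2-D DCT-II, one standard way to map a patch into coefficients indexed by frequency (the paper's exact transform may differ). The plain-Python DCT and 4x4 patch below are illustrative.

```python
# Sketch: transform an image patch into frequency coefficients with an
# orthonormal 2-D DCT-II, so each coefficient indexes a frequency band.
import math

def dct_1d(x):
    n = len(x)
    out = []
    for k in range(n):
        s = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(s * sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                           for i in range(n)))
    return out

def dct_2d(patch):
    rows = [dct_1d(r) for r in patch]                       # DCT along rows
    cols = [dct_1d([rows[r][c] for r in range(len(patch))]) # then columns
            for c in range(len(patch[0]))]
    return [[cols[c][r] for c in range(len(patch[0]))]
            for r in range(len(patch))]

flat_patch = [[1.0] * 4 for _ in range(4)]  # constant patch: energy only at DC
spectrum = dct_2d(flat_patch)
```

A flat patch puts all its energy in the DC coefficient, while textures and artifacts populate higher-frequency coefficients, which is what lets per-band attention separate the two.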
Nowadays, fake news easily propagates through online social networks and poses a grave threat to individuals and society. Assessing the authenticity of news is challenging due to its elaborately fabricated contents, making it difficult to obtain large-scale annotations for fake news data. Due to such data scarcity issues, detecting fake news tends to fail and overfit in the supervised setting. Recently, graph neural networks (GNNs) have been adopted to leverage the richer relational information among both labeled and unlabeled instances. Despite their promising results, they are inherently focused on pairwise relations between news items, which can limit their expressive power for capturing fake news that spreads at the group level. For example, detecting fake news can be more effective when we better understand the relations between news pieces shared among susceptible users. To address these issues, we propose to leverage a hypergraph to represent group-wise interactions among news, while focusing on important news relations with a dual-level attention mechanism. Experiments on two benchmark datasets show that our approach yields remarkable performance and maintains high performance even with a small subset of labeled news data.
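The group-wise representation can be sketched as a hypergraph incidence matrix, where one hyperedge collects all the news pieces shared by a single user, followed by a simple group-wise aggregation. The attention weighting is omitted, and the data below is illustrative.

```python
# Sketch: represent group-wise sharing with a hypergraph. Each hyperedge is
# the set of news indices one user shared; H is the news x hyperedge
# incidence matrix, and a simple aggregation averages features per group.
def incidence_matrix(n_news, hyperedges):
    """hyperedges: list of sets of news indices (one per sharing user)."""
    H = [[0] * len(hyperedges) for _ in range(n_news)]
    for e, members in enumerate(hyperedges):
        for i in members:
            H[i][e] = 1
    return H

def hyperedge_means(features, hyperedges):
    """Average a scalar feature over the news items in every hyperedge."""
    return [sum(features[i] for i in members) / len(members)
            for members in hyperedges]

hyperedges = [{0, 1, 2}, {2, 3}]                 # two users' sharing groups
H = incidence_matrix(4, hyperedges)
means = hyperedge_means([0.9, 0.8, 0.7, 0.1], hyperedges)
```

Unlike a pairwise graph edge, a single hyperedge here ties together all of a user's shared news at once, which is the group-level signal the pairwise GNNs miss.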