It is widely believed that the human visual system is biased towards recognizing shapes rather than textures. This assumption has led to a growing body of work aiming to align the decision processes of deep models with the fundamental properties of human vision. The reliance on shape features is primarily expected to improve the robustness of these models under covariate shift. In this paper, we revisit the importance of shape bias for the classification of skin lesion images. Our analysis shows that different skin lesion datasets exhibit different biases towards individual image features. Interestingly, despite deep feature extractors being inclined to learn entangled features for skin lesion classification, individual features can still be decoded from this entangled representation. This indicates that these features are still represented in the models' learned embedding space but are not used for classification. In addition, spectral analysis of different datasets shows that, in contrast to common visual recognition, skin lesion classification inherently relies on complex feature combinations beyond shape bias. As a natural consequence, moving away from the prevailing desire for shape-biased models can, in some cases, even improve skin lesion classifiers.
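As a hedged illustration of the decoding experiment described above (not the authors' code), the sketch below fits a linear probe on frozen embeddings to test whether a hypothetical per-image attribute remains decodable; the embeddings and attribute labels are random stand-ins.

```python
# Minimal sketch (not the authors' code): probing whether an individual
# feature -- here a hypothetical binary "shape" attribute -- can still be
# decoded from a frozen, entangled embedding with a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for embeddings produced by a frozen deep feature extractor.
n_samples, embed_dim = 2000, 512
embeddings = rng.normal(size=(n_samples, embed_dim))

# Hypothetical per-image attribute labels (e.g., a coarse shape category).
shape_labels = rng.integers(0, 2, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, shape_labels, test_size=0.25, random_state=0
)

# A linear probe: if its accuracy is clearly above chance, the attribute is
# linearly decodable from the embedding even when the downstream classifier
# does not rely on it.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```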
Algorithms that involve both forecasting and optimization are at the core of solutions to many difficult real-world problems, such as supply chains (inventory optimization), traffic, and the transition towards carbon-free energy generation via battery/load/production scheduling in sustainable energy systems. Typically, in these scenarios we want to solve an optimization problem that depends on unknown future values, which therefore need to be forecast. As both forecasting and optimization are difficult problems in their own right, relatively little research has been done in this area. This paper presents the findings of the ``IEEE-CIS Technical Challenge on Predict+Optimize for Renewable Energy Scheduling,'' held in 2021. We present a comparison and evaluation of the seven highest-ranked solutions in the competition, to provide researchers with a benchmark problem and to establish the state of the art for this benchmark, with the aim of fostering and facilitating research in this area. The competition used data from the Monash Microgrid, as well as weather data and energy market data. It focused on two main challenges: forecasting renewable energy production and demand, and obtaining an optimal schedule for the activities (lectures) and on-site batteries that leads to the lowest energy cost. The most accurate forecasts were obtained by gradient-boosted tree and random forest models, and optimization was mostly performed using mixed-integer linear and quadratic programming. The winning method predicted different scenarios and optimized over all scenarios jointly using a sample average approximation method.
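To make the scenario-based strategy concrete, here is a minimal, hypothetical sketch of sample average approximation: a single commitment decision is optimized against the average cost over sampled demand scenarios. The prices, scenario model, and grid search are illustrative stand-ins for the competition's actual mixed-integer programs.

```python
# Minimal sample-average-approximation (SAA) sketch, not the competition code:
# choose how much energy x to commit ahead of time at a known price, where any
# shortfall relative to an uncertain demand must be bought at a higher spot
# price.  The decision is optimized jointly over all sampled demand scenarios.
import numpy as np

rng = np.random.default_rng(1)

ahead_price = 0.10      # $/kWh, assumed known
spot_price = 0.30       # $/kWh, assumed penalty price for shortfalls
demand_scenarios = rng.normal(loc=100.0, scale=15.0, size=500)  # forecast samples

def scenario_cost(x, demand):
    """Cost of committing x kWh ahead when the realized demand is `demand`."""
    shortfall = np.maximum(demand - x, 0.0)
    return ahead_price * x + spot_price * shortfall

def saa_objective(x):
    """Average cost over all sampled scenarios (the SAA objective)."""
    return scenario_cost(x, demand_scenarios).mean()

# A coarse grid search stands in for the mixed-integer programs used by the
# competitors; the structure (one decision optimized against the average over
# scenarios) is the same.
candidates = np.linspace(50.0, 150.0, 1001)
best_x = min(candidates, key=saa_objective)
print(f"committed energy: {best_x:.1f} kWh, expected cost: {saa_objective(best_x):.2f}")
```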
While modern Text-to-Speech (TTS) systems can produce speech that is rated highly in subjective evaluations, the distance between the real and synthetic speech distributions remains understudied; we use the term \textit{distribution} to mean the sample space of all possible real speech recordings from a given set of speakers, or of the synthetic samples that could be generated for the same set of speakers. We evaluate the distance between the real and synthetic speech distributions along the dimensions of acoustic environment, speaker characteristics and prosody, using a range of speech processing measures and the respective Wasserstein distances of their distributions. We reduce these distribution distances along said dimensions by providing utterance-level information derived from the measures to the model, and show that this information can be generated at inference time. The improvements along these dimensions translate into an overall reduction of the distribution distance, approximated using Automatic Speech Recognition (ASR) by evaluating the fitness of the synthetic data as training data.
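As a small illustration of the distance computation (not the paper's code), the sketch below compares two corpora along one hypothetical utterance-level measure using the 1-D Wasserstein distance from SciPy.

```python
# Minimal sketch (not the paper's code): compare real vs. synthetic speech
# along one utterance-level measure (here a hypothetical per-utterance mean F0)
# via the 1-D Wasserstein distance of the two empirical distributions.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(2)

# Stand-ins for an utterance-level prosody measure computed on each corpus.
real_f0_means = rng.normal(loc=180.0, scale=25.0, size=1000)
synthetic_f0_means = rng.normal(loc=172.0, scale=18.0, size=1000)

dist = wasserstein_distance(real_f0_means, synthetic_f0_means)
print(f"Wasserstein distance along the F0 dimension: {dist:.2f} Hz")

# In the paper this is repeated for measures of the acoustic environment,
# speaker characteristics and prosody; a smaller distance after conditioning
# the TTS model on utterance-level information indicates the synthetic
# distribution has moved closer to the real one along that dimension.
```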
Automatic speech recognition (ASR) has been established as a well-performing technique for many scenarios where large amounts of labeled data are available. In addition, unsupervised representation learning has recently helped to tackle tasks with limited data. Following this, hardware limitations and applications raise the question of how to efficiently take advantage of large pretrained models and reduce their complexity for downstream tasks. In this work, we study a challenging low-resource conversational telephony speech corpus from the medical domain in Vietnamese and German. We show the benefits of using unsupervised techniques beyond simple fine-tuning of large pre-trained models, discuss how to adapt them to a practical telephony task including bandwidth transfer, and investigate different data conditions for pre-training and fine-tuning. Using pretraining techniques, we outperform the project baselines by 22% relative. Further gains of 29% can be achieved by refinements of architecture and training, and another 6% by adding 0.8 h of in-domain adaptation data.
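One ingredient mentioned above is bandwidth transfer between wideband pretraining data and narrowband telephony audio. Below is a minimal sketch of the simplest option, resampling 8 kHz recordings to the 16 kHz rate a pretrained model typically expects; this does not restore the missing high-frequency content and is only an assumed stand-in for the paper's adaptation procedure.

```python
# Minimal sketch, not the paper's pipeline: one common way to reuse a
# 16 kHz-pretrained acoustic model on 8 kHz telephony audio is to upsample
# the narrowband recordings before feature extraction / fine-tuning.
import numpy as np
from scipy.signal import resample_poly

def upsample_telephony(waveform_8k: np.ndarray) -> np.ndarray:
    """Upsample an 8 kHz telephony waveform to 16 kHz (factor 2)."""
    return resample_poly(waveform_8k, up=2, down=1)

# Example with a synthetic 1-second narrowband signal.
sr_in, sr_out = 8000, 16000
t = np.arange(sr_in) / sr_in
telephone_audio = 0.1 * np.sin(2 * np.pi * 300 * t)   # placeholder waveform
upsampled_audio = upsample_telephony(telephone_audio)
print(len(telephone_audio), "->", len(upsampled_audio), "samples")
```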
Power grids have become an essential part of everyday life, even though they usually go unnoticed in daily routines. We typically become aware of this dependence only when the grid is no longer available. However, major changes, such as the transition to renewable energy sources (photovoltaics, wind turbines, etc.) and a growing number of energy consumers with complex load profiles (electric vehicles, home battery systems, etc.), pose new challenges for the power grid. To meet these challenges, we present two first-of-their-kind datasets based on measurements from a broadband power line communication (PLC) infrastructure. Both datasets, FIN-1 and FIN-2, were collected during real-world use in a part of the German low-voltage grid that supplies around 4.4 million people, and comprise more than 13 billion data points collected by over 5,100 sensors. In addition, we present different use cases in asset management, grid state visualization, forecasting, predictive maintenance, and novelty detection to highlight the benefits of these types of data. For these applications, we particularly emphasize the use of novel machine learning architectures to extract rich information from real-world data that cannot be captured with traditional approaches. By releasing the first large real-world datasets of this kind, we aim to shed light on the previously largely unrecognized potential of PLC data and, by presenting a variety of different use cases, to encourage machine-learning-based research in low-voltage distribution networks.
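As a hedged illustration of the novelty-detection use case (not tied to the released FIN-1/FIN-2 data or the paper's architectures), the sketch below fits an off-the-shelf detector to hypothetical per-window sensor features.

```python
# Minimal sketch: a simple novelty detector on per-sensor feature vectors, as
# a stand-in for the machine learning use cases mentioned above.
# IsolationForest is used purely for illustration; the paper argues for novel
# ML architectures.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)

# Hypothetical features per measurement window (e.g., mean/variance of a
# PLC channel quality indicator).
normal_windows = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))
detector = IsolationForest(contamination="auto", random_state=0).fit(normal_windows)

# Score new windows: label -1 indicates a potential novelty
# (e.g., a degrading cable segment or an unusual load pattern).
new_windows = np.vstack([rng.normal(0, 1, size=(10, 8)),
                         rng.normal(6, 1, size=(2, 8))])   # two injected anomalies
print(detector.predict(new_windows))
```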
A large body of research is concerned with the generation of realistic sensor data. LiDAR point clouds are generated by complex simulations or learned generative models. The generated data is typically used to enable or improve downstream perception algorithms. Two main questions arise from these procedures: first, how can the realism of the generated data be evaluated? Second, does more realistic data also lead to better perception performance? This paper addresses both questions and proposes a novel metric to quantify the realism of LiDAR point clouds. By training a proxy classification task, relevant features can be learned from real-world and synthetic point clouds. In a series of experiments, we demonstrate the application of our metric to determine the realism of generated LiDAR data and compare our metric's realism estimate with the performance of a segmentation model. We confirm that our metric provides an indication of downstream segmentation performance.
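A simplified stand-in for the proposed metric (not the paper's implementation) is a classifier-two-sample-test style check: if a classifier cannot separate real from synthetic clouds, the synthetic data is treated as more realistic. The features and data below are hypothetical.

```python
# Simplified stand-in (not the paper's metric): a classifier-two-sample-test
# style proxy for realism.  Hand-crafted per-cloud features replace the
# learned features from the paper's proxy task; accuracy near 0.5 means real
# and synthetic clouds are hard to tell apart.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

def cloud_features(points: np.ndarray) -> np.ndarray:
    """A few crude summary statistics of an (N, 3) point cloud."""
    return np.concatenate([points.mean(axis=0), points.std(axis=0), [len(points)]])

# Placeholder "real" and "synthetic" clouds.
real = [rng.normal(0, 1.0, size=(rng.integers(800, 1200), 3)) for _ in range(200)]
synth = [rng.normal(0, 1.2, size=(rng.integers(800, 1200), 3)) for _ in range(200)]

X = np.array([cloud_features(c) for c in real + synth])
y = np.array([0] * len(real) + [1] * len(synth))

acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"real-vs-synthetic accuracy: {acc:.2f}  (closer to 0.5 = more realistic)")
```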
Modern machine learning tasks often require considering not just one but multiple objectives. For example, besides prediction quality, these could be the efficiency, robustness, or fairness of the learned model, or any combination thereof. Multi-objective learning offers a natural framework for handling such problems without having to commit to early trade-offs. Surprisingly, statistical learning theory has so far provided almost no insight into the generalization properties of multi-objective learning. In this work, we take a first step towards filling this gap: we establish foundational generalization bounds for the multi-objective setting, as well as generalization and excess bounds for learning via scalarization. We also provide the first theoretical analysis of the relationship between the Pareto-optimal set of the true objectives and its empirical approximation from training data. In particular, we show a surprising asymmetry: all Pareto-optimal solutions can be approximated by empirically Pareto-optimal ones, but not vice versa.
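For concreteness, the sketch below computes an empirical Pareto-optimal set over two objectives and shows how a linear scalarization selects one of its elements; the candidate losses are random stand-ins, and the code illustrates only the definitions, not the paper's bounds.

```python
# Minimal sketch (illustrative only): empirical Pareto optimality and linear
# scalarization over two objectives, e.g. validation error and a fairness
# penalty, evaluated for a set of candidate models.  Lower is better.
import numpy as np

rng = np.random.default_rng(5)

# Each row: (objective_1, objective_2) for one candidate model.
losses = rng.uniform(0.0, 1.0, size=(50, 2))

def is_pareto_optimal(losses: np.ndarray) -> np.ndarray:
    """Boolean mask of points not strictly dominated by any other point."""
    n = len(losses)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Point j dominates i if it is no worse in all objectives and
        # strictly better in at least one.
        dominated = np.all(losses <= losses[i], axis=1) & np.any(losses < losses[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

pareto_mask = is_pareto_optimal(losses)
print("empirical Pareto-optimal candidates:", np.flatnonzero(pareto_mask))

# Linear scalarization: a fixed weight vector turns the problem into a
# single-objective one; different weights recover different Pareto points.
weights = np.array([0.7, 0.3])
best = np.argmin(losses @ weights)
print("scalarized choice:", best, "is Pareto-optimal:", bool(pareto_mask[best]))
```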
In this work, we unify several existing decoding strategies for punctuation prediction in one framework and introduce a novel strategy that uses multiple predictions for each word across different windows. We show that significant improvements can be achieved by optimizing these strategies after training the model, incurring only a potential increase in inference time and requiring no retraining. We further use our framework of decoding strategies for a first comparison of tagging and classification approaches in a real-time setting. Our results show that a classification approach to punctuation prediction can be beneficial when little or no right-side context is available.
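A rough illustration of the windowed decoding idea (not the paper's implementation): predictions from overlapping windows are averaged per word before the final punctuation label is chosen. The class set, window parameters, and model outputs below are hypothetical.

```python
# Illustrative sketch: combine the multiple predictions a sliding-window model
# makes for the same word.  Each window of length `window` shifted by `stride`
# yields a probability distribution over punctuation classes per covered word;
# here they are averaged per word.
import numpy as np

classes = ["O", "COMMA", "PERIOD", "QUESTION"]
n_words, window, stride = 12, 6, 3
rng = np.random.default_rng(6)

# sums[w] accumulates class probabilities for word w; counts[w] counts the
# number of windows that covered it.
sums = np.zeros((n_words, len(classes)))
counts = np.zeros(n_words)

for start in range(0, n_words - window + 1, stride):
    # Stand-in for the model's per-word class probabilities on this window.
    probs = rng.dirichlet(np.ones(len(classes)), size=window)
    sums[start:start + window] += probs
    counts[start:start + window] += 1

avg_probs = sums / counts[:, None]
predictions = [classes[i] for i in avg_probs.argmax(axis=1)]
print(predictions)
```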
Osteoarthritis (OA) is the most common joint disorder, affecting a substantial proportion of the world population, primarily the elderly. Despite its individual and socioeconomic burden, the onset and progression of OA still cannot be predicted reliably. Aiming to fill this diagnostic gap, we introduce an unsupervised learning scheme based on generative models to predict the future development of OA from knee radiographs. Using longitudinal data from the Osteoarthritis Initiative, we explore latent temporal trajectories to predict a patient's future radiographs up to the eight-year follow-up visit. Our model predicts the risk of progression towards OA and surpasses its supervised counterpart, whose input was provided by seven experienced radiologists. With the support of the model, sensitivity, specificity, positive predictive value, and negative predictive value increased significantly from 42.1% to 51.6%, from 72.3% to 88.6%, from 28.4% to 57.6%, and from 83.9% to 88.4%, respectively, whereas without such support the radiologists performed only slightly better than random guessing. Although it requires no human annotation during the training phase, our predictive model improves the prediction of the onset and progression of OA.
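Purely as an illustration of what exploring latent temporal trajectories could look like (this is not the paper's model), the sketch below linearly extrapolates a subject's latent codes to a later visit; the latent codes and the decoder are placeholders.

```python
# Very rough sketch (purely illustrative): given latent codes of a subject's
# baseline and follow-up radiographs from some pretrained generative model,
# extrapolate the latent trajectory linearly to a later visit; the decoded
# image would then be the predicted future radiograph.
import numpy as np

def extrapolate_latent(z_t0: np.ndarray, z_t1: np.ndarray, years_ahead: float,
                       visit_gap_years: float = 1.0) -> np.ndarray:
    """Linear extrapolation of a latent trajectory beyond the last visit."""
    velocity = (z_t1 - z_t0) / visit_gap_years
    return z_t1 + years_ahead * velocity

rng = np.random.default_rng(8)
z_baseline, z_followup = rng.normal(size=128), rng.normal(size=128)
z_future = extrapolate_latent(z_baseline, z_followup, years_ahead=8.0)
# predicted_image = decode(z_future)   # `decode` is a placeholder generator
print(z_future.shape)
```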
Background: Although convolutional neural networks (CNNs) achieve high diagnostic accuracy for detecting Alzheimer's disease (AD) dementia from magnetic resonance imaging (MRI) scans, they have not yet been applied in clinical routine. One important reason for this is the lack of model comprehensibility. Recently developed visualization methods for deriving CNN relevance maps may help to fill this gap. We investigated whether models with higher accuracy also rely more on discriminative brain regions predefined by prior knowledge. Methods: We trained a CNN for the detection of AD in N = 663 T1-weighted MRI scans of patients with dementia and amnestic mild cognitive impairment (MCI) and verified the accuracy of the models via cross-validation and in three independent samples with a total of N = 1655 cases. We evaluated the association between relevance scores and hippocampal volume to validate the clinical utility of this approach. To improve model comprehensibility, we implemented an interactive visualization of the 3D CNN relevance maps. Results: Across the three independent datasets, group separation showed high accuracy for AD dementia versus controls (AUC $\geq$ 0.92) and moderate accuracy for MCI versus controls (AUC $\approx$ 0.75). The relevance maps indicated that hippocampal atrophy was considered the most informative factor for AD detection, with additional contributions from atrophy in other cortical and subcortical regions. Relevance scores within the hippocampus were highly correlated with hippocampal volume (Pearson's r $\approx$ -0.86, p < 0.001). Conclusion: The relevance maps highlighted atrophy in regions that we hypothesized a priori. This strengthens the comprehensibility of CNN models trained in a purely data-driven manner based on scans and diagnostic labels.
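As a small illustration of the reported association check (with synthetic stand-in data), the sketch below correlates per-subject hippocampal relevance sums with hippocampal volumes using Pearson's r.

```python
# Minimal sketch (synthetic stand-in data): the kind of association check
# reported above -- correlating the summed relevance inside a hippocampus
# mask with hippocampal volume across subjects.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n_subjects = 200

# Hypothetical hippocampal volumes (mm^3) and relevance sums: smaller volume
# (more atrophy) is expected to attract more relevance, hence a negative r.
hippocampal_volume = rng.normal(3500.0, 500.0, size=n_subjects)
relevance_sum = -0.01 * hippocampal_volume + rng.normal(0.0, 2.0, size=n_subjects)

r, p = pearsonr(relevance_sum, hippocampal_volume)
print(f"Pearson's r = {r:.2f}, p = {p:.3g}")
```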