With the development of natural language processing (NLP) techniques, automatic diagnosis of eye diseases using ophthalmology electronic medical records (OEMR) has become possible. The task is to assess the condition of each of a patient's eyes separately, which we formulate in this paper as a particular multi-label classification problem. Although a few related studies exist for other diseases, automatic diagnosis of eye diseases exhibits unique characteristics. First, descriptions of both eyes are mixed together in OEMR documents, with both free text and templated asymptomatic descriptions, resulting in sparse and cluttered information. Second, OEMR documents contain multiple sections of descriptions and are long. Third, it is critical that the disease diagnosis model be explainable. To overcome these challenges, we present an effective automatic eye disease diagnosis framework, NEEDED. In this framework, a preprocessing module is integrated to improve the density and quality of information. We then design a hierarchical transformer structure to learn contextualized representations of each sentence in the OEMR document. For the diagnosis step, we propose an attention-based predictor that enables traceable diagnosis by extracting disease-specific information. Experiments on a real-world dataset and comparisons with several baseline models demonstrate the advantages and explainability of our framework.
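The abstract describes the attention-based predictor only at a high level. As one plausible reading, a label-wise attention layer can pool sentence representations per disease, so that the attention weights themselves expose the evidence behind each prediction. The sketch below is a minimal illustration under that assumption; all names and dimensions are ours, not the paper's.

```python
import torch
import torch.nn as nn

class LabelwiseAttentionPredictor(nn.Module):
    """Per-disease attention pooling over sentence representations.

    Each disease label owns a query vector; attending over the OEMR
    sentence embeddings yields a disease-specific summary, and the
    attention weights indicate which sentences drove each prediction
    (one way to obtain the traceability the abstract refers to).
    """
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.label_queries = nn.Parameter(torch.randn(num_labels, hidden_dim))
        self.classifier = nn.Linear(hidden_dim, 1)

    def forward(self, sent_reprs: torch.Tensor):
        # sent_reprs: (batch, num_sentences, hidden_dim)
        scores = torch.einsum("ld,bsd->bls", self.label_queries, sent_reprs)
        attn = scores.softmax(dim=-1)                    # (batch, labels, sents)
        label_ctx = torch.einsum("bls,bsd->bld", attn, sent_reprs)
        logits = self.classifier(label_ctx).squeeze(-1)  # (batch, labels)
        return torch.sigmoid(logits), attn               # probabilities + evidence
```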
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice, or about the bottlenecks the community faces in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive for participation (70%), while prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% used standard architectures. 43% of respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of respondents applied postprocessing steps.
The security of artificial intelligence (AI) is an important research area for building safe, reliable, and trustworthy AI systems. To accelerate research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, the China Industrial Control Systems Cyber Emergency Response Team, the Institute for Artificial Intelligence at Tsinghua University, and RealAI, as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks: the Deepfake Security Competition, the Autonomous Driving Security Competition, and the Face Recognition Security Competition. This report introduces the rules of these three tracks and the solutions of the top-ranking teams in each.
The success of deep neural networks requires both high annotation quality and massive data. In practice, however, the size and quality of a dataset are usually a trade-off, as data collection and cleaning are expensive and time-consuming. Automatic noisy label detection (NLD) techniques are therefore critical to real-world applications, especially those using crowdsourced datasets. As this is an under-explored topic in automatic speaker verification (ASV), we present a simple but effective solution to the task. First, we compare the effectiveness of various commonly used metric learning loss functions under different noise settings. Then, we propose two ranking-based NLD methods: inter-class inconsistency ranking and intra-class inconsistency ranking. They leverage the inconsistent nature of noisy labels and achieve high detection precision even under a high level of noise. Our solution enables efficient and effective cleaning of large-scale speaker recognition datasets.
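The abstract does not spell out the two rankings, so the following is a speculative sketch of the general idea only: score each utterance by how much closer its embedding sits to another speaker's centroid than to its own, then rank by that score. The function name and the centroid-based scoring are our assumptions, not the paper's method.

```python
import numpy as np

def inconsistency_ranking(embeddings: np.ndarray, labels) -> np.ndarray:
    """Rank utterances by how inconsistent they are with their labeled class.

    An utterance whose embedding is more similar to another speaker's
    centroid than to its own labeled speaker's centroid is likely
    mislabeled. Embeddings are assumed L2-normalized, so dot products
    are cosine similarities; higher score = more suspect.
    """
    labels = np.asarray(labels)
    centroids = {}
    for spk in np.unique(labels):
        c = embeddings[labels == spk].mean(axis=0)
        centroids[spk] = c / np.linalg.norm(c)
    scores = []
    for emb, spk in zip(embeddings, labels):
        intra = emb @ centroids[spk]                     # similarity to own class
        inter = max(emb @ centroids[s] for s in centroids if s != spk)
        scores.append(inter - intra)                     # inconsistency score
    return np.argsort(scores)[::-1]                      # most suspect first
```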
Transformer-based language models have become the standard approach to solving natural language processing tasks. However, industry adoption usually requires maximizing throughput while complying with latency constraints, which often prevents Transformer models from being used in production. To address this gap, model compression techniques such as quantization and pruning can be used to improve inference efficiency, but these techniques require specialized software to apply and deploy at scale. In this work, we propose a new pipeline for creating and running fast Transformer models on CPUs, combining hardware-aware pruning, knowledge distillation, quantization, and our own Transformer inference runtime engine with kernels optimized for sparse and quantized operators. We demonstrate the efficiency of our pipeline by creating a Fast DistilBERT model that shows minimal accuracy loss on the SQuADv1.1 question-answering benchmark while achieving strong throughput under typical production constraints and environments. Our results outperform the state-of-the-art Neural Magic DeepSparse runtime by up to 50% and achieve up to a 4.1x speedup over ONNX Runtime. Source code is publicly available at https://github.com/intel/intel-extension-for-transformers.
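The paper's pipeline relies on its own runtime with sparse and quantized kernels, which is not reproduced here. As a generic illustration of the quantization step alone, plain PyTorch dynamic quantization converts the Linear layers of a distilled SQuAD model to INT8:

```python
import torch
from transformers import AutoModelForQuestionAnswering

# Illustration only: not the paper's custom runtime, just the basic idea of
# trading a small amount of accuracy for CPU inference speed via INT8.
model = AutoModelForQuestionAnswering.from_pretrained(
    "distilbert-base-uncased-distilled-squad")
model.eval()

# Replace nn.Linear weights with dynamically quantized INT8 equivalents.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)
```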
Although Transformers and their Conformer variants have shown promising performance in speech recognition, their heavy parameterization incurs large memory costs during training and inference. Some works use cross-layer weight sharing to reduce model parameters, but the inevitable loss of capacity hurts model performance. To address this issue, this paper proposes a parameter-efficient Conformer that shares sparsely-gated experts. Specifically, we use a sparsely-gated mixture of experts (MoE) to extend the capacity of a Conformer block without increasing computation. The parameters of grouped Conformer blocks are then shared to reduce the number of parameters. Next, to ensure that the shared blocks retain the flexibility to adapt representations at different levels, we design the MoE routers and normalization individually. In addition, we use knowledge distillation to further improve performance. Experimental results show that, compared with the full-parameter model, the proposed model achieves competitive performance with only 1/3 of the encoder parameters.
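The abstract outlines the idea without code. The following is a generic sketch of a top-1 sparsely-gated MoE feed-forward layer, the building block such a Conformer extension rests on; the shapes, the top-1 routing choice, and all names are assumptions. In the paper's design, blocks like this would additionally be shared across groups of layers while each group keeps its own router and normalization.

```python
import torch
import torch.nn as nn

class SparseMoEFeedForward(nn.Module):
    """Top-1 sparsely-gated mixture-of-experts feed-forward layer.

    Capacity grows with the number of experts, but each token is routed
    to a single expert, so per-token compute stays roughly that of one
    ordinary feed-forward block.
    """
    def __init__(self, dim: int, ffn_dim: int, num_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, ffn_dim), nn.ReLU(),
                          nn.Linear(ffn_dim, dim))
            for _ in range(num_experts)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim) after flattening batch and time
        gate = self.router(x).softmax(dim=-1)   # (tokens, experts)
        weight, idx = gate.max(dim=-1)          # top-1 routing decision
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                out[mask] = weight[mask, None] * expert(x[mask])
        return out
```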
In this paper, we introduce a new task, spoken video grounding (SVG), which aims to localize the desired video segment described by a spoken-language query. Compared with using text, using audio requires the model to directly exploit the useful phonemes and syllables in raw speech that relate to the video. Moreover, we randomly add environmental noise to the speech audio, further increasing the difficulty of the task and better simulating real applications. To rectify discriminative phonemes and extract video-related information from noisy audio, we develop a novel video-guided curriculum learning (VGCL) scheme for audio pre-training, which exploits important visual cues to help the model understand spoken language and suppress external noise. Since the model cannot access the ground-truth video segment at inference time, we design a curriculum strategy that shifts the input video from the ground-truth segment to the entire video content during pre-training. In the end, the model learns to extract key visual information from the whole video clip to help it understand the spoken language. In addition, we collect the first large-scale spoken video grounding dataset, based on ActivityNet, called the ActivityNet Speech dataset. Extensive experiments show that the proposed video-guided curriculum learning facilitates pre-training, producing a stronger audio encoder and significantly improving performance on the spoken video grounding task. Moreover, we show that under noisy audio our model outperforms methods that ground video with ASR transcripts, further demonstrating the effectiveness of our curriculum strategy.
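Details of the curriculum schedule are not given in the abstract. One simple instantiation consistent with the description is a window that starts at the ground-truth segment and expands linearly to the whole clip, as sketched below; the linear schedule and the function signature are assumptions.

```python
def curriculum_window(gt_start: float, gt_end: float, video_len: float,
                      step: int, total_steps: int) -> tuple[float, float]:
    """Grow the visible video context from the ground-truth segment to
    the whole clip over the course of pre-training.

    Early steps feed only the annotated segment; the window expands
    until the model sees the entire video, matching the inference
    condition where no ground-truth segment is available.
    """
    progress = min(step / total_steps, 1.0)
    start = gt_start * (1.0 - progress)              # drifts toward 0
    end = gt_end + (video_len - gt_end) * progress   # drifts toward video_len
    return start, end
```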
Delivering streams of constant quality can guarantee user experience while preventing wasted bitrate. In this paper, we propose a novel deep-learning-based two-pass encoder parameter prediction framework that decides the rate factor (RF) with which the encoder can output a stream of constant quality. For each single-shot segment of a video, the proposed method first extracts spatial, temporal, and pre-coding features through an ultra-fast pre-processing step. Based on these features, a deep neural network predicts the RF parameter, and the video encoder compresses the segment using this RF in a first encoding pass. The VMAF quality of the first-pass encode is then measured; if it does not meet the target, a second pass of RF prediction and encoding is performed. With the first pass's predicted RF and the corresponding measured quality as feedback, the second-pass prediction becomes highly accurate. Experiments show that the proposed method requires only 1.55x the encoding complexity on average, while its accuracy, the fraction of compressed videos whose actual VMAF falls within $\pm 1$ of the target VMAF, reaches 98.88%.
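Only the control flow of the two-pass scheme can be read from the abstract; the sketch below captures it with hypothetical callables standing in for the neural RF predictor, the encoder, and the VMAF measurement.

```python
def two_pass_rf_encode(segment, target_vmaf, predict_rf, encode, measure_vmaf,
                       tolerance: float = 1.0):
    """Two-pass constant-quality encoding loop, sketched from the abstract.

    predict_rf, encode, and measure_vmaf are hypothetical callables
    (they are not named in the paper); only the control flow is taken
    from the text.
    """
    features = segment.features          # spatial/temporal/pre-coding features
    rf = predict_rf(features)            # first-pass RF from the DNN
    bitstream = encode(segment, rf)
    vmaf = measure_vmaf(bitstream, segment)
    if abs(vmaf - target_vmaf) <= tolerance:
        return bitstream                 # first pass already on target
    # Second pass: refine the prediction using (rf, vmaf) as feedback.
    rf = predict_rf(features, feedback=(rf, vmaf))
    return encode(segment, rf)
```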
We introduce DeepGen, a system deployed at web scale for automatically creating sponsored search advertisements (ads) for Bing Ads customers. We leverage state-of-the-art natural language generation (NLG) models to generate fluent ads from advertisers' web pages in an abstractive fashion, and we address practical issues such as factuality and inference speed. In addition, our system creates customized ads in real time in response to a user's search query, highlighting different aspects of the same product depending on what the user is looking for. To achieve this, the system generates a diverse choice of smaller ad components ahead of time and, at query time, selects the most relevant ones to stitch into a complete ad. We improve generation diversity by training a controllable NLG model to produce multiple ads for the same web page, each highlighting a different selling point. Our system design further improves diversity by first running an ensemble of generation models trained with different objectives and then using a diversity sampling algorithm to pick a varied subset of the generated results for online selection. Experimental results show the effectiveness of our proposed system design. The system is currently deployed in production, serving ${\sim}4\%$ of the global ads served by Bing.
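The diversity sampling algorithm is not specified in the abstract; a maximal-marginal-relevance (MMR) style greedy selection is one standard choice for picking generations that are individually relevant yet mutually dissimilar, sketched here under that assumption. The `embed` callable and the (text, score) candidate format are ours, not the paper's.

```python
import numpy as np

def diversity_sample(candidates, embed, k: int, lam: float = 0.7):
    """Greedily pick k generations, trading relevance against redundancy.

    candidates: list of (text, relevance_score) pairs.
    embed: maps an ad text to a unit vector (assumed, not from the paper).
    lam: weight between relevance (1.0) and diversity (0.0).
    """
    vecs = np.stack([embed(text) for text, _ in candidates])
    scores = np.array([s for _, s in candidates])
    chosen = []
    while len(chosen) < min(k, len(candidates)):
        best, best_val = None, -np.inf
        for i in range(len(candidates)):
            if i in chosen:
                continue
            sim = max((vecs[i] @ vecs[j] for j in chosen), default=0.0)
            val = lam * scores[i] - (1 - lam) * sim  # relevance minus redundancy
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
    return [candidates[i][0] for i in chosen]
```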
Restoring images of snowy scenes captured in severe weather is a challenging task. Snow images suffer complex degradations scattered over the clean image, altering its distribution. Previous CNN-based methods struggle to fully restore snowy scenes because they lack specific global modeling capability. In this paper, we apply the vision transformer to the task of removing snow from a single image. Specifically, we propose a parallel network architecture split along the channel dimension that performs local feature refinement and global information modeling separately, and we use a channel shuffle operation to combine their respective strengths and enhance network performance. Second, we propose the MSP module, which uses multi-scale average pooling to aggregate information at different scales and performs multi-scale projection self-attention within multi-head self-attention, improving the model's representation ability under different scales of degradation. Finally, we design a lightweight, simple local capture module to refine the model's local capture capability. In the experimental section, we conduct extensive experiments to demonstrate the superiority of our method, comparing against previous snow removal methods on three snow-scene datasets. The results show that our method surpasses the state of the art with fewer parameters and less computation: on the CSD test dataset we achieve substantial gains of 1.99 dB in PSNR and 0.03 in SSIM, and on the SRRS and Snow100K datasets we improve PSNR by 2.47 dB and 1.62 dB, respectively, over the TransWeather method, with a 0.03 gain in SSIM. In the visual comparison, our MSP-Former achieves better visual results than existing methods, demonstrating its practical usability.
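The channel shuffle operation itself is standard (popularized by ShuffleNet); below is a minimal sketch of how it could recombine the two channel-split branches. The branch processing is stubbed out, since only the split-and-shuffle step is described in the abstract.

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
    """Interleave channels across groups, as in ShuffleNet.

    After the local (CNN) branch and the global (transformer) branch
    each process half of the channels, shuffling mixes local and
    global features so later layers see both.
    """
    b, c, h, w = x.shape
    x = x.view(b, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(b, c, h, w)

# Example: split, process each branch (identity stubs here), merge, shuffle.
feat = torch.randn(1, 64, 32, 32)
local, global_ = feat.chunk(2, dim=1)      # channel split: 32 + 32 channels
merged = torch.cat([local, global_], dim=1)
mixed = channel_shuffle(merged, groups=2)  # interleave the two halves
```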