Crowdsourcing, in which human intelligence and productivity are dynamically mobilized to tackle tasks too complex for automation alone, has grown into an important research topic and inspired new businesses (e.g., Uber, Airbnb). Over the years, crowdsourcing has morphed from a platform on which workers and tasks are matched up manually into one that leverages data-driven algorithmic management powered by artificial intelligence (AI) to achieve increasingly sophisticated optimization objectives. In this paper, we provide a survey presenting a systematic overview of how AI can empower crowdsourcing, which we refer to as AI-Empowered Crowdsourcing (AIEC). We propose a taxonomy that divides algorithmic crowdsourcing into three major areas: 1) task delegation, 2) motivating workers, and 3) quality control, focusing on the major objectives that need to be accomplished. We discuss the limitations and insights of work in each of these areas, and curate the open research challenges to highlight promising future research directions.
Prior works on Information Extraction (IE) typically predict different tasks and instances (e.g., event triggers, entities, roles, relations) independently, neglecting their interactions and leading to model inefficiency. In this work, we introduce a joint IE framework, HighIE, that learns and predicts multiple IE tasks by integrating high-order cross-task and cross-instance dependencies. Specifically, we design two categories of high-order factors: homogeneous factors and heterogeneous factors. These factors are then utilized to jointly predict the labels of all instances. To address the intractability of exact high-order inference, we incorporate a high-order neural decoder that is unfolded from a mean-field variational inference procedure. Experimental results show that our approach achieves consistent improvements on three IE tasks compared with our baseline and prior work.
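For context, the generic mean-field fixed-point update that such an unfolded neural decoder approximates (notation ours, not the paper's), with unary score $s_i$ for label $y_i$ and score $s_f$ for each high-order factor $f$ containing instance $i$:

```latex
Q_i^{(t+1)}(y_i) \;\propto\; \exp\Big( s_i(y_i)
    \;+\; \sum_{f \ni i} \mathbb{E}_{Q^{(t)}}\big[\, s_f(y_i, \mathbf{y}_{f \setminus i}) \,\big] \Big)
```

Unrolling a fixed number of these updates yields a differentiable decoder that can be trained end-to-end.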
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices, as well as the bottlenecks, faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
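The k-fold cross-validation and model-ensembling practices tallied above can be sketched minimally; the least-squares "model" below is a hypothetical stand-in for any challenge algorithm:

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Split n sample indices into k roughly equal, shuffled folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

def kfold_ensemble(X, y, k=5):
    """Train one model per fold and average their predictions — a simple
    'multiple identical models' ensemble, as reported in the survey."""
    folds = kfold_indices(len(X), k)
    models = []
    for i in range(k):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        # Hypothetical "model": a least-squares fit on the training split.
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        models.append(w)
    # Ensemble prediction: mean over the k fold models.
    return lambda Xn: np.mean([Xn @ w for w in models], axis=0)
```

Each fold's held-out split would normally be used for validation and model selection; here it is omitted to keep the sketch short.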
Recently, the success of pre-training in the text domain has been fully extended to vision, audio, and cross-modal scenarios. The proposed pre-training models of different modalities are showing a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into five components: embedding, encoder, target embedding, decoder, and target. As almost all common modules are provided for each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
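The modular idea can be illustrated with a toy component registry; the class names, registry, and `build_model` config below are illustrative stand-ins, not TencentPretrain's actual API:

```python
# Toy sketch of assembling a model from per-component module registries.
REGISTRY = {"embedding": {}, "encoder": {}, "target": {}}

def register(component, name):
    """Decorator that files a module class under its component."""
    def deco(cls):
        REGISTRY[component][name] = cls
        return cls
    return deco

@register("embedding", "word")
class WordEmbedding:
    def __call__(self, tokens):
        return [hash(t) % 100 for t in tokens]  # stand-in for a lookup table

@register("encoder", "identity")
class IdentityEncoder:
    def __call__(self, hidden):
        return hidden

@register("target", "count")
class CountTarget:
    def __call__(self, hidden):
        return len(hidden)

def build_model(config):
    """Assemble a 'model' by picking one module per component from config."""
    parts = [REGISTRY[c][config[c]]() for c in ("embedding", "encoder", "target")]
    def model(tokens):
        h = tokens
        for part in parts:
            h = part(h)
        return h
    return model
```

Swapping one entry in the config (say, a different encoder) reuses every other component unchanged, which is the point of the design.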
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and ask the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models for high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating a rate of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
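As a rough illustration of the INT8 setting, here is a generic symmetric per-tensor quantization scheme plus a trivial 3x upscaling baseline; this sketches the general technique, not any challenge entry:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: x ~= scale * q."""
    scale = np.max(np.abs(x)) / 127.0 if np.any(x) else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from INT8 codes."""
    return q.astype(np.float32) * scale

def upscale3x_nearest(img):
    """Trivial 3x upscaling baseline (nearest neighbour); challenge
    entries replace this with a learned quantized network."""
    return np.repeat(np.repeat(img, 3, axis=0), 3, axis=1)
```

In practice, activations and weights are quantized this way so that the NPU can execute the whole network in INT8 arithmetic.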
In this paper, we study the problem of bit allocation in neural video compression (NVC). First, we reveal that a recent bit allocation approach claimed to be optimal is in fact sub-optimal due to its implementation. Specifically, we find that its sub-optimality lies in the incorrect application of semi-amortized variational inference (SAVI) to latents with a non-factorized variational posterior. We then show that a corrected version of SAVI on non-factorized latents requires recursively applying back-propagation through gradient ascent, from which we derive the corrected optimal bit allocation algorithm. As the corrected bit allocation is computationally infeasible, we design an efficient approximation to make it practical. Empirical results show that our proposed correction significantly improves the R-D performance and bit-rate error of the incorrect bit allocation, and outperforms all other bit allocation methods by a large margin. The source code is provided in the supplementary material.
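The per-image gradient-ascent refinement at the heart of SAVI-style bit allocation can be sketched on a toy quadratic R-D objective (the real method differentiates through a neural codec; the objective, constants, and function names here are illustrative only):

```python
import numpy as np

def refine_latent(y0, x, lam=0.1, lr=0.1, steps=200):
    """Refine a latent y by gradient ascent on a toy R-D objective:
        maximize  -(distortion + lam * rate)
        distortion = ||y - x||^2   (toy 'decoder' = identity)
        rate       = ||y||^2       (toy Gaussian-prior bits proxy)
    This stands in for SAVI, which optimizes the same kind of per-image
    objective through an actual neural encoder/decoder."""
    y = y0.copy()
    for _ in range(steps):
        # Analytic gradient of the objective w.r.t. y.
        grad = -2.0 * (y - x) - 2.0 * lam * y
        y += lr * grad
    return y
```

For this quadratic objective the optimum is y* = x / (1 + lam), so convergence of the iteration is easy to verify.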
This paper considers the problem of lossy neural image compression (NIC). Current state-of-the-art (SOTA) methods adopt a uniform posterior to approximate quantization noise, and a single-sample estimator to approximate the gradient of the evidence lower bound (ELBO). In this paper, we propose to train NIC with the multiple-sample importance weighted autoencoder (IWAE) target, which is tighter than the ELBO and converges to the log-likelihood as the sample size increases. First, we identify that the uniform posterior of NIC has special properties, which affect the variance and bias of the pathwise and score-function estimators of the IWAE target. Moreover, we provide insights into a commonly adopted trick in NIC from the perspective of gradient variance. Based on these analyses, we further propose multiple-sample NIC (MS-NIC), an enhanced IWAE target for NIC. Experimental results demonstrate that it improves SOTA NIC methods. Our MS-NIC is plug-and-play and can be easily extended to other neural compression tasks.
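For reference, the standard K-sample IWAE bound that the proposed MS-NIC target builds on (notation generic, not the paper's):

```latex
\log p(x) \;\ge\; \mathcal{L}_K
  \;=\; \mathbb{E}_{z_1,\dots,z_K \sim q(z \mid x)}
  \left[ \log \frac{1}{K} \sum_{k=1}^{K} \frac{p(x, z_k)}{q(z_k \mid x)} \right],
\qquad \mathcal{L}_1 = \mathrm{ELBO},
\quad \mathcal{L}_K \xrightarrow{\,K \to \infty\,} \log p(x).
```

The bound is monotonically non-decreasing in K, which is why the multi-sample target is tighter than the single-sample ELBO used in current NIC training.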
Small lesions in magnetic resonance imaging (MRI) images are crucial for the clinical diagnosis of many diseases. However, MRI quality is easily degraded by various kinds of noise, which can greatly affect the diagnostic accuracy for small lesions. Although some methods for denoising MR images have been proposed, task-specific denoising methods that improve diagnostic confidence in small lesions are lacking. In this work, we propose to denoise three-dimensional (3D) MR images with small lesions through a voxel-wise hybrid residual MLP-CNN model. We combine the basic deep learning architectures MLP and CNN to obtain an appropriate inductive bias for image denoising, and integrate each output layer of the MLP and CNN through added residual connections to leverage long-range information. We evaluated the proposed method on 720 T2-FLAIR brain images with small lesions at different noise levels. The results show that, compared with state-of-the-art methods, our method is superior on the test dataset in both quantitative and visual evaluations. Moreover, two experienced radiologists agreed that, at moderate and high noise levels, our method outperforms the others in restoring small lesions and overall image quality. The implementation of our method is available at https://github.com/laowangbobo/Residual_MLP_CNN_MIXER.
The Tor (The Onion Router) network is a widely used open-source anonymous communication tool, and abuse of Tor makes it difficult to monitor the spread of online crimes, such as visits to criminal websites. Most existing approaches to de-anonymizing the Tor network rely heavily on manually extracted features, resulting in time-consuming processes and poor performance. To address these shortcomings, this paper proposes a neural representation learning approach to identify website fingerprints with classification algorithms. We build a new website fingerprinting attack model based on convolutional neural networks (CNNs) with dilated and causal convolutions, which enlarge the receptive field of the CNN and capture the sequential features of the input data. Experiments on three mainstream public datasets show that, compared with state-of-the-art methods, the proposed model is highly effective and efficient for website fingerprinting classification, improving accuracy by 12.21%.
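A minimal single-channel sketch of the dilated causal convolution used to enlarge the receptive field (a generic illustration of the operation, not the paper's model):

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """1-D causal convolution with dilation: the output at time t sees
    only x[t], x[t-d], x[t-2d], ...  (the sequence is left-padded with
    zeros so the output has the same length as the input)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
        for t in range(len(x))
    ])
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth while keeping every output strictly causal, which is what lets the CNN capture long sequential patterns in traffic traces.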
Reconstructing 3D hand meshes from monocular RGB images has attracted increasing attention due to its enormous potential applications in the AR/VR domain. Most state-of-the-art methods attempt to tackle this task in an anonymous manner. Specifically, the identity of the subject is ignored, even though it is practically available in real applications where the user does not change across a continuous recording session. In this paper, we propose an identity-aware hand mesh estimation model, which can incorporate identity information represented by the subject's intrinsic shape parameters. We demonstrate the importance of identity information by comparing the proposed identity-aware model with a baseline that treats subjects anonymously. Furthermore, to handle the use case of unseen test subjects, we propose a novel personalization pipeline to calibrate the intrinsic shape parameters using only a few unlabeled RGB images of the subject. Experiments on two large-scale public datasets validate the state-of-the-art performance of our proposed method.