Deep learning predictions with measurable confidence are increasingly in demand for real-world problems, especially in high-risk settings. The conformal prediction (CP) framework is a versatile solution that automatically guarantees a maximum error rate. However, CP suffers from computational inefficiency that limits its application to large-scale datasets. In this paper, we propose a novel conformal loss function that approximates the traditionally two-step CP approach in a single step. By evaluating and penalizing deviations from the strict expected CP output distribution, a deep learning model can learn the direct relationship between input data and conformal p-values. Our approach achieves a training-time reduction of up to 86% compared to Aggregated Conformal Prediction (ACP), a well-established approximate variant of CP. We carry out a comprehensive empirical evaluation showing that, in terms of approximate validity and predictive efficiency, our novel loss function is competitive with ACP on the well-known MNIST dataset.
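To make the two-step baseline and the intended one-step shortcut concrete, here is a minimal Python sketch: it computes classical conformal p-values from a calibration set and then scores a batch of predicted p-values by their deviation from the uniform distribution expected under validity. The function names and the histogram-based penalty are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def conformal_p_values(cal_scores, test_scores):
    """Classical two-step CP: the p-value of a test point is the fraction of
    calibration nonconformity scores that are at least as large as its own."""
    cal_scores = np.asarray(cal_scores)
    return np.array([
        (np.sum(cal_scores >= s) + 1) / (len(cal_scores) + 1)
        for s in np.asarray(test_scores)
    ])

def conformal_uniformity_loss(predicted_p, n_bins=10):
    """Hypothetical one-step surrogate: penalize deviation of a batch of
    predicted p-values from the uniform distribution expected under validity."""
    hist, _ = np.histogram(predicted_p, bins=n_bins, range=(0.0, 1.0), density=True)
    return float(np.mean((hist - 1.0) ** 2))

# toy usage
rng = np.random.default_rng(0)
print(conformal_p_values(rng.normal(size=100), [0.0, 2.0]))
print(conformal_uniformity_loss(rng.uniform(size=1000)))   # close to 0 for uniform p-values
```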
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Knowledge representation and reasoning in law are essential to facilitate the automation of legal analysis and decision-making tasks. In this paper, we propose a new approach based on legal science, specifically legal taxonomy, for representing and reasoning with legal documents. Our approach interprets the regulations in legal documents as binary trees, which facilitates legal reasoning systems to make decisions and resolve logical contradictions. The advantages of this approach are twofold. First, legal reasoning can be performed on the basis of the binary tree representation of the regulations. Second, the binary tree representation of the regulations is more understandable than the existing sentence-based representations. We provide an example of how our approach can be used to interpret the regulations in a legal document.
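A minimal sketch of the binary-tree idea, assuming hypothetical AND/OR connective nodes over atomic conditions (the paper's actual node semantics and decision procedure may differ):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A regulation clause as a binary tree: leaves are atomic conditions,
    internal nodes are logical connectives (illustrative assumption)."""
    label: str                       # "AND", "OR", or an atomic condition name
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def evaluate(node: Node, facts: dict) -> bool:
    """Decide whether a set of case facts satisfies the regulation tree."""
    if node.left is None and node.right is None:
        return bool(facts.get(node.label, False))
    l, r = evaluate(node.left, facts), evaluate(node.right, facts)
    return (l and r) if node.label == "AND" else (l or r)

# e.g. "valid if there is an offer AND (acceptance OR performance)"
rule = Node("AND", Node("offer"), Node("OR", Node("acceptance"), Node("performance")))
print(evaluate(rule, {"offer": True, "performance": True}))   # True
```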
Semantic communication (SemCom) and edge computing are two disruptive solutions for addressing the emerging requirements of massive data communication, bandwidth efficiency, and low-latency data processing in the Metaverse. However, edge computing resources are often provided by computing service providers, and thus it is essential to design appealing incentive mechanisms for the provision of these limited resources. A deep learning (DL)-based auction has recently been proposed as an incentive mechanism that maximizes revenue while holding important economic properties, i.e., individual rationality and incentive compatibility. Therefore, in this work, we introduce the design of a DL-based auction for computing resource allocation in the SemCom-enabled Metaverse. First, we briefly introduce the fundamentals and challenges of the Metaverse. Second, we present the preliminaries of SemCom and edge computing. Third, we review various incentive mechanisms for edge computing resource trading. Fourth, we present the design of the DL-based auction for edge resource allocation in the SemCom-enabled Metaverse. Simulation results demonstrate that the DL-based auction improves revenue while nearly satisfying the individual rationality and incentive compatibility constraints.
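For readers unfamiliar with the two economic properties mentioned, the classical second-price (Vickrey) auction, sketched below in plain Python as a hypothetical single-resource baseline rather than the DL-based mechanism itself, is the textbook example of a mechanism that is both incentive-compatible and individually rational:

```python
def vickrey_auction(bids):
    """Second-price (Vickrey) auction for a single resource block:
    the highest bidder wins but pays the second-highest bid, which makes
    truthful bidding optimal (IC) and guarantees non-negative utility (IR)."""
    assert len(bids) >= 2
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner, price = order[0], bids[order[1]]
    return winner, price

# e.g. three edge-computing buyers bidding for one resource block
print(vickrey_auction([3.0, 5.5, 4.2]))   # -> (1, 4.2)
```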
Graph Neural Networks (GNNs) have been shown to be inherently susceptible to the problems of over-smoothing and over-squashing. These issues limit the ability of GNNs to model complex graph interactions by reducing their effectiveness at taking distant information into account. Our study reveals a key connection between the local graph geometry and the occurrence of both of these issues, thereby providing a unified framework for studying them at a local scale using Ollivier's Ricci curvature. Based on our theory, a number of principled methods are proposed to alleviate the over-smoothing and over-squashing issues.
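As a rough illustration of the geometric quantity involved, the sketch below computes the Ollivier-Ricci curvature of a single edge of an unweighted graph, using the common convention that each node keeps probability mass alpha on itself and spreads the remainder uniformly over its neighbours; it is a didactic, unoptimized implementation, not the method proposed in the paper.

```python
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def ollivier_ricci(G, u, v, alpha=0.5):
    """kappa(u, v) = 1 - W1(mu_u, mu_v), where mu_x puts mass alpha on x and
    (1 - alpha)/deg(x) on each neighbour, and W1 is the 1-Wasserstein distance
    with respect to shortest-path distance (didactic sketch, not optimized)."""
    def measure(x):
        nbrs = list(G.neighbors(x))
        return [x] + nbrs, [alpha] + [(1 - alpha) / len(nbrs)] * len(nbrs)

    su, mu = measure(u)
    sv, mv = measure(v)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    C = np.array([[dist[a][b] for b in sv] for a in su], dtype=float)

    # W1 as a transportation LP: minimize <C, P> with row sums mu and column sums mv.
    n, m = C.shape
    A_eq, b_eq = [], []
    for i in range(n):                       # row-sum constraints
        row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1
        A_eq.append(row); b_eq.append(mu[i])
    for j in range(m):                       # column-sum constraints
        col = np.zeros(n * m); col[j::m] = 1
        A_eq.append(col); b_eq.append(mv[j])
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return 1.0 - res.fun

G = nx.karate_club_graph()
print(ollivier_ricci(G, 0, 1))
```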
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
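Because the weights are openly released, the models can be loaded with standard tooling; the following sketch assumes the Hugging Face transformers library and the small bigscience/bloom-560m checkpoint (the full 176B model is published as bigscience/bloom and requires correspondingly more hardware):

```python
# Minimal generation sketch assuming the Hugging Face `transformers` library.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"   # small checkpoint; the full model is "bigscience/bloom"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("BLOOM is a 176B-parameter open-access language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```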
3D inference from monocular vision using neural networks is an important research area of computer vision. Applications of this research area are varied, and the many proposed solutions have shown remarkable performance. Although much effort has been invested, there are still unanswered questions, some of which are fundamental. In this paper, I discuss a problem that I hope will come to be known as a generalization of the Blind Perspective-n-Point (Blind PnP) problem for object-driven 3D inference based on 2D representations. The vital difference between this fundamental problem and the Blind PnP problem is that, in the fundamental problem, the 3D inference parameters are attached directly to 3D points, and the camera concept is represented through the sharing of the parameters of these points. By providing an explainable and robust gradient-descent solution based on 2D representations for an important special case of the problem, the paper opens up a new approach for using available information-based learning methods to solve problems related to 3D object pose estimation from 2D images.
Recently, multi-agent variants of the classical multi-armed bandit were proposed to address fairness issues in online learning. Inspired by a long line of work in social choice and economics, the goal is to optimize the Nash social welfare rather than total utility. Unfortunately, previous algorithms are either not efficient or achieve sub-optimal regret in terms of the number of rounds $T$. We propose a new efficient algorithm whose regret is also lower than that of the previous inefficient algorithms. For $N$ agents, $K$ arms, and $T$ rounds, our approach has regret $\tilde{O}(\sqrt{NKT} + NK)$. This improves on the previous approach, which has regret $\tilde{O}(\min(NK, \sqrt{N}K^{3/2})\sqrt{T})$. We also complement our efficient algorithm with an inefficient one whose regret is $\tilde{O}(\sqrt{KT} + N^2K)$. Experimental findings confirm the effectiveness of our efficient algorithm compared with the previous approaches.
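For reference, the Nash social welfare is commonly defined as the product (equivalently, up to a monotone transform, the geometric mean) of the agents' utilities, in contrast to the utilitarian sum; the exact normalization used in the paper may differ:

```latex
\mathrm{NSW}(\pi) = \prod_{i=1}^{N} u_i(\pi)
\qquad \text{versus} \qquad
\mathrm{USW}(\pi) = \sum_{i=1}^{N} u_i(\pi)
```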
Software vulnerabilities existing in a program or function of computer systems are a serious and crucial concern. Typically, in a program or function consisting of hundreds or thousands of source code statements, only a few statements cause the corresponding vulnerabilities. Currently, experts perform vulnerability labelling at the function or program level with the assistance of machine learning tools. Extending this approach to the code statement level is much more costly and time-consuming and remains an open problem. In this paper, we propose a novel end-to-end deep learning approach to identify the vulnerability-relevant code statements of a specific function. Inspired by the specific structures observed in real-world vulnerable code, we first leverage mutual information to learn a set of latent variables representing the relevance of source code statements to the vulnerability of the corresponding function. We then propose a novel clustered, spatial contrastive learning method to further improve the robust selection of vulnerability-relevant code statements. Experimental results on a real-world dataset of 200K+ C/C++ functions show the superiority of our method over other state-of-the-art baselines. In general, our method obtains 3% to 14% higher performance than the baselines, as measured by VCP, VCA, and Top-10 ACC, when run on real-world datasets in an unsupervised setting. Our released source code samples are available at https://github.com/vannguyennd/livuitcl.
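As a rough sketch of a cluster-wise contrastive objective of the kind described (a generic SupCon-style loss with assumed tensor names, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def clustered_contrastive_loss(embeddings, cluster_ids, temperature=0.1):
    """Generic cluster-wise contrastive loss over statement embeddings
    (illustrative; not the paper's exact objective). Statements sharing a
    cluster id are pulled together, all others are pushed apart."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (cluster_ids.unsqueeze(0) == cluster_ids.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    has_pos = pos_mask.any(dim=1)          # anchors with at least one positive
    loss = -(log_prob * pos_mask).sum(1)[has_pos] / pos_mask.sum(1)[has_pos]
    return loss.mean()

# toy usage: 6 statement embeddings, two clusters
emb = torch.randn(6, 32)
ids = torch.tensor([0, 0, 0, 1, 1, 1])
print(clustered_contrastive_loss(emb, ids))
```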
In recent years, the number of machine learning (ML) technologies gaining regulatory approval for healthcare has increased significantly, allowing them to be placed on the market. However, the regulatory frameworks applied to them were originally designed for traditional software, in contrast to the data-driven, learnt behaviour of ML. As these frameworks are in the process of being reformed, there is a need to proactively assure the safety of ML so that patient safety is not compromised. The Assurance of Machine Learning for use in Autonomous Systems (AMLAS) methodology was developed by the Assuring Autonomy International Programme based on well-established concepts in system safety. This review appraised the methodology by consulting ML manufacturers to understand whether it converges with or diverges from their current safety assurance practices, whether there are gaps and limitations in it, and whether it is fit for purpose when applied to the healthcare domain. Through this work, we argue that AMLAS is a suitable safety assurance methodology when applied to medical machine learning technologies, although the development of healthcare-specific supplementary guidance would benefit those implementing the methodology.