Rapid and accurate detection of the disease can greatly help reduce the pressure on the healthcare system of any country during any pandemic and lower mortality. The aim of this work is to create a multimodal system using a novel machine learning framework that uses both chest X-ray (CXR) images and clinical data to predict severity in COVID-19 patients. In addition, the study presents a nomogram-based scoring technique for predicting the probability of death in high-risk patients. This study used 25 biomarkers and CXR images to predict risk in 930 COVID-19 patients admitted during the first wave of COVID-19 in Italy (March-June 2020). The proposed multimodal stacking technique produced accuracy, sensitivity, and F1-score of 89.03%, 90.44%, and 89.03%, respectively, for identifying low- or high-risk patients. This multimodal approach improved accuracy by 6% compared with using CXR images or clinical data alone. Finally, a nomogram scoring system using multivariate logistic regression was used to stratify the mortality risk of the high-risk patients identified in the first stage. Lactate dehydrogenase (LDH), O2 percentage, white blood cell (WBC) count, age, and C-reactive protein (CRP) were identified as useful predictors using a random forest feature-selection model. A nomogram score based on these five predictor parameters and the CXR images was developed to quantify the probability of death and divide patients into two risk groups: survival (<50%) and death (>=50%). The multimodal technique was able to predict the death probability of high-risk patients with an F1-score of 92.88%. The areas under the curves for the development and validation cohorts were 0.981 and 0.939, respectively.
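The nomogram stage described above is a multivariate logistic regression over the five named biomarkers plus a CXR-based score, thresholded at 50% to split patients into survival/death groups. A minimal sketch follows; the coefficient values and intercept are placeholders for illustration only (the abstract does not report the fitted values):

```python
import math

def mortality_probability(ldh, o2_pct, wbc, age, crp, cxr_score,
                          weights=None, intercept=-8.0):
    """Nomogram-style mortality probability from a multivariate
    logistic regression over LDH, O2 percentage, WBC count, age,
    CRP, and a CXR-image-based score.

    The coefficients below are hypothetical placeholders; the
    paper's fitted values are not given in the abstract.
    """
    if weights is None:
        weights = {"ldh": 0.004, "o2_pct": -0.05, "wbc": 0.08,
                   "age": 0.04, "crp": 0.01, "cxr": 0.6}
    z = (intercept
         + weights["ldh"] * ldh + weights["o2_pct"] * o2_pct
         + weights["wbc"] * wbc + weights["age"] * age
         + weights["crp"] * crp + weights["cxr"] * cxr_score)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

def risk_group(p):
    # Survival (<50%) vs. death (>=50%), as in the abstract.
    return "death" if p >= 0.5 else "survival"
```

In a real nomogram each coefficient is rescaled to a 0-100 point axis, but the probability it encodes is exactly this logistic form.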
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
The acquisition of high-quality human annotations through crowdsourcing platforms like Amazon Mechanical Turk (MTurk) is more challenging than expected. The annotation quality might be affected by various aspects such as annotation instructions, Human Intelligence Task (HIT) design, and wages paid to annotators. To avoid potentially low-quality annotations, which could mislead the evaluation of automatic summarization system outputs, we investigate the recruitment of high-quality MTurk workers via a three-step qualification pipeline. We show that we can successfully filter out bad workers before they carry out the evaluations and obtain high-quality annotations while optimizing the use of resources. This paper can serve as a basis for the recruitment of qualified annotators in other challenging annotation tasks.
Achieving artificially intelligent-native wireless networks is necessary for the operation of future 6G applications such as the metaverse. Nonetheless, current communication schemes are, at heart, a mere reconstruction process that lacks reasoning. One key solution that enables evolving wireless communication to a human-like conversation is semantic communications. In this paper, a novel machine reasoning framework is proposed to pre-process and disentangle source data so as to make it semantic-ready. In particular, a novel contrastive learning framework is proposed, whereby instance and cluster discrimination are performed on the data. These two tasks enable increasing the cohesiveness between data points mapping to semantically similar content elements and disentangling data points of semantically different content elements. Subsequently, the semantic deep clusters formed are ranked according to their level of confidence. Deep semantic clusters of highest confidence are considered learnable, semantic-rich data, i.e., data that can be used to build a language in a semantic communications system. The least confident ones are considered random, semantic-poor, memorizable data that must be transmitted classically. Our simulation results showcase the superiority of our contrastive learning approach in terms of semantic impact and minimalism. In fact, the length of the semantic representation achieved is reduced by 57.22% compared to vanilla semantic communication systems, thus achieving minimalist semantic representations.
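The instance-discrimination task above is typically realized with an InfoNCE-style loss that pulls an anchor toward a semantically similar positive and pushes it from negatives. A minimal pure-Python sketch, assuming this standard form (the abstract does not state the exact loss):

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Instance-discrimination loss: low when the anchor is close
    to its positive view and far from negatives (a standard InfoNCE
    form used here as an illustrative stand-in)."""
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)                       # stabilize the softmax
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))
```

Cluster discrimination applies the same contrast at the level of cluster assignments rather than individual samples.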
As power systems evolve into more intelligent and interactive systems with increasing flexibility and larger penetration of renewable energy sources, short-term demand prediction will inevitably become more and more crucial in designing and managing the future grid, especially at the individual household level. Projecting the electricity demand of a single energy user, as opposed to the aggregated power consumption of residential load on a wide scale, is difficult because of a considerable number of volatile and uncertain factors. This paper proposes a customized Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) architecture to address this challenging problem. LSTM and GRU are comparatively newer and among the most well-adopted deep learning approaches. The electricity consumption datasets were obtained from individual household smart meters. The comparison shows that the LSTM model performs better for home-level forecasting than the alternative prediction technique, GRU, in this case. To contrast the NN-based models with a conventional statistical-technique-based model, an ARIMA-based model was also developed and benchmarked against the LSTM and GRU model outcomes in this study to show the performance of the proposed model on the collected time-series data.
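The gating mechanism that distinguishes GRUs from plain recurrent nets can be shown in a few lines. A scalar-state sketch with hand-picked (not learned) weights, illustrating the update/reset gates the paper's much larger forecasters rely on:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class MinimalGRUCell:
    """A single scalar-input, scalar-state GRU cell. Weights are
    illustrative constants, not trained parameters."""
    def __init__(self, wz=1.0, uz=1.0, wr=1.0, ur=1.0, wh=1.0, uh=1.0):
        self.wz, self.uz = wz, uz   # update-gate weights
        self.wr, self.ur = wr, ur   # reset-gate weights
        self.wh, self.uh = wh, uh   # candidate-state weights

    def step(self, x, h):
        z = sigmoid(self.wz * x + self.uz * h)            # update gate
        r = sigmoid(self.wr * x + self.ur * h)            # reset gate
        h_cand = math.tanh(self.wh * x + self.uh * r * h) # candidate
        return (1 - z) * h + z * h_cand                   # new state

def run_over_series(series, cell):
    """Roll the cell over a demand series; the final hidden state is
    the feature a forecasting head would read out."""
    h = 0.0
    for x in series:
        h = cell.step(x, h)
    return h
```

An LSTM adds a separate cell state and an output gate on top of this same gated-update idea.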
Boundary conditions (BCs) are important groups of physics-enforced constraints that solutions of Partial Differential Equations (PDEs) must satisfy at specific spatial locations. These constraints carry important physical meaning and guarantee the existence and uniqueness of the PDE solution. Current neural-network-based approaches that aim to solve PDEs rely only on training data to help the model learn BCs implicitly. There is no guarantee of BC satisfaction by these models during evaluation. In this work, we propose the Boundary enforcing Operator Network (BOON), which enables BC satisfaction of neural operators by making structural changes to the operator kernel. We provide our refinement procedure and demonstrate the satisfaction of physics-based BCs, e.g., Dirichlet, Neumann, and periodic, by the solutions obtained by BOON. Numerical experiments based on multiple PDEs with a wide variety of applications indicate that the proposed approach ensures satisfaction of BCs and leads to more accurate solutions over the entire domain. The proposed correction method exhibits a 2X-20X improvement over a given operator model in relative $L^2$ error (0.000084 relative $L^2$ error for Burgers' equation).
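The idea of enforcing a BC by construction, rather than hoping training data implies it, can be illustrated with the simplest case: an additive correction that makes Dirichlet values on [0, 1] hold exactly. This is a sketch in the spirit of BOON's structural change, not the paper's actual kernel modification:

```python
def enforce_dirichlet(u, xs, a, b):
    """Blend a candidate solution u(x) on [0, 1] with linear terms so
    that u(0)=a and u(1)=b hold exactly, no matter how well u was
    trained. Illustrative stand-in for a structural BC correction.
    """
    u0, u1 = u(0.0), u(1.0)
    return [u(x) + (a - u0) * (1.0 - x) + (b - u1) * x for x in xs]
```

Because the correction vanishes only where the candidate already matches the BC, satisfaction is guaranteed at evaluation time by construction; Neumann and periodic BCs need correspondingly different (derivative- or wrap-aware) corrections.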
This paper presents a Temporal Graph Neural Network (TGNN) framework for detection and localization of false data injection and ramp attacks on the system state in smart grids. Capturing the topological information of the system through the GNN framework along with the state measurements can improve the performance of the detection mechanism. The problem is formulated as a classification problem through a GNN with a message passing mechanism to identify abnormal measurements. The residual block used in the aggregation process of message passing and the gated recurrent unit can lead to improved computational time and performance. The performance of the proposed model has been evaluated through extensive simulations of power system states and attack scenarios, showing promising performance. The sensitivity of the model to the intensity and location of the attacks, and the model's detection delay versus detection accuracy, have also been evaluated.
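The core of such a detector is message passing over the grid topology: each bus aggregates its neighbors' states, a residual connection keeps its own state, and a classifier head flags abnormal measurements. A toy sketch with mean aggregation and a threshold in place of the learned head (the paper's gated recurrent units are omitted):

```python
def message_passing_step(features, adjacency):
    """One round of mean-aggregation message passing with a residual
    connection. `adjacency[i]` lists the neighbor indices of node i."""
    new_features = []
    for i, f in enumerate(features):
        neighbors = [features[j] for j in adjacency[i]]
        mean = sum(neighbors) / len(neighbors) if neighbors else 0.0
        new_features.append(f + mean)  # residual: keep own state
    return new_features

def flag_abnormal(features, adjacency, threshold):
    """Threshold the aggregated feature per node (an illustrative
    stand-in for the learned classification head)."""
    agg = message_passing_step(features, adjacency)
    return [x > threshold for x in agg]
```

Because aggregation mixes in neighbor context, a bus whose own reading looks plausible can still be flagged when it is inconsistent with its electrical neighborhood.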
This paper considers improving the wireless communication and computation efficiency of federated learning (FL) via model quantization. In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices. The goal is to jointly determine the bitwidths employed for local FL model quantization and the set of devices participating in FL training at each iteration. This is posed as an optimization problem whose goal is to minimize the training loss of quantized FL under a per-iteration device sampling budget and delay requirement. To derive a solution, an analytical characterization is performed to show how the limited wireless resources and the induced quantization errors affect the performance of the proposed FL method. The analytical results show that the improvement of the FL training loss between two consecutive iterations depends on the device selection and quantization scheme, as well as on several parameters inherent to the model being learned. Given linear-regression-based estimates of these model properties, it is shown that the FL training process can be described as a Markov decision process (MDP), and a model-based reinforcement learning (RL) method is then proposed to optimize action selection over iterations. Compared to model-free RL, this model-based RL approach leverages the derived mathematical characterization of the FL training process to discover effective device selection and quantization schemes without imposing additional device communication overhead. Simulation results show that the proposed FL algorithm can reduce convergence time by 29% and 63% compared to a model-free RL method and the standard FL method, respectively.
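The per-device quantization step above maps each local parameter onto one of 2^b uniform levels before uplink transmission, trading bitwidth b against quantization error. A minimal sketch of uniform quantization over a fixed range (illustrative only; the paper additionally optimizes b per iteration under the delay budget):

```python
def quantize(params, bitwidth, lo=-1.0, hi=1.0):
    """Uniform quantization of model parameters to `bitwidth` bits
    over [lo, hi]. Values outside the range are clipped first."""
    levels = (1 << bitwidth) - 1       # number of quantization steps
    step = (hi - lo) / levels
    out = []
    for p in params:
        p = min(max(p, lo), hi)        # clip to the representable range
        q = round((p - lo) / step)     # nearest level index
        out.append(lo + q * step)
    return out
```

The worst-case per-parameter error is step/2 = (hi - lo) / (2 * (2^b - 1)), which is the quantity the paper's convergence analysis ties to the per-iteration loss improvement.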
Medical image segmentation assists in computer-aided diagnosis, surgery, and treatment. Digitized tissue slide images are used to analyze and segment glands, nuclei, and other biomarkers, which are further used in computer-aided medical applications. To this end, many researchers have developed different neural networks to perform segmentation of histological images; mostly, these networks are based on encoder-decoder architectures and also utilize complex attention modules or transformers. However, these networks are less accurate at capturing relevant local and global features with accurate boundary detection at multiple scales. Therefore, we propose an encoder-decoder network with a quick attention module and a multi-loss function (a combination of binary cross-entropy (BCE) loss, focal loss, and dice loss). We evaluated the generalization capability of our proposed network on two publicly available datasets for medical image segmentation, MoNuSeg and GlaS, and outperformed state-of-the-art networks, with a 1.99% improvement on the MoNuSeg dataset and a 7.15% improvement on the GlaS dataset. Implementation code is available at this link: https://bit.ly/histoseg
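The multi-loss function named above combines three standard per-pixel segmentation losses. A minimal sketch over flattened prediction/target lists, assuming equal weights since the abstract does not state the weighting:

```python
import math

EPS = 1e-7  # numerical floor to avoid log(0) / division by zero

def bce_loss(preds, targets):
    # Standard per-pixel binary cross-entropy.
    return -sum(t * math.log(max(p, EPS)) +
                (1 - t) * math.log(max(1 - p, EPS))
                for p, t in zip(preds, targets)) / len(preds)

def focal_loss(preds, targets, gamma=2.0):
    # Down-weights easy pixels so hard boundary pixels dominate.
    total = 0.0
    for p, t in zip(preds, targets):
        pt = p if t == 1 else 1 - p
        total += -((1 - pt) ** gamma) * math.log(max(pt, EPS))
    return total / len(preds)

def dice_loss(preds, targets):
    # Region-overlap loss; robust to foreground/background imbalance.
    inter = sum(p * t for p, t in zip(preds, targets))
    return 1 - (2 * inter + EPS) / (sum(preds) + sum(targets) + EPS)

def combined_loss(preds, targets, w=(1.0, 1.0, 1.0)):
    """BCE + focal + dice, the combination named in the abstract
    (equal weights are an assumption here)."""
    return (w[0] * bce_loss(preds, targets)
            + w[1] * focal_loss(preds, targets)
            + w[2] * dice_loss(preds, targets))
```

The three terms are complementary: BCE gives smooth per-pixel gradients, focal loss emphasizes hard boundary pixels, and dice loss directly optimizes region overlap.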
In recent years, there has been increasing interest in establishing the association between faces and the voices of celebrities, leveraging audio-visual information from YouTube. Prior work employs metric learning methods to learn an embedding space suitable for association matching and verification tasks. Albeit showing some progress, such formulations are restrictive due to their reliance on distance-dependent margin parameters, poor run-time training complexity, and reliance on carefully crafted negative mining procedures. In this work, we hypothesize that an enriched representation along with effective yet efficient supervision is important for realizing a discriminative joint embedding space for the face-voice association task. To this end, we propose a lightweight plug-and-play mechanism that exploits complementary cues in both modalities to form enriched fused embeddings and clusters them based on their identity labels via orthogonality constraints. We coin our proposed mechanism Fusion and Orthogonal Projection (FOP) and instantiate it in a two-stream network. The overall resulting framework is evaluated on the VoxCeleb1 and MAV-Celeb datasets with a multitude of tasks, including cross-modal verification and matching. Results show that our method performs favorably against the current state-of-the-art methods, and our proposed supervision formulation is more effective and efficient than the ones employed by contemporary methods. In addition, we leverage cross-modal verification and matching tasks to analyze the impact of multiple languages on face-voice association. Code is available: \url{https://github.com/msaadsaeed/fop}
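The two ingredients of FOP can be sketched separately: a fusion step that mixes the two modality embeddings, and an orthogonality penalty that pushes identity cluster directions apart. Both functions below are illustrative stand-ins, not the paper's learned modules:

```python
import math

def fuse(face, voice):
    """Enriched fusion of face and voice embeddings via a simple
    element-wise gated sum (a stand-in for FOP's learned fusion)."""
    gate = [1.0 / (1.0 + math.exp(-(f + v))) for f, v in zip(face, voice)]
    return [g * f + (1 - g) * v for g, f, v in zip(gate, face, voice)]

def orthogonality_penalty(class_centers):
    """Penalty encouraging mutual orthogonality of identity cluster
    centers: sum of squared pairwise dot products of the normalized
    centers. Zero iff all centers are orthogonal."""
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    cs = [normalize(c) for c in class_centers]
    penalty = 0.0
    for i in range(len(cs)):
        for j in range(i + 1, len(cs)):
            dot = sum(a * b for a, b in zip(cs[i], cs[j]))
            penalty += dot * dot
    return penalty
```

Because the penalty depends only on the current cluster directions, it needs no margin parameter and no negative mining, which is the efficiency argument the abstract makes against margin-based metric learning.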