The ability to analyse and differentiate network protocol traffic is crucial for network resource management and for providing differentiated services through telecommunications. Automated Protocol Analysis (APA) is vital to significantly improve efficiency and reduce reliance on human experts. There are many automated, unsupervised approaches to clustering unknown protocols in APA. However, many such approaches have not been sufficiently explored using diverse test datasets, and therefore fail to demonstrate robustness in generalisation. This study proposes a comprehensive framework to evaluate various combinations of feature extraction and clustering methods in APA. It also proposes a novel approach to automatically select dataset-dependent model parameters for feature extraction, resulting in improved performance. Promising results of a novel field-based tokenisation approach also led us to propose a novel automated hybrid approach for feature extraction and clustering of unknown protocols in APA. Our proposed hybrid approach performed best in 7 out of 9 diverse test datasets, thereby displaying robustness in generalising to diverse unknown protocols. It also outperformed the unsupervised clustering technique of a state-of-the-art open-source APA tool across all test datasets.
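As a rough illustration of the kind of feature-extraction-plus-clustering combination such a framework evaluates, the following minimal sketch clusters raw protocol messages by byte n-gram counts and k-means. All names and parameters are hypothetical; this is not the paper's field-based hybrid method.

```python
# Minimal sketch: clustering unknown protocol messages by byte n-gram features.
# Illustrative only; not the paper's field-based hybrid approach.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_messages(messages: list[bytes], n_clusters: int = 5, ngram: int = 2):
    # Represent each message as a string of hex byte tokens so that
    # CountVectorizer can build byte n-gram count features.
    docs = [" ".join(f"{b:02x}" for b in m) for m in messages]
    vec = CountVectorizer(analyzer="word", ngram_range=(1, ngram))
    X = vec.fit_transform(docs)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    # Silhouette score is one way to compare feature/clustering combinations.
    return labels, silhouette_score(X, labels)
```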
This study evaluated the ability of ChatGPT, a recently developed artificial intelligence (AI) agent, to perform high-level cognitive tasks and produce text that is indistinguishable from human-generated text. This capacity raises concerns about the potential use of ChatGPT as a tool for academic misconduct in online exams. The study found that ChatGPT is capable of exhibiting critical thinking skills and generating highly realistic text with minimal input, making it a potential threat to the integrity of online exams, particularly in tertiary education settings where such exams are becoming more prevalent. Returning to invigilated and oral exams could form part of the solution, and while advanced proctoring techniques and AI-text output detectors may help address this issue, they are unlikely to be foolproof. Further research is needed to fully understand the implications of large language models like ChatGPT and to devise strategies for combating the risk of cheating using these tools. It is crucial for educators and institutions to be aware of the possibility of ChatGPT being used for cheating and to investigate measures to address it in order to maintain the fairness and validity of online exams for all students.
Electronic Health Records (EHRs) hold detailed longitudinal information about each patient's health status and general clinical history, a large portion of which is stored within the unstructured text. Temporal modelling of this medical history, which considers the sequence of events, can be used to forecast and simulate future events, estimate risk, suggest alternative diagnoses or forecast complications. While most prediction approaches use mainly structured data or a subset of single-domain forecasts and outcomes, we processed the entire free-text portion of EHRs for longitudinal modelling. We present Foresight, a novel GPT3-based pipeline that uses NER+L tools (i.e. MedCAT) to convert document text into structured, coded concepts, followed by providing probabilistic forecasts for future medical events such as disorders, medications, symptoms and interventions. Since large portions of EHR data are in text form, such an approach benefits from a granular and detailed view of a patient while introducing modest additional noise. In tests on two large UK hospitals (King's College Hospital, and South London and Maudsley) and the US MIMIC-III dataset, precision@10 scores of 0.80, 0.81 and 0.91 were achieved for forecasting the next biomedical concept. Foresight was also validated on 34 synthetic patient timelines by 5 clinicians and achieved a relevancy of 97% for the top forecasted candidate disorder. Foresight can be easily trained and deployed locally as it only requires free-text data (as a minimum). As a generative model, it can simulate follow-on disorders, medications and interventions for as many steps as required. Foresight is a general-purpose model for biomedical concept modelling that can be used for real-world risk estimation, virtual trials and clinical research to study the progression of diseases, simulate interventions and counterfactuals, and for educational purposes.
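The precision@10 metric reported above has a simple operational reading: a forecast counts as a hit if the true next concept appears among the model's top-10 ranked candidates. A minimal sketch of such an evaluation follows, assuming a hypothetical model interface `rank_next_concepts` (not Foresight's actual API).

```python
# Hypothetical sketch of a precision@k evaluation for next-concept forecasting
# over coded patient timelines. The model interface is assumed, not Foresight's.
def precision_at_k(model, timelines: list[list[str]], k: int = 10) -> float:
    hits, total = 0, 0
    for concepts in timelines:
        for i in range(1, len(concepts)):
            history, target = concepts[:i], concepts[i]
            top_k = model.rank_next_concepts(history)[:k]  # assumed interface
            hits += target in top_k
            total += 1
    return hits / total
```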
This work addresses fair generative models. Dataset biases have been a major cause of unfairness in deep generative models. Previous work proposed augmenting large, biased datasets with small, unbiased reference datasets. Under this setup, a weakly-supervised approach has been proposed that achieves state-of-the-art quality and fairness in generated samples. In our work, based on this setup, we propose a simple yet effective approach. Specifically, first, we propose fairTL, a transfer learning approach to learning fair generative models. Under fairTL, we pre-train the generative model with the available large, biased datasets and subsequently adapt the model using the small, unbiased reference dataset. We find that fairTL can learn expressive sample generation during pre-training thanks to the large (biased) dataset. This knowledge is then transferred to the target model during adaptation, which also learns to capture the underlying fair distribution of the small reference dataset. Second, we propose fairTL++, which introduces two additional innovations to improve upon fairTL: (i) multiple feedback and (ii) Linear-Probing followed by Fine-Tuning (LP-FT). Taking one step further, we consider an alternative, challenging setup in which only a pre-trained (potentially biased) model is available but the dataset used to pre-train it is inaccessible. We demonstrate that our proposed fairTL and fairTL++ remain very effective under this setup. We note that previous work requires access to the large, biased datasets and is incapable of handling this more challenging setup. Extensive experiments show that fairTL and fairTL++ achieve state-of-the-art results in both the quality and fairness of generated samples. The code and additional resources can be found at bearwithchris.github.io/fairTL/.
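The LP-FT ingredient of fairTL++ is a general two-phase adaptation recipe; a minimal PyTorch sketch of the generic pattern is shown below. The model, head, and loader names are hypothetical, and the paper's adversarial/generative specifics are not reproduced.

```python
# Minimal sketch of Linear-Probing followed by Fine-Tuning (LP-FT), the generic
# recipe named in fairTL++. Hypothetical model/head/loader; illustrative only.
import torch

def lp_ft(model, head, loader, loss_fn, lp_epochs=5, ft_epochs=5):
    # Phase 1 (linear probing): freeze the pre-trained backbone, train the head.
    for p in model.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    for _ in range(lp_epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(head(model(x)), y).backward()
            opt.step()
    # Phase 2 (fine-tuning): unfreeze everything, train at a lower learning rate
    # so the probed head anchors the features learnt during pre-training.
    for p in model.parameters():
        p.requires_grad = True
    opt = torch.optim.Adam(list(model.parameters()) + list(head.parameters()), lr=1e-4)
    for _ in range(ft_epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(head(model(x)), y).backward()
            opt.step()
```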
In this work, a machine learning approach is developed for predicting the outcomes of football matches. The novelty of this research lies in the use of the Kelly Index to first classify matches into categories, each denoting a different level of predictive difficulty. Classification models using a wide suite of algorithms were developed for each category of matches in order to determine the efficacy of the approach. In conjunction with this, a set of previously unexplored features was engineered, including Elo-based variables. The dataset consists of Premier League match data covering the 2019-2021 seasons. The findings indicate that decomposing the predictive problem into sub-tasks was effective and produced results competitive with prior works, with ensemble-based methods proving the most effective. The paper also devised an investment strategy and evaluated its effectiveness by benchmarking against bookmaker odds. The strategy minimises risk by combining the Kelly Index with predefined confidence thresholds of the predictive models. The experiments found that the proposed strategy can return a profit when following a conservative approach that focuses primarily on easy-to-predict matches for which the predictive models display a high confidence level.
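The Kelly Index derives from the Kelly criterion, which sets the bankroll fraction to stake from the bookmaker's decimal odds and an estimated win probability. A minimal sketch of that underlying calculation follows; the paper's exact categorisation thresholds are not reproduced.

```python
# Kelly criterion: optimal fraction of bankroll to stake, given decimal odds
# and an estimated win probability. A negative value means "do not bet".
def kelly_fraction(decimal_odds: float, win_prob: float) -> float:
    b = decimal_odds - 1.0   # net odds received on a win
    q = 1.0 - win_prob       # probability of losing
    return (b * win_prob - q) / b

# Example: odds of 2.5 with a 45% estimated win probability.
print(kelly_fraction(2.5, 0.45))  # ~0.083 -> stake ~8.3% of bankroll
```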
In contrast to traditional exhaustive search, selective search first clusters documents into several groups before the full collection is searched for a query, restricting the search to one group or only a few groups. Selective search is designed to reduce latency and computation in modern large-scale search systems. In this study, we propose MICO, a mutual information co-training framework for selective search with minimal supervision using search logs. After training, MICO not only clusters documents but also routes unseen queries to the relevant clusters for efficient retrieval. In our empirical experiments, MICO significantly improves the performance of selective search on multiple metrics and outperforms a number of existing competitive baselines.
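MICO's co-training objective is not reproduced here, but the inference-time behaviour of selective search routing can be illustrated with a short hypothetical sketch: embed the query, pick the nearest cluster centroids, and search exhaustively only within those groups.

```python
# Hypothetical sketch of selective-search routing at inference time.
# Centroid/search interfaces are assumed for illustration, not MICO's API.
import numpy as np

def route_and_search(query_vec, centroids, clusters, search_fn, top_clusters=2):
    # centroids: (n_clusters, d) array; clusters: list of document groups.
    sims = centroids @ query_vec / (
        np.linalg.norm(centroids, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    chosen = np.argsort(-sims)[:top_clusters]
    results = []
    for c in chosen:
        # Exhaustive search is confined to the few routed groups.
        results.extend(search_fn(query_vec, clusters[c]))
    return results
```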
Recent studies in the field of learning analytics have focused on leveraging machine learning approaches to predict at-risk students so that interventions can be initiated in a timely manner, thereby improving retention and completion rates. The overarching characteristic of most of these studies is their focus on the predictive aspect alone; the component of predictive analytics that explains model internals and individual-case predictions to stakeholders has been largely neglected. In addition, work on using data-driven prescriptive analytics to automatically generate evidence-based remedial advice for at-risk learners is still in its infancy. Explainable AI is a recently emerged field that provides cutting-edge tools supporting transparent predictive analytics, as well as techniques for generating tailored advice for at-risk students. This study proposes a novel framework that unifies both transparent machine learning and techniques for enabling prescriptive analytics. The work uses a predictive model in practice to identify at-risk learners in danger of programme non-completion. The study then further demonstrates, through two case studies, how predictive modelling can be augmented with prescriptive analytics to generate human-readable prescriptive feedback for those at risk.
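As one concrete example of the transparent-analytics tooling such a framework can build on (not necessarily the study's exact stack), per-student feature contributions can be computed with SHAP over a fitted tree model and then rephrased as readable feedback. The sketch below uses synthetic stand-in data.

```python
# Illustrative sketch, assuming SHAP over a tree classifier; synthetic data
# stands in for student activity features (logins, forum posts, grades).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] < 0).astype(int)  # 1 = at-risk (toy rule)

model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # per-student feature contributions
# Contributions can be translated into prescriptive feedback, e.g.
# "low login frequency increased this student's predicted risk".
```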
Neural networks have been widely applied in security applications such as spam and phishing detection, intrusion prevention, and malware detection. However, such black-box methods often suffer from uncertainty and poor interpretability in practice. Moreover, neural networks themselves are often vulnerable to adversarial attacks. For these reasons, there is a high demand for trustworthy and rigorous methods for verifying the robustness of neural network models. Adversarial robustness, which concerns the reliability of a neural network when handling maliciously manipulated inputs, is one of the hottest topics in security and machine learning. In this work, we survey the existing literature on adversarial robustness verification for neural networks, collecting 39 diverse research works across the fields of machine learning, security, and software engineering. We systematically analyse their approaches, including how robustness is formulated, what verification techniques are used, and the strengths and limitations of each technique. We provide a taxonomy from a formal verification perspective for a comprehensive understanding of the topic, classifying the existing techniques by property specification, problem reduction, and reasoning strategies. We also demonstrate representative techniques applied in existing studies using sample models. Finally, we discuss open questions for future research.
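To give a concrete taste of one family of verification techniques covered by such surveys, the following sketch implements interval bound propagation (IBP) through a single ReLU layer, pushing an input perturbation box to output bounds. It is illustrative only and not tied to any specific surveyed tool.

```python
# Minimal sketch of interval bound propagation (IBP) through x -> relu(Wx + b):
# split W by sign so each output bound uses the worst-case input bound.
import numpy as np

def ibp_relu_layer(W, b, lower, upper):
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = W_pos @ lower + W_neg @ upper + b
    out_hi = W_pos @ upper + W_neg @ lower + b
    return np.maximum(out_lo, 0), np.maximum(out_hi, 0)

# Verification idea: if, over the whole perturbation box, the true class's
# lower bound exceeds every other class's upper bound at the final layer,
# the network is certified robust for that input.
```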
Creating a large-scale dataset of medical radiology images from multiple sources can be challenging due to differences in acquisition and storage standards. One possible way to control and/or assess the image selection process is through medical image clustering, which, however, requires an efficient method for learning latent image representations. In this paper, we tackle the problem of fully-unsupervised clustering of medical images using pixel data only. We test the performance of several contemporary approaches built on top of convolutional autoencoders (CAE) - convolutional deep embedded clustering (CDEC) and convolutional improved deep embedded clustering (CIDEC) - and three approaches based on preset feature extraction - histogram of oriented gradients (HOG), local binary patterns (LBP), and principal component analysis (PCA). CDEC and CIDEC are end-to-end clustering solutions that simultaneously learn latent representations and cluster assignments, whereas the remaining approaches rely on k-means clustering over fixed embeddings. We train the models on 30,000 images and test them on a separate test set of 8,000 images, sampled from the PACS repository archive of the Clinical Hospital Centre Rijeka. For evaluation, we use silhouette score, homogeneity score, and normalised mutual information (NMI) on two target parameters closely related to commonly occurring DICOM tags - modality and anatomical region (adjusted BodyPartExamined tag). CIDEC achieved an NMI score of 0.473 with respect to anatomical region, and CDEC achieved an NMI score of 0.645 with respect to the modality tag, both outperforming the other commonly used feature descriptors.
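One of the fixed-embedding baselines named above (HOG features followed by k-means) is easy to sketch; the parameters below are illustrative and not the paper's exact setup.

```python
# Minimal sketch of a fixed-embedding baseline: HOG descriptors + k-means.
# Parameters are illustrative, not the paper's configuration.
import numpy as np
from skimage.feature import hog
from sklearn.cluster import KMeans

def cluster_by_hog(images, n_clusters=10):
    # images: iterable of equally-sized 2D grayscale arrays.
    feats = np.stack([hog(img, pixels_per_cell=(16, 16)) for img in images])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)
```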
Deep generative models have emerged as promising tools for detecting arbitrary anomalies in data, dispensing with the need for manual labelling. Recently, autoregressive transformers have achieved state-of-the-art performance in medical imaging. Nonetheless, these models still have some intrinsic weaknesses, such as the need to model images as 1D sequences, the accumulation of errors during sampling, and the significant inference times associated with transformers. Denoising diffusion probabilistic models are a class of non-autoregressive generative models recently shown to produce excellent samples in computer vision (surpassing generative adversarial networks) while achieving log-likelihoods competitive with transformers and offering fast inference times. Diffusion models can be applied to the latent representations learnt by autoencoders, making them easily scalable and excellent candidates for high-dimensional data such as medical images. Here, we propose a method based on diffusion models to detect and segment anomalies in brain imaging. By training the models on healthy data and then exploring their diffusion and reverse steps across the Markov chain, we can identify anomalous areas in the latent space and hence identify anomalies in the pixel space. Our diffusion models achieve competitive performance compared with autoregressive approaches across a series of experiments with 2D CT and MRI data involving synthetic and real pathological lesions, with greatly reduced inference times, making their use clinically viable.
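The core intuition - noise a test image partway along the diffusion chain, denoise it with a model trained only on healthy data, and flag the pixels the model "heals" - can be sketched as follows. The model and scheduler interfaces are assumed (diffusers-style) and this is not the authors' implementation.

```python
# Hypothetical sketch of diffusion-based anomaly scoring: partially noise the
# input, denoise with a healthy-data model, and score pixels by the residual.
import torch

def anomaly_map(model, scheduler, x, t_start=250):
    # Forward-diffuse the image to an intermediate timestep t_start.
    noise = torch.randn_like(x)
    x_t = scheduler.add_noise(x, noise, torch.tensor([t_start]))
    # Reverse the chain back to t=0; a model trained on healthy anatomy
    # reconstructs healthy tissue and tends to remove pathology.
    for t in reversed(range(t_start)):
        with torch.no_grad():
            eps = model(x_t, torch.tensor([t]))          # predicted noise (assumed API)
        x_t = scheduler.step(eps, t, x_t).prev_sample    # one reverse step (assumed API)
    return (x - x_t).abs()  # per-pixel residual; threshold to segment anomalies
```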