Drug repositioning holds great promise because it can reduce the time and cost of new drug development. While drug repositioning can omit various R&D processes, confirming pharmacological effects on biomolecules is essential for application to new diseases. Biomedical explainability in a drug repositioning model can support appropriate insights in subsequent in-depth studies. However, the validity of XAI methodologies is still under debate, and the effectiveness of XAI in drug repositioning prediction applications remains unclear. In this study, we propose GraphIX, an explainable drug repositioning framework using biological networks, and quantitatively evaluate its explainability. GraphIX first learns the network weights and node features using a graph neural network from known drug indications and a knowledge graph that consists of three types of nodes (without being given the node-type information): disease, drug, and protein. Analysis of the learned features showed that node types, unknown to the model beforehand, come to be distinguished through training on the graph structure. From the learned weights and features, GraphIX then predicts disease-drug associations and calculates contribution values for the nodes located in the neighborhood of each predicted disease and drug. We hypothesized that neighboring protein nodes to which the model assigns high contributions are important for understanding the actual pharmacological effects. Quantitative evaluation of the validity of the protein nodes' contributions against a real-world database showed that the high-contribution proteins identified by GraphIX are plausible as mechanisms of drug action. GraphIX is a framework for evidence-based drug discovery that can present new disease-drug associations to users and identify, from a large and complex knowledge base, the proteins important for understanding their pharmacological effects.
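As a reading aid, the following toy sketch illustrates the general pattern the abstract describes: a GCN-like layer embeds the nodes of a small hypothetical knowledge graph, an inner-product decoder scores a drug-disease pair, and an occlusion-style attribution assigns contribution values to neighboring (protein) nodes. The graph, weights, and attribution rule are invented stand-ins, not the actual GraphIX implementation.

```python
# Toy sketch of GNN-based disease-drug link prediction with
# occlusion-style node-contribution scores. NOT the actual GraphIX
# method; architecture and attribution rule are simplified stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical knowledge graph: nodes 0-1 drugs, 2-3 proteins,
# 4-5 diseases; node types are NOT given to the model.
edges = [(0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (1, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A += np.eye(n)                       # self-loops
A_hat = np.diag(1.0 / A.sum(1)) @ A  # row-normalized adjacency

X = rng.normal(size=(n, 8))          # random initial node features
W = rng.normal(size=(8, 8)) * 0.3    # "learned" weights (random here)

def embed(A_hat, X):
    """One round of message passing (a single GCN-like layer)."""
    return np.maximum(A_hat @ X @ W, 0.0)

def link_score(A_hat, X, drug, disease):
    """Inner-product decoder for the drug-disease association."""
    H = embed(A_hat, X)
    return 1.0 / (1.0 + np.exp(-H[drug] @ H[disease]))

base = link_score(A_hat, X, drug=1, disease=4)
# Occlusion attribution: zero out one neighboring node and measure
# how much the predicted association score drops.
for k in (2, 3):                     # candidate protein nodes
    X_masked = X.copy()
    X_masked[k] = 0.0
    contrib = base - link_score(A_hat, X_masked, drug=1, disease=4)
    print(f"contribution of node {k}: {contrib:+.4f}")
```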
Learning stable dynamics from observed time-series data is an important problem in robotics, physical modeling, and systems biology. Many of these dynamics are represented as input-output systems that communicate with an external environment. In this study, we focus on input-output stable systems, which exhibit robustness against unexpected stimuli and noise. We propose a method for learning nonlinear systems with guaranteed input-output stability. Our method utilizes a differentiable projection onto the space satisfying the Hamilton-Jacobi inequality to realize input-output stability. The problem of finding this projection can be formulated as a quadratically constrained quadratic programming problem, and we derive a particular solution analytically. We further apply our method to a toy bistable model and to the task of training a benchmark generated by a glucose-insulin simulator. The results show that, with our method, nonlinear systems with neural networks can achieve input-output stability, unlike naive neural networks. Our code is available at https://github.com/clinfo/deepiostability.
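The paper derives the projection analytically; the hedged sketch below only illustrates the structure of the underlying QCQP numerically, by projecting a vector (e.g., a raw network output) onto a convex quadratic constraint set that stands in for the Hamilton-Jacobi inequality. The matrices, sizes, and the use of cvxpy are illustrative assumptions, not the paper's solution.

```python
# Numerical sketch of the projection as a QCQP: minimize ||x - y||^2
# subject to a convex quadratic constraint (a stand-in for the
# Hamilton-Jacobi inequality). All matrices here are invented.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.normal(size=(n, n))
A = M @ M.T + 0.1 * np.eye(n)   # PSD, so the constraint set is convex
b = rng.normal(size=n)
c = -1.0                        # keeps the feasible set non-empty (0 is feasible)
y = rng.normal(size=n) * 3.0    # point to project (e.g., raw parameters)

x = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(x - y))
constraints = [cp.quad_form(x, A) + b @ x + c <= 0]
cp.Problem(objective, constraints).solve()

print("projected point:", np.round(x.value, 3))
print("constraint value:", float(x.value @ A @ x.value + b @ x.value + c))
```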
Early disease detection and prevention methods based on effective interventions are gaining attention. Machine learning technology enables precise disease prediction by capturing individual differences in multivariate data. Progress in precision medicine has shown that substantial heterogeneity exists in individual-level health data and that complex health factors are involved in the development of chronic diseases. However, identifying individual physiological state changes in disease-onset processes remains a challenge because of the complex relationships among multiple biomarkers. Here, we present the health-disease phase diagram (HDPD), which represents an individual's health state by visualizing the boundary values of multiple biomarkers that fluctuate early in the disease-progression process. In an HDPD, future onset predictions are represented by perturbing multiple biomarker values while accounting for dependencies among variables. We constructed HDPDs for 11 non-communicable diseases (NCDs) from a longitudinal health-checkup cohort of 3,238 individuals, comprising 3,215 measurement items and genetic data. Improvement of biomarker values toward the non-onset region of the HDPD significantly prevented future onset of 7 of the 11 NCDs. Our results demonstrate that HDPDs can represent individual physiological states in the onset process and serve as intervention targets for disease prevention.
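To make the "perturb biomarkers while respecting dependencies" idea concrete, here is a minimal synthetic sketch: a logistic-regression onset classifier is fitted on two correlated biomarkers, and bisection finds the boundary value at which the predicted onset probability crosses 0.5 while the second biomarker co-varies with the first. The data, model, and dependency are invented, not the paper's cohort or method.

```python
# Toy sketch of the HDPD idea: find the boundary value of a biomarker
# at which a learned onset classifier flips, while a correlated second
# biomarker co-varies. Everything below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic cohort: biomarker x1 drives onset; x2 is correlated with x1.
n = 500
x1 = rng.normal(0.0, 1.0, n)
x2 = 0.8 * x1 + rng.normal(0.0, 0.6, n)
onset = (x1 + 0.5 * x2 + rng.normal(0.0, 0.5, n) > 1.0).astype(int)
clf = LogisticRegression().fit(np.column_stack([x1, x2]), onset)

slope = 0.8                      # dependency of x2 on x1 (assumed known)

def onset_prob(v1, base):
    """Onset probability when x1 is perturbed to v1 and x2 follows it."""
    v2 = base[1] + slope * (v1 - base[0])
    return clf.predict_proba([[v1, v2]])[0, 1]

person = np.array([1.5, 1.2])    # an individual currently at risk
lo, hi = -3.0, person[0]         # prob(lo) < 0.5 <= prob(hi)
for _ in range(40):              # bisection for the 0.5 boundary
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if onset_prob(mid, person) < 0.5 else (lo, mid)
print(f"boundary value of biomarker 1: {0.5 * (lo + hi):.3f}")
```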
The external visual inspection of rolling stock's underfloor equipment is currently performed by human inspectors. In this study, we attempt to partly automate this visual inspection by investigating anomaly-inspection algorithms based on image-processing technology. Because railroad maintenance studies tend to have little anomaly data, unsupervised learning methods are usually preferred for anomaly detection; however, training cost and accuracy remain challenges. Additionally, although prior work has created anomalous images from normal images by adding noise and similar perturbations, the anomaly targeted in this study, the rotation of piping cocks, is difficult to create with noise. Therefore, we propose a new method that applies style conversion via generative adversarial networks to three-dimensional computer graphics, imitating anomaly images so that anomaly detection can be trained with supervised learning. A geometry-consistent style-conversion model was used to convert the images; as a result, the color and texture of the generated images successfully imitated those of real images while the anomalous shape was maintained. Using the generated anomaly images as supervised data, the anomaly-detection model can be trained easily, without complex adjustments, and successfully detects anomalies.
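For orientation, the sketch below shows only the downstream supervised step the abstract ends with: once style-converted CG images imitating anomalies exist, anomaly detection reduces to ordinary binary classification. The GAN-based geometry-consistent style conversion itself is not reproduced here, and the random tensors stand in for real and generated images.

```python
# Minimal sketch of the downstream supervised step only: train a small
# binary classifier on real normal images vs. style-converted CG
# anomaly images. Data tensors below are placeholders.
import torch
import torch.nn as nn

# Stand-ins: 64x64 grayscale crops of underfloor equipment.
normal_imgs  = torch.rand(32, 1, 64, 64)   # real normal images
anomaly_imgs = torch.rand(32, 1, 64, 64)   # style-converted CG anomalies
x = torch.cat([normal_imgs, anomaly_imgs])
y = torch.cat([torch.zeros(32), torch.ones(32)]).long()

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                     # toy training loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```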
We present a lightweight post-processing method to refine the semantic segmentation results of point cloud sequences. Most existing methods segment frame by frame and encounter the inherent ambiguity of the problem: based on a measurement in a single frame, labels are sometimes difficult to predict even for humans. To remedy this problem, we propose to explicitly train a network to refine the results predicted by an existing segmentation method. The network, which we call P2Net, learns the consistency constraints between coincident points from consecutive frames after registration. We evaluate the proposed post-processing method both qualitatively and quantitatively on the SemanticKITTI dataset, which consists of real outdoor scenes. The effectiveness of the proposed method is validated by comparing the results predicted by two representative networks with and without refinement by the post-processing network. Specifically, qualitative visualization validates the key idea that the labels of points that are difficult to predict can be corrected with P2Net. Quantitatively, overall mIoU is improved from 10.5% to 11.7% for PointNet [1] and from 10.8% to 15.9% for PointNet++ [2].
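A minimal sketch of the consistency idea, under the assumption that registration is already done: coincident points across consecutive frames should share labels, so neighbors from the previous frame can correct uncertain current-frame predictions. P2Net learns this refinement; the hand-written nearest-neighbor vote below is only a stand-in for the learned network.

```python
# Toy stand-in for the consistency constraint behind the refinement:
# after registration, match coincident points across frames with a
# KD-tree and propagate labels from the previous frame.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
prev_pts = rng.uniform(0, 10, size=(1000, 3))    # registered frame t-1
prev_labels = (prev_pts[:, 0] > 5).astype(int)   # its predicted labels
curr_pts = prev_pts + rng.normal(0, 0.02, size=prev_pts.shape)
curr_labels = prev_labels.copy()
flip = rng.random(1000) < 0.1                    # 10% noisy predictions
curr_labels[flip] ^= 1

tree = cKDTree(prev_pts)
dist, idx = tree.query(curr_pts, k=1)
coincident = dist < 0.1                          # points seen in both frames
refined = curr_labels.copy()
refined[coincident] = prev_labels[idx[coincident]]

print(f"agreement with frame t-1: "
      f"{(curr_labels == prev_labels).mean():.2%} -> "
      f"{(refined == prev_labels).mean():.2%}")
```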
We introduce KiloGram, a resource for studying abstract visual reasoning in humans and machines. Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly annotated dataset that, with >1k distinct stimuli, is orders of magnitude larger and more diverse than prior resources. It is both visually and linguistically richer, moving beyond whole shape descriptions to include segmentation maps and part labels. We use this resource to evaluate the abstract visual reasoning capacities of recent multi-modal models. We observe that pre-trained weights demonstrate limited abstract reasoning, which dramatically improves with fine-tuning. We also observe that explicitly describing parts aids abstract reasoning for both humans and models, especially when jointly encoding the linguistic and visual inputs. KiloGram is available at https://lil.nlp.cornell.edu/kilogram .
We present lilGym, a new benchmark for language-conditioned reinforcement learning in visual environments. lilGym is based on 2,661 highly-compositional human-written natural language statements grounded in an interactive visual environment. We annotate all statements with executable Python programs representing their meaning to enable exact reward computation in every possible world state. Each statement is paired with multiple start states and reward functions to form thousands of distinct Markov Decision Processes of varying difficulty. We experiment with lilGym with different models and learning regimes. Our results and analysis show that while existing methods are able to achieve non-trivial performance, lilGym forms a challenging open problem. lilGym is available at https://lil.nlp.cornell.edu/lilgym/.
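To illustrate what "executable Python programs representing meaning" can look like, here is an invented example in the same spirit; the statement, state encoding, and reward convention are assumptions, not actual lilGym annotations.

```python
# Invented example of pairing a natural-language statement with an
# executable meaning program, enabling exact reward computation in any
# world state. Not an actual lilGym annotation.

# Statement: "there are exactly two red circles"
def meaning(state):
    """Return True iff the statement holds in this world state."""
    return sum(1 for obj in state
               if obj["color"] == "red" and obj["shape"] == "circle") == 2

def reward(state, done):
    """Exact reward: +1 for stopping in a satisfying state, -1 otherwise."""
    if not done:
        return 0.0
    return 1.0 if meaning(state) else -1.0

state = [
    {"color": "red", "shape": "circle"},
    {"color": "red", "shape": "circle"},
    {"color": "blue", "shape": "square"},
]
print(reward(state, done=True))   # 1.0
```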
Automated video-based assessment of surgical skills is a promising task for assisting young surgical trainees, especially in resource-poor regions. Existing works often resort to a CNN-LSTM joint framework, in which an LSTM models long-term relationships over spatially pooled short-term CNN features. However, this practice inevitably neglects the differences among semantic concepts such as tools, tissues, and background in the spatial dimension, impeding subsequent temporal relationship modeling. In this paper, we propose a novel skill assessment framework, Video Semantic Aggregation (ViSA), which discovers different semantic parts and aggregates them across spatiotemporal dimensions. The explicit discovery of semantic parts provides an explanatory visualization that helps understand the neural network's decisions. It also enables us to further incorporate auxiliary information, such as kinematic data, to improve representation and performance. Experiments on two datasets show the competitiveness of ViSA compared with state-of-the-art methods. The source code is available at: bit.ly/miccai2022visa.
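As a rough illustration of the aggregation idea (not ViSA's actual architecture), the sketch below softly assigns each spatial feature to one of K latent semantic groups via learnable prototypes, pools features within each group, and models the grouped features over time with an LSTM. All shapes and modules are assumptions.

```python
# Rough sketch: soft-assign spatial features to K latent semantic
# groups (e.g., tools/tissue/background), pool per group, then model
# the grouped features over time. A simplified stand-in for ViSA.
import torch
import torch.nn as nn

B, T, C, H, W, K = 2, 16, 64, 14, 14, 3       # batch, time, channels, grid, groups

feats = torch.randn(B, T, C, H, W)            # per-frame CNN feature maps
prototypes = nn.Parameter(torch.randn(K, C))  # learnable semantic anchors

x = feats.permute(0, 1, 3, 4, 2).reshape(B, T, H * W, C)
sim = x @ prototypes.t()                      # (B, T, HW, K) similarities
assign = sim.softmax(dim=-1)                  # soft group assignment
# Group-wise pooling: weighted average of features per semantic group.
grouped = torch.einsum("btnk,btnc->btkc", assign, x) / (
    assign.sum(dim=2).unsqueeze(-1) + 1e-6)

lstm = nn.LSTM(K * C, 128, batch_first=True)
out, _ = lstm(grouped.reshape(B, T, K * C))   # temporal aggregation
score = nn.Linear(128, 1)(out[:, -1])         # final skill score
print(score.shape)                            # torch.Size([2, 1])
```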
Estimating the travel time of a path is an essential topic for intelligent transportation systems. It serves as the foundation for real-world applications such as traffic monitoring, route planning, and taxi dispatching. However, building a model for such a data-driven task requires a large amount of users' travel information, which directly relates to their privacy and is therefore unlikely to be shared. Non-independent and identically distributed (non-IID) trajectory data across data owners also make building a single predictive model extremely challenging if we directly apply federated learning. Finally, previous work on travel time estimation does not consider the real-time traffic state of roads, which we argue can greatly influence prediction. To address the above challenges, we introduce GOF-TTE, a generative online federated learning framework for travel time estimation for mobile user groups, which i) uses a federated learning approach, allowing private data to be kept on client devices during training, and designs the global model shared by all clients as an online generative model that infers the real-time road traffic state; and ii) besides sharing the base model via the server, adapts a fine-tuned personalized model for every client to capture their individual driving habits, compensating for the residual error of the localized global-model prediction. We also apply a simple privacy attack to our framework and implement a differential privacy mechanism to further guarantee privacy safety. Finally, we conduct experiments on two real-world public taxi datasets from DiDi Chengdu and Xi'an. The experimental results demonstrate the effectiveness of our proposed framework.
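A minimal sketch of the two-level design, under strong simplifications: a FedAvg-style server averages locally fitted weights into a shared global model, and each client fits a tiny personalized model to the residual error of the global prediction. The data, the linear models, and the omission of the generative traffic-state component are all assumptions for illustration.

```python
# Toy sketch: shared global model via weight averaging (FedAvg-style),
# plus a per-client personalized model on the residual of the global
# prediction. The paper's generative traffic-state model is omitted.
import numpy as np

rng = np.random.default_rng(0)

def local_fit(X, y):
    """Least-squares weights for one client (stand-in for local SGD)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Three clients with non-IID travel data: y = X @ w_true + client bias.
w_true = np.array([2.0, -1.0])
clients = []
for bias in (0.0, 3.0, -2.0):
    X = rng.normal(size=(200, 2))
    y = X @ w_true + bias + rng.normal(0, 0.1, 200)
    clients.append((X, y))

# Server round: average locally fitted weights into the global model.
w_global = np.mean([local_fit(X, y) for X, y in clients], axis=0)

# Each client personalizes by modeling the residual of the global model.
for i, (X, y) in enumerate(clients):
    residual = y - X @ w_global
    personal_bias = residual.mean()          # tiny personalized model
    pred = X @ w_global + personal_bias
    rmse = np.sqrt(np.mean((pred - y) ** 2))
    print(f"client {i}: personalized RMSE {rmse:.3f}")
```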
Vision Transformers (ViT) are becoming increasingly popular in image processing. Specifically, we study the effectiveness of test-time adaptation (TTA) on ViT, an emerging technique by which a model corrects its own predictions during test time. We first benchmark various test-time adaptation methods on ViT-B16 and ViT-L16. The results show that TTA is effective on ViT when an appropriate loss function is used, and that the prior convention (sensibly selecting which parameters to modulate) is not necessary. Based on this observation, we propose a new test-time adaptation method called class-conditional feature alignment (CFA), which minimizes, in an online manner, both the class-conditional distribution differences and the whole distribution differences of the hidden representations between the source and target. Experiments on image classification tasks (CIFAR-10-C, CIFAR-100-C, and ImageNet-C) and domain adaptation (digits datasets and ImageNet-Sketch) show that CFA stably outperforms existing baselines across datasets. We also verify that CFA is model-agnostic by experimenting on ResNet, MLP-Mixer, and several ViT variants (ViT-AugReg, DeiT, and BEiT). With a BEiT backbone, CFA achieves a 19.8% top-1 error rate on ImageNet-C, outperforming the existing test-time adaptation baseline at 44.0%. This is a state-of-the-art result among TTA methods that do not require altering the training phase.
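To ground the objective, here is a toy version of a class-conditional feature-alignment loss: the per-class means of target hidden features (grouped by pseudo-labels) and the overall mean are matched to precomputed source statistics. CFA's exact statistics, normalization, and update rule follow the paper; everything below is a simplified illustration with random tensors.

```python
# Toy class-conditional feature-alignment objective: match per-class
# means (via pseudo-labels) and the overall mean of target hidden
# features to precomputed source statistics. Simplified illustration.
import torch

C, D, N = 10, 64, 128                     # classes, feature dim, batch

src_class_mu = torch.randn(C, D)          # precomputed source per-class means
src_mu = src_class_mu.mean(0)             # source overall mean

feats = torch.randn(N, D, requires_grad=True)  # target hidden features
logits = torch.randn(N, C)
pseudo = logits.argmax(1)                 # pseudo-labels for the batch

loss = ((feats.mean(0) - src_mu) ** 2).sum()   # whole-distribution term
for c in range(C):                              # class-conditional terms
    mask = pseudo == c
    if mask.any():
        mu_c = feats[mask].mean(0)
        loss = loss + ((mu_c - src_class_mu[c]) ** 2).sum()

loss.backward()                            # gradients would update the ViT
print(f"alignment loss: {loss.item():.2f}")
```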