Transposed convolution has shown prominence in many deep learning applications. However, since the feature map grows once zeros are inserted after each element in every row and column, the transposed convolution layer is computationally intensive. The convolution operation performed on this expanded input feature map therefore leads to poor utilization of hardware resources. The main cause of the unnecessary multiplications is the zeros at predetermined positions in the input feature map. We propose an algorithm-level optimization technique for efficient transposed convolution implementation to address these issues. Based on kernel activations, we segregate the original kernel into four sub-kernels. This scheme reduces both the memory requirement and the unnecessary multiplications. Our proposed method achieves $3.09\times$ ($3.02\times$) faster computation on a Titan X GPU (Intel dual-core CPU) using the flower dataset from the Kaggle website. Furthermore, the proposed optimization can be generalized to existing devices without additional hardware requirements. A simple deep learning model containing a single transposed convolution layer was used to evaluate the optimization method; it showed $2.2\times$ faster training on the MNIST dataset with an Intel dual-core CPU.
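As a rough illustration of the inefficiency described above, the sketch below shows how zero-insertion inflates the input and how, for stride 2, a kernel can be segregated into four sub-kernels that only ever meet the non-zero entries. This is a hypothetical NumPy fragment, not the authors' implementation; the function names `zero_insert` and `split_sub_kernels` are made up for this sketch.

```python
import numpy as np

def zero_insert(x, stride=2):
    """Insert (stride - 1) zeros after each element along rows and
    columns, as done before the plain convolution inside a
    transposed-convolution layer."""
    h, w = x.shape
    out = np.zeros((h + (h - 1) * (stride - 1),
                    w + (w - 1) * (stride - 1)))
    out[::stride, ::stride] = x
    return out

def split_sub_kernels(k, stride=2):
    """Segregate a kernel into stride*stride sub-kernels; each one can be
    applied to the compact input instead of the zero-expanded map."""
    return [k[i::stride, j::stride]
            for i in range(stride) for j in range(stride)]

x = np.arange(1, 10, dtype=float).reshape(3, 3)
expanded = zero_insert(x, stride=2)            # 5x5; 16 of 25 entries are zero
subs = split_sub_kernels(np.ones((3, 3)), 2)   # shapes (2,2), (2,1), (1,2), (1,1)
```

The four sub-kernel shapes together cover all nine original taps, so no multiplications against inserted zeros remain.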
translated by Google Translate
Recent studies show that X-ray radiography exhibits higher accuracy than polymerase chain reaction (PCR) testing. Therefore, applying deep learning models to X-ray and radiography images increases the speed and accuracy of determining COVID-19 cases. However, because of the Health Insurance Portability and Accountability Act (HIPAA), hospitals are reluctant to share patient data due to privacy concerns. To maintain privacy, we propose differentially private deep learning models to safeguard patients' private information. A dataset from the Kaggle website is used to evaluate the designed model for COVID-19 detection. The EfficientNet model version was selected based on its highest test accuracy. Differential privacy constraints were then injected into the best-performing model to evaluate performance. Accuracy is reported while varying the trainable layers, the privacy loss, and the amount of information limited per sample. During the fine-tuning process, we obtained 84% accuracy with a privacy loss of 10.
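A minimal sketch of the kind of differential-privacy mechanism the abstract alludes to: per-sample gradient clipping followed by Gaussian noise, in the style of DP-SGD. This is an illustrative NumPy fragment, not the authors' training code; `clip_norm` and `noise_multiplier` are hypothetical parameter names.

```python
import numpy as np

def dp_average_gradient(per_sample_grads, clip_norm, noise_multiplier, rng):
    """Clip each per-sample gradient to L2 norm <= clip_norm, average,
    then add Gaussian noise scaled to the clipping bound (DP-SGD style)."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_sample_grads),
                       size=avg.shape)
    return avg + noise
```

The clipping bound limits each patient's influence on the update, and the noise magnitude relative to that bound is what determines the privacy loss.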
In a typical car-following scenario, target vehicle speed fluctuations act as an external disturbance to the host vehicle and in turn affect its energy consumption. To control a host vehicle in an energy-efficient manner using model predictive control (MPC), and moreover, enhance the performance of an ecological adaptive cruise control (EACC) strategy, forecasting the future velocities of a target vehicle is essential. For this purpose, a deep recurrent neural network-based vehicle speed prediction using long short-term memory (LSTM) and gated recurrent units (GRU) is studied in this work. In addition, the physics-based constant velocity (CV) and constant acceleration (CA) models are discussed. The sequential time-series data for training (e.g. speed trajectories of the target and its preceding vehicles obtained through vehicle-to-vehicle (V2V) communication, road speed limits, and current and future traffic-light phases collected through vehicle-to-infrastructure (V2I) communication) is gathered from both urban and highway networks created in the microscopic traffic simulator SUMO. The proposed speed prediction models are evaluated for long-term predictions (up to 10 s) of target vehicle future velocities. Moreover, the results revealed that the LSTM-based speed predictor outperformed the other models in terms of achieving better prediction accuracy on unseen test datasets, thereby showcasing better generalization ability. Furthermore, the performance of the EACC-equipped host vehicle on the predicted velocities is evaluated, and its energy-saving benefits for different prediction horizons are presented.
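The physics-based baselines mentioned above are simple enough to sketch directly; the following is an illustrative implementation of constant-velocity (CV) and constant-acceleration (CA) predictors (function names are my own, not from the paper), with predicted speeds floored at zero.

```python
def predict_cv(v, dt, horizon_steps):
    """Constant-velocity model: the target simply holds its current speed
    over every step of the prediction horizon."""
    return [v] * horizon_steps

def predict_ca(v, a, dt, horizon_steps):
    """Constant-acceleration model: speed evolves linearly with the current
    acceleration, floored at 0 so the vehicle never moves backwards."""
    return [max(0.0, v + a * dt * (k + 1)) for k in range(horizon_steps)]
```

A learned LSTM/GRU predictor is evaluated against exactly these kinds of baselines over horizons of up to 10 s.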
Next-generation sequencing technologies have enhanced the scope of Internet-of-Things (IoT) to include genomics for personalized medicine through the increased availability of an abundance of genome data collected from heterogeneous sources at a reduced cost. Given the sheer magnitude of the collected data and the significant challenges posed by the presence of highly similar genomic structure across species, there is a need for robust, scalable analysis platforms to extract actionable knowledge such as the presence of potentially zoonotic pathogens. The emergence of zoonotic diseases from novel pathogens, such as the influenza virus in 1918 and SARS-CoV-2 in 2019, that can jump species barriers and lead to pandemics underscores the need for scalable metagenome analysis. In this work, we propose MG2Vec, a deep learning-based solution that uses the transformer network as its backbone, to learn robust features from raw metagenome sequences for downstream biomedical tasks such as targeted and generalized pathogen detection. Extensive experiments on four increasingly challenging, yet realistic diagnostic settings, show that the proposed approach can help detect pathogens from uncurated, real-world clinical samples with minimal human supervision in the form of labels. Further, we demonstrate that the learned representations can generalize to completely unrelated pathogens across diseases and species for large-scale metagenome analysis. We provide a comprehensive evaluation of a novel representation learning framework for metagenome-based disease diagnostics with deep learning and provide a way forward for extracting and using robust vector representations from low-cost next-generation sequencing to develop generalizable diagnostic tools.
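A transformer backbone consumes token sequences, so raw reads are typically tokenized first. Below is a hedged sketch of one common choice, overlapping k-mer tokenization; the scheme and the names `kmer_tokenize`/`build_vocab` are illustrative assumptions, not necessarily what MG2Vec uses.

```python
def kmer_tokenize(read, k=4):
    """Split a nucleotide read into overlapping k-mers (stride 1),
    yielding a token sequence a transformer can embed."""
    read = read.upper()
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def build_vocab(tokens):
    """Map each distinct k-mer to an integer id for the embedding layer."""
    return {tok: i for i, tok in enumerate(sorted(set(tokens)))}
```

With k=4 over the alphabet {A, C, G, T} the vocabulary is at most 256 tokens, which keeps the embedding table small even for very large read collections.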
We introduce Action-GPT, a plug-and-play framework for incorporating Large Language Models (LLMs) into text-based action generation models. Action phrases in current motion capture datasets contain minimal and to-the-point information. By carefully crafting prompts for LLMs, we generate richer and more fine-grained descriptions of the action. We show that utilizing these detailed descriptions instead of the original action phrases leads to better alignment of text and motion spaces. Our experiments show qualitative and quantitative improvements in the quality of synthesized motions produced by recent text-to-motion models. Code, pretrained models and sample videos will be made available at https://actiongpt.github.io
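A toy sketch of the prompt-crafting step: wrapping a terse action phrase in a template that asks an LLM for a fine-grained body-movement description. The template text here is invented for illustration; the paper's actual prompts may differ.

```python
def craft_action_prompt(action_phrase):
    """Expand a terse motion-capture label into an LLM prompt that
    elicits a richer, step-by-step description of the action."""
    return (
        "Describe, in detail, the body movements of a person performing "
        f"the action \"{action_phrase}\". Mention the arms, legs and torso, "
        "and the order in which the movements happen."
    )
```

The LLM's response, rather than the original short phrase, is then fed to the text-to-motion model's text encoder.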
This paper aims to provide a thorough rundown of Conversational Search (ConvSearch), an approach to enhancing information retrieval in which users engage in a dialogue for their information-seeking tasks. In this survey, we predominantly focus on the human-interactive characteristics of ConvSearch systems, highlighting the operations of the action modules, namely the retrieval system, question answering, and recommender system. We label various ConvSearch research problems in knowledge bases, natural language processing, and dialogue management systems along with the action modules. We further categorize the frameworks of ConvSearch, and direct the application toward the biomedical and healthcare fields for the utilization of clinical social technology. Finally, we conclude by discussing the challenges and issues of ConvSearch, particularly in biomedicine. Our main aim is to provide an integrated and unified vision of the ConvSearch components from different fields, which will benefit the information-seeking process in healthcare systems.
This paper focuses on the task of survival time analysis for lung cancer. Although much progress has been made on this problem in recent years, the performance of existing methods is still far from satisfactory. Traditional and some deep learning-based survival time analyses for lung cancer are mostly based on textual clinical information such as staging, age, histology, etc. Unlike existing methods that predict from a single modality, we observe that a human clinician usually relies on multimodal data, such as textual clinical data and visual scans, to estimate survival time. Motivated by this, in this work we contribute a smart cross-modality survival analysis network named Lite-ProSENet that simulates a human's manner of decision making. Extensive experiments were conducted using data from 422 NSCLC patients from The Cancer Imaging Archive (TCIA). The results show that our Lite-ProSENet compares favorably against all competing methods and achieves a new state of the art with 89.3% concordance. The code will be made publicly available.
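Concordance (C-index) is the metric reported above: the fraction of comparable patient pairs whose predicted risk ordering agrees with the observed survival-time ordering. A small self-contained sketch of its computation (illustrative code, not the authors'):

```python
def concordance_index(times, events, risks):
    """C-index over comparable pairs: pair (i, j) is comparable when
    patient i had an observed event before patient j's time; it is
    concordant when i was also assigned the higher risk (ties count 0.5)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A value of 1.0 means perfect risk ranking, while 0.5 is no better than chance, so 89.3% concordance indicates a strong ranking model.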
Controlling artificial agents from visual sensory data is an arduous task. Reinforcement learning (RL) algorithms can succeed at this, but require large amounts of interaction between the agent and the environment. To alleviate the issue, unsupervised RL proposes to employ self-supervised interaction and learning, for adapting faster to future tasks. However, it is still unclear whether current unsupervised strategies improve generalization capabilities, especially in visual control settings. In this work, we design an effective unsupervised RL strategy for data-efficient visual control. First, we show that world models pre-trained on data collected with unsupervised RL can facilitate adaptation to future tasks. Then, we analyze several design choices to adapt the agent's pre-trained components efficiently, learning and planning in imagination with our hybrid planner, which we dub Dyna-MPC. By combining the findings of a large-scale empirical study, we establish an approach that strongly improves performance on the Unsupervised RL Benchmark, requiring $20\times$ less data to match the performance of supervised methods. The approach also shows robust performance on the Real-World RL benchmark, hinting that the approach generalizes to noisy environments.
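To make the planning-in-imagination idea concrete, here is a minimal random-shooting MPC loop over a toy dynamics model. This is purely illustrative: Dyna-MPC itself combines model-predictive control with learned policies inside a learned world model and is considerably more involved; `dyn` and `rew` below are stand-in toy functions.

```python
import numpy as np

def random_shooting_mpc(dynamics, reward, state, horizon, n_candidates, rng):
    """Sample candidate action sequences, roll each out through the model,
    and return the first action of the highest-return sequence."""
    best_action, best_return = None, -np.inf
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=horizon)
        s, total = state, 0.0
        for a in actions:
            s = dynamics(s, a)
            total += reward(s)
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action

# toy stand-ins: 1-D integrator dynamics, reward for staying near the origin
dyn = lambda s, a: s + a
rew = lambda s: -abs(s)
```

Only the first action of the best imagined sequence is executed; the plan is then recomputed at the next step.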
Robotics has long been a field riddled with complex system architectures, in which modules and connections, whether traditional or learning-based, require significant human expertise and prior knowledge. Inspired by large pre-trained language models, this work introduces a paradigm of pre-training general-purpose representations that can serve as a starting point for multiple tasks on a given robot. We present the Perception-Action Causal Transformer (PACT), a generative transformer-based architecture that aims to build representations directly from robot data in a self-supervised fashion. Through autoregressive prediction of states and actions, our model implicitly encodes the dynamics and behaviors of a particular robot. Our experimental evaluation focuses on the domain of mobile agents, where we show that this robot-specific representation can function as a single starting point for achieving distinct tasks such as safe navigation, localization and mapping. We evaluate two form factors: a wheeled robot that uses a LiDAR sensor as perception input (MuSHR), and a simulated agent that uses first-person RGB images (Habitat). We show that fine-tuning small task-specific networks on top of the larger pre-trained model results in significantly better performance than training a single model from scratch for all tasks simultaneously, and performance comparable to training a separate large model for each task independently. By sharing a common high-quality representation across tasks, we can lower the overall model capacity and speed up the real-time deployment of such systems.
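The autoregressive scheme can be sketched as interleaving state and action tokens and attending causally. The fragment below is illustrative, not the PACT code: it builds such a token stream and the matching lower-triangular attention mask.

```python
import numpy as np

def interleave_tokens(states, actions):
    """Arrange a trajectory as [s_0, a_0, s_1, a_1, ...] so the model can
    predict each action from past states/actions, and each next state."""
    seq = []
    for s, a in zip(states, actions):
        seq.extend([s, a])
    return seq

def causal_mask(seq_len):
    """Lower-triangular mask: token t may attend only to tokens <= t,
    enforcing the autoregressive factorization."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))
```

Training the transformer to predict the next token in this interleaved stream is what implicitly encodes the robot's dynamics (state given state-action history) and behavior (action given state history).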
Simulating realistic sensors is a challenging part of data generation for autonomous systems, often involving carefully handcrafted sensor design, scene properties, and physical modeling. To alleviate this, we introduce a pipeline for data-driven simulation of a realistic LiDAR sensor. We propose a model that learns a mapping between RGB images and the corresponding LiDAR features, such as raydrop or per-point intensities, directly from real datasets. We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces or high-intensity returns on reflective materials. When applied to naively raycasted point clouds provided by off-the-shelf simulator software, our model enhances the data by predicting intensities and removing points, based on the scene's appearance, to match a real LiDAR sensor. We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly. Through the example task of vehicle segmentation, we show that enhancing simulated point clouds with our technique improves downstream task performance.
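The enhancement step described above amounts to predicting, per raycasted point, an intensity and a drop probability, then masking points accordingly. A schematic NumPy version follows; the function name, Bernoulli sampling, and array layout are assumptions for illustration, not the paper's code.

```python
import numpy as np

def enhance_point_cloud(points, pred_intensity, pred_raydrop, rng):
    """Keep each naively raycasted point with probability
    (1 - pred_raydrop) and attach the predicted per-point intensity,
    mimicking a real LiDAR's dropped returns."""
    keep = rng.random(len(points)) >= pred_raydrop
    return points[keep], pred_intensity[keep]
```

Points on transparent surfaces would receive high predicted raydrop (and so tend to vanish), while reflective materials would receive high predicted intensity.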