In antibody engineering, an essential task is to design a novel antibody whose paratopes bind a specific antigen at the correct epitopes. Understanding an antibody's structure and its paratope can facilitate a mechanistic understanding of its function. Therefore, antibody structure prediction from sequence alone has long been a highly valuable problem for de novo antibody design. AlphaFold2, a breakthrough in structural biology, predicts protein structures from protein sequences together with computationally expensive coevolutionary multiple sequence alignments (MSAs). However, its computational cost and its limited prediction accuracy on antibodies, especially on their complementarity-determining regions (CDRs), restrict its application in industrial high-throughput drug design. To learn an informative representation of antibodies, we trained a deep antibody language model (ALM), a transformer model, on curated sequences from the Observed Antibody Space database. We also developed xTrimoABFold, a novel model that predicts antibody structure from sequence using the pretrained ALM together with efficient evoformers and structural modules. The model was trained end-to-end on antibody structures from the PDB by minimizing an ensemble loss combining a domain-specific focal loss on the CDRs with the frame-aligned point loss. xTrimoABFold outperforms AlphaFold2 and other protein-language-model-based state-of-the-art methods, e.g., OmegaFold, HelixFold-Single, and IgFold, by a significant margin (30+% improvement in RMSD) while running 151 times faster than AlphaFold2. To the best of our knowledge, xTrimoABFold achieves state-of-the-art antibody structure prediction. Its improvements in both accuracy and efficiency make it a valuable tool for de novo antibody design and could contribute further insights to immunology.
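To make the training objective concrete, below is a minimal PyTorch sketch of an ensemble loss in the spirit of the one described above: a focal loss up-weighted on CDR residues combined with a clamped point-distance term standing in for the frame-aligned point loss. The per-residue classification target, the 2x CDR weighting, and the mixing coefficient are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cdr_focal_loss(logits, targets, cdr_mask, gamma=2.0):
    """Focal loss over per-residue predictions, up-weighted on CDR residues."""
    ce = F.cross_entropy(logits, targets, reduction="none")  # (num_residues,)
    pt = torch.exp(-ce)                                      # prob. of true class
    focal = (1.0 - pt) ** gamma * ce
    weights = 1.0 + cdr_mask.float()   # assumed 2x emphasis on CDR loops
    return (weights * focal).mean()

def fape_like_loss(pred_xyz, true_xyz, clamp=10.0):
    """Clamped pointwise distance as a simplified stand-in for FAPE."""
    dist = torch.linalg.norm(pred_xyz - true_xyz, dim=-1)
    return dist.clamp(max=clamp).mean()

def ensemble_loss(logits, targets, cdr_mask, pred_xyz, true_xyz, alpha=0.5):
    """Assumed equal mixing of the two terms."""
    return alpha * cdr_focal_loss(logits, targets, cdr_mask) \
         + (1.0 - alpha) * fape_like_loss(pred_xyz, true_xyz)
```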
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
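Since the models are openly released, a minimal inference sketch using the Hugging Face transformers library is shown below; the smaller bigscience/bloom-560m checkpoint is assumed here for single-GPU practicality, with bigscience/bloom being the full 176B model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is an open-access multilingual model that",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```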
Virtual reality (VR) over wireless is expected to be one of the killer applications in next-generation communication networks. Nevertheless, the huge data volume, along with stringent requirements on latency and reliability under limited bandwidth resources, makes untethered wireless VR delivery increasingly challenging. Such bottlenecks motivate this work to explore the potential of semantic communication, a new paradigm that promises to significantly ease the resource pressure, for efficient VR delivery. To this end, we propose a novel framework, WIreless SEmantic deliveRy for VR (WiserVR), for delivering consecutive 360° video frames to VR users. Specifically, multiple deep learning-based modules are carefully devised for the transceiver in WiserVR to realize high-performance feature extraction and semantic recovery. Among them, we develop a concept of a semantic location graph and leverage a joint-semantic-channel-coding method with knowledge sharing to not only substantially reduce communication latency but also guarantee adequate transmission reliability and resilience under various channel states. Moreover, we present an implementation of WiserVR, followed by initial simulations evaluating its performance against benchmarks. Finally, we discuss several open issues and offer feasible solutions to unlock the full potential of WiserVR.
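As a toy illustration of the joint-semantic-channel-coding idea, the sketch below assumes an AWGN channel and a simple MLP encoder/decoder pair; WiserVR's actual modules (semantic location graph, knowledge sharing) are far richer than this.

```python
import torch
import torch.nn as nn

class ToyJSCC(nn.Module):
    """Joint semantic-channel coding over a simulated AWGN channel."""
    def __init__(self, feat_dim=512, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))

    def forward(self, semantic_feat, snr_db=10.0):
        code = self.encoder(semantic_feat)
        # Unit-power normalization, then additive Gaussian channel noise.
        code = code / code.norm(dim=-1, keepdim=True).clamp(min=1e-8)
        noise_std = 10.0 ** (-snr_db / 20.0)
        received = code + noise_std * torch.randn_like(code)
        return self.decoder(received)  # recovered semantic features
```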
Retrosynthesis is a major task in drug discovery. Many existing methods formulate it as a graph generation problem. Specifically, these methods first identify the reaction center and break the target molecule accordingly to generate synthons. Reactants are then generated either by sequentially adding atoms to the synthons or by directly attaching the correct leaving groups. However, both strategies have drawbacks: adding atoms leads to long prediction sequences, which increases the difficulty of generation, while attaching leaving groups can only consider those seen in the training set, which leads to poor generalization. In this paper, we propose a novel end-to-end graph generation model for retrosynthesis prediction that sequentially identifies the reaction center, generates synthons, and attaches motifs to the synthons to generate reactants. Since chemically meaningful motifs are larger than atoms and smaller than leaving groups, our method enjoys lower prediction complexity than adding atoms and better generalization than attaching leaving groups. Experiments on benchmark datasets show that the proposed model significantly outperforms previous state-of-the-art algorithms.
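A high-level sketch of the three-stage generation loop is given below. The model interface (predict_reaction_center, break_bond, is_complete, predict_motif, attach_motif) is hypothetical shorthand for the paper's learned modules, not a released API.

```python
def retrosynthesis(target_mol, model):
    # Stage 1: identify the reaction center (the bond to break).
    center_bond = model.predict_reaction_center(target_mol)
    # Stage 2: break the target molecule at the center to obtain synthons.
    synthons = model.break_bond(target_mol, center_bond)
    # Stage 3: grow each synthon into a reactant by attaching motifs, which
    # are larger than single atoms but smaller than whole leaving groups.
    reactants = []
    for synthon in synthons:
        while not model.is_complete(synthon):
            motif, attach_site = model.predict_motif(synthon)
            synthon = model.attach_motif(synthon, motif, attach_site)
        reactants.append(synthon)
    return reactants
```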
Learning robust feature matching between the template and the search area is crucial for 3D Siamese tracking. The core of Siamese feature matching is how to assign high feature similarity to corresponding points between the template and the search area for precise object localization. In this paper, we propose a novel point-cloud-registration-driven Siamese tracking framework, with the intuition that spatially aligned corresponding points (achieved via 3D registration) tend to yield consistent feature representations. Specifically, our method consists of two modules: a tracking-specific non-local registration module and a registration-aided Sinkhorn template-feature aggregation module. The registration module targets precise spatial alignment between the template and the search area. A tracking-specific spatial distance constraint is proposed to refine the cross-attention weights in the non-local module for discriminative feature learning. We then use weighted SVD to compute the rigid transformation between the template and the search area and align them to achieve the desired spatially aligned corresponding points. For the feature aggregation module, we formulate feature matching between the transformed template and the search area as an optimal transport problem and leverage Sinkhorn optimization to search for an outlier-robust matching solution. In addition, a registration-aided spatial distance map is built to improve matching robustness in indistinguishable regions (e.g., smooth surfaces). Finally, guided by the obtained feature matching map, we aggregate target information from the template into the search area to construct target-specific features, which are then fed into a CenterPoint-like detection head for object localization. Extensive experiments on the KITTI, NuScenes, and Waymo datasets verify the effectiveness of our proposed method.
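The two numerical ingredients named above can be sketched compactly in PyTorch: entropic Sinkhorn iterations for an outlier-robust soft matching, and a weighted SVD (Kabsch) solve for the rigid transform. Uniform marginals and the cost/weight construction are simplifying assumptions.

```python
import torch

def sinkhorn(cost, n_iters=50, eps=0.1):
    """Entropic OT: soft matching matrix for an (N, M) feature-distance cost."""
    K = torch.exp(-cost / eps)
    u = torch.ones(cost.shape[0])
    v = torch.ones(cost.shape[1])
    for _ in range(n_iters):
        u = 1.0 / (K @ v).clamp(min=1e-8)
        v = 1.0 / (K.T @ u).clamp(min=1e-8)
    return u[:, None] * K * v[None, :]

def weighted_kabsch(src, dst, w):
    """Weighted SVD solve of R, t minimizing sum_i w_i ||R src_i + t - dst_i||^2."""
    w = w / w.sum()
    src_mean = (w[:, None] * src).sum(0)
    dst_mean = (w[:, None] * dst).sum(0)
    H = (src - src_mean).T @ (w[:, None] * (dst - dst_mean))
    U, _, Vt = torch.linalg.svd(H)
    d = torch.sign(torch.det(Vt.T @ U.T)).item()  # guard against reflections
    R = Vt.T @ torch.diag(torch.tensor([1.0, 1.0, d])) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```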
Neural network pruning can be effectively applied to compress automatic speech recognition (ASR) models. However, in multilingual ASR, performing language-agnostic pruning may lead to severe performance degradation on some languages, because a language-agnostic pruning mask may not fit all languages and may discard important language-specific parameters. In this work, we propose ASR pathways, a sparse multilingual ASR model that activates language-specific sub-networks ("pathways"), so that the parameters for each language are learned explicitly. With overlapping sub-networks, the shared parameters can also enable knowledge transfer to lower-resource languages via joint multilingual training. We propose a novel algorithm to learn ASR pathways and evaluate the proposed method on 4 languages with a streaming RNN-T model. Our proposed ASR pathways outperform both dense models (-5.0% on average) and language-agnostically pruned models (-21.4% on average), and provide better performance on low-resource languages than monolingual sparse models.
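A minimal sketch of the pathway idea follows, assuming one fixed binary mask per language over a shared weight tensor; the paper's algorithm for learning the masks jointly with the weights is omitted.

```python
import torch
import torch.nn as nn

class PathwayLinear(nn.Module):
    """A shared linear layer whose weights are masked per language at run time."""
    def __init__(self, dim_in, dim_out, languages, sparsity=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim_out, dim_in) * 0.02)
        # One fixed binary mask per language; where masks overlap, the
        # parameters are shared across languages and trained jointly.
        self.masks = {lang: (torch.rand(dim_out, dim_in) > sparsity).float()
                      for lang in languages}

    def forward(self, x, lang):
        return x @ (self.weight * self.masks[lang]).T

layer = PathwayLinear(256, 256, languages=["en", "fr", "de", "hi"])
out = layer(torch.randn(8, 256), lang="fr")  # activates the "fr" pathway
```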
Because anomalous samples are scarce, anomaly detection using only prior knowledge of normal samples has attracted increasing attention. Existing CNN-based pixel-reconstruction approaches suffer from two problems. First, the reconstruction source and target are raw pixel values, which carry indistinguishable semantic information. Second, CNNs tend to reconstruct both normal samples and anomalies well, making the two still hard to distinguish. In this paper, we propose the Anomaly Detection TRansformer (ADTR), which applies a transformer to reconstruct pre-trained features. The pre-trained features contain distinguishable semantic information, and adopting a transformer limits how well anomalies can be reconstructed, so anomalies can be detected easily once reconstruction fails. Moreover, we propose novel loss functions to make our approach compatible with both the normal-sample-only case and the case where anomalies are available with image-level and pixel-level labels. Performance can be further improved by adding simple synthetic or external irrelevant anomalies. Extensive experiments are conducted on anomaly detection datasets including MVTec-AD and CIFAR-10. Our method achieves superior performance compared with all baselines.
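The reconstruct-pretrained-features idea can be sketched as below, assuming token-shaped features from a frozen backbone and scoring anomalies by per-location reconstruction error; ADTR's specific design for preventing a trivial identity mapping is omitted from this simplification.

```python
import torch
import torch.nn as nn

class FeatureReconstructor(nn.Module):
    """Transformer that reconstructs a frozen backbone's feature tokens."""
    def __init__(self, feat_dim=256, n_tokens=196, n_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, feat_dim))

    def forward(self, feats):                     # feats: (B, n_tokens, feat_dim)
        return self.transformer(feats + self.pos)

def anomaly_map(reconstructor, feats):
    """Per-token reconstruction error; large values flag anomalous regions."""
    recon = reconstructor(feats)
    return (recon - feats).pow(2).mean(dim=-1)    # (B, n_tokens)
```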
Siamese-network-based trackers formulate 3D single object tracking as cross-correlation learning between the point features of a template and a search area. Because the appearance of the template and the search area varies considerably during tracking, how to learn robust cross-correlation between them to identify the potential target in the search area remains a challenging problem. In this paper, we explicitly use transformers to form a 3D Siamese transformer network that learns robust cross-correlation between the template and the search area of point clouds. Specifically, we develop a Siamese point transformer network to learn the shape context information of the target. Its encoder uses self-attention to capture non-local information of the point clouds and characterize the shape of the object, while the decoder utilizes cross-attention to extract discriminative point features. After that, we develop an iterative coarse-to-fine correlation network to learn robust cross-correlation between the template and the search area. It formulates cross-feature augmentation to associate the template with the potential target in the search area via cross-attention. To further enhance the potential target, it employs ego-feature augmentation, which applies self-attention to the local k-NN graph of the feature space to aggregate target features. Experiments on the KITTI, NuScenes, and Waymo datasets show that our method achieves state-of-the-art performance on the 3D single object tracking task.
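The cross-feature augmentation step can be illustrated with a single cross-attention call, as in the sketch below; per-point feature tensors are assumed, and the full Siamese point transformer with its coarse-to-fine iteration is omitted.

```python
import torch
import torch.nn as nn

cross_attn = nn.MultiheadAttention(embed_dim=128, num_heads=4, batch_first=True)

search_feat = torch.randn(1, 1024, 128)    # search-area point features (queries)
template_feat = torch.randn(1, 512, 128)   # template point features (keys/values)

# Each search-area point attends to template points, injecting target cues.
augmented, _ = cross_attn(query=search_feat, key=template_feat,
                          value=template_feat)
```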
Existing self-supervised monocular depth estimation methods can dispense with expensive annotations and achieve promising results. However, these methods suffer severe performance degradation when a model trained at a fixed resolution is directly evaluated at other, different resolutions. In this paper, we propose a resolution-adaptive self-supervised monocular depth estimation method (RA-Depth) that learns the scale invariance of scene depth. Specifically, we propose a simple yet efficient data augmentation method to generate images of the same scene at arbitrary scales. We then develop a dual high-resolution network that uses a multi-path encoder and decoder with dense interactions to aggregate multi-scale features for accurate depth inference. Finally, to explicitly learn the scale invariance of scene depth, we formulate a cross-scale depth consistency loss on depth predictions at different scales. Extensive experiments on the KITTI, Make3D, and NYU-V2 datasets show that RA-Depth not only achieves state-of-the-art performance but also exhibits good resolution adaptability.
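Below is a sketch of the cross-scale depth consistency idea, assuming two depth maps predicted from the same scene at different input resolutions; the lower-resolution prediction is resampled and compared to the higher-resolution one, with the L1 penalty being an illustrative choice.

```python
import torch
import torch.nn.functional as F

def cross_scale_consistency(depth_hi, depth_lo):
    """depth_hi: (B,1,H,W); depth_lo: (B,1,h,w) predicted from a downscaled input."""
    depth_lo_up = F.interpolate(depth_lo, size=depth_hi.shape[-2:],
                                mode="bilinear", align_corners=False)
    # Scene depth is scale-invariant: predictions should agree after resampling.
    return (depth_hi - depth_lo_up).abs().mean()
```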
Realizing the potential of neural video codecs on mobile devices is a major technological challenge due to the computational complexity of deep networks and the power constraints of mobile hardware. We demonstrate practical feasibility by leveraging Qualcomm's technology and innovation, bridging the gap from neural-network-based codec simulations running on wall-powered workstations to real-time operation on a mobile device powered by Snapdragon technology. We show the first-ever inter-frame neural video decoder running on a commercial mobile phone, decoding high-definition videos in real time while maintaining a low bitrate and high visual quality.