Pure Transformer models have achieved impressive success in natural language processing and computer vision. However, one limitation of Transformers is their need for large amounts of training data. In the realm of 3D point clouds, the availability of large datasets is a challenge, which exacerbates the difficulty of training Transformers for 3D tasks. In this work, we empirically study the effect of utilizing knowledge from a large number of images for point cloud understanding. We formulate a pipeline called Pix4Point that allows harnessing Transformers pre-trained in the image domain to improve downstream point cloud tasks. This is achieved by a modality-agnostic pure Transformer backbone with the help of a tokenizer and decoder layers specialized for the 3D domain. Using image-pretrained Transformers, we observe significant performance gains of Pix4Point on the tasks of 3D point cloud classification, part segmentation, and semantic segmentation on the ScanObjectNN, ShapeNetPart, and S3DIS benchmarks, respectively. Our code and models are available at: https://github.com/guochengqian/pix4point.
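The abstract names only two ingredients: a 3D-specialized tokenizer and a modality-agnostic Transformer backbone that can reuse image-pretrained weights. The following is a minimal PyTorch sketch of that data flow under stated assumptions: the random center sampling (a stand-in for farthest point sampling), the group size, and the embedding MLP are illustrative choices, not Pix4Point's actual components.

```python
import torch
import torch.nn as nn

class PointTokenizer(nn.Module):
    """Groups a point cloud into local patches and embeds them as tokens."""
    def __init__(self, num_tokens=128, group_size=32, dim=768):
        super().__init__()
        self.num_tokens = num_tokens
        self.group_size = group_size
        self.embed = nn.Sequential(nn.Linear(3, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, xyz):                            # xyz: (B, N, 3)
        B, N, _ = xyz.shape
        idx = torch.randperm(N)[: self.num_tokens]     # stand-in for FPS sampling
        centers = xyz[:, idx]                          # (B, T, 3)
        d = torch.cdist(centers, xyz)                  # (B, T, N) pairwise distances
        knn = d.topk(self.group_size, largest=False).indices
        groups = torch.gather(
            xyz.unsqueeze(1).expand(B, self.num_tokens, N, 3),
            2, knn.unsqueeze(-1).expand(-1, -1, -1, 3))
        groups = groups - centers.unsqueeze(2)         # relative coordinates
        return self.embed(groups).max(dim=2).values    # (B, T, dim) tokens

# Modality-agnostic backbone: a plain Transformer encoder whose weights
# could be initialized from an image-pretrained ViT of matching shape.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=12)

tokens = PointTokenizer()(torch.randn(2, 1024, 3))
features = backbone(tokens)        # task-specific decoder heads would follow
print(features.shape)              # torch.Size([2, 128, 768])
```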
Surgery is the only viable treatment for cataract patients with visual acuity (VA) impairment. Clinically, assessing the necessity of cataract surgery requires accurately predicting postoperative VA before surgery by analyzing multi-view optical coherence tomography (OCT) images. Unfortunately, due to complicated fundus conditions, determining postoperative VA remains difficult even for medical experts. Deep learning methods for this problem have been developed in recent years. Although effective, these methods still face several issues, such as failing to efficiently explore potential relations between multi-view OCT images, neglecting the key role of clinical prior knowledge (e.g., the preoperative VA value), and relying only on regression-based metrics, which lack a clear reference point. In this paper, we propose a novel Cross-token Transformer Network (CTT-Net) for postoperative VA prediction that analyzes both multi-view OCT images and the preoperative VA. To effectively fuse multi-view features of OCT images, we develop cross-token attention that restricts redundant/unnecessary attention flow. Further, we utilize the preoperative VA value to provide additional information for postoperative VA prediction and to facilitate fusion between views. Moreover, we design an auxiliary classification loss to improve model performance and assess VA recovery more thoroughly, avoiding the limitations of using only regression metrics. To evaluate CTT-Net, we build a multi-view OCT image dataset collected from our collaborating hospital. Extensive experiments validate the effectiveness of our model compared to existing methods across various metrics. Code is available at: https://github.com/wjh892521292/Cataract OCT.
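The abstract names three ingredients: per-view OCT tokens, a preoperative-VA token injected into the fusion, and a regression head paired with an auxiliary classification head. A minimal PyTorch sketch of that combination follows; the full-attention fusion shown here is an assumption, since the exact cross-token masking rule of CTT-Net is not specified in the abstract, and the head sizes are illustrative.

```python
import torch
import torch.nn as nn

class VAFusion(nn.Module):
    def __init__(self, dim=256, num_classes=3):
        super().__init__()
        self.va_embed = nn.Linear(1, dim)            # preoperative VA scalar -> token
        self.fuse = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2)
        self.reg_head = nn.Linear(dim, 1)            # postoperative VA regression
        self.cls_head = nn.Linear(dim, num_classes)  # auxiliary recovery-level class

    def forward(self, view_tokens, pre_va):
        # view_tokens: (B, V, dim), one pooled token per OCT view
        # pre_va:      (B, 1), the preoperative visual acuity value
        va_tok = self.va_embed(pre_va).unsqueeze(1)          # (B, 1, dim)
        fused = self.fuse(torch.cat([va_tok, view_tokens], dim=1))
        summary = fused[:, 0]                                # read out the VA token
        return self.reg_head(summary), self.cls_head(summary)

model = VAFusion()
va_pred, va_cls = model(torch.randn(4, 6, 256), torch.rand(4, 1))
# Training would combine an MSE regression loss with the auxiliary
# cross-entropy classification loss, as the abstract describes.
```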
Students' ability to ask curious questions is a crucial skill that improves their learning processes. To train this skill, previous research has used a conversational agent that proposes specific cues to prompt children's curiosity during learning. Despite showing pedagogical efficiency, this method is still limited because it relies on generating the prompts by hand for each educational resource, which can be a very long and costly process. In this context, we leverage advances in the natural language processing field and explore using a large language model (GPT-3) to automate the generation of this agent's curiosity-prompting cues to help children ask more and deeper questions. We then used this study to investigate a different curiosity-prompting behavior for the agent. The study was conducted with 75 students aged 9 to 10. They interacted with one of three agents: a hand-crafted conversational agent that proposes "closed" manually-extracted cues leading to predefined questions, a GPT-3-driven agent that proposes the same type of cues, or a GPT-3-driven agent that proposes "open" cues that can lead to several possible questions. Results showed similar question-asking performance between children who used the two "closed" agents, but significantly better performance for participants with the "open" agent. Our first results suggest the validity of using GPT-3 to facilitate the implementation of curiosity-stimulating learning technologies. In a second step, we also show that GPT-3 can be efficient at proposing relevant open cues that leave children more autonomy to express their curiosity.
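As a rough illustration of how the two cue styles could be produced automatically, here is a minimal Python sketch. The prompt wording and the `complete(prompt)` interface are placeholders for whatever LLM completion call is available (the study used GPT-3); they are assumptions, not the authors' actual prompts.

```python
def complete(prompt: str) -> str:
    """Stand-in for an LLM completion call (e.g., GPT-3)."""
    raise NotImplementedError

def closed_cue(passage: str) -> str:
    # "Closed" cue: points the child toward one predefined question.
    return complete(
        "Read this text and give one short hint that leads the reader to "
        f"ask one specific question about it:\n\n{passage}\n\nHint:")

def open_cue(passage: str) -> str:
    # "Open" cue: leaves room for several possible questions.
    return complete(
        "Read this text and give one short, open-ended hint that invites "
        f"the reader to wonder about it in their own way:\n\n{passage}\n\nHint:")
```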
In recent years, large language models (LLMs) have demonstrated impressive strength in natural language generation. A common practice for increasing generation diversity is to sample multiple outputs from the model. However, there is no simple and reliable way to select the best output from these random samples. As a case study in the context of question generation, we propose two prompt-based approaches for selecting high-quality questions from a set of LLM-generated candidates. Our methods operate under two constraints: 1) a black-box (non-modifiable) question generation model and 2) no access to human-annotated references, both of which are realistic limitations for real-world deployments of LLMs. Through automatic and human evaluations, we empirically demonstrate that our approaches can effectively select questions of higher quality than greedy generation.
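A minimal sketch of the overall setting follows: over-generate candidates by sampling, then use a prompt to pick the best one without any reference answers. The `generate` interface and the selection prompt are illustrative assumptions; the paper proposes two specific prompt-based selection methods that are not reproduced verbatim here.

```python
def generate(prompt: str, temperature: float = 1.0) -> str:
    """Stand-in for a black-box (non-modifiable) LLM question generator."""
    raise NotImplementedError

def select_best_question(context: str, num_samples: int = 8) -> str:
    # 1) Over-generate candidates with sampling for diversity.
    candidates = [generate(f"Write a question about:\n{context}\nQuestion:")
                  for _ in range(num_samples)]
    # 2) Ask the model itself to choose, with no human-written references.
    listing = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(candidates))
    choice = generate(
        f"Context:\n{context}\n\nCandidate questions:\n{listing}\n\n"
        "Which question is clearest and best grounded in the context? "
        "Answer with its number only:", temperature=0.0)
    idx = int("".join(ch for ch in choice if ch.isdigit()) or "1") - 1
    return candidates[max(0, min(idx, len(candidates) - 1))]
```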
Training keyphrase generation (KPG) models requires large amounts of annotated data, which can be prohibitively expensive and is often limited to specific domains. In this study, we first demonstrate that large distribution shifts between different domains severely hinder the transferability of KPG models. We then propose a three-stage pipeline that, in a data-efficient manner, gradually guides the learning focus of KPG models from general syntactic features to domain-related semantics. With domain-general phrase pre-training, we pre-train sequence-to-sequence models using generic phrase annotations that are widely available on the web, enabling the models to generate phrases across a broad range of domains. The resulting model is then applied in a transfer-labeling stage to produce domain-specific pseudo keyphrases, which help adapt the model to new domains. Finally, we fine-tune the model with limited annotated data to fully adapt it to the target domain. Our experimental results show that the proposed pipeline produces high-quality keyphrases in new domains and yields consistent improvements after adaptation with limited in-domain annotated data.
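A schematic sketch of the three-stage recipe follows. All function bodies are placeholders; the point is the order of the stages and what data each one consumes, not a faithful reimplementation of the paper's training code.

```python
def generate_keyphrases(model, doc):
    """Placeholder: decode keyphrases for one document with the current model."""
    raise NotImplementedError

def train(model, pairs):
    """Placeholder: one sequence-to-sequence training run on (doc, phrases) pairs."""
    raise NotImplementedError

def three_stage_pipeline(model, web_phrase_pairs, unlabeled_docs, labeled_pairs):
    # Stage 1: domain-general phrase pre-training (general syntactic features),
    # using generic phrase annotations widely available on the web.
    train(model, web_phrase_pairs)
    # Stage 2: transfer labeling -- the stage-1 model produces domain-specific
    # pseudo keyphrases for unlabeled target-domain documents.
    pseudo = [(d, generate_keyphrases(model, d)) for d in unlabeled_docs]
    train(model, pseudo)
    # Stage 3: low-resource fine-tuning on the limited true target-domain labels.
    train(model, labeled_pairs)
    return model
```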
To solve difficult tasks, humans ask questions to acquire knowledge from external sources. In contrast, classical reinforcement learning agents lack this ability and often resort to exploratory behavior. The problem is exacerbated because few of today's environments support querying for knowledge. To study how agents can be taught to query external knowledge via language, we first introduce two new environments: the grid-world-based Q-BabyAI and the text-based Q-TextWorld. In addition to physical interactions, an agent can query an external knowledge source specialized for these environments to gather information. Second, we propose the "Asking for Knowledge" (AFK) agent, which learns to generate language commands to query for meaningful knowledge that helps solve its tasks. AFK leverages a non-parametric memory, a pointer mechanism, and an episodic exploration bonus to address (1) irrelevant information, (2) a large query language space, and (3) the delayed reward for making meaningful queries. Extensive experiments show that the AFK agent outperforms recent baselines on the challenging Q-BabyAI and Q-TextWorld environments.
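Of the three components named above, the episodic exploration bonus is the simplest to sketch: reward queries whose answers have not been seen earlier in the same episode. The bonus form and the memory keying below are assumptions; AFK's non-parametric memory and pointer mechanism are not shown.

```python
class EpisodicQueryBonus:
    """Pays a small intrinsic reward the first time a (query, answer)
    pair appears within an episode, and nothing on repeats."""

    def __init__(self, bonus: float = 0.1):
        self.bonus = bonus
        self.seen: set[str] = set()

    def reset(self):
        """Call at the start of every episode."""
        self.seen.clear()

    def __call__(self, query: str, answer: str) -> float:
        key = f"{query}||{answer}"
        if key in self.seen:
            return 0.0          # repeated query: no exploration credit
        self.seen.add(key)
        return self.bonus       # novel, potentially meaningful query

bonus = EpisodicQueryBonus()
bonus.reset()
r = bonus("what opens the red door?", "the blue key")   # 0.1
r += bonus("what opens the red door?", "the blue key")  # still 0.1
```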
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or benefit only marginally, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, inputs, network regularization, and sequential distillation, revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; and 3) weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification with the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way to develop small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as most previous works have done. Code is available at https://github.com/OliverRensu/TinyMIM.
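To make finding 1) concrete, here is a minimal PyTorch sketch of token-relation distillation, where the "relation" is the token-to-token attention map softmax(QK^T/sqrt(d)) and the student matches the teacher's map via a KL loss. TinyMIM's exact relation targets and temperature are not reproduced, so treat this as an illustrative loss, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def relation_distill_loss(q_s, k_s, q_t, k_t):
    # q_*, k_*: (B, heads, N, d) queries/keys from student (s) and teacher (t).
    # Per finding 2), q_t/k_t may come from an intermediate teacher layer
    # when the student is shallower than the teacher.
    d = q_s.shape[-1]
    rel_s = F.log_softmax(q_s @ k_s.transpose(-2, -1) / d ** 0.5, dim=-1)
    rel_t = F.softmax(q_t @ k_t.transpose(-2, -1) / d ** 0.5, dim=-1)
    return F.kl_div(rel_s, rel_t, reduction="batchmean")

# Shapes only, for illustration: batch 2, 6 heads, 196 tokens, head dim 64.
loss = relation_distill_loss(
    torch.randn(2, 6, 196, 64), torch.randn(2, 6, 196, 64),
    torch.randn(2, 6, 196, 64), torch.randn(2, 6, 196, 64))
```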
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
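A minimal PyTorch sketch of the stated idea follows: rather than an explicit view transformation, 3D coordinates are encoded into position embeddings added to both image and point-cloud tokens, so a plain Transformer can align the modalities implicitly. The MLP encoders and the per-token 3D points associated with image tokens are assumptions about design details the abstract does not spell out.

```python
import torch
import torch.nn as nn

dim = 256
point_pos_enc = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
image_pos_enc = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

def fuse_tokens(img_tokens, img_points3d, pts_tokens, pts_xyz):
    # img_tokens:   (B, Ni, dim) image feature tokens
    # img_points3d: (B, Ni, 3)   3D points tied to each image token
    #               (e.g., sampled along the pixel's camera ray)
    # pts_tokens:   (B, Np, dim) LiDAR feature tokens
    # pts_xyz:      (B, Np, 3)   their 3D coordinates
    img = img_tokens + image_pos_enc(img_points3d)
    pts = pts_tokens + point_pos_enc(pts_xyz)
    # One shared-coordinate sequence; a DETR-style decoder with object
    # queries would attend to it and regress 3D boxes directly.
    return torch.cat([img, pts], dim=1)

tokens = fuse_tokens(torch.randn(2, 100, dim), torch.randn(2, 100, 3),
                     torch.randn(2, 200, dim), torch.randn(2, 200, 3))
```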
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
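Of the two attacks, NAIVEATTACK is simple enough to sketch directly as described: stamp a trigger onto a fraction of the raw images before distillation begins (DOORPING, which iteratively re-optimizes the trigger during distillation, is not shown). The patch size, location, and poison rate below are illustrative choices, not the paper's settings.

```python
import torch

def naive_attack(images, labels, target_class=0, poison_rate=0.1, patch=4):
    # images: (N, C, H, W) raw training data fed to the distillation algorithm
    # labels: (N,) integer class labels
    n_poison = int(poison_rate * images.shape[0])
    idx = torch.randperm(images.shape[0])[:n_poison]
    images, labels = images.clone(), labels.clone()
    images[idx, :, -patch:, -patch:] = 1.0   # white square trigger in the corner
    labels[idx] = target_class               # point the backdoor at one class
    return images, labels                    # then run dataset distillation as usual
```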
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and to better optimize the regression problem in line with the easy-to-hard law of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the results indicate that PMT-IQA outperforms the comparison approaches, and that both the MS and PMT modules improve the model's performance.
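One plausible reading of the progressive easy-to-hard idea is a loss schedule in which an easy auxiliary task (coarse quality classification) dominates early training and the harder regression objective takes over later. The PyTorch sketch below is such a schedule under stated assumptions; the linear decay and the two-head setup are illustrative, not PMT-IQA's exact formulation.

```python
import torch.nn.functional as F

def pmt_loss(score_pred, score_true, cls_logits, cls_true, epoch, total_epochs):
    # score_pred/score_true: (B, 1) continuous quality scores (hard task)
    # cls_logits: (B, K) logits over K coarse quality bins (easy task)
    w_easy = max(0.0, 1.0 - epoch / total_epochs)   # easy -> hard over training
    reg = F.mse_loss(score_pred, score_true)        # hard: exact quality score
    cls = F.cross_entropy(cls_logits, cls_true)     # easy: coarse quality bin
    return (1.0 - w_easy) * reg + w_easy * cls
```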