We tackle a new problem of multi-view camera and subject registration in the bird's eye view (BEV) without pre-given camera calibration. This is a very challenging problem, since its only input is several RGB images from different first-person views (FPVs) of a multi-person scene, without the BEV image or the calibration of the FPVs, while the output is a unified plane with the localization and orientation of both the subjects and the cameras in a BEV. We propose an end-to-end framework to solve this problem, whose main idea comprises the following parts: i) creating a view-transform subject detection module that transforms each FPV into a virtual BEV, including the localization and orientation of each pedestrian; ii) deriving a geometric-transformation-based method to estimate camera localization and view direction, i.e., the camera registration in a unified BEV; iii) making use of spatial and appearance information to aggregate the subjects into the unified BEV. We collect a new large-scale synthetic dataset with rich annotations for evaluation. The experimental results show the remarkable effectiveness of our proposed method.
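To make the geometric camera-registration step concrete, here is a minimal sketch of one plausible building block (not necessarily the paper's exact formulation): a 2D Kabsch/Procrustes fit that aligns the subject positions detected in one camera's virtual BEV with the unified plane, yielding that camera's view direction (rotation) and location (translation). Point correspondences are assumed to be given; the paper additionally uses appearance cues to establish them.

```python
# A plausible 2D rigid registration step (illustrative, not the paper's code).
import numpy as np

def register_camera(pts_cam, pts_world):
    """pts_cam, pts_world: (N, 2) matched subject positions. Returns R, t."""
    mu_c, mu_w = pts_cam.mean(axis=0), pts_world.mean(axis=0)
    H = (pts_cam - mu_c).T @ (pts_world - mu_w)   # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T            # camera view direction
    t = mu_w - R @ mu_c                           # camera location on the plane
    return R, t                                    # pts_world ~ (R @ pts_cam.T).T + t
```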
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
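For illustration, a minimal sketch of the patch-based strategy most respondents used for data too large to fit in memory: train on random crops instead of whole images. All sizes here are arbitrary assumptions, not tied to any specific challenge.

```python
# Random aligned patch sampling for large segmentation images (illustrative).
import numpy as np

def random_patch(image, label, patch=(128, 128)):
    """Sample one aligned training patch from a large 2D image/label pair."""
    y = np.random.randint(0, image.shape[0] - patch[0] + 1)
    x = np.random.randint(0, image.shape[1] - patch[1] + 1)
    return (image[y:y+patch[0], x:x+patch[1]],
            label[y:y+patch[0], x:x+patch[1]])

img = np.zeros((2048, 2048))                      # too large to train on at once
seg = np.zeros((2048, 2048), dtype=np.int64)
img_patch, seg_patch = random_patch(img, seg)     # feeds one training step
```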
Can a text-to-image diffusion model be used as a training objective for adapting a GAN generator to another domain? In this paper, we show that classifier-free guidance can be leveraged as a critic, enabling generators to distill knowledge from large-scale text-to-image diffusion models. Generators can be efficiently shifted into new domains indicated by text prompts, without access to ground-truth samples from the target domains. We demonstrate the effectiveness and controllability of our method through extensive experiments. Although not trained to minimize CLIP loss, our model achieves equally high CLIP scores and significantly lower FID than prior work on short prompts, and outperforms the baseline both qualitatively and quantitatively on long and complicated prompts. To the best of our knowledge, the proposed method is the first attempt at incorporating large-scale pre-trained diffusion models and distillation sampling for text-driven image generator domain adaptation, and it achieves a quality previously unattainable. Moreover, we extend our work to 3D-aware style-based generators and to DreamBooth guidance.
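A hedged sketch of how classifier-free guidance can act as a critic, in the spirit of score-distillation-style training: noise the generator's output, query the frozen diffusion model with and without the text condition, and push the guided noise-prediction error back into the generator. The names `generator`, `unet`, and `text_emb`, the timestep range, and the gradient weighting are placeholder assumptions rather than the paper's exact recipe.

```python
import torch

def sds_step(generator, unet, z, text_emb, uncond_emb, alphas_cumprod,
             guidance_scale=7.5):
    x = generator(z)                                   # generator sample to adapt
    t = torch.randint(20, 980, (x.shape[0],), device=x.device)
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x)
    x_t = a.sqrt() * x + (1 - a).sqrt() * noise        # forward diffusion q(x_t | x)
    with torch.no_grad():                              # diffusion model stays frozen
        eps_c = unet(x_t, t, text_emb)                 # text-conditional prediction
        eps_u = unet(x_t, t, uncond_emb)               # unconditional prediction
        eps = eps_u + guidance_scale * (eps_c - eps_u) # classifier-free guidance
    grad = (1 - a) * (eps - noise)                     # distillation-style gradient
    (grad.detach() * x).sum().backward()               # flows only into the generator
```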
Robust Model-Agnostic Meta-Learning (MAML) is usually adopted to train a meta-model that can quickly adapt to novel classes with only a few exemplars while remaining robust to adversarial attacks. The conventional solution for robust MAML is to introduce robustness-promoting regularization during the meta-training stage. With such regularization, previous robust MAML methods simply follow the typical MAML practice that the number of training shots should match the number of test shots to achieve optimal adaptation performance. However, although robustness can be largely improved in this way, previous methods sacrifice a great deal of clean accuracy. In this paper, we observe that introducing robustness-promoting regularization into MAML reduces the intrinsic dimension of clean sample features, which results in a lower capacity of clean representations. This may explain why the clean accuracy of previous robust MAML methods drops so severely. Based on this observation, we propose a simple strategy, i.e., increasing the number of training shots, to mitigate the loss of intrinsic dimension caused by robustness-promoting regularization. Though simple, our method remarkably improves the clean accuracy of MAML without much loss of robustness, producing a robust yet accurate model. Extensive experiments demonstrate that our method outperforms prior art in achieving a better trade-off between accuracy and robustness. Besides, we observe that our method is less sensitive to the number of fine-tuning steps during meta-training, which allows for a reduced number of fine-tuning steps to improve training efficiency.
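Below is a self-contained toy sketch of this recipe: robust meta-training with more support ("training") shots than the 1-shot test setting, using a single differentiable inner step and an FGSM adversarial term as the robustness-promoting regularizer. The linear model and random tensors are placeholders for a real few-shot pipeline.

```python
import torch
import torch.nn.functional as F

def fgsm(x, y, w, b, eps=0.1):
    """One-step adversarial examples against the adapted (detached) weights."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(x @ w.detach().t() + b.detach(), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

n_way, train_shots, inner_lr = 5, 4, 0.01       # test is 1-shot; train uses 4
w = torch.zeros(n_way, 64, requires_grad=True)  # toy linear "backbone"
b = torch.zeros(n_way, requires_grad=True)
meta_opt = torch.optim.Adam([w, b], lr=1e-3)

xs = torch.randn(n_way * train_shots, 64)       # enlarged support set (fake data)
ys = torch.arange(n_way).repeat_interleave(train_shots)
xq = torch.randn(n_way * 15, 64)                # query set
yq = torch.arange(n_way).repeat_interleave(15)

# Inner loop: one differentiable gradient step on the enlarged support set.
gw, gb = torch.autograd.grad(
    F.cross_entropy(xs @ w.t() + b, ys), (w, b), create_graph=True)
w2, b2 = w - inner_lr * gw, b - inner_lr * gb

# Outer loop: clean query loss + adversarial query loss (robust regularization).
x_adv = fgsm(xq, yq, w2, b2)
loss = F.cross_entropy(xq @ w2.t() + b2, yq) \
     + F.cross_entropy(x_adv @ w2.t() + b2, yq)
meta_opt.zero_grad(); loss.backward(); meta_opt.step()
```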
Large language models (LLMs) show excellent performance but are compute- and memory-intensive. Quantization can reduce memory and accelerate inference. However, for LLMs beyond 100 billion parameters, existing methods cannot maintain accuracy or do not run efficiently on hardware. We propose SmoothQuant, a training-free, accuracy-preserving, and general-purpose post-training quantization (PTQ) solution to enable 8-bit weight, 8-bit activation (W8A8) quantization for LLMs that can be implemented efficiently. We observe that systematic outliers appear at fixed activation channels. Based on the fact that weights are easy to quantize while activations are not, SmoothQuant smooths the activation outliers by offline migrating the quantization difficulty from activations to weights with a mathematically equivalent transformation. SmoothQuant enables an INT8 quantization of both weights and activations for all the GEMMs in LLMs, including OPT-175B, BLOOM-176B, and GLM-130B. SmoothQuant has better hardware efficiency than existing techniques using mixed-precision activation quantization or weight-only quantization. We demonstrate up to 1.56x speedup and 2x memory reduction for LLMs with negligible loss in accuracy. Thanks to the hardware-friendly design, we integrate SmoothQuant into FasterTransformer, a state-of-the-art LLM serving framework, and achieve faster inference speed with half the number of GPUs compared to FP16. Our work offers a turn-key solution that reduces hardware costs and democratizes LLMs. Code is available at: https://github.com/mit-han-lab/smoothquant.
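A minimal NumPy sketch of the smoothing transformation described above: per-channel factors $s_j = \max|X_j|^{\alpha} / \max|W_j|^{1-\alpha}$ migrate quantization difficulty from activations to weights while leaving the layer output mathematically unchanged. The value $\alpha = 0.5$ and the tensor shapes are illustrative assumptions.

```python
import numpy as np

def smooth(X, W, alpha=0.5, eps=1e-8):
    """X: (tokens, in_features) activations; W: (in_features, out_features) weights."""
    act_max = np.abs(X).max(axis=0)           # per-input-channel activation range
    w_max = np.abs(W).max(axis=1)             # per-input-channel weight range
    s = np.clip((act_max ** alpha) / (w_max ** (1 - alpha) + eps), eps, None)
    X_smooth = X / s                          # outliers shrunk in activations...
    W_smooth = W * s[:, None]                 # ...and absorbed into the weights
    return X_smooth, W_smooth

X = np.random.randn(4, 8); X[:, 3] *= 50      # channel 3 is a systematic outlier
W = np.random.randn(8, 16)
Xs, Ws = smooth(X, W)
assert np.allclose(X @ W, Xs @ Ws)            # equivalent output, easier to quantize
```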
During image editing, existing deep generative models tend to re-synthesize the entire output from scratch, including the unedited regions. This leads to a significant waste of computation, especially for minor editing operations. In this work, we present Spatially Sparse Inference (SSI), a general-purpose technique that selectively performs computation for edited regions and accelerates various generative models, including both conditional GANs and diffusion models. Our key observation is that users tend to make gradual changes to the input image. This motivates us to cache and reuse the feature maps of the original image. Given an edited image, we sparsely apply the convolutional filters to the edited regions while reusing the cached features for the unedited regions. Based on our algorithm, we further propose the Sparse Incremental Generative Engine (SIGE) to convert the computation reduction into latency reduction on off-the-shelf hardware. With edited regions covering only 1.2% of the image, our method reduces the computation of DDIM by 7.5$\times$ and of GauGAN by 18$\times$ while preserving the visual fidelity. With SIGE, we accelerate DDIM by 3.0$\times$ on an RTX 3090 and 6.6$\times$ on an Apple M1 Pro CPU, and GauGAN by 4.2$\times$ on an RTX 3090 and 14$\times$ on an Apple M1 Pro CPU.
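A toy sketch of the cache-and-reuse idea (not the actual SIGE kernels): convolve only the tiles that a difference mask marks as affected by the edit, and reuse the cached output everywhere else. The tile size and the single 3x3 convolution are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sparse_conv(edited, cached_in, cached_out, weight, tile=16):
    """Recompute a 3x3 conv (padding=1) only on tiles affected by the edit."""
    diff = (edited - cached_in).abs().sum(1, keepdim=True) > 1e-6  # changed pixels
    mask = F.max_pool2d(diff.float(), 3, stride=1, padding=1) > 0  # dilate by radius
    out = cached_out.clone()
    _, _, H, W = edited.shape
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            if mask[:, :, y:y+tile, x:x+tile].any():
                y0, y1 = max(y - 1, 0), min(y + tile + 1, H)  # 1-pixel halo so the
                x0, x1 = max(x - 1, 0), min(x + tile + 1, W)  # conv sees true context
                patch = F.conv2d(edited[:, :, y0:y1, x0:x1], weight, padding=1)
                out[:, :, y:y+tile, x:x+tile] = \
                    patch[:, :, y-y0:y-y0+tile, x-x0:x-x0+tile]
    return out

w = torch.randn(8, 3, 3, 3)
orig = torch.randn(1, 3, 64, 64)
cached = F.conv2d(orig, w, padding=1)                  # features cached before editing
edit = orig.clone(); edit[:, :, 10:20, 10:20] += 1.0   # small user edit
fast = sparse_conv(edit, orig, cached, w)
assert torch.allclose(fast, F.conv2d(edit, w, padding=1), atol=1e-5)
```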
Accurate vehicle type classification plays an important role in intelligent transportation systems. It is important for road administrators to understand road conditions, and such classification commonly feeds traffic-light control systems so that they can respond accordingly to alleviate congestion. New technologies and comprehensive data sources, such as aerial photos and remote sensing data, provide richer, higher-dimensional information. Likewise, thanks to the rapid development of deep neural network technology, image-based vehicle classification methods can better extract the underlying objective features when processing the data. Recently, several deep learning models have been proposed to solve this problem. However, traditional pure-convolution-based methods have limits on global information extraction, and complex environments such as bad weather severely restrict their recognition capability. To improve vehicle type classification under complex environments, this study proposes a novel Densely connected convolutional Transformer-in-Transformer Neural Network (Dense-TNT) framework, built by stacking densely connected convolutional network (DenseNet) and Transformer-in-Transformer (TNT) layers. Data from three regions under four different weather conditions are deployed to evaluate the recognition capability. The experiments find that the recognition capability of our proposed vehicle classification model decays little even under heavy fog.
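A rough PyTorch sketch of the described architecture: densely connected convolutional blocks for local feature extraction, followed by transformer layers for global context. A plain transformer encoder stands in for the TNT blocks here, and all layer counts and dimensions are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torchvision

class DenseTNTSketch(nn.Module):
    def __init__(self, n_classes=4, dim=256, n_heads=8, n_tf_layers=4):
        super().__init__()
        self.cnn = torchvision.models.densenet121(weights=None).features
        self.proj = nn.Conv2d(1024, dim, kernel_size=1)   # 1024 = densenet121 output
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_tf_layers)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                                 # x: (B, 3, H, W)
        f = self.proj(self.cnn(x))                        # local conv features
        tokens = f.flatten(2).transpose(1, 2)             # (B, h*w, dim) tokens
        return self.head(self.transformer(tokens).mean(dim=1))

logits = DenseTNTSketch()(torch.randn(2, 3, 224, 224))    # vehicle-type logits
```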
Tactile recognition of 3D objects remains a challenging task. Compared with 2D shapes, the complex geometry of 3D surfaces requires richer tactile signals, more dexterous actions, and more advanced encoding techniques. In this work, we propose TANDEM3D, a method that applies a co-training framework for exploration and decision-making to 3D object recognition with tactile signals. Starting from our previous work, which introduced a co-training paradigm for 2D recognition problems, we introduce a number of advances that enable us to scale to 3D. TANDEM3D is based on a novel encoder that builds 3D object representations from contact positions and normals using PointNet++. Furthermore, by enabling 6DOF movement, TANDEM3D explores and collects discriminative touch information with high efficiency. Our method is trained entirely in simulation and validated with real-world experiments. Compared with state-of-the-art baselines, TANDEM3D achieves higher accuracy with fewer actions in recognizing 3D objects, and is also shown to be more robust to different types and amounts of sensor noise. Video is available at https://jxu.ai/tandem3d.
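A hedged sketch of the input encoding: contact points and surface normals form a 6D point cloud that is encoded order-invariantly. A simple PointNet-style shared MLP with max pooling stands in for the PointNet++ encoder used in the paper; all sizes are illustrative.

```python
import torch
import torch.nn as nn

class ContactEncoder(nn.Module):
    def __init__(self, n_classes=10, dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(6, 64), nn.ReLU(),
                                 nn.Linear(64, dim), nn.ReLU())
        self.head = nn.Linear(dim, n_classes)

    def forward(self, points, normals):          # each: (B, N, 3)
        x = torch.cat([points, normals], dim=-1) # per-contact 6D features
        x = self.mlp(x).max(dim=1).values        # order-invariant pooling over contacts
        return self.head(x)                      # object-class logits

logits = ContactEncoder()(torch.randn(2, 50, 3), torch.randn(2, 50, 3))
```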
Early exiting is an effective paradigm for improving the inference efficiency of deep networks. By constructing classifiers (exits) with varying resource demands, such networks can output easy samples at early exits, removing the need to execute the deeper layers. While existing works mainly focus on the architectural design of multi-exit networks, the training strategies for such models remain largely unexplored. Current state-of-the-art models treat all samples equally during training. However, the early-exiting behavior at test time is ignored, leading to a gap between training and testing. In this paper, we propose to bridge this gap through sample weighting. Intuitively, easy samples, which generally exit early in the network during inference, should contribute more to training the early classifiers, whereas the late classifiers should emphasize the training of hard samples, which mostly exit from the deeper layers. Our work proposes to adopt a weight prediction network that weights the loss of different training samples at each exit. This weight prediction network and the backbone model are jointly optimized under a meta-learning framework with a novel optimization objective. By bringing the adaptive inference behavior into the training phase, we show that the proposed weighting mechanism consistently improves the trade-off between classification accuracy and inference efficiency. Code is available at https://github.com/leaplabthu/l2w-den.
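A simplified sketch of the sample-weighting idea: a small weight prediction network scores each sample at every exit, and those scores weight the per-exit classification losses. The joint meta-learning optimization is omitted here for brevity, and all module sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_exits, n_classes, feat_dim, batch = 3, 10, 32, 8

backbone = nn.ModuleList([nn.Linear(feat_dim, feat_dim) for _ in range(n_exits)])
exits = nn.ModuleList([nn.Linear(feat_dim, n_classes) for _ in range(n_exits)])
# Weight net: per-sample losses at all exits -> one weight for every exit.
weight_net = nn.Sequential(nn.Linear(n_exits, 16), nn.ReLU(),
                           nn.Linear(16, n_exits), nn.Softmax(dim=-1))

x, y = torch.randn(batch, feat_dim), torch.randint(0, n_classes, (batch,))

h, losses = x, []
for stage, head in zip(backbone, exits):
    h = torch.relu(stage(h))
    losses.append(F.cross_entropy(head(h), y, reduction='none'))  # (batch,)
per_exit = torch.stack(losses, dim=1)                             # (batch, n_exits)

weights = weight_net(per_exit.detach())        # easy samples weight early exits more
loss = (weights * per_exit).sum(dim=1).mean()  # weighted multi-exit objective
loss.backward()
```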
Building robust dialogue systems for spoken conversations is more challenging than for written conversations. In this regard, DSTC10-Track2-Task2 was proposed, aiming to build a task-oriented dialogue (TOD) system that incorporates unstructured external knowledge in spoken conversations, extending DSTC9-Track1. This paper describes our system, which features four advanced methods: data construction, negative sampling, post-training, and style transfer. We first automatically construct a large training dataset, since DSTC10-Track2 did not release an official training set. For the knowledge selection task, we propose weighted negative sampling to train the model with finer granularity. We also employ post-training and style transfer for the response generation task, so as to generate appropriate responses whose style is similar to that of the target responses. In our experiments, we investigate the effects of weighted negative sampling, post-training, and style transfer. Our model ranked 7th out of 16 teams in the objective evaluation and 6th in the human evaluation.
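A hedged sketch of what weighted negative sampling for knowledge selection could look like: negatives that are more similar to the dialogue context are drawn with higher probability, giving the selector a finer-grained training signal. The similarity measure and temperature are illustrative assumptions, not the paper's exact scheme.

```python
import torch

def sample_negatives(context_emb, knowledge_embs, positive_idx, k=4, temp=0.1):
    """Draw k negatives, biased toward snippets most similar to the context."""
    sims = knowledge_embs @ context_emb               # (num_snippets,) similarities
    sims[positive_idx] = float('-inf')                # never sample the positive
    probs = torch.softmax(sims / temp, dim=0)         # harder negatives weigh more
    return torch.multinomial(probs, k, replacement=False)

context_emb = torch.randn(128)                        # dialogue-context embedding
knowledge_embs = torch.randn(100, 128)                # candidate knowledge snippets
neg_idx = sample_negatives(context_emb, knowledge_embs, positive_idx=7)
```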