Radar is the only sensor that can provide reliable perception in all weather conditions at an affordable cost, and it has been widely accepted as a key complement to camera and LiDAR in modern advanced driver assistance systems (ADAS) and autonomous driving systems. Recent state-of-the-art works reveal that fusing radar and LiDAR can lead to robust detection in adverse weather, such as fog. However, these methods still suffer from low accuracy in bounding box estimation. This paper proposes bird's-eye view (BEV) fusion learning for an anchor-box-free object detection system, which uses features derived from the radar range-azimuth heatmap and the LiDAR point cloud to estimate possible objects. Different label assignment strategies are designed to encourage consistency between the classification of anchor points as foreground or background and the corresponding bounding box regression. Furthermore, the performance of the proposed object detector is enhanced by a novel interactive transformer module. We demonstrate the superior performance of the proposed method on the recently published Oxford Radar RobotCar (ORR) dataset, showing that our system outperforms other state-of-the-art methods by a large margin in accuracy.
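As a rough illustration of the fusion-and-detection idea, here is a minimal sketch, assuming both modalities are already rasterized into BEV feature maps on the same grid; the fusion is a plain concatenation followed by convolutions, and the anchor-free head predicts a per-cell objectness logit plus box parameters. The paper's label assignment strategies and interactive transformer module are not reproduced here:

import torch
import torch.nn as nn

class BEVFusionHead(nn.Module):
    """Toy anchor-free BEV fusion head: concatenate radar and LiDAR BEV
    features, fuse with convolutions, and predict per-cell objectness and
    box parameters (center offsets, size, yaw)."""
    def __init__(self, radar_ch=64, lidar_ch=64, hidden=128):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(radar_ch + lidar_ch, hidden, 3, padding=1),
            nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
        )
        self.cls_head = nn.Conv2d(hidden, 1, 1)   # foreground/background logit
        self.box_head = nn.Conv2d(hidden, 6, 1)   # dx, dy, w, l, sin(yaw), cos(yaw)

    def forward(self, radar_bev, lidar_bev):
        x = self.fuse(torch.cat([radar_bev, lidar_bev], dim=1))
        return self.cls_head(x), self.box_head(x)

# usage: radar and LiDAR BEV maps share the same grid resolution
radar_bev = torch.randn(2, 64, 200, 200)
lidar_bev = torch.randn(2, 64, 200, 200)
cls_logits, box_regs = BEVFusionHead()(radar_bev, lidar_bev)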
Various depth estimation models are now widely used on many mobile and IoT devices for image segmentation, bokeh effect rendering, object tracking, and many other mobile tasks. Thus, it is crucial to have efficient and accurate depth estimation models that can run fast on low-power mobile chipsets. In this Mobile AI challenge, the target was to develop deep learning-based single-image depth estimation solutions that show real-time performance on IoT platforms and smartphones. For this, the participants used a large-scale RGB-to-depth dataset collected with a ZED stereo camera capable of generating depth maps for objects located at up to 50 meters. The runtime of all models was evaluated on the Raspberry Pi 4 platform, where the developed solutions were able to generate VGA-resolution depth maps at up to 27 FPS while achieving high-fidelity results. All models developed in the challenge are also compatible with any Android or Linux-based mobile device; their detailed description is provided in this paper.
Federated learning (FL) is an emerging privacy-preserving machine learning (ML) paradigm. An important type of FL is cross-silo FL, which enables a small number of organizations to cooperatively train a shared model by keeping data confidential locally and aggregating weights on a central parameter server. However, in practice the central server may be vulnerable to malicious attacks or software failures. To address this issue, in this paper we propose DEFL, a novel decentralized weight aggregation framework for cross-silo FL. DEFL eliminates the central server by aggregating weights on every participating node, and only the weights of the current training round are maintained and synchronized among all nodes. We use Multi-Krum to enable aggregating the correct weights of honest nodes, and use HotStuff to ensure the consistency of the training round number and the weights among nodes. Furthermore, we theoretically analyze the Byzantine fault tolerance, convergence, and complexity of DEFL. We conduct extensive experiments on two widely used public datasets, CIFAR-10 and Sentiment140, to evaluate the performance of DEFL. The results show that, compared with state-of-the-art decentralized FL methods, DEFL defends against common threat models with minimal loss of accuracy while reducing storage overhead by 100x and network overhead by up to 12x.
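A minimal sketch of Multi-Krum-style aggregation, the filtering rule DEFL relies on to tolerate Byzantine updates (assumptions: updates arrive as flattened weight vectors, f is the assumed number of Byzantine nodes, each update is scored by the sum of squared distances to its n - f - 2 nearest neighbours, and the m lowest-scoring updates are averaged):

import numpy as np

def multi_krum(updates, f, m):
    """Select the m updates with the lowest Krum scores and average them.

    updates: list of 1-D numpy arrays (flattened model weights), length n
    f: assumed number of Byzantine participants
    m: number of updates to keep
    """
    n = len(updates)
    # pairwise squared Euclidean distances between updates
    dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    scores = []
    for i in range(n):
        # distances to all other updates, sorted ascending
        others = np.sort(np.delete(dists[i], i))
        # Krum score: sum over the n - f - 2 closest neighbours
        scores.append(np.sum(others[: n - f - 2]))
    selected = np.argsort(scores)[:m]
    return np.mean([updates[i] for i in selected], axis=0)

# usage: 10 honest-looking updates plus 2 outliers
rng = np.random.default_rng(0)
updates = [rng.normal(0, 1, 100) for _ in range(10)] + \
          [rng.normal(10, 1, 100) for _ in range(2)]
aggregated = multi_krum(updates, f=2, m=8)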
Contrastive learning has achieved remarkable success on various high-level tasks, but fewer methods have been proposed for low-level tasks. It is non-trivial to directly adopt vanilla contrastive learning techniques, designed for high-level vision, to low-level vision tasks, since the acquired global visual representations are insufficient for low-level tasks that require rich texture and contextual information. In this paper, we propose a novel contrastive learning framework for single image super-resolution (SISR). We investigate contrastive learning for SISR from two perspectives: sample construction and feature embedding. Existing methods use somewhat naive sample construction approaches (e.g., treating the low-quality input as negative samples and the ground truth as positive samples) and adopt a prior model (e.g., a pre-trained VGG model) to obtain the feature embedding rather than exploring a task-friendly one. To this end, we propose a practical contrastive learning framework for SISR that generates many informative positive and negative samples in frequency space. Instead of utilizing an additional pre-trained network, we design a simple but effective embedding network inherited from the discriminator network, which can be iteratively optimized with the primary SR network, making it task-friendly. Finally, we conduct an extensive experimental evaluation of our method, showing significant gains of up to 0.21 dB over benchmark methods and the current state-of-the-art SISR methods.
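A rough sketch of the general idea, not the paper's exact recipe (assumptions: positive and negative samples are built by blending spectra of the SR output with the ground truth and with the upsampled input via FFT, the blend ratios are illustrative, and an InfoNCE-style loss on an embedding network pulls the output toward the positive and away from the negatives):

import torch
import torch.nn.functional as F

def frequency_blend(a, b, alpha):
    """Blend two images in frequency space (illustrative sample construction):
    mix the spectra of a and b and transform back to the image domain."""
    fa, fb = torch.fft.fft2(a), torch.fft.fft2(b)
    return torch.fft.ifft2(alpha * fa + (1 - alpha) * fb).real

def contrastive_sr_loss(embed, sr, hr, lr_up, tau=0.1):
    """InfoNCE-style loss: pull the SR output's embedding toward an
    HR-dominated frequency blend and push it away from LR-dominated blends."""
    positive = frequency_blend(hr, sr, alpha=0.9)                       # close to HR
    negatives = [frequency_blend(lr_up, sr, alpha=a) for a in (0.7, 0.8, 0.9)]
    z_a = F.normalize(embed(sr), dim=-1)                                # anchor
    z_p = F.normalize(embed(positive), dim=-1)
    z_n = F.normalize(torch.cat([embed(n) for n in negatives]), dim=-1)
    pos = (z_a * z_p).sum(-1, keepdim=True) / tau                       # (B, 1)
    neg = z_a @ z_n.t() / tau                                           # (B, 3B)
    logits = torch.cat([pos, neg], dim=1)
    target = torch.zeros(logits.size(0), dtype=torch.long)              # positive at index 0
    return F.cross_entropy(logits, target)

# usage with a toy embedding (flattening) and a batch of 4 RGB patches
embed = lambda x: x.flatten(1)
sr, hr, lr_up = (torch.rand(4, 3, 32, 32) for _ in range(3))
loss = contrastive_sr_loss(embed, sr, hr, lr_up)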
Despite their superior performance on many computer vision tasks, deep convolutional neural networks are well known to require compression before deployment on resource-constrained devices. Most existing network pruning methods require laborious human effort and prohibitive computational resources, especially when the constraints change, which practically limits the application of model compression when models need to be deployed on a variety of devices. In addition, existing methods still lack theoretical guidance. In this paper, we propose an information-theory-inspired automatic model compression strategy. The principle behind our method is the information bottleneck theory, i.e., hidden representations should compress information with respect to each other. We therefore introduce the normalized Hilbert-Schmidt Independence Criterion (NHSIC) on network activations as a stable and generalized indicator of layer importance. When a certain resource constraint is given, we combine the HSIC indicator with the constraint to convert the architecture search problem into a linear programming problem with quadratic constraints. Such a problem is easily solved by a convex optimization method within a few seconds. We also provide a rigorous proof revealing that optimizing the normalized HSIC simultaneously minimizes the mutual information between different layers. Without any search process, our method achieves a better compression trade-off than state-of-the-art compression algorithms. For example, with ResNet-50, we achieve a 45.3% FLOPs reduction with 75.75% top-1 accuracy on ImageNet. Code is available at https://github.com/mac-automl/itpruner/tree/master.
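A minimal sketch of the normalized HSIC indicator between two layers' activations (assumptions: Gaussian kernels with a median-distance bandwidth and the biased estimator HSIC(X, Y) = tr(KHLH)/(n-1)^2, normalized by sqrt(HSIC(X, X) * HSIC(Y, Y)); how ITPruner folds this indicator into the constrained linear program is not reproduced here):

import numpy as np

def rbf_kernel(X, sigma=None):
    """Gaussian kernel matrix on row vectors of X; sigma defaults to the
    median pairwise distance (a common heuristic)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    if sigma is None:
        sigma = np.sqrt(np.median(d2[d2 > 0]))
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(K, L):
    """Biased HSIC estimator: tr(K H L H) / (n - 1)^2 with centering matrix H."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def nhsic(X, Y):
    """Normalized HSIC between two activation matrices (n samples x features)."""
    K, L = rbf_kernel(X), rbf_kernel(Y)
    return hsic(K, L) / np.sqrt(hsic(K, K) * hsic(L, L))

# usage: activations of two layers for the same batch of n samples
X = np.random.randn(64, 256)   # layer A activations
Y = np.random.randn(64, 512)   # layer B activations
print(nhsic(X, Y))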
Unlike traditional distributed machine learning, federated learning keeps data local for training and then aggregates the models on a server, which addresses the data security problems that may arise in traditional distributed machine learning. However, during training, the transmission of model parameters can impose a significant load on the network bandwidth, and it has been pointed out that the vast majority of model parameters are redundant during transmission. In this paper, we build on this observation to study the distribution of selected model parameters, and propose a deep hierarchical quantization compression algorithm, which further compresses the model and reduces the network load of data transmission through hierarchical quantization of the model parameters. We also adopt a dynamic sampling strategy for client selection to accelerate model convergence. Experimental results on different public datasets demonstrate the effectiveness of our algorithm.
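A minimal sketch of layer-wise (hierarchical) parameter quantization (assumptions: per-tensor uniform quantization with a bit-width chosen per layer, here simply by tensor size as a stand-in for the paper's actual hierarchy; the server dequantizes before aggregation):

import numpy as np

def quantize(tensor, bits):
    """Uniform quantization of a float tensor to the given bit-width.
    Returns integer codes plus the (scale, offset) needed to dequantize."""
    lo, hi = float(tensor.min()), float(tensor.max())
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((tensor - lo) / scale).astype(np.uint16)
    return codes, scale, lo

def dequantize(codes, scale, lo):
    return codes.astype(np.float32) * scale + lo

def compress_update(state_dict, high_bits=8, low_bits=4, size_threshold=10_000):
    """Hierarchical scheme (illustrative): large tensors get fewer bits,
    small but sensitive tensors (e.g. biases, norms) keep more bits."""
    compressed = {}
    for name, tensor in state_dict.items():
        bits = low_bits if tensor.size > size_threshold else high_bits
        compressed[name] = (*quantize(tensor, bits), bits)
    return compressed

# usage with a toy model update
update = {"conv.weight": np.random.randn(64, 64, 3, 3).astype(np.float32),
          "conv.bias": np.random.randn(64).astype(np.float32)}
packed = compress_update(update)
restored = {k: dequantize(codes, scale, lo)
            for k, (codes, scale, lo, _bits) in packed.items()}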
Detecting abrupt changes in data distribution is one of the most significant tasks in streaming data analysis. Although many unsupervised Change-Point Detection (CPD) methods have been proposed recently to identify those changes, they still suffer from missing subtle changes, poor scalability, and/or sensitivity to noise points. To meet these challenges, we are the first to generalise the CPD problem as a special case of the Change-Interval Detection (CID) problem. We then propose a CID method, named iCID, based on a recent Isolation Distributional Kernel (IDK). iCID identifies a change interval when there is a high dissimilarity score between two non-homogeneous temporally adjacent intervals. The data-dependent property and finite feature map of IDK enable iCID to efficiently identify various types of change points in data streams while tolerating noise points. Moreover, the proposed online and offline versions of iCID have the ability to optimise key parameter settings. The effectiveness and efficiency of iCID have been systematically verified on both synthetic and real-world datasets.
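A minimal sketch of interval-based change scoring (assumptions: the Isolation Distributional Kernel is replaced by a Gaussian-kernel MMD as a generic distributional dissimilarity between two adjacent sliding windows, and an interval is flagged when the score exceeds a fixed threshold; iCID's data-dependent kernel and parameter optimisation are not reproduced):

import numpy as np

def mmd(X, Y, sigma=1.0):
    """Squared Maximum Mean Discrepancy with a Gaussian kernel, used here
    as a stand-in distributional dissimilarity between two intervals."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def score_intervals(stream, window=50, threshold=0.2):
    """Slide two adjacent windows over the stream and flag change intervals
    where the dissimilarity between them is high."""
    flagged = []
    for t in range(window, len(stream) - window, window):
        left = stream[t - window:t]
        right = stream[t:t + window]
        if mmd(left, right) > threshold:
            flagged.append((t - window, t + window))
    return flagged

# usage: a 2-D stream with a mean shift halfway through
rng = np.random.default_rng(1)
stream = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
print(score_intervals(stream))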
Graphic layout designs play an essential role in visual communication. Yet handcrafting layout designs is skill-demanding, time-consuming, and non-scalable to batch production. Although generative models have emerged to make design automation no longer utopian, it remains non-trivial to customize designs that comply with designers' multimodal desires, i.e., constrained by background images and driven by foreground contents. In this study, we propose \textit{LayoutDETR}, which inherits the high quality and realism of generative modeling while reformulating content-aware requirements as a detection problem: we learn to detect, in a background image, the reasonable locations, scales, and spatial relations for multimodal elements in a layout. Experiments validate that our solution yields new state-of-the-art performance for layout generation on public benchmarks and on our newly curated ads banner dataset. For practical usage, we build our solution into a graphical system that facilitates user studies. We demonstrate that our designs attract more subjective preference than baselines by significant margins. Our code, models, dataset, graphical system, and demos are available at https://github.com/salesforce/LayoutDETR.
Semantic segmentation of UAV aerial remote sensing images provides a more efficient and convenient method than traditional surveying and mapping. To keep the model lightweight while improving accuracy, this research developed a new lightweight and efficient network for extracting ground features from UAV aerial remote sensing images, called LDMCNet. This research also develops a powerful lightweight backbone network for the proposed semantic segmentation model, called LDCNet, which we hope can become the backbone of a new generation of lightweight semantic segmentation algorithms. The proposed model uses dual multi-scale context modules, namely the Atrous Spatial Pyramid Pooling (ASPP) module and the Object Context Representation (OCR) module. In addition, this research constructs a private dataset for semantic segmentation of aerial remote sensing images from drones, containing 2431 training images, 945 validation images, and 475 test images. The proposed model performs well on this dataset, achieving a mean intersection-over-union (mIoU) of 71.12% with only 1.4M parameters and 5.48G floating-point operations (FLOPs), 7.88% higher than the baseline model. To verify the effectiveness of the proposed model, training on the public datasets "LoveDA" and "CITY-OSM" also achieved excellent results, with mIoU of 65.27% and 74.39%, respectively.
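A minimal sketch of an Atrous Spatial Pyramid Pooling (ASPP) module of the kind mentioned above (assumptions: standard DeepLab-style branches with dilation rates 1/6/12/18 plus an image-level pooling branch; the channel counts are illustrative and the OCR module is not reproduced):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """DeepLab-style ASPP: parallel atrous convolutions at several dilation
    rates plus an image-level pooling branch, concatenated and projected."""
    def __init__(self, in_ch, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList()
        for r in rates:
            k, p = (1, 0) if r == 1 else (3, r)
            self.branches.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, k, padding=p, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)))
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * (len(rates) + 1), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [b(x) for b in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=(h, w),
                               mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))

# usage: multi-scale context on a backbone feature map
out = ASPP(in_ch=320)(torch.randn(2, 320, 32, 32))  # -> (2, 256, 32, 32)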
In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks. Cross Entropy (CE) loss has been shown to be not robust to noisy labels due to its unboundedness. To alleviate this issue, existing works typically design specialized robust losses with the symmetric condition, which usually lead to the underfitting issue. In this paper, our key idea is to induce a loss bound at the logit level, thus universally enhancing the noise robustness of existing losses. Specifically, we propose logit clipping (LogitClip), which clamps the norm of the logit vector to ensure that it is upper bounded by a constant. In this manner, CE loss equipped with our LogitClip method is effectively bounded, mitigating the overfitting to examples with noisy labels. Moreover, we present theoretical analyses to certify the noise-tolerant ability of LogitClip. Extensive experiments show that LogitClip not only significantly improves the noise robustness of CE loss, but also broadly enhances the generalization performance of popular robust losses.
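A minimal sketch of logit clipping as described (assumptions: the L2 norm of each sample's logit vector is clamped to a threshold tau before the standard cross-entropy loss; tau is a hyperparameter):

import torch
import torch.nn.functional as F

def logit_clip(logits, tau=1.0, p=2):
    """Clamp the p-norm of each sample's logit vector to at most tau,
    rescaling the vector only when it exceeds the threshold."""
    norms = logits.norm(p=p, dim=-1, keepdim=True)
    scale = torch.clamp(tau / (norms + 1e-12), max=1.0)
    return logits * scale

# usage: bounded CE loss for training with noisy labels
logits = torch.randn(8, 10) * 5            # raw model outputs
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(logit_clip(logits, tau=1.0), labels)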