Autonomous vehicles use a variety of sensors and machine learning models to predict the behavior of surrounding road users. Most machine learning models in the literature focus on quantitative error metrics such as the root mean square error (RMSE) when learning and reporting the capabilities of their models. This focus on quantitative error metrics tends to overlook the more important behavioral aspects of the models, raising the question of whether these models actually predict human-like behavior. We therefore propose analyzing the outputs of machine learning models just as we would analyze human data in a conventional behavioral study. We introduce quantitative metrics to demonstrate the presence of three distinct behavioral phenomena in a naturalistic highway driving dataset: 1) the kinematics-dependence of which vehicle passes a merging point first, 2) lane changes by on-highway vehicles to accommodate on-ramp vehicles, and 3) lane changes by vehicles on the highway to avoid conflicts with a lead vehicle. We then analyze the behavior of three machine learning models using the same metrics. Even though the models' RMSE values differed, all models captured the kinematics-dependent merging behavior, but struggled to varying degrees to capture the more nuanced courtesy lane-change and highway lane-change behaviors. Moreover, an analysis of collision aversion during lane changes showed that the models struggle to capture a physical aspect of human driving: leaving a sufficient gap between vehicles. Our analysis therefore highlights that simple quantitative metrics are insufficient and that a broader behavioral perspective is needed when analyzing machine learning models for human driving prediction.
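As a concrete illustration of the gap between error metrics and behavioral metrics described above, the following is a minimal Python sketch; the toy trajectories, the merge-point threshold, and the `first_to_merge` helper are illustrative assumptions, not the paper's actual metrics.

```python
# Minimal illustrative sketch (not the paper's code): RMSE can be small while a
# behavioral metric -- does the model predict the same vehicle to reach a merge
# point first as observed? -- still disagrees with the data.
import numpy as np

def rmse(pred, true):
    """Root mean square error over predicted longitudinal positions (m)."""
    return np.sqrt(np.mean((pred - true) ** 2))

def first_to_merge(traj_a, traj_b, merge_x):
    """Return 0 if vehicle A reaches merge_x no later than vehicle B, else 1.
    Trajectories are assumed to be 1-D positions per time step."""
    t_a = np.argmax(traj_a >= merge_x)
    t_b = np.argmax(traj_b >= merge_x)
    return 0 if t_a <= t_b else 1

# Toy data: two vehicles approaching a hypothetical merge point at x = 90 m.
true_a = np.linspace(0, 100, 50)
true_b = np.linspace(-5, 98, 50)
pred_a = true_a + np.random.normal(0, 1.0, 50)   # small positional error
pred_b = true_b + np.random.normal(0, 1.0, 50)

print("RMSE vehicle A:", rmse(pred_a, true_a))
print("Merge-order agreement:",
      first_to_merge(pred_a, pred_b, 90.0) == first_to_merge(true_a, true_b, 90.0))
```

The same idea extends to the lane-change phenomena: instead of per-step error, one checks whether the predicted trajectories exhibit the discrete maneuver observed in the data.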
PointNet++ is one of the most influential neural architectures for point cloud understanding. Although the accuracy of PointNet++ has largely been surpassed by recent networks such as PointMLP and Point Transformer, we find that a large portion of the performance gain is due to improved training strategies, i.e., data augmentation and optimization techniques, and increased model sizes rather than architectural innovation. Thus, the full potential of PointNet++ has yet to be explored. In this work, we revisit the classical PointNet++ through a systematic study of model training and scaling strategies, and offer two major contributions. First, we propose a set of improved training strategies that significantly raise PointNet++ performance. For example, we show that, without any change in architecture, the overall accuracy (OA) of PointNet++ on ScanObjectNN object classification can be raised from 77.9% to 86.1%, even outperforming state-of-the-art PointMLP. Second, we introduce an inverted residual bottleneck design and separable MLPs into PointNet++ to enable efficient and effective model scaling, and propose PointNeXt, the next version of PointNets. PointNeXt can be flexibly scaled on 3D classification and segmentation tasks and outperforms state-of-the-art methods. For classification, PointNeXt reaches an overall accuracy of 87.7% on ScanObjectNN, surpassing PointMLP by 2.3% while being 10x faster in inference. For semantic segmentation, PointNeXt establishes new state-of-the-art performance with 74.9% mean IoU on S3DIS (6-fold cross-validation), outperforming the recent Point Transformer. Code and models are available at https://github.com/guochengqian/pointNext.
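The inverted residual bottleneck mentioned in the abstract can be pictured as an expand-then-project point-wise MLP with a residual connection; the PyTorch block below is a hedged illustration with assumed layer sizes, not the official PointNeXt implementation.

```python
# Hedged sketch of an inverted residual MLP block in the spirit of the abstract:
# widen the per-point feature dimension, project back, and add a skip connection.
import torch
import torch.nn as nn

class InvertedResidualMLP(nn.Module):
    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion  # inverted bottleneck: expand, then project
        self.mlp = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=1, bias=False),
            nn.BatchNorm1d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv1d(hidden, channels, kernel_size=1, bias=False),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, num_points)
        return self.act(x + self.mlp(x))  # residual connection over the point MLP

features = torch.randn(2, 64, 1024)             # 2 clouds, 64 channels, 1024 points
print(InvertedResidualMLP(64)(features).shape)  # torch.Size([2, 64, 1024])
```

Stacking such blocks, rather than simply widening the original set-abstraction layers, is the kind of model scaling the abstract refers to.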
This work addresses the problem of characterizing and understanding the decision boundaries of neural networks with piecewise linear non-linear activations. We use tropical geometry, a new development in the area of algebraic geometry, to characterize the decision boundaries of simple networks of the form (Affine, ReLU, Affine). Our main finding is that the decision boundary is a subset of a tropical hypersurface, which is intimately related to a polytope formed by the convex hull of two zonotopes. The generators of these zonotopes are functions of the network parameters. This geometric characterization provides new perspectives on three tasks. (i) We propose a new tropical perspective on the lottery ticket hypothesis, in which we examine the effect of different initializations on the tropical geometric representation of a network's decision boundary. (ii) Moreover, we propose new tropical-geometry-based optimization reformulations that directly influence the network's decision boundary for the task of network pruning. (iii) Finally, we discuss a reformulation of adversarial attack generation in the tropical sense. We show that an adversary can be constructed in this new tropical setting by perturbing a specific set of decision boundaries through perturbing a set of network parameters.
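For readers unfamiliar with the max-plus formalism, the sketch below paraphrases why a decision boundary of an (Affine, ReLU, Affine) network sits inside a tropical hypersurface; the notation (A, B, c, d, a_i, b_i, H, Q) is chosen here for illustration and is a simplified paraphrase, not the paper's exact statement.

```latex
% Hedged sketch (notation assumed here, not the paper's): for a two-output
% network f(x) = B max(Ax + c, 0) + d, each output -- and hence the difference
% f_1 - f_2 -- is a difference of convex piecewise-linear functions, i.e. of
% tropical polynomials over the max-plus semiring (+ is "max", * is "+"):
\[
  f_1(x) - f_2(x) \;=\; H(x) - Q(x),
  \qquad
  H(x) \;=\; \max_i \bigl(a_i^{\top} x + b_i\bigr),
\]
% with Q(x) of the same max-of-affine form. The decision boundary
% {x : f_1(x) = f_2(x)} = {x : H(x) = Q(x)} then lies in the tropical
% hypersurface of the tropical polynomial combining the terms of H and Q,
% i.e. the set of points where the overall maximum is attained by at least
% two affine terms; this hypersurface is the object the abstract relates to
% the convex hull of the two zonotopes.
```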
Deep neural networks (DNNs) are vulnerable to a class of attacks called "backdoor attacks", which create an association between a backdoor trigger and a target label the attacker is interested in exploiting. A backdoored DNN performs well on clean test images, yet persistently predicts an attacker-defined label for any sample in the presence of the backdoor trigger. Although backdoor attacks have been extensively studied in the image domain, there are very few works that explore such attacks in the video domain, and they tend to conclude that image backdoor attacks are less effective in the video domain. In this work, we revisit the traditional backdoor threat model and incorporate additional video-related aspects to that model. We show that poisoned-label image backdoor attacks could be extended temporally in two ways, statically and dynamically, leading to highly effective attacks in the video domain. In addition, we explore natural video backdoors to highlight the seriousness of this vulnerability in the video domain. And, for the first time, we study multi-modal (audiovisual) backdoor attacks against video action recognition models, where we show that attacking a single modality is enough for achieving a high attack success rate.
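The "static" temporal extension described above can be pictured as stamping one image trigger onto every frame of a poisoned clip; the NumPy sketch below is an illustration with an assumed patch size, location, and target label, not the authors' attack code.

```python
# Illustrative sketch (not the paper's code) of statically extending an image
# backdoor trigger in time: the same small patch is stamped onto every frame of
# a poisoned clip before relabeling the clip to the attacker's target class.
import numpy as np

def poison_clip_static(clip: np.ndarray, trigger: np.ndarray,
                       y: int = 0, x: int = 0) -> np.ndarray:
    """clip: (T, H, W, C) video tensor; trigger: (h, w, C) patch."""
    poisoned = clip.copy()
    h, w, _ = trigger.shape
    poisoned[:, y:y + h, x:x + w, :] = trigger  # same patch on all T frames
    return poisoned

clip = np.random.rand(16, 112, 112, 3).astype(np.float32)   # toy 16-frame clip
trigger = np.ones((8, 8, 3), dtype=np.float32)              # bright square patch
poisoned = poison_clip_static(clip, trigger)
target_label = 7  # attacker-chosen label assigned to every poisoned clip
```

A "dynamic" variant would instead move or modulate the patch from frame to frame, e.g. by shifting the (y, x) offset per time step.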
In recent years, social media has been widely explored as a potential source of communication and information in disasters and emergency situations. Several interesting works and case studies of disaster analytics exploring different aspects of natural disasters have already been conducted. Along with this great potential, disaster analytics comes with several challenges, mainly due to the nature of social media content. In this paper, we explore one such challenge and propose a text classification framework to deal with noisy Twitter data. More specifically, we employ several transformers, both individually and in combination, to differentiate between relevant and non-relevant Twitter posts, achieving a highest F1-score of 0.87.
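As a hedged sketch of the kind of transformer-based relevance filtering described, the snippet below scores two example tweets with a generic Hugging Face checkpoint; the backbone name, label mapping, and texts are placeholders, and the classification head would still need fine-tuning on labeled disaster tweets to give meaningful predictions.

```python
# Hedged sketch: binary relevance classification of tweets with a transformer.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

checkpoint = "bert-base-uncased"  # assumed backbone; the paper ensembles several
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
# Note: this classification head is freshly initialized and untrained here.

tweets = ["Bridge collapsed downtown, avoid the area!", "Loving this sunny weekend"]
batch = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits
preds = logits.argmax(dim=-1)  # assumed mapping: 1 = disaster-relevant, 0 = not
print(preds.tolist())
```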
Pneumonia, a respiratory infection brought on by bacteria or viruses, affects a large number of people, especially in developing and impoverished countries where high levels of pollution, unclean living conditions, and overcrowding are frequently observed, along with insufficient medical infrastructure. Pneumonia can lead to pleural effusion, a condition in which fluid fills the lung and complicates breathing. Early detection of pneumonia is essential for ensuring curative care and boosting survival rates. The approach most commonly used to diagnose pneumonia is chest X-ray imaging. The purpose of this work is to develop a method for the automatic diagnosis of bacterial and viral pneumonia in digital X-ray images. This article first presents the authors' technique and then gives a comprehensive report on recent developments in the field of reliable pneumonia diagnosis. In this study, we tuned state-of-the-art deep convolutional neural networks to classify pneumonia from chest X-ray images and tested their performance. Deep learning architectures are compared empirically: VGG19, ResNet152V2, ResNeXt101, SEResNet152, MobileNetV2, and DenseNet201. The experimental data consist of two groups, sick and healthy X-ray images. To take appropriate action against the disease as soon as possible, rapid identification models are preferred. DenseNet201 showed no overfitting or performance degradation in our experiments, and its accuracy tends to increase as the number of epochs increases. Further, DenseNet201 achieves state-of-the-art performance with a significantly smaller number of parameters and within a reasonable computing time. This architecture outperforms the competition in terms of testing accuracy, scoring 95%. Each architecture was trained using Keras with Theano as the backend.
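A hedged Keras sketch of DenseNet201 transfer learning of the sort described is shown below; the input resolution, classifier head, and optimizer settings are assumptions rather than the authors' exact configuration (which used a Theano backend).

```python
# Hedged sketch: DenseNet201 feature extractor with a small classification head
# for two classes of chest X-rays (pneumonia vs. normal).
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.DenseNet201(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # start by training only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(2, activation="softmax"),  # assumed 2-class head
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=30)  # datasets not shown
```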
Diabetic Retinopathy (DR) is a leading cause of vision loss in the world, and early DR detection is necessary to prevent vision loss and support appropriate treatment. In this work, we leverage interactive machine learning and introduce a joint learning framework, termed DRG-Net, to effectively learn both disease grading and multi-lesion segmentation. Our DRG-Net consists of two modules: (i) DRG-AI-System, which classifies DR grading, localizes lesion areas, and provides visual explanations; and (ii) DRG-Expert-Interaction, which receives feedback from expert users and improves the DRG-AI-System. To deal with sparse data, we utilize transfer learning mechanisms to extract invariant feature representations using the Wasserstein distance and adversarial-learning-based entropy minimization. Besides, we propose a novel attention strategy over both low- and high-level features to automatically select the most significant lesion information and provide explainable properties. In terms of human interaction, we further develop DRG-Net as a tool that enables expert users to correct the system's predictions, which may then be used to update the system as a whole. Moreover, thanks to the attention mechanism and the loss-function constraints between lesion features and classification features, our approach remains robust given a certain level of noise in user feedback. We have benchmarked DRG-Net on the two largest DR datasets, i.e., IDRID and FGADR, and compared it to various state-of-the-art deep learning networks. In addition to outperforming other SOTA approaches, DRG-Net is effectively updated using user feedback, even in a weakly supervised manner.
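The attention strategy over low- and high-level features could look roughly like the gating sketch below; the module design and tensor shapes are assumptions for illustration and do not reproduce DRG-Net's actual layers.

```python
# Illustrative attention gate: high-level context re-weights low-level lesion
# features so that the most significant spatial locations are emphasized.
import torch
import torch.nn as nn

class LesionAttentionGate(nn.Module):
    def __init__(self, low_ch: int, high_ch: int):
        super().__init__()
        self.project = nn.Conv2d(high_ch, low_ch, kernel_size=1)
        self.gate = nn.Sequential(nn.Conv2d(low_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, low, high):
        # Upsample projected high-level context to the low-level resolution and
        # build a per-pixel attention map over the low-level features.
        high_up = nn.functional.interpolate(self.project(high), size=low.shape[-2:],
                                            mode="bilinear", align_corners=False)
        attn = self.gate(low + high_up)   # (B, 1, H, W) in [0, 1]
        return low * attn                 # attended low-level lesion features

low = torch.randn(1, 64, 128, 128)    # assumed low-level feature map
high = torch.randn(1, 256, 32, 32)    # assumed high-level feature map
print(LesionAttentionGate(64, 256)(low, high).shape)  # torch.Size([1, 64, 128, 128])
```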
The ability to distinguish between different movie scenes is critical for understanding the storyline of a movie. However, accurately detecting movie scenes is often challenging as it requires the ability to reason over very long movie segments. This is in contrast to most existing video recognition models, which are typically designed for short-range video analysis. This work proposes a State-Space Transformer model that can efficiently capture dependencies in long movie videos for accurate movie scene detection. Our model, dubbed TranS4mer, is built using a novel S4A building block, which combines the strengths of structured state-space sequence (S4) and self-attention (A) layers. Given a sequence of frames divided into movie shots (uninterrupted periods where the camera position does not change), the S4A block first applies self-attention to capture short-range intra-shot dependencies. Afterward, the state-space operation in the S4A block is used to aggregate long-range inter-shot cues. The final TranS4mer model, which can be trained end-to-end, is obtained by stacking the S4A blocks one after the other multiple times. Our proposed TranS4mer outperforms all prior methods in three movie scene detection datasets, including MovieNet, BBC, and OVSD, while also being $2\times$ faster and requiring $3\times$ less GPU memory than standard Transformer models. We will release our code and models.
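Structurally, the S4A block alternates intra-shot self-attention with a long-range sequence operation across shots; the sketch below mirrors that layout with a GRU standing in for the structured state-space (S4) layer, purely for illustration, and all dimensions are assumptions.

```python
# Very rough sketch of the S4A layout described in the abstract: short-range
# self-attention within each shot, then a long-range sequence operation across
# shots (a GRU is used here only as a stand-in for the S4 layer).
import torch
import torch.nn as nn

class S4ABlockSketch(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.intra_shot_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter_shot_seq = nn.GRU(dim, dim, batch_first=True)  # stand-in for S4
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):  # x: (batch, shots, frames_per_shot, dim)
        b, s, f, d = x.shape
        frames = x.reshape(b * s, f, d)                    # attend within each shot
        attended, _ = self.intra_shot_attn(frames, frames, frames)
        x = self.norm1(x + attended.reshape(b, s, f, d))
        shots = x.mean(dim=2)                              # one token per shot
        mixed, _ = self.inter_shot_seq(shots)              # long-range across shots
        return self.norm2(x + mixed.unsqueeze(2))          # broadcast back to frames

print(S4ABlockSketch()(torch.randn(2, 8, 4, 256)).shape)  # torch.Size([2, 8, 4, 256])
```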
An expansion of aberrant brain cells is referred to as a brain tumor. The brain's architecture is extremely intricate, with several regions controlling various nervous system processes. A brain tumor can develop in any portion of the brain or skull, including the brain's protective coating, the base of the skull, the brainstem, the sinuses, the nasal cavity, and many other places. Over the past ten years, numerous developments in the field of computer-aided brain tumor diagnosis have been made. Recently, instance segmentation has attracted a lot of interest in numerous computer vision applications. It seeks to assign distinct IDs to different scene objects, even if they are members of the same class. Typically, a two-stage pipeline is used to perform instance segmentation. This study presents brain tumor segmentation using YOLOv5, which takes the dataset as images with corresponding annotation text files. You Only Look Once (YOLO) is a popular and widely used algorithm, well known for its object detection capabilities; experts have published several recent versions, including YOLOv2, v3, v4, and v5. Early brain tumor detection is one of the most important jobs that neurologists and radiologists have. However, it can be difficult and error-prone to manually identify and segment brain tumors from Magnetic Resonance Imaging (MRI) data, so an automated brain tumor detection system is necessary for making an early diagnosis. The model in this paper has three classes: meningioma, pituitary, and glioma. The results show that our model achieves competitive accuracy with reasonable runtime on an M2 10-core GPU.
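In practice, using a fine-tuned YOLOv5 checkpoint for tumor detection looks roughly like the snippet below; the weights file, image path, and class mapping are placeholders, and training itself is normally done with the YOLOv5 repository's train.py on a dataset YAML listing the three classes (meningioma, pituitary, glioma).

```python
# Hedged sketch: loading a custom (fine-tuned) YOLOv5 checkpoint via torch.hub
# and running it on an MRI slice exported as an image.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best_tumor.pt")

results = model("mri_slice.jpg")   # placeholder image path
results.print()                    # per-class detections with confidences
boxes = results.xyxy[0]            # tensor rows: [x1, y1, x2, y2, conf, class]
```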
The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community in survey papers is vital to keep track of the current state of the art and the challenges that must be tackled in the future. This paper aims to provide a comprehensive review of autonomous mobile robots, covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for presenting a survey paper is twofold. First, the field of autonomous navigation evolves quickly, so regularly writing survey papers is crucial to keep the research community aware of its current status. Second, deep learning methods have revolutionized many fields, including autonomous navigation; it is therefore necessary to give an appropriate treatment of the role of deep learning in autonomous navigation, which this paper also covers. Future works and research gaps are also discussed.