Digital art and non-fungible tokens (NFTs) have gained unprecedented popularity. An NFT is a cryptographic asset stored on a blockchain network that represents an unforgeable digital certificate of ownership. NFTs can be incorporated into smart contracts, allowing the owner to benefit from a percentage of future sales. Although digital art producers can benefit greatly from NFTs, producing the art is time-consuming. This paper therefore explores the possibility of using generative adversarial networks (GANs) to automatically generate digital art. GANs are deep learning architectures that are widely and effectively used to synthesize audio, image, and video content, yet their application to NFT art has been limited. In this paper, a GAN-based architecture is implemented and evaluated for digital art generation. The results of a qualitative case study indicate that the generated artworks are comparable to real samples.
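The abstract does not specify which GAN variant was used; the following is a minimal PyTorch sketch of the standard adversarial training step that any such architecture builds on. The latent dimension, network sizes, and flattened image shape are illustrative assumptions, not the paper's configuration.

```python
# Minimal GAN training step (illustrative sketch, not the paper's exact architecture).
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64 * 3  # assumed sizes for illustration

G = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                  nn.Linear(512, img_dim), nn.Tanh())          # generator
D = nn.Sequential(nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
                  nn.Linear(512, 1))                            # discriminator (logits)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

def train_step(real_imgs):
    """One adversarial update: discriminator on real/fake, then generator."""
    b = real_imgs.size(0)
    fake_imgs = G(torch.randn(b, latent_dim))

    # Discriminator: push real images toward label 1 and generated images toward 0.
    d_loss = bce(D(real_imgs), torch.ones(b, 1)) + \
             bce(D(fake_imgs.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to fool the discriminator (generated images toward label 1).
    g_loss = bce(D(fake_imgs), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example usage with a random batch standing in for real artworks scaled to [-1, 1].
d_loss, g_loss = train_step(torch.rand(16, img_dim) * 2 - 1)
```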
Due to the lack of automatic annotation systems, most urban establishments in developing cities remain digitally unlabeled. Consequently, location and trajectory services (e.g., Google Maps, Uber) are still inadequate in such cities. Accurate signboard detection in natural scene images is the most important task for retrieving error-free information from the streets of such cities. However, developing an accurate signboard localization system remains an unsolved challenge because signboards have diverse appearances that include textual images and confusing backgrounds. We propose a novel object detection approach, suitable for such cities, that detects signboards automatically. We use Faster R-CNN based localization, incorporating two specialized pre-processing methods and a run-time efficient hyperparameter value-selection algorithm. We adopt an incremental approach, arriving at the final proposed method through detailed evaluation and comparison against baselines on our constructed SVSO (Street View Signboard Object) signboard dataset, which contains natural scene images from six developing countries. We demonstrate state-of-the-art performance of the proposed method on both the SVSO dataset and the Open Images dataset. The proposed method detects signboards accurately (even when an image contains multiple signboards of various shapes and colors against noisy backgrounds), achieving a 0.90 mAP (mean average precision) score on the SVSO independent test set. Our implementation is available at: https://github.com/sadrultoaha/signboard-detection
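The specialized pre-processing steps and the hyperparameter-selection algorithm are not detailed in the abstract. The sketch below only illustrates the underlying Faster R-CNN localization component using torchvision, re-headed for a single "signboard" class; the class count and score threshold are assumptions for illustration.

```python
# Illustrative Faster R-CNN setup for single-class signboard detection (torchvision).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Pretrained detector, re-headed for 2 classes: background + signboard (assumed setup).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
model.eval()  # a real system would first fine-tune this head on signboard annotations

def detect_signboards(image, score_thresh=0.5):
    """Return boxes and scores above a threshold for one RGB tensor (C, H, W) in [0, 1]."""
    with torch.no_grad():
        out = model([image])[0]
    keep = out["scores"] >= score_thresh
    return out["boxes"][keep], out["scores"][keep]

# Example on a random image; real use would load a street-view photograph.
boxes, scores = detect_signboards(torch.rand(3, 480, 640))
```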
The latent space of autoencoders has been improved for clustering image data by jointly learning a t-distributed embedding with a clustering algorithm inspired by the neighborhood embedding concept proposed for data visualization. However, multivariate tabular data pose different challenges in representation learning than image data, where traditional machine learning is often superior to deep tabular data learning. In this paper, we address the challenges of learning tabular data in contrast to image data and present a novel Gaussian Cluster Embedding in Autoencoder Latent Space (G-CEALS) algorithm by replacing t-distributions with multivariate Gaussian clusters. Unlike current methods, the proposed approach independently defines the Gaussian embedding and the target cluster distribution to accommodate any clustering algorithm in representation learning. A trained G-CEALS model extracts a quality embedding for unseen test data. Based on the embedding clustering accuracy, the average rank of the proposed G-CEALS method is 1.4 (0.7), which is superior to all eight baseline clustering and cluster embedding methods on seven tabular data sets. This paper shows one of the first algorithms to jointly learn embedding and clustering to improve multivariate tabular data representation in downstream clustering.
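The exact G-CEALS objective is not given in the abstract; the following PyTorch sketch only illustrates the general idea of jointly training an autoencoder with a Gaussian cluster embedding against an independently defined target distribution. The isotropic unit-variance Gaussians, the KMeans-derived centers, and the sharpened target are all assumptions for illustration, not the paper's formulation.

```python
# Hedged sketch of joint reconstruction + Gaussian cluster-embedding training for tabular data.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

d_in, d_z, k = 20, 4, 3  # assumed feature, latent, and cluster counts
enc = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, d_z))
dec = nn.Sequential(nn.Linear(d_z, 32), nn.ReLU(), nn.Linear(32, d_in))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.randn(256, d_in)  # stand-in tabular batch
with torch.no_grad():
    # Target cluster centers defined by a separate clustering algorithm (KMeans here).
    centers = torch.tensor(KMeans(k, n_init=10).fit(enc(x).numpy()).cluster_centers_,
                           dtype=torch.float32)

for _ in range(50):
    z = enc(x)
    # Soft assignments under isotropic Gaussian clusters (unit variance assumed).
    q = torch.softmax(-0.5 * torch.cdist(z, centers) ** 2, dim=1)
    # Sharpened target distribution, defined independently of q's gradient.
    p = q.detach() ** 2 / q.detach().sum(0, keepdim=True)
    p = p / p.sum(1, keepdim=True)
    loss = nn.functional.mse_loss(dec(z), x) + \
           nn.functional.kl_div(q.log(), p, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
```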
Deep learning methods in the literature are invariably benchmarked on image data sets and then assumed to work on all data problems. Unfortunately, architectures designed for image learning are often not ready or optimal for non-image data without considering data-specific learning requirements. In this paper, we take a data-centric view to argue that deep image embedding clustering methods are not equally effective on heterogeneous tabular data sets. This paper performs one of the first studies on deep embedding clustering of seven tabular data sets using six state-of-the-art baseline methods proposed for image data sets. Our results reveal that the traditional clustering of tabular data ranks second out of eight methods and is superior to most deep embedding clustering baselines. Our observation is in line with the recent literature that traditional machine learning of tabular data is still a competitive approach against deep learning. Although surprising to many deep learning researchers, traditional clustering methods can be competitive baselines for tabular data, and outperforming these baselines remains a challenge for deep embedding clustering. Therefore, deep learning methods for image learning may not be fair or suitable baselines for tabular data without considering data-specific contrasts and learning requirements.
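As a rough illustration of the kind of comparison this study performs, the sketch below clusters a stand-in tabular dataset both on raw features and on a learned embedding and compares the results; the PCA embedding and ARI metric are placeholders, not the paper's baselines or evaluation protocol.

```python
# Sketch: traditional clustering on raw tabular features vs. clustering on an embedding.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.metrics import adjusted_rand_score

X, y = load_iris(return_X_y=True)            # stand-in tabular data set
k = len(np.unique(y))

raw_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
emb = PCA(n_components=2).fit_transform(X)   # placeholder for a deep embedding
emb_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(emb)

print("raw-feature ARI:", adjusted_rand_score(y, raw_labels))
print("embedding  ARI:", adjusted_rand_score(y, emb_labels))
```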
Semi-supervised learning (SSL) has made significant strides in the field of remote sensing. Large labeled datasets for SSL methods are uncommon, and manually labeling datasets is expensive and time-consuming. Furthermore, accurately identifying remote sensing satellite images is more complicated than it is for conventional images. Class-imbalanced datasets are another prevalent phenomenon, and models trained on them become biased towards the majority classes. This becomes a critical issue, leading to subpar SSL model performance. We aim to address the issue of labeling unlabeled data and to solve the model bias problem caused by imbalanced datasets while achieving better accuracy. To accomplish this, we create "artificial" labels and train a model to reach reasonable accuracy. We iteratively redistribute the classes through resampling using a distribution alignment technique. We use a variety of class-imbalanced satellite image datasets: EuroSAT, UCM, and WHU-RS19. On the balanced UCM dataset, our method outperforms the previous methods MSMatch and FixMatch by 1.21% and 0.6%, respectively. On the imbalanced EuroSAT dataset, our method outperforms MSMatch and FixMatch by 1.08% and 1%, respectively. Our approach significantly lessens the requirement for labeled data, consistently outperforms alternative approaches, and resolves the issue of model bias caused by class imbalance in datasets.
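The abstract does not spell out the exact algorithm; the sketch below illustrates the general FixMatch-style mechanism it builds on: confidence-thresholded "artificial" (pseudo) labels combined with distribution alignment toward a balanced class distribution. The threshold, momentum, and class count are assumptions.

```python
# Hedged sketch of pseudo-labelling with distribution alignment.
import torch
import torch.nn.functional as F

num_classes, tau = 10, 0.95                                    # assumed class count and threshold
target_dist = torch.full((num_classes,), 1.0 / num_classes)    # desired (balanced) distribution
running_dist = torch.full((num_classes,), 1.0 / num_classes)   # running mean of model predictions

def pseudo_labels(logits_weak):
    """Align predictions toward the target class distribution, then keep confident ones."""
    global running_dist
    p = F.softmax(logits_weak, dim=1)
    running_dist = 0.9 * running_dist + 0.1 * p.mean(0).detach()
    p_aligned = p * (target_dist / (running_dist + 1e-8))
    p_aligned = p_aligned / p_aligned.sum(1, keepdim=True)
    conf, labels = p_aligned.max(dim=1)
    return labels, conf >= tau          # only confident "artificial" labels are used

# The unsupervised loss then applies these labels to strongly augmented views.
logits_weak, logits_strong = torch.randn(32, num_classes), torch.randn(32, num_classes)
labels, mask = pseudo_labels(logits_weak)
loss_u = (F.cross_entropy(logits_strong, labels, reduction="none") * mask).mean()
```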
To ensure proper knowledge representation of the kitchen environment, it is vital for kitchen robots to recognize the states of the food items that are being cooked. Although the domain of object detection and recognition has been extensively studied, the task of object state classification has remained relatively unexplored. The high intra-class similarity of ingredients during different states of cooking makes the task even more challenging. Researchers have proposed adopting Deep Learning based strategies in recent times, however, they are yet to achieve high performance. In this study, we utilized the self-attention mechanism of the Vision Transformer (ViT) architecture for the Cooking State Recognition task. The proposed approach encapsulates the globally salient features from images, while also exploiting the weights learned from a larger dataset. This global attention allows the model to withstand the similarities between samples of different cooking objects, while the employment of transfer learning helps to overcome the lack of inductive bias by utilizing pretrained weights. To improve recognition accuracy, several augmentation techniques have been employed as well. Evaluation of our proposed framework on the `Cooking State Recognition Challenge Dataset' has achieved an accuracy of 94.3%, which significantly outperforms the state-of-the-art.
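A hedged sketch of the overall recipe described, fine-tuning a pretrained Vision Transformer with data augmentation for state classification, is given below using torchvision; the number of cooking states, the augmentation choices, and the optimizer settings are assumptions, not the paper's exact setup.

```python
# Illustrative ViT fine-tuning setup for a state-classification task.
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms

num_states = 7  # assumed number of cooking states

model = torchvision.models.vit_b_16(weights="DEFAULT")                    # pretrained ViT backbone
model.heads.head = nn.Linear(model.heads.head.in_features, num_states)    # new classification head

augment = transforms.Compose([                 # example augmentation pipeline (assumed choices)
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.2, 0.2, 0.2),
    transforms.ToTensor(),
])

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One supervised fine-tuning step on an augmented batch."""
    loss = criterion(model(images), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example with random tensors standing in for augmented cooking-state images.
loss = train_step(torch.rand(4, 3, 224, 224), torch.randint(0, num_states, (4,)))
```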
The emergence of deep neural networks (DNNs) has once again drawn enormous attention to artificial neural networks (ANNs). They have become the state-of-the-art models and have won various machine learning challenges. Although these networks are inspired by the brain, they lack biological plausibility and differ structurally from it. Spiking neural networks (SNNs) have been around for a long time and have been investigated as a means to understand the dynamics of the brain. However, their application to real-world, complicated machine learning tasks was limited; recently, they have shown great potential in solving such tasks. Owing to their energy efficiency and temporal dynamics, their future development holds many promises. In this work, we review the structures and performance of SNNs on image classification tasks. The comparisons illustrate that these networks show great capabilities for more complicated problems. Furthermore, the simple learning rules developed for SNNs, such as STDP and R-STDP, can be a potential alternative to the backpropagation algorithm used in DNNs.
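To make the mentioned learning rule concrete, the snippet below shows the classic pair-based STDP weight update, where the sign and magnitude of the change depend on the relative timing of pre- and post-synaptic spikes; the amplitudes and time constants are typical illustrative values, not taken from any specific reviewed model.

```python
# Minimal pair-based STDP weight update (illustrative; parameters are assumptions).
import numpy as np

A_plus, A_minus = 0.01, 0.012    # potentiation / depression amplitudes
tau_plus = tau_minus = 20.0      # time constants in ms

def stdp_dw(t_pre, t_post):
    """Weight change from one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post -> potentiation (LTP)
        return A_plus * np.exp(-dt / tau_plus)
    else:         # post fires before pre -> depression (LTD)
        return -A_minus * np.exp(dt / tau_minus)

# Example: a pre-synaptic spike at 10 ms followed by a post-synaptic spike at 15 ms.
print(stdp_dw(10.0, 15.0))   # positive -> synapse strengthened
print(stdp_dw(15.0, 10.0))   # negative -> synapse weakened
```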
User-specific future activity prediction in the healthcare domain based on previous activities can drastically improve the services provided by nurses. It is challenging because, unlike other domains, activities in healthcare involve both nurses and patients, and they also vary from hour to hour. In this paper, we employ various data processing techniques to organize and modify the data structure, together with an LSTM-based multi-label classifier trained with a novel 2-stage approach (user-agnostic pre-training and user-specific fine-tuning). Our experiment achieves a validation accuracy of 31.58%, a precision of 57.94%, a recall of 68.31%, and an F1 score of 60.38%. We conclude that proper data pre-processing and the 2-stage training process result in better performance. This experiment is part of the "Fourth Nurse Care Activity Recognition Challenge", in which we participated as team "Not A Fan of Local Minima".
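A minimal sketch of the described pipeline, an LSTM-based multi-label classifier pre-trained on pooled (user-agnostic) data and then fine-tuned on a single user's data, is shown below; the feature count, label count, sequence length, and learning rates are assumptions for illustration.

```python
# Hedged sketch of 2-stage training for an LSTM multi-label activity classifier.
import torch
import torch.nn as nn

n_features, n_activities = 12, 6   # assumed features per time step and activity labels

class ActivityLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(n_features, 64, batch_first=True)
        self.head = nn.Linear(64, n_activities)

    def forward(self, x):            # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])      # one logit per activity label

model = ActivityLSTM()
criterion = nn.BCEWithLogitsLoss()   # independent binary decision per activity

def run_stage(data, labels, lr, epochs):
    """Train the shared model on one stage's data with the given learning rate."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = criterion(model(data), labels)
        opt.zero_grad(); loss.backward(); opt.step()

# Stage 1: user-agnostic pre-training on pooled data (random stand-in tensors here).
run_stage(torch.randn(64, 24, n_features), torch.randint(0, 2, (64, n_activities)).float(), 1e-3, 5)
# Stage 2: user-specific fine-tuning on one target user's data with a smaller learning rate.
run_stage(torch.randn(8, 24, n_features), torch.randint(0, 2, (8, n_activities)).float(), 1e-4, 5)
```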
Skeleton-based Motion Capture (MoCap) systems have long been widely used in the game and film industries to mimic complex human actions. MoCap data has also proved its effectiveness in human activity recognition tasks; however, this remains quite a challenging task for smaller datasets, and the lack of such data for industrial activities adds further difficulty. In this work, we propose an ensemble-based machine learning methodology targeted at working better on MoCap datasets. The experiments were performed on the MoCap data given in the Bento Packaging Activity Recognition Challenge 2021 (Bento is a Japanese word for a lunch box). After first processing the raw MoCap data, the proposed ensemble model achieves an astonishing accuracy of 98% on 10-fold cross-validation and 82% on leave-one-out cross-validation.
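The base learners of the ensemble are not listed in the abstract; the sketch below shows a generic soft-voting ensemble evaluated with 10-fold cross-validation in scikit-learn, with random arrays standing in for features extracted from the MoCap recordings.

```python
# Hedged sketch of an ensemble classifier evaluated with 10-fold cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X = np.random.rand(200, 30)          # stand-in features from windowed MoCap joint data
y = np.random.randint(0, 5, 200)     # stand-in activity labels

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200)),
                ("svm", SVC(probability=True)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")                    # average the predicted class probabilities

scores = cross_val_score(ensemble, X, y, cv=10)
print("10-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```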
Many underwater tasks, such as cable and wreckage inspection and search-and-rescue, benefit from robust human-robot interaction (HRI) capabilities. With recent advances in vision-based underwater HRI methods, autonomous underwater vehicles (AUVs) can communicate with their human partners even during a mission. However, these interactions usually require active participation, especially by the human (e.g., the human must keep looking at the robot during the interaction). Therefore, an AUV must know when to initiate an interaction with its human partner, i.e., whether the human is paying attention to the AUV. In this paper, we present a diver attention estimation framework that allows an AUV to autonomously detect a diver's attentiveness and then, if needed, navigate and reposition itself relative to the diver to initiate an interaction. The core element of the framework is a deep neural network (called DATT-Net) that exploits the geometric relationships among 10 facial keypoints of a diver to determine their head orientation. Our baseline experimental evaluation (on unseen data) shows that the proposed DATT-Net architecture can determine a human diver's attentiveness with promising accuracy. Our real-world experiments also confirm the efficacy of DATT-Net, which runs inference in real time and enables the AUV to position itself for AUV-diver interaction.
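DATT-Net's architecture is not described in the abstract beyond its use of geometric relationships among 10 facial keypoints; the sketch below is therefore only a hypothetical illustration that turns pairwise keypoint distances into an attention decision with a small MLP, with the binary attentive/not-attentive output being an assumption.

```python
# Hypothetical sketch: diver attention from geometric relations among 10 facial keypoints.
import torch
import torch.nn as nn

def pairwise_features(keypoints):
    """keypoints: (batch, 10, 2) pixel coordinates -> (batch, 45) pairwise distances."""
    diffs = keypoints.unsqueeze(2) - keypoints.unsqueeze(1)      # (batch, 10, 10, 2)
    dists = diffs.norm(dim=-1)                                   # (batch, 10, 10)
    idx = torch.triu_indices(10, 10, offset=1)                   # keep each unique pair once
    return dists[:, idx[0], idx[1]]

head = nn.Sequential(nn.Linear(45, 64), nn.ReLU(),
                     nn.Linear(64, 2))        # logits: attentive vs. not attentive (assumed)

# Example: 10 detected facial keypoints for a batch of 4 diver images (random stand-ins).
kp = torch.rand(4, 10, 2) * 224
logits = head(pairwise_features(kp))
attention = logits.softmax(dim=1)[:, 0]        # probability the diver is attending to the AUV
```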