Recommender systems are a subclass of machine learning systems that employ sophisticated information filtering strategies to reduce search time and suggest the most relevant items to any particular user. Hybrid recommender systems combine multiple recommendation strategies in different ways to benefit from their complementary strengths. Some hybrid recommender systems have combined collaborative filtering and content-based approaches to build systems that are more robust. In this paper, we propose a hybrid recommender system that combines Alternating Least Squares (ALS)-based collaborative filtering with deep learning to enhance recommendation performance and overcome the limitations associated with the collaborative filtering approach, especially concerning its cold-start problem. In essence, we use the output of the ALS collaborative filtering to influence the recommendations of a deep neural network (DNN), which combines characteristic, contextual, structural, and sequential information, within a big-data processing framework. We have conducted several experiments to test the efficacy of the proposed hybrid architecture in recommending smartphones to prospective customers and compared its performance with that of other open-source recommenders. The results show that the proposed system outperforms several existing hybrid recommender systems.
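The ALS stage of such a hybrid can be illustrated with a minimal matrix-factorization sketch in plain NumPy. This is a toy sketch on hypothetical data; the system described above runs ALS inside a big-data processing framework, which is not reproduced here.

```python
import numpy as np

def als(R, mask, k=2, reg=0.1, iters=20):
    """Alternating least squares on a rating matrix R.
    mask[u, i] = 1 where a rating is observed."""
    n_users, n_items = R.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    I = reg * np.eye(k)
    for _ in range(iters):
        for u in range(n_users):      # fix V, solve for each user factor
            idx = mask[u] == 1
            U[u] = np.linalg.solve(V[idx].T @ V[idx] + I, V[idx].T @ R[u, idx])
        for i in range(n_items):      # fix U, solve for each item factor
            idx = mask[:, i] == 1
            V[i] = np.linalg.solve(U[idx].T @ U[idx] + I, U[idx].T @ R[idx, i])
    return U, V

# Tiny observed rating matrix (0 = unobserved)
R = np.array([[5., 4., 0.], [4., 0., 1.], [0., 1., 5.]])
mask = (R > 0).astype(int)
U, V = als(R, mask)
pred = U @ V.T  # predicted scores for every (user, item) pair
```

The predicted scores in `pred` are the kind of collaborative-filtering output that could then be passed to a DNN alongside contextual and sequential features.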
The existing methods for video anomaly detection mostly utilize videos containing identifiable facial and appearance-based features. The use of videos with identifiable faces raises privacy concerns, especially when used in a hospital or community-based setting. Appearance-based features can also be sensitive to pixel-based noise, straining the anomaly detection methods to model the changes in the background and making it difficult to focus on the actions of humans in the foreground. Structural information in the form of skeletons describing the human motion in the videos is privacy-protecting and can overcome some of the problems posed by appearance-based features. In this paper, we present a survey of privacy-protecting deep learning anomaly detection methods using skeletons extracted from videos. We present a novel taxonomy of algorithms based on the various learning approaches. We conclude that skeleton-based approaches for anomaly detection can be a plausible privacy-protecting alternative for video anomaly detection. Lastly, we identify major open research questions and provide guidelines to address them.
People living with dementia often exhibit behavioural and psychological symptoms of dementia that can put their and others' safety at risk. Existing video surveillance systems in long-term care facilities can be used to monitor such behaviours of risk to alert the staff to prevent potential injuries or death in some cases. However, these behaviours of risk are heterogeneous and infrequent in comparison to normal events. Moreover, analyzing raw videos can also raise privacy concerns. In this paper, we present two novel privacy-protecting video-based anomaly detection approaches to detect behaviours of risk in people with dementia. We either extract body pose information in the form of skeletons or use semantic segmentation masks to replace multiple humans in the scene with their semantic boundaries. Our work differs from most existing approaches for video anomaly detection, which focus on appearance-based features that can put the privacy of a person at risk and are also susceptible to pixel-based noise, including illumination and viewing direction. We used anonymized videos of normal activities to train customized spatio-temporal convolutional autoencoders and identify behaviours of risk as anomalies. We show our results on a real-world study conducted in a dementia care unit with patients with dementia, containing approximately 21 hours of normal activities data for training and 9 hours of data containing normal and behaviours of risk events for testing. We compared our approaches with the original RGB videos and obtained an equivalent area under the receiver operating characteristic curve performance of 0.807 for the skeleton-based approach and 0.823 for the segmentation mask-based approach. This is one of the first studies to incorporate privacy for the detection of behaviours of risk in people with dementia.
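The reported AUROC numbers come from scoring each clip by how poorly the trained autoencoder reconstructs it. The scoring side can be sketched with a rank-based AUROC on hypothetical reconstruction errors; the spatio-temporal autoencoders themselves are not reproduced here.

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # fraction of (positive, negative) pairs ranked correctly; ties count 0.5
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical reconstruction errors: anomalous clips (label 1) reconstruct worse
errors = np.array([0.10, 0.12, 0.11, 0.45, 0.50, 0.09, 0.40])
labels = np.array([0, 0, 0, 1, 1, 0, 1])
print(auroc(errors, labels))  # perfectly separated here -> 1.0
```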
Humans have the innate ability to identify and differentiate instances that they are not familiar with, by leveraging and adapting the knowledge that they have acquired so far. Importantly, they achieve this without deteriorating the performance on their earlier learning. Inspired by this, we identify and formulate a new, pragmatic problem setting of NCDwF: Novel Class Discovery without Forgetting, which tasks a machine learning model to incrementally discover novel categories of instances from unlabeled data, while maintaining its performance on the previously seen categories. We propose 1) a method to generate pseudo-latent representations which act as a proxy for the (no longer available) labeled data, thereby alleviating forgetting, 2) a mutual-information-based regularizer which enhances unsupervised discovery of novel classes, and 3) a simple Known Class Identifier which aids generalized inference when the test data contains instances from both seen and unseen categories. We introduce experimental protocols based on CIFAR-10, CIFAR-100, and ImageNet-1000 to measure the trade-off between knowledge retention and novel class discovery. Our extensive evaluations reveal that existing models catastrophically forget previously seen categories while identifying novel categories, whereas our method is able to effectively balance between the competing objectives. We hope our work will attract further research into this newly identified pragmatic problem setting.
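The role of the Known Class Identifier in 3) can be illustrated with a common confidence-threshold heuristic: route a test sample to the seen-class head when the seen-class softmax confidence is high, otherwise treat it as a candidate novel-class instance. This is a generic sketch, not the paper's exact identifier.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def route(logits_seen, threshold=0.5):
    """Return 'seen' if the seen-class head is confident, else 'novel'."""
    conf = softmax(logits_seen).max()
    return "seen" if conf >= threshold else "novel"

# A confident seen-class prediction vs. a flat (uncertain) one
print(route(np.array([4.0, 0.1, 0.2])))   # high max-softmax -> 'seen'
print(route(np.array([1.0, 1.1, 0.9])))   # near-uniform -> 'novel'
```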
Synthetic tabular data generation becomes crucial when real data is limited, expensive to collect, or simply cannot be used due to privacy concerns. However, generating high-quality synthetic data is challenging. Several probabilistic, statistical, and generative adversarial network (GAN)-based approaches have been proposed for synthetic tabular data generation. Once generated, evaluating the quality of the synthetic data is itself quite challenging. Some traditional metrics have been used in the literature, but a common, robust, single metric is lacking. This makes it difficult to properly compare the effectiveness of different synthetic tabular data generation methods. In this paper, we propose a new universal metric, TabSynDex, for the robust evaluation of synthetic data. TabSynDex assesses the similarity of synthetic data to real data through different component scores, which evaluate the characteristics desirable in "high quality" synthetic data. Being a single-score metric, TabSynDex can also be used to observe and evaluate the training of neural-network-based approaches, which helps in obtaining insights at an earlier stage. Further, we present several baseline models for comparative analysis of the proposed evaluation metric against existing generative models.
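The single-score idea can be sketched as an average of component similarities between the real and synthetic tables. The two components below (column-mean match and pairwise-correlation match) are illustrative stand-ins, not the paper's actual TabSynDex components.

```python
import numpy as np

def component_scores(real, synth):
    """Two illustrative 0-1 similarity components between a real and a
    synthetic table (rows = samples, columns = features)."""
    # 1) column-mean similarity, scaled by the real column spread
    mean_diff = np.abs(real.mean(0) - synth.mean(0)) / (real.std(0) + 1e-9)
    mean_score = np.clip(1 - mean_diff, 0, 1).mean()
    # 2) pairwise-correlation similarity
    corr_diff = np.abs(np.corrcoef(real.T) - np.corrcoef(synth.T))
    corr_score = np.clip(1 - corr_diff, 0, 1).mean()
    return mean_score, corr_score

def single_score(real, synth):
    """Collapse the component scores into one number in [0, 1]."""
    return float(np.mean(component_scores(real, synth)))

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 3))
close = real + rng.normal(scale=0.05, size=real.shape)  # near-copy of real
far = rng.uniform(2, 8, size=(500, 3))                  # unrelated table
print(single_score(real, close), single_score(real, far))
```

A near-copy of the real table scores higher than an unrelated one, which is the behaviour any such composite metric must exhibit.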
Improving the quality of search results can significantly enhance users' experience and engagement with search engines. In spite of several recent advancements in the fields of machine learning and data mining, correctly classifying items for a particular user search query has been a long-standing challenge, which still has a large room for improvement. This paper introduces the "Shopping Queries Dataset", a large dataset of Amazon search queries and results, released with the aim of fostering research in improving the quality of search results. The dataset contains around 130 thousand unique queries and 2.6 million manually labeled (query, product) relevance judgements. The dataset is multilingual, with queries in English, Japanese, and Spanish. The Shopping Queries Dataset is being used in one of the KDDCup'22 challenges. In this paper, we describe the dataset and present three evaluation tasks along with baseline results: (i) ranking the results list, (ii) classifying product results into relevance categories, and (iii) identifying substitute products for a given query. We anticipate that this data will become the gold standard for future research on the topic of product search.
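Task (i), ranking the results list, is typically scored with NDCG over graded relevance labels. A minimal sketch follows; the gain values assigned to the labels are illustrative, not the official KDDCup'22 scoring.

```python
import numpy as np

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance grades."""
    rel = np.asarray(relevances, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))  # log2(rank + 1)
    return float((rel * discounts).sum())

def ndcg(relevances):
    """DCG normalized by the ideal (descending-sorted) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Graded labels for one query's ranked products,
# e.g. Exact=3, Substitute=2, Complement=1, Irrelevant=0
ranked = [3, 2, 3, 0, 1]
print(round(ndcg(ranked), 3))
```

Swapping the misplaced Exact result up the list raises the score; a perfectly ordered list scores exactly 1.0.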
In this paper, we consider two coupled problems for distributed multi-robot systems (MRSs) coordinating with limited field-of-view (FOV) sensors: adaptive tuning of the interaction gains and rejection of sensor attacks. First, a typical shortcoming of distributed control frameworks (e.g., potential fields) is that the overall system behavior is highly sensitive to the gains assigned to relative interactions. Second, MRSs with limited FOV sensors can be more susceptible to sensor attacks aimed at their FOVs, and must therefore be resilient to such attacks. Motivated by these shortcomings, we propose a comprehensive solution that combines adaptive gain tuning and attack resilience for the problem of topology control with limited FOVs. Specifically, we first derive an adaptive gain tuning scheme based on satisfying the nominal pairwise interactions, which yields a dynamic balancing of the interaction strengths in a robot's neighborhood. We then model additive sensor and actuator attacks (or faults) and derive an H-infinity control protocol by employing a static output-feedback technique, guaranteeing a bounded L2 gain of the error induced by the attack (fault) signals. Finally, simulation results using ROS Gazebo are provided to support our theoretical findings.
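The flavour of adaptive gain balancing can be caricatured with a simple pairwise potential-field rule: each neighbor contributes a spring-like term toward a desired distance, and per-neighbor gains are rescaled so the total interaction strength stays at a nominal budget. This is a toy 2-D sketch under assumed single-integrator dynamics, not the paper's derivation.

```python
import numpy as np

def interaction_control(p_i, neighbors, d_star=1.0, k_nominal=1.0):
    """Sum of pairwise terms driving each inter-robot distance toward
    d_star; per-neighbor gains are renormalized so the total interaction
    strength equals the nominal budget k_nominal."""
    u = np.zeros_like(p_i)
    terms = []
    for p_j in neighbors:  # assumes p_j != p_i (nonzero distance)
        diff = p_j - p_i
        d = np.linalg.norm(diff)
        terms.append((d - d_star) * diff / d)  # spring toward d_star
    strengths = np.array([np.linalg.norm(t) for t in terms])
    total = strengths.sum()
    # adaptive per-neighbor gains: weight by current interaction strength
    gains = k_nominal * strengths / total if total > 0 else strengths
    for g, t in zip(gains, terms):
        u += g * t
    return u

p_i = np.array([0.0, 0.0])
neighbors = [np.array([2.0, 0.0]), np.array([0.0, 0.5])]
u = interaction_control(p_i, neighbors)
print(u)  # attracts toward the far neighbor, repels from the close one
```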
Objective: Convolutional neural networks (CNNs) have demonstrated promise in automated cardiac magnetic resonance image segmentation. However, when using CNNs in a large real-world dataset, it is important to quantify segmentation uncertainty and identify segmentations which could be problematic. In this work, we performed a systematic study of Bayesian and non-Bayesian methods for estimating uncertainty in segmentation neural networks. Methods: We evaluated Bayes by Backprop, Monte Carlo Dropout, Deep Ensembles, and Stochastic Segmentation Networks in terms of segmentation accuracy, probability calibration, uncertainty on out-of-distribution images, and segmentation quality control. Results: We observed that Deep Ensembles outperformed the other methods except for images with heavy noise and blurring distortions. We showed that Bayes by Backprop is more robust to noise distortions while Stochastic Segmentation Networks are more resistant to blurring distortions. For segmentation quality control, we showed that segmentation uncertainty is correlated with segmentation accuracy for all the methods. With the incorporation of uncertainty estimates, we were able to reduce the percentage of poor segmentation to 5% by flagging 31--48% of the most uncertain segmentations for manual review, substantially lower than random review without using neural network uncertainty (reviewing 75--78% of all images). Conclusion: This work provides a comprehensive evaluation of uncertainty estimation methods and showed that Deep Ensembles outperformed other methods in most cases. Significance: Neural network uncertainty measures can help identify potentially inaccurate segmentations and alert users for manual review.
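The quality-control result (flagging the most uncertain segmentations to cut the poor-segmentation rate) amounts to a simple triage rule: review the top fraction by uncertainty and auto-accept the rest. The sketch below uses synthetic per-image uncertainty and quality values; the numbers are illustrative, not the study's data.

```python
import numpy as np

def poor_rate_after_review(uncertainty, is_poor, review_fraction):
    """Flag the most uncertain fraction of images for manual review
    (assumed corrected there) and return the poor-segmentation rate
    among the auto-accepted remainder."""
    n = len(uncertainty)
    n_review = int(np.ceil(review_fraction * n))
    order = np.argsort(uncertainty)[::-1]  # most uncertain first
    kept = order[n_review:]                # auto-accepted images
    return is_poor[kept].mean()

rng = np.random.default_rng(1)
quality = rng.uniform(size=1000)
is_poor = quality < 0.2                    # 20% poor segmentations overall
# uncertainty correlates with (lack of) quality, plus noise
uncertainty = (1 - quality) + rng.normal(scale=0.1, size=1000)
print(poor_rate_after_review(uncertainty, is_poor, 0.30))
```

Because uncertainty tracks quality, reviewing 30% of images drives the residual poor rate well below the 20% base rate, mirroring the triage effect reported above.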
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain optimality and computational efficiency goals, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially-distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g. statistical power) but substantially better in earning (e.g. direct benefits). This illustrates the potential of designs that use a GI approach to allocate participants: improved participant benefits, increased efficiencies, and reduced experimental costs in adaptive multi-armed experiments with exponential rewards.
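The learning-versus-earning trade-off can be illustrated with a toy simulation of a 2-armed experiment with exponential rewards, comparing equal (non-adaptive) allocation against a greedy rule on the empirical mean. The greedy rule is a crude stand-in for the paper's Gittins-index policy, which is not reproduced here.

```python
import numpy as np

def run(policy, rates, horizon=2000, seed=0):
    """Simulate a 2-armed experiment with Exponential(rate) rewards
    (mean reward = 1/rate); return the total earned reward."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(2)
    sums = np.zeros(2)
    total = 0.0
    for t in range(horizon):
        if policy == "equal" or t < 100:   # burn-in: equal allocation
            arm = t % 2
        else:  # adaptive: greedy on empirical mean (index-rule stand-in)
            arm = int(np.argmax(sums / counts))
        r = rng.exponential(1.0 / rates[arm])
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total

rates = (1.0, 2.0)  # arm 0 pays mean 1.0, arm 1 pays mean 0.5
print(run("equal", rates), run("greedy", rates))
```

After the burn-in, the adaptive policy concentrates pulls on the better arm and earns more in total, while the equal design keeps spending half its samples on the inferior treatment.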
Modelling and forecasting real-life human behaviour using online social media is an active endeavour of interest in politics, government, academia, and industry. Since its creation in 2006, Twitter has been proposed as a potential laboratory that could be used to gauge and predict social behaviour. During the last decade, the user base of Twitter has been growing and becoming more representative of the general population. Here we analyse this user base in the context of the 2021 Mexican Legislative Election. To do so, we use a dataset of 15 million election-related tweets in the six months preceding election day. We explore different election models that assign political preference to either the ruling parties or the opposition. We find that models using data with geographical attributes determine the results of the election with better precision and accuracy than conventional polling methods. These results demonstrate that analysis of public online data can outperform conventional polling methods, and that political analysis and general forecasting would likely benefit from incorporating such data in the immediate future. Moreover, the same Twitter dataset with geographical attributes is positively correlated with results from official census data on population and internet usage in Mexico. These findings suggest that we have reached a period in time when online activity, appropriately curated, can provide an accurate representation of offline behaviour.