Multivariate time series forecasting with hierarchical structure is pervasive in real-world applications, demanding not only predictions at each level of the hierarchy but also reconciliation of all forecasts to ensure coherency, i.e., the forecasts should satisfy the hierarchical aggregation constraints. Moreover, the disparities in statistical characteristics between levels can be huge, worsened by non-Gaussian distributions and non-linear correlations. To this end, we propose a novel end-to-end hierarchical time series forecasting model, based on conditioned normalizing flow-based autoregressive transformer reconciliation, to represent complex data distributions while simultaneously reconciling the forecasts to ensure coherency. Unlike other state-of-the-art methods, we achieve forecasting and reconciliation simultaneously without requiring any explicit post-processing step. In addition, by harnessing the power of deep models, we do not rely on any assumptions such as unbiased estimates or Gaussian distributions. Our evaluation experiments are conducted on four real-world hierarchical datasets from different industrial domains (three public ones and a dataset from the application servers of Alipay's data center), and the preliminary results demonstrate the efficacy of our proposed method.
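For intuition, coherency is the linear constraint that every aggregate forecast equals the sum of its children. Below is a minimal sketch of this constraint for a hypothetical two-series hierarchy; note the paper enforces coherency inside the model rather than via this kind of explicit construction:

```python
import numpy as np

# Hypothetical two-level hierarchy: total = A + B.
# A coherent forecast vector y satisfies y = S @ b for some
# bottom-level vector b, where S is the summing matrix.
S = np.array([
    [1, 1],   # total
    [1, 0],   # series A
    [0, 1],   # series B
])

b_hat = np.array([3.2, 4.8])   # bottom-level forecasts
y_hat = S @ b_hat              # coherent forecasts at all levels

# Coherency check: the aggregate equals the sum of its children.
assert np.isclose(y_hat[0], y_hat[1] + y_hat[2])
```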
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Deep learning (DL) has become a driving force and has been widely adopted in many domains and applications with competitive performance. In practice, to solve nontrivial and complicated tasks in real-world applications, DL is often not used standalone but instead contributes as one component of a larger, complex AI system. Although there is a fast-growing trend toward studying the quality issues of deep neural networks (DNNs) at the model level, few studies have investigated the quality of DNNs at the unit level or the potential impacts at the system level. More importantly, there is also a lack of systematic investigation into how to perform risk assessment for AI systems from the unit level to the system level. To bridge this gap, this paper initiates an early exploratory study of AI system risk assessment from both the data distribution and uncertainty angles. We propose a general framework, together with an exploratory study, for analyzing AI systems. After large-scale experiments (700+ experimental configurations and 5,000+ GPU hours) and in-depth investigation, we reached several key findings that highlight the practical need for, and the opportunities offered by, more in-depth research into AI systems.
Biclustering is widely used in many fields, including gene information analysis, text mining, and recommender systems, as it effectively discovers local correlations between samples and features. However, many biclustering algorithms break down when facing heavy-tailed data. In this paper, we propose a robust version of the convex biclustering algorithm based on the Huber loss. Yet, the newly introduced robustification parameter adds an extra burden to selecting the optimal parameters. We therefore propose a tuning-free method that automatically selects the optimal robustification parameter with high efficiency. Our simulation study demonstrates that the proposed method outperforms traditional biclustering methods in the presence of heavy-tailed noise. A real-life biomedical application is also presented. The R package RcvxBiclustr is available at https://github.com/YifanChen3/RcvxBiclustr.
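For reference, the Huber loss the method builds on is quadratic near zero and linear in the tails, which is what caps the influence of heavy-tailed outliers. A minimal sketch follows; the parameter tau below plays the role of the robustification parameter that the paper's tuning-free procedure selects, though that procedure itself is not reproduced here:

```python
import numpy as np

def huber_loss(r, tau):
    """Quadratic for |r| <= tau, linear beyond: large residuals
    caused by heavy-tailed noise are penalized only linearly."""
    r = np.asarray(r, dtype=float)
    quadratic = 0.5 * r**2
    linear = tau * (np.abs(r) - 0.5 * tau)
    return np.where(np.abs(r) <= tau, quadratic, linear)

residuals = np.array([0.1, -0.5, 8.0])    # 8.0 acts as an outlier
print(huber_loss(residuals, tau=1.345))   # outlier contributes linearly
```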
Recent years have witnessed an astonishing explosion in the evolution of mobile applications powered by AI technologies. The rapid growth of AI frameworks enables the transition of AI technologies to mobile devices, significantly promoting the adoption of AI apps (i.e., apps that integrate AI into their functions) on smartphones. In this paper, we conduct the most extensive empirical study to date on 56,682 published AI apps from three perspectives: dataset characteristics, development issues, and user feedback and privacy. To this end, we build an automated AI app identification tool, AI Discriminator, that detects eligible AI apps among 7,259,232 mobile apps. First, we carry out a dataset analysis, in which we explore the large AndroZoo repository to identify AI apps and their core characteristics. Subsequently, we pinpoint key issues in AI app development (e.g., model protection). Finally, we focus on user reviews and user privacy protection. Our paper provides several notable findings, the most essential of which are the issue of insufficient model protection, revealed by the widespread lack of model encryption, and the risk of user privacy data being leaked. We publish our large-scale AI app datasets to inspire further research.
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and ask the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
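To make the INT8 constraint concrete, the sketch below shows post-training full-integer quantization with TensorFlow Lite on a deliberately tiny stand-in network (two convolutions plus depth_to_space for 3X upscaling). This is an assumed toy pipeline for illustration only, not any challenge entry; real submissions used their own architectures, training, and calibration data:

```python
import numpy as np
import tensorflow as tf

# Toy 3X super-resolution network: conv features + depth_to_space upscaling.
inp = tf.keras.Input(shape=(64, 64, 3))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
x = tf.keras.layers.Conv2D(27, 3, padding="same")(x)  # 27 = 3 channels * 3^2
out = tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, 3))(x)
model = tf.keras.Model(inp, out)

def representative_data():
    # Calibration samples for quantization; in practice, DIV2K crops.
    for _ in range(8):
        yield [np.random.rand(1, 64, 64, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # fully-integer I/O for the NPU
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
```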
Various depth estimation models are now widely used on many mobile and IoT devices for image segmentation, bokeh effect rendering, object tracking, and many other mobile tasks. It is therefore crucial to have efficient and accurate depth estimation models that can run fast on low-power mobile chipsets. In this Mobile AI challenge, the target was to develop deep learning-based single-image depth estimation solutions that can show real-time performance on IoT platforms and smartphones. For this, the participants used a large-scale RGB-to-depth dataset that was collected with the ZED stereo camera, capable of generating depth maps for objects located at up to 50 meters. The runtime of all models was evaluated on the Raspberry Pi 4 platform, where the developed solutions were able to generate VGA-resolution depth maps at up to 27 FPS while achieving high-fidelity results. All models developed in the challenge are also compatible with any Android or Linux-based mobile device, and their detailed descriptions are provided in this paper.
Benefiting from the event-driven and sparse spiking characteristics of the brain, spiking neural networks (SNNs) have become an energy-efficient alternative to artificial neural networks (ANNs). However, the performance gap between SNNs and ANNs has long held back the deployment of SNNs. To exploit the full potential of SNNs, we study the effect of attention mechanisms in SNNs. We first present our attention as a plug-and-play kit, termed Multi-dimensional Attention (MA). Then, a new attention-based SNN architecture with end-to-end training, called MA-SNN, is proposed, which infers attention weights along the temporal, channel, and spatial dimensions separately or simultaneously. Based on existing neuroscience theories, we exploit the attention weights to optimize membrane potentials, which in turn regulate the spiking response in a data-dependent way. At the cost of negligible additional parameters, MA facilitates vanilla SNNs to achieve sparser spiking activity, better performance, and better energy efficiency. Experiments are conducted on event-based DVS128 Gesture/Gait action recognition and ImageNet-1K image classification. On Gesture/Gait, the spike counts are reduced by 84.9%/81.6%, while task accuracy and energy efficiency are improved by 5.9%/4.7% and 3.4×/3.2×, respectively. On ImageNet-1K, we achieve top-1 accuracies of 75.92% and 77.08% with single-step/4-step Res-SNN-104, which are state-of-the-art results for SNNs. To the best of our knowledge, this is the first time the SNN community has achieved comparable or even better performance than ANNs on a large-scale dataset. Our work sheds light on the potential of SNNs as a general backbone supporting various applications, striking a great balance between effectiveness and efficiency.
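As one way to picture how attention can gate spiking, here is a forward-pass-only sketch of a channel-attention LIF layer: a squeeze-and-excitation style module rescales the membrane potential before thresholding, making the spike response data-dependent. This is our illustrative reading under simplified assumptions (hard reset, channel dimension only, no surrogate-gradient training), not the authors' MA-SNN code, which also covers the temporal and spatial dimensions:

```python
import torch
import torch.nn as nn

class ChannelAttentionLIF(nn.Module):
    """Hypothetical channel-attention LIF layer: attention weights
    rescale the membrane potential before the firing threshold."""
    def __init__(self, channels, reduction=4, v_th=1.0):
        super().__init__()
        self.v_th = v_th
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze space
            nn.Conv2d(channels, channels // reduction, 1),  # excitation MLP
            nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                   # weights in (0, 1)
        )

    def forward(self, x_seq):            # x_seq: [T, B, C, H, W] input currents
        v = torch.zeros_like(x_seq[0])   # membrane potential
        spikes = []
        for x in x_seq:
            v = v + x                    # integrate input current
            v = v * self.att(v)          # attention modulates the potential
            s = (v >= self.v_th).float() # fire where threshold is crossed
            v = v * (1.0 - s)            # hard reset after a spike
            spikes.append(s)
        return torch.stack(spikes)

out = ChannelAttentionLIF(8)(torch.rand(4, 2, 8, 16, 16))  # [T=4, B=2, ...]
```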
Network architecture search (NAS), and in particular differentiable architecture search (DARTS) methods, have shown a strong ability to learn excellent model architectures on a specific dataset of interest. In contrast to using a fixed dataset, in this work we focus on a different but important scenario for NAS: how to refine a deployed network's model architecture to enhance its robustness, guided by a few collected and misclassified examples that are degraded by some real-world unknown corruptions with specific patterns (e.g., noise, blur, etc.). To this end, we first conduct an empirical study to verify that model architectures are definitely related to corruption patterns. Surprisingly, by adding only a few corrupted and misclassified examples (e.g., $10^3$ examples) to the clean training dataset (e.g., $5.0 \times 10^4$ examples), we can refine the model architecture and significantly enhance robustness. To make this more practical, the key question of how to select proper failure examples for effective NAS guidance must be carefully studied. We then propose a novel core-failure-set guided DARTS, which embeds a K-center-greedy algorithm into DARTS to select suitable corrupted failure examples for refining the model architecture. We use our method to refine DNNs trained with DARTS on clean data and 15 corruptions, under the guidance of four specific real-world corruptions. Compared with state-of-the-art NAS methods as well as data-augmentation-based enhancement methods, our final method achieves higher accuracy on both the corrupted datasets and the original clean dataset. On some corruption patterns, we achieve absolute accuracy improvements of more than 45%.
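The K-center-greedy selection the method embeds is the classic greedy 2-approximation to the k-center problem: repeatedly add the point farthest from the current set of centers. A minimal sketch over hypothetical feature embeddings of failure examples follows; the integration into DARTS itself is not shown:

```python
import numpy as np

def k_center_greedy(features, k, seed=0):
    """Greedily pick k centers: each new center is the point farthest
    from all centers chosen so far, yielding a diverse 'core' subset."""
    rng = np.random.default_rng(seed)
    n = len(features)
    centers = [int(rng.integers(n))]
    dist = np.linalg.norm(features - features[centers[0]], axis=1)
    while len(centers) < k:
        idx = int(np.argmax(dist))                  # farthest remaining point
        centers.append(idx)
        new_d = np.linalg.norm(features - features[idx], axis=1)
        dist = np.minimum(dist, new_d)              # distance to nearest center
    return centers

core_idx = k_center_greedy(np.random.rand(1000, 64), k=100)
```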
In this work, we study a simple yet universally applicable case of reward shaping in value-based deep reinforcement learning (DRL). We show that reward shifting in the form of a linear transformation is equivalent to changing the initialization of the $Q$-function in function approximation. Based on this equivalence, we derive the key insight that a positive reward shift leads to conservative exploitation, while a negative reward shift leads to curiosity-driven exploration. Accordingly, conservative exploitation improves value estimation in offline RL, and optimistic value estimation improves exploration in online RL. We validate our insight on a range of RL tasks and show improvements over baselines: (1) in offline RL, conservative exploitation improves performance on top of off-the-shelf algorithms; (2) in online continuous control, multiple value functions with different shifting constants can be used to tackle the exploration-exploitation dilemma for better sample efficiency; (3) in discrete-control tasks, a negative reward shift can improve curiosity-based exploration methods.
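The stated equivalence can be sanity-checked in the tabular setting: adding a constant c to every reward shifts every optimal value by exactly c / (1 - gamma), just as if the value function had been uniformly re-initialized. A small numeric check on a random MDP (an illustrative toy; the paper works in the deep function-approximation setting):

```python
import numpy as np

# Random tabular MDP: 5 states, 3 actions, discount 0.9, reward shift c = 2.
rng = np.random.default_rng(0)
nS, nA, gamma, c = 5, 3, 0.9, 2.0
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # transition probabilities
R = rng.normal(size=(nS, nA))                   # rewards

def optimal_v(R):
    V = np.zeros(nS)
    for _ in range(2000):                        # value iteration
        Q = R + gamma * P @ V
        V = Q.max(axis=1)
    return V

V, V_shift = optimal_v(R), optimal_v(R + c)
# Shifting all rewards by c shifts all optimal values by c / (1 - gamma).
assert np.allclose(V_shift, V + c / (1 - gamma), atol=1e-6)
```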