The transition from defined benefit to defined contribution pension plans shifts the responsibility for retirement saving from governments and institutions to individuals. Determining optimal saving and investment strategies for individuals is crucial for a stable financial position and for avoiding poverty during working life and retirement, and it is a particularly challenging task in a world where the employment and income trajectories experienced by different occupational groups are highly diverse. We introduce a model in which agents learn optimal portfolio allocation and saving strategies suited to their heterogeneous profiles. We use deep reinforcement learning to train the agents. The environment is calibrated with occupation- and age-dependent income evolution dynamics. The study focuses on heterogeneous income trajectories dependent on the agent's profile and incorporates a behavioural parameterization of the agents. The model provides a flexible methodology for estimating the lifetime consumption and investment choices of heterogeneous profiles under varying scenarios.
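To make the setting concrete, below is a minimal sketch of a lifecycle environment of the kind such an agent could be trained in. All dynamics, parameter values, and names are illustrative assumptions, not the paper's calibration; in particular, the income process here is a simple deterministic growth profile standing in for the occupation- and age-dependent dynamics.

```python
import numpy as np

class LifecycleEnv:
    """Toy environment: each step is one year of life.

    State: (age, wealth, labour income); action: (savings rate, risky share).
    Reward: CRRA utility of consumption. Income follows a deterministic
    age-dependent profile and stops at retirement.
    """

    def __init__(self, start_age=25, retire_age=65, end_age=85,
                 income0=30_000.0, income_growth=0.02, gamma=3.0,
                 risky_mu=0.05, risky_sigma=0.18, risk_free=0.01, seed=0):
        self.rng = np.random.default_rng(seed)
        self.start_age, self.retire_age, self.end_age = start_age, retire_age, end_age
        self.income0, self.income_growth, self.gamma = income0, income_growth, gamma
        self.risky_mu, self.risky_sigma, self.risk_free = risky_mu, risky_sigma, risk_free

    def reset(self):
        self.age, self.wealth, self.income = self.start_age, 0.0, self.income0
        return np.array([self.age, self.wealth, self.income])

    def step(self, action):
        savings_rate, risky_share = np.clip(action, 0.0, 1.0)
        cash = self.wealth + self.income
        consumption = max((1.0 - savings_rate) * cash, 1e-8)
        # Invest what is saved in a mix of a risky and a risk-free asset.
        risky_ret = self.rng.normal(self.risky_mu, self.risky_sigma)
        self.wealth = (cash - consumption) * (
            1.0 + risky_share * risky_ret + (1.0 - risky_share) * self.risk_free)
        self.age += 1
        self.income = (self.income * (1.0 + self.income_growth)
                       if self.age < self.retire_age else 0.0)
        reward = consumption ** (1.0 - self.gamma) / (1.0 - self.gamma)  # CRRA
        done = self.age >= self.end_age
        return np.array([self.age, self.wealth, self.income]), reward, done

# Smoke test with a fixed heuristic policy: save 20%, hold 60% in the risky asset.
env = LifecycleEnv()
obs, done = env.reset(), False
while not done:
    obs, reward, done = env.step((0.2, 0.6))
print(obs)
```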
We demonstrate an application of online transfer learning as a digital assets trading agent. The agent uses a powerful feature space representation in the form of an echo state network, the output of which is made available to a direct, recurrent reinforcement learning agent. The agent learns to trade the XBTUSD (Bitcoin versus US dollar) perpetual swap derivatives contract on BitMEX. It learns to trade intraday on five-minutely sampled data, avoids excessive over-trading, captures a funding profit, and is also able to predict the direction of the market. Overall, our crypto agent achieves a total return of 350%, net of transaction costs, over five years, 71% of which comes from funding profit. It achieves an annualised information ratio of 1.46.
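A minimal sketch of the echo state network feature extractor such an agent relies on is given below; the reservoir size, spectral radius, leak rate, and input series are illustrative assumptions, and the downstream recurrent RL trader is omitted.

```python
import numpy as np

# Fixed random reservoir; sizes, spectral radius, and leak rate are
# illustrative assumptions.
rng = np.random.default_rng(0)
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9

def esn_features(returns, leak=0.3):
    """Run the reservoir over a 1-D return series and collect its states."""
    x, states = np.zeros(n_res), []
    for r in returns:
        x = (1.0 - leak) * x + leak * np.tanh(W_in @ np.array([r]) + W @ x)
        states.append(x.copy())
    return np.array(states)                  # (T, n_res) features for the trader

feats = esn_features(rng.normal(0.0, 0.01, size=500))  # synthetic 5-min returns
print(feats.shape)
```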
We explore online inductive transfer learning, with a feature representation transfer from radial basis function networks whose hidden processing units are formed by Gaussian mixture models, to a direct, recurrent reinforcement learning agent. The agent is put to work in an experiment trading the major spot market currency pairs, where we accurately account for transaction and funding costs. These sources of profit and loss, including the price trends that occur in the currency markets, are made available to the agent via a quadratic utility, and the agent learns to target a position directly. We improve upon earlier work by learning to target a risk position in an online transfer learning context. Our agent achieves an annualised portfolio information ratio of 0.52 and a compound return of 9.3%, net of execution and funding costs, over a 7-year test set, despite being forced to trade at the close of the trading day at 5pm, when trading costs are statistically the most expensive.
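The sketch below illustrates, under stated assumptions, two ingredients of this setup: radial basis function hidden units obtained from a fitted Gaussian mixture model, and a quadratic utility of realised returns. The data, component count, and risk-aversion value are synthetic placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))       # stand-in for lagged market features

# Hidden processing units: responsibilities of a fitted Gaussian mixture act
# as radial basis function activations.
gmm = GaussianMixture(n_components=10, covariance_type="diag",
                      random_state=1).fit(X)

def rbf_features(x):
    return gmm.predict_proba(x)      # (n, 10) soft memberships as activations

def quadratic_utility(realised_return, risk_aversion=4.0):
    # Mean-variance style utility through which profit, loss, and costs
    # would be delivered to the position-targeting agent.
    return realised_return - 0.5 * risk_aversion * realised_return ** 2

print(rbf_features(X[:3]).shape, quadratic_utility(0.01))
```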
We investigate the benefits of feature selection, nonlinear modelling and online learning when forecasting financial time series. We consider the sequential and continual learning sub-types of online learning. The experiments we conduct show that there is a benefit to online transfer learning, in the form of radial basis function networks, beyond the sequential updating of recursive least-squares models. We show that radial basis function networks, which use clustering algorithms to construct the kernel Gram matrix, are more beneficial than treating each training vector as a separate basis function, as occurs with kernel ridge regression. We demonstrate quantitative procedures for determining the very structure of the radial basis function networks. Finally, we conduct experiments on the log returns of financial time series and show that the online learning models, particularly the radial basis function networks, are able to outperform a random walk baseline, whereas the offline learning models struggle to do so.
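The following sketch contrasts the two designs compared above: a radial basis function network whose Gram matrix is built against k-means cluster centres, versus kernel ridge regression, which uses one basis function per training vector. Data, bandwidth, and sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X, y = rng.normal(size=(500, 4)), rng.normal(size=500)

# Clustered RBF design: one Gaussian basis function per k-means centre.
centres = KMeans(n_clusters=20, n_init=10, random_state=2).fit(X).cluster_centers_

def gram(A, B, bandwidth=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

Phi = gram(X, centres)                        # (500, 20) design matrix
w = np.linalg.lstsq(Phi, y, rcond=None)[0]    # ridge penalty omitted for brevity

# Kernel ridge regression would instead solve against the full (500, 500)
# matrix gram(X, X), i.e. one basis function per training vector.
print(Phi.shape, w.shape)
```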
We are witnessing a widespread adoption of artificial intelligence in healthcare. However, most of the advancements in deep learning (DL) in this area consider only unimodal data, neglecting other modalities. Multimodal interpretation of these data is necessary for supporting diagnosis, prognosis and treatment decisions. In this work we present a deep architecture, explainable by design, which jointly learns modality reconstructions and sample classifications using tabular and imaging data. The explanation of the decision taken is computed by applying a latent shift that simulates a counterfactual prediction, revealing the features of each modality that contribute the most to the decision, together with a quantitative score indicating the modality importance. We validate our approach in the context of the COVID-19 pandemic using the AIforCOVID dataset, which contains multimodal data for the early identification of patients at risk of severe outcome. The results show that the proposed method provides meaningful explanations without degrading the classification performance.
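A heavily simplified sketch of the latent-shift idea follows: the joint latent code is nudged along the classifier's gradient and decoded into a counterfactual, whose deviation from the original reconstruction indicates which inputs drive the decision. The toy encoder, decoder, and classifier below are hypothetical stand-ins, not the paper's architecture.

```python
import torch

# Toy encoder, decoder, and classifier head: hypothetical stand-ins for the
# paper's multimodal architecture.
enc = torch.nn.Linear(32, 8)
dec = torch.nn.Linear(8, 32)
clf = torch.nn.Linear(8, 1)

x = torch.randn(1, 32)                  # stand-in for fused tabular+imaging input
z = enc(x).detach().requires_grad_(True)
clf(z).sum().backward()                 # gradient of the decision w.r.t. latent z

for lam in (0.0, -1.0, -2.0):           # shift the latent against the decision
    x_cf = dec(z + lam * z.grad)        # decode the counterfactual
    delta = (x_cf - dec(z)).abs().mean()
    print(lam, float(delta))            # larger shifts change the reconstruction more
```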
Human Activity Recognition (HAR) is one of the core research areas in mobile and wearable computing. With the application of deep learning (DL) techniques such as CNNs, recognizing periodic or static activities (e.g., walking, lying, cycling) has become a well-studied problem. What remains a major challenge, though, is the sporadic activity recognition (SAR) problem, where activities of interest tend to be non-periodic and occur less frequently than the often large amount of irrelevant background activities. Recent works suggested that sequential DL models (such as LSTMs) have great potential for modeling non-periodic behaviours, and in this paper we study LSTM training strategies for SAR. Specifically, we propose two simple yet effective LSTM variants, namely the delay model and the inverse model, for two SAR scenarios (with and without a time-critical requirement). For time-critical SAR, the delay model can effectively exploit predefined delay intervals (within tolerance) as a form of contextual information for improved performance. For the regular SAR task, the second proposed variant, the inverse model, learns patterns from the time series in reverse order, which can be complementary to the forward model (i.e., a standard LSTM), and combining both can boost performance. These two LSTM variants are very practical and can be seen as training strategies that leave the LSTM fundamentals unaltered. We also study some additional LSTM training strategies, which can further improve the accuracy. We evaluate our models on two SAR and one non-SAR datasets, and the promising results demonstrate the effectiveness of our approaches in HAR applications.
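A minimal sketch of the inverse model follows: a second LSTM reads each sensor window time-reversed, and its logits are combined with the forward LSTM's. The shapes, sizes, and averaging fusion rule are illustrative assumptions.

```python
import torch

class WindowLSTM(torch.nn.Module):
    """Classify a sensor window; optionally read it time-reversed (inverse model)."""

    def __init__(self, n_feat=9, n_cls=5, hidden=64, reverse=False):
        super().__init__()
        self.reverse = reverse
        self.lstm = torch.nn.LSTM(n_feat, hidden, batch_first=True)
        self.head = torch.nn.Linear(hidden, n_cls)

    def forward(self, x):                      # x: (batch, time, features)
        if self.reverse:
            x = torch.flip(x, dims=[1])        # the inverse model reads backwards
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                # logits from the final hidden state

fwd, inv = WindowLSTM(), WindowLSTM(reverse=True)
x = torch.randn(4, 100, 9)                     # 4 windows of 100 sensor frames
logits = 0.5 * (fwd(x) + inv(x))               # combine forward and inverse views
print(logits.shape)
```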
Testing Deep Learning (DL) based systems inherently requires large and representative test sets to evaluate whether DL systems generalise beyond their training datasets. Diverse Test Input Generators (TIGs) have been proposed to produce artificial inputs that expose issues of DL systems by triggering misbehaviours. Unfortunately, such generated inputs may be invalid, i.e., not recognisable as part of the input domain, thus providing an unreliable quality assessment. Automated validators can ease the burden of manually checking the validity of inputs for human testers, although input validity is a concept that is difficult to formalise and, thus, to automate. In this paper, we investigate to what extent TIGs can generate valid inputs, according to both automated and human validators. We conduct a large empirical study involving 2 different automated validators, 220 human assessors, 5 different TIGs and 3 classification tasks. Our results show that 84% of the artificially generated inputs are valid, according to the automated validators, but their expected label is not always preserved. The automated validators reach a good consensus with humans (78% accuracy), but still have limitations when dealing with feature-rich datasets.
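For intuition only, the sketch below shows one simple way an automated validity check can be built: flag a generated input as invalid when a distance-based score places it off the training distribution. This stand-in is purely illustrative and is not one of the validators evaluated in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
train = rng.normal(size=(1000, 64))   # stand-in for in-domain training inputs

# Calibrate a distance threshold on the training data.
mean = train.mean(axis=0)
threshold = np.quantile(np.linalg.norm(train - mean, axis=1), 0.99)

def is_valid(x):
    # Generated inputs far from the training distribution are deemed invalid.
    return np.linalg.norm(x - mean) <= threshold

print(is_valid(train[0]), is_valid(train[0] + 10.0))
```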
Deep Neural Networks (DNNs) are increasingly used as components of larger software systems that need to process complex data, such as images, written texts, and audio/video signals. DNN predictions cannot be assumed to be always correct for several reasons, among them the huge input space being dealt with, the ambiguity of some input data, and the intrinsic properties of learning algorithms, which can provide only statistical guarantees. Hence, developers have to cope with some residual error probability. An architectural pattern commonly adopted to manage failure-prone components is the supervisor: an additional component that can estimate the reliability of the predictions made by untrusted (e.g., DNN) components and can activate an automated healing procedure when these are likely to fail, ensuring that the Deep Learning based System (DLS) does not cause damage, despite its main functionality being suspended. In this paper, we consider DLSs that implement a supervisor by means of uncertainty estimation. After overviewing the main approaches to uncertainty estimation and discussing their pros and cons, we motivate the need for a specific empirical assessment method that can deal with the experimental setting in which supervisors are used, where the accuracy of the DNN matters only as long as the supervisor lets the DLS continue to operate. We then present a large empirical study conducted to compare the alternative approaches to uncertainty estimation, and we distil a set of guidelines useful to developers who want to incorporate a supervisor based on uncertainty monitoring into a DLS.
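As a concrete illustration, the sketch below implements a supervisor based on one widely used uncertainty estimator, Monte Carlo dropout: the DLS falls back to its healing procedure when predictive entropy exceeds a threshold. The network, threshold, and fallback behaviour are illustrative assumptions, not the paper's experimental configuration.

```python
import torch

# Toy classifier with dropout; the architecture is an illustrative assumption.
net = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(),
    torch.nn.Dropout(0.5), torch.nn.Linear(64, 3),
)

def supervised_predict(x, n_samples=30, max_entropy=0.7):
    net.train()  # keep dropout active at inference time (MC dropout)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(net(x), dim=-1)
                             for _ in range(n_samples)]).mean(0)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1)
    if entropy.item() > max_entropy:
        return None      # supervisor suspends the DLS and triggers healing
    return int(probs.argmax(-1))

print(supervised_predict(torch.randn(1, 16)))
```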
Emerging applications such as deep learning are often data-driven, so traditional approaches based on auto-tuners are not performance-effective across the wide range of inputs used in practice. In this paper, we start an investigation of predictive models based on machine learning techniques for optimizing Convolutional Neural Networks (CNNs). As a use case, we focus on the ARM Compute Library, which provides three different implementations of the convolution operator at different numeric precisions. Starting from a collection of benchmarks, we build and validate models learned with a decision tree and a naive Bayesian classifier. Preliminary experiments on a Midgard-based ARM Mali GPU show that our predictive model outperforms the manual selection of convolution operators performed by the library.
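The sketch below conveys the predictive-model idea under stated assumptions: learn, from convolution layer descriptors, which implementation to dispatch to. The feature set, labels, and training examples are made-up placeholders, not the paper's benchmark data.

```python
from sklearn.tree import DecisionTreeClassifier

# Features: (input size, channels, kernel size, stride) of a convolution layer.
X = [
    (224, 3, 7, 2), (56, 64, 3, 1), (28, 128, 3, 1),
    (14, 256, 3, 1), (7, 512, 3, 1), (56, 64, 1, 1),
]
# Labels: the fastest implementation for that layer (made-up placeholders).
y = ["gemm", "winograd", "winograd", "winograd", "direct", "gemm"]

model = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(model.predict([(28, 256, 3, 1)]))  # dispatch choice for an unseen layer
```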
A tractogram is a virtual representation of the brain's white matter. It is composed of millions of virtual fibers, encoded as 3D polylines, which approximate the white matter axonal pathways. To date, tractograms are the most accurate white matter representation and are therefore used for tasks like presurgical planning and investigations of neuroplasticity, brain disorders, or brain networks. However, it is a well-known issue that a large portion of tractogram fibers is not anatomically plausible and can be considered artifacts of the tracking procedure. With Verifyber, we tackle the problem of filtering out such non-plausible fibers using a novel fully-supervised learning approach. Differently from other approaches based on signal reconstruction and/or brain topology regularization, we guide our method with the existing anatomical knowledge of the white matter. Using tractograms annotated according to anatomical principles, we train our model, Verifyber, to classify fibers as either anatomically plausible or non-plausible. The proposed Verifyber model is an original geometric deep learning method that can deal with variable-size fibers while being invariant to fiber orientation. Our model considers each fiber as a graph of points and, by learning features of the edges between consecutive points via the proposed sequence Edge Convolution, it can capture the underlying anatomical properties. The resulting filtering is highly accurate and robust across an extensive set of experiments, and fast: with a 12GB GPU, filtering a tractogram of 1M fibers requires less than a minute. The Verifyber implementation and trained models are available at https://github.com/FBK-NILab/verifyber.
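The sketch below gives a rough, assumption-laden reading of the sequence Edge Convolution idea: each edge between consecutive 3D points is embedded from the pair (point, displacement) and pooled into a fixed-size fiber descriptor. Layer sizes and the pooling choice are guesses; note that the full Verifyber model is additionally invariant to fiber orientation, which this sketch does not enforce.

```python
import torch

class SeqEdgeConv(torch.nn.Module):
    """Embed edges between consecutive fiber points, then pool over the fiber."""

    def __init__(self, out_dim=32):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(6, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, out_dim),
        )

    def forward(self, fiber):                  # fiber: (n_points, 3)
        edges = fiber[1:] - fiber[:-1]         # displacements between neighbours
        feats = self.mlp(torch.cat([fiber[:-1], edges], dim=-1))
        return feats.max(dim=0).values         # max-pool copes with variable length

conv = SeqEdgeConv()
fiber = torch.cumsum(torch.randn(120, 3) * 0.1, dim=0)  # a random 3D polyline
print(conv(fiber).shape)                        # fixed-size fiber descriptor
```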