Propose-Test-Release (PTR) is a differential privacy framework that works with the local sensitivity of functions, instead of their global sensitivity. This framework is typically used for releasing robust statistics such as the median or trimmed mean in a differentially private manner. Although PTR is a common framework introduced a decade ago, using it in applications such as robust SGD, where we need many adaptive robust queries, is challenging. This is mainly due to the lack of a Renyi Differential Privacy (RDP) analysis, an integral part of the moments-accountant approach to differentially private deep learning. In this work, we generalize the standard PTR and derive the first RDP bound for it when the target function has a bounded global sensitivity. We show that our RDP bound for PTR yields tighter DP guarantees than the directly analyzed $(\epsilon, \delta)$-DP. We also derive an algorithm-specific privacy amplification bound for PTR under subsampling, and show that our bound is much tighter than the general upper bound and close to the lower bound. Our RDP bounds enable tighter privacy loss accounting for the composition of many adaptive runs of PTR. As an application of our analysis, we show that PTR and our theoretical results can be used to design differentially private variants of Byzantine-robust training algorithms that use robust statistics for gradient aggregation. We conduct experiments on label, feature, and gradient corruption settings across different datasets and architectures, and show that, compared with the baselines, PTR-based private and robust training algorithms significantly improve utility.
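A minimal sketch of the propose-test-release pattern for a private median, assuming a Laplace test statistic and a user-proposed sensitivity bound beta; the helper name and the stability surrogate are hypothetical illustrations, not the paper's generalized algorithm:

```python
import numpy as np

def distance_to_unstable(data, beta):
    """Hypothetical helper: roughly how many records must change before the
    local sensitivity of the median exceeds the proposed bound beta.
    Uses a crude surrogate based on gaps around the median (illustrative only)."""
    x = np.sort(data)
    k = len(x) // 2
    d = 0
    while k - d - 1 >= 0 and k + d + 1 < len(x) and x[k + d + 1] - x[k - d - 1] <= beta:
        d += 1
    return d

def ptr_median(data, beta, eps, delta):
    """Propose-Test-Release sketch: propose a sensitivity bound beta, privately
    test that the data is far from any unstable dataset, then release the
    median with Laplace noise calibrated to beta (not global sensitivity)."""
    dist = distance_to_unstable(data, beta)
    noisy_dist = dist + np.random.laplace(scale=1.0 / eps)
    threshold = np.log(1.0 / (2.0 * delta)) / eps
    if noisy_dist <= threshold:
        return None  # test failed: refuse to answer
    return np.median(data) + np.random.laplace(scale=beta / eps)
```

The design choice mirrored here is the one the framework rests on: noise is calibrated to the proposed local bound beta rather than the possibly unbounded global sensitivity, while the private test is what accounts for the delta in the guarantee.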
In the Earth's magnetosphere, there are fewer than a dozen dedicated probes beyond low-Earth orbit making in-situ observations at any given time. As a result, we have a poor understanding of its global structure and evolution, and of the mechanisms behind its main activity processes, namely magnetic storms and substorms. New Artificial Intelligence (AI) methods, including machine learning, data mining, and data assimilation, as well as new AI-enabled missions, will need to be developed to meet this Sparse Data challenge.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
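As a rough illustration of the patch-based strategy most respondents used for samples too large to process at once, a hedged numpy sketch with illustrative patch and stride sizes (real pipelines typically also pad borders and sample patches around annotated structures):

```python
import numpy as np

def extract_patches(volume, patch_size=(64, 64, 64), stride=(32, 32, 32)):
    """Slide a window over a 3D image and yield fixed-size training patches.
    Patch and stride sizes here are illustrative, not recommendations."""
    pz, py, px = patch_size
    sz, sy, sx = stride
    Z, Y, X = volume.shape
    for z in range(0, Z - pz + 1, sz):
        for y in range(0, Y - py + 1, sy):
            for x in range(0, X - px + 1, sx):
                yield volume[z:z + pz, y:y + py, x:x + px]
```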
Deep learning classifiers provide the most accurate means of automatically diagnosing diabetic retinopathy (DR) based on optical coherence tomography (OCT) and its angiography (OCTA). The power of these models is attributable in part to the inclusion of hidden layers that provide the complexity required to achieve a desired task. However, hidden layers also render algorithm outputs difficult to interpret. Here we introduce a novel biomarker activation map (BAM) framework based on generative adversarial learning that allows clinicians to verify and understand a classifier's decision-making. A data set including 456 macular scans was graded as non-referable or referable DR based on current clinical standards. A DR classifier that was used to evaluate our BAM was first trained on this data set. The BAM generation framework was designed by combining two U-shaped generators to provide meaningful interpretability to this classifier. The main generator was trained to take referable scans as input and produce an output that would be classified by the classifier as non-referable. The BAM is then constructed as the difference image between the output and input of the main generator. To ensure that the BAM only highlights classifier-utilized biomarkers, an assistant generator was trained to do the opposite, producing scans that would be classified as referable by the classifier from non-referable scans. The generated BAMs highlighted known pathologic features, including nonperfusion area and retinal fluid. A fully interpretable classifier based on these highlights could help clinicians better utilize and verify automated DR diagnosis.
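A hedged sketch of how the BAM itself can be assembled once the main generator is trained, assuming PyTorch-style tensors and a hypothetical generator module; the adversarial training of the two U-shaped generators is omitted, and taking the absolute difference is an illustrative choice:

```python
import torch

def biomarker_activation_map(main_generator, referable_scan):
    """Construct a BAM as the difference image between a referable input scan
    and the main generator's 'non-referable' version of it (hedged sketch)."""
    main_generator.eval()
    with torch.no_grad():
        non_referable_like = main_generator(referable_scan)  # output pushed toward non-referable
    bam = (referable_scan - non_referable_like).abs()        # regions the classifier relies on
    return bam
```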
Behavior constrained policy optimization has been demonstrated to be a successful paradigm for tackling Offline Reinforcement Learning. By exploiting historical transitions, a policy is trained to maximize a learned value function while constrained by the behavior policy to avoid a significant distributional shift. In this paper, we propose closed-form policy improvement operators. We make the novel observation that the behavior constraint naturally motivates the use of a first-order Taylor approximation, leading to a linear approximation of the policy objective. Additionally, as practical datasets are usually collected by heterogeneous policies, we model the behavior policies as a Gaussian Mixture and overcome the induced optimization difficulties by leveraging the LogSumExp lower bound and Jensen's inequality, giving rise to a closed-form policy improvement operator. We instantiate offline RL algorithms with our novel policy improvement operators and empirically demonstrate their effectiveness over state-of-the-art algorithms on the standard D4RL benchmark.
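For intuition on the two bounds mentioned above: the log-density of a Gaussian-mixture behavior policy contains a log-sum-exp over mixture components, which the following standard inequalities turn into tractable lower bounds (a sketch of the general inequalities, not the paper's exact derivation):

```latex
% Requires amsmath; mixture weights w_i >= 0 with \sum_i w_i = 1.
\begin{align}
\log \sum_i w_i e^{x_i} &\;\ge\; \max_i \bigl(\log w_i + x_i\bigr)
    && \text{(LogSumExp lower bound)} \\
\log \sum_i w_i e^{x_i} &\;\ge\; \sum_i w_i x_i
    && \text{(Jensen's inequality, by concavity of } \log\text{)}
\end{align}
```

With $x_i$ standing for the log-density of the $i$-th Gaussian component at a given action, either bound replaces the intractable mixture term by a quantity that is linear or pointwise in the components, which is what makes a closed-form update possible.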
Unhealthy dietary habits are considered the primary cause of multiple chronic diseases such as obesity and diabetes. Automatic food intake monitoring systems have the potential to improve the quality of life (QoL) of people with dietary-related diseases through dietary assessment. In this work, we propose a novel contact-less, radar-based food intake monitoring approach. Specifically, a Frequency Modulated Continuous Wave (FMCW) radar sensor is employed to recognize fine-grained eating and drinking gestures. A fine-grained eating/drinking gesture comprises a series of movements, from raising the hand to the mouth until moving the hand away from the mouth. A 3D temporal convolutional network (3D-TCN) is developed to detect and segment eating and drinking gestures in meal sessions by processing the Range-Doppler Cube (RD Cube). Unlike previous radar-based research, this work collects data in continuous meal sessions. We create a public dataset that contains 48 meal sessions (3121 eating gestures and 608 drinking gestures) from 48 participants with a total duration of 783 minutes. Four eating styles (fork & knife, chopsticks, spoon, hand) are included in this dataset. To validate the performance of the proposed approach, an 8-fold cross-validation method is applied. Experimental results show that our proposed 3D-TCN outperforms a model that combines a convolutional neural network and a long short-term memory network (CNN-LSTM), as well as a CNN-Bidirectional LSTM model (CNN-BiLSTM), in eating and drinking gesture detection. The 3D-TCN model achieves segmental F1-scores of 0.887 and 0.844 for eating and drinking gestures, respectively. These results indicate the feasibility of using radar for fine-grained eating and drinking gesture detection and segmentation in meal sessions.
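A hedged PyTorch sketch of a small 3D convolutional model over Range-Doppler Cubes with per-frame outputs, to illustrate the kind of input/output structure described; the layer sizes and class count are illustrative and this is not the paper's exact 3D-TCN:

```python
import torch
import torch.nn as nn

class TinyRDCubeNet(nn.Module):
    """Illustrative model: 3D convolutions over an RD Cube shaped
    (batch, 1, time, range, doppler), followed by a per-frame classifier
    that labels each time step (e.g., eating / drinking / other)."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # pool away range/doppler, keep time
        )
        self.classifier = nn.Conv1d(32, num_classes, kernel_size=1)

    def forward(self, x):                 # x: (B, 1, T, R, D)
        f = self.features(x)              # (B, 32, T, 1, 1)
        f = f.squeeze(-1).squeeze(-1)     # (B, 32, T)
        return self.classifier(f)         # (B, num_classes, T): per-frame logits
```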
The problem of approximating the Pareto front of a multiobjective optimization problem can be reformulated as the problem of finding a set that maximizes the hypervolume indicator. This paper establishes the analytical expression of the Hessian matrix of the mapping from a (fixed size) collection of $n$ points in the $d$-dimensional decision space (or $m$-dimensional objective space) to the scalar hypervolume indicator value. To define the Hessian matrix, the input set is vectorized, and the matrix is derived by analytical differentiation of the mapping from a vectorized set to the hypervolume indicator. The Hessian matrix plays a crucial role in second-order methods, such as the Newton-Raphson optimization method, and it can be used for the verification of local optimal sets. So far, the full analytical expression had only been established and analyzed for the relatively simple bi-objective case. This paper derives the full expression for arbitrary dimensions ($m \geq 2$ objective functions). For the practically important three-dimensional case, we also provide an asymptotically efficient algorithm with time complexity in $O(n\log n)$ for the exact computation of the Hessian matrix's non-zero entries. We establish a sharp bound of $12m-6$ on the number of non-zero entries. For the general $m$-dimensional case, a compact recursive analytical expression is established and its algorithmic implementation is discussed; this recursive expression also implies some sparsity results. To validate and illustrate the analytically derived algorithms and results, we provide a few numerical examples using Python and Mathematica implementations. Open-source implementations of the algorithms and testing data are made available as a supplement to this paper.
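For the simple bi-objective (minimization) case, a short numpy sketch shows the hypervolume of a non-dominated point set and a finite-difference Hessian of the vectorized mapping, which is one way to sanity-check analytical expressions at generic points; this is illustrative only and is neither the paper's analytical formulas nor its $O(n\log n)$ algorithm:

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume dominated by mutually non-dominated bi-objective points,
    relative to a reference point ref, assuming minimization."""
    pts = points[np.argsort(points[:, 0])]          # sort by first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def numerical_hessian(points, ref, h=1e-5):
    """Central finite-difference Hessian of the hypervolume with respect to the
    vectorized point set; a crude check of analytical results at generic points."""
    x0 = points.ravel().astype(float)
    n = x0.size
    H = np.zeros((n, n))
    f = lambda x: hypervolume_2d(x.reshape(points.shape), ref)
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x0 + e_i + e_j) - f(x0 + e_i - e_j)
                       - f(x0 - e_i + e_j) + f(x0 - e_i - e_j)) / (4 * h * h)
    return H

# Illustrative use: two non-dominated points and a reference point.
pts = np.array([[0.0, 1.0], [1.0, 0.0]])
print(hypervolume_2d(pts, ref=(2.0, 2.0)))   # 3.0
print(numerical_hessian(pts, ref=(2.0, 2.0)))
```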
Solar activity is usually caused by the evolution of solar magnetic fields. Magnetic field parameters derived from photospheric vector magnetograms of solar active regions have been used to analyze and forecast eruptive events such as solar flares and coronal mass ejections. Unfortunately, the most recent solar cycle 24 was relatively weak with few large flares, though it is the only solar cycle in which consistent time-sequence vector magnetograms have been available through the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) since its launch in 2010. In this paper, we look into another major instrument, namely the Michelson Doppler Imager (MDI) on board the Solar and Heliospheric Observatory (SOHO), which operated from 1996 to 2010. The SOHO/MDI data archive covers the more active solar cycle 23 with many large flares. However, SOHO/MDI only provides line-of-sight (LOS) magnetograms. We propose a new deep learning method, named MagNet, to learn from combined LOS magnetograms, Bx, and By taken by SDO/HMI along with H-alpha observations collected by the Big Bear Solar Observatory (BBSO), and to generate vector components Bx' and By', which would form vector magnetograms together with observed LOS data. In this way, we can expand the availability of vector magnetograms to the period from 1996 to the present. Experimental results demonstrate the good performance of the proposed method. To our knowledge, this is the first time that deep learning has been used to generate photospheric vector magnetograms of solar active regions for SOHO/MDI using SDO/HMI and H-alpha data.
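As a rough, hypothetical sketch of the kind of image-to-image mapping described (channels and layer sizes are illustrative assumptions, not MagNet's actual architecture):

```python
import torch
import torch.nn as nn

class LOSToVectorNet(nn.Module):
    """Hypothetical encoder-decoder: 2 input channels (LOS magnetogram, H-alpha
    image) mapped to 2 output channels (Bx', By'). Illustrative only."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),
        )

    def forward(self, x):                          # x: (B, 2, H, W)
        return self.decoder(self.encoder(x))       # (B, 2, H, W): predicted Bx', By'
```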
Solar flares, especially M- and X-class flares, are often associated with coronal mass ejections (CMEs). They are the most important sources of space weather effects and can severely impact the near-Earth environment. Thus it is essential to forecast flares, especially X-class ones, to mitigate their destructive and hazardous consequences. Here, we introduce several statistical and machine learning approaches to predict the Flare Index (FI) of an active region (AR), which quantifies the flare productivity of an AR by taking into account the numbers of flares of different classes within a certain time interval. Specifically, our sample includes 563 ARs that appeared on the solar disk from May 2010 to December 2017. The 25 magnetic parameters, provided by the Space-weather HMI Active Region Patches (SHARP) from the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO), characterize by proxy the coronal magnetic energy stored in ARs and are used as predictors. We investigate the relationship between these SHARP parameters and the FI of ARs with a machine learning algorithm (spline regression) and a resampling method (Synthetic Minority Over-sampling Technique for Regression with Gaussian Noise, SMOGN for short). Based on the established relationship, we are able to predict the FI value of a given AR within the next 1 day. Compared with four other popular machine learning algorithms, our methods improve the accuracy of FI prediction, especially for large FI values. In addition, we rank the importance of the SHARP parameters using the Borda count method, based on the rankings rendered by 9 different machine learning methods.
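For the parameter-importance step, a minimal Borda-count sketch in Python; the three SHARP keywords and the rankings below are illustrative, whereas in the abstract each of the 9 methods contributes one ranking of all 25 parameters:

```python
def borda_count(rankings):
    """Aggregate several rankings (best-to-worst lists of the same items) by
    Borda count: an item ranked r-th out of k items receives k - r points."""
    items = rankings[0]
    scores = {item: 0 for item in items}
    k = len(items)
    for ranking in rankings:
        for position, item in enumerate(ranking):
            scores[item] += k - 1 - position
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative use with made-up rankings of three SHARP parameters:
rankings = [
    ["TOTUSJH", "USFLUX", "MEANGAM"],
    ["USFLUX", "TOTUSJH", "MEANGAM"],
    ["TOTUSJH", "MEANGAM", "USFLUX"],
]
print(borda_count(rankings))  # overall importance order by Borda score
```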
Machine learning potentials are an important tool for molecular simulation, but their development has been held back by a shortage of high-quality datasets to train them on. We describe the SPICE dataset, a new quantum chemistry dataset for training potentials relevant to simulating drug-like small molecules interacting with proteins. It contains over 1.1 million conformations of small molecules, dimers, dipeptides, and solvated amino acids. It spans 15 elements, charged and uncharged molecules, and a wide range of covalent and non-covalent interactions. It provides forces and energies calculated at the {\omega}B97M-D3(BJ)/def2-TZVPPD level of theory, along with other useful quantities such as multipole moments and bond orders. We train a set of machine learning potentials on it and demonstrate that they can achieve chemical accuracy across a broad region of chemical space. It can serve as a valuable resource for creating transferable, ready-to-use potential functions for molecular simulation.
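A heavily hedged sketch of how one might iterate over such a dataset if it is distributed as HDF5; the file name and group/dataset keys below are assumptions for illustration and should be checked against the dataset's documentation:

```python
import h5py

# Hypothetical file name and keys; consult the SPICE documentation for the
# actual layout before relying on these names.
with h5py.File("SPICE.hdf5", "r") as f:
    for name, group in f.items():
        conformations = group["conformations"][...]   # assumed shape: (n_conf, n_atoms, 3)
        energies = group["dft_total_energy"][...]      # assumed shape: (n_conf,)
        gradients = group["dft_total_gradient"][...]   # assumed shape: (n_conf, n_atoms, 3)
        forces = -gradients                            # forces are negative energy gradients
        break  # illustrate with the first record only
```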