Traditionally, music mixing involves recording instruments as clean, individual tracks and blending them into a final mixture using audio effects and expert knowledge (e.g., that of a mixing engineer). In recent years, the automation of music production tasks has become an emerging field, in which both rule-based and machine learning approaches have been explored. However, the lack of dry or clean instrument recordings limits the performance of such models, leaving them far from professional human-made mixes. We explore whether out-of-domain data, such as wet or processed multitrack music recordings, can be repurposed to train supervised deep learning models that bridge the current gap in automatic mixing quality. To achieve this, we propose a novel data preprocessing method that allows the models to perform automatic music mixing. We also redesign a listening test method for evaluating music mixing systems. We validate our results with experienced mixing engineers as participants.
Given the recent advances in music source separation and automatic mixing, removing audio effects from music tracks is a meaningful step toward developing automatic mixing systems. This paper focuses on removing distortion audio effects applied to guitar tracks in music production. We explore whether effect removal can be solved with neural networks designed for source separation and audio effect modeling. Our approach proves particularly effective at removing the effect from both processed mixtures and clean signals. Compared with a state-of-the-art solution based on sparse optimization, these models achieve better quality and faster inference. We show that the models are suitable not only for the distortion effect studied but also for other types of distortion effects. In discussing the results, we highlight the usefulness of multiple evaluation metrics for assessing different aspects of reconstruction in distortion effect removal.
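The kind of nonlinearity such models learn to invert can be illustrated with a minimal sketch: a tanh waveshaper, a standard textbook model of distortion/overdrive. The gain value and the sine test tone are our own illustrative assumptions, not details from the paper.

```python
import math

def distort(signal, gain=5.0):
    """Apply a simple tanh waveshaper, a common model of distortion."""
    return [math.tanh(gain * s) for s in signal]

# A clean 440 Hz sine tone at 44.1 kHz and its distorted counterpart.
clean = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(64)]
wet = distort(clean)

# Distortion compresses peaks: the wet signal stays within +/-1 and is
# pushed toward a square wave, which is what a removal model must undo.
assert max(abs(s) for s in wet) <= 1.0
```

Effect removal then amounts to estimating `clean` from `wet`, which is ill-posed because the waveshaper discards amplitude information near saturation.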
Accurate uncertainty quantification is necessary to enhance the reliability of deep learning models in real-world applications. In the case of regression tasks, prediction intervals (PIs) should be provided along with the deterministic predictions of deep learning models. Such PIs are useful or "high-quality" as long as they are sufficiently narrow while capturing most of the probability density. In this paper, we present a method to learn prediction intervals for regression-based neural networks automatically, in addition to the conventional target predictions. In particular, we train two companion neural networks: one with a single output, the target estimate, and another with two outputs, the upper and lower bounds of the corresponding PI. Our main contribution is the design of a loss function for the PI-generation network that takes into account the output of the target-estimation network and has two optimization objectives: minimizing the mean prediction interval width and ensuring PI integrity using constraints that implicitly maximize the prediction interval probability coverage. Both objectives are balanced within the loss function using a self-adaptive coefficient. Furthermore, we apply a Monte Carlo-based approach to evaluate the model uncertainty in the learned PIs. Experiments using a synthetic dataset, six benchmark datasets, and a real-world crop yield prediction dataset showed that our method was able to maintain a nominal probability coverage and produce narrower PIs, without detriment to its target estimation accuracy, when compared to the PIs generated by three state-of-the-art neural-network-based methods.
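The two competing objectives can be sketched in a few lines. This is a schematic of the idea only: the quadratic shortfall penalty and the fixed balancing coefficient `lam` are our simplifications of the paper's self-adaptive formulation.

```python
def pi_loss(y, lower, upper, target_coverage=0.95, lam=1.0):
    """Schematic PI loss: mean interval width plus a coverage penalty.

    `lam` balances the two objectives; the paper adapts a similar
    coefficient automatically during training.
    """
    n = len(y)
    width = sum(u - l for l, u in zip(lower, upper)) / n
    covered = sum(1 for yi, l, u in zip(y, lower, upper) if l <= yi <= u) / n
    # Penalize only a coverage shortfall, not over-coverage.
    penalty = max(0.0, target_coverage - covered) ** 2
    return width + lam * penalty, covered

y = [1.0, 2.0, 3.0, 4.0]
lower = [0.5, 1.6, 2.4, 4.2]   # the last interval misses its target
upper = [1.5, 2.5, 3.5, 5.0]
loss, cov = pi_loss(y, lower, upper)
```

Minimizing `width` alone collapses the intervals; the penalty term pushes coverage back toward the nominal level, which is the tension the self-adaptive coefficient manages.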
A quantitative assessment of the global importance of an agent in a team is as valuable as gold for strategists, decision-makers, and sports coaches. Yet, retrieving this information is not trivial, since in a cooperative task it is hard to isolate the performance of an individual from that of the whole team. Moreover, the relationship between the role of an agent and its personal attributes is not always clear. In this work, we conceive an application of Shapley analysis for studying the contribution of both agent policies and attributes, putting them on an equal footing. Since the computational complexity is NP-hard and scales exponentially with the number of participants in a transferable-utility coalitional game, we resort to exploiting a priori knowledge about the rules of the game to constrain the relations between the participants over a graph. We hence propose a method to determine a Hierarchical Knowledge Graph of agents' policies and features in a Multi-Agent System. Assuming a simulator of the system is available, the graph structure allows us to exploit dynamic programming to assess the importances much faster. We test the proposed approach in a proof-of-case environment deploying both hardcoded policies and policies obtained via Deep Reinforcement Learning. The proposed paradigm is less computationally demanding than naively computing the Shapley values and provides great insight not only into the importance of an agent in a team but also into the attributes needed to deploy the policy at its best.
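The exact Shapley computation the paper avoids can be sketched on a toy game; the factorial growth of the permutation loop below is precisely what motivates structure-exploiting approximations. The three-player game is our own illustration.

```python
from itertools import permutations

def shapley(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over all join orders. The cost grows factorially with |players|."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    return {p: v / len(orders) for p, v in phi.items()}

# Toy cooperative game: the team scores 1 only if 'a' plays with a helper.
def value(coalition):
    return 1.0 if 'a' in coalition and len(coalition) >= 2 else 0.0

phi = shapley(['a', 'b', 'c'], value)  # phi['a'] = 2/3, helpers get 1/6 each
```

The efficiency property holds by construction: the values sum to the grand coalition's worth, so they can be read as a fair split of team performance across agents (and, in the paper's framing, across their attributes as well).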
In recent years there has been growing attention to interpretable machine learning models which can give explanatory insights on their behavior. Thanks to their interpretability, decision trees have been intensively studied for classification tasks, and due to the remarkable advances in mixed-integer programming (MIP), various approaches have been proposed to formulate the problem of training an Optimal Classification Tree (OCT) as a MIP model. We present a novel mixed-integer quadratic formulation for the OCT problem, which exploits the generalization capabilities of Support Vector Machines for binary classification. Our model, denoted as Margin Optimal Classification Tree (MARGOT), encompasses the use of maximum margin multivariate hyperplanes nested in a binary tree structure. To enhance the interpretability of our approach, we analyse two alternative versions of MARGOT, which include feature selection constraints inducing local sparsity of the hyperplanes. First, MARGOT has been tested on non-linearly separable synthetic datasets in 2-dimensional feature space to provide a graphical representation of the maximum margin approach. Finally, the proposed models have been tested on benchmark datasets from the UCI repository. The MARGOT formulation turns out to be easier to solve than other OCT approaches, and the generated tree better generalizes on new observations. The two interpretable versions are effective in selecting the most relevant features and maintaining good prediction quality.
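The maximum-margin splits nested in the tree can be illustrated with the geometric margin of a single multivariate hyperplane. The toy points and the candidate separator below are our own hand-checked example, not data from the paper.

```python
import math

def geometric_margin(w, b, X, y):
    """Smallest signed distance from correctly labelled points to the
    hyperplane w.x + b = 0; MARGOT-style splits maximize this margin."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return min(yi * (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm
               for x, yi in zip(X, y))

# Two linearly separable point sets in the plane (labels +1 / -1).
X = [(2.0, 2.0), (3.0, 3.0), (-2.0, -2.0), (-3.0, -1.0)]
y = [1, 1, -1, -1]

# A separating hyperplane x1 + x2 = 0: a positive margin certifies
# that every point lies on the correct side.
margin = geometric_margin([1.0, 1.0], 0.0, X, y)
```

An OCT of depth d applies one such hyperplane at each internal node; the MIP formulation searches for the (w, b) per node that maximizes these margins jointly, subject to optional sparsity constraints on w for interpretability.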
translated by 谷歌翻译
Hierarchical time series are common in several applied fields. Forecasts are required to be coherent, that is, to satisfy the constraints given by the hierarchy. The most popular technique to enforce coherence is called reconciliation, which adjusts the base forecasts computed for each time series. However, recent works on probabilistic reconciliation present several limitations. In this paper, we propose a new approach based on conditioning to reconcile any type of forecast distribution. We then introduce a new algorithm, called Bottom-Up Importance Sampling, to efficiently sample from the reconciled distribution. It can be used for any base forecast distribution: discrete, continuous, or in the form of samples, providing a major speedup compared to the current methods. Experiments on several temporal hierarchies show a significant improvement over base probabilistic forecasts.
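The coherence constraint itself is easy to sketch: draw joint samples of the bottom-level forecasts and aggregate them, so every sample satisfies the hierarchy by construction. This illustrates only the coherence property, not the paper's conditioning step, which reweights such joint draws; the Gaussian base forecasts are our own illustrative choice.

```python
import random

def coherent_samples(bottom_samplers, n=1000, seed=0):
    """Draw joint samples of bottom-level forecasts and aggregate them:
    every sample satisfies total = sum of children by construction."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        bottom = [s(rng) for s in bottom_samplers]
        samples.append((sum(bottom), bottom))
    return samples

# A two-level hierarchy: two child series with Gaussian base forecasts.
samplers = [lambda r: r.gauss(10.0, 1.0), lambda r: r.gauss(5.0, 2.0)]
samples = coherent_samples(samplers)

# Coherence holds exactly for every sample.
assert all(abs(total - sum(bottom)) < 1e-12 for total, bottom in samples)
```

Because sampling proceeds bottom-up, the same scheme works whatever form the base forecasts take (discrete, continuous, or empirical samples), which is the setting the Bottom-Up Importance Sampling algorithm targets.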
translated by 谷歌翻译
Owing to the existence of adversarial attacks, the use of neural networks in safety-critical systems requires safe and robust models. Knowing the minimal adversarial perturbation of any input x, or, equivalently, the distance of x from the classification boundary, allows classification robustness to be evaluated, thus providing certifiable predictions. Unfortunately, state-of-the-art techniques for computing such a distance are computationally expensive and hence unsuitable for online applications. This work proposes a novel family of classifiers, namely Signed Distance Classifiers (SDCs), which, from a theoretical perspective, directly output the exact distance of x from the classification boundary rather than a probability score (e.g., softmax). SDCs represent a family of robust-by-design classifiers. To practically address the theoretical requirements of an SDC, a novel network architecture named Unitary-Gradient Neural Network is presented. Experimental results show that the proposed architecture approximates a signed distance classifier, thus allowing an online certifiable classification of x at the cost of a single inference.
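For a linear classifier the quantity an SDC outputs has a closed form, which makes the idea concrete; the numbers below are our own toy example, and for deep networks the paper's unitary-gradient architecture approximates the same property.

```python
import math

def signed_distance(w, b, x):
    """Signed Euclidean distance from x to the boundary w.x + b = 0:
    positive on one side, negative on the other. An SDC outputs this
    quantity directly instead of a softmax score."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm

d = signed_distance([3.0, 4.0], -5.0, [3.0, 4.0])
# |d| is the radius of the minimal adversarial perturbation: every input
# within that distance of x is certified to keep the same label.
```

A single forward pass therefore yields both the prediction (the sign of d) and its certificate (the magnitude of d), which is the online certification the abstract refers to.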
Federated learning is a popular strategy for training models on distributed, sensitive data while preserving data privacy. Prior work has identified a range of security threats to federated learning settings that poison the data or the model. However, federated learning is a networked system, and the communication between clients and the server plays a critical role in the learning task's performance. We highlight how communication introduces another vulnerability surface in federated learning and study the impact of network-level adversaries on training federated learning models. We show that attackers who drop network traffic from carefully selected clients can significantly degrade model accuracy on a target subpopulation. Moreover, we show that a coordinated poisoning campaign from a small number of clients can amplify the dropping attack. Finally, we develop a server-side defense that mitigates the impact of the attacks by identifying and up-sampling clients likely to contribute positively to target accuracy. We comprehensively evaluate our attacks and defenses on three datasets, assuming encrypted communication channels and an attacker with partial visibility of the network.
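The server-side idea can be sketched as re-weighted averaging: clients believed to contribute positively to the target subpopulation get a larger weight in the aggregate. This is a schematic of the up-sampling notion only; the specific updates, weights, and identification rule are our own assumptions, not the paper's exact defense.

```python
def weighted_fedavg(updates, weights):
    """Server-side aggregation that up-samples (re-weights) selected
    clients before averaging their model updates."""
    total = sum(weights)
    dim = len(updates[0])
    return [sum(w * u[i] for w, u in zip(weights, updates)) / total
            for i in range(dim)]

# Three surviving client updates; suppose client 3 is identified as
# serving the target subpopulation whose traffic the adversary attacked.
updates = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
plain = weighted_fedavg(updates, [1, 1, 1])      # uniform FedAvg
defended = weighted_fedavg(updates, [1, 1, 3])   # up-sample client 3
```

Up-weighting pulls the aggregate back toward the under-represented client's update, compensating for the contributions the adversary removed from the round.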
Factor graphs are graphical models used to represent a variety of problems in robotics, such as Structure from Motion (SfM), Simultaneous Localization and Mapping (SLAM), and calibration. Usually, at their core, they have an optimization problem whose terms depend only on a small subset of variables. Factor graph solvers exploit the locality of the problem to drastically reduce the computation time of iterative least-squares (ILS) methods. Although extremely powerful, their application is usually limited to unconstrained problems. In this paper, we model constraints over variables within factor graphs by introducing a factor graph version of the method of Lagrange multipliers. We show the potential of our method by presenting a full navigation stack based on factor graphs. Unlike standard navigation stacks, we can model both localization and local planning as optimal control problems with factor graphs and solve them using standard ILS methods. We validate our approach in real-world autonomous navigation scenarios and compare it with the de facto standard navigation stack implemented in ROS. Comparative experiments show that, for the application at hand, our system outperforms the runtime of the standard nonlinear programming solver Interior Point Optimizer (IPOPT) while achieving similar solutions.
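The Lagrange-multiplier mechanism the paper embeds into factor graphs can be shown on the smallest constrained least-squares problem; the toy numbers below are our own example, not part of the navigation stack.

```python
def constrained_projection(a, c, d):
    """Minimize ||x - a||^2 subject to c.x = d via a Lagrange multiplier:
    stationarity gives x = a + (lambda/2) c, and enforcing the constraint
    fixes the multiplier, yielding x = a + c * (d - c.a) / (c.c)."""
    ca = sum(ci * ai for ci, ai in zip(c, a))
    cc = sum(ci * ci for ci in c)
    lam = (d - ca) / cc
    return [ai + lam * ci for ai, ci in zip(a, c)]

# Project the unconstrained optimum (1, 1) onto the constraint x1 + x2 = 4.
x = constrained_projection([1.0, 1.0], [1.0, 1.0], 4.0)  # -> [2.0, 2.0]
```

In the factor graph formulation, each such constraint becomes an extra factor whose multiplier is estimated jointly with the state, so the whole problem stays solvable by the same sparse ILS machinery.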
One of the most common causes of service disruption in online systems stems from a widely popular cyber attack known as Distributed Denial of Service (DDoS), in which a network of infected devices (a botnet) is exploited, on the attacker's command, to overwhelm a service's computational capacity by flooding it with requests. Such attacks are coordinated via Domain Generation Algorithms (DGAs) over the Domain Name System (DNS), a stealthy connection strategy that nevertheless leaves suspicious data patterns. Analytical advances have been made to discover such threats, and most of them have found machine learning (ML) to be a solution that can be highly effective in analyzing and classifying massive amounts of data. Despite achieving remarkable results, however, ML models retain a certain degree of obscurity in their decision-making process. To address this problem, a branch of ML known as Explainable ML tries to break down the black-box nature of classifiers and make them interpretable and human-readable. This work addresses the problem of Explainable ML in the context of botnet and DGA detection and, to the best of our knowledge, is the first to concretely break down the decisions of ML classifiers designed for botnet/DGA detection, thus providing both global and local explanations.