Computing the Jacobian of the solution of an optimization problem is a central problem in machine learning, with applications in hyperparameter optimization, meta-learning, optimization as a layer, and dataset distillation, to name a few. Unrolled differentiation is a popular heuristic that approximates the solution with an iterative solver and differentiates it through the computational path. This work provides a non-asymptotic convergence-rate analysis of this approach on quadratic objectives for gradient descent and the Chebyshev method. We show that to ensure convergence of the Jacobian, we can either 1) choose a large learning rate leading to fast asymptotic convergence but accept that the algorithm may have an arbitrarily long burn-in phase, or 2) choose a smaller learning rate leading to immediate but slower convergence. We refer to this phenomenon as the curse of unrolling. Finally, we discuss open problems relative to this approach, such as deriving a practical update rule for the optimal unrolling strategy, and make new connections with the field of Sobolev orthogonal polynomials.
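As a minimal sketch (not the paper's code; the quadratic, dimensions, and step size are illustrative choices), the mechanics of unrolled differentiation on a quadratic objective can be shown in a few lines of NumPy: the Jacobian of the iterate with respect to a parameter theta of the objective is propagated alongside the iterate itself, and for a sufficiently small learning rate it converges to the exact Jacobian.

```python
import numpy as np

# Inner problem: f(x, theta) = 0.5 * x^T A x - theta * b^T x,
# minimized at x*(theta) = theta * A^{-1} b, so dx*/dtheta = A^{-1} b.
rng = np.random.default_rng(0)
d = 5
M = rng.standard_normal((d, d))
A = M @ M.T + np.eye(d)                    # symmetric positive definite
b = rng.standard_normal(d)
theta = 2.0

alpha = 1.0 / np.linalg.eigvalsh(A).max()  # "small" step: 1 / largest eigenvalue

x = np.zeros(d)   # iterate of gradient descent
J = np.zeros(d)   # dx_k / dtheta, unrolled through the computational path
for _ in range(3000):
    x = x - alpha * (A @ x - theta * b)    # gradient step
    J = J - alpha * (A @ J - b)            # the same step, differentiated w.r.t. theta

J_exact = np.linalg.solve(A, b)
print(np.linalg.norm(J - J_exact))         # small: the unrolled Jacobian has converged
```

With this conservative step size the Jacobian converges monotonically; the paper's point is that larger steps accelerate the asymptotic rate at the price of a possibly long burn-in.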
Real-world competitive games, such as chess, Go, or StarCraft II, rely on Elo models to measure the strength of their players. Since these games are not fully transitive, using Elo implicitly assumes that they have a strong transitive component that can be correctly identified and extracted. In this study, we investigate the challenge of identifying the strength of the transitive component in games. First, we show that the Elo model can fail to extract this transitive component even in elementary transitive games. Then, based on this observation, we propose an extension of the Elo score: we end up with a disc ranking system that assigns each player two scores, which we refer to as skill and consistency. Finally, we provide empirical validation on payoff matrices of real-world games played by bots and humans.
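For context, the standard Elo model that this work extends can be sketched as follows. This is a generic, illustrative implementation; the K-factor and the 400-point logistic scale are the conventional defaults, not values taken from the paper.

```python
def elo_expected(r_a, r_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32.0):
    """One Elo update after a game; score_a is 1 (win), 0.5 (draw), or 0 (loss)."""
    e_a = elo_expected(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1.0 - score_a) - (1.0 - e_a))

# Two equally rated players: the winner gains k/2 = 16 points.
r_winner, r_loser = elo_update(1000.0, 1000.0, 1.0)
```

Because the model summarizes each player by a single scalar, it can only represent transitive strength, which is the limitation the proposed skill/consistency disc ranking addresses.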
Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques. In this work, we study first-order methods when the inner optimization problem is convex but non-smooth. We show that the forward-mode differentiation of proximal gradient descent and proximal coordinate descent yields sequences of Jacobians that converge toward the exact Jacobian. Using implicit differentiation, we show that it is possible to leverage the non-smoothness of the inner problem to speed up the computation. Finally, we provide a bound on the error made on the hypergradient when the inner optimization problem is solved approximately. Results on regression and classification problems reveal computational benefits for hyperparameter optimization, especially when multiple hyperparameters are required.
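A minimal sketch of the forward-mode idea on the Lasso (illustrative data, step size, and iteration count; not the paper's implementation): each proximal gradient step is differentiated with respect to the regularization parameter lam, and the resulting Jacobian agrees with the one obtained by implicit differentiation restricted to the support.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(0)
n, d = 50, 10
D = rng.standard_normal((n, d))
y = rng.standard_normal(n)
lam = 0.5                                         # the hyperparameter
alpha = 1.0 / np.linalg.eigvalsh(D.T @ D).max()   # step size for prox gradient

x = np.zeros(d)   # Lasso iterate
J = np.zeros(d)   # dx_k / dlam, propagated in forward mode
for _ in range(5000):
    z = x - alpha * (D.T @ (D @ x - y))    # gradient step on the smooth part
    dz = J - alpha * (D.T @ (D @ J))       # its derivative w.r.t. lam
    x = soft_threshold(z, alpha * lam)
    # Differentiating the prox: identity on the support, zero elsewhere,
    # plus the explicit dependence of the threshold alpha * lam on lam.
    active = np.abs(z) > alpha * lam
    J = np.where(active, dz - alpha * np.sign(z), 0.0)

# Implicit differentiation on the support S gives the same Jacobian:
S = x != 0
J_implicit = -np.linalg.solve(D[:, S].T @ D[:, S], np.sign(x[S]))
```

The sparsity of the prox derivative is exactly the non-smooth structure the abstract says can be exploited: only the active coordinates carry Jacobian information.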
Sky-image-based solar forecasting using deep learning has been recognized as a promising approach to reducing the uncertainty in solar power generation. However, one of the biggest challenges is the lack of massive and diversified sky image samples. In this study, we present a comprehensive survey of open-source ground-based sky image datasets for very short-term solar forecasting (i.e., forecasting horizons of less than 30 minutes), as well as related research areas which can potentially help improve solar forecasting methods, including cloud segmentation, cloud classification and cloud motion prediction. We first identify 72 open-source sky image datasets that satisfy the needs of machine/deep learning. Then a database of information about various aspects of the identified datasets is constructed. To evaluate each surveyed dataset, we further develop a multi-criteria ranking system based on 8 dimensions of the datasets which could have important impacts on usage of the data. Finally, we provide insights on the usage of these datasets for different applications. We hope this paper can provide an overview for researchers who are looking for datasets for very short-term solar forecasting and related areas.
Neural networks can be trained to solve regression problems by using gradient-based methods to minimize the square loss. However, practitioners often prefer to reformulate regression as a classification problem, observing that training on the cross entropy loss results in better performance. By focusing on two-layer ReLU networks, which can be fully characterized by measures over their feature space, we explore how the implicit bias induced by gradient-based optimization could partly explain the above phenomenon. We provide theoretical evidence that the regression formulation yields a measure whose support can differ greatly from that for classification, in the case of one-dimensional data. Our proposed optimal supports correspond directly to the features learned by the input layer of the network. The different nature of these supports sheds light on possible optimization difficulties the square loss could encounter during training, and we present empirical results illustrating this phenomenon.
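The regression-as-classification reformulation that the abstract refers to is commonly implemented by binning the continuous targets; the following is a hypothetical minimal sketch (the bin count and the expectation-based decoding rule are illustrative choices, not the paper's).

```python
import numpy as np

def to_classification(y, k):
    """Discretize continuous targets into k equal-width bins.

    Returns integer class labels and the bin centers used for decoding.
    """
    edges = np.linspace(y.min(), y.max(), k + 1)
    labels = np.clip(np.digitize(y, edges[1:-1]), 0, k - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return labels, centers

def from_classification(probs, centers):
    """Decode a regression estimate as the expectation over bin centers."""
    return probs @ centers
```

A network is then trained with cross entropy on `labels`, and its softmax output is mapped back to a real value with `from_classification`; the paper's analysis concerns how the features learned under the two losses differ.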
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Solar forecasting from ground-based sky images using deep learning models has shown great promise in reducing the uncertainty in solar power generation. One of the biggest challenges for training deep learning models is the availability of labeled datasets. With more and more sky image datasets open sourced in recent years, the development of accurate and reliable solar forecasting methods has seen huge growth in potential. In this study, we explore three different training strategies for deep-learning-based solar forecasting models by leveraging three heterogeneous datasets collected around the world with drastically different climate patterns. Specifically, we compare the performance of models trained individually on local datasets (local models) and models trained jointly on the fusion of multiple datasets from different locations (global models), and we further examine the knowledge transfer from pre-trained solar forecasting models to a new dataset of interest (transfer learning models). The results suggest that the local models work well when deployed locally, but significant errors are observed in the scale of the predictions when the models are applied offsite. The global model can adapt well to individual locations, although the possible increase in training effort needs to be taken into account. Pre-training models on a large and diversified source dataset and transferring to a local target dataset generally achieves superior performance over the other two training strategies. Transfer learning brings the most benefits when there are limited local data: with 80% less training data, it can achieve a 1% improvement over the local baseline model trained on the entire dataset. Therefore, we call on the solar forecasting community to contribute to a global dataset containing a massive amount of imagery and displaying diversified samples with a range of sky conditions.
Robustness studies of black-box models are recognized as a necessary task, both for numerical models based on structural equations and for predictive models learned from data. These studies must assess the model's robustness to possible misspecifications of its inputs (e.g., covariate shift). The study of black-box models through the prism of uncertainty quantification (UQ) is often based on sensitivity analysis involving a probabilistic structure imposed on the inputs, while ML models are solely constructed from observed data. Our work aims to unify the UQ and ML interpretability approaches by providing relevant and easy-to-use tools for both paradigms. To provide a generic and understandable framework for robustness studies, we define perturbations of the input information that rely on quantile constraints and on projections with respect to the Wasserstein distance between probability measures, while preserving their dependence structure. We show that this perturbation problem can be solved analytically. Ensuring regularity constraints by means of isotonic polynomial approximations leads to smoother perturbations, which can be more suitable in practice. Numerical experiments on real case studies from the UQ and ML fields highlight the computational feasibility of such studies and provide local and global insights into the robustness of black-box models to input perturbations.
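As a small illustration of the quantile machinery underlying such perturbations (not the paper's analytic projection itself): in one dimension, the 2-Wasserstein distance between two empirical distributions with the same number of samples reduces to comparing sorted samples, i.e., empirical quantile functions.

```python
import numpy as np

def wasserstein2_1d(x, y):
    """W2 distance between two equal-size empirical 1-D distributions,
    computed through the quantile (sorted-sample) coupling."""
    xs = np.sort(np.asarray(x, dtype=float))
    ys = np.sort(np.asarray(y, dtype=float))
    return float(np.sqrt(np.mean((xs - ys) ** 2)))

# Shifting a sample by a constant moves it by exactly that distance in W2.
d_shift = wasserstein2_1d([0.0, 1.0], [1.0, 2.0])
```

This quantile representation is what makes quantile-constrained Wasserstein projections analytically tractable in one dimension.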
When analyzing the electroencephalogram (EEG), neurologists often search for various "events of interest". To support them in this task, various machine-learning-based algorithms have been developed. Most of these algorithms treat the problem as classification, thereby independently processing signal segments and ignoring the temporal dependencies inherent to events of ongoing duration. At inference time, the model outputs must be post-processed to detect the actual events. We propose an end-to-end, deep-learning-based event detection approach (EventNet) that works directly with events as learning targets, stepping away from ad hoc post-processing schemes that turn model outputs into events. We compare EventNet with state-of-the-art approaches for artifact and epileptic seizure detection, two event types with highly variable durations. EventNet shows improved performance in detecting both event types. These results demonstrate the power of treating events as direct learning targets instead of obtaining them through ad hoc post-processing. Our event detection framework can easily be extended to other event detection problems in signal processing, as the deep learning backbone does not depend on any task-specific features.
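The ad hoc post-processing that EventNet is designed to avoid typically looks like the following sketch: threshold the per-segment probabilities of a classifier and merge consecutive supra-threshold segments into (start, end) events. The threshold and event representation are illustrative, not taken from the paper.

```python
def segments_to_events(probs, threshold=0.5):
    """Merge consecutive supra-threshold segments into half-open (start, end) events."""
    events, start = [], None
    for i, p in enumerate(probs):
        if p >= threshold and start is None:
            start = i                      # event begins
        elif p < threshold and start is not None:
            events.append((start, i))      # event ends
            start = None
    if start is not None:                  # event runs to the end of the signal
        events.append((start, len(probs)))
    return events
```

Such a scheme introduces extra hyperparameters (threshold, minimum duration, merge gaps) that are tuned outside the learning objective, which is precisely what motivates learning events end to end.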
Collision detection appears as a canonical operation in a large range of robotics applications, from robot control to simulation, including motion planning and estimation. While the seminal works on the topic date back to the 80s, it is only recently that the question of properly differentiating collision detection has emerged as a central issue, notably thanks to the ongoing and various efforts made by the scientific community around the topic of differentiable physics. Yet, very few solutions have been suggested so far, and only under strong assumptions on the nature of the shapes involved. In this work, we introduce a generic and efficient approach to compute the derivatives of collision detection for any pair of convex shapes, notably by leveraging randomized smoothing techniques, which have been shown to be particularly well suited to capturing the derivatives of non-smooth problems. This approach is implemented in the HPP-FCL and Pinocchio ecosystems and evaluated on classic datasets and problems of the robotics literature, showing timings of a few microseconds to compute informative derivatives directly exploitable by many real robotic applications, including differentiable simulation.
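The randomized-smoothing idea can be sketched generically (an illustrative zeroth-order estimator, not the HPP-FCL implementation): derivatives of a non-smooth function are estimated as Monte-Carlo averages of finite differences taken along random Gaussian directions, which yields the gradient of a Gaussian-smoothed surrogate.

```python
import numpy as np

def smoothed_grad(f, x, sigma=1e-2, n_samples=5000, seed=0):
    """Monte-Carlo estimate of the gradient of the Gaussian-smoothed f at x."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((n_samples, x.size))
    # Central finite differences along random directions (lower variance
    # than the one-sided estimator).
    diffs = np.array([f(x + sigma * ui) - f(x - sigma * ui) for ui in u])
    return (diffs[:, None] * u).mean(axis=0) / (2.0 * sigma)

# A non-smooth test function: f(x) = ||x||_1 has gradient (sign of x) away
# from the kinks; at x = (1, -2) the true gradient is (1, -1).
g = smoothed_grad(lambda v: np.abs(v).sum(), np.array([1.0, -2.0]))
```

In the collision-detection setting, `f` would be a distance or separation function between two convex shapes, whose non-smoothness at contact is exactly what smoothing regularizes.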