Solar plages are bright chromospheric features observed in Ca II K photographic observations of the Sun. They are regions of high magnetic field concentration, and thus tracers of the Sun's magnetic activity, and they are among the most important features for studying the Sun's long-term variability, since Ca II K observations have been recorded for more than a century. However, detecting plages from century-long data archives is a non-trivial task that requires significant human resources to carry out manually. Therefore, in this study, we propose an image processing algorithm that can identify solar plages from Ca II K photographic observations. The proposed algorithm is implemented on archival data from the Kodaikanal Solar Observatory. To ensure that the algorithm works well regardless of noise level, brightness, and other image properties, we randomly draw samples of images from the data archive to test it.
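As an illustration of the kind of image processing involved, here is a minimal sketch of a plage-detection step built on intensity thresholding and connected-component filtering; the threshold rule, parameter values, and function name are assumptions for illustration, not the algorithm from the paper.

```python
import numpy as np
from scipy import ndimage

def detect_plages(image, n_sigma=1.5, min_area=50):
    """Flag bright chromospheric regions in a Ca II K image (sketch).

    image    : 2-D float array of disk pixels (off-disk masked as NaN)
    n_sigma  : threshold in standard deviations above the median intensity
    min_area : minimum region size in pixels to count as a plage
    """
    disk = image[np.isfinite(image)]
    threshold = np.median(disk) + n_sigma * disk.std()
    bright = np.nan_to_num(image) > threshold          # candidate pixels
    labels, n = ndimage.label(bright)                  # connected regions
    sizes = ndimage.sum(bright, labels, range(1, n + 1))
    keep = np.flatnonzero(sizes >= min_area) + 1       # drop small specks
    return np.isin(labels, keep)                       # boolean plage mask
```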
This paper presents SVAM (Sequential Variance-Altered MLE), a unified framework for learning generalized linear models under adversarial label corruption in training data. SVAM extends to tasks such as least squares regression, logistic regression, and gamma regression, whereas many existing works on learning with label corruptions focus only on least squares regression. SVAM is based on a novel variance reduction technique that may be of independent interest and works by iteratively solving weighted MLEs over variance-altered versions of the GLM objective. SVAM offers provable model recovery guarantees superior to the state-of-the-art for robust regression even when a constant fraction of training labels are adversarially corrupted. SVAM also empirically outperforms several existing problem-specific techniques for robust regression and classification. Code for SVAM is available at https://github.com/purushottamkar/svam/
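A minimal sketch of the SVAM-style iteration for the least-squares instance follows; the exact weighting rule and the geometric schedule for the variance parameter are paraphrased assumptions, and the repository linked above contains the authors' actual implementation.

```python
import numpy as np

def svam_least_squares(X, y, beta0=0.01, xi=1.05, n_iter=100):
    """Sketch of SVAM-style robust least squares.

    Repeatedly solves a weighted MLE in which each point's weight comes
    from a variance-altered Gaussian likelihood of its current residual;
    beta (an inverse variance) is increased geometrically so that points
    with large residuals -- likely corrupted labels -- are down-weighted
    more and more aggressively as the estimate improves.
    """
    n, d = X.shape
    w = np.linalg.lstsq(X, y, rcond=None)[0]    # ordinary LS warm start
    beta = beta0
    for _ in range(n_iter):
        residuals = y - X @ w
        s = np.exp(-0.5 * beta * residuals**2)  # variance-altered weights
        # weighted MLE step: argmin_w sum_i s_i (y_i - x_i . w)^2
        XtS = X.T * s
        w = np.linalg.solve(XtS @ X + 1e-8 * np.eye(d), XtS @ y)
        beta *= xi                              # sharpen the likelihood
    return w
```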
Non-convex AC optimal power flow (AC-OPF) is a fundamental optimization problem in power system analysis. The computational complexity of conventional solvers is typically high and not suitable for large-scale networks in real-time operation. Hence, deep learning based approaches, which shift the time-consuming training process offline, have gained intensive attention. Supervised learning methods may yield a feasible AC-OPF solution with a small optimality gap, but they often need conventional solvers to generate the training dataset. This paper proposes an end-to-end unsupervised learning based framework for AC-OPF. We develop a deep neural network that outputs a partial set of decision variables, while the remaining variables are recovered by solving the AC power flow equations. The fast decoupled power flow solver is adopted to further reduce the computational time. In addition, we propose using a modified augmented Lagrangian function as the training loss, with multipliers adjusted dynamically based on the degree of constraint violation. Extensive numerical test results corroborate the advantages of our proposed approach over several existing methods.
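A minimal sketch of the augmented Lagrangian idea follows, assuming inequality constraints stacked as g(x) <= 0 with violations measured by relu(g); the paper's exact loss modification and multiplier schedule may differ. In a training loop, the network output would be completed by an AC power flow solve, this loss back-propagated, and the multipliers updated periodically.

```python
import torch

def augmented_lagrangian_loss(cost, violations, lam, rho):
    """Unsupervised training loss: generation cost plus a Lagrangian term
    and a quadratic penalty on constraint violations (violations = relu(g)
    for constraints g(x) <= 0)."""
    return cost + (lam * violations).sum() + 0.5 * rho * (violations ** 2).sum()

def update_multipliers(lam, violations, rho):
    """Dynamic dual update: each multiplier grows in proportion to how
    badly its constraint is currently violated. Violations are detached
    so the update itself is not back-propagated through."""
    return (lam + rho * violations.detach()).clamp(min=0.0)
```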
The data used to train deep neural network (DNN) models in applications such as healthcare and finance typically contain sensitive information. A DNN model may suffer from overfitting, and overfitted models have been shown to be susceptible to query-based attacks such as membership inference attacks (MIAs), which aim to determine whether a sample belongs to the dataset used to train a classifier (a member) or not (a nonmember). Recently, a new class of label-based MIAs (LAB MIAs) was proposed, in which the adversary only needs knowledge of the predicted labels of samples. Developing a defense against an adversary carrying out a LAB MIA on DNN models that cannot be retrained remains an open problem. We present LDL, a lightweight defense against LAB MIAs. LDL works by constructing a high-dimensional sphere around queried samples such that the model decision is unchanged for (noisy) variants of the sample within the sphere. This sphere of label-invariance creates ambiguity and prevents a querying adversary from correctly determining whether a sample is a member or a nonmember. We analytically characterize the success rate of an adversary carrying out a LAB MIA when LDL is deployed, and show that the formulation is consistent with experimental observations. We evaluate LDL on seven datasets -- CIFAR-10, CIFAR-100, GTSRB, Face, Purchase, Location, and Texas -- with varying sizes of training data, all of which have been used by state-of-the-art LAB MIAs. Our experiments demonstrate that LDL reduces the success rate of an adversary carrying out a LAB MIA in each case. We empirically compare LDL with defenses against LAB MIAs that require retraining of DNN models, and show that LDL performs favorably despite not needing to retrain the DNNs.
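The paper's construction is characterized analytically; as a purely hypothetical illustration of label-invariance around a query, a defender could answer each query with the model's majority label over random points on an L2 sphere around it, so that nearby probes tend to receive the same answer:

```python
import numpy as np

def ldl_predict(model, x, radius=0.1, n_samples=32, rng=None):
    """Hypothetical label-invariance wrapper in the spirit of LDL.

    Instead of answering with model(x) directly, answer with the label the
    model assigns most often over random points on an L2 sphere of the
    given radius around x. Queries near x then tend to receive the same
    label, so an adversary probing with small perturbations cannot use
    label flips to infer membership. `model` returns a hard integer label.
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(size=(n_samples,) + x.shape)
    norms = np.linalg.norm(noise.reshape(n_samples, -1), axis=1)
    variants = x + radius * noise / norms.reshape(-1, *([1] * x.ndim))
    labels = np.array([model(v) for v in variants])   # hard labels only
    return np.bincount(labels).argmax()               # majority vote
```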
This paper provides an introductory survey to GPT-3. We cover some of the historical development behind this technology, some of the key features of GPT-3, and discuss the machine learning model and the datasets used. We survey both academic and commercial efforts applying GPT-3 in diverse domains such as developing conversational AI chatbots, software development, creative work, domain knowledge, and business productivity. We discuss some of the challenges that GPT-3 faces such as the problems of training complexity, bias, and hallucination/incorrect answers. We also discuss the future research opportunities in this area.
Very large language models such as GPT-3 have shown impressive performance across a wide variety of tasks, including text summarization. In this paper, we show that this strong performance extends to opinion summarization. We explore several pipeline methods for applying GPT-3 to summarize a large collection of user reviews in a zero-shot fashion, notably approaches based on recursive summarization and selecting salient content to summarize through supervised clustering or extraction. On two datasets, an aspect-oriented summarization dataset of hotel reviews and a generic summarization dataset of Amazon and Yelp reviews, we show that the GPT-3 models achieve very strong performance in human evaluation. We argue that standard evaluation metrics do not reflect this, and evaluate against several new measures targeting faithfulness, factuality, and genericity to contrast these different methods.
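A minimal sketch of the recursive-summarization pipeline follows, where `llm` is a stand-in for any text-completion call (e.g. a GPT-3 API request) and the prompt wording is an assumption:

```python
def recursive_summarize(reviews, llm, chunk_size=10):
    """Sketch of zero-shot recursive summarization over many reviews.

    `llm` takes a prompt string and returns a completion string. Reviews
    are summarized in chunks, then the chunk summaries are themselves
    summarized, until a single summary remains.
    """
    texts = list(reviews)
    while len(texts) > 1:
        chunks = [texts[i:i + chunk_size]
                  for i in range(0, len(texts), chunk_size)]
        texts = [
            llm("Summarize the following reviews:\n\n" + "\n\n".join(chunk))
            for chunk in chunks
        ]
    return texts[0]
```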
Automated Market Makers (AMMs) have cemented themselves as an integral part of the decentralized finance (DeFi) space. AMMs are a type of exchange that allows users to trade assets without the need for a centralized exchange. They form the foundation for numerous decentralized exchanges (DEXs), which help facilitate the quick and efficient exchange of on-chain tokens. All present-day popular DEXs are static protocols, with fixed parameters controlling the fee and the curvature; because of this invariance, they cannot adapt to quickly changing market conditions. This characteristic may cause traders to stay away during high-slippage conditions brought about by intractable market movements. We propose an RL framework to optimize the fees collected on an AMM protocol. In particular, we develop a Q-Learning Agent for Market Making Protocols (QLAMMP) that learns the optimal fee rates and leverage coefficients for a given AMM protocol and maximizes the expected fee collected under a range of different market conditions. We show that QLAMMP consistently outperforms its static counterparts under all the simulated test conditions.
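A minimal sketch of the tabular Q-learning core such an agent might use follows; the state discretization, reward, and action set are placeholders, and only the fee dimension is shown, whereas QLAMMP also tunes leverage coefficients:

```python
import numpy as np

class FeeQLearner:
    """Sketch of a tabular Q-learning agent over discretized fee levels."""

    def __init__(self, n_states, fee_levels, alpha=0.1, gamma=0.95, eps=0.1):
        self.q = np.zeros((n_states, len(fee_levels)))  # Q(state, fee action)
        self.fees = fee_levels
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state, rng):
        """Epsilon-greedy choice of the next fee level (returns an index)."""
        if rng.random() < self.eps:
            return int(rng.integers(len(self.fees)))
        return int(self.q[state].argmax())

    def update(self, state, action, reward, next_state):
        """Standard Q-learning update; the reward would be the fee
        collected, minus any penalty for driving traders away."""
        target = reward + self.gamma * self.q[next_state].max()
        self.q[state, action] += self.alpha * (target - self.q[state, action])
```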
Practical operations of coordinated fleets of mobile robots in different environments reveal the benefits of maintaining small inter-robot distances as the robots move at higher speeds. This is counter-intuitive: as speed increases, larger distances would give robots more time to respond to sudden motion variations in surrounding robots. However, there is a desire for smaller inter-robot distances in settings such as autonomous trucks drafting on highways to save energy, or small robots in cluttered environments that must stay close to maintain communication. This work introduces a model based control framework that directly takes non-linear system dynamics into account. Each robot is able to follow more closely at high speed because it makes predictions on the state information from its adjacent robots and biases its response by anticipating the adjacent robots' motion. In contrast to existing controllers, our non-linear model based predictive decentralized controller is able to achieve lower inter-robot distances at higher speeds. We demonstrate the success of our approach through simulation and hardware results on mobile ground robots.
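As a toy one-dimensional illustration of the anticipation idea (the paper uses a full non-linear model predictive controller), a follower can bias its command toward a constant-velocity prediction of its leader rather than the leader's current state; all gains and the prediction model here are assumptions:

```python
import numpy as np

def predict_leader(leader_pos, leader_vel, horizon, dt):
    """Constant-velocity prediction of the leading robot's positions over
    the control horizon (a simple stand-in for a non-linear model)."""
    return np.array([leader_pos + leader_vel * k * dt
                     for k in range(1, horizon + 1)])

def follower_command(own_pos, own_vel, leader_pos, leader_vel,
                     gap=0.5, horizon=10, dt=0.1, kp=2.0, kd=1.5):
    """PD-style command biased by the *predicted* leader trajectory rather
    than its current position, letting the follower hold a smaller gap."""
    predicted = predict_leader(leader_pos, leader_vel, horizon, dt)
    target = predicted[-1] - gap          # desired position at horizon end
    return kp * (target - own_pos) + kd * (leader_vel - own_vel)
```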
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
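Since the weights are openly released, the models can be loaded through the standard Hugging Face transformers API; the snippet below uses bloom-560m, one of the smaller checkpoints from the same family, to keep the example runnable on modest hardware.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small BLOOM-family checkpoint; substitute "bigscience/bloom"
# for the full 176B-parameter model if you have the hardware for it.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is an open-access language model that",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```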
The increasing number of COVID-19 research publications creates new challenges for effective literature screening and for COVID-19 domain-knowledge-aware information retrieval. To tackle these challenges, we demonstrate two tasks along with solutions: COVID-19 literature retrieval and question answering. The COVID-19 literature retrieval task screens for COVID-19 literature documents that match a textual user query, and the COVID-19 question answering task predicts appropriate text fragments from a text corpus as answers to specific COVID-19 related questions. Based on transformer neural networks, we provide solutions that implement these tasks on the CORD-19 dataset, and we display some examples to show the effectiveness of our proposed solutions.
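A minimal sketch of the extractive question-answering step using the Hugging Face transformers pipeline follows; the checkpoint name is illustrative rather than the model trained in the paper.

```python
from transformers import pipeline

# Any extractive QA checkpoint can stand in here; this one is illustrative,
# not the model trained in the paper.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = "..."  # text of a retrieved CORD-19 document would go here
result = qa(question="What are the main symptoms of COVID-19?",
            context=context)
print(result["answer"], result["score"])
```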