Evolutionary algorithms (EAs) are general-purpose optimization algorithms inspired by natural evolution. Recent theoretical studies have shown that EAs can achieve good approximation guarantees for the problem classes of submodular optimization, which have a wide range of applications, such as maximum coverage, sparse regression, influence maximization, document summarization, and sensor placement, to name a few. Though these studies provide some theoretical explanation for the general-purpose nature of EAs, the submodular objective functions they consider are defined only over sets or multisets. To complement this line of research, this paper studies the problem class of maximizing monotone submodular functions over sequences, where the objective function depends on the order of items. We prove that for each kind of previously studied monotone submodular objective function over sequences, i.e., prefix monotone submodular functions, weakly monotone and strongly submodular functions, and DAG monotone submodular functions, a simple multi-objective EA, the GSEMO, can always reach or improve the best known approximation guarantee within polynomial expected running time. Note that each of these best-known approximation guarantees was previously obtainable only by a dedicated greedy-style algorithm. Empirical studies on various applications, e.g., accomplishing tasks, maximizing information gain, search-and-tracking, and recommender systems, show the excellent performance of the GSEMO.
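To make the recurring GSEMO concrete, below is a minimal sketch of its standard subset (bitstring) form: it maintains an archive of mutually non-dominated solutions, trading the objective value against the solution size, and mutates one randomly chosen archive member per iteration. The sequence variants analyzed in the paper mutate orderings instead (e.g., by randomly inserting or deleting items); that encoding is not shown here, so treat this as an illustrative assumption rather than the paper's exact algorithm.

```python
import random

def gsemo(f, n, iterations, seed=0):
    """Minimal GSEMO sketch: evolve a Pareto archive over subsets of
    {0, ..., n-1}, trading the objective f(S) off against the size |S|."""
    rng = random.Random(seed)

    def objectives(x):
        s = frozenset(i for i in range(n) if x[i])
        return (f(s), -len(s))                   # maximize value, minimize size

    def dominates(a, b):                         # Pareto dominance on objective vectors
        return all(u >= v for u, v in zip(a, b)) and a != b

    empty = tuple([0] * n)
    archive = {empty: objectives(empty)}         # population = current Pareto archive
    for _ in range(iterations):
        parent = rng.choice(list(archive))
        child = tuple(b ^ (rng.random() < 1.0 / n) for b in parent)  # bit-wise mutation
        oc = objectives(child)
        if any(dominates(o, oc) for o in archive.values()):
            continue                             # child is dominated: discard it
        archive = {x: o for x, o in archive.items() if not dominates(oc, o)}
        archive[child] = oc
    return archive                               # maps solutions to objective vectors
```

Because the archive covers all sizes simultaneously, reading off its best entry of size at most $k$ recovers a constrained solution, which is how such bi-objective reformulations are typically used.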
Evolutionary algorithms (EAs) are a kind of nature-inspired general-purpose optimization algorithm, and have shown empirically good performance in solving various real-world optimization problems. During the past two decades, promising results on the running time analysis (one essential theoretical aspect) of EAs have been obtained, while most of them focused on isolated combinatorial optimization problems, which do not reflect the general-purpose nature of EAs. To provide a general theoretical explanation of the behavior of EAs, it is desirable to study their performance on general classes of combinatorial optimization problems. To the best of our knowledge, the only result in this direction is the provably good approximation guarantees of EAs for the problem class of maximizing monotone submodular functions with matroid constraints. The aim of this work is to contribute to this line of research. Considering that many combinatorial optimization problems involve non-monotone or non-submodular objective functions, we study the general problem classes of maximizing submodular functions with/without a size constraint and maximizing monotone approximately submodular functions with a size constraint. We prove that a simple multi-objective EA called GSEMO-C can generally achieve good approximation guarantees in polynomial expected running time.
Clustering is a fundamental problem in many areas. It aims to partition a given data set into groups based on some distance measure, such that the data points in the same group are similar while those in different groups are dissimilar. Due to its importance and NP-hardness, many methods have been proposed, among which evolutionary algorithms are a popular class. Evolutionary clustering has found many successful applications, but all the results are empirical, lacking theoretical support. This paper fills this gap by proving that the approximation performance of the GSEMO (a simple multi-objective evolutionary algorithm) for solving the three popular formulations of clustering, i.e., $k$-center, $k$-median, and $k$-means, can be theoretically guaranteed. Furthermore, we prove that evolutionary clustering can have theoretical guarantees even when considering fairness, which tries to avoid algorithmic bias and has recently been an important research topic in machine learning.
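For reference, the three formulations differ only in how per-point distances to the chosen centers are aggregated. The sketch below shows the $k$-means cost of a candidate center set, which a GSEMO-style bi-objective encoding could pair with the number of chosen centers; the paper's exact encoding is not reproduced here.

```python
def kmeans_cost(points, centers):
    """Sum of squared distances from each point to its nearest chosen center
    (the k-means objective). k-median drops the square; k-center takes the
    maximum over points instead of the sum."""
    if not centers:
        return float("inf")
    total = 0.0
    for p in points:
        total += min(sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for c in centers)
    return total
```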
Evolutionary algorithms (EAs) have found many successful real-world applications, where the optimization problems are often subject to a wide range of uncertainties. To understand the practical behaviors of EAs theoretically, there are a series of efforts devoted to analyzing the running time of EAs for optimization under uncertainties. Existing studies mainly focus on noisy and dynamic optimization, while another common type of uncertain optimization, i.e., robust optimization, has been rarely touched. In this paper, we analyze the expected running time of the (1+1)-EA solving robust linear optimization problems (i.e., linear problems under robust scenarios) with a cardinality constraint $k$. Two common robust scenarios, i.e., deletion-robust and worst-case, are considered. Particularly, we derive tight ranges of the robust parameter $d$ or budget $k$ allowing the (1+1)-EA to find an optimal solution in polynomial running time, which disclose the potential of EAs for robust optimization.
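The (1+1)-EA analyzed here is the textbook single-individual EA; a minimal sketch follows. How infeasible solutions (those exceeding the cardinality constraint $k$) and the robust scenarios (deletion-robust or worst-case evaluation) enter the fitness is deliberately left inside the `fitness` callback, since this sketch does not fix those modeling choices.

```python
import random

def one_plus_one_ea(fitness, n, iterations, seed=0):
    """(1+1)-EA on bitstrings: flip each bit independently with prob. 1/n,
    accept the offspring iff it is at least as fit as the parent."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(iterations):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]   # standard bit-wise mutation
        fy = fitness(y)
        if fy >= fx:                                    # elitist acceptance
            x, fx = y, fy
    return x, fx
```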
In many real-world optimization problems, the objective function evaluation is subject to noise, and we cannot obtain the exact objective value. Evolutionary algorithms (EAs), a type of general-purpose randomized optimization algorithm, have been shown to be able to solve noisy optimization problems well. However, previous theoretical analyses of EAs mainly focused on noise-free optimization, which makes the theoretical understanding largely insufficient for the noisy case. Meanwhile, the few existing theoretical studies under noise often considered the one-bit noise model, which flips a randomly chosen bit of a solution before evaluation; while in many realistic applications, several bits of a solution can be changed simultaneously. In this paper, we study a natural extension of one-bit noise, the bit-wise noise model, which independently flips each bit of a solution with some probability. We analyze the running time of the (1+1)-EA solving OneMax and LeadingOnes under bit-wise noise for the first time, and derive the ranges of the noise level for polynomial and super-polynomial running time bounds. The analysis on LeadingOnes under bit-wise noise can be easily transferred to one-bit noise, and improves the previously known results. Since our analysis discloses that the (1+1)-EA can be efficient only under low noise levels, we also study whether the sampling strategy can bring robustness to noise. We prove that using sampling can significantly increase the largest noise level allowing a polynomial running time, that is, sampling is robust to noise.
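Both noise models are defined in the text by how they perturb the solution before evaluation; the sketch below makes the difference explicit, using OneMax as the underlying function. The module-level generator and the function names are illustrative.

```python
import random

rng = random.Random(0)

def one_bit_noise(x):
    """One-bit noise: flip a single uniformly chosen bit before evaluation."""
    y = list(x)
    y[rng.randrange(len(y))] ^= 1
    return y

def bit_wise_noise(x, p):
    """Bit-wise noise: flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in x]

def noisy_onemax(x, p):
    """Noise acts on the solution, not on the returned value."""
    return sum(bit_wise_noise(x, p))
```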
Chance-constrained optimization problems allow to model problems where constraints involving stochastic components should only be violated with a small probability. Evolutionary algorithms have been applied in this setting and shown to achieve high-quality results. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for chance-constrained optimization. We study the scenario of stochastic components that are independent and Normally distributed. Considering the simple single-objective (1+1)~EA, we show that imposing an additional uniform constraint already leads to local optima for very restricted scenarios, and to an exponential optimization time. We therefore introduce a multi-objective formulation of the problem which trades off the expected cost and its variance. We show that multi-objective evolutionary algorithms are highly effective when using this formulation, and obtain a set of solutions that contains an optimal solution for any possible confidence level imposed on the constraint. Furthermore, we prove that this approach can also be used to compute a set of optimal solutions for the chance-constrained minimum spanning tree problem. In order to deal with the exponentially many trade-offs of the multi-objective formulation, we propose and analyze improved convex multi-objective approaches. Experimental investigations on instances of the NP-hard stochastic minimum weight dominating set problem confirm the benefit of the multi-objective and the improved convex multi-objective approaches.
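The multi-objective formulation rests on a simple fact: with independent Normally distributed components, the total weight of a solution is Normal with mean $\sum_i \mu_i$ and variance $\sum_i \sigma_i^2$, and for any confidence level the scalarization $\mu + z_\alpha \sigma$ is monotone in both quantities, so its optimizer lies on the Pareto front of (expectation, variance). A minimal sketch of this bi-objective evaluation follows; the names `mu`, `sigma2`, and `z_alpha` are illustrative.

```python
from math import sqrt

def expectation_and_variance(solution, mu, sigma2):
    """Bi-objective value of a subset under independent Normal weights:
    (expected total weight, variance of the total weight)."""
    m = sum(mu[i] for i in solution)
    v = sum(sigma2[i] for i in solution)
    return m, v

def weight_bound(m, v, z_alpha):
    """One-sided bound: the total weight stays below this value with the
    confidence corresponding to the Normal quantile z_alpha."""
    return m + z_alpha * sqrt(v)
```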
We study non-submodular functions in the online interactive bandit setting. We are motivated by applications where there is a natural complementarity between certain elements, which cannot be expressed by submodular functions alone, as these can only represent competitiveness between elements. We extend the purely submodular approach in two ways. First, we assume that the objective can be decomposed as the sum of a monotone submodular function and a supermodular function, known as a BP objective; here, complementarity is naturally modeled by the supermodular component. We develop a UCB-style algorithm that, in each round, balances between exploring actions to learn the unknown objective (exploration) and choosing actions that appear promising (exploitation), based on the noisy payoffs revealed after playing an action. Defining regret with respect to a full-knowledge greedy baseline and in terms of the supermodular curvature, we show that the algorithm incurs at most $O(\sqrt{T})$ regret after $T$ rounds. Second, for functions that do not admit a BP structure, we provide similar regret guarantees in terms of their submodularity ratio; this applies to functions that are almost, but not quite, submodular. We numerically study the tasks of movie recommendation on the MovieLens dataset and of selecting training subsets for classification. Through these examples, we demonstrate the algorithm's performance as well as the shortcomings of treating these problems as purely submodular.
In noisy evolutionary optimization, sampling is a common strategy to deal with noise. By the sampling strategy, the fitness of a solution is evaluated multiple times (called \emph{sample size}) independently, and its true fitness is then approximated by the average of these evaluations. Most previous studies on sampling are empirical, and the few theoretical studies mainly showed the effectiveness of sampling with a sufficiently large sample size. In this paper, we theoretically examine what strategies can work when sampling with any fixed sample size fails. By constructing a family of artificial noisy examples, we prove that sampling is always ineffective, while using parent or offspring populations can be helpful on some examples. We also construct an artificial noisy example to show that when using neither sampling nor populations is effective, a tailored adaptive sampling (i.e., sampling with an adaptive sample size) strategy can work. These findings may enhance our understanding of sampling to some extent, but future work is required to validate them in natural situations.
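As a small illustration, the fixed-size sampling strategy and one possible adaptive variant are sketched below. The doubling schedule and the confidence-style stopping rule are illustrative assumptions, not the tailored adaptive strategy constructed in the paper.

```python
def sampled_fitness(noisy_f, x, m):
    """Fixed-size sampling: average m independent noisy evaluations of x."""
    return sum(noisy_f(x) for _ in range(m)) / m

def adaptive_compare(noisy_f, x, y, start=2, max_m=1024, gap=1.0):
    """Adaptive sampling sketch: double the sample size until the estimated
    fitness gap between x and y looks significant, then decide."""
    m = start
    while True:
        fx = sampled_fitness(noisy_f, x, m)
        fy = sampled_fitness(noisy_f, y, m)
        if abs(fx - fy) >= gap / m ** 0.5 or m >= max_m:
            return fx >= fy
        m *= 2
```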
We study the problem of actively learning a non-parametric choice model based on consumers' decisions. We present a negative result showing that such choice models may not be identifiable. To overcome the identifiability issue, we introduce a directed acyclic graph (DAG) representation of the choice model, which in a sense captures as much information about the choice model as can be theoretically identified. We then consider the problem of learning an approximation of this DAG representation in an active-learning setting. We design an efficient active-learning algorithm to estimate the DAG representation of the non-parametric choice model, which runs in polynomial time when the frequent rankings are drawn uniformly at random. Our algorithm learns the distribution over the most popular items of frequent preferences by actively and repeatedly offering assortments of items and observing the chosen item. We show that, compared with the corresponding non-active-learning estimation algorithms, our algorithm better recovers the set of frequent preferences on both synthetic and publicly available datasets of consumer preferences. This demonstrates the value of our algorithm and of the active-learning approach more generally.
Motivated by online decision-making in time-varying combinatorial environments, we study the problem of transforming offline algorithms into their online counterparts. We focus on offline combinatorial problems that are amenable to constant-factor approximation by a greedy algorithm that is robust to local errors. For such problems, we provide a general framework that efficiently transforms robust greedy algorithms into online ones using Blackwell approachability. We show that, under the full-information setting, the resulting online algorithms achieve $O(\sqrt{T})$ (approximate) regret. We further introduce a bandit extension of Blackwell approachability, which we call Bandit Blackwell approachability, and leverage this notion to transform robust greedy offline algorithms into bandit algorithms with $O(T^{2/3})$ (approximate) regret. Demonstrating the flexibility of our framework, we apply the offline-to-online transformation to several problems in revenue management, market design, and online optimization, including product ranking optimization in online platforms, reserve price optimization in auctions, and submodular maximization. We also extend our reduction to greedy-like first-order methods for continuous optimization, such as those for maximizing monotone continuous DR-submodular functions subject to convex constraints. We show that our transformation, when applied to these applications, yields new regret bounds or improves the currently known ones. We complement our theoretical study with numerical simulations for two of our applications, in both of which we observe that the numerical performance of the transformed algorithms exceeds the theoretical guarantees in practical instances.
Models for the processes by which ideas and influence propagate through a social network have been studied in a number of domains, including the diffusion of medical and technological innovations, the sudden and widespread adoption of various strategies in game-theoretic settings, and the effects of "word of mouth" in the promotion of new products. Motivated by the design of viral marketing strategies, Domingos and Richardson posed a fundamental algorithmic problem for such social network processes: if we can try to convince a subset of individuals to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target? We consider this problem in several of the most widely studied models in social network analysis; the optimization problem of selecting the most influential nodes is NP-hard here. The two conference papers upon which this article is based (KDD 2003 and ICALP 2005) provide the first provable approximation guarantees for efficient algorithms. Using an analysis framework based on submodular functions, we show that a natural greedy strategy obtains a solution that is provably within 63% of optimal for several classes of models. The present article is an expanded version of these two conference papers [51, 52].
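The greedy strategy behind these guarantees is simple to state: repeatedly add the node with the largest estimated marginal gain in expected spread, where the expectation is estimated by Monte-Carlo simulation of the diffusion model. A minimal sketch under the independent cascade model follows; the adjacency-dict graph encoding, the uniform activation probability `p`, and the simulation count are illustrative assumptions, not details fixed by the article.

```python
import random

def simulate_ic(graph, seeds, p, rng):
    """One independent-cascade run: each newly active node gets one chance to
    activate each inactive out-neighbor, with probability p. Returns spread."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, ()):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_influence(graph, k, p=0.1, sims=200, seed=0):
    """Greedy seed selection (assumes k <= number of nodes): add the node with
    the largest Monte-Carlo-estimated marginal gain in expected spread."""
    rng = random.Random(seed)
    seeds = set()
    for _ in range(k):
        def est_spread(v):
            return sum(simulate_ic(graph, seeds | {v}, p, rng)
                       for _ in range(sims)) / sims
        best = max((v for v in graph if v not in seeds), key=est_spread)
        seeds.add(best)
    return seeds
```

Submodularity of the expected spread is what turns this greedy rule into a provable approximation in the models the article studies.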
Optimization problems with set submodular objective functions have many real-world applications. In discrete scenarios where the same item can be selected more than once, the domain generalizes from a set to a bounded integer lattice. In this work, we consider the problem of maximizing a monotone submodular function on a bounded integer lattice subject to a cardinality constraint. In particular, we focus on maximizing DR-submodular functions, i.e., functions defined on the integer lattice that exhibit the diminishing returns property. Given any $\epsilon > 0$, we present a randomized algorithm with a probabilistic guarantee of $(1 - 1/e - \epsilon)$ approximation, using a framework inspired by the stochastic greedy algorithm developed by Mirzasoleiman et al. We then show that, on synthetic DR-submodular functions, applying our proposed algorithm directly on the integer lattice is faster than the alternatives, including reducing the target problem to the set domain and then applying the fastest known set submodular maximization algorithms.
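The stochastic-greedy idea lifted to the integer lattice can be sketched compactly: instead of scanning all coordinates before each unit increment, only a small random sample of coordinates is evaluated. The sample-size formula mirrors Mirzasoleiman et al.'s set version; the paper's actual algorithm and its probabilistic $(1 - 1/e - \epsilon)$ analysis are not reproduced here.

```python
import math
import random

def stochastic_greedy_lattice(f, bounds, r, eps=0.1, seed=0):
    """Sketch: perform r unit increments on x in the box [0, bounds], each time
    picking the best marginal gain over a random sample of coordinates."""
    rng = random.Random(seed)
    n = len(bounds)
    x = [0] * n
    sample = max(1, int(n / r * math.log(1 / eps)))
    for _ in range(r):
        cand = [i for i in rng.sample(range(n), min(sample, n)) if x[i] < bounds[i]]
        if not cand:
            continue
        fx = f(x)
        def gain(i):
            y = x[:]
            y[i] += 1
            return f(y) - fx
        best = max(cand, key=gain)
        x[best] += 1                  # one unit of the sampled best coordinate
    return x
```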
The non-dominated sorting genetic algorithm II (NSGA-II) is the most intensively used multi-objective evolutionary algorithm (MOEA) in real-world applications. However, in contrast to several simple MOEAs that have been analyzed by mathematical means, no such study exists for the NSGA-II so far. In this work, we show that mathematical runtime analyses are feasible also for the NSGA-II. As particular results, we prove that, with a population size larger than the Pareto front size by a constant factor, the NSGA-II with two classic mutation operators and three different ways to select the parents satisfies the same asymptotic runtime guarantees as the SEMO and GSEMO algorithms on the basic OneMinMax and LOTZ benchmark functions. However, if the population size is only equal to the size of the Pareto front, then the NSGA-II cannot efficiently compute the full Pareto front: for an exponential number of iterations, the population will always miss a constant fraction of the Pareto front. Our experiments confirm the above findings.
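For reference, the two benchmarks mentioned above are easy to state; a sketch follows. On OneMinMax every bitstring is Pareto-optimal and the front consists of $n+1$ objective vectors, which is why a population size merely equal to the front size leaves the NSGA-II no slack to preserve all of them.

```python
def one_min_max(x):
    """OneMinMax: simultaneously maximize the number of zeros and of ones."""
    return (x.count(0), x.count(1))

def lotz(x):
    """LOTZ: maximize (number of leading ones, number of trailing zeros)."""
    lo = 0
    for b in x:
        if b != 1:
            break
        lo += 1
    tz = 0
    for b in reversed(x):
        if b != 0:
            break
        tz += 1
    return (lo, tz)
```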
We study dynamic algorithms for the problem of maximizing a monotone submodular function over a stream of $n$ insertions and deletions. We show that any algorithm that maintains a $(0.5+\epsilon)$-approximate solution under a cardinality constraint, for any constant $\epsilon > 0$, must have an amortized query complexity that is $\mathit{polynomial}$ in $n$. Moreover, a linear amortized query complexity is needed in order to maintain a $0.584$-approximate solution. This is in contrast with recent dynamic algorithms of [LMNF+20, Mon20] that achieve a $(0.5-\epsilon)$-approximation with $\mathsf{poly}\log(n)$ amortized query complexity. On the positive side, when the stream is insertion-only, we give efficient algorithms for the problem under a cardinality constraint and under a matroid constraint, with approximation guarantee $1-1/e-\epsilon$ and amortized query complexities $\smash{O(\log(k/\epsilon)/\epsilon^2)}$ and $\smash{k^{\tilde{O}(1/\epsilon^2)}\log n}$, respectively, where $k$ denotes the cardinality parameter or the rank of the matroid.
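To illustrate why insertion-only streams are algorithmically benign, here is the classic single-threshold idea; it is emphatically not the paper's $1-1/e-\epsilon$ algorithm, only a sketch of why each arriving element can be handled with O(1) queries. Running logarithmically many thresholds in parallel removes the need to know the optimum in advance.

```python
def threshold_stream(stream, f, k, tau):
    """Single-threshold sketch for an insertion-only stream under a
    cardinality constraint k: keep an arriving element iff its marginal
    gain is at least tau. With tau roughly OPT/(2k), the kept set is a
    constant-factor approximation."""
    S = []
    for e in stream:
        if len(S) < k and f(S + [e]) - f(S) >= tau:
            S.append(e)
    return S
```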
The objective of a sequential decision making problem is to design an interactive policy that adaptively selects a group of items, with each selection based on past feedback, to maximize the expected utility of the selected items. It has been shown that the utility functions of many real-world applications are adaptive submodular. However, most existing studies on adaptive submodular optimization focus on the average case. Unfortunately, a policy with good average-case performance may perform poorly under the worst-case realization. In this study, we propose to study two variants of adaptive submodular optimization problems, namely, worst-case adaptive submodular maximization and robust adaptive submodular maximization. The first problem aims to find a policy that maximizes the worst-case utility, and the latter aims to find a policy, if one exists, that simultaneously achieves near-optimal average-case utility and worst-case utility. We introduce a new class of stochastic functions, called \emph{worst-case submodular functions}. For the worst-case adaptive submodular maximization problem subject to a $p$-system constraint, we develop an adaptive worst-case greedy policy that achieves a $\frac{1}{p+1}$ approximation ratio against the optimal worst-case utility if the utility function is worst-case submodular. For the robust adaptive submodular maximization problem subject to cardinality constraints (resp. partition matroid constraints), if the utility function is both worst-case submodular and adaptive submodular, we develop a hybrid adaptive policy that achieves an approximation close to $1-e^{-\frac{1}{2}}$ (resp. $1/3$) under the worst-case and average-case settings simultaneously. We also describe several applications of our theoretical results, including pool-based active learning, stochastic submodular set cover, and adaptive viral marketing.
The goal of a typical adaptive sequential decision making problem is to design an interactive policy that sequentially selects a group of items, based on partial observations, so as to maximize the expected utility. It has been shown that the utility functions of many real-world applications, including pool-based active learning and adaptive influence maximization, satisfy the property of adaptive submodularity. However, most studies on adaptive submodular maximization focus on the fully adaptive setting, i.e., one must wait for the feedback from \emph{all} past selections before making the next selection. Although this approach can take full advantage of feedback from the past, it may take a longer time to complete the selection process as compared with the non-adaptive solution, in which all selections are made before any observations take place. In this paper, we explore the problem of partial-adaptive submodular maximization, where one is allowed to make multiple selections simultaneously in a batch and observe their realizations together. Our approach enjoys the benefits of adaptivity while reducing the time spent waiting for the observations from past selections. To the best of our knowledge, no results are known for partial-adaptive policies for the non-monotone adaptive submodular maximization problem. We study this problem under both cardinality constraints and knapsack constraints, and develop effective and efficient solutions for both cases. We also analyze the batch query complexity, i.e., the number of batches a policy needs to complete the selection process, under some additional assumptions.
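A minimal sketch of the batch structure follows. The callbacks `exp_gain` (expected marginal gain conditioned on the observations so far) and `observe` (the per-batch feedback oracle) are hypothetical stand-ins for the application-specific components; the paper's actual policies and their guarantees are not reproduced here.

```python
def partial_adaptive_greedy(items, k, batch_size, exp_gain, observe):
    """Partial-adaptive sketch: greedily fill a batch using conditional
    expected gains, observe the whole batch at once, then continue."""
    chosen, observations = [], {}
    while len(chosen) < k:
        batch = []
        for _ in range(min(batch_size, k - len(chosen))):
            rest = [i for i in items if i not in chosen and i not in batch]
            if not rest:
                break
            batch.append(max(rest, key=lambda i: exp_gain(i, chosen + batch, observations)))
        if not batch:
            break                                  # nothing left to select
        observations.update(observe(batch))        # one feedback round per batch
        chosen.extend(batch)
    return chosen
```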
Maximizing a submodular function is a fundamental task in machine learning; in this paper, we study the deletion-robust version of the problem under the classic matroid constraint. Here, the goal is to extract a small-size summary of the dataset that contains a high-value independent set even after an adversary deletes some elements. We present constant-factor approximation algorithms whose space complexity depends on the rank $k$ of the matroid and the number $d$ of deleted elements. In the centralized setting, we present a $(4.597+O(\varepsilon))$-approximation algorithm with summary size $O(\frac{k+d}{\varepsilon^2}\log\frac{k}{\varepsilon})$, which is improved to $(3.582+O(\varepsilon))$ with summary size $O(k + \frac{d}{\varepsilon^2}\log\frac{k}{\varepsilon})$ when the objective is monotone. In the streaming setting, we provide a $(9.435+O(\varepsilon))$-approximation algorithm with summary size and memory $O(k + \frac{d}{\varepsilon^2}\log\frac{k}{\varepsilon})$; the approximation factor is then improved to $(5.582+O(\varepsilon))$ in the monotone case.
In this paper, we study the classic submodular maximization problem subject to a group fairness constraint, under both non-adaptive and adaptive settings. It has been shown that the utility functions of many machine learning applications, including data summarization, influence maximization in social networks, and personalized recommendation, satisfy the property of submodularity. Hence, submodular maximization subject to various constraints can be found at the core of many of these applications. At a high level, submodular maximization aims to select a group of most representative items (e.g., data points). However, the design of most existing algorithms does not incorporate the fairness constraint, leading to the under- or over-representation of some particular groups. This motivates us to study the fair submodular maximization problem, in which we aim to select a set of items that maximizes a (possibly non-monotone) submodular utility function subject to a group fairness constraint. To this end, we develop the first constant-factor approximation algorithm for this problem. The design of our algorithm is robust enough to be extended to solve the submodular maximization problem under a more complicated adaptive setting. Furthermore, we extend our study by incorporating a global cardinality constraint.
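As a toy illustration of group fairness constraints, the greedy sketch below enforces per-group upper bounds only; handling lower bounds, non-monotone objectives, and the adaptive setting requires the more careful algorithms developed in the paper.

```python
def fair_greedy(items, group, f, upper):
    """Greedy under group caps: add the item with the best marginal gain
    whose group g has not yet reached its cap upper[g]."""
    S, used = [], {}
    while True:
        cand = [i for i in items
                if i not in S and used.get(group[i], 0) < upper[group[i]]]
        if not cand:
            return S
        gains = {i: f(S + [i]) - f(S) for i in cand}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:              # stop once no candidate improves f
            return S
        S.append(best)
        used[group[best]] = used.get(group[best], 0) + 1
```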
In this paper, we study the \underline{R}obust \underline{o}ptimization for \underline{se}quence \underline{Net}worked \underline{s}ubmodular maximization (RoseNets) problem. We interweave the robust optimization with the sequence networked submodular maximization. The elements are connected by a directed acyclic graph and the objective function is not submodular on the elements but on the edges in the graph. Under such networked submodular scenario, the impact of removing an element from a sequence depends both on its position in the sequence and in the network. This makes the existing robust algorithms inapplicable. In this paper, we take the first step to study the RoseNets problem. We design a robust greedy algorithm, which is robust against the removal of an arbitrary subset of the selected elements. The approximation ratio of the algorithm depends both on the number of the removed elements and the network topology. We further conduct experiments on real applications of recommendation and link prediction. The experimental results demonstrate the effectiveness of the proposed algorithm.
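A sketch of the non-robust greedy baseline is given below, for a toy special case in which the sequence value is additive over the DAG edges it activates (edge $(u, v)$ counts when $u$ is placed before $v$). The RoseNets algorithm additionally hedges against the removal of an arbitrary subset of the chosen elements, which this sketch omits.

```python
def sequence_greedy(elements, edges, edge_value, k):
    """Greedy for a toy edge-additive sequence objective: repeatedly append
    the element with the largest marginal gain in the activated-edge value."""
    def value(seq):
        pos = {e: i for i, e in enumerate(seq)}
        return sum(edge_value[(u, v)] for (u, v) in edges
                   if u in pos and v in pos and pos[u] < pos[v])

    seq = []
    for _ in range(k):
        rest = [e for e in elements if e not in seq]
        if not rest:
            break
        best = max(rest, key=lambda e: value(seq + [e]) - value(seq))
        seq.append(best)
    return seq
```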
Minimax optimization has been central to addressing various applications in machine learning, game theory, and control theory. Consequently, the literature has thus far mainly focused on studying these problems in continuous domains; for example, convex-concave minimax optimization is now understood to a large extent. Nevertheless, minimax problems extend far beyond continuous domains, to mixed continuous-discrete domains or even fully discrete domains. In this paper, we study mixed continuous-discrete minimax problems where the minimization is over a continuous variable belonging to a Euclidean space and the maximization is over subsets of a given ground set. We introduce the novel class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable. Even though such problems appear frequently in machine learning applications, little is known about how to address them from algorithmic and theoretical perspectives. For such problems, we first show that obtaining saddle points is hard up to any approximation, and thus introduce new notions of (near-)optimality. We then provide several algorithmic procedures for solving convex and monotone-submodular minimax problems, and characterize their convergence rates, computational complexity, and quality of the final solution according to our notions of optimality. Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization. Finally, we provide numerical experiments to showcase the effectiveness of our proposed methods.
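To show how the discrete and continuous tools interleave, here is a bare alternation sketch: greedily maximize over subsets for the current continuous point, then take a gradient step against the chosen subset. This simple alternation need not converge to a saddle point (indeed, the paper shows saddle points can be hard to approximate, motivating its weaker optimality notions); `grad_x` and `greedy_gain` are hypothetical oracles for the gradient in $x$ and the marginal gain in $S$.

```python
import numpy as np

def alternating_minimax(grad_x, greedy_gain, ground, k, x0, steps=50, lr=0.1):
    """Sketch for min_x max_S g(x, S), with g convex in x and submodular in S
    (assumes k <= |ground|): alternate greedy inner maximization with a
    gradient step on the outer minimization."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        S = []
        for _ in range(k):                           # greedy inner maximization
            rest = [e for e in ground if e not in S]
            best = max(rest, key=lambda e: greedy_gain(x, S, e))
            S.append(best)
        x = x - lr * np.asarray(grad_x(x, S))        # descend on the outer variable
    return x, S
```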