Large Neighborhood Search (LNS) is a popular heuristic algorithm for solving combinatorial optimization problems (COPs). It starts with an initial solution to the problem and iteratively improves it by searching a large neighborhood around the current best solution. LNS relies on heuristics to select the neighborhoods to search. In this paper, we focus on designing effective and efficient heuristics in LNS for integer linear programs (ILPs), since a wide range of COPs can be represented as ILPs. Local Branching (LB) is a heuristic that, in each iteration of LNS, selects the neighborhood that leads to the largest improvement over the current solution. LB is often slow, since it needs to solve an ILP of the same size as the input. Our proposed heuristics, LB-RELAX and its variants, instead use the linear programming relaxation of LB to select neighborhoods. Empirically, LB-RELAX and its variants compute neighborhoods that are as effective as those found by LB but run faster, achieving state-of-the-art anytime performance on several ILP benchmarks.
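To make the neighborhood-selection idea concrete, here is a minimal Python/SciPy sketch of one LB-RELAX-style LNS iteration for a binary ILP, min c^T x subject to A x <= b with x in {0,1}^n. The helper layout, the "destroy the k variables whose LP values moved furthest from the incumbent" rule, and the repair step are illustrative assumptions under this simplified setting, not the paper's exact procedure:

```python
# Sketch of one LB-RELAX-style LNS iteration for a 0-1 ILP:
#   min c^T x  s.t.  A x <= b,  x in {0,1}^n.
# Inputs are NumPy arrays; x_bar is the incumbent 0/1 solution.
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, linprog, milp

def lb_relax_step(c, A, b, x_bar, k):
    n = len(c)
    # Local Branching would solve the full ILP under the constraint
    #   sum_{j: x_bar_j=0} x_j + sum_{j: x_bar_j=1} (1 - x_j) <= k.
    # LB-RELAX instead solves its LP relaxation to pick a neighborhood.
    lb_row = np.where(x_bar == 0, 1.0, -1.0)  # LB-constraint coefficients
    lb_rhs = k - x_bar.sum()                  # constants moved to the RHS
    res = linprog(c,
                  A_ub=np.vstack([A, lb_row]),
                  b_ub=np.append(b, lb_rhs),
                  bounds=[(0.0, 1.0)] * n)    # assumes the LP is feasible
    # Destroy: pick the k variables whose LP value moved furthest
    # from the incumbent.
    destroy = np.argsort(-np.abs(res.x - x_bar))[:k]
    # Repair: re-solve the ILP with all other variables fixed to x_bar.
    lo = x_bar.astype(float).copy()
    hi = x_bar.astype(float).copy()
    lo[destroy], hi[destroy] = 0.0, 1.0
    sol = milp(c,
               constraints=LinearConstraint(A, ub=b),
               integrality=np.ones(n),
               bounds=Bounds(lo, hi))
    return sol.x
```

The point of the relaxation is that the neighborhood is chosen from a single LP solve, which is typically far cheaper than the full ILP that Local Branching must solve at every iteration.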
In mixed integer linear programming (MIP), a (strong) backdoor is a "small" subset of an instance's integer variables with the following property: in a branch-and-bound procedure, the instance can be solved to global optimality by branching only on the variables in the backdoor. Constructing datasets of pre-computed backdoors for widely used MIP benchmark sets or particular problem families can raise new questions about structural properties of MIPs, or explain why problems that are hard in theory can be solved efficiently in practice. Existing algorithms for finding backdoors rely on sampling candidate variable subsets in various ways, an approach that has demonstrated the existence of backdoors for some instances from MIPLIB2003 and MIPLIB2010. However, these algorithms fail to succeed consistently at the task due to an imbalance between exploration and exploitation. We propose BaMCTS, a Monte Carlo Tree Search framework for finding backdoors to MIPs. Extensive algorithmic engineering, hybridization with traditional MIP concepts, and close integration with the CPLEX solver enable our method to outperform baselines on MIPLIB2017 instances, finding backdoors more frequently and more efficiently.
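The abstract does not spell out BaMCTS's internals, so the sketch below is only a generic Monte Carlo Tree Search skeleton over candidate backdoor sets with UCB1 selection; the `evaluate` callback (e.g., scoring how quickly branch-and-bound restricted to the subset solves the instance), the exploration constant, and the random rollout are assumed interfaces, not the paper's design:

```python
# Generic MCTS skeleton for searching variable subsets of size k.
import math
import random

class Node:
    def __init__(self, subset, parent=None):
        self.subset = frozenset(subset)  # variables chosen so far
        self.parent = parent
        self.children = {}               # var index -> child Node
        self.visits = 0
        self.value = 0.0                 # cumulative rollout reward

def select_child(node, c=1.4):
    # UCB1: balance mean reward (exploitation) against exploration.
    return max(node.children.values(),
               key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts_backdoor(n_vars, k, evaluate, iters=500, seed=0):
    rng = random.Random(seed)
    root = Node(set())
    best, best_reward = None, float("-inf")
    for _ in range(iters):
        node = root
        # 1. Selection: descend while every extension has been tried.
        while (len(node.subset) < k
               and len(node.children) == n_vars - len(node.subset)):
            node = select_child(node)
        # 2. Expansion: add one untried variable to the subset.
        if len(node.subset) < k:
            untried = [v for v in range(n_vars)
                       if v not in node.subset and v not in node.children]
            v = rng.choice(untried)
            node.children[v] = Node(node.subset | {v}, parent=node)
            node = node.children[v]
        # 3. Simulation: complete the subset randomly and score it.
        subset = set(node.subset)
        while len(subset) < k:
            subset.add(rng.randrange(n_vars))
        reward = evaluate(frozenset(subset))
        if reward > best_reward:
            best, best_reward = frozenset(subset), reward
        # 4. Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return best
```

The imbalance the abstract refers to is visible in the UCB1 rule: pure sampling approaches amount to the exploration term alone, whereas the tree statistics let the search exploit variable subsets that have scored well.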
The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error. Can we automate this challenging, tedious process, and learn the algorithms instead? In many real-world applications, it is typically the case that the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. This provides an opportunity for learning heuristic algorithms that exploit the structure of such recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph embedding network capturing the current state of the solution. We show that our framework can be applied to a diverse range of optimization problems over graphs, and learns effective algorithms for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.
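As a rough illustration of this greedy meta-algorithm, the NumPy sketch below refines node embeddings with a few structure2vec-style message-passing rounds, scores each node with a linear readout standing in for the learned Q-function, and greedily adds the highest-scoring node until a vertex cover is built. The parameterization (`theta1`, `theta2`, `w`) is a simplified, untrained assumption:

```python
# Greedy solution construction for Minimum Vertex Cover, driven by
# structure2vec-style node embeddings and a linear per-node readout.
import numpy as np

def embed(adj, x, theta1, theta2, rounds=4):
    # adj: (n, n) adjacency matrix; x: (n,) 1 if node is in the solution.
    n = adj.shape[0]
    mu = np.zeros((n, theta1.shape[0]))
    for _ in range(rounds):
        # Each round mixes the node's selection flag with aggregated
        # neighbor embeddings, followed by a ReLU nonlinearity.
        mu = np.maximum(0.0, np.outer(x, theta1) + adj @ mu @ theta2)
    return mu

def covers_all_edges(adj, x):
    # Every edge must have at least one selected endpoint.
    covered = np.outer(x, np.ones_like(x)) + np.outer(np.ones_like(x), x)
    return np.all((adj == 0) | (covered > 0))

def greedy_construct(adj, theta1, theta2, w):
    x = np.zeros(adj.shape[0])                # partial-solution indicator
    while not covers_all_edges(adj, x):       # MVC termination test
        mu = embed(adj, x, theta1, theta2)
        q = mu @ w                            # per-node scores (Q-values)
        q[x == 1] = -np.inf                   # mask already-selected nodes
        x[np.argmax(q)] = 1.0                 # greedy action
    return np.flatnonzero(x)

# Demo on a random graph with untrained (random) parameters.
rng = np.random.default_rng(0)
n, p = 10, 8
adj = (rng.random((n, n)) < 0.3).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T
print(greedy_construct(adj, rng.normal(size=p),
                       0.1 * rng.normal(size=(p, p)), rng.normal(size=p)))
```

In the framework described above, such parameters would be trained end-to-end with reinforcement learning (Q-learning), so the per-node scores estimate the long-term objective rather than the random values used here for demonstration.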