Multi-agent path finding (MAPF) is the task of finding non-conflicting paths connecting agents' specified initial and goal positions in a shared environment. We focus on compilation-based solvers, in which the MAPF problem is expressed in a different, well-established formalism such as mixed-integer linear programming (MILP), Boolean satisfiability (SAT), or constraint programming (CP). Since the target solvers for these formalisms act as black boxes, it is challenging to integrate MAPF-specific heuristics into compilation-based MAPF solvers. In this work we show how to build a MAPF encoding for the target SAT solver that reflects domain-specific heuristic knowledge. The heuristic knowledge is transferred to the SAT solver by selecting candidate paths for each agent and constructing the encoding only for these candidate paths, instead of for all possible paths of an agent. The conducted experiments show that heuristically guided compilation outperforms the vanilla variants of the SAT-based MAPF solver.
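A minimal sketch of the candidate-path idea described above, assuming unit-time moves along k shortest simple paths and a pairwise path-conflict test; the use of networkx for candidate generation and python-sat (pysat) as the target solver, as well as all helper and parameter names, are illustrative assumptions rather than the paper's actual encoding.

```python
from itertools import combinations, islice

import networkx as nx
from pysat.solvers import Glucose3


def k_shortest_paths(G, start, goal, k):
    # Candidate generation: the k shortest simple paths in the movement graph.
    return list(islice(nx.shortest_simple_paths(G, start, goal), k))


def paths_conflict(p, q):
    # Vertex conflict: both agents occupy the same vertex at the same time step.
    # Edge conflict: the agents swap adjacent vertices between steps t and t+1.
    at = lambda path, t: path[min(t, len(path) - 1)]  # agents wait at the goal
    T = max(len(p), len(q))
    for t in range(T):
        if at(p, t) == at(q, t):
            return True
        if t + 1 < T and at(p, t) == at(q, t + 1) and at(p, t + 1) == at(q, t):
            return True
    return False


def solve_with_candidate_paths(agents, G, k=5):
    # agents: list of (start, goal) pairs; returns {agent index: chosen path} or None.
    cand = [k_shortest_paths(G, s, g, k) for s, g in agents]
    var = {}
    for a in range(len(agents)):
        for i in range(len(cand[a])):
            var[(a, i)] = len(var) + 1          # one Boolean variable per candidate path

    solver = Glucose3()
    for a in range(len(agents)):
        lits = [var[(a, i)] for i in range(len(cand[a]))]
        solver.add_clause(lits)                 # each agent picks at least one path
        for x, y in combinations(lits, 2):
            solver.add_clause([-x, -y])         # ... and at most one
    for a, b in combinations(range(len(agents)), 2):
        for i, p in enumerate(cand[a]):
            for j, q in enumerate(cand[b]):
                if paths_conflict(p, q):        # conflicting candidates are mutually exclusive
                    solver.add_clause([-var[(a, i)], -var[(b, j)]])

    if not solver.solve():
        return None    # enlarge the candidate sets or fall back to the full encoding
    model = set(solver.get_model())
    return {a: cand[a][i] for (a, i), v in var.items() if v in model}
```

One natural design choice in such a scheme is to enlarge the candidate sets, or to fall back to an encoding over all paths, whenever the restricted formula turns out to be unsatisfiable, so that restricting attention to candidate paths does not sacrifice completeness.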
In this paper we study the planning and acting phases of the multi-agent path finding (MAPF) problem. MAPF is the problem of navigating agents from their start positions to specified individual goal positions so that the agents do not collide with each other. Specifically, we focus on executing MAPF plans with a fleet of Crazyflie indoor quadrotors. We show how to modify the existing Continuous-time Conflict-Based Search (CCBS) algorithm to produce plans suitable for execution with quadrotors. The acting phase uses the Loco positioning system to check that the plan is being executed correctly. Our finding is that the CCBS algorithm admits extensions that can produce safe plans for quadrotors; namely, a cylindrical protective zone around each quadrotor can be introduced at the planning level.
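A hedged sketch of the cylindrical protective zone mentioned above: two quadrotors are treated as conflicting when their horizontal separation falls below twice the protective radius while their vertical separation is below the protective height. The numeric values and the function name are illustrative assumptions.

```python
import math


def cylinders_conflict(p1, p2, r=0.30, h=0.60):
    """p1, p2: (x, y, z) quadrotor positions in metres.

    Each quadrotor is surrounded by a protective cylinder of radius r; a
    conflict is reported when the horizontal separation is below 2 * r and the
    vertical separation is below the protective height h.
    """
    horizontal = math.hypot(p1[0] - p2[0], p1[1] - p2[1])
    vertical = abs(p1[2] - p2[2])
    return horizontal < 2 * r and vertical < h


print(cylinders_conflict((0.0, 0.0, 1.0), (0.5, 0.1, 1.2)))  # True: too close
print(cylinders_conflict((0.0, 0.0, 1.0), (1.5, 0.0, 1.0)))  # False: safely separated
```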
In multi-agent path finding (MAPF), the task is to find non-conflicting paths for multiple agents from their initial positions to given individual goal positions. MAPF represents a classical artificial intelligence problem that is often solved by heuristic search. An important alternative to search-based techniques is to compile MAPF into a different formalism such as Boolean satisfiability (SAT). Contemporary SAT-based approaches treat the SAT solver as an external tool whose task is to return an assignment of all decision variables of a Boolean model of the input MAPF instance. In this short paper we present a novel compilation scheme called DPLL(MAPF), in which the consistency checking of partial assignments of the decision variables with respect to the MAPF rules is integrated directly into the SAT solver. This scheme allows for a far more automated compilation, in which the SAT solver and the consistency-checking procedure work together simultaneously to create the Boolean model and to search for its satisfying assignment.
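A toy sketch of the integration pattern described above, in the spirit of DPLL(T): after every propagation step the partial assignment is handed to a MAPF consistency checker before the search continues. The clause representation, the checker interface, and all names are illustrative assumptions, not the paper's implementation.

```python
def unit_propagate(clauses, assign):
    # Assign literals forced by unit clauses; return None on a Boolean conflict.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            free, satisfied = [], False
            for lit in clause:
                v, want = abs(lit), lit > 0
                if v in assign:
                    satisfied = satisfied or assign[v] == want
                else:
                    free.append(lit)
            if satisfied:
                continue
            if not free:
                return None                      # clause falsified
            if len(free) == 1:
                assign[abs(free[0])] = free[0] > 0
                changed = True
    return assign


def dpll_mapf(clauses, n_vars, assign, mapf_consistent):
    # clauses: lists of non-zero ints (DIMACS-style literals);
    # mapf_consistent: callback checking a partial assignment against the MAPF rules.
    assign = unit_propagate(clauses, dict(assign))
    if assign is None:
        return None                              # Boolean conflict, backtrack
    if not mapf_consistent(assign):
        return None                              # e.g. the partial plan already implies a collision
    if len(assign) == n_vars:
        return assign                            # complete, Boolean- and MAPF-consistent model
    var = next(v for v in range(1, n_vars + 1) if v not in assign)
    for value in (True, False):
        model = dpll_mapf(clauses, n_vars, {**assign, var: value}, mapf_consistent)
        if model is not None:
            return model
    return None
```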
In this paper, we propose a new neural network architecture based on the H2 matrix. Even though networks with H2-inspired architectures already exist, our approach is designed to reduce memory costs and improve performance by taking into account the sparsity template of the H2 matrix. In a numerical comparison with alternative neural networks, including the known H2-based ones, our architecture proved beneficial in terms of performance, memory, and scalability.
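As a hedged illustration of why hierarchical sparsity templates save memory, the sketch below replaces a dense weight matrix with dense diagonal blocks plus low-rank off-diagonal blocks (a one-level hierarchical pattern; true H2 matrices additionally nest the bases across levels). The block count, rank, and layer design are assumptions, not the proposed architecture.

```python
import numpy as np


class BlockLowRankLayer:
    """y_i = tanh(D_i x_i + sum_{j != i} U_ij V_ij x_j) on equally sized blocks."""

    def __init__(self, n, blocks=4, rank=8, seed=0):
        assert n % blocks == 0
        rng = np.random.default_rng(seed)
        self.b, scale = n // blocks, 1.0 / np.sqrt(n)
        # Dense diagonal blocks (near-field) ...
        self.diag = [rng.standard_normal((self.b, self.b)) * scale for _ in range(blocks)]
        # ... and rank-limited off-diagonal blocks (far-field), stored as factors.
        self.off = {(i, j): (rng.standard_normal((self.b, rank)) * scale,
                             rng.standard_normal((rank, self.b)))
                    for i in range(blocks) for j in range(blocks) if i != j}

    def __call__(self, x):
        parts = np.split(x, len(self.diag), axis=-1)
        out = []
        for i, D in enumerate(self.diag):
            y = parts[i] @ D.T
            for j in range(len(self.diag)):
                if i != j:
                    U, V = self.off[(i, j)]
                    y = y + (parts[j] @ V.T) @ U.T   # low-rank product, never materialized densely
            out.append(np.tanh(y))                   # illustrative nonlinearity
        return np.concatenate(out, axis=-1)


layer = BlockLowRankLayer(n=256)
print(layer(np.random.default_rng(1).standard_normal((10, 256))).shape)  # (10, 256)
```

With n=256, 4 blocks, and rank 8, this stores roughly 29k parameters instead of the 65k of a dense 256x256 weight, which is the kind of saving a hierarchical sparsity template is meant to deliver.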
t-SNE remains one of the most popular embedding techniques for visualizing high-dimensional data. Most standard packages of t-SNE, such as scikit-learn, use the Barnes-Hut t-SNE (BH t-SNE) algorithm for large datasets. However, existing CPU implementations of this algorithm are inefficient. In this work, we accelerate the BH t-SNE on CPUs via cache optimizations, SIMD, parallelizing sequential steps, and improving parallelization of multithreaded steps. Our implementation (Acc-t-SNE) is up to 261x and 4x faster than scikit-learn and the state-of-the-art BH t-SNE implementation from daal4py, respectively, on a 32-core Intel(R) Icelake cloud instance.
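For reference, the scikit-learn baseline mentioned above is invoked as follows; method='barnes_hut' selects the BH t-SNE algorithm and n_jobs controls multithreading. The dataset here is a random placeholder.

```python
import numpy as np
from sklearn.manifold import TSNE

# Random placeholder data standing in for a large high-dimensional dataset.
X = np.random.default_rng(0).standard_normal((10_000, 50)).astype(np.float32)

emb = TSNE(n_components=2, method="barnes_hut", angle=0.5,
           n_jobs=-1, random_state=0).fit_transform(X)
print(emb.shape)  # (10000, 2)
```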
We investigate a model for image/video quality assessment based on building a set of codevectors that represent, in a sense, some basic properties of images, similar to the well-known CORNIA model. We analyze the codebook building method and propose some modifications to it. The algorithm is also investigated with respect to inference time reduction. Both natural and synthetic images are used for building codebooks, and some analysis of the synthetic images used for the codebooks is provided. We demonstrate that quality assessment results may be improved by using synthetic images for codebook construction. We also demonstrate regimes of the algorithm in which real-time execution on a CPU is possible while maintaining sufficiently high correlations with mean opinion score (MOS). Various pooling strategies are considered, as well as the problem of metric sensitivity to bitrate.
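A hedged sketch of a CORNIA-style pipeline as described above: an unsupervised codebook is built from normalized image patches, each image is encoded by soft-assigning its patches to codewords followed by max pooling, and a regressor maps the pooled feature to MOS. Patch size, codebook size, and the regressor choice are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import SVR


def extract_patches(img, size=7, n=1000, seed=0):
    # Sample n local patches from a grayscale image and contrast-normalize them.
    rng = np.random.default_rng(seed)
    h, w = img.shape
    ys, xs = rng.integers(0, h - size, n), rng.integers(0, w - size, n)
    P = np.stack([img[y:y + size, x:x + size].ravel() for y, x in zip(ys, xs)])
    P = P - P.mean(axis=1, keepdims=True)
    return P / (P.std(axis=1, keepdims=True) + 1e-6)


def build_codebook(images, k=100):
    # Unsupervised codebook: k-means centroids of normalized patches.
    patches = np.vstack([extract_patches(im) for im in images])
    return MiniBatchKMeans(n_clusters=k, n_init=3, random_state=0).fit(patches).cluster_centers_


def encode(img, codebook):
    # Soft-assign patches to codewords, then max-pool the positive and negative parts.
    sims = extract_patches(img) @ codebook.T
    return np.concatenate([np.maximum(sims, 0).max(axis=0),
                           np.maximum(-sims, 0).max(axis=0)])


# Usage sketch (train_images / train_mos are placeholders):
# codebook = build_codebook(train_images)
# regressor = SVR().fit(np.stack([encode(im, codebook) for im in train_images]), train_mos)
```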
Online controlled experiments (A/B tests) have become the gold standard for learning the impact of new product features in technology companies. Randomization enables the inference of causality from an A/B test. The randomized assignment maps end users to experiment buckets and balances user characteristics between the groups. Therefore, experiments can attribute any outcome differences between the experiment groups to the product feature under experiment. Technology companies run A/B tests at scale: hundreds if not thousands of A/B tests concurrently, each with millions of users. This large scale poses unique challenges to randomization. First, the randomized assignment must be fast, since the experiment service receives hundreds of thousands of queries per second. Second, the variant assignments must be independent between experiments. Third, the assignment must be consistent when users revisit or when an experiment enrolls more users. We present a novel assignment algorithm and statistical tests to validate the randomized assignments. Our results demonstrate that this algorithm is not only computationally fast but also satisfies the statistical requirements of being unbiased and independent.
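A hedged sketch of the standard hash-based pattern that meets the three requirements listed above (fast, independent across experiments, consistent for returning users); it is shown only to make the requirements concrete and is not the assignment algorithm proposed in the paper. Salting the hash with the experiment ID decorrelates assignments between concurrent experiments.

```python
import hashlib


def assign_variant(user_id: str, experiment_id: str, n_variants: int = 2) -> int:
    # Hash the (experiment, user) pair so that the mapping is deterministic for a
    # returning user but statistically independent across experiments.
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 1000   # stable bucket in 0..999
    return bucket * n_variants // 1000                  # equal-sized variant ranges


print(assign_variant("user-42", "exp-homepage-banner"))   # always the same variant
print(assign_variant("user-42", "exp-ranking-model"))     # independent of the first experiment
```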
Vision Transformers convert images to sequences by slicing them into patches. The size of these patches controls a speed/accuracy tradeoff, with smaller patches leading to higher accuracy at greater computational cost, but changing the patch size typically requires retraining the model. In this paper, we demonstrate that simply randomizing the patch size at training time leads to a single set of weights that performs well across a wide range of patch sizes, making it possible to tailor the model to different compute budgets at deployment time. We extensively evaluate the resulting model, which we call FlexiViT, on a wide range of tasks, including classification, image-text retrieval, open-world detection, panoptic segmentation, and semantic segmentation, concluding that it usually matches, and sometimes outperforms, standard ViT models trained at a single patch size in an otherwise identical setup. Hence, FlexiViT training is a simple drop-in improvement for ViT that makes it easy to add compute-adaptive capabilities to most models relying on a ViT backbone architecture. Code and pre-trained models are available at https://github.com/google-research/big_vision
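A hedged sketch of the patch-size randomization idea: each training step samples a patch size and resizes the underlying patch-embedding kernel to match, so a single set of weights serves several patch sizes. Bilinear resizing is used here as a simplification (FlexiViT introduces a dedicated resize operation for this purpose), and all sizes and dimensions are illustrative.

```python
import random

import torch
import torch.nn.functional as F


class FlexiblePatchEmbed(torch.nn.Module):
    def __init__(self, base_patch=32, in_ch=3, dim=768):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(dim, in_ch, base_patch, base_patch) * 0.02)
        self.bias = torch.nn.Parameter(torch.zeros(dim))

    def forward(self, x, patch_size):
        # Resize the patch-embedding kernel to the requested patch size and apply
        # it as a strided convolution (one token per non-overlapping patch).
        w = F.interpolate(self.weight, size=(patch_size, patch_size),
                          mode="bilinear", align_corners=False)
        tokens = F.conv2d(x, w, self.bias, stride=patch_size)   # (B, dim, H/p, W/p)
        return tokens.flatten(2).transpose(1, 2)                # (B, num_tokens, dim)


embed = FlexiblePatchEmbed()
for step in range(3):
    p = random.choice([16, 24, 32])       # patch size sampled per training step
    imgs = torch.randn(2, 3, 192, 192)    # 192 is divisible by 16, 24 and 32
    print(p, embed(imgs, p).shape)        # the token count changes, the weights do not
```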
The appearance of an object can be fleeting when it transforms. As eggs are broken or paper is torn, their color, shape and texture can change dramatically, preserving virtually nothing of the original except for the identity itself. Yet, this important phenomenon is largely absent from existing video object segmentation (VOS) benchmarks. In this work, we close the gap by collecting a new dataset for Video Object Segmentation under Transformations (VOST). It consists of more than 700 high-resolution videos, captured in diverse environments, which are 20 seconds long on average and densely labeled with instance masks. A careful, multi-step approach is adopted to ensure that these videos focus on complex object transformations, capturing their full temporal extent. We then extensively evaluate state-of-the-art VOS methods and make a number of important discoveries. In particular, we show that existing methods struggle when applied to this novel task and that their main limitation lies in over-reliance on static appearance cues. This motivates us to propose a few modifications for the top-performing baseline that improve its capabilities by better modeling spatio-temporal information. But more broadly, the hope is to stimulate discussion on learning more robust video object representations.
Unsupervised anomaly detection in time-series has been extensively investigated in the literature. Notwithstanding the relevance of this topic in numerous application fields, a complete and extensive evaluation of recent state-of-the-art techniques is still missing. The few efforts made to rigorously compare existing unsupervised time-series anomaly detection methods usually consider only standard performance metrics, namely precision, recall, and F1-score, and therefore neglect essential aspects for assessing their practical relevance. This paper proposes an original and in-depth evaluation study of recent unsupervised anomaly detection techniques in time-series. Instead of relying solely on standard performance metrics, additional yet informative metrics and protocols are taken into account. In particular, (1) more elaborate performance metrics specifically tailored for time-series are used; (2) the model size and the model stability are studied; (3) an analysis of the tested approaches with respect to the anomaly type is provided; and (4) a clear and unique protocol is followed for all experiments. Overall, this extensive analysis aims to assess the maturity of state-of-the-art time-series anomaly detection, give insights regarding their applicability under real-world setups, and provide the community with a more complete evaluation protocol.
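As one example of a performance metric tailored for time-series (point (1) above), the sketch below computes event-wise recall, where a contiguous anomalous segment counts as detected if at least one of its points is flagged. It is a generic illustration, not necessarily one of the metrics used in the study.

```python
import numpy as np


def event_recall(y_true, y_pred):
    # y_true, y_pred: binary arrays over time steps (1 = anomalous / flagged).
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    # Boundaries of contiguous anomalous segments in the ground truth.
    edges = np.flatnonzero(np.diff(np.r_[0, y_true, 0]))
    segments = list(zip(edges[::2], edges[1::2]))       # [start, end) pairs
    if not segments:
        return 1.0
    detected = sum(y_pred[s:e].any() for s, e in segments)
    return detected / len(segments)


# Two ground-truth anomalies; only the first one is touched by the detector.
print(event_recall([0, 1, 1, 0, 1, 1, 1, 0], [0, 0, 1, 0, 0, 0, 0, 0]))  # 0.5
```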