Scheduled batch jobs are widely used on asynchronous computing platforms to execute various enterprise applications, including scheduled notifications and candidate pre-computation for modern recommender systems. Delivering or updating information for users at the right time is important for maintaining the user experience and the impact of each execution. However, it is challenging to provide a versatile execution-time optimization solution for per-user scheduled jobs that satisfies various product scenarios while maintaining reasonable infrastructure resource consumption. In this paper, we describe how we apply a learning-to-rank approach together with a "best time policy" to select the best execution time. In addition, we propose an ensemble learner that minimizes the ranking loss by efficiently leveraging multiple streams of user activity signals in our execution-time scheduling decisions. In particular, we observe cannibalization across use cases competing for a user's peak time slot and introduce a coordination system to mitigate the problem. Our optimization approach has been successfully tested with production traffic serving billions of users per day, with statistically significant improvements in various product metrics, including notifications and content candidate generation. To the best of our knowledge, our study represents the first ML-based, multi-tenant solution to the execution-time optimization problem for scheduled jobs at large industrial scale across different product domains.
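To make the ranking-plus-coordination idea concrete, the sketch below scores each candidate hourly slot for a user with a stand-in linear ranker and lets a greedy coordinator spread tenants across hours so they do not all claim the same peak slot. The feature construction, weights, tenant names, and capacity rule are illustrative assumptions, not the paper's actual model or policy.

```python
# Hypothetical sketch of per-user execution-time selection: score each candidate
# hourly slot with a learned ranker, then let a simple coordinator spread tenants
# so they do not all claim the same peak hour. Features and names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
HOURS = range(24)

def rank_scores(user_activity_by_hour: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Stand-in for a learning-to-rank model: a linear score over per-hour features."""
    # features: the raw hourly activity level and a smoothed version of it
    smoothed = np.convolve(user_activity_by_hour, np.ones(3) / 3, mode="same")
    features = np.stack([user_activity_by_hour, smoothed], axis=1)  # shape (24, 2)
    return features @ weights

def coordinate(slot_scores: dict, capacity_per_hour: int) -> dict:
    """Greedy coordination: each tenant takes its best remaining hour under a capacity cap."""
    remaining = {h: capacity_per_hour for h in HOURS}
    assignment = {}
    for tenant, scores in slot_scores.items():
        for hour in np.argsort(-scores):          # best hour first
            if remaining[int(hour)] > 0:
                assignment[tenant] = int(hour)
                remaining[int(hour)] -= 1
                break
    return assignment

activity = rng.random(24)                         # one user's hourly activity signal
weights = np.array([0.8, 0.2])                    # would come from offline ranker training
scores = {t: rank_scores(activity, weights) for t in ["notifications", "feed_precompute"]}
print(coordinate(scores, capacity_per_hour=1))    # tenants land on different hours
```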
Modern software systems and products increasingly rely on machine learning models to make data-driven decisions based on interactions with users and systems, e.g., compute infrastructure. For broader adoption, this practice must (i) accommodate software engineers without ML backgrounds and (ii) provide mechanisms to optimize for product goals. In this work, we describe general principles and a specific end-to-end ML platform, Looper, which offers easy-to-use APIs for decision-making and feedback collection. Looper supports the full end-to-end ML lifecycle from online data collection to model training, deployment, and inference, and extends support to evaluation and tuning against product goals. We outline the platform architecture and the overall impact of production deployment: Looper currently hosts 700 ML models and makes 6 million decisions per second. We also describe the learning curve and summarize the experiences of platform adopters.
Mobile notifications have become a major communication channel for social networking services to keep users informed and engaged. As more mobile applications push notifications to users, they constantly face decisions about what to send, when, and how. A lack of research and methodology commonly leads to heuristic decision making. Many notifications arrive at inappropriate moments or introduce too many interruptions, failing to deliver value to users and provoking user complaints. In this paper, we explore the unique features of interactions between mobile notifications and user engagement. We propose a state transition framework to quantitatively evaluate the effectiveness of notifications. Within this framework, we develop a survival model for badge notifications, assuming a log-linear structure and a Weibull distribution. Our results show that this model offers greater flexibility across applications and superior prediction accuracy compared with a logistic regression model. In particular, we present an online use case on notification delivery time optimization to show how we can make better decisions, drive more user engagement, and deliver more value to users.
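The sketch below illustrates the kind of model the abstract describes: a Weibull time-to-response distribution whose scale parameter has a log-linear dependence on features, used to pick the delivery offset that maximizes the chance of a response shortly after delivery. The parameter values, features, and decision rule are illustrative assumptions, not the paper's exact specification; in practice the coefficients would be fit by maximum likelihood on historical notification data.

```python
# Illustrative sketch (not the paper's exact specification): a Weibull time-to-response
# model with a log-linear scale parameter, used to pick a delivery offset that maximizes
# the probability of a response within a short window after delivery.
import numpy as np

def weibull_survival(t, scale, shape):
    """S(t) = P(response time > t) for a Weibull(scale, shape) distribution."""
    return np.exp(-np.power(t / scale, shape))

def response_prob_in_window(t_send, window, x, beta, shape):
    """P(respond in [t_send, t_send + window]) under a log-linear Weibull scale."""
    scale = np.exp(x @ beta)            # log-linear structure: log(scale) = x . beta
    return weibull_survival(t_send, scale, shape) - weibull_survival(t_send + window, scale, shape)

# beta and shape would be fit by maximum likelihood on historical data;
# the values below are placeholders for illustration.
beta, shape = np.array([0.5, -0.2]), 1.3
x = np.array([1.0, 0.4])                # hypothetical user/context features
candidate_hours = np.arange(1, 24, dtype=float)
probs = [response_prob_in_window(t, 1.0, x, beta, shape) for t in candidate_hours]
best = candidate_hours[int(np.argmax(probs))]
print(f"best delivery offset: {best:.0f}h")
```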
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Mobile notification systems play a major role in a variety of applications to communicate, send alerts and reminders to users, and inform them about news, events, or messages. In this paper, we formulate the near-real-time notification decision problem as a Markov Decision Process in which we optimize multiple objectives in the reward. We propose an end-to-end offline reinforcement learning framework to optimize sequential notification decisions. We address the challenges of offline learning with a Double Deep Q-network approach based on Conservative Q-Learning, which mitigates the distributional shift problem and Q-value overestimation. We illustrate the fully deployed system and demonstrate the performance and benefits of the proposed approach through offline and online experiments.
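The core of such an approach is the training objective: a double-DQN temporal-difference loss plus a conservative (CQL) penalty that pushes down Q-values for actions not seen in the offline log. The PyTorch sketch below shows that combination under illustrative assumptions; the network sizes, the three-action space, and the alpha weight are placeholders, not the paper's configuration.

```python
# Minimal PyTorch sketch of the training objective: a double-DQN TD error plus a
# conservative (CQL) penalty that suppresses Q-values of out-of-dataset actions.
import torch
import torch.nn as nn

n_actions = 3  # e.g., {send now, delay, drop}: a hypothetical action set

q_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())

def cql_double_dqn_loss(batch, gamma=0.99, alpha=1.0):
    s, a, r, s_next, done = batch                      # tensors from the offline log
    q_all = q_net(s)                                   # (B, n_actions)
    q_taken = q_all.gather(1, a.unsqueeze(1)).squeeze(1)

    with torch.no_grad():
        # double DQN: the online net picks the next action, the target net evaluates it
        next_a = q_net(s_next).argmax(dim=1, keepdim=True)
        next_q = target_net(s_next).gather(1, next_a).squeeze(1)
        td_target = r + gamma * (1.0 - done) * next_q

    td_loss = nn.functional.mse_loss(q_taken, td_target)
    # CQL regularizer: logsumexp over all actions minus Q of the logged action
    cql_penalty = (torch.logsumexp(q_all, dim=1) - q_taken).mean()
    return td_loss + alpha * cql_penalty

batch = (torch.randn(32, 16), torch.randint(0, n_actions, (32,)),
         torch.randn(32), torch.randn(32, 16), torch.zeros(32))
print(cql_double_dqn_loss(batch))
```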
We consider the problem of real-time churn prediction. Due to the batch mode of inference generation, traditional approaches can only support retention campaigns through offline interventions such as text messages, emails, or static in-product exposure. Other recent work on real-time churn prediction does not evaluate the precision trade-offs or the cost of deploying such models in production. In this paper, we propose RICON, a flexible, cost-effective, and robust machine learning system that predicts customer churn propensity in real time using clickstream data. Beyond churn propensity prediction, RICON also provides insights based on product usage intelligence. Through an application on real big data of QBO Advanced customers, we show how RICON achieves a top-decile lift of 2.68 in the presence of severe class imbalance. Moreover, we performed an extensive comparative study to justify our modeling choices for RICON. Finally, we describe how RICON can be integrated with intervention platforms at Intuit to help in a real-time production setting.
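For readers unfamiliar with the lift metric cited above, the sketch below shows one common way to compute it: the churn rate among the top 10% highest-scoring customers divided by the overall churn rate. The data is synthetic and the scoring rule is a placeholder, purely for illustration; it is not RICON's model or evaluation code.

```python
# Sketch of computing top-decile lift from scored examples on synthetic, imbalanced data.
import numpy as np

def top_decile_lift(scores: np.ndarray, labels: np.ndarray) -> float:
    k = max(1, len(scores) // 10)                 # size of the top decile
    top_idx = np.argsort(-scores)[:k]             # highest predicted churn propensity
    return labels[top_idx].mean() / labels.mean()

rng = np.random.default_rng(7)
labels = (rng.random(10_000) < 0.05).astype(float)               # ~5% churners (imbalanced)
scores = labels * rng.random(10_000) + 0.6 * rng.random(10_000)  # imperfect but informative
print(f"top-decile lift: {top_decile_lift(scores, labels):.2f}")
```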
Federated Learning is a distributed machine learning approach which enables model training on a large corpus of decentralized data. We have built a scalable production system for Federated Learning in the domain of mobile devices, based on TensorFlow. In this paper, we describe the resulting high-level design, sketch some of the challenges and their solutions, and touch upon the open problems and future directions.
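As background for the system described above, the following is a minimal federated averaging (FedAvg) sketch of the training loop such a platform orchestrates. The production design in the paper adds device selection, secure aggregation, pace steering, and fault tolerance that are omitted here; the linear model and synthetic clients are illustrative only.

```python
# Minimal FedAvg sketch: clients run local SGD, the server averages updates by example count.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local SGD steps on a linear model with squared loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, client_data):
    """Server round: broadcast weights, collect local updates, average weighted by data size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(10):                                  # simulate 10 devices
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(w)   # approaches [2, -1]
```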
As data generation increasingly takes place on devices without a wired connection, machine learning (ML) related traffic will become ubiquitous in wireless networks. Many studies have shown that traditional wireless protocols are inefficient or unsustainable for supporting ML, which creates the need for new wireless communication methods. In this survey, we give an exhaustive review of state-of-the-art wireless methods that are specifically designed to support ML services over distributed datasets. Currently, there are two clear themes in the literature: analog over-the-air computation and digital radio resource management optimized for ML. This survey provides a comprehensive introduction to these methods, reviews the most important works, highlights open problems, and discusses application scenarios.
Research process automation (the reliable, efficient, and reproducible execution of linked sets of actions on scientific instruments, computers, data stores, and other resources) has emerged as an essential element of modern science. We report here on new services within the Globus research data management platform that enable the specification of diverse research processes as reusable sets of actions and flows, and the execution of such flows in heterogeneous research environments. To support flows with broad spatial extent (e.g., from scientific instruments to remote data centers) and temporal extent (from seconds to weeks), these Globus automation services feature: 1) cloud hosting for reliable execution of long-lived flows despite sporadic failures; 2) declarative notation and an extensible asynchronous action provider API for defining and executing a wide variety of actions and flow specifications involving arbitrary resources; and 3) authorization delegation mechanisms for secure invocation of actions. These services allow researchers to outsource and automate the management of a broad range of research tasks to a reliable, scalable, and secure cloud platform. We present use cases for Globus automation services.
Algorithms that involve both forecasting and optimization are at the core of solutions to many difficult real-world problems, such as in supply chains (inventory optimization), traffic, and in the transition towards carbon-free energy generation in battery/load/production scheduling in sustainable energy systems. Typically, in these scenarios we want to solve an optimization problem that depends on unknown future values, which therefore need to be forecast. As both forecasting and optimization are difficult problems in their own right, relatively little research has been done in this area. This paper presents the findings of the "IEEE-CIS Technical Challenge on Predict+Optimize for Renewable Energy Scheduling," held in 2021. We present a comparison and evaluation of the seven highest-ranked solutions in the competition, to provide researchers with a benchmark problem and to establish the state of the art for this benchmark, with the aim of fostering and facilitating research in this area. The competition used data from the Monash Microgrid, as well as weather data and energy market data. It focused on two main challenges: forecasting renewable energy production and demand, and obtaining an optimal schedule for the activities (lectures) and on-site batteries that leads to the lowest cost of energy. The most accurate forecasts were obtained by gradient-boosted tree and random forest models, and optimization was mostly performed using mixed integer linear and quadratic programming. The winning method predicted different scenarios and optimized over all scenarios jointly using a sample average approximation method.
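The sample average approximation idea mentioned at the end can be illustrated with a deliberately simplified toy: generate several demand scenarios, then pick the single decision (here, one battery-discharge quantity) that minimizes the cost averaged over all scenarios. The prices, scenario distribution, and cost model below are assumptions for illustration, not the competition's actual scheduling formulation.

```python
# Toy sample average approximation: choose one discharge decision that minimizes the
# average cost over sampled demand scenarios. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(3)
scenarios = rng.normal(loc=100.0, scale=15.0, size=200)   # sampled peak-hour demand (kWh)
grid_price, battery_capacity = 0.40, 60.0                 # $/kWh and kWh, illustrative
waste_penalty = 1.20                                      # $/kWh for discharging beyond demand

def expected_cost(discharge: float) -> float:
    """Average cost over scenarios for a fixed discharge decision."""
    bought = np.maximum(scenarios - discharge, 0.0) * grid_price
    wasted = np.maximum(discharge - scenarios, 0.0) * waste_penalty
    return float(np.mean(bought + wasted))

candidates = np.linspace(0.0, battery_capacity, 121)
best = min(candidates, key=expected_cost)
print(f"discharge {best:.1f} kWh, expected cost ${expected_cost(best):.2f}")
```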
As the number of distributed services (or microservices) of cloud-native applications grows, resource management becomes a challenging task. These applications tend to be user-facing and latency-sensitive, and our goal is to continuously minimize the amount of CPU resources allocated while still satisfying the application latency SLO. Although previous efforts have proposed simple heuristics and sophisticated ML-based techniques, we believe that a practical resource manager should accurately scale CPU resources for diverse applications, with minimum human efforts and operation overheads. To this end, we ask: can we systematically break resource management down to subproblems solvable by practical policies? Based on the notion of CPU-throttle-based performance target, we decouple the mechanisms of SLO feedback and resource control, and implement a two-level framework -- Autothrottle. It combines a lightweight learned controller at the global level, and agile per-microservice controllers at the local level. We evaluate Autothrottle on three microservice applications, with both short-term and 21-day production workload traces. Empirical results show Autothrottle's superior CPU core savings up to 26.21% over the best-performing baselines across applications, while maintaining the latency SLO.
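The following is a toy feedback sketch of the decoupling idea: a local controller scales each service's CPU limit to track a throttle-ratio target, while a global controller nudges that target based on whether end-to-end latency meets the SLO. Autothrottle's actual controllers are learned and considerably more sophisticated; the update rules and numbers here are assumptions for illustration only.

```python
# Toy two-level control loop: per-service CPU scaling against a throttle target (local),
# and adjustment of the throttle target against the latency SLO (global).
def local_controller(cpu_limit, observed_throttle, target_throttle, step=0.05):
    """Per-microservice: add CPU when throttling exceeds the target, reclaim when below."""
    if observed_throttle > target_throttle:
        return cpu_limit * (1 + step)
    return cpu_limit * (1 - step)

def global_controller(target_throttle, p99_latency, latency_slo, step=0.01):
    """Application-level: allow more throttling (reclaim CPU) while the SLO is met."""
    if p99_latency > latency_slo:
        return max(0.0, target_throttle - step)     # demand less throttling, i.e. more CPU
    return min(0.5, target_throttle + step)

cpu, target = 2.0, 0.2
for observed_throttle, p99 in [(0.35, 180.0), (0.25, 150.0), (0.15, 120.0)]:
    cpu = local_controller(cpu, observed_throttle, target)
    target = global_controller(target, p99, latency_slo=160.0)
    print(f"cpu_limit={cpu:.2f} cores, throttle_target={target:.2f}")
```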
The future Internet involves several emerging technologies such as 5G and beyond-5G networks, vehicular networks, unmanned aerial vehicle (UAV) networks, and the Internet of Things (IoT). Moreover, the future Internet is becoming heterogeneous and decentralized, with many related network entities. Each entity may need to make local decisions to improve network performance under dynamic and uncertain network environments. Standard learning algorithms, such as single-agent Reinforcement Learning (RL) or Deep Reinforcement Learning (DRL), have recently been used to enable each network entity, as an agent, to adaptively learn an optimal decision-making policy by interacting with the unknown environment. However, such algorithms fail to model the cooperation or competition among network entities and simply treat other entities as part of the environment, which may lead to the non-stationarity issue. Multi-agent Reinforcement Learning (MARL) allows each network entity to learn its optimal policy by observing not only the environment but also the policies of other entities. As a result, MARL can significantly improve the learning efficiency of network entities and has recently been used to solve various issues in emerging networks. In this paper, we therefore review the applications of MARL in emerging networks. In particular, we provide a tutorial on MARL, as well as a comprehensive survey of MARL applications in the next-generation Internet. Specifically, we first introduce single-agent RL and MARL. Then we review a number of applications of MARL to solve emerging issues in the future Internet. These issues include network access, transmit power control, computation offloading, content caching, packet routing, trajectory design for UAV-aided networks, and network security issues.
Virtual support agents have grown in popularity as a way for businesses to provide better and more accessible customer service. Challenges in this domain include ambiguous user queries as well as changing support topics and user behavior (non-stationarity). However, we do have access to partial feedback provided by the user (clicks, surveys, and other events), which can be leveraged to improve the user experience. Adaptive learning techniques, such as contextual bandits, are a natural fit for this problem setting. In this paper, we discuss a practical implementation of contextual bandits (CB) for the Microsoft virtual agent. It includes intent disambiguation based on neural-linear bandits (NLB) and contextual recommendations based on a collection of multi-armed bandits (MAB). Our solution has been deployed to production and has improved key business metrics of the Microsoft virtual agent, as confirmed by A/B experiments. Results include a relative increase of 12% in problem resolution and a relative decrease of over 4% in escalations to a human operator. While our current use cases focus on intent disambiguation and contextual recommendation for support bots, we believe our methods can be extended to other domains.
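A compact sketch of the neural-linear bandit idea follows: a (here frozen, random) network maps the query context to a feature representation, and Bayesian linear regression with Thompson sampling runs on top of those features for each candidate intent. The dimensions, the feature map, and the reward handling are illustrative assumptions, not the production implementation.

```python
# Neural-linear bandit sketch: fixed nonlinear feature map + per-arm Bayesian linear
# regression with Thompson sampling. All sizes and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d_ctx, d_feat, n_intents = 20, 8, 4
W = rng.normal(size=(d_ctx, d_feat))                  # stand-in for learned network weights

def features(context):
    return np.tanh(context @ W)                       # "last layer" representation

A = [np.eye(d_feat) for _ in range(n_intents)]        # per-intent precision matrices
b = [np.zeros(d_feat) for _ in range(n_intents)]

def choose_intent(context):
    z = features(context)
    sampled_rewards = []
    for a_mat, b_vec in zip(A, b):
        cov = np.linalg.inv(a_mat)
        theta = rng.multivariate_normal(cov @ b_vec, cov)   # Thompson sample
        sampled_rewards.append(z @ theta)
    return int(np.argmax(sampled_rewards)), z

def update(arm, z, reward):
    A[arm] += np.outer(z, z)
    b[arm] += reward * z

context = rng.normal(size=d_ctx)
arm, z = choose_intent(context)
update(arm, z, reward=1.0)                            # e.g., the user clicked the suggestion
print("chosen intent:", arm)
```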
Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations is effective and interpretable, while generalization requires more feature engineering effort. With less feature engineering, deep neural networks can generalize better to unseen feature combinations through low-dimensional dense embeddings learned for the sparse features. However, deep neural networks with embeddings can over-generalize and recommend less relevant items when the user-item interactions are sparse and high-rank. In this paper, we present Wide & Deep learning-jointly trained wide linear models and deep neural networks-to combine the benefits of memorization and generalization for recommender systems. We productionized and evaluated the system on Google Play, a commercial mobile app store with over one billion active users and over one million apps. Online experiment results show that Wide & Deep significantly increased app acquisitions compared with wide-only and deep-only models. We have also open-sourced our implementation in TensorFlow.
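A minimal tf.keras sketch of the Wide & Deep structure is shown below: a linear ("wide") part over sparse cross-product features and a deep part over embeddings of sparse IDs, summed into one logit and trained jointly. Vocabulary sizes, layer widths, and the single Adam optimizer are simplifying assumptions; the paper trains the wide part with FTRL and the deep part with AdaGrad.

```python
# Minimal Wide & Deep model sketch in tf.keras (illustrative sizes and optimizer).
import tensorflow as tf

n_cross_features, n_item_ids, n_user_buckets = 1000, 5000, 2000

wide_in = tf.keras.Input(shape=(n_cross_features,), name="cross_products")  # multi-hot
item_in = tf.keras.Input(shape=(1,), dtype="int32", name="item_id")
user_in = tf.keras.Input(shape=(1,), dtype="int32", name="user_bucket")

item_emb = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(n_item_ids, 32)(item_in))
user_emb = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(n_user_buckets, 32)(user_in))
deep = tf.keras.layers.Concatenate()([item_emb, user_emb])
for units in (128, 64):
    deep = tf.keras.layers.Dense(units, activation="relu")(deep)

wide_logit = tf.keras.layers.Dense(1, use_bias=False)(wide_in)   # memorization path
deep_logit = tf.keras.layers.Dense(1)(deep)                      # generalization path
output = tf.keras.layers.Activation("sigmoid")(tf.keras.layers.Add()([wide_logit, deep_logit]))

model = tf.keras.Model([wide_in, item_in, user_in], output)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```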
In the past few years, Deep Reinforcement Learning (DRL) has become a valuable solution to automatically learn efficient resource management strategies in complex networks. In many scenarios, the learning task is performed in the Cloud, while experience samples are generated directly by edge nodes or users. Therefore, the learning task involves some data exchange which, in turn, subtracts a certain amount of transmission resources from the system. This creates a friction between the need to speed up convergence towards an effective strategy, which requires the allocation of resources to transmit learning samples, and the need to maximize the amount of resources used for data plane communication, maximizing users' Quality of Service (QoS), which requires the learning process to be efficient, i.e., minimize its overhead. In this paper, we investigate this trade-off and propose a dynamic balancing strategy between the learning and data planes, which allows the centralized learning agent to quickly converge to an efficient resource allocation strategy while minimizing the impact on QoS. Simulation results show that the proposed method outperforms static allocation methods, converging to the optimal policy (i.e., maximum efficacy and minimum overhead of the learning plane) in the long run.
A fundamental question in any peer-to-peer ride-sharing system is how to meet passenger requests both effectively and efficiently so as to balance supply and demand in real time. On the passenger side, traditional approaches focus on pricing strategies that increase the probability of users placing ride requests in order to adjust the distribution of demand. However, previous methods do not take into account the impact of a change in strategy on future supply and demand: passengers' calls reposition drivers to different destinations, which affects drivers' income for a period of time afterwards. Motivated by this observation, we attempt to optimize the distribution of demand by learning long-term spatio-temporal values as a guideline for the pricing strategy. In this study, we propose an offline deep reinforcement learning based method focusing on the demand side to improve the utilization of transportation resources and customer satisfaction. We adopt a spatio-temporal learning method to learn the value of different times and locations, and then incentivize passengers' ride requests to adjust the distribution of demand and balance supply and demand in the system. In particular, we model the problem as a Markov Decision Process (MDP).
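A tabular sketch of the spatio-temporal value idea follows: learn V(time, location) from logged trips with TD(0), then use the value difference between destination and origin to decide how strongly to incentivize a ride request. The grid size, reward scale, and incentive rule are illustrative assumptions; the paper learns these values with an offline deep reinforcement learning method rather than a lookup table.

```python
# Tabular TD(0) over (time slot, grid cell) values, plus a toy incentive rule.
import numpy as np

n_time_slots, n_cells, gamma, lr = 48, 100, 0.95, 0.05
V = np.zeros((n_time_slots, n_cells))

def td_update(trip):
    """trip = (t, origin_cell, reward, t_next, dest_cell) from the historical log."""
    t, o, r, t2, d = trip
    target = r + gamma * V[t2 % n_time_slots, d]
    V[t, o] += lr * (target - V[t, o])

def incentive(trip, base_price=10.0, k=0.5):
    """Discount requests that reposition drivers toward higher-value states."""
    t, o, _, t2, d = trip
    advantage = gamma * V[t2 % n_time_slots, d] - V[t, o]
    return max(0.0, base_price - k * advantage)

rng = np.random.default_rng(42)
log = [(int(rng.integers(n_time_slots)), int(rng.integers(n_cells)),
        float(rng.uniform(5, 20)), 0, int(rng.integers(n_cells))) for _ in range(10_000)]
log = [(t, o, r, t + 1, d) for t, o, r, _, d in log]   # next slot follows the current one
for trip in log:
    td_update(trip)
print(f"sample incentive price: {incentive(log[0]):.2f}")
```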
The waterfall recommender system (RS), a popular form of RS in mobile applications, is a stream of recommended items consisting of successive pages that can be browsed by scrolling. In a waterfall RS, when a user finishes browsing a page, the edge (e.g., the mobile phone) sends a request to the cloud server for a new page of recommendations, known as the paging request mechanism. RSs typically put a large number of items into one page to reduce the excessive resource consumption of numerous paging requests; this, however, diminishes the RS's ability to timely renew the recommendations according to the user's real-time interest and leads to a poor user experience. Intuitively, inserting additional requests within a page to update the recommendations more frequently can alleviate the problem. However, previous attempts, including non-adaptive strategies (e.g., inserting requests uniformly), eventually lead to resource overconsumption. To this end, we envision a new learning task of edge intelligence named Intelligent Request Strategy Design (IRSD). It aims to improve the effectiveness of waterfall RSs by determining the appropriate occasions for request insertion based on users' real-time intention. Moreover, we propose a new paradigm of adaptive request insertion strategy named the Uplift-based On-edge Smart Request Framework (AdaRequest). AdaRequest 1) captures the dynamic changes of users' intentions by matching their real-time behaviors with their historical interests using attention-based neural networks; 2) estimates the counterfactual uplift of user purchases brought by an inserted request based on causal inference; and 3) determines the final request insertion strategy by maximizing a utility function under online resource constraints. We conduct extensive experiments on both offline datasets and online A/B tests to verify the effectiveness of AdaRequest.
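As a simplified stand-in for the uplift step, the sketch below estimates the counterfactual lift of an extra paging request with a two-model approach and then inserts requests only for the sessions with the highest estimated uplift under a request budget. AdaRequest itself uses attention-based user-intent models and causal-inference estimators; the synthetic data and logistic models here are purely illustrative.

```python
# Two-model uplift estimate plus budgeted request insertion on synthetic session data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 20_000
X = rng.normal(size=(n, 6))                       # session features (scroll speed, dwell, ...)
treated = rng.random(n) < 0.5                     # whether an extra request was inserted (logged)
base = 1 / (1 + np.exp(-(X[:, 0] - 1.0)))
lift = 0.1 * (X[:, 1] > 0.5)                      # only some sessions benefit from inserting
purchase = rng.random(n) < np.clip(base + treated * lift, 0, 1)

m_treat = LogisticRegression(max_iter=1000).fit(X[treated], purchase[treated])
m_ctrl = LogisticRegression(max_iter=1000).fit(X[~treated], purchase[~treated])

def plan_insertions(sessions: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of sessions that receive an extra request, highest uplift first."""
    uplift = m_treat.predict_proba(sessions)[:, 1] - m_ctrl.predict_proba(sessions)[:, 1]
    return np.argsort(-uplift)[:budget]

new_sessions = rng.normal(size=(1000, 6))
print(plan_insertions(new_sessions, budget=100)[:10])
```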
5G and beyond mobile networks will support heterogeneous use cases at an unprecedented scale, thus demanding automated control and optimization of network functions customized to the needs of individual users. Such fine-grained control of the Radio Access Network (RAN) is not possible with the current cellular architecture. To fill this gap, the Open RAN paradigm and its specifications introduce an open architecture with abstractions that enable closed-loop control and provide data-driven, intelligent optimization of the RAN at the user level. This is achieved through custom RAN control applications (i.e., xApps) deployed on near-real-time RAN Intelligent Controllers (near-RT RICs) at the edge of the network. Despite these premises, as of today the research community lacks a sandbox for building data-driven xApps and creating large-scale datasets for effective AI training. In this paper, we address this problem by introducing NS-O-RAN, a software framework that integrates a real-world, production-grade near-RT RIC with a 3GPP-based simulated environment on ns-3, enabling the development of xApps, automated large-scale data collection, and testing of deep reinforcement learning driven control policies for optimization at the user level. In addition, we present the first user-specific O-RAN Traffic Steering (TS) intelligent handover framework. It uses Random Ensemble Mixture, combined with a state-of-the-art convolutional neural network architecture, to optimally assign a serving base station to each user in the network. Our TS xApp, trained with more than 40 million data points collected by NS-O-RAN, runs on the near-RT RIC and controls its base stations. We evaluate the performance on a large-scale deployment, showing that xApp-based handover improves throughput and spectral efficiency by an average of 50% over traditional handover heuristics, with lower mobility overhead.
Recommender systems can strongly influence which information we see online, e.g., on social media, and thus impact our beliefs, decisions, and actions. At the same time, these systems can create substantial business value for different stakeholders. Given the growing potential impact of such AI-based systems on individuals, organizations, and society, questions of fairness have gained increased attention in recent years. However, research on fairness in recommender systems is still a developing area. In this survey, we first review the fundamental concepts and notions of fairness that were put forward in the area in the recent past. Afterward, through a review of more than 150 scholarly publications, we present an overview of how research in this field is currently operationalized, e.g., in terms of general research methodology, fairness measures, and algorithmic approaches. Overall, our analysis of recent works points to specific research gaps. In particular, we find that in many research works in computer science, very abstract problem operationalizations are prevalent, and questions of the underlying normative claims and what represents a fair recommendation in the context of a given application are often not discussed in depth. These observations call for more interdisciplinary research to address fairness in recommendation in a more comprehensive and impactful manner.