The demand for computing is still growing exponentially. This growth will translate into exponential growth in computing's energy consumption unless improvements in its energy efficiency can outpace the increase in demand. Yet, after decades of research, further improving energy efficiency is becoming increasingly challenging because computing is already highly optimized. As a result, at some point the growth in computing demand is likely to outpace the growth in its energy efficiency, potentially by a large margin. Such exponential growth, if left unchecked, will position computing as a substantial contributor to global carbon emissions. While prominent technology companies have recognized the problem and sought to reduce their carbon emissions, their success can, understandably yet inadvertently, convey the false impression that the problem is now solved or soon will be. As we discuss, eliminating the carbon emissions of computing, and of society more broadly, is far from solved, and such a false impression risks discouraging further research in this area. To better understand the scope of the problem, this article distills the fundamental trends that determine the carbon footprint of computing and their implications for achieving sustainable computing.
This paper explores the environmental implications of the super-linear growth trends of AI from a holistic perspective, spanning data, algorithms, and system hardware. We characterize the carbon footprint of AI computing by examining the model development cycle across industry-scale machine learning use cases, while also taking into account the life cycle of system hardware. Going a step further, we capture the operational and manufacturing carbon footprint of AI computing and present an end-to-end analysis of how hardware-software design and at-scale optimizations can help reduce the overall carbon footprint of AI. Based on industry experience and lessons learned, we share key challenges and chart important development directions across the many dimensions of AI. We hope the key messages and insights presented in this paper can inspire the community to advance the field of AI in an environmentally responsible manner.
The end of Dennard scaling and the slowdown of Moore's Law have put the energy use of datacenters on an unsustainable path. Datacenters already account for a significant fraction of worldwide electricity use, and application demand is scaling rapidly. We argue that substantial reductions in the carbon intensity of datacenter computing can be achieved with a software-centric approach: by making energy and carbon visible to application developers through fine-grained changes to system APIs, enabling informed trade-offs between performance and carbon emissions, and by raising the level of application programming so that more energy-efficient means of computation and storage can be applied flexibly. We also lay out a research agenda for system software to reduce the carbon footprint of datacenter computing.
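To make the software-centric idea above concrete, here is a minimal, hypothetical sketch (not an API proposed by the paper) of a runtime that exposes estimated energy and carbon to the developer and picks among implementations under a deadline; every function name and number is an illustrative assumption.

```python
import time

# Hypothetical illustration: a runtime that lets a developer declare a deadline and
# then picks the lowest-carbon implementation that still meets it, making the
# performance/carbon trade-off explicit. All values below are assumed for the example.

def current_carbon_intensity_g_per_kwh() -> float:
    """Placeholder for a grid-signal lookup (e.g., a regional carbon-intensity API)."""
    return 450.0  # assumed value

def run_with_carbon_budget(variants, deadline_s):
    """Pick the implementation with the lowest estimated emissions that fits the deadline.

    `variants` is a list of (callable, est_runtime_s, est_energy_kwh) tuples.
    """
    intensity = current_carbon_intensity_g_per_kwh()
    feasible = [v for v in variants if v[1] <= deadline_s]
    if not feasible:
        feasible = variants  # nothing fits the deadline: fall back to all options
    fn, runtime_s, energy_kwh = min(feasible, key=lambda v: v[2] * intensity)
    print(f"chose {fn.__name__}: ~{energy_kwh * intensity:.1f} gCO2e, ~{runtime_s}s")
    return fn()

# Two hypothetical implementations of the same task with different energy profiles.
def fast_gpu_path():
    return "result"

def slow_efficient_path():
    time.sleep(0.01)
    return "result"

if __name__ == "__main__":
    run_with_carbon_budget(
        [(fast_gpu_path, 5, 0.020), (slow_efficient_path, 30, 0.004)],
        deadline_s=60,
    )
```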
By providing unprecedented access to computational resources, cloud computing has enabled rapid growth in technologies such as machine learning, whose computational demands incur a high energy cost and a commensurate carbon footprint. As a result, recent scholarship has called for better estimates of the greenhouse gas impact of AI: today's data scientists do not have easy or reliable access to measurements of this information, precluding the development of actionable tactics. Cloud providers presenting information about software carbon intensity to users is a fundamental stepping stone towards minimizing emissions. In this paper, we provide a framework for measuring software carbon intensity and propose to measure operational carbon emissions using location-based and time-specific marginal emissions data per unit of energy. We provide measurements of operational software carbon intensity for a set of modern models in natural language processing and computer vision across a range of model sizes, including the pretraining of a 6.1-billion-parameter language model. We then evaluate a suite of approaches for reducing emissions on the Microsoft Azure cloud compute platform: using cloud instances in different geographic regions, using cloud instances at different times of day, and dynamically pausing cloud instances when the marginal carbon intensity is above a certain threshold. We confirm previous results that the geographic region of the data center plays a significant role in the carbon intensity of a given cloud instance and find that choosing an appropriate region can have the largest impact on reducing operational emissions. We also show that the time of day has a notable impact on operational software carbon intensity. Finally, we conclude with recommendations for how machine learning practitioners can use software carbon intensity information to reduce their environmental impact.
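The two levers summarized above lend themselves to a short sketch: operational emissions computed as energy times a time-specific marginal carbon intensity, and work paused whenever that intensity crosses a threshold. The power draw, intensity series, and threshold below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: (1) operational emissions as energy multiplied by location- and
# time-specific marginal carbon intensity, and (2) pausing work when the marginal
# intensity exceeds a threshold. All numbers are hypothetical.

def operational_emissions_g(energy_kwh: float, marginal_intensity_g_per_kwh: float) -> float:
    """Grams of CO2e attributed to a workload for one accounting interval."""
    return energy_kwh * marginal_intensity_g_per_kwh

def emissions_with_pausing(power_kw, hourly_intensity, pause_threshold):
    """Accumulate emissions hour by hour, skipping hours above the threshold."""
    total_g, worked_hours = 0.0, 0
    for intensity in hourly_intensity:
        if intensity > pause_threshold:
            continue  # instance paused: no compute, (approximately) no emissions
        total_g += operational_emissions_g(power_kw * 1.0, intensity)  # 1 hour of work
        worked_hours += 1
    return total_g, worked_hours

if __name__ == "__main__":
    intensities = [320, 410, 690, 720, 510, 300]  # hypothetical gCO2e/kWh per hour
    g, hours = emissions_with_pausing(power_kw=0.4, hourly_intensity=intensities,
                                      pause_threshold=500)
    print(f"{g:.0f} gCO2e over {hours} active hours")
```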
Artificial Intelligence (AI) is used to create more sustainable production methods and model climate change, making it a valuable tool in the fight against environmental degradation. This paper describes the paradox of an energy-consuming technology serving the ecological challenges of tomorrow. The study provides an overview of the sectors that use AI-based solutions for environmental protection. It draws on numerous examples from AI for Green players to present use cases and concrete examples. In the second part of the study, the negative impacts of AI on the environment and the emerging technological solutions to support Green AI are examined. It is also shown that the research on less energy-consuming AI is motivated more by cost and energy autonomy constraints than by environmental considerations. This leads to a rebound effect that favors an increase in the complexity of models. Finally, the need to integrate environmental indicators into algorithms is discussed. The environmental dimension is part of the broader ethical problem of AI, and addressing it is crucial for ensuring the sustainability of AI in the long term.
Accurate reporting of energy and carbon usage is essential for understanding the potential climate impacts of machine learning research. We introduce a framework that makes this easier by providing a simple interface for tracking real-time energy consumption and carbon emissions, as well as generating standardized online appendices. Utilizing this framework, we create a leaderboard for energy efficient reinforcement learning algorithms to incentivize responsible research in this area as an example for other areas of machine learning. Finally, based on case studies using our framework, we propose strategies for mitigation of carbon emissions and reduction of energy consumption. By making accounting easier, we hope to further the sustainable development of machine learning experiments and spur more research into energy efficient algorithms.
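In the spirit of the framework described above, the following is a deliberately simplified, hypothetical tracker (not the framework's actual interface) that wraps an experiment and writes a small standardized "appendix" file; the assumed average power draw and grid intensity are placeholders.

```python
import json
import time

class SimpleImpactTracker:
    """Toy context manager that logs runtime, energy, and emissions to a JSON appendix."""

    def __init__(self, avg_power_kw: float, grid_intensity_g_per_kwh: float):
        self.avg_power_kw = avg_power_kw          # assumed average draw of the machine
        self.grid_intensity = grid_intensity_g_per_kwh

    def __enter__(self):
        self.t0 = time.time()
        return self

    def __exit__(self, *exc):
        hours = (time.time() - self.t0) / 3600.0
        energy_kwh = self.avg_power_kw * hours
        report = {
            "runtime_hours": round(hours, 4),
            "energy_kwh": round(energy_kwh, 4),
            "emissions_gco2e": round(energy_kwh * self.grid_intensity, 2),
        }
        with open("impact_appendix.json", "w") as f:
            json.dump(report, f, indent=2)        # standardized, shareable record

if __name__ == "__main__":
    with SimpleImpactTracker(avg_power_kw=0.25, grid_intensity_g_per_kwh=400.0):
        sum(i * i for i in range(10**6))          # stand-in for an RL training run
```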
A global trend towards increasing wind turbine size and distance from shore is emerging within the rapidly growing offshore wind farm market. In the UK, the offshore wind sector produced its highest amount of electricity to date in 2019, a 19.6% increase over the previous year. The UK is now set to increase production further, aiming for a 74.7% increase in installed turbine capacity, as reflected in the recent Crown Estate leasing round. With such significant growth, the sector is now looking to robotics and artificial intelligence (RAI) to address life-cycle service barriers in order to support sustainable and profitable offshore wind energy production. Today, RAI applications are mainly used to support the short-term goals of operation and maintenance. Going forward, however, RAI has the potential to play a critical role across the full life cycle of offshore wind infrastructure, from surveying, planning, design, and logistics to operational support, training, and decommissioning. This paper presents one of the first systematic reviews of RAI for the offshore renewable energy sector. The state of the art in RAI is analyzed against offshore energy requirements, from both industry and academia, in terms of current and future needs. Our review also includes a detailed assessment of the investment, regulation, and skills development supporting RAI. Key trends identified through a detailed analysis of patent and academic publication databases provide insight into barriers such as certification of autonomous platforms for safety compliance and reliability, digital architectures for scalability in autonomous fleets, adaptive mission planning for resident operations, and optimized human-machine interaction for trusted partnerships between people and autonomous assistants.
Ongoing risks from climate change have impacted the livelihood of global nomadic communities, and are likely to lead to increased migratory movements in coming years. As a result, mobility considerations are becoming increasingly important in energy systems planning, particularly to achieve energy access in developing countries. Advanced Plug and Play control strategies have been recently developed with such a decentralized framework in mind, more easily allowing for the interconnection of nomadic communities, both to each other and to the main grid. In light of the above, the design and planning strategy of a mobile multi-energy supply system for a nomadic community is investigated in this work. Motivated by the scale and dimensionality of the associated uncertainties, impacting all major design and decision variables over the 30-year planning horizon, Deep Reinforcement Learning (DRL) is implemented for the design and planning problem tackled. DRL based solutions are benchmarked against several rigid baseline design options to compare expected performance under uncertainty. The results on a case study for ger communities in Mongolia suggest that mobile nomadic energy systems can be both technically and economically feasible, particularly when considering flexibility, although the degree of spatial dispersion among households is an important limiting factor. Key economic, sustainability and resilience indicators such as Cost, Equivalent Emissions and Total Unmet Load are measured, suggesting potential improvements compared to available baselines of up to 25%, 67% and 76%, respectively. Finally, the decomposition of values of flexibility and plug and play operation is presented using a variation of real options theory, with important implications for both nomadic communities and policymakers focused on enabling their energy access.
In recent years, the exponential proliferation of smart devices with their intelligent applications poses severe challenges on conventional cellular networks. Such challenges can be potentially overcome by integrating communication, computing, caching, and control (i4C) technologies. In this survey, we first give a snapshot of different aspects of the i4C, comprising background, motivation, leading technological enablers, potential applications, and use cases. Next, we describe different models of communication, computing, caching, and control (4C) to lay the foundation of the integration approach. We review current state-of-the-art research efforts related to the i4C, focusing on recent trends of both conventional and artificial intelligence (AI)-based integration approaches. We also highlight the need for intelligence in resources integration. Then, we discuss integration of sensing and communication (ISAC) and classify the integration approaches into various classes. Finally, we propose open challenges and present future research directions for beyond 5G networks, such as 6G.
Automated machine learning (AutoML) strives to automatically configure machine learning algorithms and their composition into an overall (software) solution - a machine learning pipeline - tailored to the learning task (dataset) at hand. Over the past decade, AutoML has become a hot research topic with hundreds of contributions. While AutoML offers many prospects, it is also known to be quite resource-intensive, which is one of its main points of criticism. The primary cause of its high resource consumption is that many approaches rely on (costly) evaluations of many ML pipelines while searching for good candidates. This problem is amplified in the context of research on AutoML methods, due to large-scale experiments conducted with many datasets and approaches, each of which is repeated several times to rule out random effects. In the spirit of recent work on Green AI, this paper aims to raise awareness of this problem among AutoML researchers and to elaborate on possible remedies. To this end, we identify four categories of actions the community may take towards more sustainable AutoML research, namely approach design, benchmarking, research incentives, and transparency.
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Energy consumption in buildings, both residential and commercial, accounts for approximately 40% of all energy usage in the U.S., and similar numbers are being reported from countries around the world. This significant amount of energy is used to maintain a comfortable, secure, and productive environment for the occupants. So, it is crucial that the energy consumption in buildings must be optimized, all the while maintaining satisfactory levels of occupant comfort, health, and safety. Recently, Machine Learning has been proven to be an invaluable tool in deriving important insights from data and optimizing various systems. In this work, we review the ways in which machine learning has been leveraged to make buildings smart and energy-efficient. For the convenience of readers, we provide a brief introduction of several machine learning paradigms and the components and functioning of each smart building system we cover. Finally, we discuss challenges faced while implementing machine learning algorithms in smart buildings and provide future avenues for research at the intersection of smart buildings and machine learning.
The increasing energy consumption and carbon footprint of deep learning (DL) due to its growing compute requirements have become a cause for concern. In this work, we focus on the carbon footprint of developing models for medical image analysis (MIA), where volumetric images of high spatial resolution are processed. We present and compare the features of four tools from the literature for quantifying the carbon footprint of DL. Using one of these tools, we estimate the carbon footprint of medical image segmentation pipelines. We choose nnU-Net as a proxy for a medical image segmentation pipeline and run experiments on three common datasets. With this work we hope to raise awareness of the increasing energy costs incurred by MIA. We discuss simple strategies for cutting down the environmental impact that can make model selection and training processes more efficient.
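As an illustration of how such carbon-tracking tools are typically used, the sketch below wraps a stand-in training loop with CodeCarbon's EmissionsTracker; CodeCarbon is named here only as an example of this class of tools, and the abstract above does not say whether it is among the four compared.

```python
# Minimal sketch of wrapping a training loop in a carbon-tracking tool.
# The training loop is a placeholder, not an nnU-Net implementation.
from codecarbon import EmissionsTracker

def train_one_epoch():
    # stand-in for one epoch of segmentation training
    return sum(i * i for i in range(10**6))

if __name__ == "__main__":
    tracker = EmissionsTracker()       # samples CPU/GPU/RAM energy and converts to CO2e
    tracker.start()
    try:
        for _ in range(3):
            train_one_epoch()
    finally:
        emissions_kg = tracker.stop()  # estimated emissions in kg CO2e
    print(f"estimated emissions: {emissions_kg:.6f} kg CO2e")
```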
Progress in machine learning (ML) comes with a cost to the environment, given that training ML models requires significant computational resources, energy and materials. In the present article, we aim to quantify the carbon footprint of BLOOM, a 176-billion parameter language model, across its life cycle. We estimate that BLOOM's final training emitted approximately 24.7 tonnes of CO2eq if we consider only the dynamic power consumption, and 50.5 tonnes if we account for all processes ranging from equipment manufacturing to energy-based operational consumption. We also study the energy requirements and carbon emissions of its deployment for inference via an API endpoint receiving user queries in real-time. We conclude with a discussion regarding the difficulty of precisely estimating the carbon footprint of ML models and future research directions that can contribute towards improving carbon emissions reporting.
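The two totals reported above already imply a simple decomposition; the short sketch below only restates the abstract's numbers to show how much of the life-cycle estimate is attributed to processes other than dynamic power consumption.

```python
# Back-of-the-envelope decomposition using only the two totals quoted above.
dynamic_only_t = 24.7    # tonnes CO2eq, dynamic power consumption only (from the abstract)
full_lifecycle_t = 50.5  # tonnes CO2eq, including manufacturing and other processes

non_dynamic_t = full_lifecycle_t - dynamic_only_t
print(f"non-dynamic share: {non_dynamic_t:.1f} t "
      f"({non_dynamic_t / full_lifecycle_t:.0%} of the life-cycle total)")
```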
The size and complexity of deep neural networks continue to grow exponentially, significantly increasing the energy consumption of training and inference for these models. We introduce an open-source package, eco2AI, to help data scientists and researchers track the energy consumption and equivalent CO2 emissions of their models in a straightforward way. In eco2AI, we emphasize accuracy of energy consumption tracking and correct regional CO2 emissions accounting. We encourage the research community to search for new optimal artificial intelligence (AI) architectures with lower computational cost. The motivation also comes from the concept of an AI-based greenhouse gas sequestrating cycle with both Sustainable AI and Green AI pathways.
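A minimal usage sketch of eco2AI follows, assuming the Tracker interface described in the package's documentation; argument names may differ between versions, and the tracked workload is a placeholder.

```python
import eco2ai

# Assumed Tracker arguments based on the package's documented usage.
tracker = eco2ai.Tracker(
    project_name="example_training",
    experiment_description="toy workload",
    file_name="emission.csv",          # per-run results appended to a CSV file
)
tracker.start()
_ = sum(i * i for i in range(10**6))   # stand-in for model training
tracker.stop()                         # writes energy and CO2e estimates
```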
Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing the trust of end-users in machines. As the number of connected devices keeps on growing, the Internet of Things (IoT) market needs to be trustworthy for the end-users. However, existing literature still lacks a systematic and comprehensive survey work on the use of XAI for IoT. To bridge this gap, in this paper we review XAI frameworks with a focus on their characteristics and support for IoT. We illustrate the widely-used XAI services for IoT applications, such as security enhancement, Internet of Medical Things (IoMT), Industrial IoT (IIoT), and Internet of City Things (IoCT). We also suggest the implementation choice of XAI models over IoT systems in these applications with appropriate examples and summarize the key inferences for future works. Moreover, we present the cutting-edge development in edge XAI structures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored for the demands of future IoT use cases.
As an emerging technology in the era of Industry 4.0, the digital twin has gained unprecedented attention because of its promise to further optimize process design, quality control, health monitoring, decision and policy making, and more, by comprehensively modeling the physical world as a group of interconnected digital models. In a two-part series of papers, we examine the fundamental role of different modeling techniques, twinning enabling technologies, and uncertainty quantification and optimization methods commonly used in digital twins. This second paper presents a literature review of the key enabling technologies of digital twins, with an emphasis on uncertainty quantification, optimization methods, open-source datasets and tools, major findings, challenges, and future directions. The discussion focuses on current methods of uncertainty quantification and optimization and how they are applied across the different dimensions of a digital twin. In addition, this paper presents a case study in which a battery digital twin is constructed and tested to illustrate some of the modeling and twinning methods reviewed in this two-part review. The code and preprocessed data used to generate all the results and figures in the case study are available on GitHub.
Machine learning sensors represent a paradigm shift for the future of embedded machine learning applications. Current instantiations of embedded machine learning (ML) suffer from complex integration, a lack of modularity, and privacy and security concerns arising from data movement. This article proposes a data-centric paradigm for embedding sensor intelligence on edge devices to combat these challenges. Our vision for "sensor 2.0" entails isolating sensor input data and ML processing from the wider system at the hardware level and providing a thin interface that mimics the functionality of traditional sensors. This separation leads to a modular and easy-to-use ML sensor device. We discuss the challenges posed by the standard approach of building ML processing into the software stack of the controlling microprocessor on an embedded system and how the modularity of ML sensors alleviates these problems. ML sensors increase privacy and accuracy while making it easier for system builders to integrate ML into their products as simple components. We provide examples of prospective ML sensors and an illustrative datasheet as a demonstration and hope that this will build a dialogue to move us towards sensor 2.0.
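The "thin interface" idea described above can be pictured as follows; this is a purely hypothetical illustration (names, fields, and thresholds are invented for the example) in which raw data and inference stay inside the sensor module and the host sees only a small reading.

```python
from dataclasses import dataclass

@dataclass
class PersonDetection:
    present: bool
    confidence: float  # 0.0 - 1.0

class PersonSensor:
    """Looks like a conventional sensor to the host; raw frames never leave the module."""

    def read(self) -> PersonDetection:
        raw_frame = self._capture()          # raw data stays on-module
        score = self._run_model(raw_frame)   # on-module inference
        return PersonDetection(present=score > 0.5, confidence=score)

    def _capture(self):
        return [0] * 96 * 96                 # placeholder image buffer

    def _run_model(self, frame) -> float:
        return 0.12                          # placeholder inference result

if __name__ == "__main__":
    reading = PersonSensor().read()
    print(reading)                           # host code only ever sees this small struct
```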
The recent history of deep learning has been one of achievement: from triumphing over humans in games to world-leading performance in image classification, speech recognition, translation, and other tasks. But this progress has come with a voracious appetite for computing power. This article catalogs the extent of that dependency, showing that progress across a wide variety of applications is strongly reliant on increases in computing power. Extrapolating forward, this reliance suggests that progress along current lines is rapidly becoming economically, technically, and environmentally unsustainable. Continued progress in these applications will therefore require dramatically more computationally efficient methods, which will have to come either from changes to deep learning or from a move to other machine learning methods.
Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data. These models have obtained notable gains in accuracy across many NLP tasks. However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption. As a result these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware. In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP. Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice.
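The kind of estimate the paper performs can be sketched as a simple product of hardware power, training time, datacenter overhead (PUE), and a grid emissions factor; every numeric value below is an illustrative placeholder, not a figure from the paper.

```python
# Hedged sketch of a training-emissions estimate: energy at the facility level times
# a grid emissions factor. All inputs are hypothetical placeholders.

def training_emissions_kg(gpu_count: int,
                          gpu_power_kw: float,
                          hours: float,
                          pue: float,
                          grid_kgco2e_per_kwh: float) -> float:
    energy_kwh = gpu_count * gpu_power_kw * hours * pue  # facility-level energy use
    return energy_kwh * grid_kgco2e_per_kwh

if __name__ == "__main__":
    kg = training_emissions_kg(gpu_count=8, gpu_power_kw=0.3, hours=120,
                               pue=1.58, grid_kgco2e_per_kwh=0.43)
    print(f"~{kg:.0f} kg CO2e for this hypothetical run")
```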