# CLIMATETECH IN FOCUS
# ARTIFICIAL INTELLIGENCE FOR SUSTAINABILITY

# Contents

- Terms of Use & Disclaimer
- Preface
- Executive Summary
- I. AI Evolution at Crossroads: Where Are We Heading?
  - The Past, Emergence, and Destination of Artificial General Intelligence
  - The AI & Sustainability Paradox
  - Hyperscale Sustainable AI Data Center
  - AI in Climate Mitigation & Adaptation
- II. AI for Sustainability Landscape
  - Roles by Automation Level
  - AI Use Case Landscape by Automation Level
  - Automation Levels Beyond Technology
- III. Focus Sector Deep-dive
  - Energy & Manufacturing
  - Shipping & Logistics
  - Finance & Investment
  - Certification & Global Trade
- IV. Powering the Progress: Incubation, Education, and Governance
  - Incubation for AI Innovation
  - Educating for AI-Native Generation
  - Open & Equitable AI for All
  - Governance of AI Risks
- V. Conclusion: What's Next?
- Institutions
- Acknowledgement
- Bibliography

# Terms of Use & Disclaimer

This report is published by Shanghai Climate Week and its partner institutions. All rights are strictly reserved. This report may not be reproduced or redistributed, in whole or in part, by any means – electronic, mechanical, photocopying, recording, or otherwise – without the written permission of Shanghai Climate Week or any of its partner institutions. Shanghai Climate Week disclaims any and all liability for actions undertaken by third parties in this respect. The information and opinions in this report have been prepared by Shanghai Climate Week and its partner institutions. This report does not constitute investment advice and should not be relied on for such advice or as a substitute for consultation with professional accountants, tax, legal, or financial advisors. Shanghai Climate Week and its partner institutions have made every effort to use reliable, up-to-date, and comprehensive information and analysis.
However, no representations or warranties are made regarding the accuracy of the information. Shanghai Climate Week and its partner institutions disclaim any obligation to update the information or conclusions in this report and shall not be held liable for any loss arising from any action taken or refrained from as a result of information contained in this report or in any reports or sources of information referred to herein, or for any consequential, special, or similar damages, even if advised of the possibility of such damages. The report does not constitute an offer to buy or sell securities or a solicitation of an offer to buy or sell securities. Any unauthorized sale of this report is prohibited without the written consent of Shanghai Climate Week.

Shanghai Climate Week is a global platform dedicated to advancing climate action and sustainable development through innovation and international cooperation, guided by the principles of "China Action, Asia Voice, Global Standard." This year, we are pleased to continue to present ClimateTech in Focus, which discusses Artificial Intelligence as a key enabling infrastructure for sustainability, offering deeper insights to policymakers, industry leaders, and practitioners on how AI can strengthen energy systems, supply chains, and climate governance.

– Shanghai Climate Week

The Sino-International Entrepreneurs Federation is committed to helping public- and private-sector leaders achieve their goals by advising on strategy, policy, and delivery. As the Presenting Partner of this year's ClimateTech in Focus, we underscore the critical role of aligning AI-driven innovation with measurable, real-world sustainability impacts – empowering entrepreneurs and investors across regions to scale trusted, practical solutions that accelerate the transition to a low-carbon, resilient global economy.

– Sino-International Entrepreneurs Federation

# Preface I

Dr.
Tshilidzi Marwala
Under-Secretary-General, United Nations
Rector, United Nations University

Ten years after the adoption of the 2030 Agenda for Sustainable Development, the international community continues to confront the widening gap between our ambitions and our collective progress. Climate impacts are accelerating, and while no region is spared, their effects are felt most acutely among those with the fewest resources to respond. These realities underscore the urgency of strengthening resilience, expanding access to sustainable energy, and ensuring that all countries have the capacity to participate meaningfully in the global response to climate change. In recent years, science, technology, and innovation have offered new avenues for advancing this work. Artificial intelligence is becoming an increasingly significant tool for improving how societies anticipate climate risks, plan for changing conditions, and manage natural resources. We see its growing role in early warning systems, climate modelling, renewable energy integration, agriculture, water management, and urban planning. These developments point to AI's potential to support more effective and equitable climate action. However, the benefits of AI will only be realized if countries have the skills, infrastructure, and institutions required to use these technologies responsibly. I often highlight the importance of "closing the digital divide" by strengthening national capacities so that all countries – particularly those in the Global South – can access the increasing opportunities emerging from digital transformation. Access and opportunity must be available to all. This principle is essential not only for fairness, but also for the credibility and effectiveness of global climate action. United Nations University's growing portfolio of initiatives illustrates a clear and urgent recognition: digital innovation must be mobilized in the service of climate objectives.
As the research and education arm of the United Nations, United Nations University is committed to bridging science, policy, and capacity development to build a sustainable, inclusive, and digitally empowered future. Our work supports the UN system, governments, academic institutions, and communities in translating ambition into implementation: by developing the skills needed for data-driven decision-making, strengthening institutional readiness, advancing ethical and inclusive governance of emerging technologies, and fostering cooperation across sectors, disciplines, and regions. This integrated approach reflects not only the demands of our time but also the values at the heart of sustainable development. This new edition of ClimateTech In Focus, dedicated to AI for Sustainability, offers timely insights into how technological innovation is reshaping climate and development pathways. Through case studies, expert dialogues, and analysis across energy, agriculture, finance, education, and environmental management, the report highlights both the opportunities and the responsibilities associated with deploying AI in support of sustainable development. It also draws attention to the essential enabling conditions required to scale these solutions, including reliable energy systems, coherent governance frameworks, adequate financing, and the continuous development of human capital. The years ahead will be decisive. The choices we make now will determine the trajectory of our shared future. I remain confident that meaningful progress is possible when governments, the private sector, academia, and civil society work together with a shared commitment to inclusion, equity, and long-term sustainability. The United Nations will continue to play its part by supporting countries to harness the benefits of AI and digital innovation in ways that are safe, ethical, and aligned with the vision of the 2030 Agenda. 
# Preface II

Antonio Basilio
Director, APEC Business Advisory Council
Chairman, Pacific Economic Cooperation Council Philippine National Committee

Across the Asia-Pacific and the world, one message is unmistakably clear: the twin forces of technological innovation and collective action will define our ability to confront the climate crisis. Among these technologies, artificial intelligence stands out as a catalyst of unprecedented potential – reshaping how we predict risks, optimize resources, and design the low-carbon systems of tomorrow. AI is rapidly transforming economies and "holds immense potential to unlock innovation, drive productivity, and promote inclusive growth" – if we guide its use responsibly and inclusively. The urgency of our shared mission has not diminished; if anything, it has become more immediate as climate impacts intensify each year. What continues to evolve are the tools at our disposal – and the partnerships that strengthen our response. At the forefront of this new era, AI offers an extraordinary opportunity to accelerate sustainable innovation, enhance efficiency, and deepen cooperation. However, we must ensure that this progress benefits all and leaves no one behind. Through initiatives such as the Seminar on Responsible Adoption of General-Purpose AI and the Asia-Pacific AI Governance Accelerator, PECC has been fostering frank dialogue on responsible AI governance – highlighting the importance of ethics, transparency, and trust in AI deployment. Likewise, ABAC's continued engagement with APEC Leaders has underscored that technological progress must go hand-in-hand with economic resilience, growth finance, and human-centered development. We have consistently emphasized that innovation should not come at the cost of equity or sustainability. Public-private collaboration is key: by working together, governments and businesses can transform emerging technologies like AI into engines of sustainability and shared prosperity.
Building on our previous report, ClimateTech in Focus: Innovations for a Greener Supply Chain, which explored how technology can strengthen supply chain resilience, this year's edition – Artificial Intelligence for Sustainability – examines how AI is reshaping the future of infrastructure, finance, and education. From smarter energy grids and climate-resilient infrastructure to growth finance innovation and personalized education for sustainability, AI is opening new frontiers. This report offers concrete, forward-looking recommendations to ensure that the AI revolution contributes not only to competitiveness but also to equity, trust, and climate action. Our goal is to harness AI to amplify what works in sustainability – optimizing resource use, democratizing knowledge, and accelerating low-carbon innovation – while safeguarding privacy, ethics, and inclusivity. We stand at a pivotal moment where technology and responsibility intersect. The decisions we make today will define the legacy we leave for our children and grandchildren. I urge all stakeholders – businesses, governments, academia, and civil society – to seize this opportunity to harness the transformative power of AI, not as an end in itself, but as a powerful partner in building a sustainable, inclusive, resilient, and prosperous Asia-Pacific for all. By aligning innovation with stewardship and collective action, we can ensure that AI becomes a driving force in our quest for a future where economic growth and environmental well-being advance hand in hand.

# Preface III

Zhou Yiping
Founding Director, United Nations Office for South-South Cooperation
Senior Strategic Advisor, Shanghai Climate Week

We are entering a decisive decade in which the trajectories of climate action and technological development are becoming inseparably intertwined. Climate risks are no longer abstract projections but lived realities across regions, economies, and societies.
At the same time, artificial intelligence is rapidly evolving from a frontier technology into a foundational infrastructure shaping how we generate knowledge, allocate resources, and govern complex systems. The question before us is no longer whether AI will influence sustainable development, but whose priorities it will serve, and on what terms. For the Global South, this moment carries particular weight. Many developing countries face the dual challenge of acute climate vulnerability and structural constraints in technology, finance, and infrastructure, yet these same countries also hold immense potential to leapfrog traditional development pathways. Artificial intelligence, if designed and deployed with sustainability in mind, can become a powerful enabler of such a transition – supporting climate-resilient infrastructure, accelerating clean energy integration, improving agricultural productivity, and enhancing transparency in green finance. If misaligned, however, AI risks reinforcing existing asymmetries, deepening the divide between technology producers and technology recipients. From the perspective of South-South cooperation, this is a defining inflection point. What is required is a shift toward shared innovation, co-development, and collective capacity building. This report argues that AI must be understood not merely as a tool, but as a system that embeds values, incentives, and power structures. Its integration into climate and development agendas must therefore be guided by principles of inclusivity, sustainability, and long-term resilience. This means aligning AI development with low-carbon objectives, recognizing and addressing its environmental footprint, and ensuring that data, models, and computational resources do not become new sources of exclusion. It also requires respecting national development pathways and data sovereignty, while fostering mechanisms for trust-based cooperation across regions. Yet technology alone is insufficient.
Sustainable impact depends on people, institutions, and policies. Capacity building – from digital literacy to advanced technical expertise – must be treated as a strategic priority. Data infrastructure must be developed as a public good, not a private bottleneck. And global governance frameworks must evolve to reflect the realities and aspirations of developing countries, ensuring that international rules are not only technologically sophisticated, but also development-oriented and fair. The coming ten years will be decisive. They will determine whether artificial intelligence becomes a catalyst for a more balanced, climate-resilient global development model, or whether it entrenches new forms of inequality. This report calls on policymakers, business leaders, and innovators to act with foresight and responsibility – to invest in cooperation rather than fragmentation, in shared capabilities rather than narrow advantages. By embedding AI within a framework of South-South solidarity and global inclusiveness, we can ensure that technological progress serves as a bridge toward sustainability, rather than a fault line of division. Artificial intelligence should not be an exclusive asset of a few, but a shared instrument for collective resilience and green transformation. The opportunity before us is profound. The responsibility is even greater. How we choose to act now will shape not only the future of climate action, but the contours of global development for generations to come.

# Executive Summary

Artificial intelligence (AI) is no longer a peripheral tool in sustainability efforts. Across energy systems, manufacturing, logistics, finance, certification, education, and public governance, AI is increasingly embedded as operational infrastructure – shaping how societies anticipate risk, allocate resources, enforce rules, and coordinate action at scale.
This report examines how AI is already transforming climate mitigation and adaptation in practice, why progress remains uneven, and what institutional conditions are required to translate technical capability into durable public value. AI's greatest contribution to sustainability lies not in breakthrough algorithms, but in its ability to reduce uncertainty, compress decision cycles, and align complex systems under real-world constraints. In energy and manufacturing, AI supports grid stability, renewable integration, predictive maintenance, and energy-carbon co-optimization, helping systems move along the spectrum between resilience and efficiency. In shipping and logistics, AI has become indispensable for navigating tightening emissions regulations, volatile operating conditions, and Scope 3 accountability, transforming logistics from a carbon blind spot into a governable lever for decarbonization. In finance, AI is shifting climate risk from narrative disclosure into decision-grade intelligence – embedding physical and transition risks into pricing, capital allocation, and supervisory frameworks. In certification and global trade, AI is reconfiguring compliance from document-driven procedures into data-driven trust infrastructure, enabling verifiable carbon transparency as a condition of market access. Yet the report finds that technical readiness consistently outpaces institutional readiness. Many AI systems reach the pilot or MVP stage rapidly but struggle to progress to large-scale deployment. The binding constraints are rarely funding or model performance; instead, they arise from fragmented data ownership, legacy infrastructure, unclear regulatory pathways, limited operational capacity, and weak trust between innovators, regulators, and adopters. As a result, AI innovation in sustainability often stalls precisely at the point where real impact should begin. To address this gap, the report argues for a shift in how AI innovation is incubated and governed.
Effective incubation goes beyond capital provision and emphasizes deployment readiness: shared operational services, access to secure compute and data environments, domain-specific mentorship, predefined use cases with real buyers, and regulatory sandboxes that allow supervised learning before full-scale approval. Cross-border and networked incubation models are emerging as particularly effective, enabling talent, technology, and market feedback to circulate across regions rather than remaining siloed within national ecosystems. Education and talent development are equally decisive. With AI already embedded in everyday learning, the question is no longer whether students will use AI, but how education systems guide its use. The report highlights a shift from rote knowledge reproduction toward critical thinking, interdisciplinary problem-solving, and project-based learning grounded in real sustainability challenges. Successful systems treat AI as a learning assistant rather than an answer machine, and they invest in institutional pathways that allow young people to move from education into public service, entrepreneurship, and policy influence. Countries competing effectively for AI talent combine flexible visas and incentives with meaningful roles, practical testbeds, and long-term integration into national innovation systems. Equity and openness emerge as defining challenges of the AI era. While open data and shared models can accelerate innovation, poorly governed openness risks deepening data inequality, particularly for the Global South. The report emphasizes that openness must be conditional and governed – with clear usage rights, traceable provenance, and mechanisms that ensure local institutions retain control over locally generated data. 
Inclusive AI deployment is already visible in resource-constrained settings, where lightweight models, community data systems, and public-sector use cases deliver tangible benefits in mobility, disaster preparedness, public health, and agriculture. These experiences demonstrate that AI can support leapfrog development when paired with appropriate governance and infrastructure. AI governance is not a brake on climate innovation but its enabling condition. Effective governance requires risk-proportionate oversight that builds trust, transparency, and accountability – distinguishing decision-support tools from systems exercising authority, and ensuring explainability, human oversight, and public legitimacy for high-impact uses. When treated as a learning system supported by capacity building and cross-border cooperation, governance enables AI to scale responsibly for climate action rather than constraining it.

# AI EVOLUTION AT CROSSROADS: WHERE ARE WE HEADING?

> "AI and other monitoring methods allow near real-time, transparent validation of carbon results, while strengthening disaster preparedness and enabling anticipatory action – saving lives, reducing costs, and building community resilience."
> – H.E. Daniel Francisco Chapo, President of Mozambique

> "We are on the verge of absolute irreversibility of climate change. It's really now that it's being decided. In this decisive moment, the tools we choose to develop and deploy – including artificial intelligence – will inevitably shape our capacity to anticipate, to act, and ultimately to remain within planetary boundaries. In my eyes, we have no choice but to find a model of economic development compatible with those boundaries."
> – H.E. Corinne Lepage, Former French Minister of the Environment & Former Member of the European Parliament

# The Past, Emergence, and Destination of Artificial General Intelligence

Large Language Models (LLMs) mark a major change in AI.
As these systems scale, especially Generative Pre-trained Transformer (GPT) models, they can develop skills like reasoning and problem-solving without being explicitly programmed, and these gains are still hard to fully explain. This has drawn strong interest from researchers and industry. Many report contributors and scholars agree that LLMs provide a clear substitutive advantage over traditional AI methods. As a result, LLMs have become a leading trend, changing how work is done across many fields.

Figure 1.1a LLMs vs. Traditional AI Approaches

<table><tr><td>Feature</td><td>LLMs</td><td>Traditional AI Approaches</td><td>Refs</td></tr><tr><td>Language Processing</td><td>Human-like, context-rich, emergent</td><td>Rule-based/statistical, limited</td><td>5,6,7,10</td></tr><tr><td>Adaptability</td><td>High (prompting, few-shot learning)</td><td>Low (task-specific training)</td><td>2,4,5,10</td></tr><tr><td>Application Scope</td><td>Broad, multi-domain</td><td>Narrow, domain-specific</td><td>2,3,6,8,9</td></tr><tr><td>Human Collaboration</td><td>Effective hybrid workflows</td><td>Limited</td><td>1</td></tr><tr><td>Interpretability</td><td>Basic, with hallucinations / bias</td><td>Varies</td><td>5,10</td></tr><tr><td>Computational Consumption</td><td>High</td><td>Varies, usually low</td><td></td></tr></table>

This growing interest has increased research into emerging technologies, shifting focus from traditional AI methods to advanced models. Researchers are applying a unified GPT and agent framework to large-scale language understanding, computer vision, reasoning, and multimodal generation. Industry leaders are using this approach to address high-impact problems in decision-making, optimization, market research, and evidence synthesis, indicating broader adoption of LLMs in real-world use.

Figure 1.1b Scholarly Work by Keywords

Following this monumental paradigm shift, a pressing question arises: where are we heading?
Artificial General Intelligence (AGI) is the theoretical capability of machines to replicate human-like cognition, enabling them to understand, learn, and perform any intellectual task, unlike current narrow AI systems restricted to specific domains. Foreshadowed in fiction as early as Samuel Butler's Erewhon (1872), AGI is now a serious research goal, and systems like GPT are seen as potential steps toward it. Public information shows that some industry leaders at top AI companies speculate AGI will arrive by 2030 or sooner. Many theoretical scientists outside the tech industry remain skeptical, arguing that current progress is not a viable path to AGI. The history of AI is marked by cycles of optimism and disappointment, often referred to as "AI summers" and "AI winters." Each cycle shows both the promise of new breakthroughs and the limits that follow. The current excitement around LLMs may be a real step forward, but it also revives concerns about overpromising and the need for realistic expectations.

Figure 1.1c AI Evolution Pathways

This report draws on cross-disciplinary research from a diverse panel of contributors. While perspectives across academia and industry remain divided, a preliminary consensus emerges: achieving AGI will require at least one major theoretical breakthrough beyond current approaches. Grounded world models and meta-learning are considered the most necessary among the candidate technologies.
Source: ClimateTech In Focus Responding Contributors

Figure 1.1d Top 5 Critical Breakthroughs Expected Before Reaching AGI

<table><tr><td>Breakthrough</td><td>% Consider it Required for AGI</td></tr><tr><td>Grounded World Models – a breakthrough enabling understanding of physical or causal dynamics</td><td>88%</td></tr><tr><td>Continual Memories – a breakthrough enabling incremental, long-term learning without catastrophic forgetting</td><td>78%</td></tr><tr><td>Meta-learning – a breakthrough in meta-learning that enables GPT to autonomously determine and self-correct its long-term strategies and sub-goals</td><td>73%</td></tr><tr><td>Higher-order Abstractions – a breakthrough enabling higher-order abstraction, beyond linguistic prediction, to enable reflection and recursive reasoning</td><td>63%</td></tr><tr><td>Low-latency Interaction – a breakthrough to process non-linguistic multi-channel interactions at low latency or on the edge, including vision, audio, robotics, and real-world feedback</td><td>53%</td></tr></table>

Advances in computational capacity and bandwidth remain decisive constraints on more advanced adoption of AI services. Contemporary deployments typically operate within service-level throughput limits of roughly 500,000 tokens per minute for text generation and approximately 2 MB/s for image data, which restrict interactive applications to modalities that can be streamed within these bounds. As a result, commercially available video generation remains predominantly non-interactive and computationally intensive, while domains such as autonomous driving continue to depend on traditional methods because portable compute and low-latency infrastructure remain insufficient.
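The throughput bounds just described can be expressed as a quick feasibility check. The two ceilings below are the figures quoted in the text; the per-workload demands are hypothetical examples, not report data.

```python
# Illustrative feasibility check against the service-level throughput
# limits cited in the text. Workload figures below are hypothetical.

TOKEN_LIMIT_PER_MIN = 500_000   # text-generation ceiling (tokens/min)
IMAGE_LIMIT_BPS = 2 * 1024**2   # image-data ceiling (~2 MB/s)

def fits_text_budget(tokens_per_response: int, responses_per_min: int) -> bool:
    """True if aggregate token demand stays within the service limit."""
    return tokens_per_response * responses_per_min <= TOKEN_LIMIT_PER_MIN

def fits_image_budget(bytes_per_frame: int, frames_per_sec: float) -> bool:
    """True if the streamed image payload stays within ~2 MB/s."""
    return bytes_per_frame * frames_per_sec <= IMAGE_LIMIT_BPS

# A chat workload: 800-token replies to 200 users per minute fits.
print(fits_text_budget(800, 200))          # True (160k tokens/min)

# Interactive video at 30 fps with ~500 kB frames is far over budget,
# consistent with video generation remaining non-interactive today.
print(fits_image_budget(500 * 1024, 30))   # False (~15 MB/s)
```

The second check illustrates why streamable modalities (text, audio) remain interactive while video does not: the payload exceeds the bound by roughly an order of magnitude.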
Whether these constraints can be fully resolved is uncertain, yet continued progress in computational capacity is widely regarded as the central enabler for overcoming them and moving toward more general forms of intelligence.

# The AI & Sustainability Paradox

> "AI is becoming an enabler across every major sustainability objective, from lowering emissions and improving resource efficiency to strengthening the resilience of essential systems like food, water, and even how we live in cities. AI helps us optimize how we produce and consume energy, manage natural resources, and operate entire value chains more efficiently."
> – Dr. Lamya Fawwaz, Executive Director, Masdar

Sustainability, defined as meeting human needs today without compromising those of future generations, requires that technological progress remain within environmental limits. Scaling up AI will inevitably increase its energy footprint, creating a paradox: AI is at once a pressure on global net-zero goals and a powerful tool for leaner supply chains. Training GPT-3, a 175-billion-parameter model that is no longer considered state-of-the-art, consumed over 1 GWh of electricity. Contributors to the report believe AI is driving the expansion of cloud infrastructure, making the energy footprint of data centers even more critical. The IEA and the report's contributors project at least 30% year-on-year growth in electricity consumption by the world's AI-enabled servers, as a baseline scenario that assumes the current pace of AI growth is sustained.

Figure 1.2a Electricity Consumption for AI-enabled Servers

# 1.3%
AI-related share of global electricity use in 2030

Inference will become the primary driver of electricity demand once AI is deployed on a global scale. In a single year, inference can emit more than 25 times the emissions of total training.
Where forecasts nonetheless show training dominating total emissions, the implication is that training demand is growing even faster rather than slowing. Contributors expect this gap to continue through 2030 because models keep getting larger, pushing training needs to rise as fast as, or faster than, global inference use. AI's sustainability paradox is the conflict between its rising energy use and the need for its benefits to support climate and human development goals. AI can improve human welfare, but it may also contribute to global warming or resource depletion. It is humanity's mandate to align AI research, infrastructure, and governance so that AI delivers more positive impact than environmental cost. The next sections explain how to reduce the environmental footprint of AI through advances in computing architectures, energy systems, and policy design. Contributors to this report generally view model downsizing as the most effective approach over the next 5-10 years for easing compute and sustainability constraints, ahead of improvements in chip efficiency and further expansion of large-scale centralized AI cloud infrastructure.

Figure 1.2b Investment to Address Compute Challenge

# Moore's Law, or More Cloud?

Balancing AI's rapidly escalating compute demand against sustainability requires rethinking where computing happens and how efficiently it runs.
Specifically, there are two distinct pathways:

Figure 1.2c Optimization & Hyperscaling Goals

<table><tr><td>Optimization (OPEX-driven)</td><td>Hyperscaling (CAPEX-driven)</td></tr><tr><td>• Maximize operational efficiency through more efficient chips, models, architectures, and encapsulation • More efficient chips with near-memory computing, quantization, and specialized GPUs / ASICs • Smaller AI models through architectural efficiency (model downsizing) • Scale via more efficient units and smaller models</td><td>• Maximize capital efficiency through higher throughput and lower latency in centralized AI data centers • More massive data centers with massive GPU / TPU clusters, high-speed backbone connections, and centralized scheduling • Scale via more hardware and larger models</td></tr></table>

Moore's Law began as Gordon Moore's observation that the number of transistors on an integrated circuit would roughly double every two years, leading many to expect ongoing gains in performance and lower costs. That trend is now slowing at advanced nodes around 3 nm and below, where further scaling faces increasing physical and economic limits from quantum tunneling, power density, leakage currents, and the rising complexity and cost of extreme ultraviolet (EUV) lithography. As a result, scaling no longer reliably delivers proportional reductions in energy per operation or cost per unit of compute. Most performance gains now come from architectural specialization, such as tensor cores and advanced packaging, which increase system throughput but often add design complexity and can raise total system power use at scale. This More-than-Moore regime shift fundamentally reshapes the sustainability landscape of AI hardware: as single-device optimization approaches its physical and architectural limits, further scaling of AI performance becomes predominantly an engineering, energy, capital, and societal coordination challenge, pending the next breakthrough in fundamental science.
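To make the stakes of this regime shift concrete, a quick compound-growth projection using the roughly 30% year-on-year baseline cited earlier shows how fast demand outruns flat efficiency gains. The starting value is a purely illustrative assumption, not a figure from the report.

```python
# Compound-growth projection for AI-server electricity demand under
# the ~30% year-on-year baseline scenario cited in the text.
# The year-0 value of 100 TWh is an illustrative assumption only.

def project_demand(start_twh: float, growth: float, years: int) -> float:
    """Electricity demand after `years` of compound annual growth."""
    return start_twh * (1 + growth) ** years

start = 100.0  # TWh in year 0 (illustrative)
for year in range(0, 7, 2):
    print(year, round(project_demand(start, 0.30, year), 1))

# At 30% per year, demand roughly doubles every ~2.6 years and
# more than triples within five years (1.3**5 ≈ 3.7), which is why
# per-device optimization alone cannot close the gap.
```

This arithmetic is the crux of the "More Cloud" pathway's sustainability problem: exponential demand quickly swamps the incremental efficiency gains still available from hardware.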
# Smarter, Greener, but Not Bigger Models

Model downsizing in the More-than-Moore regime is one of the few ways to grow AI services without matching increases in energy use, carbon emissions, and material demand. Most real-world tasks, especially on edge devices, do not need general-purpose intelligence. They require focused, task-specific performance, so smaller models are often sufficient and more energy-efficient. Knowledge distillation, pruning, and quantization reduce parameter count, memory usage, and compute load, delivering similar performance at much lower computational cost.[17,18]

Figure 1.2d Key Methods for Model Downsizing

<table><tr><td></td><td>Knowledge Distillation</td><td>Pruning</td><td>Quantization</td></tr><tr><td>Technology</td><td>• Downsize to a smaller parameter count • Maintain accuracy via reasoning traces, intermediate layers, and other distillation techniques</td><td>• Downsize by removing redundant weights and channels • Maintain accuracy by preserving the model's effective expressive subspace</td><td>• Downsize by reducing precision from FP32 to INT8/FP4 • Maintain accuracy via calibration and quantization-aware training (QAT)</td></tr><tr><td>For Sustainability</td><td>Reduces compute, memory, and inference energy; enables on-device and edge deployment</td><td>Reduces FLOPs to enable lightweight edge inference</td><td>Reduces arithmetic energy and bandwidth to enable high-throughput, low-power execution</td></tr></table>

# 40-70%
Lower energy consumption from shifting to INT8/FP8 quantization

Empirical measurements show that INT8 or FP8 inference typically reduces energy use per inference by 40 to 70% compared with FP32 in compute-bound workloads, depending on the workload and memory behavior. Pruning can enable sub-watt neural inference on embedded platforms.
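The core idea behind the INT8 quantization discussed above can be shown in a minimal, stdlib-only sketch: map float weights onto a signed 8-bit grid with a single scale factor, shrinking storage 4x versus FP32. Real deployments use per-channel scales, calibration, or QAT; this toy version only illustrates the mechanism, and all names are hypothetical.

```python
# Minimal sketch of symmetric post-training INT8 quantization.
# One global scale factor; production systems use per-channel scales,
# calibration data, and quantization-aware training (QAT).

def quantize_int8(weights):
    """Map float weights onto the signed INT8 grid [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from INT8 codes."""
    return [qi * scale for qi in q]

weights = [0.42, -1.27, 0.05, 0.9, -0.33]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# Each code fits in one signed byte (4x smaller than FP32 storage)...
assert all(-127 <= qi <= 127 for qi in q)
# ...while reconstruction error stays within half a quantization step.
assert all(abs(a - w) <= scale / 2 + 1e-9 for a, w in zip(approx, weights))
```

The memory reduction is exact (one byte per weight instead of four); the 40-70% *energy* savings quoted above come from cheaper integer arithmetic and reduced memory traffic on real hardware, which this sketch does not model.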
[18,19] Because inference typically accounts for most lifecycle emissions of deployed AI systems globally, these per-inference savings add up to substantial total reductions in $\mathrm{CO}_{2}$ emissions.[19] Combined with edge deployment, compact models also reduce communication energy by replacing large raw-data transfers with smaller semantic outputs, delivering multiplicative energy savings across billions of devices. Model downsizing also lowers the carbon footprint of hardware, reduces cooling water demand, and slows upgrade cycles for high-end accelerators. As a result, software compression is now a primary sustainability factor in large-scale AI, alongside energy-efficient hardware design and hyperscale infrastructure investment. Recent advances in training efficiency show that improving training frameworks can reduce redundant computation and energy use, not just by compressing models for inference. Early stopping tracks loss curves and other convergence signals and ends runs that are unlikely to perform well, saving compute, energy, and emissions. In generative molecular modeling, early stopping can sometimes predict final performance using about $20\%$ of the planned compute. Data-centric methods, such as subset selection and active learning, also reduce waste by identifying redundant training examples. By selecting the most informative, diverse, or uncertain samples, models can often reach similar quality with less data and computation.[20,21,22] This less-is-more approach helps limit growth in training energy use and makes algorithmic and data-efficiency methods key tools for sustainable AI, alongside inference compression, hardware efficiency, and edge deployment. # Edge-Cloud Hybrid AI as the Future Michael Victor N. Alimurung, City Administrator of Quezon City, highlights that "you can't talk about AI if edge infrastructure doesn't exist – AI assumes your cameras and sensors are connected."
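The early-stopping idea described above reduces to a simple patience rule. This is a generic sketch; the function name, thresholds, and loss values are ours, not from the cited studies:

```python
def should_stop(val_losses: list[float], patience: int = 3, min_delta: float = 1e-3) -> bool:
    """Patience rule: stop once the best loss over the last `patience` evaluations
    no longer improves on the earlier best by at least `min_delta`."""
    if len(val_losses) <= patience:
        return False  # not enough history to judge convergence
    best_before = min(val_losses[:-patience])
    recent_best = min(val_losses[-patience:])
    return recent_best > best_before - min_delta

# A run that plateaus: large early gains, then three essentially flat evaluations.
plateaued = [1.00, 0.60, 0.45, 0.40, 0.400, 0.400, 0.401]
# A run still making steady progress.
still_improving = [1.00, 0.80, 0.60, 0.40, 0.30]
```

Every training step avoided after the plateau is compute, energy, and emissions saved, which is why even this crude rule matters at scale.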
In an edge-cloud hybrid system, each workload must be placed in the environment and architecture that fits its requirements. The report outlines four functional regimes in a four-quadrant model: reflexive, embedded, systemic, and applicable. It does not aim to fully categorize AI. It offers a practical way to classify AI-enabled systems based on two deployment constraints: latency tolerance and aggregate compute demand. Point positions indicate dominant operational regimes under typical deployment assumptions. Many AI-enabled systems span multiple regimes across different stages of their lifecycle, including training, inference, and orchestration. Deployment is often flexible and primarily shaped by cost, governance, or organizational constraints rather than real-time or scale requirements. This report classifies latency tolerance in terms of response coupling modes rather than absolute time thresholds: - Real-time denotes tightly coupled control loops where delayed responses invalidate system correctness. - Sub-second refers to interactions where small delays are tolerable, but perceptible latency degrades usability or performance. - Continuous systems allow delayed computation but require progressive, streaming outputs to maintain situational awareness or interaction flow. - Asynchronous systems decouple request and response entirely, allowing results to be retrieved after extended delays without impacting task execution. Figure 1.2e Main Patterns of AI Use Cases by Throughput and Latency Tolerance Source: WeCarbon Analysis # Hyperscale Sustainable AI Data Center Hyperscale facilities demonstrably outperform distributed computing in energy efficiency per unit of computation.
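The two-axis classification above (coupling mode by compute demand) can be sketched as a toy placement rule. The enum, function, and placement labels are ours, a simplification rather than the report's figure:

```python
from enum import Enum, auto

class Coupling(Enum):
    REAL_TIME = auto()     # tightly coupled control loop; delay breaks correctness
    SUB_SECOND = auto()    # interactive; perceptible latency degrades usability
    CONTINUOUS = auto()    # streaming outputs maintain situational awareness
    ASYNCHRONOUS = auto()  # request and response fully decoupled

def place_workload(coupling: Coupling, heavy_compute: bool) -> str:
    """Toy rule: latency-critical work stays near the edge;
    heavy, delay-tolerant work goes to centralized facilities."""
    latency_critical = coupling in (Coupling.REAL_TIME, Coupling.SUB_SECOND)
    if latency_critical and heavy_compute:
        return "edge-cloud hybrid"
    if latency_critical:
        return "edge"
    if heavy_compute:
        return "hyperscale cloud"
    return "regional cloud"
```

Real deployments overlay cost, governance, and data-sovereignty constraints on this, as the text notes, but the two axes capture the hard physical constraints.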
Best-in-class hyperscale data centers report site-level PUE values approaching 1.10 under favorable climatic and operational conditions, compared to a global industry average of 1.55, implying a $40\%$ reduction in non-compute energy overhead through centralized optimization.[23] Leading examples of AI-focused facilities colocated with renewable generation achieve 90–95% hourly carbon-free electricity matching in selected regions, significantly reducing Scope 2 emissions relative to carbon-intensive regional grids. Cooling innovations are similarly significant: liquid immersion and free-air cooling reduce cooling energy demand by 30–50% relative to conventional chiller-based systems.[24] Figure 1.3a Map of Selected Renewable-powered Hyperscale Data Centers Source: WeCarbon Analysis Legend: Public CFE Datastreams; Seawater Cooling Note: Energy icons indicate primary renewable energy sources associated with each site. Symbol sizes do not represent installed capacity, electricity output, or actual energy consumption. As AI adoption accelerates, computing demand and electricity use rise together, making energy efficiency a core constraint. SenseTime's "Compute Power and Electricity Coordination Platform" in Lingang, Shanghai, China, improves both compute utilization and power management at the SenseTime Lingang Intelligent Computing Center. # Compute Management An integrated training and inference architecture boosts utilization. The platform monitors total, real-time, and available compute plus training and inference workloads, enabling fine-grained, cross-region scheduling and energy-aware operating strategies. Off-peak workload shifting reduces idle waste and raises effective compute output per MW by $150\%$. In inference, it delivers a 4x increase in QPS at the same compute and electricity cost, with elastic, on-demand scaling to reduce large-scale inference cost.
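PUE (power usage effectiveness) is total facility energy divided by IT equipment energy, so non-compute overhead per unit of IT load is PUE minus 1. A minimal sketch of what the PUE values cited above imply; the variable names are ours, and which headline percentage results depends on the denominator chosen:

```python
def overhead_per_it_unit(pue: float) -> float:
    """Non-compute energy (cooling, power conversion, lighting) per unit of IT energy."""
    return pue - 1.0

best_in_class = 1.10  # leading hyperscale site-level PUE
industry_avg = 1.55   # global industry average PUE

# Relative cut in overhead per unit of IT load: from 0.55 down to 0.10.
overhead_cut = 1 - overhead_per_it_unit(best_in_class) / overhead_per_it_unit(industry_avg)
# Relative cut in total facility energy per unit of IT load: from 1.55 down to 1.10.
total_cut = 1 - best_in_class / industry_avg
```

The first ratio measures only the non-compute overhead; the second measures the whole facility's energy per unit of useful compute, which is usually the more decision-relevant figure for grid planning.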
# Power Management An energy large model monitors and predicts electricity use, real-time load, adjustable load, and PUE, and optimizes dispatch and efficiency. Built on SenseTime's large model and a partner energy-algorithm architecture, it predicts the next 15 minutes of power demand and generates optimal dispatch strategies automatically. Reported performance: $90\%$ to $95\%$ demand forecast accuracy and $95\%+$ decision accuracy. Results at Lingang AIDC show annual PUE below 1.28 and 3,000,000 kWh of electricity saved per year. "Compute infrastructure is not only the foundation of artificial intelligence; it is becoming a critical node in the energy transition and climate action. We are working to translate 'green sustainability, ethical governance, and inclusive empowerment' from industry aspirations into measurable, operational standards for next-generation infrastructure. In doing so, we are putting into practice our mission of staying committed to original innovation and enabling AI to lead human progress." # Dr. Xu Li, Chairman and Chief Executive Officer, SenseTime Beyond data centers, SenseTime's grounded world model, Kaiwu, provides an efficient, controllable synthetic data generation method to reduce dependence on real-world data in assisted driving and embodied intelligence, lowering energy use from physical-device training and reducing real-world intervention. Since 2022, SenseTime has completed energy-saving retrofits in its office building, cutting greenhouse gas ($\mathrm{CO}_{2}$) emissions by nearly 95 tons per year, and includes sustainability requirements in supplier evaluations, such as environmental management, hazardous substance control, labor rights protection, and employee training. When AI learns to manage its own electricity use carefully, we move closer to a greener digital future. Source: SenseTime # AI Datacenter as a Strategic Asset Hyperscale data center growth is highly concentrated and requires major capital.
Global investment is projected to reach USD 6.7 trillion by 2030, including about USD 5.2 trillion for AI-capable compute facilities. This continues to concentrate infrastructure in a small group of large firms and in markets with low-cost electricity and enough grid capacity.[23] Many hyperscale sites already use about 100 MW or more per facility. Several new projects are requesting 100 to 300 MW at the grid interconnection stage, adding significant new demand to regional power systems.[24] # USD 6.7 trillion Global investment in hyperscale data centers by 2030 If clean generation and transmission do not expand fast enough, electricity demand from hyperscale facilities could grow faster than decarbonized supply. This can push grids to rely more on existing fossil generation or delay plant retirements, increasing local carbon intensity and adding operational strain to power systems.[25] Increased centralization also concentrates computing capacity in a limited number of countries and firms. Where renewable buildout cannot keep up with AI-driven demand, the gap between compute growth and clean energy scaling increases pressure on host-region energy systems.[26] Governments and utilities therefore treat AI cloud infrastructure as strategic and are intervening in energy procurement, grid expansion, and interconnection rules to manage reliability and security risks.[27] Leadership in AI computing is widely seen as a strategic asset, prompting state-backed hyperscale cloud growth in the United States, the Gulf, and East Asia. The UAE's Stargate program in Abu Dhabi is a partnership involving G42 and leading global tech firms. It plans about 5 GW of AI data center capacity, starting with an initial phase of about 1 GW powered by nuclear, solar, and gas. The program positions AI infrastructure alongside power plants and industrial zones in national development planning.
Sameer Al Shethri, Vice President of the National Industrial Development Center, indicates that AI has become a primary driver and the heartbeat of industrial competitiveness and sustainability in the Kingdom of Saudi Arabia. China is taking a similar approach through its state-coordinated "East-to-West Computing" strategy. It links extensive AI use cases in East China with data center clusters in West China powered by large-scale wind, solar, and hydropower, integrating sustainable AI computing into long-term grid and regional planning. # Geographical, Power, and Financing Archetypes Geography and grid design strongly affect the carbon intensity of hyperscale data centers. Zhou Yiping, Founding Director of the United Nations Office for South-South Cooperation, notes that the snowballing growth of AI energy use is driving potential conflicts between AI development objectives and sustainability, creating hidden costs for society. Recent studies find that hyperscale data centers are becoming long-term, stable "super offtakers" of electricity, tying AI infrastructure closely to regional power systems.[28] Over $60\%$ of global hyperscale data center capacity, measured in commissioned IT power (MW), is concentrated in approximately 20 major metropolitan markets worldwide, making location a key decarbonization factor shaped by climate conditions, energy supply, climate risk, and power-market design.[29] Cooler climates reduce cooling demand, which partly explains why leading operators frequently select Nordic countries and northern North America. However, low-carbon electricity alone is insufficient. Even regions rich in hydro, wind, or solar can face pronounced seasonal variability, including winter wind fluctuations in Nordic systems or hydropower output linked to rainfall and snowmelt. As a result, energy storage, backup generation, and grid redundancy remain critical to ensuring an uninterrupted power supply. Climate risk further constrains siting decisions.
A 2025 global assessment of nearly 9,000 data centers reports widespread exposure to flooding, storms, wildfires, extreme heat, and sea-level rise.[30] Where flood protection, fire mitigation, and climate-adaptive design are inadequate, outages, repairs, and asset replacement can offset the benefits of cleaner electricity and favorable climate conditions. Sustainability objectives also interact with regulatory and operational constraints. Multinational operators must comply with data-sovereignty regimes such as the General Data Protection Regulation (GDPR), introducing trade-offs between minimizing carbon intensity and meeting legal, latency, and business requirements. In some cases, low-carbon regions are geographically distant from major user bases, increasing latency or straining network capacity, which can be particularly limiting for latency-sensitive AI and cloud workloads. Figure 1.3b Hyperscale AI Center Location Factors Source: WeCarbon Analysis AI data centers have a distinct electricity profile. Once built, they run at a steady, near-continuous load and can operate for decades. Their scale, scarcity, and stable demand make them attractive for renewable energy financing. For developers and lenders, long-term predictable electricity demand lowers revenue swings and demand risk, which improves project bankability and reduces the cost of capital for new renewable generation.[31] Mohammed Abdul Mujeeb Khan, Project Manager at Clean Rivers, also added that, to mitigate AI's carbon footprint, regulatory frameworks should require full-life-cycle emissions reporting for AI projects and enforce the use of renewable-powered data centers. Peng Yucheng, Chief Executive Officer of Midas Innovation Group, notes that AI is shifting from a cost center to a responsibility center and that its carbon footprint is no longer a hidden cost. AI data centers are also becoming anchor customers for clean energy projects.
These structures strengthen the case for large solar and wind projects and align digital infrastructure growth with energy system expansion. In practice, this accelerates financing and creates a feedback loop in which reliable, clean power supports AI deployment and AI demand helps expand renewable energy systems.[32] CASE STUDY 2 # Power-to-Compute at Hyperscale Chindata Group is a leading carrier-neutral hyperscale data center solutions provider and a pioneer in next-generation AI-ready infrastructure across China. Guided by its mission to "Efficiently Convert Electrical Power Into Computing Power", the company plans, designs, builds, and operates hyperscale data center clusters located in strategically important computing hubs, including major nodes in the Northern China region under the national "East-to-West Computing" initiative. Chindata's leadership is reflected in global recognitions such as the LEED Building Design and Construction (BD+C) Platinum certification for Huailai Headquarters Park Building One D, the only data center project in China to earn Platinum in 2025. Chindata's coherent infrastructure design archetype targets the increasing density, efficiency, and lifecycle performance requirements of AI workloads, combining modular construction, simplified power design, hybrid cooling, and intelligent operations. - Modular prefabrication allows delivery of a 36 MW hyperscale project in approximately 6 months, $50\%$–$75\%$ shorter than traditional cycles. - Their "X-Power" system supports a high-reliability, extensive range of workloads, from 12 kW per rack for edge inference applications to 150 kW per rack for hyperscale GPU clusters used in AI training. It achieves this through 800 V high-voltage direct current architecture, multilevel energy storage, and higher voltage grades that increase power delivery efficiency and help alleviate power bottlenecks associated with AI cluster deployment.
- Their "X-Cooling" solution integrates air cooling, cold-plate liquid cooling, and immersion liquid cooling into a unified system capable of achieving PUE levels between 1.12 and 1.14 during live operation, validated across diverse environments. The water-free and wastewater recovery features have saved an estimated 250,000 metric tons of freshwater and reused as much as $60\%$ of total water in the cooling systems. "AI has become a powerful engine to reshape the world. Our responsibility is not only to support its growth but also to ensure that this growth is clean, efficient, and sustainable. Chindata integrates AI technologies across the full lifecycle of our businesses to continuously reduce energy consumption, lower carbon emissions, and advance long-term sustainable development." # Nick