# AI for Good Impact Report

2nd edition 2025

# Disclaimer

The designations employed and the presentation of the material in this publication do not imply the expression of any opinion whatsoever on the part of the International Telecommunication Union (ITU) or of the ITU secretariat concerning the legal status of any country, territory, city, or area or of its authorities, or concerning the delimitation of its frontiers or boundaries.

The mention of specific companies or of certain manufacturers' products does not imply that they are endorsed or recommended by ITU in preference to others of a similar nature that are not mentioned. Errors and omissions excepted; the names of proprietary products are distinguished by initial capital letters.

All reasonable precautions have been taken by ITU to verify the information contained in this publication. However, the published material is being distributed without warranty of any kind, either expressed or implied. The responsibility for the interpretation and use of the material lies with the reader. The opinions, findings and conclusions expressed in this publication do not necessarily reflect the views of ITU or its membership.

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited (DTTL), its global network of member firms, and their related entities (collectively, the "Deloitte organization"). DTTL (also referred to as "Deloitte Global") and each of its member firms and related entities are legally separate and independent entities, which cannot obligate or bind each other in respect of third parties. DTTL and each DTTL member firm and related entity is liable only for its own acts and omissions, and not those of each other. DTTL does not provide services to clients. Please see www.deloitte.com/about to learn more.
This communication contains general information only, and none of Deloitte Touche Tohmatsu Limited (DTTL), its global network of member firms or their related entities (collectively, the "Deloitte organization") is, by means of this communication, rendering professional advice or services. Before making any decision or taking any action that may affect your finances or your business, you should consult a qualified professional adviser. No representations, warranties, or undertakings (express or implied) are given as to the accuracy or completeness of the information in this communication, and none of DTTL, its member firms, related entities, employees or agents shall be liable or responsible for any loss or damage whatsoever arising directly or indirectly in connection with any person relying on this communication. DTTL and each of its member firms, and their related entities, are legally separate and independent entities.

# ISBN

978-92-61-42161-8 (PDF version)
978-92-61-42171-7 (Electronic version)

Please consider the environment before printing this report.

© ITU 2026

Some rights reserved. This work is licensed to the public through a Creative Commons Attribution-NonCommercial-Share Alike 3.0 IGO licence (CC BY-NC-SA 3.0 IGO). Under the terms of this licence, you may copy, redistribute and adapt the work for non-commercial purposes, provided the work is appropriately cited. In any use of this work, there should be no suggestion that ITU endorses any specific organization, products or services. The unauthorized use of the ITU names or logos is not permitted. If you adapt the work, then you must license your work under the same or equivalent Creative Commons licence. If you create a translation of this work, you should add the following disclaimer along with the suggested citation: "This translation was not created by the International Telecommunication Union (ITU). ITU is not responsible for the content or accuracy of this translation.
The original English edition shall be the binding and authentic edition". For more information, please visit https://creativecommons.org/licenses/by-nc-sa/3.0/igo/

# Foreword by ITU

We are living through defining times for humanity's relationship with technology. Artificial intelligence is advancing at an ever-accelerating pace, reshaping how we learn, work, deliver healthcare, manage natural resources, and respond to global challenges. While increasingly powerful and autonomous technologies create a wealth of opportunity to drive global prosperity, they also continue to introduce new and complex risks.

The findings of this year's AI for Good Impact Report show these dynamics becoming even more pronounced. Generative AI has entered the mainstream, and emerging agentic AI systems are poised to transform entire industries. Research into artificial general intelligence and quantum technologies points to future capabilities that could profoundly affect societies, economies, and governance systems worldwide.

AI is already helping to deliver meaningful benefits in areas from personalized learning and skills development to better healthcare diagnostics, early warning systems, and climate adaptation tools. The real-world applications outlined in this report demonstrate that, when guided by shared values, AI can be a powerful force for good.

The challenges, however, are growing in scale and urgency. Autonomous systems raise new questions around accountability and human oversight. Bias and misinformation remain key risks. Changing labour markets call for investment in new skills. And the energy and resource demands of AI infrastructure keep rising. Governments around the world are taking diverse regulatory and governance approaches, from comprehensive risk-based frameworks to flexible, innovation-oriented models.
The United Nations system, supported by ITU's AI for Good initiative, is working to strengthen international cooperation on AI development anchored in human rights and the public interest. This AI for Good Impact Report shares the latest insights on emerging technologies, governance trends, and AI use cases across education, healthcare, environmental sustainability, infrastructure, and agriculture. It highlights how ethical, inclusive AI can help accelerate progress toward the future we want, while identifying actions needed to manage risks and close digital divides. It can help guide informed decision-making and collective action to ensure that responsible AI governance and AI innovation remain mutually reinforcing goals.

Doreen Bogdan-Martin
Secretary-General, International Telecommunication Union

# Foreword by Deloitte

Organizations today are discovering how they can use AI to drive innovation and improve productivity, but the potential of AI is measured not only in business value. More significantly, AI can contribute to the benefit, wellbeing, and betterment of communities around the world. We can imagine a world where AI enables high-quality educational instruction regardless of geography or income. A world where communities enjoy confidence in their access to basic necessities, like food and water, as AI tools change how we address food security and resource management. We can imagine healthier communities, aided by applications and medical research that provide health services and breakthrough treatments, irrespective of means or economic status. And we can imagine powerful approaches to improving environmental sustainability that positively impact livelihoods on a grand scale.

This report explores the trajectory of AI across industries and sectors. It also considers the significant impact AI has on workforces and the importance of helping people acquire the skills and knowledge to thrive in an AI-fuelled world.
Indeed, if AI is to be a boon to humanity, the needs and concerns of people across many walks of life should be reflected in the tools and systems transforming the world around them. The guiding light for the path ahead is a philosophy of human centricity. People should be at the heart of how we approach this world-changing technology. The public sector is a valuable collaborator as society moves to bolster its collective development and advancement through leveraging AI.

This is a pivotal moment. The brightest future will likely result when AI is inherently collaborative, wherein the public, private, and civil sectors work together to mitigate challenges while amplifying the good AI can enable. With a focus on people, trust, and value to society, we can shape a future with AI that meets and even exceeds our greatest ambitions.

Beena Ammanath
Executive Director, Global Deloitte AI Institute

# Table of contents

- Foreword by ITU
- Foreword by Deloitte
- Executive summary
- Introduction
- The present state and potential future trajectory of AI
  - From Generative AI to autonomous agents
  - Agentic AI
  - Artificial general intelligence (AGI)
  - Sovereign AI
  - Quantum AI
- 2025 AI regulatory landscape
  - Recent developments in global AI governance
  - The EU AI Act
  - AI regulatory developments at national level
    - Europe
    - Asia-Pacific (APAC)
    - Africa and the African Union AI Continental Strategy
    - Americas
    - Middle East
- Navigating AI's challenges
  - Building ethical and trustworthy AI
    - Ethical considerations are central to AI development and deployment
    - Design principles and ethical safeguards
    - Responses from policymakers, governments and industry
  - Shaping an inclusive AI economy
    - Understanding sources of bias and implementing responsible mitigation strategies
    - Bridging the AI divide by ensuring connectivity and fair access to AI resources
    - Responses from policymakers, governments and industry
  - Safeguarding privacy in the age of AI
    - The need for privacy protection and effective data governance
    - Technical safeguards and privacy-preserving AI
    - Responses from policymakers, governments and industry
  - Building an AI-ready workforce
    - AI's impact on jobs and skills requires navigating workforce evolution
    - AI complements human labour by enhancing efficiency and creating new opportunities
    - Responses from policymakers, governments and industry
  - Environmental sustainability in AI
    - AI and energy: Navigating the increase in data centre electricity consumption
    - Advancing sustainable AI infrastructure through sustainable data centres
    - Responses from policymakers, governments and industry
- How AI is being used to tackle global challenges
  - Education and skills development
    - Bridging educational opportunities and labour market shifts through AI
    - AI-driven personalized learning through intelligent tutoring systems and adaptive learning platforms
    - AI-powered assessment and analytics
    - Connectivity as a foundation for digital inclusion and access to education
    - Key considerations for stakeholders
  - Health and health care
    - Diagnostics and early detection
    - Drug discovery and development
    - Conversational AI and virtual health assistants
    - Key considerations for stakeholders
  - Environmental sustainability
    - AI as a tool for climate adaptation and sustainable development
    - AI for biodiversity monitoring and conservation
    - AI-driven innovation for a sustainable and efficient energy future
    - Enhancing climate resilience through AI
    - Key considerations for stakeholders
  - Infrastructure and smart cities
    - The role of AI in shaping future cities
    - Transforming urban planning through AI-enabled digital twins
    - Optimizing urban mobility with AI technologies
    - Infrastructure resilience for increased public safety and security
    - Key considerations for stakeholders
  - Food security and agriculture
    - The emergence of AI as a transformative technology in enhancing food security and agricultural productivity
    - AI for precision agriculture and resource efficiency
    - AI-driven solutions for livestock health and productivity
    - Transformative innovations shaping sustainable agrifood systems
    - Key considerations for stakeholders
- Conclusion
- Glossary
- Authors and contributors from Deloitte

# Executive summary

Artificial Intelligence (AI) is transforming the global landscape, influencing how societies learn, work, deliver health care, manage resources, and address environmental challenges. The AI for Good Impact Report 2025 provides an overview of AI's current state, potential future trajectory, regulatory environment, and its application across key sectors. This summary distils the report's key insights, aiming to offer readers a clear understanding of AI's opportunities, risks, and considerations.

# Current AI landscape and emerging technologies

The adoption of AI continues to accelerate, driven notably by Generative AI (GenAI), which has revolutionized a wide variety of technologies from content creation to automation. The evolution from GenAI to Agentic AI marks a significant shift: Agentic AI systems can act autonomously, making decisions and learning without human intervention. Looking ahead, increasingly capable AI agents are expected to operate not only individually but also in coordinated networks, forming ecosystems for resource sharing, information exchange, and even dedicated marketplaces.
These intelligent agents are reshaping workflows across industries ranging from health care and finance to manufacturing and utilities, demonstrating enhanced efficiency but also raising ethical and workforce concerns.

Artificial General Intelligence (AGI), representing AI with human-like cognitive abilities, remains a theoretical goal but is being pursued by leading technology firms. Its potential emergence within the next decade could profoundly impact society, necessitating early policy and strategic preparation.

Sovereign AI initiatives reflect a growing global emphasis on technological autonomy. Some countries are investing in domestic AI infrastructure and capabilities to help reduce dependence on foreign technologies, safeguard national security, and tailor AI applications to local contexts.

Quantum AI, though still in research phases, may lead to transformative advances by leveraging quantum computing to solve complex problems beyond classical capabilities. International efforts, such as the UN's International Year of Quantum Science and Technology 2025, highlight the importance of responsible and inclusive development in this frontier.

# Regulatory developments and governance

The proliferation of AI has increased the need for governance frameworks that balance innovation with risk mitigation. The European Union's AI Act, effective since August 2024, currently stands as the most comprehensive regulation, establishing risk-based classifications, transparency requirements, and prohibitions on harmful AI practices. Globally, diverse regulatory approaches are emerging. Japan has adopted a soft law model prioritizing innovation, while South Korea and China have implemented foundational AI laws and standards; South Korea emphasizes industry promotion alongside establishing a foundation of trust and security, while China focuses on trustworthiness and security.
In Africa, progress is being made through the adoption of frameworks and policies by individual countries, and through its Continental AI Strategy the African Union aims to guide member states toward inclusive, ethical AI development, though regulatory maturity varies. In the United States, federal efforts seek to emphasize innovation through voluntary standards, while some states are introducing more prescriptive measures focused on transparency and safety. Overall, the Americas and Middle East show a mix of voluntary frameworks and targeted legislation, reflecting regional priorities.

Internationally, the United Nations has established new mechanisms, including a High-Level Advisory Body on AI and an Independent International Scientific Panel, to foster global cooperation and develop inclusive governance aligned with human rights and sustainable development goals. The Global Digital Compact aims to create a universal framework promoting an open, secure, and human-centric digital future.

# Addressing AI's challenges

Ethical and trustworthy AI is paramount to maintaining public confidence and helping ensure inclusive benefits. Key concerns include bias, misinformation, loss of human control, and emergent behaviours in autonomous systems. Robust design principles, human-in-the-loop oversight, adversarial testing, and adherence to established and emerging standards are important for mitigating risks.

AI's inclusivity challenges stem from data bias, unequal access to infrastructure, and digital divides that disproportionately affect underrepresented communities and developing regions. Efforts to bridge these gaps include promoting diverse datasets, fostering AI literacy, and expanding connectivity and computing resources globally.

Privacy protection is increasingly complex as AI systems process vast sensitive datasets. Techniques such as differential privacy, federated learning, and privacy-by-design principles are important for safeguarding personal data.
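To make one of these techniques concrete: the core idea of differential privacy is that an aggregate statistic is released with calibrated random noise, so that the presence or absence of any single individual barely changes the output. The sketch below (illustrative only; the function names `laplace_noise` and `private_count` are our own, not from any particular library) applies the classic Laplace mechanism to a count query, whose sensitivity is 1, giving noise of scale 1/ε.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float, rng=None) -> float:
    """Epsilon-differentially-private count.

    A count query changes by at most 1 when one record is added or
    removed (sensitivity 1), so Laplace noise with scale 1/epsilon
    suffices for epsilon-DP.
    """
    rng = rng or random.Random()
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Hypothetical toy dataset: release "how many people are over 40"
# without exposing any individual's age.
ages = [34, 29, 41, 56, 23, 38, 62, 45]
noisy = private_count(ages, lambda a: a > 40, epsilon=0.5, rng=random.Random(7))
```

A smaller ε adds more noise and gives stronger privacy at the cost of accuracy; production systems rely on vetted libraries rather than hand-rolled samplers, and combine such mechanisms with the governance measures discussed here.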
Regulatory frameworks such as the EU's GDPR, in addition to other existing and emerging national laws, provide legal foundations for data governance.

AI is reshaping labour markets by evolving job roles and skill requirements. The World Economic Forum projects job transitions and new opportunities by 2030, with a growing demand for AI literacy and technical skills. Governments and organizations worldwide are investing in upskilling and reskilling initiatives, targeting affected groups and promoting inclusive access to AI education.

AI's energy demands, particularly from data centres, can pose sustainability challenges. Data centres already account for a significant share of global electricity consumption, with projections indicating a doubling of demand by 2030. Regional disparities in energy and water use, especially in water-stressed regions, highlight the need for tailored solutions. Sustainable data centres powered by renewable energy, energy-efficient algorithms, and innovative infrastructure models such as offshore floating data centres are emerging to help address these concerns. International collaborations and initiatives like the International Telecommunication Union's Green Digital Action promote transparency and sustainable AI deployment.

# AI applications tackling global challenges

Education: AI can enable personalized learning, intelligent tutoring systems, and enhanced assessment analytics, improving access and outcomes, particularly in underserved regions. Initiatives like the ITU's AI Skills Coalition focus on closing the global AI skills gap inclusively.

Health care: AI supports diagnostics, early detection, drug discovery, and virtual health assistants, improving care quality and accessibility. Examples include AI tools reducing stroke treatment times in the UK and AI-powered platforms addressing maternal health in Africa.

Environment: AI aids environmental action through emissions reduction, disaster preparedness, biodiversity monitoring, and energy optimization.
It supports vulnerable regions by enhancing early warning systems and enabling data-driven adaptation strategies.

Infrastructure and smart cities: AI-driven digital twins, traffic optimization, and disaster resilience tools enhance urban management, safety, and sustainability. Cities worldwide are increasingly adopting AI to improve quality of life and operational efficiency.

Food security and agriculture: AI advances precision farming, livestock management, and supply chain transparency, addressing food insecurity and promoting sustainable practices. Digital public goods and open data platforms democratize access to agricultural intelligence.

AI holds transformative potential to help address pressing global challenges across multiple sectors. Realizing these benefits requires coordinated efforts to foster ethical, inclusive, and sustainable AI development. Public and private sectors and civil society should collaborate to bridge digital divides, protect privacy, and invest in workforce readiness. By aligning innovation with human rights and environmental stewardship, AI can become a powerful catalyst for equitable and resilient development worldwide.

# Introduction

Artificial Intelligence (AI) is one of the most powerful technologies shaping our world today. It is changing how people learn, work, receive health care, and grow food, and it can help protect the environment. This report aims to provide a balanced understanding of how AI can be used to address global challenges. The focus is on real-world applications, opportunities, and risks, with an emphasis on responsible and inclusive use of AI.

This report does not go into technical details. Instead, it provides an overview of current AI trends, examples of how AI is being applied in different regions and industries, and insights into what the future might hold. It highlights both opportunities and challenges, including economic, ethical, and environmental considerations.
By covering education, the environment, health care, infrastructure, and agriculture, the report shows how AI can be applied across important areas of development and public policy. This report is intended for a varied audience involved in shaping and responding to the development of AI. It serves as a guide to understanding the current state of AI and its likely developments over the coming years. It provides an overview of some of the key opportunities and risks associated with AI across different sectors. Additionally, it offers a framework to help shape policies and strategies and includes a non-exhaustive glossary with commonly used terms. Designed to be practical and easy to read, it delivers quick takeaways for decision-makers as well as more detailed examples for those seeking deeper insights.

# The present state and potential future trajectory of AI

# From Generative AI to autonomous agents

In 2025, the adoption of AI continues to gain momentum as organizations build the necessary structures and processes to extract meaningful value from its tools and technologies. Although this new era is only just beginning, its impact is already evident: startups and forward-thinking companies are working to adopt AI-focused business models, redefining conventional practices, and swiftly gaining market presence.[1]

A survey by the Data & AI Leadership Exchange in partnership with DataIQ highlights that we are experiencing a transformational moment comparable to the internet's emergence in the 1990s. Although most organizations (76.2%) have been using earlier AI forms such as machine learning for more than three years, it is the arrival of Generative AI (GenAI) that has accelerated AI adoption and use dramatically.
Corporate investments in AI and data are also rising sharply, with 98.4% of organizations reporting increased investments in 2024, up from 82.2% the previous year.[2]

Over the past year, organizations have gained valuable experience with GenAI, leading to a deeper awareness of both the opportunities and challenges involved in scaling the technology. This has prompted many to revise their strategies and recalibrate their expectations. With investment in AI continuing to increase, the importance of a disciplined, methodical approach has also grown. While technical capabilities have strengthened, uncertainties around regulation and risk management have increased. Implications for workforce and talent continue to matter as AI drives a shift in skill requirements. Throughout this period, one priority has remained unchanged: the ongoing focus on improving data management, even among organizations that are already highly data-centric.[3]

As AI reshapes organizational needs, larger companies report greater hiring activity for AI-related roles, particularly AI data scientists, machine learning engineers, and data engineers, positions that remain challenging to fill.[4] Reflecting this shift, responsible AI, which includes ethics, governance, and risk mitigation, is becoming a top priority, with organizations focusing on establishing safeguards to help ensure ethical AI use.[5] Correspondingly, an increasing number of organizations are appointing Chief AI Officers as leadership roles evolve to oversee AI and data strategies.[6] To help organizations minimize risks and maximize the potential of GenAI in a safe and secure manner, Deloitte US has developed its Trustworthy AI™ framework.
Through the application of controls, guardrails, and training, organizations can be equipped to implement new technology in a secure, compliant, and responsible way.[7] GenAI has changed how organizations create and automate content, driving unprecedented efficiency and innovation. However, the transition from GenAI to Agentic AI, which will be discussed in the next section, represents a significant leap.

# Agentic AI

Since 2024, AI has seen significant advancements, particularly with the emergence of Agentic AI. Unlike traditional GenAI tools that mainly answer questions, Agentic AI can perform actions and transform business processes by working alongside human employees as digital workers. Although still developing, this capability can reshape workflows across industries. Multimodal AI models, for instance Google's Gemini suite, can generate and understand images, audio, and video, handling multiple types of input and output. This progress is helping to move AI closer to human-like perception and interaction.

Traditionally, business software digitalized existing tasks without altering underlying roles. Agentic AI disrupts this model by proactively suggesting directions, filling gaps, and adapting to context without waiting for explicit instructions.[9]

One application of Agentic AI is autonomous drones capable of navigating complex, dynamic environments independently. These drones can identify relevant information and relay it to rescue teams in disaster zones, monitor crop health and detect pest- or drought-affected areas in agriculture, and enhance logistics by delivering packages safely and efficiently through the air.[10] Another development is real-time, contextual multimodal AI assistants wearable as smart glasses.[11] Similarly, Agentic AI is driving digital transformation in health care.
Diagnostic agents analyze vast amounts of medical and patient data to predict diseases more accurately while optimizing the use of medical resources.[12] In business, AI agents are increasingly deployed across sectors such as finance, banking, supply chain, and the public sector to autonomously perform high-value tasks.[13]

These examples show that Agentic AI is already reshaping labour markets and societal structures, with impacts seen across industries such as automotive, health care, and robotics. As part of the Fourth Industrial Revolution, it enables machines to replicate human behaviours and tackle complex challenges by integrating AI, robotics, and data. This transformation may deliver significant benefits and improved work efficiency but also raises ethical, workforce, and security concerns.[14]

Despite its growing importance, the impact of Agentic AI on employment, social structures, and ethics often receives insufficient attention. These considerations are important as Agentic AI continues to evolve and integrate into society.[15]

Gartner forecasts a dramatic shift in enterprise software, predicting that by 2028, 33% of applications will incorporate Agentic AI, a sharp rise from less than 1% in 2024. This evolution could enable 15% of routine work decisions to be made autonomously, signalling a move toward systems that not only assist but act independently within business environments.[16]

Looking ahead, we can expect major leaps in automation and innovation driven by increasingly capable AI agents operating both individually and in coordinated networks.
These agents will become more precise, intelligent, and widely available, potentially forming their own ecosystems for collaboration, resource sharing, and information exchange, including dedicated marketplaces and communication protocols.[17] In payments and e-commerce, this means AI agents can research products, initiate purchases, optimize checkout flows, and even manage virtual spending through tools like Stripe's agent-ready checkout and virtual card systems. Companies like PayPal and eBay are already piloting Agentic AI to enhance user experience and automate consumer interactions, while Klarna and Remitly have demonstrated measurable gains in customer service efficiency using GenAI.[18]

Public perception of AI has grown more cautious. Concerns focus on the environmental impact of AI, especially the high energy consumption of data centres, which could drive up energy costs for consumers and strain resources. There are also worries about the impact of automation and the fast pace of AI development, which may challenge society's ability to adapt and reskill the labour market. Similarly, scepticism remains about whether AI's benefits will be shared fairly. Despite these concerns, AI holds potential to expand access to services such as education and financial advice and to encourage investment in sustainable energy. The next sections will examine how AI has the potential to evolve over the next three to five years, highlighting areas for stakeholders to consider.

# Artificial general intelligence (AGI)

AGI is a theoretical area of AI research focused on creating software that exhibits human-like intelligence and the capacity for self-directed learning. Unlike current AI technologies, which operate within predefined parameters, such as image recognition models that cannot perform unrelated tasks like website building, AGI aims to develop systems capable of autonomous self-control, self-awareness, and the ability to acquire new skills independently.
This would enable AGI to solve complex problems in unfamiliar contexts without prior training. However, AGI with human-level abilities remains a theoretical concept and an ongoing research goal.[19] Leading technology companies are investing billions and actively advancing AI capabilities with the ambition of achieving AGI or even superintelligence.[20] Some experts propose a narrower definition of AGI as achieving human-level performance across most economically relevant digital tasks, suggesting this milestone could be reached within five years. Others view AGI as a moving target, with the race toward its development likely to continue for many years.[21]

The exact timing of AGI's arrival remains uncertain, with predictions ranging from five to ten years. Nonetheless, its eventual emergence is expected to impact every facet of life, business, and society. Corporate executives and policymakers are encouraged to begin understanding the trajectory toward machines attaining human-level intelligence and to prepare for the transition to a more automated world.[22] As definitions of AGI evolve alongside its capabilities, the emergence of superintelligence could bring profound changes to security, privacy, and societal norms.

# Sovereign AI

Countries are increasingly recognizing the complex nature of GenAI, acknowledging both its potential and inherent risks, as well as its implications for economic growth and national security. In response, some nations are actively developing their own AI infrastructure, capabilities, and industries to enhance competitiveness and safeguard their futures. This effort is often framed as building 'Sovereign AI'.[23] Sovereign AI aims to reduce dependence on foreign AI technologies by developing domestic capabilities and securing access to critical data, technologies, knowledge, and infrastructure within national borders.
This approach helps protect countries from potential supply chain disruptions and strengthens their national sovereignty. Consequently, states seeking technological autonomy may increasingly view sovereign AI as a strategic path forward.[24] [25] By aligning AI systems with national digital public infrastructure (DPI), such as digital ID, payments, and data exchanges, countries can help ensure that AI serves public interest, enhances service delivery, and reflects local values. This integration can allow AI models to be trained on locally relevant, high-quality datasets, such as multilingual corpora, or the same text in more than one language, and sector-specific data like crop yields or weather patterns, supporting context-aware applications. In India, for example, farmers receive AI-powered advisories on crop insurance and government programmes via voice-based interfaces in local languages, while in Bangladesh, AI tools translate court judgments to improve public access to legal information. Behind the scenes, AI strengthens fraud detection in digital finance and enables biometric verification in national ID systems, reinforcing trust and efficiency within public infrastructure.[26] In Switzerland, the 2025 launch of Apertus marks a significant step toward publicly governed AI infrastructure, offering open models and compute access tailored to multilingual needs such as Swiss German, Romansh, and others.[27] The United States advances sovereign AI through expanded public infrastructure like the National AI Research Resource (NAIRR), aiming to secure domestic innovation and strategic autonomy across critical sectors.[28] Meanwhile, the United Arab Emirates (UAE) has developed initiatives like Falcon LLM, an open-source language model developed at the Technology Innovation Institute (TII),[29] and the AI Campus in Abu Dhabi. 
The country is also forging strategic international collaboration, most notably a 2025 agreement with the United States to establish the "US-UAE AI Acceleration Partnership" based on a set of joint commitments.[30] India is making a concerted effort to overcome challenges such as linguistic diversity and underinvestment in AI research. Despite being a global technology hub, India has lagged behind other countries in homegrown AI innovation due to limited research and development (R&D) funding, fragmented data, and a services-oriented tech ecosystem.[31] The more recent launch of generative pre-trained transformer (GPT) models acted as a catalyst, prompting India's Ministry of Electronics and Information Technology (MeitY) to mobilize resources rapidly, including access to nearly 34,000 graphics processing units (GPUs), and to solicit proposals for foundation models tailored to Indian languages and needs.[32] Initiatives like Sarvam AI's large-scale multilingual models and innovative tokenization techniques address India's linguistic complexities, aiming to reduce the 'language tax' and develop AI that serves its varied population. Supported by the US$1.25 billion IndiaAI Mission, this sovereign AI push combines public funding, private sector engagement, and emerging research programmes to build AI infrastructure, foster innovation, and extend AI benefits to sectors such as education, agriculture, and healthcare.[33]

# UK's Sovereign AI Unit

The UK's Department for Science, Innovation and Technology (DSIT) has established the Sovereign AI Unit to develop and leverage the nation's AI capabilities, aiming to drive economic growth and strengthen national security. The unit collaborates closely with the Prime Minister's Adviser on AI to deliver its mandate. Announced in the AI Opportunities Action Plan prepared by Matt Clifford, the Sovereign AI Unit is backed by up to £500 million in funding.
Its objectives include investing in UK companies to support the growth of AI national champions. Working alongside Innovate UK and the British Business Bank, the unit aims to help high-potential startups launch and scale within the UK. Additionally, the unit focuses on creating and enhancing UK AI assets and enablers, such as data infrastructure, computing resources, and talent development. It also seeks to position the UK as the preferred collaborator for frontier AI companies, ensuring that both public and private sectors have reliable access to, and influence over, cutting-edge technologies. This approach is designed to ensure that the benefits of transformative AI reach communities across the country.[34] The broader global and economic context will likely influence AI's impact and shape how societies benefit from, or are challenged by, AI. A new form of digital competition, a so-called 'space race' for sovereign AI, is underway.[35]

# Quantum AI

Though still years from practical deployment, quantum artificial intelligence (quantum AI) represents a long-term frontier with transformative potential. Quantum AI involves using quantum technologies to run AI systems. Given that AI models demand significant computational power and infrastructure to operate efficiently, quantum AI seeks to replace traditional AI infrastructure with quantum computing resources, enabling faster and more cost-effective data processing.[36]

# What is quantum technology?

Quantum technology derives from the principles of quantum mechanics, which govern the behaviour of subatomic particles. These principles were established in the 1920s through the contributions of physicists such as Niels Bohr, Werner Heisenberg, and Erwin Schrödinger. Although the term may seem modern, quantum technology has been around for some time.
It played a key role in the creation of nuclear power and remains essential to the semiconductors used in mobile phones and numerous other electronic devices.[37] A key advantage of quantum AI lies in its ability to solve problems that are exponentially difficult or nearly impossible for classical computers. This includes optimization challenges important to fields such as logistics, finance, and materials science. Additionally, quantum AI holds promise for simulating complex systems like chemical reactions and protein structures, areas where AI has already made significant strides, as seen in the 2024 Nobel Prize for protein structure prediction.[38] Other potential applications encompass quantum machine learning, quantum simulations, and the development of new materials. Although significant technical hurdles remain, specialists are optimistic that advancements in hardware and software could enable solutions to problems currently deemed unsolvable.[39] Despite these exciting prospects, quantum AI remains largely in the research phase, with most AI workloads still relying on conventional computing resources for operation.[40] While quantum technologies offer transformative potential across medicine, environment, trade, and more, they also introduce risks, particularly in cybersecurity.[41]

# Quantum for Good

Quantum technology is poised to transform our world, but its development should be inclusive, ethical, and sustainable. The initiative Quantum for Good is spearheading responsible innovation, fostering global collaboration, and promoting the creation of inclusive standards to ensure that quantum technology delivers tangible benefits.[42] To recognize the significance of quantum science and raise awareness of its historical and future impact, numerous national scientific societies have united to support marking 100 years since the discovery of quantum mechanics with a United Nations-declared international year.
On 7 June 2024, the United Nations officially proclaimed 2025 as the International Year of Quantum Science and Technology (IYQ). As part of this, ITU and its collaborators are working to harness quantum's potential to accelerate progress in critical areas such as the environment, healthcare, cybersecurity, and digital inclusion.[43] In response to these rapid developments, the next section will examine key regulatory changes over the past 12 months, highlighting how governments worldwide are adapting legal frameworks to help manage AI's risks and opportunities.

# 2025 AI regulatory landscape

This chapter explores the evolving international regulatory landscape, highlighting initiatives including the UN's High-Level Advisory Body on AI and the Global Digital Compact. These developments were reinforced during the UN General Assembly's high-level meetings in September 2025, where Member States and global leaders debated the ethical use of AI in military contexts, the governance of autonomous systems, and the urgent need to safeguard information integrity.[44] Alongside this, the chapter discusses the European Union's AI Act. It further examines national regulatory developments across key jurisdictions, illustrating varied approaches to AI oversight.

# Recent developments in global AI governance

AI systems present a range of societal and ethical risks, including concerns around bias in algorithmic decision-making, the need for trustworthy and ethical AI aligned with human rights, and challenges in data privacy and consent. In response to these challenges, the United Nations has launched two new global bodies to strengthen AI governance: the Global Dialogue on AI Governance and the Independent International Scientific Panel on AI. Announced during a high-level UN General Assembly (UNGA) meeting in September 2025, these initiatives aim to foster international cooperation, align regulatory approaches, and provide evidence-based guidance on AI's societal impacts.
The Global Dialogue serves as a platform for governments, industry, and civil society to share good practices, while the Scientific Panel functions as an early-warning system, offering insights into emerging risks and opportunities.[45] This initiative builds on the Global Digital Compact adopted in September 2024, a global framework for digital cooperation and AI governance.[46] At the UN's high-level meetings held in September 2025, discussions addressed concerns such as disinformation, autonomous weapons, and algorithmic manipulation, with UN Secretary-General António Guterres outlining four global priorities: ensuring human control over AI in conflicts, establishing coherent international regulation, safeguarding information integrity, and closing the global AI capacity gap. Member States and regional blocs reaffirmed their commitment to human-centric AI. Proposals such as a Global Fund for Capacity Development were introduced to support fair access and mitigate the concentration of AI benefits, reinforcing the UN's role in shaping a responsible and collaborative global AI ecosystem.[47]

# Global Digital Compact

The Global Digital Compact (GDC) aims to create a broad and inclusive international framework that supports the collaborative efforts of multiple stakeholders to bridge gaps in digital access, data, and innovation. It seeks to establish guiding principles, goals, and actions that promote a digital future that is open, free, secure, and centred on human rights, while also advancing the achievement of the Sustainable Development Goals. The Compact was first proposed in the UN Secretary-General's report, "Our Common Agenda", as a response to the Member States' Declaration commemorating the United Nations' seventy-fifth anniversary (A/RES/75/1). The report recommended that the Global Digital Compact be finalized at the Summit of the Future in September 2024, with participation from relevant stakeholders.
To support this process, the Secretary-General released a policy brief outlining the Compact's objectives and guiding the preparations and negotiations ahead of the Summit, where the Compact would be a central topic. The General Assembly, through decision 77/568, committed to conducting open, transparent, and inclusive intergovernmental consultations on the Compact. Agreed through these negotiations, the Compact became a key outcome of the Summit of the Future and was appended to the Pact for the Future. The intergovernmental negotiations were co-facilitated by Sweden and Zambia.[48] Regulations can play a role in mitigating AI-related risks by setting standards for transparency, accountability, and safety. For example, the EU AI Act requires conformity assessments for high-risk systems, while other jurisdictions focus on data governance, fairness audits, and environmental reporting. Described in more detail in the following sections, these regulations and frameworks aim to help ensure AI serves the public good, protects individual rights, and fosters trust across various contexts.

# The EU AI Act

On a regional level, the European Union (EU) AI Act entered into force on 1 August 2024 and applies across the 27 EU Member States, with significant extra-territorial reach for AI providers offering products or services on the EU market. To support effective implementation, the European Commission has issued non-binding guidelines on the definition of AI systems. These guidelines clarify how to determine whether a software system qualifies as an 'AI system' under the Act and will evolve over time based on practical experience and emerging use cases. They were published alongside guidance on prohibited AI practices.
Examples of systems not considered AI under the Act include basic data processing tools such as spreadsheets or dashboards that execute pre-defined instructions without learning, as well as classical heuristic systems like rule-based chess programmes that do not adapt through data. Prohibited systems include, inter alia, subliminal or manipulative AI that distorts human behaviour beyond conscious awareness and AI exploiting vulnerabilities of individuals due to age, disability, or social or economic status. The European AI Office has also released interpretative guidelines for general-purpose AI (GPAI) models, applicable since 2 August 2025. These include a practical guide and a template for documenting training data, and they serve to complement the voluntary EU GPAI Code of Practice. While not legally binding, they provide insight into the Commission's enforcement approach. The implementation is phased: provisions on prohibited practices and AI literacy took effect in February 2025, while obligations for GPAI models, governance structures, and penalties began in August 2025. Full application of high-risk AI requirements will continue through 2026 and beyond. With timelines for further high-risk AI standards delayed, the European Commission proposed a simplification package for the AI Act in November 2025. The proposal would defer the obligations applying to high-risk AI systems by up to 16 months. Further, with the competitiveness of EU businesses in mind, it aims to reduce compliance burdens through streamlined documentation and reporting requirements, particularly for SMEs and small mid-cap companies. These measures aim to maintain the Act's core objectives while ensuring proportionality and legal certainty, enabling European firms to focus resources on innovation and growth. The proposal will now be discussed by the European Parliament and EU Member States, with negotiations expected to conclude by summer 2026.
While product safety and consumer protection remain central, AI may now be seen as a strategic tool to boost competitiveness and autonomy. The EU aims to reduce dependency on non-EU technologies and strengthen its position in the global AI race. The AI Continent Action Plan, part of the 'AI Made in Europe' strategy, includes the InvestAI initiative to mobilize €200 billion for AI development. This plan focuses on building AI factories and gigafactories for large-scale model training using Europe's supercomputing network, expanding data infrastructure, and investing in talent. The EU's approach combines risk-based regulation with massive investment, signalling that the EU intends not only to regulate AI but also to compete globally on innovation and infrastructure.

# The European Union AI Act in a nutshell

The development of the EU AI Act has been a carefully orchestrated process, beginning with the formation of a 'High-Level Expert Group on AI' by the European Commission. This group was tasked with drafting policy recommendations focused on advancing trustworthy AI. Following these initial efforts, the European Commission released its European approach to AI in February 2020 and subsequently presented the first proposal for the EU AI Act in April 2021. The AI Act represents the result of a five-year political process aimed at balancing innovation with the need for secure and reliable AI systems. Its primary objective is to enhance the functioning of the single market concerning AI products and services, while also promoting a human-centric approach to AI development and deployment, putting the protection of EU citizens at the forefront of this regulation. The Act applies to a broad range of stakeholders, including providers, deployers, importers, and distributors of AI systems within the EU, as well as non-EU entities whose AI systems are used within the EU.
This approach reflects the regulatory framework seen in the General Data Protection Regulation (GDPR), emphasizing the importance of safety and innovation in equal measure. The EU AI Act establishes a framework for regulating the deployment and use of AI within the EU, creating a standardized process for the market entry and operational activation of AI systems. This framework drives a harmonized approach across EU Member States. Serving as a product safety regulation, the Act employs a risk-based classification system, categorizing AI systems based on their use cases and assigning compliance requirements according to the level of risk they pose to users. This includes prohibiting certain AI applications deemed unethical or harmful, as well as imposing stringent requirements on high-risk AI applications to effectively manage potential threats. Additionally, t