# AI Ready – Analysis Towards a Standardized Readiness Framework

Version 2.0 Interim Report, January 2026

# Disclaimer

The views expressed in this publication are those of the authors and do not necessarily reflect the views of ITU. Any references made to specific countries, companies, products, initiatives or guidelines do not in any way imply that they are endorsed or recommended by ITU or the authors in preference to others of a similar nature that are not mentioned. Requests to reproduce extracts of this publication may be submitted to jur@itu.int. This document is intended for informational purposes only. Information provided is correct as of June 2025.

The designations employed and the presentation of the material in this publication do not imply the expression of any opinion whatsoever on the part of ITU concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. The mention of specific companies or certain manufacturer products does not imply that they are endorsed or recommended by ITU in preference to others of a similar nature that are not mentioned.

All reasonable precautions have been taken by ITU to verify the information contained in this publication. However, the published material is being distributed without warranty of any kind, either expressed or implied. The responsibility for the interpretation and use of the material lies with the reader. The opinions, findings and conclusions expressed in this publication do not necessarily reflect the views of ITU or its membership.
ISBN 978-92-61-41911-0 (electronic version)
ISBN 978-92-61-41921-9 (EPUB version)

# Foreword

Artificial Intelligence (AI) is reshaping the way we address complex societal challenges, offering new possibilities in areas such as healthcare, climate resilience, education, and digital inclusion. The ITU AI Readiness project was launched in 2024 to measure the ease or difficulty of AI integration and the ability to reap its benefits. Last year, to further advance the discussions, ITU launched the ITU AI Readiness pilot Plugfest to collate and study projects on applying AI to solve real-world problems. The ITU AI Readiness project also called for the engagement of experts to provide strategic feedback and guidance: 88 experts from 38 countries were carefully selected. The experts provided mentoring and comments on the Plugfest projects, in addition to valuable regional perspectives that shaped the ITU AI Readiness Framework.

This project brings together contributions from multiple sectors – industry, academia, government, and civil society – creating a collaborative environment where ideas, knowledge, and experiences are shared to develop the standardized AI Readiness Framework. Building on the experience of analysing use cases, in 2025 an analytical approach was followed in combination with a bottom-up approach, deriving dimensions and metrics for readiness analysis from the Plugfest project reports. A way forward for integrating regional customizations is provided in the form of indices. In addition to the analysis, a practical, living toolkit is designed and presented, which can be used by countries, enterprises, Non-Governmental Organizations (NGOs), and other third parties.

We acknowledge the support and are very grateful for the encouragement provided by the Kingdom of Saudi Arabia and the Ministry of Industry and Information Technology of China during this project.
We also acknowledge the work done by ITU Members in ITU Study Groups and their contributions to AI Readiness standards. As we continue developing the AI Readiness Project, we look forward to deepening our collaboration with partners worldwide, developing AI Readiness standards, building AI Readiness capacity, and contributing to multi-level AI Governance.

# Table of contents

- Foreword
- List of contributors
- Acronyms
- 1. Introduction
  - Background
  - Insights from AI Readiness Study
  - Report Structure
- 2. ITU AI Readiness Basic Framework
  - Data
  - Digital Infrastructure
  - Digital Skills
  - Innovation Ecosystem
  - AI Policy
- 3. Structural Approach
  - Factors
  - Dimensions
- 4. AI Readiness Gap Analysis
- 5. AI Readiness Framework Engagement
  - AI Readiness Toolkit
- 6. Future work
- Appendix: Additional Information
- Appendix: FAQ
- References

# List of contributors

<table><tr><td>Name</td><td>Affiliation</td></tr><tr><td>Ahmed Said</td><td>Ministry of Communication and Information Technology, Egypt</td></tr><tr><td>Alireza Yari</td><td>ICT Research Institute, Iran</td></tr><tr><td>Ameny Khachlouf</td><td>Tunisia Telecom</td></tr><tr><td>Amit Kumar Srivastava</td><td>Department of Telecommunications, India</td></tr><tr><td>Amjad Maawia Elnayal</td><td>Telecommunications Regulatory Authority of Bahrain, Bahrain</td></tr><tr><td>Ammar Saleh Ali Muthanna</td><td>Saint Petersburg State University of Telecommunications</td></tr><tr><td>Anna Abramova</td><td>Moscow State Institute of International Relations (MGIMO)</td></tr><tr><td>Antonia Moreno</td><td>The National Center of Artificial Intelligence in Chile (CENIA)</td></tr><tr><td>Álvaro Soto</td><td>Pontificia Universidad Católica de Chile, The National Center of Artificial Intelligence in Chile (CENIA)</td></tr><tr><td>Asrat Mulatu Beyene</td><td>Addis Ababa Science and Technology University</td></tr><tr><td>Aysha Ahmed Alkoheji</td><td>Telecommunications Regulatory Authority of Bahrain, Bahrain</td></tr><tr><td>Chenxi
QIU</td><td>China Academy of Information Communications Technology, MIIT of China</td></tr><tr><td>Fahad Albalawi</td><td>Saudi Data & AI Authority, Kingdom of Saudi Arabia</td></tr><tr><td>Habib Mohammed Hussien</td><td>Addis Ababa Science and Technology University</td></tr><tr><td>Halima Mohamed Ismaeel</td><td>Ministry of Transportation and Telecommunications, Bahrain</td></tr><tr><td>Ian Nyasha Mutamiri</td><td>Postal and Telecommunications Regulatory Authority of Zimbabwe, Zimbabwe</td></tr><tr><td>Innocent Nzimenyera</td><td>GGGI</td></tr><tr><td>Katarzyna Wac</td><td>University of Geneva</td></tr><tr><td>Kiran Raj Pandey</td><td>Health AI for All Network (HAINet)</td></tr><tr><td>Lilibeth Acosta</td><td>GGGI</td></tr><tr><td>Marcelo Gabriel Mendoza Rocha</td><td>Pontificia Universidad Católica de Chile, The National Center of Artificial Intelligence in Chile (CENIA)</td></tr><tr><td>Maxwell Ababio</td><td>Shield Tech Hub</td></tr><tr><td>Mohammed Alawad</td><td>Saudi Data & AI Authority, Kingdom of Saudi Arabia</td></tr><tr><td>Munezero Mihigo Ribeus</td><td>GGGI</td></tr><tr><td>Osmar Bambini</td><td>umgrauemeio</td></tr><tr><td>Prasha Sooful</td><td>NT Health, Australia</td></tr><tr><td>Rim Belhassine Cherif</td><td>Tunisie Telecom</td></tr><tr><td>Shan XU</td><td>China Academy of Information Communications Technology, MIIT of China</td></tr><tr><td>Shweta Khushu</td><td>Vector Institute</td></tr><tr><td>Tsafak Djoumessi Pauline Gnimpieba</td><td>Ministère des Postes et Télécommunications de la République du Cameroun, Cameroon</td></tr><tr><td>Xingzhi MA</td><td>China Academy of Information Communications Technology, MIIT of China</td></tr><tr><td>Yue QIN</td><td>China Academy of Information Communications Technology, MIIT of China</td></tr></table>

# Acronyms

<table><tr><td>AI</td><td>Artificial Intelligence</td></tr><tr><td>AI-RE Toolkit</td><td>AI Readiness Enablement Toolkit</td></tr><tr><td>API</td><td>Application Programming
Interface</td></tr><tr><td>CPU</td><td>Central Processing Unit</td></tr><tr><td>EG</td><td>Expert Group</td></tr><tr><td>GPU</td><td>Graphics Processing Unit</td></tr><tr><td>IAP</td><td>Incident Action Plan</td></tr><tr><td>IoT</td><td>Internet of Things</td></tr><tr><td>IP</td><td>Intellectual Property</td></tr><tr><td>KB</td><td>Knowledge Base</td></tr><tr><td>KPI</td><td>Key Performance Indicator</td></tr><tr><td>ML</td><td>Machine Learning</td></tr><tr><td>NGO</td><td>Non-Governmental Organization</td></tr><tr><td>SDK</td><td>Software Development Kit</td></tr><tr><td>TAC</td><td>Technical Advisory Committee</td></tr></table>

# 1. Introduction

# Background

This report provides an analysis of the Artificial Intelligence (AI) Readiness study aimed at developing a framework for assessing AI Readiness, which indicates the ability to reap the benefits of AI integration. By studying the actors and characteristics in different domains, a bottom-up approach is followed, which allows us to find common patterns, metrics, and evaluation mechanisms for the integration of AI in these domains. The ITU AI Readiness framework aims to engage with multiple stakeholders around the world, assess and improve the level of integration of AI in various domains, study use cases to validate the weightage of the key factors in those domains, improve global AI capacity building, and foster opportunities for international collaboration.

In September 2024, ITU published its first version of the AI Readiness report, where 6 key fundamental factors were identified:

- Open Data: Accessibility and quality of datasets for analysis of AI applications.
- Research: Collaboration between domain-specific and AI research communities.
- Deployment: Infrastructure and ecosystem readiness for AI deployment.
- Standards: Ensuring trust, interoperability, and compliance.
- Open source: Enabling rapid adoption through an open developer ecosystem.
- Sandbox: Platforms for AI experimentation and validation.
To further study the role played by these components in real practice, ITU and the Kingdom of Saudi Arabia called for engagement from the field and launched a pilot AI Readiness Plugfest during the 2024 GAIN Summit in Riyadh. The ITU AI Readiness Plugfest is an initiative to explain the AI Readiness factors to various stakeholders and allow stakeholders to "plug in" their regional AI readiness factors, such as data accessibility, AI models, compliance with standards, toolsets, and training programs. Additionally, the Technical Advisory Committee (TAC) and Expert Group (EG), composed of experts invited through AI for Good initiatives, provide strategic guidance and feedback on AI readiness projects.

Expert Groups are composed of global experts with different backgrounds coming from 38 countries. Experts are mainly from academia (33%) and government ministries/regulatory authorities (32%), with others from telecommunication companies, research institutes/think tanks, regional/international organizations, and private companies. There are 88 experts in EGs, among whom 62.5% come from developing countries. 32 experts are women who are leading figures in their countries and domains, representing 36% of all experts. To study sandbox environments and their influence on AI readiness, cloud credit support is provided to selected projects, further facilitating the development and deployment of AI solutions in real-world applications.

In July 2025, the third ITU AI Readiness workshop was hosted at the ITU AI for Good Global Summit. The workshop invited global stakeholders, industry leaders, and researchers to foster collaboration on ITU AI Readiness. The workshop served as a compilation of projects towards ITU AI Readiness 2.0, featuring the sharing of plugfest project learnings along with partner presentations centering on their understandings of AI Readiness. During the workshop, ITU announced its further steps towards ITU AI Readiness 3.0 activities.
One of the main contributions of this report is the further development of the framework for assessing AI Readiness, which indicates the ability to reap the benefits of AI adoption. After the AI for Good Global Summit in July 2025, we continued our analysis and summarized the learnings from the plugfest project reports. By continuing AI use case studies and initiating consultations with experts from industry, research institutes, academia, and government, we derived 13 generic dimensions from the expert guidance during the plugfest. Metrics quantify and measure detailed domain-specific values under each dimension. Indices serve as filters or weightages, which capture the granular priorities of the user. Indices can be applied to both dimensions and metrics to allow users to adjust their relative importance when self-evaluating. The basic framework and the details are complementary to each other, making the framework useful both for policymakers seeking guidance on AI and for domain experts seeking technical and actionable recommendations.

For better stakeholder engagement around the ITU AI Readiness Framework, ITU designed a pilot AI Readiness Enablement Toolkit (AI-RE Toolkit), a dynamic model and living tool that enables self-evaluation by users. The toolkit uses the principle of a foundational model built from the ITU AI Readiness Knowledge Base (KB) in the ITU AI for Good Sandbox and a fine-tuned model integrating regional customizations for users to self-assess AI performance in their context. The ITU AI Readiness Knowledge Base functions as the brain of the toolkit. It is built with AI techniques and gathers input mapped to the 6 fundamental factors in the framework. Output from the framework contains an evaluation of the status quo, a gap analysis, and customized actionable recommendations.
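The index mechanism described above (0/1 filters and 0-to-1 weightages applied to metric scores) can be sketched in a few lines of code. This is purely an illustration: the metric names, scores, filters, and weights below are invented for the example and are not part of the official framework.

```python
# Illustrative sketch of applying indices (0/1 filters and 0-1 weightages)
# to normalized metric scores during self-assessment.
# All metric names, scores, and weights are hypothetical.

def readiness_score(metric_scores, filters, weights):
    """Combine normalized metric scores (each in [0, 1]) into one
    weighted average, after excluding filtered-out metrics.

    metric_scores: {metric: score in [0, 1]}
    filters:       {metric: 0 or 1}            -- 0 excludes a metric
    weights:       {metric: weight in [0, 1]}  -- relative importance
    """
    total, weight_sum = 0.0, 0.0
    for metric, score in metric_scores.items():
        w = filters.get(metric, 1) * weights.get(metric, 1.0)
        total += w * score
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

scores = {"open_datasets": 0.8, "network_coverage": 0.6, "ai_courses": 0.4}
filters = {"ai_courses": 0}  # this user chooses to exclude the skills metric
weights = {"open_datasets": 1.0, "network_coverage": 0.5}

print(round(readiness_score(scores, filters, weights), 2))  # → 0.73
```

Because the filters and weightages are supplied by the user rather than fixed globally, the same underlying metric scores can yield different readiness profiles for different countries or companies, which is the intent of the index design.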
Each time users input new materials, such as the latest version of a report, unstructured data, or deployment stories, the knowledge base iteratively learns from the new input. To increase adoption among general users, the ITU AI Readiness Challenge, with a specific focus on the 6 factors, was launched at the end of October 2025 during the AI for Good Impact Africa event in Johannesburg, South Africa. Participants were requested to build the basic framework of the knowledge base. To review the framework, the dimensions, the pilot toolkit design, and the standards gap on the ground, several rounds of review meetings with experts from the EGs were held, with a specific focus on collecting feedback and potential inputs. From this expert feedback, potential users of the toolkit were identified, pain points of users on the ground were noted, and contributions from Member States were discussed.

# Insights from AI Readiness Study

1. Strengthening ICT-related higher education, leveraging open-source ecosystems, engaging with international education and training platforms, and enabling leapfrogging opportunities can accelerate AI skills development.

2. A strong positive correlation exists between national income levels and general digital literacy, measured through ICT skill penetration. However, substantial variation exists within income groups. Middle-income countries often exhibit higher optimism and trust toward AI technologies than high-income economies, creating favorable conditions for large-scale AI adoption if skills gaps are addressed. Digital skills development accelerates most rapidly at the middle-income stage, while ICT skill penetration typically remains low in low-income economies. Policy choices and education investment during this phase play a decisive role in widening or narrowing national AI readiness gaps.

3. Data readiness is a critical determinant of effective, trustworthy, and inclusive AI adoption.
Beyond data scale and accessibility, the quality, diversity, representativeness, and labeling of datasets directly shape AI system performance, as well as their fairness, transparency, and adaptability.

4. Insufficient data quality and biased datasets risk reinforcing discrimination and limiting real-world impact, particularly in localized deployment contexts. Strengthening public data openness and data service capabilities – including data collection, data cleaning, and data labeling – is therefore essential to enable scalable and localized AI adoption across priority sectors such as education, agriculture, and transportation.

5. AI readiness globally is constrained by limited data scale and uneven Internet penetration. Global Internet usage stands at 55.56%, indicating that nearly half of the world's population remains outside the digital ecosystem required for large-scale AI data generation. While 57% of countries have Internet penetration above 60%, nearly half remain below 50%, and only 18% of countries exceed 90% penetration, highlighting persistent constraints on global data scale for AI development.

6. Data readiness gaps are driven by service capability and governance, not access alone. On average, developed economies have more than three times as many Internet service providers per million inhabitants as developing economies, with median values showing an even larger gap. In addition, the lack of data governance frameworks limits effective and trustworthy data use.

7. Basic network coverage supports entry-level AI use, but advanced network readiness remains uneven. While 96% of the global population is covered by mobile broadband, access to advanced networks remains highly uneven. Global 4G coverage reaches 93%, but only 56% in low-income economies. Global 5G coverage stands at 55%, compared to 84% in high-income economies and just 4% in low-income economies, with significant regional and urban-rural disparities.

8.
Shortfalls in computing infrastructure, energy supply, and edge devices constrain AI deployment. Availability of data centers and per capita electricity supply in developed economies is more than twice that of developing economies. The IoT market size in developed economies is on average four times larger than in developing economies, limiting the availability of edge devices for AI-enabled industrial applications.

9. Open-source technologies lower entry barriers for AI adoption worldwide. Contributions to major open-source AI and LLM-related repositories extend beyond application-layer development to include core model architectures, training pipelines, evaluation benchmarks, and governance mechanisms. Measurable upstream contributions to top-tier open-source LLM initiatives and participation in open-source technology development, especially the development of foundational and large language models (LLMs), are important metrics of AI readiness.

10. Overall, the level of open-source engagement correlates strongly with other readiness dimensions, including R&D, computing capacity, and the overall innovation ecosystem. R&D capacity is an important dimension of AI readiness, leading to metrics such as stronger AI research output, higher publication impact, and greater resilience in talent development. At the enterprise level, company investment in emerging technologies – including AI, data platforms, and advanced computing – plays a critical role in translating research into scalable systems. Corporate AI R&D expenditure brings cumulative advantages alongside robust public research institutions and innovation support mechanisms.

11. Investment patterns influence AI readiness levels. Public investment in AI, supported by effective national AI strategies, helps establish research and innovation systems.
Dedicated, multi-year public funding mechanisms for AI research, experimentation, and standards engagement, together with supportive private investment, including venture capital investment in AI startups, are important. These ecosystems benefit from mature financial markets, strong exit pathways, and dense networks linking startups, research institutions, and large technology firms. Investment patterns influence startup formation, scale-up potential, and domestic commercialization, and enable AI ecosystems to focus not only on deployment and adoption but also on endogenous innovation.

12. Regional evaluations of AI Readiness can be linked to performance across all dimensions of AI readiness. In some cases, tight linkages between academia, industry, and government, together with active participation in international AI standardization processes, play a decisive role in shaping global technical specifications, reference architectures, and evaluation methodologies. In other cases, expanding AI adoption and selective research strengths are accompanied by limited influence over foundational technologies: moderate engagement in open-source AI projects (primarily at the application and integration layers) and growing public AI funding are offset by fragmented governance and coordination, leading to limited participation in core open-source LLM development and international AI standardization. Lastly, where structural constraints exist across all dimensions, the result is minimal upstream engagement in open-source AI and LLM projects, with limited access to private capital and global AI investment networks. AI deployment in such cases is frequently driven by imported technologies, increasing dependency risks and limiting national influence over interoperability, security, and long-term system evolution. Participation in international AI standardization processes remains low, further reducing the visibility of local needs in global technical frameworks.

13.
Policy interventions should complement AI adoption with investments in research and technical capacity. Consistent investment helps move beyond pilot initiatives towards scalable and interoperable systems. Complementary and mutually reinforcing public and private investment play a catalytic role: public investment enables research, experimentation, and standardization engagement, while private investment is essential for commercialization and scale. Weak coordination between these channels leads to fragmented ecosystems and limited global competitiveness.

14. AI readiness encompasses not only technological capacity but also the ability to participate effectively in the development, adoption, and implementation of international technical standards. This includes active engagement in standardization on AI, data, and emerging technologies; contributions to technical specifications, reference architectures, and evaluation frameworks; and alignment between national AI strategies and international interoperability requirements.
# Report Structure

This report is structured as follows. The Introduction provides an overview of the report. The ITU AI Readiness Basic Framework section summarizes the general framework for public use, explaining the key dimensions and indices. The Structural Approach section complements the basic framework with more actionable metrics for domain experts. The AI Readiness Gap Analysis section identifies the gaps observed during the studies in the standards, policy, and implementation areas. The AI Readiness Framework Engagement section introduces the ITU AI Readiness Enablement Toolkit design, its requirements, and the ITU AI Readiness Challenge, which serve as the engagement bridge to general users. The Future Work section outlines the work to be accomplished in 2026-2027, including the expansion of the plugfest projects, the launch of the ITU AI-RE Toolkit and ITU AI Readiness Challenge, the development of AI Readiness standards, and the expansion of the ITU AI for Good Sandbox Network.

# 2.
ITU AI Readiness Basic Framework <table><tr><td>Dimensions</td><td>Indices</td><td>Metrics</td></tr><tr><td rowspan="8">Data</td><td rowspan="2">Data Accessibility</td><td>Open Datasets and models</td></tr><tr><td>Data collection source</td></tr><tr><td rowspan="3">Data service capability</td><td>Data Quality Metrics</td></tr><tr><td>Data Representativeness and Diversity Metrics</td></tr><tr><td>Data Labelling Capacity Metrics</td></tr><tr><td rowspan="2">Data Governance</td><td>Bias Detection and Mitigation</td></tr><tr><td>Fairness and Accountability Safeguards</td></tr><tr><td>Data Interoperability</td><td>Standard data formats</td></tr><tr><td rowspan="16">Digital Infrastructure</td><td rowspan="5">Connectivity</td><td>Fixed-broadband subscriptions</td></tr><tr><td>Fixed Broadband Download Speed</td></tr><tr><td>Mobile-cellular subscriptions</td></tr><tr><td>Mobile Download Speed</td></tr><tr><td>Network Coverage</td></tr><tr><td rowspan="3">Computing Capacity</td><td>Compute availability per capita</td></tr><tr><td>Number of Data Centres</td></tr><tr><td>Energy Supply Per Capita</td></tr><tr><td rowspan="5">Device</td><td>IoT Market Size</td></tr><tr><td>Access to chipset</td></tr><tr><td>Robotics platform adoption</td></tr><tr><td>Smart sensors deployment</td></tr><tr><td>Usability</td></tr><tr><td>Automation</td><td>Levels of automation [Y.3173]</td></tr><tr><td rowspan="2">Access to AI</td><td>Location of AI (edge/cloud)</td></tr><tr><td>Contextualization level</td></tr><tr><td rowspan="5">Digital Skills</td><td rowspan="2">Education</td><td>Number of STEM Graduates</td></tr><tr><td>Access to AI courses</td></tr><tr><td rowspan="2">Digital literacy</td><td>National ICT Skills Level</td></tr><tr><td>AI skills level</td></tr><tr><td>AI application development</td><td>AI application development trainings</td></tr><tr><td rowspan="12">Innovation Ecosystem</td><td rowspan="5">Standards</td><td>Data standards</td></tr><tr><td>AI pipeline standards [ITU-T 
Y.3172]</td></tr><tr><td>Benchmarking standards</td></tr><tr><td>Energy management standards</td></tr><tr><td>Vertical Applications standards</td></tr><tr><td>Open source</td><td>Engagement/ adoption of open-source projects/models</td></tr><tr><td rowspan="2">R&D</td><td>R&D Investment as a Percentage of GDP</td></tr><tr><td>Number of AI Publications</td></tr><tr><td rowspan="2">Investment</td><td>Annual public investment in AI</td></tr><tr><td>Private investment/VC in AI startups</td></tr><tr><td rowspan="2">AI Technology Source</td><td>Domain-wise AI Technology exported</td></tr><tr><td>Domain-wise AI Technology imported</td></tr><tr><td rowspan="9">AI Policy</td><td rowspan="3">AI Policy and Regulation</td><td>National AI strategies</td></tr><tr><td>National ethics framework</td></tr><tr><td>AI policy tools</td></tr><tr><td rowspan="3">Regulatory Quality</td><td>AI regulation implementation in the country</td></tr><tr><td>Flexibility</td></tr><tr><td>Sandbox</td></tr><tr><td rowspan="3">Implementation</td><td>Implementation guidelines and priorities</td></tr><tr><td>Supervision guidelines</td></tr><tr><td>AI content guidelines</td></tr></table>

# Data

The Data dimension evaluates foundational data readiness for AI development and deployment. Four core indices are selected – Data Accessibility, Data Service Capability, Data Governance, and Data Interoperability – as they collectively address the key aspects of AI data. Accessibility ensures data exists and can be acquired, Capability determines whether data can be transformed into AI-ready inputs, Governance guarantees data is used ethically, securely, and sustainably, and Interoperability ensures compatibility across systems. This four-part structure mirrors the journey from raw data to trustworthy AI solutions: without accessible data, projects cannot launch; without capability, data remains unusable; without governance, AI adoption risks ethical failures or public rejection; and without interoperability, solutions cannot be shared or scaled.
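Several of the data-readiness notions in this section, such as completeness and labeled-data availability, lend themselves to simple automated checks. The following is a minimal, purely illustrative sketch; the record fields, the tiny dataset, and the two metric definitions are assumptions for the example, not prescribed by the framework.

```python
# Illustrative data-readiness checks over a toy dataset.
# Field names ("text", "label") and records are hypothetical.

records = [
    {"text": "crop yield report", "label": "agriculture"},
    {"text": "", "label": "transport"},                 # incomplete record
    {"text": "school enrolment data", "label": None},   # unlabeled record
    {"text": "bus timetable", "label": "transport"},
]

# Dataset completeness: proportion of records with all fields populated.
complete = sum(1 for r in records if all(r.values())) / len(records)

# Labeled-data availability: proportion of records with a usable label.
labeled = sum(1 for r in records if r["label"] is not None) / len(records)

print(f"completeness={complete:.2f}, labeled={labeled:.2f}")
# → completeness=0.50, labeled=0.75
```

In practice such checks would run over real datasets with domain-specific field definitions and thresholds; the point here is only that the metrics proposed below are directly computable once a dataset schema is fixed.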
To operationalize the findings in section 1 above, and to develop data readiness metrics within the AI Readiness Framework, a set of actionable metrics is proposed, focusing on data quality, data labeling capacity, and bias and fairness risks.

1. Data Service Capability – Quality and Labeling Metrics

(a) Data Quality Metrics

These metrics assess whether datasets meet minimum requirements for AI training and deployment:

- Dataset completeness: proportion of datasets meeting minimum completeness thresholds
- Accuracy and consistency: error rates and internal consistency checks
- Timeliness and update frequency: frequency with which datasets are updated
- Multi-source diversity: diversity of data sources across institutions, regions, and data modalities

(b) Data Representativeness and Diversity Metrics

These metrics evaluate the extent to which datasets reflect real-world conditions:

- Demographic and geographic coverage: representation of population sub-groups and regions
- Sectoral coverage: coverage across priority sectors (e.g., education, agriculture, transportation)
- Local data share: proportion of datasets collected locally versus externally sourced or synthetic data

(c) Data Labeling Capacity Metrics

These metrics assess the ability to transform raw data into AI-ready training data:

- Availability of labeled datasets: proportion of datasets with usable labels
- Labeling quality: consistency or inter-annotator agreement in labeling processes
- Localization of labeling: extent to which datasets are labeled using local languages, contexts, and domain expertise
- Scalability of labeling processes: cost, time, and workforce required per labeling task

2.
Data Governance – Bias and Fairness Metrics

(a) Bias Detection and Mitigation

These metrics assess safeguards against bias and discrimination at the data level:

- Existence of bias audits: whether datasets undergo bias or fairness assessments
- Bias documentation: availability of documentation describing known dataset limitations or biases
- Corrective mechanisms: processes to rebalance or refine datasets when bias is identified

(b) Fairness and Accountability Safeguards

These metrics connect data readiness with trustworthy and accountable AI deployment:

- Alignment with data governance frameworks: consistency with national or sectoral data governance policies
- Transparency mechanisms: disclosure of data provenance and labeling practices
- Monitoring and review: periodic reassessment of datasets used in deployed AI systems

# Digital Infrastructure

The Digital Infrastructure dimension is a foundational element for the development and adoption of artificial intelligence, as it provides the essential physical and technological conditions for AI systems to be trained, deployed, and accessed. It is divided into five key indices: Connectivity, which ensures fast and reliable data transmission across devices and platforms; Computing Capacity, which supplies the processing power required to run complex AI models; Device, which determines how widely AI applications can reach end users through smartphones, sensors, and IoT devices; Automation, which captures the level of automation of deployed systems [Y.3173]; and Access to AI, which reflects where AI is hosted (edge or cloud) and its level of contextualization. Together, these capture the full stack of AI enablement – from core infrastructure to edge deployment – making this dimension critical for scaling and democratizing AI across sectors and populations.

# Digital Skills

One of the major challenges for AI adoption in developing countries is the low level of general digital literacy and a shortage of specialized technical skills among the population. Universal digital literacy provides a foundation for the inclusive use of frontier technologies and AI systems.
For the widespread application of AI, simultaneously cultivating AI talent and vertical-domain talent, while actively fostering their exchange and collaboration, is of strategic and critical importance. Under this dimension, there are three indices: Education, Digital Literacy, and AI Application Development.

# Innovation Ecosystem

The Innovation Ecosystem dimension assesses the broader environment that nurtures and accelerates the advancement and adoption of Artificial Intelligence. It focuses on the critical inputs and collaborative dynamics that transform research into tangible progress and practical applications. We examine six key areas: Standards, engagement in Open Source, R&D, Investment, GenAI content, and the AI technology market. These elements collectively capture a nation's capacity for pioneering research, its engagement in global knowledge sharing, and the financial commitment required to translate innovation into impactful AI solutions. A vibrant ecosystem requires not only cutting-edge research foundations but also active participation in open communities and sustained investment to bridge the gap between discovery and deployment. Weakness in any of these areas can significantly impede the pace and scale of AI-driven progress.

# AI Policy

The AI Policy dimension evaluates the maturity of the institutional frameworks essential for trustworthy and responsible AI ecosystems. We selected AI Policy and Regulation, Regulatory Quality, and Implementation as they collectively represent the indispensable triad for accountable AI systems.

# 3. Structural Approach

The ITU AI Readiness framework identified 6 fundamental factors, which set the foundation for the AI Readiness study. Under these factors, 13 dimensions are derived from the Plugfest project reports, each of which is mapped to at least one factor.
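The dimension-to-factor mapping just described carries a checkable invariant: every dimension must map to at least one known factor. A minimal sketch of that check follows; the mapping entries shown are invented examples, not the official mapping.

```python
# Hypothetical illustration of the dimension-to-factor mapping;
# the entries below are examples only, not the official mapping.

FACTORS = {"Data", "Research", "Deployment", "Standards",
           "Open source", "Sandbox"}

dimension_factors = {
    "Data Accessibility": {"Data"},
    "Connectivity": {"Deployment"},
    "R&D": {"Research", "Open source"},
    # ... remaining dimensions omitted for brevity
}

# Invariant: each dimension maps to at least one known factor.
for dim, factors in dimension_factors.items():
    assert factors, f"{dim} is not mapped to any factor"
    assert factors <= FACTORS, f"{dim} maps to an unknown factor"

print("mapping is consistent")
```

Encoding the mapping this way also makes it easy to answer reverse queries, for example listing all dimensions that touch a given factor when aggregating scores per factor.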
The dimensions are chosen based on whether they can serve as an axis (e.g., X, Y) along which the progress of entities at different levels can be plotted. Indices are extracted and summarized based on the domain-specific metrics called out in the Plugfest projects. Two types of indices are designed: (1) a 0/1 filter, and (2) a weightage, which ranges from 0 to 1. Weightages reflect the relative importance given to different metrics and dimensions in different countries or companies. Metrics are domain-specific Key Performance Indicators (KPIs) under each dimension, which will be designed and reviewed by domain experts. Metrics are used to measure the output of the toolkit. The method or process of measuring a metric depends on the domain.

Users of the framework include countries, enterprises, Non-Governmental Organizations (NGOs), and other third parties. A user can choose the level of self-assessment by applying the relevant set of indices.

# Factors

The preliminary report on AI Readiness, published in May 2024, sets the foundation for the ITU AI Readiness study. The preliminary report identified six key technical readiness factors:

1. Data: Accessibility and quality of datasets for the analysis of AI applications. The availability of data is crucial in the training, modelling, and application of AI, irrespective of the domain. Data available for analysis may be private or public. Metadata for private data may be published (e.g., data types and structures). However, public data, open for analysis by anyone, requires cleaning and anonymization to remove confidential or personal information.

2. Research: Collaboration between domain-specific and AI research communities. Balancing the two main aspects of research, namely advancements in domain-specific research and advancements in AI research, requires collaboration between domain experts and AI researchers.
Providing a platform for collaboration among experts from different realms of knowledge, facilitating cooperation, and enabling the exchange of information among them is key to creating a sustainable ecosystem for AI-based innovation.

3. Deployment Support: Infrastructure and ecosystem readiness for AI deployment. Two major categories of infrastructure are studied: physical infrastructure and communication infrastructure. Physical infrastructure elements play an important role in the integration and application of AI in data collection; aggregation, at the edge or core; training, federated or centralized; and the application of AI and Machine Learning (AI/ML) inference using actuators. In addition, there is backend infrastructure, such as compute availability, storage availability, fiber/wireless availability for the last mile, and high-speed wide area network capabilities, which would democratize AI/ML solutions and create scalability for innovations.

4. Standards: Ensuring trust, interoperability, and compliance. Interoperability and compliance with standards build trust. Secure standards support AI readiness, as global participation and consensus determine whether pre-standard research can be adopted in the real world. Vendor ecosystems, including open source, are diverse across different domains of use cases. Adoption of AI-based solutions that involve humans, such as mobility inclusion, requires users' trust in, and positive perception of, those solutions.

5. Open Source and Code: Enabling rapid adoption through an open developer ecosystem. An energized third-party developer ecosystem not only fast-tracks adoption but also enables revenue generation. The developer ecosystem bootstraps reference implementations of algorithms, along with baseline and open-source toolsets.
Third-party applications, Application Programming Interfaces (APIs), and Software Development Kits (SDKs), along with crowd-sourced solutions, increase the generalizability of AI/ML solutions across regions and domains via transfer learning. Hardware implementations, especially open-source IoT boards, are evolving to host edge data processing.

6. Sandbox Environments: Platforms for AI experimentation and validation. Many use cases require an experimental sandbox for creating experimental solutions and validating them in experimental setups. While real-world data implies a more reliable source of data and a realistic testing environment, not all scenarios can be encountered in the real world, especially when catastrophic events and the related data are rare.

# Dimensions

In this section, we introduce the dimensions derived from the Plugfest projects, with examples of AI integration scenarios, so that a straightforward understanding can be provided. The evaluation can be done at different levels of an entity, including country, industry, and enterprise levels, based on needs. An evaluation of the status quo, a gap analysis, and recommendations can then be provided for users at each level.

# Dimension 1: Data/model Marketplace

This dimension is derived from projects where the importance of exchanging data among partners and creating value out of it was realized. When open data and models are available on the table, ontologies and connections within the data can be identified in the system; new ideas, business values, or concept notes can then be developed based on the exchange. This dimension aims to measure the creation of an ecosystem/environment for startups, business-to-business players, or other types of value providers to create services, such as (un)structured data, expert knowledge bases, and general platforms, and to monetize them.
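Several of the marketplace metrics used for this dimension, such as counts of data producers, consumers, transactions, and per-dataset downloads, can be derived from a simple transaction log. A minimal sketch, in which the record fields and values are purely illustrative assumptions:

```python
from collections import Counter

# Illustrative transaction log for a data marketplace.
# Producer/consumer/dataset names are made up for illustration.
transactions = [
    {"producer": "agri-coop",  "consumer": "startup-a", "dataset": "soil-2024"},
    {"producer": "met-office", "consumer": "startup-a", "dataset": "rain-grid"},
    {"producer": "agri-coop",  "consumer": "ngo-b",     "dataset": "soil-2024"},
]

# Basic marketplace metrics: active producers, consumers, transaction count.
producers = {t["producer"] for t in transactions}
consumers = {t["consumer"] for t in transactions}
downloads = Counter(t["dataset"] for t in transactions)

print(len(producers), len(consumers), len(transactions))  # 2 2 3
print(downloads.most_common(1))  # [('soil-2024', 2)]
```

In practice such logs would also carry license and agreement identifiers, so that the diligence and governance metrics listed below could be computed from the same source.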
This dimension helps in measuring the readiness of integrating AI to provide business value, especially for deriving value from existing unstructured data, models, domain knowledge, and business workflows. The more value that can be generated from adopting AI, the easier it is to use AI techniques on a larger scale. Metrics such as "properties of the data and models" and "properties of the marketplace ecosystem" can be used, so that preparedness for the data and model marketplace can be evaluated. This dimension is mapped to the open data and open source factor due to its reliance on open-source data and models.

<table><tr><td>Dimension</td><td>Metrics</td></tr><tr><td>Data/model marketplace</td><td>• Properties of datasets (metadata: types structured/unstructured, number of datasets, volume, velocity, variety, quality). • Properties of models (metadata: types, size: number of parameters, performance: accuracy, ML/DL, i/o: data and inference, training dataset parameters). • Data collection sources (location: home/enterprise/public, heterogeneity: image/audio/video, number of sources, privacy/trust, synthetic/real world, type of source: streamed, e.g., mounted cameras, satellites, drones, field IoT reports). • Diligence metrics for the marketplace (license and participation agreement templates - trust). • Data marketplace: the number of data producers, data consumers, and agreements integrated in the digital data marketplace; the number of transactions in the data marketplace. • The number of open datasets and downloads from the data marketplace for such datasets.
• The number of global citations for the datasets. • Marketplace metrics, including the number of active participants and transaction frequencies. • Metrics for fairness and bias: safeguards to manage data bias or data quality risks. • Privacy and security: metrics to measure the levels of assurance, such as privacy-preserving mechanisms for using datasets and the number of sources that can contribute operational data in a privacy-preserving manner; authentication and authorization mechanisms; and just-in-time deletion of private data. • Standards on personal data protection. • Data governance metrics, such as the percentage of datasets under partner agreements (communities/NGOs/public agencies/private) and license metrics (types of licenses). • Standards-compliant data formats: the amount of data available in a pre-specified standard, and the amount of data available in an interoperable manner. • Metrics and properties of open-source models, such as openly published weights, e.g., the number of open-source models in different domains like coding and mathematics.</td></tr></table>

# Dimension 2: Generated Content Marketplace

GenAI has been a topic of heated discussion recently, and the focus is usually on using AI to generate new content. Yet when studying the Plugfest projects, one unique perspective came to our attention: generating new datasets and models so that they can be integrated into new AI services and then used or traded. New content can be generated for the purpose of AI services. Are we prepared to provide an ecosystem where new ideas can be generated by plugging in existing materials, connecting with other innovations, and being turned into new services? In this dimension, new innovations and Intellectual Property (IP) may be created as part of the creative sectors. New datasets and models may also be created, which in turn may be used to create new services. We aim to measure the ease of creating new services using AI.
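One metric in this dimension is the availability of guardrails against hallucination. As a toy illustration only, a naive grounding score can be computed as the fraction of generated sentences that share enough content words with the source material; production guardrails rely on much stronger techniques (retrieval checks, NLI models), and the texts and threshold below are assumptions:

```python
# Naive hallucination-guardrail sketch: score generated text by the
# fraction of its sentences with sufficient word overlap against the
# source. Purely illustrative, not a production method.

def grounded_fraction(generated: str, source: str, threshold: float = 0.5) -> float:
    src_words = set(source.lower().split())
    sentences = [s.strip() for s in generated.split(".") if s.strip()]

    def overlap(sentence: str) -> float:
        words = set(sentence.lower().split())
        return len(words & src_words) / max(len(words), 1)

    grounded = [s for s in sentences if overlap(s) >= threshold]
    return len(grounded) / max(len(sentences), 1)

source = "the reservoir level dropped ten percent in march"
generated = ("The reservoir level dropped in march. "
             "Aliens drained the reservoir overnight.")
score = grounded_fraction(generated, source)
print(score)  # 0.5 - the second, fabricated sentence fails the check
```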
This generated content marketplace should allow users to generate new services based on the resources plugged into the ecosystem, such as IP databases, multi-modal content, arXiv papers, open-source models, and code. Metrics such as the properties of this ecosystem, the parameters of the datasets and models in it, the interoperability among resources when generating new content, and the ability to detect hallucinations can be considered. This dimension is mapped to the open data and open-source factor, as the marketplace relies on open data and open models to generate new content.

<table><tr><td>Dimension</td><td>Metrics</td></tr><tr><td>Generated content marketplace</td><td>• Parameters of the ecosystem, which include datasets, models for content generation (including open-source models), pluggability of new services for content generation, and trading/monetization. • Availability of guardrails for hallucinations and ethical content. • Support for multi-modal content. • Evaluation techniques for fake detection. • Customization mechanisms for regional content.</td></tr></table>

# Dimension 3: Cross-domain correlation analysis

AI can be adopted in various domains. In scenarios where the co-benefits of AI integration, such as economic, social inclusion, and environmental benefits, are to be studied, cross-domain correlation analysis is needed. If AI is integrated into some steps of one workflow, it can be adopted in a similar or modified manner in other workflows. This dimension aims to measure cross-domain correlation in integrating AI. The metrics here would find similarities and patterns in different domain workflows and opportunities for integrating AI. The availability and quality of published domain reports, domain-specific models, and KPIs can serve as metrics for this dimension, and the benefits due to correlation analysis will also be evaluated. This dimension is mapped to open data and standards.
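If workflows are represented in a standardized way as sets of step labels, cross-domain overlap can be quantified with a simple set-similarity measure. A minimal sketch; the step names and the choice of Jaccard similarity are illustrative assumptions:

```python
# Illustrative sketch: comparing two domain workflows represented as
# sets of standardized step labels. Jaccard similarity = shared steps
# divided by all distinct steps across both workflows.

def workflow_similarity(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

wildfire = {"risk analysis", "monitoring", "resource management", "post facto analysis"}
flood    = {"risk analysis", "monitoring", "evacuation routing",  "post facto analysis"}

sim = workflow_similarity(wildfire, flood)
print(sim)  # 0.6 - three shared steps out of five distinct steps
```

A high score suggests that an AI integration point proven in one workflow (e.g., monitoring) is a candidate for reuse in the other.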
Cross-domain analysis requires a large amount of data, reports, use cases, and information about domains so that correlations among domains can be established. Comparing workflows across domains requires a standardized representation of the workflow; thus, ITU standards have to be introduced.

<table><tr><td>Dimension</td><td>Metrics</td></tr><tr><td>Cross-domain correlation analysis</td><td>·The existence of an integrated workflow including prevention (e.g., risk analysis), detection (e.g., monitoring), response (e.g., resource management), and impact analysis (e.g., post facto analysis). Based on: ○ Status of the application domains using available data (e.g., from published reports) and regional readiness parameters (if available). ○ Domain-specific workflows/models, e.g., fire propagation and detection models. ○ Domain-specific KPIs (e.g., reduction in the burnt area). ·Availability of representation schemes for deployment infrastructure (e.g., geographic distribution, geographic information system, ArcGIS-based representation), including city building plans + weather info as input (e.g., CityGML). ·Benefits due to correlation, such as: ○ Cycle time reduction via integrated workflows (e.g., time delay between detection and response). ○ Coverage (in terms of area covered) and scale (in terms of deployments).</td></tr></table>

# Dimension 4: Contextualization and Regional Impact

When adopting AI solutions that originally come from other regions or domains, it is observed that contextualization and adaptation are needed. This includes the choice of datasets, models, research, guidelines, toolsets, and standards developed with regional inputs and developed regionally. Enlarging the regional impact of an AI solution to a larger scale should also be captured. What are the differences between local solutions and those in other parts of the world? What might be the gaps to bridge and improve?
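One measurable aspect of contextualization is how well regional sub-groups are represented in the data used for training, fine-tuning, or evaluation. A minimal sketch, with group labels and population shares that are hypothetical:

```python
from collections import Counter

# Illustrative sketch: compare sub-group shares in a dataset with
# reference population shares. Labels and shares are made up.

def representation_gaps(dataset_labels, population_shares):
    counts = Counter(dataset_labels)
    total = sum(counts.values())
    # Positive gap = over-represented; negative = under-represented.
    return {group: counts.get(group, 0) / total - share
            for group, share in population_shares.items()}

labels = ["urban"] * 80 + ["rural"] * 20          # hypothetical dataset
gaps = representation_gaps(labels, {"urban": 0.6, "rural": 0.4})
# urban over-represented by about +0.20, rural under-represented by about -0.20
```

Gaps like these would feed the "representation of sub-groups" metric below and flag where regionally collected data is still needed.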
This dimension covers indigenous solutions, the contextualization of overall solutions with regional inputs to maximize the impact on communities and the region, and the adoption level of regional solutions at a larger scale. Metrics for this dimension include the number and quality of locally collected data; innovations and patents, including models, toolsets, AI solutions, research, and guidelines; the number of users of local services; and the adoption level of local services in other markets. This dimension is mapped to open data, research, and deployment. Local innovation involves large amounts of data and research effort, and contextualization facilitates the deployment of AI integration.

<table><tr><td>Dimension</td><td>Metrics</td></tr><tr><td rowspan="16">Contextualization & Regional Impact</td><td>Number/quality of regionally developed patents/technology components/solutions.</td></tr><tr><td>○ Datasets, models, research, guidelines, toolsets, and standards developed with regional inputs and developed regionally.</td></tr><tr><td>○ Availability of structured and accessible local datasets for training AI models.</td></tr><tr><td>Number of users of indigenous services.</td></tr><tr><td>Customizations for regional applications.</td></tr><tr><td>○ Generalized vs.
contextualized solutions (with local inputs).</td></tr><tr><td>○ Gaps for local industry and researchers to develop and contribute with respect to global components and technologies.</td></tr><tr><td>○ Analysis of patterns in customizations to derive potential points for customization, e.g., model training based on regional skin patterns in dermatology.</td></tr><tr><td>Adoption and scaling of local technologies in other markets.</td></tr><tr><td>○ Mapping the technology adoption in different domains and entities to regionally developed components.</td></tr><tr><td>○ Level of locally developed technologies in domain-wise end-to-end solutions currently deployed.</td></tr><tr><td>○ Level of contribution to global standards.</td></tr><tr><td>Knowledge products: number of localized standard operating procedures, after-action reviews, and "hybrid AI + traditional practice" playbooks published.</td></tr><tr><td>Cultural diversity brought by regional inputs.</td></tr><tr><td>Adoption of best practices across regions: the number of new regions adopting an AI-based workflow and best practices from other regions.</td></tr><tr><td>Representation of sub-groups in the dataset for fine-tuning, prompt-tuning, or evaluations.</td></tr></table>

# Dimension 5: Level of Integration of AI in Workflows

AI is now widely used in different industries, such as manufacturing, education, agriculture, and international trade. It can be used to detect wildfires and alert the local population. It can also be used to streamline logistics processes in international trade among business partners. But how well is AI integrated into the workflow, and how much benefit does it provide? In this dimension, AI is seen as a tool used to optimize different domain workflows. This dimension can be measured by efficiency, redundancy, and other metrics of AI integration.
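Two of the metrics for this dimension, the level of automation and the time saved, can be computed directly from an annotated workflow. A minimal sketch with made-up step data:

```python
# Illustrative sketch: score AI integration in a workflow as the share
# of automated steps and the total cycle time saved. Step names and
# durations are assumptions for illustration.

steps = [
    {"name": "intake",   "automated": True,  "minutes_before": 30, "minutes_after": 5},
    {"name": "triage",   "automated": True,  "minutes_before": 60, "minutes_after": 10},
    {"name": "sign-off", "automated": False, "minutes_before": 15, "minutes_after": 15},
]

automation_level = sum(s["automated"] for s in steps) / len(steps)
time_saved = sum(s["minutes_before"] - s["minutes_after"] for s in steps)

print(round(automation_level, 2), time_saved)  # 0.67 75
```

Energy saved or redundancy removed could be tracked per step in the same way, which would cover the remaining metrics in the table below.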
This dimension helps produce recommendations for improved integration of AI in workflows. Some gaps in interoperability have been observed. The use cases, represented as workflows, integrate AI at various points, where third-party APIs are called as tools. A standardized interface to host the APIs will be needed. The optimization and design of APIs for tool usage in the workflow by the models will be studied based on the analysis. This dimension is mapped to the Standards factor: integrating different AI techniques into various domain workflows needs standards to guarantee interoperability.

<table><tr><td>Dimension</td><td>Metrics</td></tr><tr><td>Level of integration of AI in workflows</td><td>·Level of automation achieved by integrating AI. ·Benefits achieved by integrating AI (which will reflect the usefulness of AI). ·Time/energy saved. ·