# Maximizing the value of CX modernization with micro frontends

Micro frontends have helped companies improve productivity by 30 to 50 percent—without adding capacity. How? With a clear vision informed by customer experience and robust governance.

# By Erez Eizenman

Leverages deep expertise in business technology to help clients across sectors capture digital and technology transformation opportunities

# By Jake McGuire

Works with clients on e-commerce technology and product development

# By McGregor Faulkner

Serves global public sector and financial service institutions on digital and analytics transformations, with a focus on ensuring organizations reach full adoption and scale

# By Rohit Bhapkar

Advises clients on digital transformations of their core business across a range of sectors with a deep focus in banking

# By Thanou Thirakul

Specializes in IT modernization in the financial and public sectors

April 24, 2025 - This post is part of a research collaboration between McKinsey and RAVL.

As McKinsey has written previously, using micro frontends—a development approach that divides a frontend application into small, self-contained modules that are built and deployed independently—can allow companies to rapidly upgrade aspects of their customer experience (CX), creating value and enhancing performance without adding capacity (that is, additional developers). Simply put, when implemented properly, micro frontends can help companies use existing resources more efficiently and effectively, doing more with what they already have.
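To make the idea concrete, the composition of self-contained, independently deployed modules can be sketched as a host "shell" that mounts each module behind a shared contract. This is an illustrative sketch only, not from the article: all names (`MicroFrontend`, `mountAll`, the two sample modules) are hypothetical, and a real implementation would load each module from its own deployed bundle rather than inline it.

```typescript
// Illustrative sketch: a host shell composes independently deployed
// micro frontends. Each module owns one business domain and exposes a
// mount() function; the shell knows only this contract, not internals.

interface MicroFrontend {
  name: string;                      // business domain the module owns
  mount(container: string): string;  // returns a rendered placeholder
}

// In production each entry would come from its own deployed bundle
// (for example, via dynamic import); here both are inlined for brevity.
const registry: MicroFrontend[] = [
  { name: "search",   mount: (c) => `<search-widget slot="${c}">` },
  { name: "checkout", mount: (c) => `<checkout-flow slot="${c}">` },
];

// The shell mounts every module independently: a failure in one module
// is caught locally and does not prevent the others from rendering.
function mountAll(modules: MicroFrontend[]): Record<string, string> {
  const rendered: Record<string, string> = {};
  for (const m of modules) {
    try {
      rendered[m.name] = m.mount(`#${m.name}-root`);
    } catch {
      rendered[m.name] = "<fallback>"; // contained failure, no cascade
    }
  }
  return rendered;
}

const rendered = mountAll(registry); // each module mounted into its own root
```

Because each entry in the registry is independent, a team can rebuild and redeploy its module without touching the shell or the other modules, which is the mechanism behind the "without adding capacity" gains discussed above.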
In fact, McKinsey research shows that many organizations that have a CX-focused strategy to guide the implementation of micro frontends have achieved impressive outcomes, including the following:

- reducing the time to build and deploy a new release from days to minutes
- speeding up delivery by 30 to 50 percent with the same resources
- a 40 to 75 percent improvement in frontend performance as measured by page load time<sup>1</sup>

The benefits of micro frontends are industry agnostic, as companies in various sectors have demonstrated (Exhibit 1).

Exhibit 1

# Companies in a range of industries have enhanced scalability, sped up development, and added value with micro frontends.

Effects of micro frontends in organizations from three industries<sup>1</sup>

Primary objective · Secondary benefit

<table><tr><td>Industry</td><td>Effects</td><td>Do more with what you have</td><td>Improve time to market</td><td>Reduce maintenance costs</td></tr><tr><td>Financial</td><td>- Full-scale development - Streamlined web application maintenance - Parallel work - Independent feature deployment - Automated production with no downtime</td><td>✔️</td><td>✔️</td><td>✔️</td></tr><tr><td>Retail</td><td>- A flexible e-commerce platform that cut development time in half - A modular web application design that allowed pages to load 75% faster</td><td>✔️</td><td>✔️</td><td>✔️</td></tr><tr><td>Consumer product</td><td>- A scalable e-commerce platform that cut development time by 30% - A modular web application design that allowed pages to load 40% faster</td><td>✔️</td><td>✔️</td><td>✔️</td></tr></table>

Each organization uses its own design for micro frontends.

McKinsey & Company

Yet some companies that deploy micro frontends fail to attain these outcomes. What differentiates those that succeed from those whose efforts fall short? In our experience, it's all about strategy and governance.
# Setting the stage for success with a road map and guiding principles for micro frontends

Organizations that start with a clear road map reflecting customer feedback and priority CX upgrades (a CX journey strategy) have the advantage of knowing what they want to build. Without that clarity, even the best approach to building will fall short of maximizing customer impact. But when companies create a CX journey strategy and adhere to it, they define their financial goals and focus on CX upgrades that will help them achieve those goals.

According to McKinsey analysis, companies that take this strategic approach to modernizing their CX have increased sales conversions by about 10 to 15 percent and boosted employee engagement by 20 to 30 percent. And a micro frontend approach to development can enable a time to market that is approximately 20 to 50 percent faster than a conventional frontend approach.<sup>2</sup>

The other key ingredient in capturing the full value of micro frontends? A deliberate focus on governance, best practices, and robust support systems. Transitioning from conventional to micro frontend-enabled ways of working requires organizations to adopt a development approach that mirrors one with a track record of success in backend microservices. Organizations that follow this approach address three operational dimensions—efficient and effective use of existing resources, faster time to market, and lower maintenance costs (Exhibit 2).

Exhibit 2

# To ensure successful adoption of micro frontends, companies should establish governance models that address three key operational dimensions.
<table><tr><td>Key operational dimensions</td><td>Guiding principles that enable optimal outcomes in each dimension</td></tr><tr><td>Do more with what you have</td><td>- Establish a governance framework for collaboration so teams can work independently as well as collaboratively</td></tr><tr><td>Improve time to market</td><td>- To reduce design churn, document shared dependencies at build time rather than at run time - To facilitate faster builds, create guidelines on shared library loading with module federation and upgrade library paths</td></tr><tr><td>Reduce maintenance costs</td><td>- Centralize all documents and ensure that all team members review them early and frequently - Author and publish shared interfaces, workflows, and communication channels</td></tr></table>

Getting the most out of the transition to micro frontends hinges on thoughtful planning and adherence to the model approach developed prior to launching this new way of working.

# Governance and leading practices

Effective governance ensures consistency across micro frontends. Organizations must clearly define boundaries, establish shared design systems, and implement automated continuous integration and continuous deployment pipelines. Guardrails and technical alignment are essential to maintain cohesion in a distributed system (Exhibit 3).

Exhibit 3

Adopting a centralized governance process ensures consistency and cross-functional collaboration between micro frontends.

McKinsey & Company

# Standardized communication and dependency management

Teams must manage shared dependencies and standardize how micro frontends interact to ensure seamless performance and avoid conflicts.

# Performance optimization

Maintaining optimal performance and reliability depends on continuous monitoring and analytics coupled with automated testing.

# Cross-functional collaboration

Successful adoption relies on breaking down silos and fostering collaboration among developers, designers, and product teams.
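The build-time documentation of shared dependencies described above is typically expressed directly in each module's build configuration. The fragment below is a minimal sketch using webpack's Module Federation plugin; the module name, exposed path, and version ranges are illustrative assumptions, not a prescribed setup.

```typescript
// webpack.config.ts for one micro frontend (illustrative sketch).
// Declaring `shared` here documents shared dependencies at build time,
// so version mismatches surface during the build instead of causing
// run-time conflicts between modules.
import { container } from "webpack";

export default {
  plugins: [
    new container.ModuleFederationPlugin({
      name: "checkout",           // this module's unique name (hypothetical)
      filename: "remoteEntry.js", // the entry file the host shell loads
      exposes: {
        // the module surface offered to hosts
        "./CheckoutFlow": "./src/CheckoutFlow",
      },
      shared: {
        // singleton: load one copy at run time; requiredVersion: reject
        // an incompatible copy provided by the host or another module
        react: { singleton: true, requiredVersion: "^18.0.0" },
        "react-dom": { singleton: true, requiredVersion: "^18.0.0" },
      },
    }),
  ],
};
```

The `shared` block is the governance artifact: reviewing it in a pull request is how a central team keeps the documented dependency set consistent across independently deployed modules.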
This holistic approach ensures that adoption aligns with organizational CX goals.

# Micro frontends, macro advantages: Postadoption gains

Micro frontends empower frontend teams to work in parallel, driving efficiency and faster innovation. Unlike standard approaches to development, micro frontends break large frontend applications into smaller, independently developed and deployed pieces that are each aligned with a specific domain. Teams can focus on their areas of business domain expertise and leverage the tools and frameworks that work best for them, provided they adhere to overarching architectural guardrails (Exhibit 4).

Exhibit 4

# Micro frontends address key efficiency challenges that organizations face throughout large web application development cycles.

<table><tr><td></td><td>Standard development approach</td><td>Micro frontend approach</td></tr><tr><td>Design</td><td>Complex and tightly coupled design increases the effort needed to integrate new features without affecting existing features</td><td>Simple modular design increases efficiency by making it easier to add new features</td></tr><tr><td>Code</td><td>Large teams working on the same code base can make merges difficult, add time spent on extensive ad hoc coordination, reduce coding time, and diminish return on talent investment</td><td>Independent module development with small teams leaves more time to code, requires less time in coordination meetings, and enhances return on talent investment</td></tr><tr><td>Test</td><td>Frequent retesting of each application component results in extensive data setup and integration requirements</td><td>Selective testing of modules affected by a feature change instead of testing an entire application eliminates time spent on data setup and integration</td></tr><tr><td>Deploy and operate</td><td>Susceptibility to cascading failures increases the effort required to identify root causes for failures</td><td>Failures in newly deployed features are contained to a single module, avoiding propagation to other modules and decreasing the effort needed to identify root causes</td></tr></table>

Unlocking the full potential of micro frontends requires thoughtful governance, strategic alignment, and a robust process map. But by positioning themselves for success in this endeavor, organizations can reap tremendous rewards:

- Accelerated development. Micro frontends enable teams to work on separate parts of an application simultaneously, eliminating bottlenecks and accelerating time to market.
- Scalability and flexibility. By modularizing the frontend, organizations can scale with substantially greater agility. New features or modules can be added without disrupting existing functionality, and testing is streamlined considerably.
- Enhanced developer productivity. Cognitive load is reduced as developers focus on specific areas within clear boundaries. Decentralized ownership minimizes coordination challenges and fosters an environment of autonomy and creativity.

Done right, implementing micro frontends can provide the missing piece for organizations seeking to modernize their CX. This structured yet flexible approach to building frontend systems can open new possibilities for organizations to accelerate innovation while responding to customers' evolving needs and mounting expectations.

Erez Eizenman and Rohit Bhapkar are senior partners in McKinsey's Toronto office, where McGregor Faulkner is a partner and Thanou Thirakul is a senior expert; Jake McGuire is a partner in the Denver office. The authors wish to thank RAVL CTO Dominic Wallace for his contributions to this blog post.

<sup>1</sup> "Embracing scalability with micro frontend architecture," ThinkSys, accessed April 10, 2025.

<sup>2</sup> "Embracing scalability with micro frontend architecture," ThinkSys, accessed April 10, 2025.
# Future-proofing the IT function amid global trends and disruptions

At McKinsey's CIO Roundtable at the Hamburg IT Strategy Days, German IT leaders discussed strategies to fortify their IT operations while contending with an evolving global landscape.

# By Anna Wiesinger

Advises public- and private-sector clients, with a focus on digitalization processes and transformation management for education, employee, job market, and manager development

# By Gérard Richter

European leader of McKinsey Digital Hubs and Build by McKinsey and co-leader of McKinsey Digital Europe

# By Thomas Elsner

Works with companies to shape their core technology agendas and contribute to the success of their large-scale investments into technology and digital

June 11, 2025 - IT leaders must face a hard truth: Global disruptions such as tariffs and trade controls, policy shifts, and economic uncertainty do not stop at companies and their technology functions.

In recent years, leaders across all business functions have focused their efforts on scaling the adoption of cloud services and, more recently, putting gen AI to use. Operating models were set up globally, integrated outsourcing was employed at scale, and efficiency was the paradigm. But as the global landscape continues to change, companies and government entities will need to reassess the setup of their value chain as they continue to develop and deploy their technology landscape. IT leaders will need to consider priorities such as ensuring sovereignty over infrastructure and data and reassessing IT delivery models, supplier relationships, and the location of operations or data centers.

At McKinsey's CIO Roundtable at the Hamburg IT Strategy Days, German CIOs and CTOs discussed these priorities in relation to the European Union and ideated strategies to bolster IT operations and outperform competitors in the face of global disruptions.
# Four priorities that equip IT leaders to respond swiftly to trends and disruptions

The results of the CIO Roundtable reflected the prevailing mood among leading German CIOs and CTOs regarding the strategic importance of four priorities affecting gen AI development, supplier management, and IT delivery models.

# Sovereignty for data and technology gains strategic importance with gen AI

The next stage of gen AI development will be characterized by increased AI sovereignty as companies and authorities increasingly develop their AI application landscape. This trend is driven by the need for bespoke solutions and by security and data protection concerns. AI sovereignty includes control over technology development, models, and applications, as well as control over algorithms and flexible, adaptable architectures that allow companies to respond quickly to new requirements. Another key factor is the traceability of AI systems: Companies need to ensure that data origins, model decisions, and processes are always transparent and auditable by always having access to and control over their data.

In the CIO Roundtable discussion, participants agreed with this priority, citing their own experience. At the same time, however, executives also stated that data and technology sovereignty were mostly unrealized in their organizations. For data risk management, participants proposed a classification approach based on the traffic light principle (red, yellow, and green), where "red" data must not be used in gen AI applications. Gen AI projects are often managed by a central team that maintains an overview of data usage.

Participants described the exchange of data with external partners—for example, among pharmaceutical manufacturers, insurers, doctors, hospitals, and pharmacies in the healthcare ecosystem—as challenging. While there are numerous ideas for gen AI use cases, some companies focus on projects with a quick payback (that is, less than a year).
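The traffic-light classification the participants proposed lends itself to a simple programmatic guardrail. The sketch below is an illustrative assumption of how such a rule might look in code, not a described implementation: the type and function names are hypothetical, and the rule that "yellow" data needs central-team review is an assumption added for illustration (the roundtable specified only that "red" data must never be used).

```typescript
// Sketch of a traffic-light data guardrail for gen AI applications.
// "red" is never permitted; "yellow" is gated on central-team review
// (an illustrative assumption); "green" flows through.

type Classification = "red" | "yellow" | "green";

interface DataAsset {
  id: string;
  classification: Classification;
  reviewedByCentralTeam: boolean; // the central team tracks data usage
}

// Returns true only if the asset may be used in a gen AI application.
function allowedInGenAI(asset: DataAsset): boolean {
  if (asset.classification === "red") return false;    // never permitted
  if (asset.classification === "yellow") {
    return asset.reviewedByCentralTeam;                 // gated on review
  }
  return true;                                          // green: permitted
}

// Example: filter a catalog down to assets usable in a gen AI project.
const assets: DataAsset[] = [
  { id: "customer-pii", classification: "red", reviewedByCentralTeam: true },
  { id: "product-faq", classification: "green", reviewedByCentralTeam: false },
];
const usable = assets.filter(allowedInGenAI); // only "product-faq" passes
```

Encoding the rule centrally, rather than leaving it to each project team, matches the roundtable's observation that gen AI projects are often managed by a central team with an overview of data usage.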
However, the high costs of individual use cases pose a hurdle for smaller companies. Moreover, many formerly self-developed solutions are now available on the market for a fraction of the original cost. Participants emphasized that sovereignty does not necessarily mean developing all technologies internally. Smaller companies especially are severely limited in their options, and sovereignty encompasses more than just the location of the data center—the source of the hardware and software also plays a crucial role.

# Changes in the provider landscape require strategic supplier management

Market concentration in areas such as cloud infrastructure and software platforms leads to a few large providers dominating the market, which reduces the flexibility of IT organizations. Internal measures such as adjusting usage or switching to another provider often cannot compensate for price adjustments, forcing IT departments to closely monitor budgets and be prepared for price increases.

These challenges may require a reassessment of supplier relationships. IT organizations may need to reevaluate their relationships with large providers and could consider alternative partnerships, such as working with classical IT service providers or smaller providers, to increase the resilience and flexibility of the IT infrastructure.

This priority also found high approval among the roundtable participants, with some stating that their organizations had largely implemented it. In the discussion, participants had a strong interest in data centers that are "closer to home" in the European Union, given the risks of global business volatility and regulatory divergence. Participants also discussed new considerations, such as the increasing use of gen AI and dominant players, when deciding between developing in-house service centers ("build") or purchasing external solutions ("buy").
Some smaller providers and companies have started to rebuild their own on-premise capacities—an aspect that tech talent finds attractive because it offers the opportunity to work on real tech infrastructure instead of pure service provider management. Moreover, on-premise solutions can enable better planning and prevent the uncontrolled growth of software-as-a-service (SaaS) costs.

However, limited budgets and a lack of alternatives to SaaS make implementing on-premise solutions difficult. Replacing software licenses before their end of life is financially challenging, and negotiating or optimizing the complexity of software license models is increasingly seen as critical. Many companies therefore try to integrate multiple providers and contractually secure hosting in Europe. Overall, participants reported that strategic supplier management has shifted from a pure procurement concern to a board-level topic.

# Geopolitical tensions require rethinking IT delivery models

Most companies have been working to capture economies of scale by globally consolidating their IT operations or data centers. But with international relations in flux, companies are being forced to reassess their IT delivery models whether they've consolidated or not. In either case, companies need to design their IT operations so that they can quickly respond to unforeseen events without jeopardizing business operations.

Roundtable participants considered this a relevant priority, but due to its complex nature, many companies have struggled to implement it. Companies that invest early in resilient structures may secure a strategic advantage. Approaches such as nearshoring, in which IT services are relocated closer to—but not inside—the home market, may help minimize geopolitical risks. Companies can use these regional IT hubs to reduce their dependence on global supply chains and increase their flexibility. That said, the labor cost advantages of the past have slimmed significantly.
Another approach to supply chain strategy has been emerging. Local shoring—or "local for local"—maps out a company's entire IT value chain to stabilize it within a country's legal framework and allows companies to operate closer to local markets and to better understand their specific requirements. Local teams can then adapt swiftly to market trends and tailor their solutions to regional conditions, which can strengthen the company's competitive advantage.

Some roundtable participants argued that this approach needed to be differentiated at the business process level to ensure long-term success. For example, a local-for-local approach would be difficult for banking organizations that need to connect their operations to international payment systems. Therefore, it is important for global and local teams to closely collaborate to identify where their services overlap and to exchange information about the local versus global market conditions and customer demand.

Some roundtable participants also mentioned that recruiting qualified employees for different regional locations was an obstacle to implementing this priority. The necessary tech talent needs to not only have relevant tech expertise but also understand local conditions to successfully meet the specific challenges of the respective markets. Especially in regions with a shortage of skilled workers, companies need to find innovative ways to attract and retain talent in the long term. Companies can ensure that this talent is available through targeted training programs, collaborations with educational institutions, or attractive working conditions.

Simultaneously, companies need to balance compliance with various data protection requirements. While the European Union, for example, requires that data be processed exclusively within the Union, other countries such as China require data to be stored locally.
These diverging requirements complicate global IT architectures, increasing the importance of flexible architectures that can quickly respond to market and legal changes.

# New cybersecurity threats require innovative strategies to protect data

The rapid development of AI technologies has created new security risks, including deepfakes, automated attacks, and data misuse, increasing the importance of efficient threat detection and defense. Traditional protection mechanisms such as firewalls or endpoint protection are less effective against sophisticated cyberthreats. IT teams therefore need innovative approaches and tools that quickly adapt their security infrastructures to the dynamic threat landscape. Implementing real-time verification is crucial to quickly identifying and neutralizing fake content and harmful data. AI-powered defense tools can be used to identify patterns and anomalies that indicate potential attacks.

Nearly all the roundtable participants considered the topic relevant to their daily work, with many having partially or fully implemented new protection technology. In the discussion, the danger of deepfakes was repeatedly highlighted. In the hands of malicious actors, deepfakes can have devastating consequences—including, for instance, the erosion of customers' trust in targeted companies or the theft of intellectual property and identities. Participants also questioned how they could protect against threats when trained employees cannot reliably recognize fake content. At the same time, participants discussed which AI systems could execute security processes more efficiently versus systems that could complement processes with a human factor.

Worldwide uncertainty guarantees that CIOs and CTOs will need to prioritize resilience, agility, and strategic adaptability in their IT and technology initiatives.
IT leaders who focus on building flexible, secure, and scalable IT operations could enable their organizations to remain innovative and competitive amid unpredictable economic and market conditions.

André Jerenz is a partner in McKinsey's Hamburg office, Anna Wiesinger is a partner in the Düsseldorf office, Gérard Richter is a senior partner in the Frankfurt office, and Thomas Elsner is a partner in the Munich office. The authors wish to thank Björn Michalik and Philipp Hühne for their contributions to this blog post.

# Tech talent gap: Addressing an ongoing challenge

The tech talent gap in Europe shows no signs of closing, so companies will need to think holistically to fulfill their talent needs.

# By Anna Wiesinger

Advises public- and private-sector clients, with a focus on digitalization processes and transformation management for education, employee, job market, and manager development

# By Henning Soller

Serves companies across Europe and the Middle East on large-scale IT and data transformations with a focus on scaling innovation

# By Nadja Stark

Focuses on advanced analytics and digital transformations that grow clients' capabilities and talent, and building platforms to support both

# By Thao Durschlag

Serves clients from the financial industry in the context of complex technology transformations impacting operating model and people requirements

March 17, 2025 - Only 16 percent of executives feel comfortable with the amount of technology talent they have available to drive their digital transformation, according to a 2023 internal McKinsey survey of 40 executives across sectors. The same survey found that 60 percent of companies cited the scarcity of tech talent and skills as a key inhibitor of that transformation. The talent gap was also a central theme at our latest Tech Talent Roundtable in Frankfurt.
This event, the third in our Tech Talent Roundtable series, brought together senior HR and IT leaders in Europe from across sectors, including technology and media, energy and materials, finance, and the public sector.

While layoffs by major technology companies may have temporarily increased the pool of available tech talent, there is no evidence of a systematic narrowing of the gap between supply and demand. Results from the survey referenced above indicate that, based on current trends, demand for tech talent is likely to be two to four times greater than supply over the coming years. Within the European Union alone, the tech talent gap could be 1.4 million to 3.9 million people by 2027.

# Could gen AI be the solution?

Over the course of the roundtable, gen AI was repeatedly mentioned as a possible solution for the tech talent shortage due to its potential to increase workforce efficiency. Gen AI can improve product manager productivity by 40 percent (where the process of requirements specification is sufficiently automated), for example, and can halve the time it takes to document and code. So far, however, this potential has been realized only by leading companies that have invested in a significant degree of automation; most companies have yet to see efficiency improvements at this scale.

Despite the efficiency-improving potential of gen AI, there is currently no evidence that the technology is leading to a reduction in demand for tech talent. On the contrary: We see that the demand for technology talent has increased as companies invest in preparing the technological backbone needed for the effective deployment of AI tools. Roundtable participants noted that resources freed up through the increased efficiency of technology teams have mostly been redeployed to delivery teams, allowing these teams to expand their outputs. In addition, the spread of gen AI has generated further demand for a subset of skills and talents.
The full potential of gen AI can be realized only if it is both embedded appropriately and used effectively by a significant proportion of the workforce. These requirements did not apply to previous technologies, which means that a significant number of employees—many of whom may have limited technical knowledge—may need to be reskilled or upskilled. Many of the tools and outputs created by gen AI also need to be managed or interpreted by humans, which imposes additional talent needs. Currently, therefore, gen AI appears to add to rather than relieve the problem of talent shortages.

# Companies will need to take a holistic approach to meeting their talent needs

There will be no magic-bullet solution for the current talent shortage. Instead, companies will need to address the issue holistically. In doing so, they could draw on four separate levers (exhibit).

Exhibit

# Four workforce levers can help companies close tech talent gaps.

Workforce levers to address tech talent gaps

McKinsey & Company

# Traditional workforce levers are unlikely to provide the whole solution

In the past, many companies could meet their talent needs using a combination of two of the four levers: buying and outsourcing. Both levers will still be helpful, but they are unlikely to be enough in the current market.

- Outsourcing can increase headcount but will not grant access to the full range of required skills. While outsourcing the IT workforce is a common practice, it typically leads to a high turnover rate as newly upskilled workers find more lucrative or permanent positions elsewhere. In addition, relatively low per diems have decreased the pool of talent available through outsourcing; highly skilled talent—and talent that possesses unique or high-value skills—may not always be available. Therefore, while outsourcing can still be the right solution in some situations, such as when additional resources are required on a time-limited basis, it will not always be the case.
- Buying talent is challenging given the current tight labor market. High demand has considerably increased the cost of the limited supply of tech talent, which means companies are unlikely to be able to buy their full range of talent needs. They should, however, continuously scan the market for new hires. They will most likely be successful if they focus their search on a well-defined set of key talent needs. Companies will also need to be innovative in their search methods: Roundtable participants indicated that job ads are falling out of favor for tech jobs and that talent acquisition now often takes place through social media platforms and relevant technical communities. One leading electronics company, for example, has posted a series of recruitment ads for junior tech talent on Instagram.

# Reskilling, upskilling, and partnerships can help close the talent gap

As the gap between the supply and demand for talent widens, companies are increasingly focused on building the right talent or partnering to access it.

- Building skills within the existing workforce should be a key element of any talent strategy. Most companies will be unable to close their talent gap without investing in reskilling or upskilling their existing workforce. For example, a recent survey of executives found that most plan to increase their pool of gen AI skills through training existing employees, rather than through hiring or contracting, because these skills will need to be embedded across most roles. Measuring the current level of workforce skills has historically been challenging, but new tools have made it much easier to both create this baseline and assess current ways of working. In addition, the results of these skills inventories often reveal a number of existing employees who possess in-demand skills they are not currently using. Efficient deployment of the existing workforce based on their skills profiles can significantly decrease the degree of reskilling required.
- Partnering can offer greater access to key skills than traditional outsourcing. Large system integrators generally have access to a large workforce that can be part (though typically not all) of the solution for the talent problem. Partnering with such organizations can help alleviate the issues of turnover and quality of talent that can make outsourcing problematic. Talent is less likely to leave, for example, if the partnership agreement that brings them into a company specifies that they will receive training and coaching or will be eligible for promotion. The key challenge is typically to craft an agreement that reflects a true partnership, because doing so requires the two sides to be open to a different approach to both the content and terms of the contract.

# Getting started

While the ultimate solution for filling talent gaps will likely involve all four levers, the immediate issue for many companies is how to get started. The first step will be to develop a clear view on each of the following:

- the skills and level of capacity required to drive the desired digital transformation
- the skill profile of the current workforce
- the talent pool available from current vendors, as well as any challenges related to existing relationships with those vendors

Taken together, these elements will give companies a rounded view of where they are and where they need to be.

Demand for technology talent will exceed supply for the foreseeable future, and the talent gap is therefore likely to continue to be a major topic during future iterations of our Tech Talent Roundtable. However, companies that work to develop a clear baseline of their current situation and strategically employ a combination of all four workforce levers will be able to secure the talent they need.
Anna Wiesinger is a partner in McKinsey's Düsseldorf office, Henning Soller is a partner in the Frankfurt office, Nadja Stark is an engagement manager in the Berlin office, and Thao Durschlag is an associate partner in the Munich office.

# Next-gen banking success starts with the right data architecture

For a bank, capturing the most value from its data transformation means choosing the optimal data architecture for its distinct array of analytics and business needs.

# By Aziz Shaikh

Serves as a leader of our North American data architecture and engineering guild, helping chief data officers (CDOs) across sectors generate business value at scale from data, using next-generation data technologies and governance

# By Henning Soller

Serves companies across Europe and the Middle East on large-scale IT and data transformations with a focus on scaling innovation

# By Aysen Cerik

# By Fares Darwazeh

# By Margarita Młodziejewska

February 28, 2025 - On average, a bank spends about 6 to 12 percent of its annual technology budget on data.<sup>1</sup> Banks aim such investments at harnessing a portion of the $2.6 trillion to $4.4 trillion in potential global industrial value from deploying gen AI to gain insights and realize efficiencies in these banks' complex systems worldwide. However, data implementation plans often lack well-defined business cases and thus fail to deliver their full potential value. But according to McKinsey analysis, with the right data architecture archetype, banks could cut their implementation time in half and lower costs by 20 percent.

Realizing maximum value, efficiency, and savings is even more vital when new systems must be scaled across multiple countries and be compliant with a range of regulations. Without an optimal architecture, this process can be a major cost driver.
At the same time, banks need to remain a step ahead of new threats to data privacy and cybersecurity as well as changing regulations, including the General Data Protection Regulation (GDPR), the Basel Committee on Banking Supervision (BCBS) 239, and the Digital Operational Resilience Act (DORA). Each data architecture archetype is better or worse suited to a given bank's unique array of business and analytics needs. Assessing those needs and determining the appropriate architecture involves complex considerations. A detailed decision-making road map can provide the guidance and insights essential to informing the process and building a solid foundation for success. # Realizing digital transformation's full value: Common barriers and enablers Most banks have made significant investments in their data transformation journeys over the past five to ten years. While some completed their transformations successfully, most banks either could not finalize them or could not realize the expected impact. In McKinsey's 2022 Global Survey on digital strategy and investments, for instance, respondents reported capturing less than one-third of the expected value of their digital transformations and initiatives, and only 16 percent of surveyed executives said their transformations successfully improved performance and led to sustained long-term gains. In our experience, incomplete digital transformations in banking typically result in one of three scenarios: - Legacy IT stack and data architecture. Data architecture has not been transformed, and a spaghetti architecture remains in place, along with legacy platforms and tools. - Fragmented data warehouses and data lakes. Data transformation was not finalized, and banks must manage old and new platforms simultaneously. - Core data transformation without a new stack or proper tool use. Data transformation was finalized, but the impact was then curtailed by either using new platforms and tools inefficiently or not using them at all. 
Each of these scenarios limits the potential gains from the transformation and creates inefficiencies such as delayed decision-making, along with security and compliance risks and elevated costs from maintaining multiple complex environments. # Five keys to unlocking digital transformation value Among banks that do realize the targeted value from their transformations, five common best practices stand out as significant enablers. Adopting these practices can lead to 20 percent cost reductions for platform builds, 30 percent faster time to market, and 30 percent lower change costs, according to McKinsey analysis. The five overarching best practices are the following: - Building true data platforms. Using new architectural approaches can enable data and process synergies that span countries and business lines. - Opting for open-source and cloud-provisioned platforms. Opting for these platforms rather than vendor platforms can improve cost efficiency by reducing licensing fees and infrastructure costs, enhance the scalability of resources, and avoid vendor lock-in. - Enabling optimal automation. Automating as many processes as possible can enhance quality checking and speed deployment, allowing for faster delivery. - Enhancing existing platforms. Strategic upgrades and adaptations of existing platforms can enable new capabilities, including gen AI applications, without the time or expense involved in building new platforms from scratch. - Enabling lab environments. Using such environments can allow data scientists to innovate and experiment with data while leaving a bank's data intact. # Designing the right data architecture: Principal considerations and strategies There are five data architecture archetypes: data warehouse, data lake, data lakehouse, data mesh, and data fabric. Using data warehouse and data lake archetypes alone is no longer a common practice. 
Other archetypes are typically used alone or in combination—for example, a data mesh and a data fabric operating on an underlying database solution—and are, ideally, selected to align with a bank's vision and strategies for its data and business (Exhibit 1). Exhibit 1 # Companies can evaluate their strategic aspirations against different data architecture archetype features to determine the best fit. Data architecture archetypes # 1 Data warehouse A centrally managed system used for collection, reporting, and data analysis Considered a core component of business intelligence # 2 Data lake Platform architected on infinitely scalable object storage to centrally source data in its native, raw format and then transform, organize, govern, and publish it # 3 Data lakehouse Fusion of the data lake and data warehouse architectures in a single integrated platform # 4 Data mesh Distributed-domain-driven architecture based on self-serve infrastructure, federated governance, and business domains Reorganizes data work at the unit of the data product; built and owned by business functions Applies to operational and analytical data # 5 Data fabric Metadata layer spanning multiple data environments with unified data management and security capabilities Data sets from various data environments (eg, operational sources, data warehouses, data lakes) are brought in as metadata and connected McKinsey & Company When designing a data architecture, the overarching considerations are core system complexity, cost, flexibility, and risk. More specifically, considerations and evaluation criteria include the following: - Integration complexity. Assess how complicated and difficult it will be to integrate all components and layers—such as an enterprise data warehouse and a data lake—into a single data architecture. - Cost and resource allocation. Consider financial implications along with any licensing, infrastructure, maintenance, and resources required. - Scalability and flexibility. 
Determine the architecture's ability to accommodate increasing data volumes, evolving needs, and new technology integrations effectively. - Business continuity and risk. Ascertain the potential impact on business continuity and risks, and establish mitigation strategies. - Business enablement. Estimate the impact on the business and future possibilities, factoring in critical use cases and optimal steering. - Regulatory remediation. Determine the best data strategy to comply with regulatory requirements, and deploy the appropriate remediation. - Overall strategy alignment. Establish requirements based on the bank's overall business strategy, including, for example, M&A readiness and group consolidation. For multinational banks, the process of decision-making is complicated by the need to incorporate the guidelines of central and local entities. Several multinational banks have initiated programs to establish groupwide platforms. They have set up overarching platforms that span multiple entities, including countries and business lines, along with platforms that allow for easy integration of further entities into a standardized data model used throughout the group. Additionally, they have established standardized tools and methods on top of the platform to enable overarching steering for the group. # Matching data archetypes with aspirations: Selection criteria and decision road maps Factoring in a bank's existing capabilities and planned use cases for its data provides critical insights to inform decisions on archetypes. Each archetype is either well-suited or ill-suited to a given set of conditions, circumstances, needs, and goals. Navigating these various and often overlapping scenarios can seem daunting. 
However, the process can be made more straightforward by breaking down a bank's existing circumstances and desired outcomes into ten decision-criteria elements and answering "yes" or "no" to each corresponding question, such as the following: - Geographic footprint. Does the architecture need to support data access and management globally, across regions, and with minimal latency? - Scalability and flexibility. Must the architecture enable seamless scaling of data volumes and adaptation to evolving needs? - Data governance. Does the data architecture need to be capable of supporting comprehensive data governance controls—including data stewardship, policy enforcement, and audit trails—across diverse data environments? - Data security and compliance. Does the data solution need to provide robust support for and enforcement of security and compliance measures such as encryption, access management, and regulatory adherence across multiple jurisdictions and data types? - Data variety. Does the solution need to support the management of a wide range of data types (structured, unstructured, and semistructured)? - Business domain specificity. Do business processes require specialized data solutions that cater to specific industry or domain needs, with tailored data models and analytics? - Data interoperability. Does data from multiple sources and formats need to be integrated and processed efficiently in real time? - Stream processing. Does the data architecture need to be capable of handling high-throughput, low-latency stream processing efficiently for real-time analytics and decision-making? - Metadata management. Is an advanced metadata management system that supports detailed data lineage, impact analysis, and comprehensive data cataloging required? - Cost-effectiveness. Does the solution offer the best cost-effectiveness when balancing storage capabilities and future scalability requirements? 
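The ten yes/no questions above amount to a requirements profile for a given bank. A minimal sketch in Python (the criterion names and example answers below are purely illustrative, not drawn from any specific bank):

```python
# Ten decision criteria from the road map, each answered "yes" (True) or
# "no" (False) for one bank. The answers shown here are illustrative only.
requirements = {
    "geographic_footprint": True,
    "scalability_and_flexibility": True,
    "data_governance": True,
    "data_security_and_compliance": True,
    "data_variety": True,
    "business_domain_specificity": False,
    "data_interoperability": True,
    "stream_processing": False,
    "metadata_management": True,
    "cost_effectiveness": True,
}

# The criteria answered "yes" become the required capabilities against which
# each archetype is then evaluated in the next step of the process.
required_capabilities = [name for name, needed in requirements.items() if needed]
print(required_capabilities)
```

The resulting list of required capabilities feeds directly into the archetype comparison that follows.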
Once the required criteria are identified via "yes" answers to the questions, each archetype can be evaluated according to its ability to meet specific requirements (Exhibit 2). Exhibit 2 # Once required capabilities are determined, they can be used to evaluate and compare each data architecture archetype's ability to meet them. Low ability to meet need High ability to meet need <table><tr><td>Required capabilities (nonexhaustive)</td><td>1. Data warehouse</td><td>2. Data lake</td><td>3. Data lakehouse</td><td>4. Data mesh</td><td>5. Data fabric</td></tr><tr><td>Geographic footprint</td><td>Setup complexity and cost typically mean limited user base support</td><td>Supports a broad range of users</td><td>Provides broad access but can face integration challenges</td><td>Supports a broad range of users; effective cross-domain collaboration is required</td><td>Global coordination is complex</td></tr><tr><td>Scalability and flexibility</td><td>Inherent scalability constraints</td><td>Highly scalable and flexible; handles vast amounts of structured and unstructured data</td><td>Combines the scalability of data lakes with the structured query capabilities of data warehouses</td><td>Scalable and flexible via decentralized management but requires effective coordination</td><td>Inherent scalability constraints</td></tr><tr><td>Data governance</td><td>Strong governance capabilities with well-defined schemas and controls</td><td>Unstructured nature and lack of inherent controls create challenges</td><td>Structured data management features support enhanced governance</td><td>Decentralized; can be complex but allows for domain-specific policies</td><td>Comprehensive tools for stewardship, policy enforcement, and quality management</td></tr><tr><td>Data security and compliance</td><td>High level of security and compliance features, often with robust auditing and control mechanisms</td><td>Potentially complex management due to diverse data types and lack of centralized 
controls</td><td>Provides functional security but often requires additional tools and configuration to meet advanced security and compliance requirements</td><td>Security and compliance are managed within domains, which can be effective if implemented well</td><td>Robust mechanisms for encryption, access controls, and audit trails</td></tr><tr><td>Data variety</td><td>Supports structured data only</td><td>Supports structured and unstructured data types</td><td>Supports structured and unstructured data types</td><td>Supports diverse data types but requires domain-specific management and integration</td><td>Supports structured and unstructured data types</td></tr><tr><td>Business domain specificity</td><td>Can be tailored to specific domains but often requires additional configuration</td><td>Not inherently domain-specific; often requires additional layers for domain-specific processing</td><td>Can be tailored for specific business domains with additional tooling and integrations</td><td>Enables decentralized data ownership and domain-specific data products</td><td>Customizable but requires additional configuration for domain-specific requirements</td></tr><tr><td>Data interoperability</td><td>Good interoperability, though integration with some nonrelational data sources may be limited</td><td>Integration can be complex</td><td>Supports integration and interaction of various data sources and formats; seamless interoperability may require additional tooling and configuration</td><td>Good interoperability but dependent on effective cross-domain standards and practices</td><td>Facilitates seamless data exchange and integration</td></tr><tr><td>Stream processing</td><td>Typically weaker in real-time processing</td><td>Typically weaker in real-time processing</td><td>Supports real-time data processing</td><td>Capable of handling streaming data if implemented within domains</td><td>Limited in handling extremely high-velocity data streams or complex event 
processing</td></tr><tr><td>Metadata management</td><td>Robust metadata management with well-defined schemas and data lineage capabilities</td><td>Often limited metadata management, making data discovery and lineage challenging</td><td>Improved metadata management due to more structured data handling</td><td>Can be complex due to a decentralized nature</td><td>Robust capabilities including cataloging, lineage, and governance</td></tr><tr><td>Cost-effectiveness</td><td>Can be costly because of licensing and infrastructure requirements</td><td>Cost-effective for storing large volumes, but cost can increase with data management, integration, and querying complexities</td><td>Balanced cost structure, leveraging data lake storage and data warehouse processing efficiencies</td><td>Cost-effective via decentralized management but may require investment in coordination and integration tools</td><td>Up-front setup complexity</td></tr><tr><td>McKinsey & Company</td><td></td><td></td><td></td><td></td><td></td></tr></table> After the archetype is selected and a robust data architecture is designed, fully implemented, and scaled, rigorous follow-up to assess the need for improvements must be done regularly to ensure the setup remains sustainable and continues to deliver value well into the future. # Getting started: Essentials for leaders Leaders seeking to ensure their banks' digital transformations are optimized to yield maximum value can begin by conducting high-level diagnostics of business needs and data maturity assessments to understand their requirements for data architecture and data governance as well as identify any gaps that can be shored up by adopting best practices. An architecture blueprint can then be designed alongside a road map for model design and required governance. Finally, implementation can be rolled out use case by use case in an iterative manner. 
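The evaluation that Exhibit 2 describes can be sketched as a simple scoring exercise: assign each archetype a rough 1 (low) to 5 (high) ability score per capability, then rank archetypes by total fit over the required capabilities. The numeric scores below are illustrative readings of the exhibit's qualitative ratings, not figures from the article:

```python
# Illustrative 1 (low) to 5 (high) ability scores per archetype, loosely
# following the qualitative ratings in Exhibit 2; the numbers are assumptions.
ARCHETYPE_SCORES = {
    "data_warehouse": {"scalability": 2, "governance": 5, "data_variety": 1, "stream_processing": 2},
    "data_lake":      {"scalability": 5, "governance": 2, "data_variety": 5, "stream_processing": 2},
    "data_lakehouse": {"scalability": 4, "governance": 4, "data_variety": 5, "stream_processing": 4},
    "data_mesh":      {"scalability": 4, "governance": 3, "data_variety": 4, "stream_processing": 3},
    "data_fabric":    {"scalability": 2, "governance": 5, "data_variety": 5, "stream_processing": 2},
}

def rank_archetypes(required, scores=ARCHETYPE_SCORES):
    """Rank archetypes by total ability score over the required capabilities."""
    totals = {
        name: sum(caps.get(cap, 0) for cap in required)
        for name, caps in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Example: a bank requiring high scalability, governance, and data variety.
ranking = rank_archetypes(["scalability", "governance", "data_variety"])
print(ranking[0][0])  # highest-scoring archetype under these assumptions
```

In practice such scores would come from the detailed assessment in Exhibit 2 and from the bank's own diagnostics; the numeric ranking is only a starting point for the qualitative trade-offs discussed above, not a substitute for them.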
Aziz Shaikh is a partner in McKinsey's New York office, Henning Soller is a partner in the Frankfurt office, Aysen Cerik is an associate partner in the Istanbul office, Fares Darwazeh is a consultant in the Riyadh office, and Margarita Młodziejewska is a consultant in the Zurich office. The authors wish to thank Asin Tavakoli and Mitch Gibbs for their contributions to this blog post. <sup>1</sup> Analysis is based on McKinsey's Tech: Performance benchmarking product across seven industries: advan