# AI Regulation in 2026: A Snapshot

JANUARY 2026

# Executive Summary

2025 was a mixed year for AI regulation: although the focus on HR tech, financial services, insurance, and generative AI seen in 2024 continued into 2025, the US and EU have both made attempts to soften or even withdraw AI regulation.

- HR Tech has long been targeted by policymakers, with the initial focus on video interview laws before a transition to bias audits. Key themes are non-discrimination and transparency, including notification. HR tech is also affected by risk-based laws.
- Financial Services are already heavily regulated, so AI in financial services is often addressed through testing environments.
- Insurance is also heavily regulated, so the applicability of existing laws and regulations to insurance has been emphasized, although AI-specific laws, such as Colorado's SB169, have been introduced. AI in insurance is also covered by risk-based laws, typically being grouped with essential private services and essential public services.
- Dynamic pricing has come under scrutiny in the US in particular. Laws target AI used in various types of dynamic pricing, including rent and ticket prices.
- Generative AI is a priority of governments around the world, with laws targeting deepfakes and the use of generative AI in the judiciary in particular. AI companions are also targeted, with requirements relating to transparency and disclosure.
- Risk-based frameworks target multiple AI use cases, either seeking to govern only those that are high-risk or used to make consequential decisions, or creating risk tiers with differing obligations based on the level of risk. The EU AI Act represents the most comprehensive risk-based framework and has inspired several others around the world.
- International cooperation is becoming increasingly important as governments around the world each take a different approach.
- International bodies such as UNESCO and the UN have led several international efforts over the past few years. The Council of Europe and the Association of Southeast Asian Nations have also introduced regional initiatives.
- The AI regulatory ecosystem is ever-changing. The best way to ensure compliance is to take a proactive approach, beginning with inventorying your systems and ensuring safeguards are in place throughout the AI lifecycle.

# About this Report

For the past three years, we have published a report on the state of AI regulations as we have moved into the new year (2023 report here, 2024 report here, and 2025 report here). These reports have been some of our most popular resources, and we have continued the tradition this year. However, while in previous years we covered developments by country, this year we're shaking things up and covering developments by sector. This means you can easily find the laws most relevant to you based on your use case, although we also recommend reading the section on risk-based laws, as your use case may be targeted there too. This approach also allows us to cover more US laws, of which there are many (over 270 in progress at the federal level and 376 at the state level at the time of writing). It has also let us add more content on international initiatives that may affect you even if you are based in a country with less regulatory activity.

Still want a country-by-country view? In the figure below, we've highlighted the countries leading the charge in terms of volume of significant AI regulatory activity.

This eBook is intended to serve as a snapshot of regulatory activity as we move into 2026, ensuring you have the key activity around the world on your radar. Throughout 2026, we will continue to bring you updates on our blog. In the meantime, leverage this eBook to set yourself up for success on your AI governance journey throughout 2026.
# Contents

- Regulation of HR Tech
  - Video interview laws
  - Bias audit laws
  - Updates to existing frameworks
  - Impact assessment laws
  - Worker displacement and wage laws
- Regulation of Financial Services
- Regulation of AI in Insurance
  - Ohio's HB579
  - Pennsylvania's HB1925
  - Texas's SB815
- Regulation of Dynamic Price Setting
- Generative AI Laws
  - Frontier model laws
  - Transparency and Disclosure Laws
  - Deepfake Laws
  - Generative AI use in the Judicial Service
- Risk-based Laws
  - EU AI Act
  - Italy's AI Law under the EU AI Act
  - Korea's AI law
  - Kazakhstan's AI law
  - Vietnam's AI law
  - Taiwan's Bill on AI
  - Brazil's AI law
  - Chile's AI law
  - Colorado's SB205
  - California's SB420
  - Hawaii SB59
  - Illinois' Preventing Algorithmic Discrimination Act
- International AI Initiatives
  - UN Mechanisms to Promote Multilateral AI Governance
  - UNESCO Guidance on Generative AI in Education
  - ASEAN Expanded Guide on AI Governance and Ethics
  - Council of Europe Framework Convention on AI
- 2025 was a mixed year: What to expect in 2026
  - Navigating the uncertainty
- Prioritize proactive governance with Holistic AI

# Regulation of HR Tech

The use of AI and algorithms in employment decisions has long been a key area of interest to policymakers, with some of the earliest laws passed in the AI ecosystem targeting HR tech tools. As we have an entire whitepaper dedicated to HR tech laws, in this eBook we'll give a whistlestop tour.

With policymakers keen to regulate HR tech tools, there is a lot of noise. However, laws targeting HR tech can be grouped into five key types: video interview laws, bias audit laws, updates to existing legal frameworks, impact assessment laws, and worker displacement laws. For each of these categories, we highlight some of the key laws that have been proposed or that are already in effect that you should have on your radar in 2026 if you develop or use HR tech tools.
- Video interview laws - Some of the first laws specifically targeting the use of automated HR tools focused on obtaining consent for video interviewing tools powered by AI.
  - Illinois Artificial Intelligence Video Interview Act - Requires notification for AI-driven video interviews and candidate consent, as well as reporting of the demographic data for candidates hired and not hired. Effective 1 January 2020.
  - Maryland's Facial Recognition Services Prohibition - Prevents employers from using facial recognition services during video interviews unless applicants sign a waiver consenting to its use. Effective 1 October 2020.
- Bias audit laws - These laws explicitly require annual, independent bias audits of automated employment decision tools (AEDTs).
  - New York City Local Law 144 - Requires employers and employment agencies using AEDTs for hiring and promotion decisions to conduct independent, impartial bias audits annually using the metrics specified by the Department of Consumer and Worker Protection. Also requires a summary of the results of the bias audit to be published and notification to be given to candidates and employees 10 working days before the use of the AEDT. Enforced from 5 July 2023.
  - New Jersey A3854 - Requires vendors of AEDTs used for hiring or promotion decisions to complete annual impartial bias audits. A summary of results should also be published, and employers or employment agencies must issue notification of the use of the tool within 30 days. In progress.
  - New Jersey A3855/S2964 - Requires employers and employment agencies using AEDTs for hiring or promotion to conduct annual impartial bias audits, publish a summary of the results, and issue notifications at least 10 working days before use. In progress.
  - New York A03914/S04394 - Requires employers and employment agencies using AEDTs for hiring decisions to conduct annual impartial disparate impact analyses and publish a summary of the results. In progress.
  - Philadelphia HB594 - Requires employers and employment agencies using AEDTs for hiring or promotion to conduct annual independent, impartial bias audits, publish a summary of the results, and issue a notification at least 10 working days before the tool is used. In progress.
- Updates to existing frameworks - While existing laws and regulations apply to AI, some legal frameworks have been updated to specifically extend equal opportunity provisions to automated tools.
  - Illinois HB3773 - Updates the Illinois Human Rights Act to make it a civil rights violation to use AI in a way that results in discriminatory outcomes based on protected attributes and to use zip codes as a proxy for protected attributes. Employers must also issue notifications for the use of AI-driven tools. Effective 1 January 2026.
  - California's modified employment regulations - Employment regulations protecting against discrimination have been updated to explicitly extend them to automated-decision systems. Interestingly, bias audits may provide a defense against claims of unlawful discrimination. Effective 1 October 2025.
- Impact assessment laws - These laws require impact assessments of HR tech tools, with some also imposing restrictions on electronic monitoring to protect worker privacy.
  - New York A3779/S185 - Requires annual impartial, third-party impact assessments of automated tools used for a variety of employment decisions. Specifically, disparate impact must be assessed, and the results must be submitted to a public registry and distributed to employees. Employers must also provide notifications for the use of automated tools and are prevented from relying solely on automated tools when making decisions. In progress.
  - Vermont H.262 - Requires written impact assessments of automated employment decision tools that must focus on risks and validity and be updated any time the system is significantly changed.
    Employers must also meet certain conditions to use electronic monitoring in the workplace, and both technologies are prohibited from leveraging facial, gait, voice, or emotion recognition technology. In progress.
  - Massachusetts S35 - Limits electronic monitoring and requires yearly independent impact assessments of electronic monitoring tools as well as AEDTs. Impact assessments must be submitted to a public registry, and AEDTs must be suspended if the impact assessment finds that they result in disparate impact. In progress.
  - Washington HB 1672 - Limits electronic monitoring and requires a written impact assessment of AEDTs that must be updated following any system changes. In progress.
- Worker displacement and wage laws - These laws seek to protect workers from preventable displacement from the use of AI by imposing financial deterrents.
  - New York Workforce Stabilization Act - Requires employers to conduct bi-annual impact assessments of their AI deployments that, among other things, record worker displacement. A 2% surcharge would also be imposed on companies that terminate or reduce the hours of 15 or more employees due to AI. In progress.
  - New York Robot Tax Act - Employers that meet certain thresholds will be liable for additional tax based on an employee's wage where workers have been displaced by technology, including AI. In progress.
  - Illinois Surveillance-Based Price and Wage Discrimination Act - Prevents surveillance data from being used to inform individualized pricing and wages. In progress.
  - Georgia's SB164 - Prevents surveillance data from being used to inform individualized pricing and wages. In progress.

# Regulation of Financial Services

Financial services is a legacy sector when it comes to using AI and algorithms. Like other sectors, existing laws and regulations apply whether or not AI is used. With financial services already being highly regulated, AI-specific laws often focus on the testing of systems.
For example, the Financial Conduct Authority, the UK's financial services regulator, has launched AI Live Testing. The project provides companies with support from its technical and regulatory teams to deploy AI systems in financial markets in the UK, with the aim of supporting the evaluation of the impact of AI on UK financial markets. A report on the project's outcomes is expected towards the end of 2026.

Taking a similar approach, the US has introduced legislation proposing regulatory sandboxes for financial services in order to promote AI innovation. Specifically, the bill (S2528/HR4801) seeks to establish AI Innovation Labs within each financial regulator for regulated providers to experiment with AI without the risk of enforcement action. Both the US and UK testing environments are subject to applications, with the first cohort of the AI Live Testing environment having kicked off in October 2025.

Taking a different approach, Singapore's Monetary Authority published a consultation paper in November 2025 on its Proposed Guidelines on AI Risk Management for Financial Institutions. The guidelines, which follow on from the Monetary Authority's September 2025 circular on deepfakes, call for financial institutions to create an inventory of their AI systems, assess risk, and incorporate AI governance and risk management into their practices. They also call for controls that cover the whole of the AI lifecycle, including transparency, fairness, data management, human oversight, and evaluation, with senior management fostering a risk culture.

Finally, the European Parliament adopted a resolution on the impact of AI on the financial sector on 25 November 2025. It notes that while AI has significant potential to bring benefits to financial services that should be passed on to consumers, it also brings risks, with LLMs posing novel ones. As such, it calls for effective mitigation that respects existing applicable financial regulations.
In addition to steps to promote AI-related innovation, it also calls for the European Commission to provide enabling guidance on the application of existing financial services legislation to AI and to continuously monitor its applicability.

# Regulation of AI in Insurance

Insurance is another legacy sector that has been using algorithms to make determinations for decades and, like financial services, is heavily regulated. Indeed, the National Association of Insurance Commissioners in the United States issued a bulletin for redistribution by state-level regulators reminding insurance providers that existing regulations still apply when AI is used in insurance and calling for them to implement AIS Programs to mitigate the risks of AI.

Some states have taken it a step further, however, and have introduced laws specifically regulating the use of AI in insurance practices. Colorado paved the way with its law Protecting Consumers from Unfair Discrimination in Insurance Practices, which became effective in 2023. With the law regulating a range of insurance practices and types, like health, life, and private passenger auto insurance, work is underway to create specific rules for each line of insurance and insurance practice. Regulations on Governance and Risk Management Framework Requirements for Life Insurers', Private Passenger Automobile Insurers', and Health Benefit Plan Insurers' Use of External Consumer Data and Information Sources, Algorithms, and Predictive Models have been adopted and came into effect on 15 October 2025. Regulations on other types of insurance and insurance practices are still emerging.

Also seeking to regulate a variety of insurance practices is Florida's S0202. Introduced in October 2025, it seeks to require that all insurance claim denials be made by qualified human professionals rather than AI and that the identity of the professional responsible for the decision be disclosed.

Other insurance laws specifically target health insurance.
For example:

- Ohio HB579 would require health insurers to file detailed annual reports with the state insurance superintendent on their AI use and prohibit them from making medical care decisions solely using AI. In progress.
- Pennsylvania HB1925 would require disclosure around the use of AI by insurers, as well as hospitals or clinicians, and an attestation of how bias and discrimination have been minimized. When AI is used, it must be followed by an individualized assessment. In progress.
- Texas SB815, effective 1 September 2025, amends the insurance code to prohibit automated decision systems from being used to make a healthcare-related adverse determination and allows the commissioner to audit and inspect agents' use of automated decision systems for review.

# Regulation of Dynamic Price Setting

Another key use case being targeted by policymakers is dynamic price setting, where parallels can be drawn with the use of AI to set individualized wages.

In New York, S7882 came into effect on 15 December 2025. It prohibits landlords from using pricing algorithms to set rent prices, specifically targeting the merging of data from two or more residential property owners or managers. The law prohibits the use or licensing of any software, service, or algorithmic device that does all of the following:

- Collects data on past or current rental prices, available units, or lease start and end dates from two or more different landlords or property managers.
- Analyzes or processes the data above in a computer system, software, or process, including using it to train an algorithm.
- Provides recommendations to a landlord or property manager about rental prices, lease renewal terms, ideal occupancy, or other lease conditions.
This act makes New York the first state to directly tackle dynamic pricing with regard to housing, and comes on the heels of its Algorithmic Pricing Disclosure Act, which applies more generally to businesses in the state.

Similarly, California has passed AB 325, which amends its antitrust statute to ban "common pricing algorithms" - those that use competitor data to influence pricing or commercial terms (including employee compensation).

Several other states are taking action against dynamic pricing, including Massachusetts (S2515, prohibiting surveillance pricing in grocery stores), Vermont (H371, targeting retailers), and Illinois (HB3838, targeting ticket sellers and resellers). With these bills still in progress, 2026 may see the rise of more legislation focused on dynamic pricing.

# Generative AI Laws

The past few years have also seen a large volume of laws targeting generative AI. In 2024, a key focus was on tackling the use of deepfakes that could interfere with elections. In 2025, deepfake laws were still introduced, but there was also significant activity surrounding mental health, as well as the use of generative AI in the judiciary.

# Frontier model laws

One of the most comprehensive generative AI laws is California's SB 53. Signed into law on 29 September 2025, the Transparency in Frontier Artificial Intelligence Act aims to tackle potentially catastrophic risks from the most advanced AI systems. SB 53 follows from the vetoed SB 1047 of 2024, which called for stricter measures for frontier models and their developers. SB 53 affects frontier developers and large frontier developers, and provides for a narrower approach where:

- Developers must publish transparency reports when launching or substantially modifying a frontier model. Large frontier developers must further disclose catastrophic risk assessments and create and publish frontier AI frameworks detailing how they assess, manage, and mitigate catastrophic risks.
- Developers must proactively declare their intent to comply with federal laws or regulations adopted by the California Office of Emergency Services and report any critical safety incidents.
- Covered employees - those who oversee critical safety incidents - are protected from retaliation for whistleblowing. Large frontier developers must also create internal, anonymous whistleblowing channels.

Non-compliance is subject to civil penalties imposed by the Attorney General of up to $1 million per violation, and a successful whistleblower plaintiff may also be awarded attorney's fees.

Similar to California's frontier AI law, New York's Responsible AI Safety and Education (RAISE) Act, signed by Governor Hochul on 19 December, imposes transparency requirements and aims to reduce risk. Before signing, Governor Hochul secured commitments to update the RAISE Act to further align it with California's Transparency in Frontier AI Act before it comes into effect on 1 January 2027. Under the current version, large developers must do the following before deploying their models:

- Implement a written safety and security protocol.
- Retain original copies of these protocols and maintain version controls, retaining the documents for five years after the model is last deployed.
- Publish a (redacted) copy of these protocols and submit it to the Attorney General and the Division of Homeland Security and Emergency Services.
- Record and retain information on the specific tests used to assess the model under the safety and security protocols and their results.
- Implement safeguards to prevent unforeseeable critical harm.

Any safety incidents following the deployment of the model must be reported to the Attorney General and the Division of Homeland Security and Emergency Services within 72 hours. Non-compliance with the RAISE Act can result in civil penalties of up to $10 million for the first violation and up to $30 million for the second.
# Transparency and Disclosure Laws

Several laws have been introduced mandating that chatbot operators disclose to users that they are interacting with AI and not a human. Some states require disclosures at the start of a session and at regular intervals, while others require regular disclosures only when users are known to be minors.

For example, Maine's LD 1727 came into effect on 25 September 2025. It prohibits companies from leading customers to believe that they are interacting with humans when they are engaging with AI chatbots. This applies to both textual and aural communications, and businesses must disclose to users that they are engaging with an AI chatbot rather than a human representative.

Some laws also specifically target AI companions that are used for human-like relationships, going beyond chatbots designed to conduct commercial transactions:

- New York S 3008-C - Amended the general business law to add a new Article 47 regarding AI companions. Effective 5 November 2025, AI companion operators must i) implement safety protocols to detect expressions of suicidal ideation or self-harm and promptly direct users to crisis service providers, and ii) notify users that they are not interacting with a human at the start of a session and every three hours during sessions. Noncompliance can result in fines of up to $15,000 per day.
- California SB 243 - From 1 January 2026, operators of companion chatbots must take steps to safeguard minors in particular from harms or may face civil action leading to damages of either $1,000 per violation or actual damages (whichever is greater). To comply, operators must:
  - Provide notification that users are interacting with an artificially generated chatbot.
  - Disclose to minors at the beginning of a session and every three hours that they are not interacting with a human.
  - Maintain, and publicly publish, protocols to: i) prevent the chatbot from providing content related to suicidal ideation, self-harm, or suicide to the user, and ii) prevent chatbots from producing visual sexually explicit material or directly stating that the minor should engage in sexually explicit conduct when the user is known to be a minor.
  - From 1 July 2027, annually report to the Office of Suicide Prevention the number of crisis service provider referral notifications provided, protocols for suicidal ideation by users, and protocols prohibiting responses to suicidal ideations or actions.

Other laws focus on mental health chatbots that may simulate conversations with mental health professionals. For example, Utah's HB 452, effective May 2025, prohibits mental health chatbot suppliers from:

- Using user input for targeted advertising, or to advertise any product or service to a user, unless the advertisement is disclosed as being an ad and the third party sponsoring or affiliated with the ad is disclosed.
- Operating a mental health chatbot without a disclosure that users are not engaging with a human (i) before a user accesses the chatbot, (ii) at the start of an interaction if the user has not used the chatbot in the last seven days, and (iii) if asked or prompted by the user whether AI is being used.
- Selling or sharing any individually identifiable health information or user input without the consent of the user.

Suppliers can be fined up to $2,500 per violation, as well as incur additional penalties of up to $5,000 for each violation of an administrative order. However, it is an affirmative defense if they:

- Maintain documentation about the development of the chatbot, including training data, foundation models, user data collection and sharing practices, and ongoing efforts to ensure reliability, accuracy, fairness, and safety.
- Create and maintain a written policy, filed with the Utah Division of Consumer Protection, meeting detailed requirements, including the chatbot's intended purpose, mechanisms for users to report potentially harmful interactions, and protocols to assess and respond to risks or actions of harm to users.

Other laws require more technical approaches to transparency and disclosure, going beyond notifications. For example, Chinese authorities issued the Measures for AI Generation and Synthetic Content Identification in March 2025. Effective 1 September 2025, these measures require providers to disclose labelling practices in user agreements, maintain logs, and ensure compliance across various content types, including text, audio, images, video, and virtual environments. All network information service providers in China must clearly label AI-generated content (AIGC) with both explicit markers (like text, sound, or image labels visible in the content or interactive interfaces) and implicit markers (embedded metadata that identifies the content's AI origin). Additionally, app distribution platforms must verify that any AIGC services are properly labelled, and users must declare such content when publishing. The law also prohibits malicious removal or alteration of the labels, with enforcement handled by relevant authorities under current legal frameworks.

# Deepfake Laws

2025 saw a surge in malicious deepfake creation, with the primary use cases of impersonation being financial fraud and nonconsensual intimate imagery. All US states and the federal government have passed legislation around nonconsensual intimate imagery. Similar activity can also be seen globally.
- US federal TAKE IT DOWN Act - Signed into law on 19 May 2025, the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act prohibits any individual from knowingly publishing "intimate visual depictions" of minors or non-consenting adults, including images generated using AI. Within one year, covered platforms must implement notice-and-action mechanisms to allow victims to remove such material.
- Tennessee's ELVIS Act - Effective 1 July 2024, the Ensuring Likeness Voice and Image Security Act (ELVIS Act) protects artists from unauthorized use of their voice, likeness, or image by generative AI that can enable impersonation or deception, like deepfakes. Extending the Personal Rights Protection Act of 1984, it also creates two new forms of civil liability where a person or company knowingly makes an individual's voice or likeness publicly available without authorization, or in any way makes available a technology that can impersonate an individual's voice or likeness.
- Washington's Forged Digital Likeness Law - Effective 27 July 2025, HB1205 criminalizes the intentional use of a forged digital likeness (including synthetic audio, video, or images) with the intention to defraud, harass, threaten, intimidate, or for any other unlawful purpose. Violations are punishable by up to 364 days in jail and a $5,000 fine, with more serious penalties possible in cases involving fraud or identity theft.
- Pennsylvania's Criminal Code Amendments - Act 35 (formerly SB 649) came into effect on 5 September 2025, adding a new section to Pennsylvania's criminal code to establish digital forgery as an offence. Criminal penalties are established for creating or disseminating deepfakes or facilitating a third party in doing so, although placing a disclaimer on the content stating that the digital likeness is fake provides a defense.
- India's Amendments to the IT Rules - In force from 15 November 2025, the amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 add additional due diligence obligations for online intermediaries relating to 'synthetically generated information' hosted on their platforms. Specifically, they are required to remove unlawful information upon receiving actual knowledge of it, either through a court order or notification from the government.
- Denmark's proposed Copyright Act Amendments - Proposed in June 2025 and expected to come into effect in early 2026, the amendments seek to grant individuals copyright over their own likeness. They provide protections for the general public from the sharing of their digitally-generated likeness and protections for artists from the nonconsensual sharing of digitally-generated imitations of their work. Claimants may seek compensation under Danish civil law, and platforms can be held liable through fines imposed by the European Commission and relevant authorities under the EU's Digital Services Act.
- UK Data (Use and Access) Act - Effective from 19 June 2025, the law makes creating, or requesting the creation of, deepfake nonconsensual intimate images a criminal act.

# Generative AI use in the Judicial Service

With several instances of AI-hallucinated cases being cited in court filings, legal institutions around the world have begun to publish guidance on the use of AI by lawyers.

In the UK, the Courts and Tribunals Judiciary published Guidance on AI for Judicial Office Holders in October 2025, refreshing guidance issued in April 2025. The guidance sets out the risks of AI in the judicial process and how they can be mitigated.
In particular, the judiciary is urged to ensure they understand AI and its applications, ensure confidentiality and privacy, ensure accountability and accuracy, be conscious of and correct bias, maintain security, take responsibility for outputs, and be aware of court users' use of AI.

Also in the UK, the Bar Council of England and Wales refreshed its Considerations when using ChatGPT and Generative AI Software based on large language models in November 2025. The guidance covers both general-purpose models and generative AI tools specifically designed for use by lawyers, providing a guide on how LLMs work, their risks, including hallucinations, and things to consider when using LLMs. These include mandatory verification of LLM outputs and respecting privilege and confidentiality.

In Europe, the Council of Bars and Law Societies of Europe (CCBE) published a guide on the use of generative AI by lawyers in October 2025. As with the Bar Council's guidance, the CCBE guidance provides an explanation of how generative AI works, the key risks, and key considerations, such as confidentiality. It also promotes transparency to clients in how AI is used and the maintenance of knowledge and professional skills to avoid overreliance on AI and support the verification of AI-generated outputs.

Multiple state bars in the US have also published similar guidance, including the California State Bar and the State Bar of Arizona. The New York State Unified Court System also published a policy on the use of AI in October 2025. As with other guidance, it outlines how generative AI works, its risks, and guiding principles for the safe and legal use of generative AI. It also restricts use to generative AI tools approved by the Division of Technology and Court Research and requires all judges and nonjudicial UCS employees with computer access to complete training on the use of AI.
Supporting these efforts towards the safe and effective use of AI by the judiciary, UNESCO published a Global toolkit on AI and the rule of law for the judiciary in 2023, which has formed the basis for the regional training sessions hosted by its Global Network of Experts on AI & the Rule of Law. The body also published Guidelines for the use of AI systems in courts and tribunals in December 2025, which are designed to act as a living document. The guidelines provide 15 principles to follow when using AI in the judicial system, including protecting human rights, assuming responsibility, exercising human oversight, and multi-stakeholder collaboration. Each principle is accompanied by specific implementation guidance.

# Risk-based Laws

Risk-based frameworks seek to regulate a range of AI systems across different sectors and use cases, and may cover several of the use cases explored above. Traditional risk-based frameworks typically define multiple risk tiers, with each tier subject to rules proportional to the risk posed; some also prohibit certain practices outright. Other risk-based laws regulate only systems considered high-risk, neither prohibiting any practices nor imposing requirements on less risky systems. Some focus on disclosure and impact assessments rather than imposing stringent requirements. Despite this divergence in approach, there is convergence in the systems these laws regulate, as demonstrated by the table below.
<table><tr><td>Use case</td><td>EU AI Act</td><td>Brazil AI Law</td><td>Korea AI Act</td><td>CA SB-420</td><td>CO SB205</td><td>Illinois SB2203</td><td>Hawaii SB59</td></tr><tr><td>Subliminal techniques to distort behaviour</td><td>⊗</td><td>⊗</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Exploiting vulnerabilities to distort behaviour</td><td>⊗</td><td>⊗</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Social scoring</td><td>⊗</td><td>⊗</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Profiling to predict crime risk</td><td>⊗</td><td>⊗</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Untargeted scraping for facial recognition databases</td><td>⊗</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Inference of emotions in workplaces or educational institutions</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Biometric categorization systems</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Real-time remote biometric identification in public spaces</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Creating or sharing sexual abuse or exploitation of minors</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Autonomous weapons</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Biometric identification, categorization, and emotion recognition</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Critical infrastructure</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Education and vocational training</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Employment, workers' management and access to self-employment</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Essential private and public 
services</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Law enforcement applications</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Migration, asylum and border control management</td><td>A</td><td>A</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Administration of justice and democratic processes</td><td>A</td><td>A</td><td></td><td>A</td><td>A</td><td>A</td><td></td></tr><tr><td>Nuclear materials</td><td></td><td></td><td>A</td><td></td><td></td><td></td><td></td></tr><tr><td>Housing</td><td></td><td></td><td></td><td>A</td><td>A</td><td>A</td><td>A</td></tr><tr><td>Election</td><td></td><td></td><td></td><td></td><td></td><td>A</td><td></td></tr></table> # EU AI Act Entering into force in 2024, the EU AI Act was the first comprehensive risk-based AI legislation and is setting the standard for similar laws worldwide. After a journey to adoption spanning many years, 2025 was a significant year for the European market as the law started to become applicable. Prohibitions and AI literacy requirements became applicable on 2 February 2025, kicking off the law's lengthy implementation period. The next application date was 2 August 2025, when several further rules started to apply, including penalties, which peak at up to €35,000,000 or 7% of annual turnover, whichever is higher. Obligations for providers of general-purpose AI (GPAI) models, including those with systemic risk, also became applicable. The next deadline is 2 August 2026, when the majority of the remainder of the EU AI Act begins to apply.
This includes obligations for high-risk AI systems (HRAIS) across the following areas:

- Biometrics
- Critical infrastructure
- Educational and vocational training
- Employment, workers' management and access to self-employment
- Access to and enjoyment of essential private services and essential public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
- Profiling

These obligations affect a variety of actors, although providers bear the largest share, including implementing a quality management system, meeting accessibility requirements, registering the system, and meeting design requirements:

- Risk management system – Implementing a process across the entire lifecycle of the HRAIS to identify, analyse, and mitigate risks (Article 9).
- Data and data governance – Training and testing of HRAIS using data must be undertaken in accordance with Article 10.
- Technical documentation – Drafting a comprehensive 'manual' for the HRAIS which contains, at a minimum, the Annex IV information (Article 11).
- Record-keeping – HRAIS must be designed to ensure automatic logging of events, e.g. the period of use and the input data reviewed (Article 12), and providers must keep these logs (Article 19).
- Transparency – HRAIS must be accompanied by instructions for use which include detailed information on their characteristics, capabilities, and limitations (Article 13).
- Human oversight – HRAIS must be designed so they can be overseen by humans, who should meet various requirements, e.g. being able to understand the HRAIS and to stop its use (Article 14).
- Accuracy, robustness, and cybersecurity – HRAIS must be accurate (with accuracy metrics included in the instructions for use), resilient to errors or inconsistencies (e.g. through fail-safe plans), and resilient to cyber attacks (Article 15).
- Quality management system – HRAIS providers must put in place a comprehensive quality management system which covers at least the extensive requirements of Article 17.
- Post-market monitoring – HRAIS providers must document a system to collect and analyse data provided by users on the performance of the HRAIS throughout its lifetime (Article 72).

Obligations for systems considered high-risk because they are a safety component of a product, or are themselves a product, covered under certain harmonization legislation and subject to third-party conformity assessments, however, are applicable from 2 August 2027. Providers of GPAI models placed on the market before 2 August 2025 must also be compliant by this date.

# Italy's AI Law under the EU AI Act

Under the EU AI Act, EU Member States may adopt their own national implementation laws to enforce the Act's obligations, such as designating relevant authorities and enforcement bodies, setting penalties, establishing sandboxes, and implementing any other specific rules. Italy became the first Member State to pass its own national AI law (Law No. 132), which came into force on 10 October 2025. It designates the Agency for Digital Italy as the notifying authority and the National Cybersecurity Agency as the market surveillance authority, giving them the necessary supervisory, inspection, and sanctioning powers.

The law provides a framework for the use of AI in different sectors such as healthcare, employment, and public administration. It also provides broader directions to guide the use of AI regarding copyright and minors' consent, amends criminal law to include AI-related provisions, and aligns data privacy and security provisions with the GDPR, ensuring harmonization with existing EU legislation. The law additionally introduces a national strategy for AI that supports innovation while protecting fundamental rights, and directs the government to adopt further legislative decrees on the use of data, algorithms, and mathematical models for the training of AI systems in line with the Act. In support of innovation, it provides significant state-backed investments into AI, enabling technologies, and sandboxes.
Other EU Member States are at various stages of implementing similar national AI acts – at the time of writing, Germany, Spain, and Ireland have drafts making their way through the legislative process. Other states have yet to draft any legislation, but have completed several preparatory steps, such as designating relevant national authorities and establishing sandboxes. The impact of these national AI laws, including Italy's, on the development and use of AI will depend on these implementing measures, which must be adopted without compromising the consistent application of the EU AI Act, the GDPR, and other relevant legislation.

# Korea's AI Law

The EU isn't the only jurisdiction with comprehensive AI rules set to take effect in 2026 – Korea's Basic AI Act, which was enacted in January 2025, will take effect from 24 January 2026. Like the EU AI Act, it seeks to balance safety and innovation, although Korea's law does not prohibit any systems. Instead, it identifies several use cases in which AI systems are considered high-impact:

- Supply of energy
- Production of drinking water
- Healthcare services
- Medical devices
- Nuclear materials
- Biometric information
- Hiring
- Loan screening
- Transportation
- Logistics
- Public services
- Education

Those using generative AI or high-impact systems must inform users that they are interacting with AI or an AI-generated output. Additional obligations for high-impact systems include:

- Developing and implementing a risk management plan.
- Providing information about the final output and how it was derived.
- User protection measures.
- Documentation on system safety and reliability.

Penalties for non-compliance can reach up to 30 million won (over $21,000). Additionally, National AI Committee members who disclose confidential information can face the same fine or imprisonment for up to three years.
# Kazakhstan's AI Law

On 17 November 2025, Kazakhstan's president signed the country's first law on AI, seeking to ensure the safe, transparent, and ethical use of AI across the public and private sectors. It creates a risk classification system and an autonomy scale ranging from low to high, and establishes principles, responsibilities, and liability for AI developers for the actions and outcomes of their systems. Although it does not specify high-risk use cases, instead leaving risk classification to owners and processors, it introduces prohibited practices such as behavioral manipulation and the exploitation of personal data or vulnerable populations. The law addresses several aspects of copyright, such as data use, authorship, and AI-assisted creations, but requires clarification around what constitutes human or AI contributions and how the law will be enforced in practice.

# Vietnam's AI Law

Vietnam's first dedicated AI law was passed in December 2025 and will come into effect on 1 March 2026, although it will be rolled out in phases. Modelled after concepts from the EU AI Act, the law introduces a four-tiered risk classification system (unacceptable, high, medium, and low) and defines varying obligations for providers based on risk level. It also requires foreign companies to appoint a local representative to act as a point of contact and for other compliance purposes. Under the law, AI activities must comply with seven key principles. Full implementation will occur in phases:

- Within 6 months, the National Commission on AI and the AI Development Fund must be established, including an initial implementation framework.
- After 12 months (2027), provisions on prohibited acts (Chapter II) and the regulatory sandbox mechanism will apply.
- After 18 months (mid-2028), obligations for high-risk AI systems will take full effect.
However, high-risk systems that were already deployed will have 24 months from the date of application to comply with the provisions in Chapter II (2029) and to complete risk and conformity assessments. Deputies in the National Assembly are calling for more debate to clarify key issues, including protecting vulnerable groups and addressing AI's impact on employment. Further developments are expected as implementation rolls out from 2026.

# Taiwan's Bill on AI

On 23 December 2025, Taiwan's Legislative Yuan passed the country's Artificial Intelligence Fundamental Act. Aiming to strike a balance between promoting innovation and safeguarding human rights, the law introduces seven key principles. It promotes international collaboration, regulatory sandboxes, and public-private collaboration. Similar to other bills, it adopts a risk classification system and introduces stricter liabilities for high-risk systems. It also commits to strengthening AI literacy across all sectors and in schools, and directs the government to issue further regulations to implement the law.

# Brazil's AI Bill

Brazil's Senate passed Bill No. 2338/2023 in December 2024, but the bill is still under review in the House of Representatives and can only take effect after being voted on there and then going to the President. Whether any progress will be made in 2026 is still unclear. Nevertheless, in its current form, it takes a similar approach to the EU AI Act, adopting a risk-based classification that prohibits systems with excessive risk and imposes stringent obligations on high-risk systems in areas including:

- Education
- Recruitment/employment
- Public and private services
- Administration of justice
- Autonomous vehicles
- Healthcare
- Study of crimes
- Biometric identification
- Immigration management

Key obligations for developers include:

- Measures to mitigate and prevent bias, including algorithmic impact assessments.
- Transparency on management and governance policies.
- Record keeping to support assessments of accuracy and robustness.
- Security tests.

Non-compliance can result in fines of up to R$50,000,000 (around US$9 million) per infraction, or up to 2% of annual revenue. Other outcomes, such as suspension of the development or supply of the AI system, are also possible. While this AI law is still in progress, Brazil has several other existing laws that apply to AI, including the General Data Protection Law (13,709/2018), the Consumer Protection Code (8,078/1990), and the Copyright Law (9,610/1998).

# Chile's AI Law

In 2024, Chile introduced a risk-based AI bill. While it has not yet been passed, it has similarities to the EU AI Act and Brazil's bill, prohibiting systems with unacceptable risk. As in the EU AI Act, these include systems using subliminal manipulation, systems exploiting human vulnerabilities to cause harm, biometric categorization, social scoring, remote biometric identification systems, non-selective facial image extraction, and systems for assessing emotional states in the justice system, border management, the workplace, or educational institutions. Although the bill does not specify which systems are considered high-risk, it sets out rules applicable to high-risk systems. Penalties for violations are tiered, like under the EU AI Act, ranging from 5,000 to 20,000 monthly tax units.

# Colorado's SB205

SB205, which provides Consumer Protections for Artificial Intelligence, was signed into law on 14 May 2024 and was originally set to take effect from 1 February 2026. However, an amendment passed during a special August 2025 legislative session delayed the effective date to 30 June 2026, as lawmakers were unable to reach a compromise on amendments to the original law. The law still stands in its original form, which seeks to provide consumers protections from a variety of AI systems.
Although Colorado's Attorney General has yet to develop regulations to support the enforcement of the law and provide clarity, SB205 broadly requires developers and deployers to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination through a series of transparency, governance, and mitigation measures. As with the other horizontal laws, SB205 primarily focuses on high-risk systems – those used to make consequential decisions in areas including:

- Education
- Employment
- Financial or lending services
- Essential government services
- Healthcare services