
 


(Co-authored with Lekha Chakraborty)

As artificial intelligence reshapes financial services—from credit scoring to fraud detection—regulators worldwide are racing to balance innovation with stability, ethics, and inclusion. This global analysis compares key frameworks in Singapore, the European Union, the United Kingdom, and the United States, while highlighting insights from standard-setting bodies like the BIS, FSB, and IOSCO. It also explores central banks’ adoption of AI supervisory tools and emerging open-source platforms for policy analysis, underscoring the need for harmonized, adaptive governance to harness AI’s potential without amplifying systemic risks. A companion blog in this series will examine India-specific regulatory responses to artificial intelligence in finance, including the Reserve Bank of India’s evolving guidelines, the proposed Digital India Act, and ongoing discussions within the Financial Stability and Development Council. The present blog complements that analysis by distilling actionable lessons for India from leading international frameworks and standard-setting bodies.

Drawing on regulatory approaches in Singapore (FEAT principles), the European Union (AI Act), the United Kingdom (principles-based regime and Consumer Duty), and the United States (Executive Order 14110 and AI Bill of Rights), as well as recent reports by the BIS (2024), FSB (2024), IOSCO (2024), and the MIT AI Risk Repository (2024), it identifies five priority areas: hybrid risk- and principles-based supervision, mandatory bias testing and human oversight, macroprudential safeguards against AI-induced systemic risks, accelerated supervisory technology adoption, and open-source regulatory technology. These priorities can inform the design of a coherent, innovation-friendly, and stability-enhancing AI governance framework tailored to India’s rapidly digitising financial sector.

Section 1: Global Regulatory and Legislative Landscape: A Comparative Analysis

  1. Singapore

In Singapore, the government has recognized the importance of addressing bias in AI systems and has developed the Fairness, Ethics, Accountability, and Transparency (FEAT) principles to guide the development and use of AI and data analytics. The FEAT principles give entities deploying AI-ML models a set of foundational principles that contextualize the governance of such models, with the aim of promoting public confidence.

Under the FEAT principles, the Monetary Authority of Singapore (‘MAS’) stipulates that systems should be developed in a manner that is ‘fair and unbiased’: AI systems should not discriminate against individuals or groups based on race, gender, age, or other characteristics. Principle 5 (Fairness) requires entities to have processes that account for the specific data attributes used, with “appropriate levels” of justification.

The FEAT principles also require that AI systems be developed in an ethical manner, with the further requirement that at least some ethical decisions remain human-driven. Developers must weigh the societal impact of their systems, ensuring ethical and non-harmful usage in line with the entity’s code of conduct. Specifically, AI systems should refrain from infringing on individual privacy or manipulating public opinion. The framework’s accountability principle requires entities employing such systems to elucidate the decision-making process, aligning with the global push, echoed by the European Union, for transparent and explainable AI systems. It also stipulates that entities should ensure their systems are transparent and accessible to the public, so that individuals can understand how the systems work and how decisions are made.

Further, the FEAT principles require that AI-ML systems be transparent. This entails developers providing clear explanations of how their systems work and how decisions are made. This also includes disclosing any data sources used, as well as any assumptions or limitations of the system.
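The fairness requirement above lends itself to simple quantitative checks. The sketch below computes a disparate-impact ratio between two applicant groups for a hypothetical credit-scoring model; the 80% threshold is an illustrative convention borrowed from US employment practice, not a MAS-mandated figure.

```python
# Sketch of a bias check in the spirit of the FEAT fairness principle.
# Outcomes and the 80% threshold are illustrative assumptions.

def approval_rate(decisions):
    """Share of applicants approved (decisions is a list of 0/1)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of approval rates between two applicant groups."""
    return approval_rate(group_a) / approval_rate(group_b)

# Hypothetical credit-scoring outcomes for two demographic groups.
group_a = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # 70% approved
group_b = [1, 1, 1, 0, 1, 1, 1, 1, 1, 0]  # 80% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.7 / 0.8 = 0.875
if ratio < 0.8:
    print("Potential disparate impact - review model for bias")
```

A supervisor applying FEAT-style fairness review would typically look at several such metrics, not just one ratio.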

  2. European Union

Public controversy surrounds the integration of AI algorithms into critical aspects of life. In response, various political and non-governmental organizations have formulated non-binding ethical guidelines for AI implementation, placing emphasis on transparency and explainability. Going further, the European Union, through its General Data Protection Regulation (GDPR), introduced a preliminary governance framework for AI applications. Under the GDPR, individuals whose data is processed by AI applications have the right to receive meaningful information about the application’s functioning.

Moreover, the GDPR prevents AI applications from making impactful negative decisions without human approval. However, the effectiveness of these rules in governing modern AI applications is debatable. Their success hinges heavily on individual engagement and the AI application operator’s commitment to diligent compliance, presenting challenges in ensuring robust governance.

First proposed by the European Commission in April 2021, the EU’s Artificial Intelligence Act (AI Act) reached political agreement among member states in December 2023 and was formally adopted in 2024 as Regulation (EU) 2024/1689. The Act seeks to harmonize standards for AI-ML systems and implements a risk-based classification regime based on intended usage. Notably, ‘credit scoring’ falls into the high-risk category. While the AI Act mandates a conformity assessment test, it overlooks the challenge of ‘automation bias’ arising from human-labeled data in developing AI-ML systems. This omission poses a significant hurdle to the explainability of systems, making them susceptible to errors and bias from individuals involved in the labeling process.
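The Act’s risk-based classification can be pictured as a lookup from use case to tier to obligations. The sketch below is a simplified illustration; the tier assignments and obligation lists paraphrase the Act’s structure and are not the legal text.

```python
# Illustrative sketch of the AI Act's risk-based classification.
# Tier assignments and obligations are simplified paraphrases.

RISK_TIERS = {
    "social_scoring": "unacceptable",   # prohibited outright
    "credit_scoring": "high",           # listed as a high-risk use case
    "insurance_pricing": "high",
    "chatbot": "limited",               # transparency obligations only
    "spam_filter": "minimal",
}

OBLIGATIONS = {
    "unacceptable": ["prohibited"],
    "high": ["conformity assessment", "human oversight", "logging",
             "data governance"],
    "limited": ["transparency disclosure"],
    "minimal": [],
}

def obligations_for(use_case: str) -> list[str]:
    """Look up the obligations attached to a use case's risk tier."""
    tier = RISK_TIERS.get(use_case, "minimal")
    return OBLIGATIONS[tier]

print(obligations_for("credit_scoring"))
# ['conformity assessment', 'human oversight', 'logging', 'data governance']
```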

  3. United Kingdom

The key regulators are the Bank of England, the Financial Conduct Authority (FCA), and the Prudential Regulation Authority (PRA), supplemented by Consumer Duty obligations. The UK employs a principles-based framework, with sectoral regulators integrating core values like safety, transparency, fairness, accountability, and contestability into their supervisory guidance.

Consumer Duty & Fairness: Under the Consumer Duty, regulated entities must avoid foreseeable harm, ensuring AI outputs remain transparent and fair for consumers.

  4. United States

In October 2023, President Biden issued Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The executive order sets out principles that aim to “establish new standards for AI safety and security, protect Americans’ safety, advance equity and civil rights, support American workers, ensure responsible use of AI by the government, promote innovation, competition and collaboration.” The order directs the Director of the Federal Housing Finance Agency and the Director of the Consumer Financial Protection Bureau to use their authority to require regulated entities to evaluate their underwriting models for biases or disparities, along with evaluating automated collateral-valuation and appraisal processes to minimize bias.

The White House also released the Blueprint for an AI Bill of Rights in October 2022, which is not an enforceable law but a set of guiding principles for the responsible design and use of artificial intelligence. The blueprint lays down five principles; among them, “Algorithmic Discrimination Protections” calls for proactive measures to prevent discrimination and inequitable outcomes in algorithmic and system design.

While the US lacks federal legislation on AI, state-level laws are slowly catching up, though they remain fragmented in their approach, tackling specific issues related to the use of AI.

Section 2: Analysing Standard-Setting Bodies - BIS, FSB, and IOSCO: A Unified Global Dialogue on the Benefits and Risks Posed by AI to the Financial Sector

Benefits

The financial services sector has been an early adopter of AI-related tools and techniques (including machine learning, or “ML”). This has been due to the industry’s long-standing focus on data collection and analysis. Due to recent technological developments, namely (i) increase in computing power (and lowering of costs), (ii) the availability, size, and use of datasets, and (iii) advances in cloud computing, financial entities are increasingly utilising AI for traditional functions (such as credit scoring, risk management, etc.) and implementing new use cases, especially with Generative AI (“GenAI”) and Large Language Models (“LLMs”).

GenAI’s ability to process human language, create content, analyse large amounts of complex data, and simultaneously train itself has made it a much sought-after technological development across sectors, in particular in BFSI. The financial services sector has already started deploying GenAI in a variety of functions, including chat automation, virtual assistants, fraud detection, and back-end operations.

Amongst the aforementioned publications, the BIS working paper studies the use cases of AI and ML technology in four key financial functions: financial intermediation, insurance, asset management, and payments. It underlines that the latest evolution of AI into GenAI has allowed for deeper incorporation of the technology in the financial sector. For instance, GenAI has improved the readability of new types of data (especially unstructured data, where data is not organised in a pre-specified template or format). This can enhance functions such as risk analysis (insurance), credit scoring (banks), general back-end operations, and regulatory compliance in the form of enhanced KYC and AML processes. Additionally, the technology’s ability to converse like humans and automate tasks has applications in customer support, virtual assistants, and robo-advisory, amongst others.

The paper also explores a new domain of AI usage, namely AI agents such as GPT-4 and Claude 3, which have been deployed in high-frequency trading. AI agents are AI systems built on advanced LLMs with a high degree of agency. Owing to their autonomous capabilities, such agents have broad usage across diverse functions in financial institutions.

With respect to capital markets, the IOSCO report underlines that AI is being deployed in the following operations:
1. Advisory and support services;
2. Risk management: market intermediaries use ML-based risk-management systems for credit-risk monitoring and as early-warning indicators for customer risk profiling;
3. Client identification and monitoring (advanced KYC);
4. Selection of trading algorithms; and
5. Asset management/portfolio management.
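As a concrete illustration of item 2, an early-warning indicator can be as simple as flagging a customer metric that deviates sharply from its own history. The z-score rule and the utilisation figures below are illustrative stand-ins for the ML systems the report describes.

```python
# Minimal early-warning sketch: flag a customer whose latest metric
# deviates sharply from their own history. Threshold and data are
# illustrative assumptions, not from the IOSCO report.
import statistics

def early_warning(history, latest, z_threshold=3.0):
    """Return True if the latest observation is a z-score outlier."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > z_threshold

# Hypothetical monthly credit-utilisation figures for one customer.
history = [0.41, 0.38, 0.44, 0.40, 0.39, 0.42]
print(early_warning(history, 0.43))  # within normal range -> False
print(early_warning(history, 0.95))  # sharp jump -> True
```

Production systems combine many such signals with learned models, but the triage logic is the same: score, threshold, escalate.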

Focusing on the impact AI could have on financial stability, the BIS has studied the technology’s characteristics to map out possible positive outcomes of widespread AI adoption. These include increased productivity, consumption, and investment in both the short and long term. The paper points out that a ‘goldilocks situation’ could be possible.

Due to increased productivity and output, greater use of AI would ease inflation in the near term, helping central banks to meet their targets. In the longer term, while there could be a rise in inflation due to AI-induced demand, central banks would be able to manage this by tightening policy rates. The paper adds that AI’s contribution to output growth has the potential to offset the adverse impact of events such as an ageing population, disruptions in global supply chains, geopolitical disturbances, and political fragmentation.

Systemic Risks from Artificial Intelligence in Finance: Insights from BIS, FSB, and the MIT AI Risk Repository

The Bank for International Settlements (BIS 2024) and the Financial Stability Board (FSB 2024) have identified a cluster of interconnected risks that could threaten global financial stability as generative AI and large language models become ubiquitous. Large language models frequently produce confident but false outputs (“hallucinations”), while their speed, automaticity, and widespread adoption amplify herding behaviour and procyclicality. Autonomous AI agents, designed to maximise narrow objectives, may ignore broader systemic consequences: in one documented experiment, an LLM instructed to trade profitably engaged in illegal insider trading and then lied when questioned (BIS 2024). The growing use of alternative and unstructured data with short histories or small samples further increases the likelihood of biased or overfitted recommendations that cannot be generalised. Model homogeneity, third-party concentration, and reliance on the same cloud providers and datasets heighten the danger of simultaneous failures, flash crashes, and sudden illiquidity during stress episodes. Cybersecurity threats are magnified because generative AI lowers the cost and raises the sophistication of attacks, including deep-fake identity fraud, while opaque “black-box” algorithms hinder timely detection of market manipulation or emerging systemic risks (FSB 2024).
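The model-homogeneity concern can be made concrete with a toy simulation: when every institution runs the same model, sell signals fire in lockstep, while independently trained models rarely trigger simultaneous fire-sales. All parameters below are invented for illustration.

```python
# Toy simulation of the model-homogeneity risk flagged by BIS/FSB:
# shared models produce synchronised sell signals; diverse models
# disperse them. Firm counts and signal probability are illustrative.
import random

random.seed(42)

def simultaneous_sells(n_firms, n_days, shared_model):
    """Count days on which every firm sells at once."""
    crowded_days = 0
    for _ in range(n_days):
        if shared_model:
            # One model, one signal: all firms act identically.
            signal = random.random() < 0.2
            sells = n_firms if signal else 0
        else:
            # Independent models: each firm draws its own signal.
            sells = sum(random.random() < 0.2 for _ in range(n_firms))
        if sells == n_firms:
            crowded_days += 1
    return crowded_days

print("shared model  :", simultaneous_sells(10, 1000, shared_model=True))
print("diverse models:", simultaneous_sells(10, 1000, shared_model=False))
```

With a shared model, roughly one day in five sees all ten firms selling together; with independent models, an all-firm sell day is vanishingly rare. This is the procyclicality channel in miniature.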

Building on these concerns, the MIT AI Risk Repository (MIT 2024) – a living database of over 700 rigorously curated AI risks maintained by the MIT FutureTech group – systematically catalogues these financial-sector threats under multiple domains (discrimination & toxicity, misinformation, malicious actors & misuse, human–computer interaction, and systemic risks). The repository classifies herding and procyclicality as “systemic risk” entries, hallucinations and deep-fakes under “misinformation” and “malicious use,” and third-party concentration as “organisational risk.” By assigning severity, likelihood, and existing mitigation scores to each risk, the repository offers regulators and supervisors an evidence-based taxonomy to prioritise oversight, stress-testing scenarios, and governance requirements. Together, the BIS, FSB, and MIT analyses underscore that while AI promises substantial efficiency gains, its unchecked deployment could create novel transmission channels for financial crises, making proactive, internationally coordinated risk management indispensable.

Section 3: AI Model Regulatory Framework - Central Banks as AI Adopters: Use Cases and Experiences

This section provides a brief overview of use cases at the Monetary Authority of Singapore, the European Central Bank (ECB), the United Kingdom’s regulators, and United States agencies, with examples, along with adoption benefits and challenges.

  1. Singapore

The Monetary Authority of Singapore (MAS) has launched a multi-phase initiative known as Project MindForge, in partnership with key stakeholders such as major banks, tech firms, and advisory bodies, to develop and implement a risk framework governing the use of generative AI in the financial sector. A complementary output of MAS’s responsible-AI work is the Veritas toolkit, an open-source AI self-assessment tool for benchmarking against the FEAT principles.

During the first phase, which concluded in November 2023, the consortium crafted a comprehensive GenAI risk framework encompassing seven key dimensions: accountability and governance; monitoring and stability; transparency and explainability; fairness and bias; legal and regulatory considerations; ethical impacts; and cyber and data security. They also designed a platform-agnostic GenAI reference architecture to guide financial institutions in building robust GenAI systems.

  2. European Central Bank (ECB)

The ECB has identified more than 40 use cases for GenAI in banking supervision. Athena, the ECB’s new tool, helps address supervisory and regulatory blind spots; it will allow over 1,000 supervisors to analyse more than 5 million documents across the Single Supervisory Mechanism (SSM).

In big data analytics, a range of supervisory technologies is enhancing the efficiency and depth of regulatory assessment. GABI, for instance, enables large-scale generation and optimization of regression models, supporting more data-driven supervisory decisions. NAVI provides visualizations of complex ownership structures by integrating data from multiple sources, aiding in the analysis of bank ownership and interdependencies.
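The ownership analysis a tool like NAVI supports can be sketched as a graph computation: given direct stakes, sum ownership over every chain from a holder to a target. The entities and stakes below are hypothetical, and the sketch assumes an acyclic ownership structure.

```python
# Sketch of the kind of ownership analysis NAVI-style tools enable.
# Entities and stakes are hypothetical; assumes no circular holdings.

OWNERSHIP = {
    # holder -> {held entity: fraction owned directly}
    "HoldCo": {"MidCo": 0.60, "BankA": 0.10},
    "MidCo":  {"BankA": 0.50},
}

def effective_stake(holder, target):
    """Sum direct and indirect ownership along all chains."""
    total = OWNERSHIP.get(holder, {}).get(target, 0.0)
    for intermediate, stake in OWNERSHIP.get(holder, {}).items():
        if intermediate != target:
            total += stake * effective_stake(intermediate, target)
    return total

stake = effective_stake("HoldCo", "BankA")
print(f"HoldCo's effective stake in BankA: {stake:.0%}")
# 10% direct + 60% x 50% via MidCo = 40%
```

Visualisation tools layer graphics on top, but the underlying question, who ultimately controls whom, reduces to traversals like this one.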

  3. United Kingdom

The UK’s Financial Conduct Authority (FCA) AI Lab fosters discussion, experimentation, and collaboration on AI use in financial markets. Initiatives such as the Supercharged Sandbox provide compute resources and datasets to financial services innovators, while ‘AI Live Testing’ and ‘AI Spotlight’ evaluate real-world AI applications.

Further, the Bank of England’s Digital Regulatory Reporting initiative is advancing regulatory reporting by entities regulated by the central bank through structured, standardized, machine-readable data, which aids timely and reliable compliance and monitoring. Public tools in this space have been commercialised through the University of Cambridge’s Cambridge Judge Business School, which implements the Regulatory Genome Project (“RGP”).
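Machine-readable reporting pays off because a submission can be validated automatically before a supervisor ever reads it. The sketch below checks a JSON report against a required schema; the field names and rules are hypothetical, not the Bank of England’s actual taxonomy.

```python
# Sketch of automated validation of a machine-readable regulatory
# report. Field names, types, and rules are hypothetical examples.
import json

REQUIRED_FIELDS = {"firm_id": str, "period": str, "tier1_capital": float,
                   "risk_weighted_assets": float}

def validate_report(raw: str) -> list[str]:
    """Return a list of validation errors (empty means the report passes)."""
    errors = []
    report = json.loads(raw)
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in report:
            errors.append(f"missing field: {field}")
        elif not isinstance(report[field], ftype):
            errors.append(f"wrong type for {field}")
    if not errors and report["tier1_capital"] < 0:
        errors.append("tier1_capital cannot be negative")
    return errors

submission = ('{"firm_id": "UK-001", "period": "2024Q4", '
              '"tier1_capital": 120.5, "risk_weighted_assets": 900.0}')
print(validate_report(submission))  # [] -> passes
```

The same idea scales up: once rules live in a schema rather than a PDF, every submission can be screened the instant it arrives.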

Adoption Benefits

AI has proven to reduce manual work (leading to productivity gains), enhance decision-making, and enable tailored financial products, which can strengthen the resilience of the financial system and improve consumer choices.

The UK’s AI initiatives, including the Bletchley Declaration, aim to accelerate fintech and financial-supervision innovation, leveraging AI and fostering collaboration among regulators, industry, and technology providers. Further, by integrating AI into regulatory activities (e.g., FCA Innovation Services), the UK seeks to identify, monitor, and mitigate system-wide risks more proactively.

Adoption Challenges

The UK regulators identify the safe handling of personal data and resilience to cyber risks as major hurdles, particularly in complex AI supply chains and digital reporting environments. Concerns about effective supervision of outsourced AI services and maintaining public trust in automated systems also persist.

Ensuring AI model robustness, explainability, and workforce training and socialization remain ongoing priorities for the FCA and the Bank of England.

  4. United States of America (US)

In the insurance sector, the National Association of Insurance Commissioners (“NAIC”) evaluates and recommends best practices for the ethical and effective use of AI in insurance oversight, including solvency supervision and risk analytics.

FINRA, a self-regulatory organization (“SRO”) overseen by the US Securities and Exchange Commission, provides guidance for AI deployment in securities markets, emphasizing rule compliance, robust governance, and monitoring for bias or misrepresentations.

Further, under the Federal Reserve AI Programme, the Fed has instituted a Chief AI Officer and cross-agency working groups to advance responsible AI use in banking supervision, stress testing, and regulatory reporting.

U.S. supervisory authorities have integrated advanced AI tools into their oversight frameworks to enhance both speed and precision. The Securities and Exchange Commission employs large-scale Big Data analytics and machine-learning models for real-time market surveillance, enabling rapid pattern recognition and early identification of anomalous trading behaviour.

The Federal Reserve incorporates artificial intelligence into Comprehensive Capital Analysis and Review (CCAR) loss-projection models, significantly improving the accuracy of stress-testing simulations across a wide range of economic scenarios.

Dedicated machine-learning systems for fraud-likelihood scoring are deployed across agencies to predict and prevent financial crime in real time.
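A fraud-likelihood score of the kind described above is, at its simplest, a logistic model over transaction features. The weights and features below are invented for illustration and do not reflect any agency’s production model.

```python
# Minimal fraud-likelihood scoring sketch: a logistic model over a
# few transaction features. Weights and features are illustrative.
import math

WEIGHTS = {"amount_zscore": 1.2, "new_device": 2.0, "foreign_ip": 1.5}
BIAS = -4.0

def fraud_score(features: dict) -> float:
    """Map weighted features through a sigmoid to a 0-1 likelihood."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

routine = {"amount_zscore": 0.1, "new_device": 0, "foreign_ip": 0}
suspect = {"amount_zscore": 3.0, "new_device": 1, "foreign_ip": 1}

print(f"routine txn: {fraud_score(routine):.3f}")
print(f"suspect txn: {fraud_score(suspect):.3f}")
```

Real deployments learn the weights from labelled fraud cases and use far richer features, but the score-and-escalate pattern is the same.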

Meanwhile, the Consolidated Audit Trail (CAT) creates a massive, centralized repository that aggregates billions of daily order and execution records, allowing regulators to reconstruct market events, detect systemic risks, and conduct forensic investigations with unprecedented granularity. Together, these technologies markedly strengthen risk management, compliance monitoring, and overall market integrity, although their effectiveness continues to depend on rigorous data-governance practices and ongoing efforts to identify and mitigate algorithmic biases.

Adoption Benefits

AI applications improve the speed and accuracy of risk surveillance, enhance fraud detection, and enable real-time response to threats in financial markets.

Adoption Challenges

Data-privacy and AI model-risk considerations require strong emphasis on consumer privacy, data protection, and mitigating risks of bias, errors, or unintended outcomes from AI models. Two further challenges stand out. Technological neutrality: regulators maintain technology-agnostic rules while adapting supervision to the evolving use of generative AI and third-party solutions, which can strain supervisory resources and expertise. Stakeholder coordination: the fragmented regulatory landscape across federal and state entities poses governance and harmonization challenges for nationwide AI adoption.

A comparative assessment of UK and US supervisory technology reveals that while both jurisdictions achieve productivity gains and real-time risk surveillance, the UK emphasises collaborative sandboxes and principles-led innovation, whereas the US prioritises rigorous enforcement, tech-neutral rules, and anti-bias audits across fragmented federal and state regimes (FCA 2024; NAIC 2024). Similarly, the rich ecosystem of US supervisory tools—Big Data for Market Surveillance, CCAR loss projections, ML fraud-likelihood models, and the Consolidated Audit Trail—demonstrates how AI strengthens market integrity and stress-testing, albeit with persistent challenges around data privacy, model risk, and regulatory coordination (SEC 2024; Federal Reserve 2024).

Open-Source Applications and the RGP in the AI Age of Policy Analysis

● RGP Collaboration with the Ministry of Finance
Rules of engagement under Cambridge’s RGP Regulator Engagement Programme

The RGP Regulator Engagement Programme brings together participants from the financial regulatory community with the goal of strengthening regulators’ digital preparedness.

Traditionally, there has been no consistent method for comparing different regulatory systems. Although rulebooks and policy documents are widely available in formats suited for human interpretation, access to “machine-readable” regulations that can be integrated into regulator-oriented tools has been limited.

This gap has hindered regulators from effectively benchmarking their own frameworks against those of peers and from obtaining a holistic view of the broader regulatory environment. It has also constrained their ability to react swiftly to new market developments.   

● OpenFisca

“The most widely adopted free and open-source engine to write rules as code. Great for Tax Law analysis”

Government departments, lawmakers and civil society finally share a common language to debate and activate social and fiscal policies: OpenFisca enables collaboratively modelling laws and regulations and making them computable over open APIs for developers, data scientists and researchers.

Example: PolicyEngine UK, built on a customised version of OpenFisca, uses a firm-level microsimulation model to estimate the revenue and business impacts of VAT thresholds.
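The “rules as code” idea behind OpenFisca can be shown in miniature: a statutory schedule becomes data, and the law’s arithmetic becomes a pure function. This plain-Python sketch uses a hypothetical bracket schedule and is not the OpenFisca API.

```python
# Rules-as-code in miniature, in the spirit of OpenFisca: a tax
# schedule expressed as data plus a pure function. The brackets
# below are a hypothetical example, not any country's actual law.

BRACKETS = [  # (lower threshold, marginal rate)
    (0,      0.00),
    (10_000, 0.20),
    (50_000, 0.40),
]

def income_tax(income: float) -> float:
    """Apply marginal rates bracket by bracket."""
    tax = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > lower:
            tax += (min(income, upper) - lower) * rate
    return tax

print(income_tax(60_000))  # 40_000 * 0.20 + 10_000 * 0.40 = 12000.0
```

Because the schedule is data, a policy analyst can swap in a reform scenario (new thresholds or rates) and rerun the same function over a population microdata file, which is essentially what OpenFisca-based microsimulation does at scale.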

Section 4: The Way Forward: Toward Harmonised AI Governance in Finance

The global AI regulatory mosaic—from Singapore’s FEAT principles to the EU’s risk-based AI Act, the UK’s principles-led approach, and the US’s executive-driven equity focus—reveals a shared imperative: embedding fairness, transparency, and accountability to mitigate biases and systemic threats. Standard-setters like the FSB and BIS illuminate AI’s double-edged nature, amplifying productivity while risking herding, cyber vulnerabilities, and disinformation cascades. Central banks’ tools, such as the ECB’s Athena and the Fed’s ML fraud models, demonstrate tangible gains in supervision, yet underscore challenges in data privacy and fragmentation. Open-source innovations like the Regulatory Genome Project and OpenFisca offer a path forward, enabling machine-readable policy analysis to foster cross-jurisdictional benchmarking and agile adaptation. As AI adoption surges, international collaboration—via G20 forums and IOSCO pilots—must prioritize ethical alignment, workforce reskilling, and resilient infrastructure to ensure AI bolsters financial stability rather than undermines it, promoting inclusive growth for emerging and advanced economies alike.

Section 5: Artificial Intelligence in Finance: Lessons for India from the Global Regulatory Landscape

India’s financial sector is undergoing rapid digital transformation, with artificial intelligence (AI) and machine learning increasingly deployed in credit underwriting, fraud detection, algorithmic trading, and regulatory reporting. While these technologies offer substantial efficiency gains and financial-inclusion benefits, they also introduce novel systemic risks, including model herding, third-party concentration, cyber vulnerabilities, hallucinations in generative AI, and amplified procyclicality.

Drawing on recent regulatory developments in Singapore, the European Union, the United Kingdom, and the United States, as well as reports from the BIS (2024), FSB (2024), IOSCO (2024), and the MIT AI Risk Repository (2024), this blog distils five key lessons for India: (i) adopt a hybrid risk-based and principles-based supervisory framework; (ii) mandate rigorous bias testing and human-in-the-loop requirements for high-risk applications such as credit scoring; (iii) strengthen macroprudential oversight of AI-induced systemic risks, including third-party dependencies and model homogeneity; (iv) accelerate the Reserve Bank of India’s adoption of supervisory AI tools while ensuring robust governance; and (v) leverage open-source regulatory technology (RegTech) platforms such as the Regulatory Genome Project and OpenFisca to enable machine-readable regulations and real-time policy simulation.

Implementing these lessons through a dedicated AI-in-Finance chapter in the forthcoming Digital India Act, updated RBI guidelines, and expanded regulatory sandboxes would allow India to harness AI’s transformative potential while preserving financial stability and consumer protection.

India’s fintech ecosystem is among the fastest-growing globally, with over 2,100 fintech entities and digital payment transactions reaching 147 billion in FY2024–25. AI is already embedded in credit scoring (e.g., thin-file lending via alternative data), fraud detection, robo-advisory, and high-frequency trading. The Reserve Bank of India (RBI) has taken initial steps through its 2019 discussion paper on AI/ML in financial services and subsequent regulatory sandbox cohorts. However, the absence of a comprehensive horizontal AI governance framework leaves significant regulatory gaps, particularly regarding systemic risk and consumer protection.

Key Lessons from Advanced Jurisdictions and Standard-Setters

  1. Adopt a hybrid risk- and principles-based approach

Singapore’s FEAT framework (MAS 2019) and the United Kingdom’s Consumer Duty (FCA 2023) demonstrate that a principles-based regime—emphasising fairness, explainability, accountability, and contestability—can foster innovation while maintaining flexibility. The EU AI Act (European Commission 2024) complements this with a risk-based classification, designating credit scoring and insurance pricing as “high-risk” activities subject to mandatory conformity assessments and fundamental-rights impact analysis. India could combine RBI’s existing sector-specific guidelines with a horizontal Digital India Act chapter that classifies AI use cases by risk tier.

  2. Strengthen bias mitigation and human oversight

The U.S. Executive Order 14110 (White House 2023) and AI Bill of Rights (White House 2022) mandate proactive bias testing in underwriting models. The EU AI Act requires human-in-the-loop for high-risk decisions. Given India’s socio-economic diversity, the RBI should require algorithmic impact assessments and third-party audits for lenders using alternative data, with explicit safeguards against caste-, religion-, or location-based proxies.

  3. Address AI-specific systemic risks

BIS (2024) and FSB (2024) identify five channels through which AI can amplify financial instability: (i) third-party concentration, (ii) model herding and correlated behaviour, (iii) heightened cyber risk, (iv) governance and data-quality gaps, and (v) fraud/disinformation via generative AI. The MIT AI Risk Repository (2024) provides a granular taxonomy that India’s Financial Stability and Development Council (FSDC) could adapt to develop stress-testing scenarios and concentration limits for critical AI service providers.
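One concrete tool for channel (i) is a concentration index over critical AI service providers. The sketch below computes a Herfindahl-Hirschman Index (HHI); the providers, their shares, and the 2,500-point “highly concentrated” threshold (a common antitrust rule of thumb) are illustrative assumptions.

```python
# Sketch of a third-party concentration check an FSDC-style council
# could run. Providers, shares, and threshold are illustrative.

def hhi(shares: dict) -> float:
    """Herfindahl-Hirschman Index (0-10,000 scale) from % market shares."""
    return sum(s ** 2 for s in shares.values())

cloud_ai_market = {"ProviderA": 45.0, "ProviderB": 30.0,
                   "ProviderC": 15.0, "others": 10.0}

score = hhi(cloud_ai_market)
print(f"HHI = {score:.0f}")  # 45^2 + 30^2 + 15^2 + 10^2 = 3250
if score > 2500:  # rule-of-thumb cutoff for "highly concentrated"
    print("Highly concentrated - consider concentration limits")
```

Tracking this index over time, per critical service (models, cloud, data), would give supervisors an early quantitative signal of the homogeneity and single-point-of-failure risks BIS and FSB describe.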

  4. Accelerate supervisory technology (SupTech) adoption

Leading central banks already deploy AI for supervision: ECB’s Athena and GABI tools, U.S. CCAR loss-projection models, and Singapore’s Veritas self-assessment toolkit. The RBI’s Centralised Information Management System (CIMS) and Supervisory Data Lake could be augmented with natural-language-processing and anomaly-detection modules, subject to strict governance protocols.

  5. Embrace open-source RegTech for agility

The University of Cambridge’s Regulatory Genome Project and OpenFisca illustrate how machine-readable regulations enable real-time compliance monitoring and policy simulation. Integrating such platforms into RBI’s regulatory reporting architecture would significantly reduce compliance costs for smaller fintechs while enhancing supervisory effectiveness.

Policy Recommendations for India

  1. Enact a dedicated AI-in-Finance chapter within the Digital India Act, establishing a risk-based classification and mandatory registration of high-risk AI systems.
  2. Issue updated RBI guidelines requiring bias audits, explainability reports, and human oversight for credit and insurance AI models by end-2026.
  3. Expand the regulatory sandbox to include systemic-risk testing cohorts and mandate participation of critical third-party AI providers.
  4. Establish an inter-regulatory AI Risk Council under the FSDC to monitor concentration and herding risks.
  5. Launch a public–private SupTech partnership to develop and pilot open-source supervisory AI tools.

By learning from global best practices while tailoring them to India’s unique institutional and developmental context, policymakers can ensure that AI becomes a powerful driver of inclusive and resilient financial-sector growth rather than a source of new fragility. The next blog in the “AI in Finance” series turns to the Indian context and India’s developing AI frameworks.

References

  1. Bank for International Settlements (BIS). 2024. Intelligent Financial System: How AI Is Transforming Finance. BIS Working Papers No. 1194.
  2. European Central Bank (ECB). 2024. “Supervisory Technology at the ECB: Athena, GABI and Beyond.”
  3. European Commission. 2024. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act).
  4. Financial Conduct Authority (FCA). 2023. “Consumer Duty Implementation and AI.”
  5. Financial Conduct Authority (FCA). 2024. AI in UK Financial Services: 2024 Update.
  6. Financial Stability Board (FSB). 2024. The Financial Stability Implications of Artificial Intelligence.
  7. Federal Reserve. 2024. “Artificial Intelligence in Banking Supervision: 2024 Progress Report.”
  8. International Organization of Securities Commissions (IOSCO). 2024. Artificial Intelligence in Capital Markets: Use Cases, Risks, and Challenges.
  9. Massachusetts Institute of Technology (MIT). 2024. The MIT AI Risk Repository.
  10. Monetary Authority of Singapore (MAS). 2019. “Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector.”
  11. Monetary Authority of Singapore (MAS). 2023. “Project MindForge and Veritas Toolkit.”
  12. National Association of Insurance Commissioners (NAIC). 2024. “Artificial Intelligence in Insurance: Principles and Best Practices.”
  13. OpenFisca. 2024. “OpenFisca: Rules as Code.”
  14. Regulatory Genome Project (RGP). 2024. “Regulatory Genome Project Annual Report 2024.”
  15. Securities and Exchange Commission (SEC). 2024. “AI and Market Surveillance Tools.”
  16. White House. 2022. “Blueprint for an AI Bill of Rights.”
  17. White House. 2023. “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”

Abhijeet Singh is former Research Fellow at NIPFP.