Increasing Adoption of Artificial Intelligence Drives Global Push for Regulation

The rapid expansion of generative artificial intelligence (gen AI), and of artificial intelligence more broadly, in organizations around the world has led to a global push for regulation.

In the US, President Joe Biden signed an executive order on artificial intelligence in October 2023, establishing AI standards that will ultimately be codified by financial regulators. Over the past five years, 17 US states have enacted 29 bills focused on regulating the design, development and use of artificial intelligence, according to the Council of State Governments.

In China, President Xi Jinping last year introduced the Global AI Governance Initiative, which outlines a global plan focused on the development, security and governance of AI. Chinese authorities have also released “interim measures” to regulate the provision of gen AI services, imposing obligations relating to risk assessment and mitigation, transparency and accountability, as well as user consent and authentication.

Recently, Japanese Prime Minister Fumio Kishida unveiled an international framework for the regulation and use of gen AI and launched the Hiroshima AI Process Friends Group. The group, which focuses on implementing principles and codes of conduct to address risks related to artificial intelligence, has already won support from 49 countries and regions, according to an Associated Press report from May 3.

Impact of the EU AI Act on financial services companies

The European Union’s AI Act is perhaps the most impactful and innovative legislation to date. Approved by the European Parliament in March 2024, the regulatory framework represents the world’s first major law regulating artificial intelligence and is intended to serve as a model for other jurisdictions.

According to Dataiku, an American artificial intelligence and machine learning (ML) company, the EU AI Act will have a considerable impact on the financial services sector, and companies should prepare to comply now.

Under the AI Act, financial companies will have to classify AI systems into one of four risk levels and take specific mitigation measures for each category. They will need to explicitly record the “intended purpose” of each AI system before starting to develop the model. While Dataiku says there is some uncertainty about how this will be interpreted and enforced, it notes that this points to a stricter emphasis on upfront documentation and traceability than current regulatory standards require.
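
As an illustration of how such a classification and a recorded “intended purpose” might be captured in practice, here is a minimal registry-entry sketch; the `RiskTier` labels, the `ModelRecord` fields and the credit-scoring example are assumptions made for illustration, not terminology prescribed by the AI Act or by Dataiku.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative labels for the AI Act's four risk levels."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class ModelRecord:
    """Hypothetical registry entry: the intended purpose is recorded before development begins."""
    name: str
    intended_purpose: str            # recorded explicitly up front
    risk_tier: RiskTier
    mitigation_measures: list[str] = field(default_factory=list)


# Example: a credit-scoring model would typically land in the high-risk tier.
record = ModelRecord(
    name="credit-scoring-v1",
    intended_purpose="Assess consumer creditworthiness for retail loan applications",
    risk_tier=RiskTier.HIGH,
    mitigation_measures=["human review of declines", "bias testing on protected attributes"],
)
print(record.name, record.risk_tier.value)
```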

Furthermore, the AI Act introduces post-market monitoring (PMM) obligations for AI models in production. This means that companies will be required to continuously monitor and validate that their models remain in their original risk category and continue to serve their intended purpose. Otherwise, a reclassification will be necessary.
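
Below is a minimal sketch of what such a post-market monitoring check could look like in code; the AUC metric, the `needs_reclassification_review` helper and the drift threshold are illustrative assumptions, not requirements spelled out in the AI Act.

```python
# A minimal post-market monitoring (PMM) check: compare a live performance metric
# against the value recorded when the model was approved, and flag the model for a
# reclassification review if it has drifted too far. The AUC metric, the helper name
# and the 0.05 threshold are illustrative assumptions, not values taken from the AI Act.

def needs_reclassification_review(baseline_auc: float,
                                  live_auc: float,
                                  max_drop: float = 0.05) -> bool:
    """Return True when live performance falls far enough below the approved
    baseline that the model may no longer fulfil its recorded intended purpose."""
    return (baseline_auc - live_auc) > max_drop


# Example: a model approved at AUC 0.82 now measures 0.74 in production.
if needs_reclassification_review(baseline_auc=0.82, live_auc=0.74):
    print("Flag model for reclassification review and documentation update")
```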

Dataiku recommends that financial services firms promptly familiarize themselves with the requirements of the AI Act and evaluate whether their current practices meet these standards. Additionally, documentation should begin early in the development of any new model, particularly when the model is likely to reach production.

In addition, Dataiku warns that the EU’s proactive stance could encourage other regions to accelerate the development and implementation of AI regulations. Technology consultancy Gartner predicts that, by 2026, 50% of governments around the world will enforce the responsible use of AI through regulations, policies and data privacy requirements.

An innovative regulatory framework

The EU AI Act is the first comprehensive regulatory framework aimed specifically at AI. The legislation adopts a risk-based approach to products and services that use AI, imposing different levels of requirements depending on the perceived threats that AI applications pose to society.

In particular, the law prohibits AI applications that pose “unacceptable risks” to the EU’s fundamental rights and values. These applications include social scoring systems and biometric categorization systems.

High-risk AI systems, such as remote biometric identification systems, AI used as a safety component in critical infrastructure, and AI used in education, employment and credit scoring, must comply with rigorous standards relating to risk management, data governance, documentation, transparency, human oversight, accuracy and cybersecurity, among others.

Gen AI systems are also subject to a series of obligations. In particular, these systems must be developed with safeguards against generating content that violates EU law, and providers must document their use of copyrighted training data and comply with transparency requirements.

Foundation models, the large models that underpin gen AI systems, face additional obligations, such as demonstrating the mitigation of potential risks, using unbiased datasets, ensuring performance and security throughout the model lifecycle, minimizing energy and resource use, and providing technical documentation.

The AI Act was finalized and approved by all 27 EU member states on 2 February 2024 and by the European Parliament on 13 March 2024. Following final approval by the Council of the EU on 21 May 2024, the AI Act will now be published in the Official Journal of the EU.

The provisions will come into force gradually: member states will be required to ban prohibited AI systems six months after publication, the rules for general-purpose AI systems such as chatbots will begin to apply one year after the law comes into force, and by mid-2026 the full set of regulations will be in place.

Violations of the AI Act will result in fines of up to €35 million ($38 million), or 7% of a company’s global revenue.
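
As a rough worked example of that ceiling, the sketch below assumes the applicable cap is the higher of the two figures (an assumption, since the article does not say which applies):

```python
# Illustrative sketch of the fine ceiling described above. Assumption (not stated
# in the article): the applicable cap is the higher of the fixed amount and the
# revenue-based amount.
def max_fine_eur(global_revenue_eur: float) -> float:
    return max(35_000_000, 0.07 * global_revenue_eur)

# A firm with €1 billion in global revenue: 7% is €70 million, so the
# revenue-based figure sets the ceiling rather than the €35 million floor.
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # 70,000,000
```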

The adoption of artificial intelligence is increasing

Globally, jurisdictions are racing to regulate AI as adoption of the technology increases. A McKinsey survey found that 72% of organizations have adopted AI in at least one business function this year, up from 55% in 2023.

Gen AI is the number one type of AI solution adopted by companies around the world. A Gartner study conducted in Q4 2023 found that 29% of respondents from organizations in the US, Germany and the UK are using gen AI, making it the most frequently deployed AI solution.

Organizations that have adopted AI in at least one business function. Source: McKinsey & Company, May 2024

Featured image credit: Edited by freepik
