
The AI Act and Its Consequences for Credit Management

New regulations for high-risk AI are coming into force and will change existing processes. In SCHUMANN Insights you can read about the effects the EU AI Act will have.
Blog Post, SCHUMANN Insights, Videos, by Prof. Dr. Matthias Schumann

The EU AI Act and High-Risk AI: What Credit Managers Need to Know

The new legislation on the use of artificial intelligence passed by the EU in June has caused a stir in many industries. Critics argue that it will obstruct innovation and increase bureaucracy. Supporters, on the other hand, see the new Act as necessary to protect individuals and society from the possible risks and poor decisions of AI systems. But which consequences of the legislation are particularly important for credit management?

High-Risk AI Systems and Credit Management

The "high-risk AI systems" defined in the Act are especially relevant for credit management. These include AI solutions that process personal data, including biometric data. Particularly in focus are systems based on large language models (LLMs), which can be used in a wide variety of applications. These solutions have the potential to strongly influence decision-making processes, which presents special challenges in the evaluation of creditworthiness.

Face Recognition and Prohibition of Discrimination

The new Act specifically prohibits the use of AI systems that could discriminate against people on the basis of attributes such as age, disability or social and economic situation. This especially affects technologies for facial recognition, for inferring emotions and for influencing the formation of opinion. These limitations are intended to ensure that AI-supported systems work fairly and transparently, without disadvantaging particular groups.

Creditworthiness Evaluation in Focus

An especially important provision in the annexes to the Act concerns systems that evaluate the creditworthiness of individuals. These applications are explicitly categorized as high-risk. Notably, these regulations do not apply to the evaluation of legal entities, which is of particular importance in the business-customer segment. Here the question arises of how one-person companies are to be treated, as they operate in a grey zone between natural persons and legal entities.

For B2B business with legal entities there are no specific requirements for AI systems as long as no complete credit or creditworthiness evaluation is performed. Individual aspects such as payment behaviour do not fall under the high-risk category. This means that in credit risk management at the level of legal entities, the regulations are less strict than for the evaluation of natural persons.

Requirements on High-Risk AI Systems

For AI systems classified as high-risk, there are wide-ranging testing, documentation and notification obligations. In addition to providing detailed technical documentation, these systems must be tested and validated to ensure that their mode of operation is comprehensible. These requirements are similar to the BaFin regulations already in place for rating procedures. In addition, the systems must undergo regular evaluation to ensure that they function correctly.

A central element of these requirements is risk management. Companies that use high-risk AI solutions must set up a quality management system that guarantees compliance with the regulations. Conformity assessment procedures play a key role here, as they monitor and document compliance with the requirements. These regulations are especially relevant for systems that process personal data, such as those used in public security and administration.

Future Developments and Their Effects on Credit Risk Management

It remains to be seen whether the new EU committees to be set up for the consultation and evaluation of AI solutions will really improve the quality of the systems. Providers and operators of high-risk AI systems are obliged to register their systems in an EU database. How exactly the conformity assessment will take place in individual cases also still needs to be clarified.

In credit risk management targeting legal entities there are currently no specific rules for the use of AI systems. This is different for natural persons, for whom the existing rules are extended by the AI Act. Providers of financial services must also continue to observe BaFin's strict requirements on the comprehensibility of their systems.

Summary: The Role of Risk Management Software

In conclusion, it can be said that there are not yet any specific rules for the use of AI systems in B2B business with legal entities. However, AI is increasingly being used to provide additional information in support of decision-making processes. Risk management software will play a central role here by ensuring that companies meet the new regulatory requirements without reducing their ability to innovate. AI-supported solutions will continue to be used, primarily in relation to business customers. They will not necessarily require a completely new design of rating systems, but will lead to the optimization of decision-making processes through intelligent data analysis.

About the Author
Prof. Dr. Matthias Schumann

Since 1991, Prof. Dr. Matthias Schumann has held a professorship in Business Administration and Information Systems (Chair of Application Systems and E-Business) at the University of Göttingen. He also heads the joint computing center of the Faculty of Economics and the Faculty of Social Sciences. He is a shareholder of Prof. Schumann GmbH.

Prof. Schumann's research interests include information systems at financial service providers and systems for credit management, as well as issues related to knowledge and education management. Prof. Schumann has a wide range of experience in consulting companies, extensive lecturing activities and more than 350 publications.
