Building trust in artificial intelligence – white paper for the certification of AI


Artificial intelligence is changing our society, our economy and our everyday lives in fundamental ways. And in doing so, it is creating some exciting opportunities in how we live and work together. For example, it already helps doctors to better evaluate x-rays, which often leads to a more accurate diagnosis.

It is the basis of chatbots that provide helpful answers to people looking for advice on, for example, insurance. And, before too long, it will be enabling cars to become more and more autonomous. Current forecasts indicate that the number of AI applications is set to increase exponentially over the coming years. McKinsey, for example, projects additional global growth from AI of up to 13 trillion U.S. dollars by 2030.

At the same time, it is clear that we need to ensure that our use of AI and the opportunities it brings remains in harmony with the views and values of our society. Acting under the aegis of Kompetenzplattform KI.NRW, an AI competence platform in the state of North Rhine-Westphalia, an interdisciplinary team has come together to develop a certification process for AI applications to be carried out by accredited examiners.

This will confirm compliance with a certified quality standard that, in turn, will enable technology companies to verifiably design AI applications that are technically reliable and ethically acceptable.

“The purpose of the certification is to help establish quality standards for AI made in Europe, to ensure a responsible approach to this technology and to promote fair competition between the various players,” says Prof. Dr. Stefan Wrobel, director of Fraunhofer IAIS and professor of computer science at the University of Bonn.

Focusing on the human aspect

Artificial intelligence has the potential to enlarge our capabilities and provide us with new knowledge. However, once we begin to base our decisions on machine learning that is either fully or partially automated, we will face a host of new challenges. The technical feasibility of such applications is only one consideration.

First and foremost, however, we must resolve the basic philosophical, ethical and legal issues. To ensure that the needs of people are firmly embedded at the center of development of this technology, close dialog between the fields of information technology, philosophy and law is necessary.

The team of experts has now published a white paper in which they detail their interdisciplinary approach to the certification process. For example, they explain the ethical principles involved. “Anyone using AI should be able to act properly in accordance with their moral convictions, and nobody should be curtailed in their rights, freedom or autonomy,” says Prof. Dr. Markus Gabriel, professor of philosophy at the University of Bonn. Legal questions have also been addressed.

“For example, we need to determine how AI applications can be made to conform to the basic values and principles of a state governed by the rule of law and subject to the principles of freedom,” explains Prof. Frauke Rostalski, professor of law at the University of Cologne.

Priorities for building trust in the use of AI

This interdisciplinary approach has identified a number of ethical, legal and technological issues of relevance to the use of AI. These are all examined in the white paper. The criteria employed in a certification process should include fairness, transparency, autonomy, control, data protection, safety, security and reliability. The EU's recommendations, which address similar criteria, thus also serve as a point of reference for the KI.NRW certification project.

The certification process will revolve around questions such as: Does the AI application respect the laws and values of society? Does the user retain full and effective autonomy over the application? Does the application treat all participants in a fair manner? Does the application function and make decisions in a way that is transparent and comprehensible? Is the application reliable and robust? Is it secure against attacks, accidents and errors? Does the application protect the private realm and other sensitive information?
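
Purely as an illustration of how such questions might be handled in practice – this is our own sketch, not the official inspection catalog – the criteria could be organized into a machine-readable checklist that an examiner fills in per application. All names and field structures below are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical criteria mirroring the questions above; the structure is an
# illustrative assumption, not the official KI.NRW inspection catalog.
CRITERIA = [
    "respects_laws_and_societal_values",
    "user_retains_autonomy_and_control",
    "treats_all_participants_fairly",
    "decisions_transparent_and_comprehensible",
    "reliable_and_robust",
    "secure_against_attacks_accidents_errors",
    "protects_privacy_and_sensitive_data",
]

@dataclass
class InspectionChecklist:
    application_name: str
    findings: dict = field(default_factory=dict)  # criterion -> (passed, note)

    def record(self, criterion: str, passed: bool, note: str = "") -> None:
        # Only criteria from the agreed catalog may be assessed.
        if criterion not in CRITERIA:
            raise ValueError(f"Unknown criterion: {criterion}")
        self.findings[criterion] = (passed, note)

    def all_passed(self) -> bool:
        # Certification would require every criterion to be assessed and met.
        return len(self.findings) == len(CRITERIA) and all(
            passed for passed, _ in self.findings.values()
        )
```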

Building in checks and controls at the design stage

According to the white paper, it must be determined during the initial design process whether the application is ethically and legally permissible – and, if so, which checks and controls must be formulated to govern this process. One necessary criterion is to ensure that use of the application does not compromise the user's ability to make a moral decision – just as if they had the option of declining to use the AI – and that their rights and freedoms are not curtailed in any way.

Transparency is another important criterion: the experts emphasize that information on correct use of the application should be readily available, and the results determined through the use of AI in the application must be fully interpretable, traceable and reproducible by the user. Conflicting interests, such as transparency and the nondisclosure of trade secrets, must be balanced against one another.
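
As a rough sketch of how a developer might support traceability and reproducibility in practice – again our own illustration, not a requirement quoted from the white paper – each AI-generated result could be stored together with the model version, the random seed and a hash of the input, so the same result can be re-derived and audited later. All function and field names here are hypothetical.

```python
import hashlib
import json
import random
from datetime import datetime, timezone

def traceable_prediction(model_version: str, seed: int, input_data: dict) -> dict:
    """Run a (dummy) prediction and return it together with the metadata
    needed to reproduce and audit it later. The 'prediction' is a placeholder
    for whatever the real AI application would compute."""
    random.seed(seed)  # fixing the seed makes the run reproducible
    prediction = random.random()  # stand-in for the actual model output

    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "seed": seed,
        "input_sha256": hashlib.sha256(
            json.dumps(input_data, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }

# The stored record lets an examiner re-run the same model version with the
# same seed and input and check that the output matches.
print(traceable_prediction("v1.2.0", seed=42, input_data={"age": 54, "bmi": 27.1}))
```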

The plan is to publish an initial version of the inspection catalog by the beginning of 2020 and then to begin certifying AI applications. The project also involves Germany’s Federal Office for Information Security (BSI), which has extensive experience in the development of secure IT standards; this know-how will feed into the certification process. Finally, given that AI is constantly evolving, the inspection catalog itself will be a “living document” in need of continual updating.

Participating institutions
Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS (Prof. Dr. Stefan Wrobel, director; Dr. Maximilian Poretschkin, project leader); University of Bonn, Center for Science and Thought (Prof. Dr. Markus Gabriel, director; Jan Voosholz, project leader); University of Cologne, Chair of Criminal Law, Criminal Procedure Law, Philosophy of Law, and Comparative Law (Prof. Frauke Rostalski, project leader).

Download the white paper (in German): http://www.iais.fraunhofer.de/ki-zertifizierung
Competence platform KI.NRW: http://www.ki.nrw

Media Contact

Katrin Berkler, Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS
