Many call AI the “game-changer” of the 21st Century – what does the term mean and what actually makes artificial intelligence so special?
XV: One definition of artificial intelligence might, for example, be that AI tries to make machines intelligent. That means giving machines the ability to understand context and act appropriately and with foresight in a given situation.
TB: While AI is certainly a special field of technology, it should be seen more as the next step in automation. A smart heating control system or a factory robot automated for efficiency are just two examples of this advancement. From this perspective, AI is part of a long but direct line that can be traced back to the first mechanical weaving looms from the 19th Century.
Safety is a basic requirement for everybody – this also applies to new technologies such as AI. As a consequence, one question keeps coming up in public discourse: How do you make artificial intelligence safe? Is it even possible to standardize and regulate AI?
XV: Innovative, connected, and smart products can only work to their full potential if they can be deployed and used safely. At DEKRA DIGITAL, we are convinced that users will not trust innovation or new technologies without verified safety. This also applies to artificial intelligence.
TB: Regulation and certification of AI are therefore important and the right approach, especially in the context of the market and everyday life. Rules come into play as soon as solutions containing AI have to work together with users or other systems. There is also the ethical aspect: Legislation sets a framework for what AI is and is not allowed to do. This also helps to make artificial intelligence more transparent. These regulations can then be implemented in a binding manner through technical standards and norms – and certified by independent third parties.
Let’s now consider the legislation: The European Union has set itself the goal of only allowing safe and tested AI. What would the corresponding laws have to look like so that AI can be effectively regulated?
TB: There’s a lot going on here at the moment, both on the part of the standardization organizations and among legislators at a national and international level. While many initiatives and concrete projects are still at a relatively early stage, a first “AI Management Systems Standard” is in preparation at ISO/IEC, for example. Furthermore, the European Commission is currently laying the groundwork for a legal framework governing the development and use of artificial intelligence. A first draft of the corresponding framework law is expected by the end of the first half of 2021.
Concrete, implementable legal requirements are therefore still lacking. Nevertheless, there are numerous products on the market that work with artificial intelligence. Is there tension here between innovation and “safe” AI?
TB: Indeed: Regulatory efforts are still at a rather early stage, especially compared to the growth rate of the AI market. Some very innovative companies are creating facts on the ground with products that have not actually been sufficiently tested when they enter the market – or at least not as intensively as comparable “non-smart” products had to be in the past. This is possible precisely because there are hardly any regulations yet.
XV: The key is to have smooth and efficient processes, from regulation and certification through to implementation. AI is highly dynamic and continues to develop at a rapid pace. Regulators must therefore adapt to what is happening in the market and to this dynamic technology environment. Otherwise, actual practice and regulatory ideals will diverge.
How do you bring these two aspects together – is this where DEKRA comes in?
TB: Based on its identity as an independent third party – a role it has assumed for almost 100 years now – DEKRA can play a key role in the AI ecosystem as the “honest broker.” That means DEKRA is the trusted party that mediates between market participants and both supports and verifies compliance with common standards. In this way, the entire ecosystem can develop in a direction that all participants want.
XV: However, artificial intelligence is only one part of ensuring that data-driven products and services are safe. Cyber security and functional safety are equally important. In addition, AI regulation requires an industry-specific view, which DEKRA already has.
Where does the potential of AI lie for DEKRA? And where could artificial intelligence be used at DEKRA?
XV: AI as a technology field is becoming more and more important for DEKRA in many ways. For example, AI can be used to make internal processes more efficient. On top of that, we will use AI to improve and enhance existing products and services. From automatically generated inspection reports to smart, camera-based inspection tools, many things are conceivable. As another example, the more “digital” vehicles become, the greater the need for smart technologies to check that they are functioning properly.
TB: A new business area for DEKRA is without doubt the testing and certification of artificial intelligence. As soon as regulatory requirements exist for AI systems, DEKRA will test and certify compliance with them as a neutral third party. AI as a technology brings new challenges with it, such as the need for continuous testing of self-learning systems instead of the mainly point-in-time testing carried out in the past – the concept of “permanent monitoring.”
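The shift from one-off certification to permanent monitoring can be illustrated with a minimal sketch: a deployed self-learning system is re-checked continuously against a baseline distribution recorded at certification time, and an alert triggers re-testing when its behavior drifts. This is a hypothetical illustration using the Population Stability Index (PSI), not DEKRA tooling; all names and thresholds here are assumptions.

```python
# Illustrative sketch of "permanent monitoring": rather than a single
# point-in-time test, the live score distribution of a certified model
# is compared continuously against the baseline recorded at
# certification time. All names and thresholds are hypothetical.
import math
from typing import Sequence


def histogram(values: Sequence[float], edges: Sequence[float]) -> list[float]:
    """Relative frequency of `values` in the bins defined by `edges`."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(edges) - 1):
            # Last bin is closed on the right so edges[-1] is included.
            if edges[i] <= v < edges[i + 1] or (i == len(edges) - 2 and v == edges[-1]):
                counts[i] += 1
                break
    total = max(len(values), 1)
    return [c / total for c in counts]


def psi(baseline: Sequence[float], live: Sequence[float], edges: Sequence[float]) -> float:
    """Population Stability Index between two samples; higher means more drift."""
    eps = 1e-6  # avoid log(0) for empty bins
    b = histogram(baseline, edges)
    l = histogram(live, edges)
    return sum((li - bi) * math.log((li + eps) / (bi + eps)) for bi, li in zip(b, l))


# Certification-time baseline vs. two later monitoring snapshots (toy data).
edges = [0.0, 0.25, 0.5, 0.75, 1.0]
baseline = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
stable = [0.15, 0.22, 0.35, 0.41, 0.58, 0.72, 0.79, 0.88]
drifted = [0.81, 0.85, 0.9, 0.92, 0.95, 0.97, 0.98, 0.99]

ALERT_THRESHOLD = 0.25  # a common rule of thumb; real limits would be set by regulation
print(psi(baseline, stable, edges) < ALERT_THRESHOLD)    # stable system, no alert
print(psi(baseline, drifted, edges) >= ALERT_THRESHOLD)  # drift detected, trigger re-test
```

In a real monitoring scheme, the distribution check would run on a schedule against production traffic, and an alert would trigger a re-certification workflow rather than just a printout.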
The AI Hub was founded in 2020 to unleash this potential. What is your team’s goal?
TB: We want to make AI an essential driver of value creation for DEKRA by 2025. The work required to achieve this can be summarized in three points:
Are there already initial AI projects and what do they look like?
XV: What you don't see from the outside is that a lot has already been tried out within the company, and DEKRA has gathered a great deal of knowledge. The first use cases include image recognition software that automatically reads data from vehicle documents, a vehicle scanner that detects damage with the help of an algorithm, and our test lab that certifies products with Alexa built in. The most important insight gained so far: when test objects change technically, our testing methods have to change with them.
These lessons learned will be incorporated into upcoming services that are currently being developed. We are also building upon existing expertise in technical implementation and deployment of AI solutions. Our Big Data teams in Málaga play a key role here, but we will also strengthen our collaboration with partners and startups.