Artificial intelligence and ethics – are there natural limits for digital technologies?

Artificial intelligence (AI) is a key technology of the digital age. Hardly any other technology is as helpful and, at the same time, as controversial. Almost everyone now uses software with AI algorithms, and yet many people worry about their jobs or their privacy as AI enters ever more areas of our lives.

To many people, AI seems strange because they cannot see how it acts and how it arrives at its results. Few know the technology well enough to understand what actually happens inside the devices and applications that use AI algorithms.

In many places we are already using artificial intelligence in various forms: speech recognition in smartphones, price optimisation for online orders, or congestion avoidance in navigation systems.

I am firmly convinced that it will not be possible to handle the volume of data efficiently in the future without self-learning algorithms. If we look at the possibilities offered by AI, we see enormous potential. At the same time, we should bear in mind that AI does nothing other than identify patterns based on mathematical procedures and apply these patterns for decision-making purposes.
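To make this concrete, here is a minimal sketch in Python of what “identifying patterns and applying them to decisions” means in practice. The purchase scenario and the toy data are invented for illustration; any standard learning algorithm would do.

    # A model extracts a statistical pattern from examples and reuses it.
    # Toy data, invented for illustration: [price, delivery_days] -> bought?
    from sklearn.linear_model import LogisticRegression

    X_train = [[10, 2], [12, 1], [30, 7], [25, 6]]
    y_train = [1, 1, 0, 0]   # 1 = customer bought, 0 = customer did not

    model = LogisticRegression()
    model.fit(X_train, y_train)        # identify the pattern in the data

    # Apply the learned pattern to a new, comparable situation
    print(model.predict([[11, 2]]))    # -> [1]: the pattern suggests "buy"

Nothing in this process involves understanding: the model simply maps new inputs onto patterns it has seen before.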

The example of cat and dog pictures shows how limited today’s technology is compared to the human brain. An AI needs more than a million images to tell the two apart with a high degree of certainty, whereas a small child makes this distinction very reliably after only a few observations. The same applies to learning movements, which is much harder for a robot: it needs many hundreds of thousands of repetitions, where a human needs only 1,000 to 2,000 repetitions to learn a new movement.

Concrete application scenarios show the potential

The Platform Learning Systems (PLS) has developed extensive use cases that demonstrate the use of AI in everyday life in a wide variety of areas. One example is the use of AI in the transport sector, both in passenger transport and in logistics. In a networked economy, AI leads to better use of vehicles and infrastructure and helps to avoid traffic jams and delays, for example.

The use of AI in the medical sector is even more striking. The first applications already exist in cancer detection, for example for skin cancer. In the future, AI-based cancer detection and therapy recommendations could significantly improve the survival and recovery chances of those affected. The same applies to the use of AI in coordinating rescue operations: fed with extensive data, an AI algorithm can deliver quick decision recommendations in critical situations and help where every second counts.

All application scenarios have one thing in common: the greatest effect is achieved when artificial and human intelligence work closely together. AI cannot make decisions on its own, but it can support a human being in the decision-making process. It can draw on far larger amounts of data and perform analyses at enormous speed, and its results are then available to humans, who contribute their experience.

Where does the artificial “knowledge” come from?

Like any technology, AI is basically value-neutral and starts out without experience. In the case of learning systems, an algorithm builds up “experience” from training data and applies it again in comparable situations. The performance and decision-making ability of an AI therefore depend strongly on the data with which it was trained.

Face recognition software developed and trained exclusively in Asia will probably not be usable in Europe. As a general rule, unbalanced training material can lead to discriminatory effects, much as an AI-based chatbot that is constantly insulted and tries to react to it will eventually adopt offensive language itself.
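A deliberately oversimplified sketch of this effect, using invented toy data and a plain nearest-neighbour vote: when one group dominates the training material, the model’s answers drift towards that group even for inputs that clearly belong to the other.

    from collections import Counter

    def knn_predict(train, query, k=3):
        # Classify by majority vote of the k nearest training examples.
        nearest = sorted(train, key=lambda pair: abs(pair[0] - query))[:k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

    # Unbalanced training material: nine examples of group "A", one of "B"
    train = [(x, "A") for x in range(9)] + [(100, "B")]

    # 95 is clearly closest to the single "B" example, yet two of its three
    # nearest neighbours belong to the over-represented group "A"
    print(knn_predict(train, 95))   # -> "A"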

For the use of AI, this means that the careful selection of training data is a crucial component in developing an AI algorithm correctly. At the same time, operating limits should be defined when AI is used, together with checks that determine whether an AI is still within the limits set for it.
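What such a check could look like is sketched below; the thresholds and the escalation rule are invented for illustration, not a prescribed standard. The idea is simply that the model only decides where it has relevant “experience” and sufficient confidence, and hands everything else to a human.

    # Invented operating limits, purely for illustration
    TRAINED_MIN, TRAINED_MAX = 0.0, 100.0   # input range covered in training
    MIN_CONFIDENCE = 0.9

    def within_limits(value, confidence):
        in_known_range = TRAINED_MIN <= value <= TRAINED_MAX
        return in_known_range and confidence >= MIN_CONFIDENCE

    def decide(value, prediction, confidence):
        if within_limits(value, confidence):
            return prediction
        return "escalate to a human reviewer"   # AI supports, the human decides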

Need for standardization and risk assessment

The question of specifications for AI algorithms in particular occupies politicians and organizations such as the German Institute for Standardization (DIN). For this reason, a process was started last year to draw up a standardization roadmap for AI with different focal points. The goal is to create a framework for the standardization of artificial intelligence and its use cases, so that consumers can use AI under trustworthy conditions.

An important point here is transparency, i.e. disclosure on the basis of documentation standards, which should ensure that an AI always remains explainable. By adhering to specified standards, additional regulatory disclosure obligations can be avoided. This includes the traceability of the data with which an AI algorithm was trained, so that the recommendations of an AI can always be understood against the background of its previous “experiences”.
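As a hypothetical illustration of such traceability, a minimal provenance record might look like the sketch below. All field names and values are invented; an actual documentation standard, such as one emerging from the DIN roadmap, would prescribe its own schema.

    # Hypothetical provenance record for a trained model; every field
    # name and value here is invented for illustration.
    model_card = {
        "model": "skin-lesion-classifier-v3",
        "training_data": "dermatology-images-2019-08",
        "data_origin": ["EU"],               # where the training data came from
        "known_limits": "trained on adult patients only",
        "trained_on": "2019-09-12",
    }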

Of course, this requirement is very abstract and certainly not applicable to every use of AI. A distinction should therefore be made between AI applications in different risk classes, where the risk is assessed as the damage multiplied by its probability of occurrence.

To put it in concrete terms, let me give an example: a suboptimal recommendation for a consumer product leads at most to minor damage, whereas a human life is put at concrete risk if cancer is diagnosed incorrectly.
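Expressed as a small sketch, with damage values and class thresholds invented purely for illustration, the two examples land in very different risk classes:

    def risk_score(damage, probability):
        # risk = damage multiplied by the probability of occurrence
        return damage * probability

    def risk_class(score):
        # Class thresholds invented for illustration
        if score < 1:
            return "low"
        if score < 100:
            return "medium"
        return "high"

    # A poor product recommendation: small damage, even if it happens often
    print(risk_class(risk_score(damage=1, probability=0.3)))        # -> low
    # A missed cancer diagnosis: enormous damage, even at low probability
    print(risk_class(risk_score(damage=10_000, probability=0.01)))  # -> high

The rare but grave error outweighs the frequent but trivial one, which is exactly why risk classes cannot be read off the probability alone.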

A balanced selection of training data should ensure that discriminatory behaviour does not occur in critical situations. Ultimately, however, it will not be possible to build a perfect, error-free system. The famous question about AI in autonomous driving, whether to swerve towards an elderly lady or a child, is therefore misleading. What matters is that AI decisions are on average much better than purely human decisions; on that basis, trust in AI can be built.

Scope for political action

The theoretically possible failure of AI in certain decision-making situations must under no circumstances lead to a legal and regulatory framework that nips innovation in the bud. We have accepted in the past that new technologies, in addition to their undisputed added value, can also lead to people being harmed. The car is one example: it makes today’s mobile life possible in the first place, yet it also causes millions of traffic accidents and thousands of deaths every year, in some cases among people who do not use cars at all (e.g. pedestrians).

As with the standardization efforts described above, a reliable framework should be created that people can build on and trust. This includes establishing a right to explainability, which for certain risk classes could also be legally enforceable.

When defining risk classes, the expected risk, i.e. the possible damage weighted by its probability, should always be the basis, not the maximum theoretically conceivable damage. This also gives people a reasonable frame of reference for dealing with AI.

I am convinced that regulation and legislation should adapt to dynamic technological development. In the future we will therefore need fewer rigid rules fixed for generations and more flexible solutions that can keep pace with rapid development.

Where will you start tomorrow? 

Artificial intelligence is the future technology that will be built into most software applications in the foreseeable future. The possible uses of self-learning systems are almost limitless and, in combination with human intelligence, can unleash enormous potential.

For you as a company, this means that you should look into this technology:

  1. read through the application scenarios, such as adaptive robot tools in assembly or the information butler for the office, and see how AI could be used in your company.
  2. develop ideas with your employees about where your products and services could be improved by artificial intelligence.
  3. look specifically at which decisions in your company, and in which decision situations, the use of AI would be a great support.
  4. start a pilot project dealing with new AI-based products and services, for example in cooperation with a university in your area or with the support of a start-up.
  5. get involved as an entrepreneur in networks or associations dealing with AI. Your opinion is important in shaping this trend-setting technology.

At the end of the day, AI technology is like electricity: because of its extensive advantages, it will at some point be (almost) everywhere. For you and your company, this means that in the long run you will not be able to do without AI-based products or services. The sooner you get involved, the better.

 

Dr. Alexander Bode