echo interview, November 2024

Is artificial intelligence outstripping us?

ELIPSLIFE ECHO – A SERIES OF DISCUSSIONS WITH COMPANY REPRESENTATIVES ON CORE TOPICS FROM THE KTG AND UVG ECOSYSTEM

echo interview with Karin Frick, Principal Researcher, Gottlieb Duttweiler Institute, Rüschlikon

elipsLife: Ms Frick, the Gottlieb Duttweiler Institute has been dealing with questions about the future since it was founded in 1963. As Principal Researcher, you analyse trends in the economy and in society. Our future is, so to speak, your everyday life. Can you still sleep peacefully in the face of all the uncertainties in the world?
Dealing with the future is not the privilege of researchers. Any decision taken by each and every one of us invariably involves the future, whether it’s about choosing a career or partner, further training or buying a property. All these decisions are based on implicit ideas about what this future should look like. So the future is always present, but it is unpredictable. It is speculation, but it almost never comes as a surprise, as it is the result of preceding developments and decisions. Although some events such as accidents or attacks are unexpected, new technologies usually take decades to become established after they have been invented. For the public, the recent rapid development of artificial intelligence (AI) may be surprising, but for people familiar with the subject, it is not. The future is a field of possibilities, and examining these possibilities, the risks and opportunities, is something that most people do implicitly. If they do it professionally, then they do so more explicitly. So I probably sleep better than someone who doesn’t deal with these issues all the time.

As you have indicated, artificial intelligence is a hot topic. However, it often seems that this term means different things to different people. How do you define AI?
I understand AI as being software. Unlike the software we have been using for decades, it can learn and, with increasing autonomy, answer questions based on unstructured data. You can now talk to the software, whereas you used to have to program it. What’s more, the program continues to evolve by itself. This ability to learn is a key difference compared to previous software. Today, AI-enabled machines have eyes and ears to a certain extent; they can listen and make interaction possible – even simulating human-like behaviours.

AI is created by humans, whose intelligence is known to be quite fallible. Why shouldn’t AI be just as fallible?
Of course AI is fallible! An inherent shortcoming of AI is that this software does not feel pain. People learn from experience how painful mistakes can be. For example, if we eat the wrong thing, we get stomach ache. Software, on the other hand, does not get hurt. People also have a sense of morality, whether innate or learnt, which software lacks – at least for the time being. The fact that it does not feel pain makes it prone to error. That’s why it seems very difficult to give this software the sensitivity we have as humans.

What do you feel is the main benefit of AI?
In the best-case scenario, progress gives us greater freedom. Today, electricity is simply there, which is why you no longer have to think about fetching wood for cooking or heating. Progress gives us capacity for other things. Thanks to new technologies, we can move faster and produce more things more cheaply and more continuously. However, if we say that software increases efficiency, we have to ask ourselves what we do with the freedom or capacity we have gained. Do I produce more goods and use more energy? Is that a good idea? The positive aspect of creating more freedom depends on what we do with it – both individually and as a company.

… and what are the risks?
As the software evolves, there is a risk that the system will no longer be transparent to anyone. It will become a mega black box that we will allow to make many decisions. There will always be an interest in taking control of this black box. This is already evident in political disputes. The system is undemocratic and somewhat out of control. I understand the demand for transparency and an approval process for such software developments. Society wants regulations because there is enormous potential for misuse with damaging consequences. When developing new drugs, companies are required to prove that the drugs do not have serious side effects.

AI applications are becoming increasingly prevalent in more and more areas of life. Can any misuse – whether criminal or political – be controlled at all?
Already, there are virtual influencers who look like people. If the software takes on a human form, it is more enticing. This increases the risk of manipulation and misuse. And this is why people are demanding that any form of AI that looks and speaks like a human being must be identifiable as an AI creation. To prevent misuse, you need to know to whom you are delegating decisions. When it comes to controlling masses, AI has considerable potential for misuse. This makes protective measures such as the aforementioned regulations all the more important.

With AI, the next level of automation in the world of work is just around the corner. In which areas do you already see these types of applications?
The focus is on AI applications for service providers. For example, many companies use chatbots to support customer service. But applications are also available in the medical sector in the form of steadily improving AI diagnostic tools, as well as in logistics or for fraud detection in finance.

What is the main goal? To accelerate processes or make work easier?
For businesses, the main goal is always to cut costs. So it’s about efficiency and increased productivity – faster and cheaper processes. Today, AI is mainly used for routine processes that involve the processing of large quantities of data. Since the new software not only enables faster and cheaper production but is now also able to learn, we have a system that learns extremely quickly. Humans learn too, but slowly. It therefore makes economic sense for companies to invest more in fast-learning systems. However, investing more in AI systems in the future than in natural, human intelligence entails enormous socio-political risks.

The next generation of AI will be able to recognise our emotions and communicate with us in real time. How will this change the world of work?
“Artificially intelligent characters” will be part of teams in the future. So teams will be made up of people and robots. It is quite conceivable that in certain areas the robot will be the boss.

… and our private everyday life?
AI will not act as a kind of boss, giving instructions to the family. Instead, it will help with things like calendar planning by providing tips and suggestions, helping to better organise everyday life.

What impact will the prevalence of AI have on the psyche of employees?
The first effect is existential fear. It is not the new technology in itself that causes fear, but the prospect that AI will take people’s work away. No work, no job, no money, no existence. Existential fear is extremely stressful. As things stand, many employees have no idea what they will be needed for if AI does their job for them. Companies do not really have an answer to this question either.

Who is actually liable if the use of AI leads to loss or damage?
The question of liability is still unresolved. Who is liable if a chatbot provides information that leads to major financial losses for a customer? Or for an incorrect medical diagnosis with serious consequences? The absence of liability regulations will hinder the development of artificial intelligence, in certain industries at least. This should also protect us from an uncontrolled proliferation of providers.

Personal Profile
Karin Frick
Principal Researcher, GDI Speaker

Karin Frick, born in 1960, is Principal Researcher at the Gottlieb Duttweiler Institute (GDI) in Rüschlikon and was a member of its Executive Board for over 20 years. She grew up in Liechtenstein and has been working on future-related issues, social change and innovation since graduating from the University of St. Gallen. The economist researches the impact of technological progress on the economy and society and regularly lectures on trends and countertrends. She is a member of the Board of Trustees at the Liechtenstein think tank zukunft.li and a member of the Board of Directors at Ritter Schumacher Architekten. Frick has two adult sons, lives in Thalwil and is a passionate long-distance runner.
