echo interview with Karin Frick, Principal Researcher, Gottlieb Duttweiler Institute, Rüschlikon
elipsLife: Ms Frick, the Gottlieb Duttweiler Institute has been dealing with questions about the future since it was founded in 1963. As Principal Researcher, you analyse trends in the economy and in society. Our future is, so to speak, your everyday life. Can you still sleep peacefully in the face of all the uncertainties in the world?
Dealing with the future is not the preserve of researchers. Every decision each of us takes invariably involves the future, whether it concerns choosing a career or a partner, pursuing further training or buying a property. All these decisions rest on implicit ideas about what that future should look like. So the future is always present, yet it remains unpredictable. It is speculation, but it almost never comes as a surprise, because it is the result of preceding developments and decisions. Some events, such as accidents or attacks, are unexpected; new technologies, by contrast, usually take decades to become established after they have been invented. For the public, the recent rapid development of artificial intelligence (AI) may be surprising, but for people familiar with the subject it is not. The future is a field of possibilities, and examining these possibilities, with their risks and opportunities, is something most people do implicitly. Those who do it professionally simply do so more explicitly. So I probably sleep better than someone who doesn't deal with these issues all the time.
As you have indicated, artificial intelligence is a hot topic. However, it often seems that this term means different things to different people. How do you define AI?
I understand AI as software. Unlike the software we have been using for decades, it can learn and, with increasing autonomy, answer questions based on unstructured data. You can now talk to the software, whereas you used to have to program it. What's more, the program continues to evolve by itself. This ability to learn is the key difference from previous software. Today, AI-enabled machines have eyes and ears to a certain extent; they can listen and interact with us – even simulating human-like behaviours.
AI is created by humans, whose intelligence is known to be quite fallible. Why shouldn’t AI be just as fallible?
Of course AI is fallible! An inherent shortcoming of AI is that the software does not feel pain. People learn from experience how painful mistakes can be: if we eat the wrong thing, for example, we get stomach ache. Software, on the other hand, does not get hurt. People also have a sense of morality, whether innate or learnt, which software lacks – at least for the time being. Because it feels no pain, it is prone to error. That is why it seems very difficult to give this software the sensitivity we have as humans.