The Art of Human-Technology Interactions

Artur Olesch
January 21, 2021

When you start to type or click, you release the potential locked inside a computer or a piece of software. Philosophers say that you add context to the technology. Scientists would call it an open-ended dialogue. How does the emotional, chaotic brain build a relationship with logical, programmed processors, and why can't we bring ourselves to hurt a robot?

"I" mirrored on the screen

The pioneers of the technological revolution that started over 300 years ago certainly didn't expect that the steam engine, composed of screws, bearings, rivets, and gears, would evolve into a trusted daily companion of human beings. A few decades ago, giant mechanical creatures left the factories to enter our homes. Since then, they have been miniaturized, equipped with powerful computing capabilities, and enclosed in eye-catching cases. They were also given user-friendly interfaces.

Computers, smartwatches, and smartphones were initially built to augment our physical and mental abilities. On the surface, they are like many other tools: a hammer helps to build a house, a car lets us move quickly from one place to another, and a computer lets us write documents, communicate, or create. But unlike simple, manual devices, digital technologies are much more complex. Their mind-blowing abilities, even if they boil down to mathematical calculations, evoke respect. In the current era of digitalization, they provide entertainment, communication, help, and even safety. We have become dependent on them. The last time you left your smartphone at home, did it make you anxious? Or did you perhaps get angry because you found yourself in a place with no access to the internet? How many times a day do you check the latest posts on social media?

https://a.storyblok.com/f/120667/1560x1100/e82743a650/art_of_human_cover_1560.jpeg
People facing the era of digitalization. Illustration by Aga Więckowska

Our existence has melted together with technologies to such an extent that we no longer notice their presence. If apps and IT systems offer practical improvements in our daily lives, we adopt them. The interaction between man and machine is the subject of many psychological and sociological studies. I believe that in deliberations about AI or technology itself, we underestimate both the intelligence and the sensitivity that people bring to creating and using new technologies. Systems, applications, computers, and smartphones are more human than we suppose. Even if computers can't think or feel, they are created by humans and thus equipped with capabilities that matter to humans. This is already a foundation for a deeper emotional relationship.

An app is far more than lines of code. It can deliver an intended impact in the form of information, support, or care. A computer is not just a plastic box with processors, a battery, and some integrated circuits. It helps us achieve goals. A robot is not just a bunch of wires and small motors covered in plastic. To illustrate the psychological bonds that connect us with technologies, Dr. Kate Darling, a research specialist at the MIT Media Lab, ran a series of experiments. During workshops, she gave robotic dinosaurs to participants, asking them to name them, play with them, and interact with them. After an hour, she requested that the participants torture and kill the mechanical toys. Every time Dr. Darling repeated the experiment, no one wanted to harm the robots.

On the one hand, this seems irrational because they are just machines. Still, people project life-like values onto them, attribute mental properties to them, and empathize with them. This also applies to fully digital, non-material solutions like IT systems. The love that develops with the AI virtual assistant Samantha in the movie Her is fictional. But the story of the relationship between an autistic boy and Siri described in The New York Times is true. "Siri's responses are not entirely predictable, but they are predictably kind," says the mother of Gus, the boy with autism. As machines enhanced with AI become even smarter, they will respond more precisely to people's real-time personal needs. That opens new opportunities and poses some threats.

Designing for people

This new phenomenon of close synergy between us and technology didn't happen by chance. Over the past few decades, developers learned that excellence in quality and usability must go hand in hand with user-friendliness and personalization to individual expectations. Advances in miniaturization and design made digital devices so small and portable that interacting with high tech became natural. Every new series of these little machines has a unique ability to satisfy new and universal social, intellectual, lifestyle, or healthcare needs; they regularly receive new looks to meet changing aesthetic trends or fit individual preferences. We can choose between different colors and sizes. Apps have increasingly intuitive layouts designed by UX/UI experts. Yet we still continue to use smartphones that are covered in scratches. Or we simply buy another, newer model if the old one is too slow or the battery drains too quickly. We feel no sentiment or regret when a system must be reinstalled or when we no longer need an app – because the outside layer is only part of the story. To some extent, technology is similar to money – neither has value in itself, but both have the power to release it.

Today, people expect digital solutions to reflect values tailored to their own beliefs. We invest money in trusted, secure financial institutions and store health data only in reliable apps or platforms. Do you choose a bank based on the office façade? This is why digital health services are exceptionally complicated to design and provide: how can factors like care or quality be communicated so the user feels confident using an app? As with the most trusted companies, authenticity, transparency, quality, empathy, and an understanding of end-users are critical.

At this point, we need to take a closer look at the process of creating IT systems itself, not their physical representation. To deliver authentic values, app developers have to answer a list of questions. Does the promise contained in the service correspond to its real capabilities? Is there a danger that patients will experience automation bias and over-rely on the technology? How will the user interpret the information delivered by the solution? Are the recommendations based on evidence? Do they reduce the risk of error? How is patient safety ensured?

Long before IT professionals start to embed features in an app or a system, these must be precisely specified, though not always explicitly. Even organizational culture – so elusive to the naked eye – can be unintentionally etched into the final product. Between the lines of code, there are surprisingly many human factors to be seen: intentions, empathy, ethical values, different scenarios for how the solution could be used, and even the personalities of those involved in the design process. The beauty of digital solutions is that all of these aspects can be transferred to the end-user. They can't be seen, only felt. It happens when you are fascinated by a solution, when you feel safe, comfortable, and well cared for while using a mobile health application.

Behind human-machine relationships there is also science, for instance in the form of human factors engineering (HFE). It explores the interactions between people and devices, reaching out to domains like engineering psychology, ergonomics, human-computer interaction, cognitive engineering, human-centered design, and user modeling. The goal of HFE is to enhance safety, performance, and satisfaction, or – "to make technology work for people," as described in the book Designing for People: An Introduction to Human Factors Engineering (John D. Lee, Christopher D. Wickens, Yili Liu, Linda Ng Boyle). It requires understanding the complex, measurable interplay between users and technology as tasks unfold in a physical, social, and regulatory environment. In healthcare, the challenge is even greater because health is not just a binary analysis of symptoms and physical parameters, but individual well-being, immeasurable emotional states, hopes, and fears. Empathy, intimacy, and an understanding of the user's feelings must be included at every stage of designing digital solutions.

Artificial Intelligence made by intelligent biological beings

But what if the technology is not fully pre-defined by functions designed by IT teams? What if AI-based systems make decisions and calculations following their own rules rather than their creators'? What does that mean for users?

Ray Kurzweil, co-founder of Singularity University, predicts in his visionary book "The Singularity Is Near" that, by the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence. We've read many times in the press about "black boxes" and "independent systems." We've seen dark AI-driven visions of the future in The Matrix, Ghost in the Shell, Terminator, or the Black Mirror series. If you type in the phrase "movies about AI taking over," expect over 76 million results.

And what is the state of the art?

When I asked Gunter Dueck – philosopher, writer, and former IBM Chief Technology Officer – about his view on AI, he drew a sharp line between "strong AI" and "weak AI." Strong AI is, and will for many years remain, pure science fiction. Weak AI is all about statistics, optimization, expert systems, and cluster recognition. "This is not really 'intelligence', but it turns out to be much more useful than people usually think. Using just these tools can reach and surpass human skills in many cases," he argues. Moreover, it's not only about what technology can do but, above all, whether it can be deployed in real-life settings.
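To make "weak AI" concrete, here is a minimal sketch of cluster recognition – the kind of pure statistics Dueck describes. The data, the feature names, and the use of scikit-learn are illustrative assumptions, not anything taken from a real product:

```python
# A minimal sketch of "weak AI" in Dueck's sense: no understanding, just
# statistics and cluster recognition. Assumes scikit-learn is installed;
# the measurements below are invented for illustration only.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical readings: [resting heart rate, daily steps in thousands]
people = np.array([
    [58, 11.2], [61, 9.8], [55, 12.5],   # active profile
    [82, 2.1],  [88, 1.4], [79, 3.0],    # sedentary profile
])

# K-means does nothing "intelligent": it only minimizes within-cluster variance.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(people)
print(model.labels_)           # e.g. [0 0 0 1 1 1] – two groups found in the data
print(model.cluster_centers_)  # the statistical "profiles" the algorithm recovered
```

The algorithm has no idea what a heart rate is; it simply finds structure in numbers – which, as Dueck notes, is often more useful than it sounds.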

Another thinker who often addresses the challenges of AI is Prof. Yuval Noah Harari, a historian and best-selling author of Sapiens and Homo Deus. He talks about "the end of the world we know" or "hacking human beings by AI." At the same time, Harari sees the bright side of AI: "If the World Health Organization identifies a new disease, or if a laboratory produces a new medicine, it can't immediately update all the human doctors in the world. Yet even if you had billions of AI doctors in the world – each monitoring the health of a single human being – you could still update all of them within a split second, and they could all communicate to one another their assessments of the new disease or medicine."

Books and scientific papers have been written on AI, covering its potential benefits and risks. Most of them are theoretical projections – some prey on the hype and noise around phrases like "super-intelligent machines," "algorithms," or "big data." But so far, AI is just another tool in our hands, although different from the "old machines and systems" because – as some say – it can compete with our cognitive abilities by solving complex problems, finding unknown correlations in data, or predicting trends. I would rather say: because it can complement some key capabilities and simply help where human resources are already limited. This is especially true in the healthcare sector, which is confronted with an imbalance between the demand for and supply of health services.

AI can be human-centred, provided some rules are followed. Much has been done in recent years to ensure the sustainable creation of AI. For example, the European Commission has developed the Ethics Guidelines for Trustworthy Artificial Intelligence. The Organisation for Economic Co-operation and Development (OECD) has its expert group on AI in Society. The American Medical Association adopted a new policy on augmented intelligence in health care. Microsoft's six principles for AI are fairness, inclusiveness, reliability and safety, transparency, privacy and security, and accountability. Tech companies like Google, IBM, or SAP also have their own codes of conduct.

Restoring the care factor in an administration-focused system

Even without legal guidelines, solutions based on AI are mostly built in good faith, drawing on the best available expertise. Data and computer scientists, biostatisticians, and healthcare IT professionals formulate a question, choose the right algorithm, train it on data, evaluate it, and check its accuracy and efficiency. Only if it meets ethical, data safety, and patient safety standards may it be implemented in end-user systems. This process can be broken into hundreds (if not thousands) of smaller steps, analyses, calculations, and discussions between IT and medical experts working together in interdisciplinary teams to ensure trust. After all, nobody gets into an autonomous car without 100% certainty that it drives much better than a human, right?
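As an illustration of that workflow – formulate a question, choose an algorithm, train it on data, evaluate it, and check its accuracy – here is a minimal, hedged sketch. The data are synthetic, the "question" (flagging high-risk cases) is invented, and scikit-learn stands in for whatever stack a real interdisciplinary team would use:

```python
# A minimal sketch of the train/evaluate/check-accuracy loop described above.
# Everything here is illustrative: synthetic data, an assumed question, and
# scikit-learn as a placeholder for a real clinical ML pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score

# 1. "Formulate a question": can we flag high-risk cases from a handful of features?
X, y = make_classification(n_samples=500, n_features=8, random_state=42)

# 2. Hold back test data so evaluation uses examples the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 3. "Choose the right algorithm" and train it on the data.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 4. Evaluate accuracy and discrimination; only then discuss deployment.
pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

In a real clinical setting, these four numbered steps expand into the hundreds of smaller analyses, validations, and ethical reviews the paragraph above describes.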

In healthcare, we certainly won't have autonomous "robotic doctors". Instead, the goal is to focus on supportive AI-driven technologies so the patient gains control over his or her health while the doctor is better informed and finally relieved of simple, repetitive tasks. AI-assisted doctors will get their time back, which they can then give back to their patients. Robert M. Wachter, Professor and Chair of the Department of Medicine at UCSF and author of the New York Times science bestseller The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine's Computer Age, told me that the relationship between AI and patients will still require involvement on the patient's side – patients will have to, for example, observe symptoms or take relevant measurements. Technology is useless without its users. To create a harmonious symbiosis between people and IT/AI, more technology-oriented awareness and up-to-date literacy are required, so people know how to work with AI-driven systems to achieve their goals.

https://a.storyblok.com/f/120667/1000x705/39f340755a/art_of_human_1560.jpeg
Harmonious symbiosis between people and technologies. Illustration by Aga Więckowska

Technologies are only as good or as bad as the companies that build and implement them. They already surpass us in many tasks; soon, they will be intellectually unrivaled. With natural language processing, facial expression recognition, and synthetic emotional intelligence, they will be able to communicate with human beings much the way human beings communicate with one another. Artificial intelligence will reveal our limits but also our strengths, as the philosopher Richard David Precht argues in his latest book "Artificial Intelligence and the Meaning of Life."

When we face a super-intelligent system, we will also be able to rediscover how emotional, creative, sometimes chaotic, and yet unique we still are. And how empathetic healthcare can be once it is freed of the administrative burdens that disrupt the patient-doctor relationship.

BL/EN/2021/01/21/1