Avoid ‘Terminator’-style doomsday by treating machines more like humans, researcher says

Richard Pak is an associate professor in Clemson’s psychology department and director of the Clemson University Human Factors Institute.

CLEMSON, South Carolina — A Clemson University researcher says people are correct to fear such malevolent machines as HAL 9000 from “2001: A Space Odyssey” or Skynet from “The Terminator” film series. Richard Pak uses characters from decades of sci-fi doomsday scenarios to illustrate why humanity needs to rethink its relationship with machines and the way it designs them.

In a recent article published in the academic journal Ergonomics, Pak, associate professor of psychology and director of Clemson’s Human Factors Institute, and George Mason University faculty member Ewart de Visser use a lengthy list of sci-fi villains — and some less deadly characters — to define the extremes between fully automated machines and those designed to be more like humans. The key in the design of technology is to find a happy medium between these two extremes to maintain trust and functionality between human and machine, Pak said.

“We’re headed toward a future in which some of our closest relationships will be with machines, so I think we should prioritize making sure those relationships function well,” Pak said. “These devices are already residing in our homes or in our pockets at all times, so our relationships with them will only grow deeper and more intimate over time.”

Pak cites Skynet as a technology that is highly autonomous but low on the scale of “humanness design”: it could set its own goal of self-preservation, yet had little capacity for, or interest in, communicating with humans. On the other end of the scale, Pak points to characters designed for companionship in films such as “A.I. Artificial Intelligence” and “Her.” While these examples are high in humanness design, they are relatively low in autonomy.

These opposite ends of the spectrum showcase technology that is either ineffective or apocalyptic. Pak makes it clear that he is far from carrying signs that read “the end is nigh,” but he believes extreme examples may be necessary to draw attention to the relationship between humanity and automation.

The scale Pak and collaborator Ewart de Visser designed, which plots autonomy against humanness, includes fictional and real-world examples of autonomous machines.
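To make the two-axis framework concrete, here is a minimal sketch in Python of how machines might be placed on such a scale. The axis values and the mid-scale “ideal” point are illustrative assumptions for the fictional examples named in the article, not ratings taken from Pak and de Visser’s paper.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    """A machine placed on the two axes of the scale (both 0.0 to 1.0)."""
    name: str
    autonomy: float   # ability to set and pursue its own goals
    humanness: float  # human-like design: communication, empathy, social cues

# Axis values below are illustrative guesses, not figures from the paper.
examples = [
    Machine("Skynet ('The Terminator')", autonomy=0.95, humanness=0.10),
    Machine("Samantha ('Her')", autonomy=0.30, humanness=0.95),
    Machine("Hypothetical ideal assistant", autonomy=0.60, humanness=0.60),
]

def distance_from_ideal(machine: Machine, ideal=(0.6, 0.6)) -> float:
    """Euclidean distance from a hypothesized mid-scale 'happy medium'."""
    return ((machine.autonomy - ideal[0]) ** 2
            + (machine.humanness - ideal[1]) ** 2) ** 0.5

# Rank the examples by how close they sit to the happy medium.
for m in sorted(examples, key=distance_from_ideal):
    print(f"{m.name}: autonomy={m.autonomy:.2f}, humanness={m.humanness:.2f}")
```

Ranking by distance from a mid-scale point mirrors the article’s argument: designs at either extreme of autonomy or humanness score poorly, while balanced designs land near the ideal.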

Pak said that when it comes to the design of self-driving cars or virtual assistants, developers should resist thinking of these machines as tools. Instead, they should take cues in design from the social sciences, where trust between teammates is extremely important.

“Autonomous machines by design will learn and change over time and dynamically set their own goals,” Pak said. “They will surprise human partners, which will greatly affect trust and adoption of the technology. This is where machine-human relationships begin to emulate rich interactions between people, so it only makes sense that we take cues from human-human models of interaction over human-machine models.”

In some ways, this reframing of interactions is already occurring, with researchers replacing terms such as “human-computer interactions” with “human-machine teaming.” However, research on how to repair trust within these teams remains largely outdated. According to Pak, a major hurdle is the inevitable error or failure by a machine, which breaks down trust and can drive people to abandon the technology.

Pak uses potential future designs of blood glucose meters as an example. Older adults currently tend to trust the technology either too much or too little, which is problematic for people monitoring their blood sugar. For the human-machine relationship to continue after a mistake, Pak said, it is vitally important that the machine monitor its own errors, communicate them, apologize for them and show empathy.
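As an illustration of what such trust-repair behavior might look like in software, here is a hypothetical Python sketch. The class, the error threshold and the wording of the messages are invented for this example; they are not drawn from any real glucose meter or from Pak’s paper.

```python
class GlucoseMeterAssistant:
    """Hypothetical sketch of the trust-repair pattern described above:
    detect an error, disclose it, apologize and show empathy."""

    def __init__(self, tolerance_mg_dl: float = 10.0):
        self.tolerance = tolerance_mg_dl   # assumed acceptable measurement error
        self.error_log: list[float] = []   # history of detected errors

    def report(self, measured: float, verified: float) -> str:
        """Compare the meter's reading against a verified reference value."""
        error = abs(measured - verified)
        if error <= self.tolerance:
            return f"Your blood sugar is {measured:.0f} mg/dL."
        self.error_log.append(error)
        return self._repair_trust(measured, verified)

    def _repair_trust(self, measured: float, verified: float) -> str:
        # Acknowledge the mistake, apologize and empathize rather than
        # silently correcting -- the pattern the article argues keeps the
        # human-machine relationship intact after a failure.
        return (
            f"I reported {measured:.0f} mg/dL, but the verified value is "
            f"{verified:.0f} mg/dL. I'm sorry for the error; I know an "
            "unreliable reading is stressful. I've logged the mistake and "
            "will recalibrate before your next measurement."
        )

# Example: a reading that disagrees with a lab-verified value triggers repair.
assistant = GlucoseMeterAssistant()
print(assistant.report(measured=160, verified=120))
```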

Pak said figuring out the pattern of behavior that would occur in a scenario such as this will be monumentally important for all technology design going forward. The ideal lies somewhere in the middle on Pak’s scale of fictional and real AI: a machine autonomous enough to problem-solve and course-correct on its own, but with a high enough level of humanness to maintain and constantly improve its relationship with the user.

Pak and his collaborators hope their work will act as a road map that will help other researchers adapt social science findings to improve current and future teamwork between humans and machines.

“AI gives machines a certain amount of agency and at a certain point this causes our expectations of the machine to be completely different,” Pak said. “Our next job is to hijack the machinery in our heads to make it work with machines; we have to use what we know about humans in the design of machines in order to make the human-machine relationship resilient and safe.”
