College of Engineering, Computing and Applied Sciences

Artificial intelligence at a critical crossroads, says Clemson University’s Nathan McNeese


Nathan McNeese of Clemson University started studying how people and artificial intelligence can best work together over a decade ago, long before ChatGPT and its chatbot kin took the world by storm.

With many people just beginning to understand how AI shapes their lives, the technology is now entering a crucial juncture, said McNeese, the McQueen Quattlebaum Associate Professor of Human-Centered Computing.

“Never before in history have we had the broader population interacting with AI like we are now, and that’s fundamentally going to change people’s perceptions of what that interaction looks like,” McNeese said. “It also means that we’re at a critical time period, where if these technologies are not researched and deployed responsibly, it will cause significant harm to how people view and use AI systems moving forward.

“AI is one of the most significant societal opportunities in the history of the world, but it only comes to fruition if we ensure the safe and accurate design and implementation of these systems. And that design and implementation must always focus on humans.”


McNeese’s research into human-AI teaming is helping put Clemson at the forefront of shaping the future of a powerful technology that has already begun to transform how people learn, work and live.

His peers are recognizing his work. In the span of less than two months, McNeese received four high honors: a National Science Foundation CAREER Award; the William C. Howell Young Investigator Award and the Journal of Cognitive Engineering and Decision Making Best Article Award, both from the Human Factors and Ergonomics Society; and the overall Outstanding Alumni Award from his alma mater, Penn State’s College of Information Sciences and Technology.

“In a remarkably short time, Nathan McNeese has achieved several significant milestones,” said Anand Gramopadhye, dean of the College of Engineering, Computing and Applied Sciences. “These latest awards are stepping stones that have him on the path to becoming a leading figure in human-factors research. I offer him my wholehearted congratulations on all his success.”

In an interview with IDEAS, McNeese described how developments such as ChatGPT have changed the landscape for his research and how his research can ensure AI benefits society.

For those not familiar with your work, how would you describe your research focus in layman’s terms?

Nathan McNeese, far right, poses for a photo with members of his research group and their robotic dog, Spot, and one of their drones.

We deal with two ends of the spectrum. There's the very computational, technical end of AI, which is algorithmic development. We ask, how do you make sense of data through an algorithm, and how do you produce an output, which is what we see as humans? And then there's the other side, the more humanistic side, and this is probably where we made our name. We're interested in understanding how humans perceive, interact with, are motivated by and are concerned about AI, broadly speaking. What we try to do is reverse engineer those things and bring them back into the development of AI so that it takes into account the human and what humans want and perceive from AI.

What are the big takeaways from your research so far?

What we really focus on at the micro-level are the teaming aspects, human-AI collaboration, and we've seen a wide range of findings. It really depends on the human being. Some human beings are all for it; they will integrate with the AI, and the AI and the human interact efficiently. But then you'll have other humans who are completely opposed to this, who want nothing to do with it.

What are you finding about why some people like working with AI and some would rather not?

Every individual, as a human being, is different. We are made up of different personalities, different desires and various life experiences. Those differences dictate, to a high degree, how humans perceive AI, and perception is incredibly important because it usually determines how they interact with it. If they perceive the AI as a threat or as untrustworthy, their interaction with it is not going to be collaborative and communicative, with high levels of engagement. But if they see the opposite of that, we see that they're very willing to interact with it.

How have ChatGPT and other chatbots changed the landscape?

What’s interesting is we’re hitting a tipping point, with things like ChatGPT and large language models coming to the forefront of society. Wide masses of people are now interacting with AI agents, and I am extremely excited about this because it’s a paradigm shift in my work.

The past 10 years I’ve been studying humans interacting with AI. Broadly speaking, most of those humans have not interacted with AI. When they come into the lab, or they take a survey, or we have an interview, we’re speaking purely from a theoretical, conceptual level. Now with the introduction of things like large language models, where people’s grandparents are online playing with ChatGPT, we’re seeing a wave of new experience levels coming to the forefront. That’s going to change how people then perceive and interact with human-AI systems.

What do you think the impact will be?

Never before in history have we had the broader population interacting with AI like we are now, and that’s fundamentally going to change people’s perceptions of what that interaction looks like. It also means that we’re at a critical time period, where if these technologies are not researched and deployed responsibly, it will cause significant harm to how people view and use AI systems moving forward. AI is one of the most significant societal opportunities in the history of the world, but it only comes to fruition if we ensure the safe and accurate design and implementation of these systems. And that design and implementation must always focus on humans.

We have to get to the point where there are regulations to combat irresponsible AI development and deployment, the kind of models that cause disparate impact or treatment. There are a lot of people that are smarter than me that are saying this. We can’t get this wrong, right now, because it’ll end up being wrong forever, essentially, and I don’t think we’ll be able to come back from it. We have to do things the right way right now, or else it’s a wasted opportunity.

Many have questions about the ethics and safety of AI. How does your work address those concerns? What safeguards do we need to have in place?

In the human-AI teaming area, we were the first people in the world who ever approached ethics. We ask, how do you develop and build an ethical AI teammate? This becomes doubly important because of the context and setting: it's teamwork, which means that you have increased interaction. The frustrating fact about artificial intelligence is that it is based on data, and we have a lot of data, a lot of really bad data. If you go on social media, misinformation and hurtful opinions are everywhere. Unfortunately, all of that gets captured. We are not yet able to filter that information out and throw it in the trash. It usually gets fed into the algorithm itself, and that's where you see things like racist algorithms. Our work sees humans as the best safeguard for this problem. As humans, we need to take an active role in shaping and designing these technologies to be a social good, and our work engages directly with humans to facilitate this.

What inspired you to get into AI, and what continues to drive your passion for the topic?

I’ve always been interested in technology from a very young age. I used to take a lot of things apart and sometimes get into trouble with my parents. I mean, like, fully apart to the point where you can’t put it back together, because I was just interested in, what’s in that? I’ve always been just a curious person. Also, at an early age, I started studying teamwork– human-human teams. And then I figured out I love teamwork, and I love technology. I asked, how can I bring these two things together?

It was fortuitous in that I started studying AI almost a decade ago, before the hype machine. I can't get bored of AI because of how rapidly it's developing. My full-time job is to be an expert on AI systems and on human-AI interaction, and I can't keep up with it because new AI products and developments across the spectrum are coming out every week. It's a continual race, and I like that race.
