Since ChatGPT was released to the public, artificial intelligence (AI) has become a greater part of our daily lives. In Rhode Island, Governor Dan McKee has started an AI task force to promote the technology, and at Rhode Island College, AI is now available as a major.
But as the technology expands, so too have profound questions about its nature. We sat down with Ellie Pavlick, an assistant professor at Brown University, to learn about her unique perspective on artificial intelligence.
The following interview has been edited for length and clarity.
Isabella Jibilian: What is artificial intelligence?
Ellie Pavlick: AI doesn’t have a really clear consensus definition. It’s evolving, but the standard way we think of it is as a computational system that’s not human but otherwise has human-like intelligence.
IJ: What are some of the questions you’re asking about AI?
EP: My research mostly focuses on questions like: “What does it mean for a system to be intelligent?” “How would we compare the kind of reasoning capabilities or language capabilities that our current artificial intelligence systems have to what we know about humans?” And “How do humans reason and use language?”
IJ: How much do we know about how ChatGPT is reasoning?
EP: It’s very hard to say. There’s a lot to unpack here. One really important aspect is: what are we going for with the word “reasoning”? Historically, we use words like “reasoning” to talk about a thing that humans do.
There might be aspects of what AI is doing that could be analogized to what humans are doing, but there’s going to be other aspects that are wildly different. So there’s a lot of tension in the research community about how appropriate it is to use words like reasoning.
IJ: Why is it important to look inside the “black box” of AI systems?
EP: One of the most widely cited reasons is the trust element. Do I trust that when the model produces an answer, it’s producing it for the right reason?
Let’s say I ask a model to do a simple math problem:
“If I have three apples and two bananas, how many pieces of fruit do I have?”
I want it to say five, and I would like it to explain that it added three and two. But another reason a model might say five is that the last time it saw a sentence with the word “apples” in it, it also had the word “five” in it. That is the kind of thing that a large language model plausibly might do.
If you ask a human to explain how they arrived at an answer, they can articulate their logic, and it’s somewhat of a representation of what they did inside their heads. For language models, we don’t have that. We can’t really see what’s happening inside their heads. That makes it really hard to make claims about what they can do, what they can’t do, and whether we trust them.
For more, watch our interview with Ellie Pavlick on Rhode Island PBS Weekly.