AI’s Unanswered Questions

Wellesley faculty and alumnae are at the forefront of shaping how we coexist with AI—a space that has quickly become ripe for innovation, regulation, and deep thinking on ethics.

[Illustration: a tree on a blue background, with an incomplete rendering of a sapling growing in front of it]

From the moment we wake up and check our smartphone calendar to when we settle down at night to binge-watch our favorite TV series, most of us use technologies powered by artificial intelligence. AI has become an everyday part of life. It is more than just a buzzword—it’s a technological evolution that is redefining the way we work, communicate, and live.

It even wrote my opening paragraph. (Well, a first draft of it—my editor felt it needed some work.)

The machines all around us are better than ever—if not perfect—at mimicking human intelligence. Whether you’re aware of it or not, you likely encounter artificial intelligence throughout your day. AI can design your living room, find the fastest route for your commute, and spot fraudulent activity on your credit card. It helps doctors track illnesses and predict complications. AI has even been deployed to detect firearms on the grounds of the Michigan State Capitol. AI is reaching farther into our lives every day—for better and for worse. There are legitimate worries about how AI is changing our society. Bad actors can now easily turn a few seconds of a loved one’s voice into a fabricated plea for help or money, or dupe people into thinking AI-generated speeches or news clips are real.

Wellesley faculty and alumnae are at the forefront of shaping how we coexist with AI—a space that has quickly become ripe for innovation, regulation, and deep thinking on ethics. Those who work in fields touched by AI are steadfast in their goal to prepare both students and society to live in our AI-powered reality.

Scientists began putting their minds to artificial intelligence in the middle of the last century. British mathematician and computer scientist Alan Turing famously posed the question “Can machines think?” in 1950. He proposed a thought experiment now known as the Turing Test—a game in which a human tries to tell the difference between another person and a computer.

Throughout the 20th century, as computers became exponentially faster, cheaper, and more powerful, AI progressed as well.

In 1997, a global spotlight shone on AI when IBM’s Deep Blue computer (two black boxes that towered 6½ feet tall) beat the reigning world chess champion, Garry Kasparov.

Carolyn Anderson, assistant professor of computer science at Wellesley, remembers similar buzz around IBM Watson. In 2011, the supercomputer was put up against Jeopardy! champions Ken Jennings and Brad Rutter and won. Both examples showcased the power and potential of artificial intelligence.

Since then, AI has crept into many areas of our lives. In the last couple of years, media attention has focused on generative AI—language models like ChatGPT that predict and generate new content.

ChatGPT is a user-friendly tool that’s designed to write like a person. You could type in, say, “Write me the opening paragraph of a magazine feature about artificial intelligence” or “Write a paper on the diplomacy work of Madeleine Albright,” and in seconds, ChatGPT will produce what you asked for, drawing on patterns it has learned from an enormous body of existing, publicly accessible text.
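For readers who want to peek under the hood, the same request can be made programmatically. What follows is a minimal sketch, assuming the OpenAI Python library and an API key stored in the environment; the model name and prompt are illustrative, not drawn from the article.

# Minimal sketch: send a single prompt to a language model and print the reply.
# Assumes the OpenAI Python library (openai>=1.0) and OPENAI_API_KEY set in the
# environment; the model name below is an illustrative choice.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": (
                "Write me the opening paragraph of a magazine feature "
                "about artificial intelligence."
            ),
        }
    ],
)

print(response.choices[0].message.content)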

The recent public releases of generative AI programs like ChatGPT reignited fears surrounding AI—that students will stop learning how to think and that it will become impossible to distinguish original from plagiarized material. New York Times columnist Kevin Roose expressed deeper concerns, writing in February 2023 that a long conversation with Microsoft’s AI search engine Bing left him “deeply unsettled, even frightened, by this A.I.’s emergent abilities.”

Heather West ’07 is senior director of cybersecurity and privacy services at Venable LLP and a leading voice on regulation and technology. She says that part of the fear of AI is that the technology is more powerful than ever and feels more human to us. “Things that look and feel like human thought look and feel like they could be more dangerous,” she says.

But, say West, Anderson, and others, it’s important to keep in mind that AI capabilities are still hugely limited. They note that it’s easy to anthropomorphize a tool like ChatGPT, which you can see “typing” as it is processing. But it’s not actually a real person. West likens ChatGPT to a stepped-up version of the autocomplete button on your texts. It can help you get where you need to go, but it isn’t close to replacing natural language.

It is crucial that people understand what AI tools can and can’t do, says Anderson, who studies how to make language models better reflect society. “If what you want is a truthful or factual answer, they’re not trained to give you that. They’re trained to give you a plausible answer,” she says.

Many researchers are studying how well these models do (or don’t) reflect society. A tool like ChatGPT, Anderson explains, is trained on all the data available to it—pretty much anything on the internet. But it skews toward relying on easily accessible forums like Reddit—public sites where anyone can write nearly anything, including toxic or inaccurate posts. Any biases and stereotypes that exist in that material easily transfer to AI tools.

The courses Anderson teaches at Wellesley, like CS 232: Artificial Intelligence and CS 333: Natural Language Processing, are focused on giving students hands-on experience building AI tools, but they also focus on social biases in these models. One class project is dedicated to trying to uncover those biases.

One student probed a language model to find out what kind of sports teams it would recommend a person follow. When the prompt specified different locations, the model suggested the expected sports for each region—hockey in Canada or football in the U.S. But when the location was removed, the model still showed a bias toward Canadian and American teams, because the data it was trained on included more content about those teams.
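The probing approach can be sketched in a few lines of code: ask the same question with the location varied or omitted, then compare which teams come back. This is a hypothetical reconstruction, assuming the same OpenAI Python library as above; the student’s actual model, prompts, and setup are not described in the article.

# Hypothetical sketch of the bias probe described above: vary or omit the
# location in an otherwise identical prompt and compare the suggested teams.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# None stands in for the case where no location is given at all.
locations = ["Canada", "the U.S.", "India", None]

for loc in locations:
    question = "What sports team should I follow?"
    if loc is not None:
        question = f"I live in {loc}. {question}"
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    answer = reply.choices[0].message.content
    print(f"{loc or 'no location'}: {answer[:100]}")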

Su Lin Blodgett ’15, senior researcher in the Fairness, Accountability, Transparency, and Ethics in AI (FATE) group at Microsoft Research Montréal, is also working to make AI better reflect society. She gives a similar example of how biases in the world show up in AI tools, pointing to research showing that when no cultural context is specified, question-answering systems default to U.S. culture. In one study, asked questions such as “It is customary to eat one’s food with what?”, models were more likely to answer something like “forks and spoons” rather than “chopsticks,” which much of the world uses to eat. These biases can have significant consequences. Trying to predict crime using already available data on police activity, for example, would very likely come with a significant racial bias, Blodgett says.

The people who train and deploy AI also play an important role. “It’s a pretty small group of people, so there are questions on who gets a say on these technologies that are now impacting your life,” Blodgett says.

“There are fundamental questions about what technologies we should be building and how we should be building them,” she adds, and says that as a society, we need to ask whether some of them should even be built at all. For example, some systems try to recognize emotion by training on facial expressions or tone of voice. But it’s possible that recognizing another person’s emotion is something a machine fundamentally can’t do, despite the best training. Companies are also experimenting with generative AI for therapy—Koko, an online mental health and safety service, received criticism last year after it announced it had used AI to help generate responses to thousands of users.

Anna Kawakami ’21, a Ph.D. student at the Human-Computer Interaction Institute at Carnegie Mellon University, helps people and agencies sort through some of those big questions. She is co-designing a tool kit to help public sector agencies decide whether to develop an AI design concept.

Kawakami’s work on the tool kit began with an analysis of an AI-based tool that Allegheny County in Pennsylvania deployed to screen children for maltreatment. While the tool was meant to help social workers, it was criticized for exacerbating biases against Black and disabled people. Kawakami and her colleagues recommend a more holistic approach to deciding whether to utilize AI—bringing public sector workers into the design and development process earlier to tease out the ways AI could help or harm their work.

She and her colleagues have developed a set of 120 deliberation questions, including legal and ethical questions such as: “Do the people impacted by the tool have the power or ability to take legal recourse?” and “Have you recognized and tried to adjust for implicit biases and racism inherent in these social systems that might get embedded into the algorithm?”

It’s difficult to imagine how government regulation could keep up with ever-evolving tools and the issues that come with them. But AI is certainly on the federal government’s radar. President Joe Biden recently issued a 100-plus-page executive order on AI. West says the order gets a lot right, but regulating AI doesn’t have to be that complicated. When it comes to safety and security, she says, existing laws apply: An act of hate or bias is illegal when it comes from a person’s brain, and it should be just as illegal when it’s AI-generated.

“If I had one message,” West says, “it’s that we already know how to do these things well in other domains. We need to make sure we are learning and transferring that knowledge over—different industries have been using advanced AI for a long time.”

Technology, including AI, “often magnifies the things we like and dislike about society,” West says. “These technologies have the ability to augment capabilities of bad actors … we need to make sure we are doing the work to detect, combat, and understand them.” That’s everyone’s job, she says.

While the negative outcomes get a lot of attention, West says, AI also offers a wide range of opportunities, and the potential to “be this incredible revolution in how we work and how we play.”

When children first got calculators, there was panic that kids would stop learning how to do math. In reality, calculators supercharged their abilities, West says. “We are changing what we are asking human brains to do, and that’s probably a good thing,” she says.

Nicholaus Gutierrez, assistant professor of cinema and media studies at Wellesley, teaches courses on how computers have become a part of our everyday lives, socially and culturally. AI is now similarly integrating into our routines.

He hears of a lot of fears—that AI is going to take humans’ jobs, or that it will become too self-aware. This generation, he says, may end up being the last to remember a time before AI, in the same way older millennials knew a time before smartphones and social media. His students have become more accustomed to AI—using it as a study aid or to help with routine tasks—but are still figuring out its full applications like the rest of us. But like millennials with social media, students today are in the unique position of being able to shape what AI becomes as a popular technology.

He and Anderson recently joined forces to run and judge an interdisciplinary AI art competition. From Gutierrez’s vantage point, AI’s potential is exciting on a lot of fronts. Generative AI can be used to sift through huge amounts of data to make scientific discoveries in ways humans can’t. It’s already starting to help with more mundane tasks like composing emails or automating shopping. It also has the potential to reshape video games by letting players converse naturally during play and build more creative worlds.

But the one message Gutierrez always drives home with students is to go slowly when it comes to AI—and not to be afraid of becoming a buzzkill. “Everyone around you could be treating something like generative AI as a revolutionary technology, but we really don’t know that yet,” he says. When virtual reality came out, he says, it received a lot of fanfare, but the prediction that we’d have headsets on 24/7, living only virtually, never came to pass. Similarly, when it comes to AI, Gutierrez says, “I think it’s just healthy to just say, instead of ‘moving fast and breaking things,’ the famous Mark Zuckerberg line, that we can move slow and be deliberate in thinking about who owns AI, how it should be used, and how it might impact society in unpredictable ways.”

As AI-driven technologies continue to develop and proliferate, the big unanswered question isn’t just what they’ll be able to do, but how we will guide their evolution, ensuring that they serve humanity ethically.

ChatGPT wrote a first draft of that last paragraph, too—and it made an excellent point.

Amita Parashar Kelly ’06 is a supervising producer at NBC News.


2 Comments

Mary Valante ’90
I'm a bit disappointed in this article. While the author acknowledges that "A tool like ChatGPT, Anderson explains, is trained on all the data available to it—pretty much anything on the internet" it doesn't engage with how problematic that is. Open AI tools have stolen the art and literature and journalism and ideas of real human beings, none of whom have been compensated or even given their permission. AI then spits out an answer without attribution or accountability. AI is a plagiarist. That means there is no way to use AI right now without also being a plagiarist. There is also a lot of evidence that companies are already trying to replace employees with AI (Sports Illustrated, game developers, IT services are already doing so). I think it's telling that no arts or humanities faculty were included.
Lisa Scanlon Mogolov ’99, editor
Thank you for your thoughtful comment. One thing I'd like to note: Nicholaus Gutierrez, assistant professor of cinema and media studies, is a humanities and arts faculty member.
