Brian K. Smith (Peter Julian)

The promise and peril of artificial intelligence—chatbot technology in particular—have been a hot topic since last fall with OpenAI’s launch of ChatGPT (Generative Pre-trained Transformer), a chatbot that impressed many with its ability to generate detailed and human-like text, although critics noted its uneven factual accuracy. Journalists, artists, ethicists, academics, and public advocates raised concerns about how ChatGPT could negatively affect education, disrupt entire industries, and be used to sow political and social chaos.

By January, ChatGPT reached more than 100 million monthly users, a faster adoption rate than that of Instagram and TikTok. On March 14, OpenAI released GPT-4, an upgrade of the version used in ChatGPT. Microsoft and Google also have introduced their own chatbots.

In the following Q&A, Brian K. Smith, the Honorable David S. Nelson Chair and associate dean for research at the Lynch School of Education and Human Development, talks about AI/ChatGPT’s potential—for better or worse. Smith's research interests include computer-based learning environments, human-computer interaction, and computer science education. He has also worked in artificial intelligence throughout his career.

OpenAI CEO Sam Altman met with Washington, D.C., lawmakers earlier this year to clarify misconceptions about ChatGPT by explaining its uses and limitations, but some legislators believe that the new technology warrants a dedicated regulatory agency. Is that wise?

Whether it’s government, industry, academia, or some combination, people need to think about the societal implications of any technology. As many suggest, those implications could be bad, but they could also be positive. For example, much progress has been made using machine learning in breast cancer analysis. It’d be great to incentivize and celebrate these positive applications while continuing to look for and minimize possible biases and adverse effects. In the short term, that might be a regulatory body. In the long term, we should educate future technologists to think as deeply about the societal impacts of their innovations as they do about technical knowledge.

Researchers warn that large language models like the type used by ChatGPT could be used by disinformation campaigns to more easily spread propaganda—and that, as models become more accessible, easier to scale, and capable of composing more credible and persuasive text, they will be very effective for future influence operations. Is the danger legitimate? What could be done to mitigate the threat of the tool’s weaponization if it falls into the wrong hands?

There are and will always be bad actors in the world, and they’ll use whatever they can to do bad things. Will some bad people use ChatGPT to spread misinformation, write convincing phishing emails, etc.? Without a doubt. But I think we know a lot about how bad actors work with existing tools, and that knowledge goes a long way. We focus on the bad getting worse, but the good also gets better with new technologies.  

In a survey of 1,000 college students, the online magazine Intelligent found that nearly 60 percent used the chatbot on more than half of their assignments, and 30 percent used ChatGPT on written assignments. Some universities worry about ChatGPT’s impact on student work and assessments—given that it passed graduate-level exams at the University of Minnesota and Penn’s Wharton School of Business—but are refusing to bar the chatbot, instead advising professors to set their own policies. What should colleges consider when it comes to ChatGPT?

Writing is a huge part of how students are assessed in education, so it’s not surprising that there’s concern about a program that generates reasonable essays, computer programs, language translations, etc. But ChatGPT is a technology that offers an opportunity to rethink what and how students learn—much like calculators, spell-checkers, Wikipedia, and similar tools. Changing education is challenging, so how do we do it? BC’s Center for Teaching Excellence created an excellent document on using ChatGPT that provides strategies for utilizing it to teach and minimize cheating. Other universities are investigating similar ways to work with ChatGPT rather than trying to ban its use. The key is getting educators to start thinking together as a community to develop pedagogy that situates ChatGPT and other tools as intellectual partners rather than stuff to cheat with (it’s not called “CheatGPT”).

What do you mean when you talk about “tools as intellectual partners”?

People started talking about intelligence amplification or augmentation in the 1950s. The basic idea is that machines can assist us with cognitive tasks that would otherwise be difficult to perform alone. A calculator is a good example: It lets us offload things like computing square roots and multiplying big numbers by hand so we can focus on higher-level problem solving. You can imagine something similar with ChatGPT. I can prompt it to create a sample syllabus, party invitation, or a Q and A for the Chronicle and then iterate on the initial text to make it read in my voice and style and correct any errors it made along the way. ChatGPT is like a partner helping me brainstorm and improve ideas in this scenario.

By the way, I didn’t use it for this Q and A.

In a TIME magazine article, proponents of generative AI said it will “reorient the way we work, unlock creativity and scientific discoveries, allow humanity to achieve previously unimaginable feats, and boost the global economy by over $15 trillion by 2030.” But the article also raised multiple concerns, not the least of which is the existential risk posed by AI companies creating Artificial General Intelligence (AGI), a tool that “thinks and learns more efficiently than humans,” potentially without human guidance or intervention. How can we guarantee that AIs are aligned with human values?

OpenAI did a lot of work creating “guardrails” to keep ChatGPT from spouting lots of crazy things. Unfortunately, that’s become politicized, with some saying ChatGPT is “woke” because it might avoid talking about certain people and ideas. But ChatGPT and similar language systems are trained on billions of documents written by humans. Suppose those programs produce language that goes against human values. That’d be because people have expressed and will continue to express horrible things that oppose human values. We can’t blame a computer for learning our bad habits; humans need to stop war, violence, discrimination, etc. Don’t hate the chatbot, hate the game.

TIME cautioned that the big technology companies that will eventually control AIs would likely not only become the world’s richest corporations by charging whatever they want for commercial use, but potentially morph into “geopolitical actors” that rival nation-states. Are these fears realistic? If so, what measures might be implemented to curb these developments?

This one’s out of my league; I’m afraid I don’t know anything about how AI might be used to create the Federal Kingdom of Microsoft or Amazon Republic. It’s an interesting scenario, but I’m hoping those companies might help us use AI to solve the significant challenges we face as a society. It won’t do much good for Google to take over a continent when it floods due to climate events. I look to our students—past, present, and future—to help with this. Hopefully, they’ll become the leaders of organizations that use AI for good rather than technological empire building.

Phil Gloudemans | University Communications | April 2023