Images generated by AI, many referencing Pope Leo XIV’s Chicago roots, were everywhere when the new pontiff made his recent debut. One showed him with a Portillo’s bag in his left hand and a bottle of Malört in his right. In others, he donned Bears-themed papal attire or held an Italian beef sandwich.
As artificial intelligence continues to seep into cultural moments like these, concerns about its use — including for individual artists trying to protect their work from mimicry — are shaping the conversation about the technology’s future.
The symposium, held in the basement theater of DePaul's Daley Building, featured two panels discussing AI's use as a tool in creative work, its legal complications and potential threats to individual artists.
One of the main legal concerns lies in how AI companies "train" their models on art that is publicly available on the internet.
Andy Beach, former chief technology officer of media and entertainment at Microsoft, was the symposium’s keynote speaker.
He cautioned users not to humanize AI, instead likening the technology to a camera: a tool people can use to get their own "shot" of an idea.
“I think human beings are hardwired to see human traits in nonhuman objects,” Beach said. “I worry that we are doing the same thing with AI when we call it our ‘assistant.’”
He said the “human spark” of creativity isn’t necessarily threatened by AI but advised users to be more skeptical of the technology.
“It’s very eager to give you the answer that it thinks you want,” Beach said. “By default, it is designed to stroke our ego, so we interact with it more and longer.”
OpenAI recently rolled back an update over concerns about the chatbot being "overly flattering or agreeable," the company said in a statement.
Beach also urged transparency in people’s use of AI. “I think that we will get better creative tooling out of it if we are more open about how we use it, because then we — the technologists — will learn more about how we actually want to use it,” he said.
Aaron Owen, assistant professor in DePaul’s School of Cinematic Arts, attended the symposium and shared concerns about what the “democratization” of AI means in practice.
“Do we get to a true meritocracy where, in the marketplace of ideas, the original ones will win out because you’ll have a sea of sameness?” Owen said.
Another attendee, Cathy Ann Elias, said she thinks AI may enhance human creativity instead of replacing it. Elias is a professor in DePaul’s School of Music and said she is a part of a faculty study group on AI.
“I think real creativity comes from human beings, and it comes from messiness and trying to do things and redo them,” Elias said. “I don’t think machines will take over real creativity.”
Reached after the symposium, Elli Monti, a full-time freelance artist in Chicago who primarily makes three-dimensional art, said AI “does feel threatening.”
“I’m not going to be the person to ever use AI, and I know there’s a bunch of people like me,” Monti, 27, said. “But then there’s the other side of things where there’s a bunch of people who just use it every single day.”
She doubts protections for artists and their creative work will happen soon.
“Laws take forever to catch up to the digital space,” Monti said.
Lawsuits against AI companies are beginning to surface. The New York Times, for instance, is suing OpenAI for using its articles to train its models. A judge recently allowed the suit to move forward, denying OpenAI's request to dismiss it.
Michael Grynberg, symposium panelist and professor of intellectual property law at DePaul’s College of Law, can see how ingesting online material to create AI models could be considered “actionable copying.”
“That leads to the follow-on question of whether or not the copying to create the models would be a fair use under copyright law,” Grynberg said.
How the courts will respond remains uncertain.
“I’d be surprised if the models got just a blanket blessing from the courts, but, really, your guess is as good as mine,” he said. “In the end, … it’s going to be a human decision, not a completely legalistic one.”
While legal remedies have been slow, projects like Glaze and Nightshade have been created to protect individual artists from style mimicry and to direct AI models away from their work.
Glaze makes slight, near-invisible changes to a digital piece of art to "cloak" its true appearance from models. Nightshade takes a more offensive, prompt-specific approach, "poisoning" training data so that a model trained on it can be led astray; a request for an image of a cow, for instance, could return one of a handbag.
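The "cloaking" idea, shifting pixel values by amounts too small for a person to notice, can be sketched in a few lines of Python. The sketch below is a toy stand-in, not Glaze's actual method: Glaze computes targeted, style-specific perturbations through an optimization process, while this example simply adds small bounded random noise. The `cloak` function name and the `epsilon` bound are illustrative assumptions.

```python
import numpy as np

def cloak(image: np.ndarray, epsilon: float = 2.0) -> np.ndarray:
    """Return a copy of `image` whose pixels each shift by at most ~epsilon.

    `image` is an HxWx3 uint8 array. Because no channel moves by more than
    a couple of intensity levels, the result looks identical to a human
    viewer, but the raw pixel values a model ingests are altered.
    """
    rng = np.random.default_rng(0)
    # Random perturbation in [-epsilon, +epsilon] for every channel.
    delta = rng.uniform(-epsilon, epsilon, size=image.shape)
    perturbed = np.clip(image.astype(np.float64) + delta, 0, 255)
    return perturbed.astype(np.uint8)

# A flat gray square stands in for a piece of digital art.
art = np.full((64, 64, 3), 128, dtype=np.uint8)
cloaked = cloak(art)
```

The real systems choose the perturbation adversarially, aiming the change at the features a model learns from, which is why they can mislead training while random noise this small generally cannot.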
Ben Zhao, computer science professor at the University of Chicago, leads the Glaze project and presented at the symposium. Both projects have been downloaded a total of 10.3 million times by artists worldwide since March 2023, according to Zhao.
He said the adoption of the project has been positive for artists. “(But) no one thing is going to completely end the struggle,” Zhao said. “This is just all a piece of what we need to do in order to protect human creatives.”
The DePaulia is DePaul University’s award-winning, editorially independent student newspaper. Since 1923, student journalists have produced high-quality, on-the-ground reporting that informs our campus and city.