The Student Newspaper of DePaul University

The DePaulia

DePaul’s approach to artificial intelligence

Maya Oclassen

With artificial intelligence programs such as ChatGPT emerging for public use less than a year ago, DePaul is considering how to incorporate AI into its academic policies.

Many professors wrote policies for use of artificial intelligence, also known as AI, in their fall quarter syllabi, outlining rules for using the new technology for classwork.

ChatGPT became available for public use in November 2022, increasing its presence in the higher education community.

Students use programs such as ChatGPT, Bing AI and Bard to generate essay answers, get quick homework help and brainstorm ideas.

According to political science professor Dick Farkas, AI is here to stay.

“There’s no uninventing AI,” Farkas said. “It’s like nuclear weapons, you can’t just say let’s just push it aside … It’s a reality.”

But, as DePaul’s evolving approach to artificial intelligence indicates, opinions on the use of AI in a university setting have not reached a clear consensus.

“Universities have a special challenge: on the one hand, we need to prepare our students for a world of work in which AI will certainly play a part, but on the other hand, we want our students to understand and practice integrity in the use of any sources, including those generated by Artificial Intelligence,” according to the guidance posted on DePaul’s resource page. 

At a meeting Wednesday, Sept. 13, Faculty Council members endorsed changing DePaul’s plagiarism policy to include generative AI.

The change to the policy classifies the use of generative AI programs, along with material created by anyone other than the author, as plagiarism, which can result in a failing grade for the course or suspension.

The policy now reads, “submitting a work prepared by someone else (e.g. generative artificial intelligence, research papers purchased from another person, website, paper mill, etc.)” 

Robert Karpinski, associate vice president for Academic and Library Affairs, believes that adding this change into syllabi would be a beneficial tactic. 

“It would be great in your syllabus to make it clear this is the policy,” Karpinski said.

Bamshad Mobasher, professor of computer science and chair of the Artificial Intelligence program at the School of Computing, is also helping create the DePaul Artificial Intelligence Institute (DAII), revealed in the Designing DePaul 51-page report on Aug. 31.  

“The idea is to develop this institute as a sort of a framework where you could do this very broad conversation and large interdisciplinary collaboration,” Mobasher said. 

Mobasher’s proposal for the DAII also calls for promoting interdisciplinary research and encouraging collaboration “centered around the promises and the challenges presented by AI.”

Throughout the first week of the fall quarter, many students noticed that their professors had added a syllabus section on AI – a first for many.

Junior political science student Delaney Kaufman found an AI policy in her WRD 395 class.

Kaufman says her syllabus read, “Students are allowed to use generative AI tools, such as ChatGPT, in specific ways in this course, namely in the brainstorming stages of drafting.”

The syllabus goes on to ensure that students properly document and cite the use of the AI tool.

“I thought this was a really progressive way of viewing AI,” Kaufman said. “Allowing it in the classroom but giving students the opportunity to be smart about their usage.”

DePaul’s approach to AI includes the continued use of Turnitin, a tool professors can use to detect plagiarism in student work that now includes AI detection.

However, concerns about the accuracy of these detectors persist, with Turnitin acknowledging in its official AI statement that “the work is not done.”

“I want students to understand that if they use AI and don’t acknowledge it, it’s still the old style of plagiarism,” Farkas said. 

Although it’s not brand new, AI’s newfound accessibility in academia continues to generate conversation about its implications – positive and negative.

Mobasher points out that context is important when debating the integrity of AI.

“You can imagine if you are a freshman taking English 101 and you’re supposed to write essays, it’s not appropriate to ask ChatGPT to write your essay,” Mobasher said.

However, Mobasher believes that utilizing AI programs like ChatGPT for longer research projects is a productive way for students and faculty to explore AI. 

“Think about it as a more intelligent version of a web search,” Mobasher said.

Despite widespread belief among students and faculty in the benefits of AI, some professors worry about students becoming dependent on AI programs to complete their schoolwork.

“The student that I worry about is the student who says they have no time to do an assignment and takes the easy way out,” Farkas said.

Both Farkas and Mobasher pose suggestions for interdisciplinary communication to navigate the future of AI.

Farkas says it would be beneficial to create opportunities for students to talk to faculty members and share their ideas of what AI could mean for their education.

“We’re always talking about communication and participation, yet we don’t create mechanisms for that to happen,” Farkas said.

But long-term solutions and guidelines for AI are still being developed.

“We need a more complex policy that provides guidance,” Mobasher said. “But it’s going to take time to develop that.”
