Artificial intelligence has rewired how many of us accomplish everyday tasks, from drafting ideas to creating artwork. While AI has its upsides, there is growing concern about its negative impacts, including our overreliance on the technology and its overlooked effect on the environment, given the enormous amount of energy it takes to run AI software.
When someone says AI, most people think of language models like ChatGPT, Google Gemini or Microsoft Copilot. These models respond to prompts and complete the user’s tasks in a conversational, organized format.
Need to send a memo to your employees? AI can draft it. Need a recipe for brownies? AI can give you one. Need to finish your final paper for your ethics class? Check your school’s policy on AI usage, but it sure can write the whole thing for you.
There are genuinely productive uses for this software, but we do not know the long-term risks of using it. As with any new technology, testing is important, yet companies seem to be releasing these tools to the public and basing their updates and tweaks on user feedback alone.
PauseAI is a non-profit community organization that aims to convince governments to halt AI software advancement.
Joep Meindertsma, a Dutch software engineer who founded the organization, said he started the project because he is worried about the development of frontier AI models.
“It’s absolutely ludicrous that we’re allowing companies to run these dangerous experiments, and we all are just sitting ducks while it’s happening,” Meindertsma said.
Amy Merrick, a journalism instructor at DePaul, has been exploring artificial intelligence and is working on her master’s degree in computer science to study the implementation of AI. Merrick believes that AI will greatly impact work, education and communication.
“Some of the positives that we’ve seen in recent AI usage are happening in science and technology,” Merrick said. “For example, learning about the structure of proteins which could lead to the treatment of diseases.”
But the impact on the environment has serious implications for the future.
“The amount of energy that is needed to build and run these programs is making it difficult for big tech companies to reach zero emissions,” Merrick said.
That goal of reducing carbon emissions will only get harder to reach as the demand for AI research and development grows.
NPR found that the number of data centers that house servers for AI has risen from 3,600 in 2015 to over 7,000 worldwide today. That 94% increase in data centers also comes with increases in emissions, according to the report. Google alone has reported a 48% increase in emissions since 2019.
Even with growing awareness, Meindertsma fears AI’s unchecked growth will cause huge problems.
“We are now getting into territory where the environmental impact is about to get serious, but if we allow it to continue, it will be 10 times worse,” Meindertsma said.
We do not know what AI will look like in the future. It could remain an instrumental tool yet also be detrimental to our thought processes, making us too reliant on its existence. From science fiction films that prophesy androids ruling the world to the current state of automation in many industries that may put thousands of people’s jobs at risk, these fears about an AI takeover can be both exaggerated and plausible.
To me, it’s worth slowing down AI development to make sure we are prepared if things go wrong. If we treat AI the same way we treat all other things that harm the environment, then there will be no real change in its production. I don’t think anyone will stop using something until they can see the damage. Even then, they’ll still make an argument that it isn’t a problem, just like people have done with climate change.
Regulating AI lies in the hands of lawmakers and politicians, and that might make change seem insurmountable for the average person. But if voters push their representatives to act, we could be looking at safer AI and a smaller harmful impact on our environment.
“Our politicians are completely sucked into this technological race dynamic … and they are not trying to work together,” Meindertsma said. “Working together is possible. We just need one country to start.”
I like certain aspects of AI, but the consumer version that everyone’s mom, brother and dog are using is not something to blindly trust; it’s no secret that it can be unsafe. Meindertsma even mentioned that some AI systems are able to hack websites and steal data.
Do I think we’ll be enslaved by robots in 10 years? Probably not. What I do know is that when AI companies start having a bigger impact on our planet and the technology takes people’s jobs, everyone is going to start moaning that we should have done something about it when we could.
Let’s not get to that point. The time to do something is now.