As of mid-2025, we are feeding ChatGPT more than 2.5 billion prompts per day, according to Axios. Artificial intelligence answers all our crazy questions, sometimes disregarding the ethics of what we ask. While most students think of AI as a homework tool or a way to spark creativity, it can just as easily be used for malicious propaganda. Its realism has also reached an uncanny level that poses major dangers to societies: political deepfakes and mass-produced fake news can be created in a matter of minutes. So, how do we make sure we aren’t susceptible to AI’s brainwashing?
Recognizing the impacts of AI on political and media bias
Rewinding to the 2024 U.S. presidential election, AI-generated propaganda was already being used prominently to sway voters. The scariest of all: deepfakes. Deepfakes can convincingly imitate people’s voices and appearances, deceiving audiences into believing fake situations. NPR highlighted an example of this when “in January [2024], thousands of New Hampshire voters picked up their phones to hear what sounded like President Biden telling Democrats not to vote in the state’s primary, just days away.” Elections are one of the most important parts of maintaining American democracy, and sabotaging their outcomes is a major criminal offense that shouldn’t be easy to commit. In this case, the Democratic political consultant behind the call was caught and fined $2 million; however, the damage is difficult to erase.
I interviewed Mr. Berg, one of our political science teachers, to gain a better understanding of the trends he’s noticing and predicting with AI. Mr. Berg has also done his own research on artificial intelligence using primary sources. When asked about AI’s ability to manipulate people’s opinions, he referenced Geoffrey Hinton, “the Godfather of AI,” who has voiced concerns that such manipulation could absolutely occur. I then asked about the need for regulations on AI; his answer is below:
“Reading experts’ opinions, they all say ‘absolutely, we need regulations.’ But it’s a hard topic because even if the U.S. regulates it, is China or any other country going to? However, overall, we will need to have some type of regulation.”
I believe this point raises an excellent question: how much power do we want to sacrifice to AI while still balancing our political power? Not only is AI a threat that could deepen our divisions, but it’s also a global race we are fighting to win. It’s very difficult to measure the needed balance of control and stability, so the most important way to handle AI is to recognize its dangers, even as they evolve.
The last thing I asked him was whether he had encountered any political AI propaganda, and he replied:
“Even before AI, the internet was already spreading misinformation digitally, but with AI that becomes easier because you can create deepfake videos. While watching videos or images on social media, you might encounter someone who’s doing something bad, but in reality it never happened.”

Misinformation in the digital age isn’t anything new; AI just amplifies it to another level. Deepfakes allow public figures’ images to be exploited for political campaigns. AI-generated propaganda images of Taylor Swift were spread around during the 2024 election, despite her public endorsement of Kamala Harris. Humans tend to trust the first thing they see, which means these fake images could genuinely impact people.
How to combat AI propaganda:
Now that we’ve recognized the dangers of AI, let’s discuss how to prevent being “brainwashed.” The best way to fight AI is by improving your media literacy skills and fact-checking all polarizing information before you comment. Taking a few minutes to search other sources for verifying information can save you from snowballing a piece of misinformation. To get more insight on this topic, I interviewed our librarian, Mrs. Potter, who teaches all our freshman classes the importance of media literacy while conducting research. My questions are in bold, with her responses listed below:
What is the importance of media literacy/fact-checking?
“So much information, and so much of that info is misinformation. So it’s very important that young people learn to differentiate between good and bad information, so that they can make good decisions, and not be taken advantage of or misled.”
What are your tips on how to check your sources?
“We’ve started teaching a method used by professional fact-checkers, called lateral reading, which is doing a little bit of research about a source using another: using other sources to learn about that source’s credibility, bias, and funding sources.”
What’s the difference between AI information and human-spread information?
“AI will always give an answer even if it doesn’t know it. It’s very ‘people pleasing,’ and will give some misinformation to answer its prompt. Implicit bias can be present in AI.”
Here at Helix, it’s a true privilege to have educators like Mrs. Potter working to make students aware of the dangers of misinformation. Additionally, the Helix Library website has a full section to help students with media literacy. Mr. Berg and Mr. Morris are also teaching an AI bootcamp about the ethics of AI and how to use AI tutoring resources correctly.
However, with AI advancing, students around the nation must learn how to deal with information on the internet. The Southern Regional Education Board (SREB) is a nonpartisan, nonprofit interstate compact focused on improving public education at every level across 16 states. This past April, it published a guide on AI in K-12 education organized around four pillars of AI usage. The last pillar addresses media literacy, stating, “By guiding students in evaluating AI-driven media’s credibility, bias and accuracy, teachers help them develop essential skills for discerning reliable information from misinformation. This support fosters better decision-making and responsible media consumption.” The best way teachers can work with artificial intelligence is by helping their students use it as a tool rather than a source. These skills transfer out of the classroom to all media consumption, helping our generation remain rational in its views.
AI will continue to advance in ways that make the future of media consumption uncertain. A world full of believable fake news may seem scary, but it’s important to remember you have the one thing a robot will never have: a mind. Using logic and reasoning to safely cross-check references can help us reclaim our voices in a world that’s trying to take them away. Artificial intelligence is a tool with great power, and great power must come with great responsibility. It’s time to decide who stays in control.