The Spectrum
The student news site of Dartmouth High School
Misinformation and Disinformation about the Israel-Hamas War Are Flooding Social Media

The Al-Ahli Arab Hospital after the explosion on October 17. (Shadi Al-Tabatibi/AFP/Getty Images)

Hundreds were killed in the Al-Ahli Arab Hospital explosion, making October 17, 2023 one of the deadliest days of the Israel-Hamas War. Hamas did not hesitate to claim that an Israeli airstrike had targeted the hospital, and US-based news outlets initially agreed.

“Based on what I’ve seen, it appears as though it was done by the other team, not you,” President Biden told Israeli Prime Minister Benjamin Netanyahu at a meeting in Tel Aviv the next day. After analyzing video footage of the explosion, Al Jazeera and The New York Times were inclined to believe Palestinian forces were responsible. Breakdowns like these led US news organizations to revise their narratives. As the debate continues, the UN has called for an independent international investigation into the explosion.

This is just one of many events over the course of this war that have triggered a wave of mis- and disinformation online.

Why We Post on Social Media

In the three days after Hamas’ initial October 7 attacks, X users created 50 million posts about the conflict. “People flock to social media during a crisis for many reasons,” Vox senior technology reporter and editor A.W. Ohlheiser wrote. “Maybe it’s because the mainstream news doesn’t feel fast or immediate enough, or because the crisis has put them or someone close to them in harm’s way and they need help. Perhaps they want to see and share and say something that captures the reality of an important moment in time because they don’t know what else to do when the world is on fire.”

I spoke to a few DHS students who’ve used their social media to talk about the war. Some are indeed compelled to post for personal reasons. “I am a Jew myself, and I believe that it is important to educate and stop the spread of Jewish hate,” an anonymous junior told me in an email. “History should not repeat itself, and we should stop the spread of antisemitism while we can.”

Sophomore Tassiana DaSilva posts about the war in moderation. “I have been trying not to lean too much towards any side and trying not to revolve my entire social media feed around [the war], because I know there are other conflicts happening at the same time all around the world that aren’t getting the same attention,” she wrote. “With that being said, my heart still breaks for all the innocent people who were hurt, killed, abused and still hurting for the rest of their lives. I still follow the news as closely as possible and hope and pray for the release of those in Gaza, both prisoners and citizens.”

Since the 2020 Black Lives Matter protests, more people have used their social media accounts to post about social issues. Many feel compelled to instantly express support for a side after a major event breaks out, whether it’s BLM, the Russo-Ukrainian War, or last year’s Israel-Palestine flare-up in early August. According to Ohlheiser, “misinformation and manipulation often spread for the same reasons, slipping into the feeds of those who believe it can’t hurt to share a startling video or gruesome photograph or call for aid, even if they’re not sure of the reliability of the source.”

Frequently, people turn to social media to confirm their beliefs about the war. Seemingly everyone has an opinion about the Israel-Palestine conflict, so naturally they search for information that reinforces their beliefs and disproves counterarguments. Social media algorithms thrive on this phenomenon, known as confirmation bias. “The consequence of this is that we often are shown content that is biased towards one angle and we see this angle often and consistently,” DHS Digital Literacy teacher Bryan Hellkamp told me in an email. “This leads us to believe that this idea is valid, true, and the ‘correct’ perspective. After all, if all that I’m seeing is similar information, similar ideas, and a similar perspective, then not only must it be true, but anyone who has a different opinion must be crazy, because everyone I know/see has the same opinion as me.”
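To make that feedback loop concrete, here is a deliberately simplified sketch, written for illustration only. No real platform’s ranking system is anywhere near this simple, and the numbers are invented, but it shows how a feed that rewards engagement can drift toward showing one angle more and more:

```python
import random

# Toy model of an engagement-driven feed (illustration only; not any
# real platform's algorithm). Half the posts take angle A, half angle B.
posts = [{"angle": "A"} for _ in range(50)] + [{"angle": "B"} for _ in range(50)]

# Start with no preference between the two angles.
affinity = {"A": 1.0, "B": 1.0}

for _ in range(1000):
    # From a batch of candidates, show the post whose angle the user is
    # estimated to like most, with a little randomness mixed in.
    shown = max(random.sample(posts, 10),
                key=lambda p: affinity[p["angle"]] * random.random())
    # Users engage more with content they already agree with, and every
    # engagement nudges the affinity estimate further in that direction.
    if random.random() < affinity[shown["angle"]] / sum(affinity.values()):
        affinity[shown["angle"]] += 0.1

print(affinity)  # one angle's score typically runs away from the other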

Why False Information Spreads

As the hospital explosion demonstrated, both pro-Israel and pro-Palestine groups can villainize the other side for their own gain. On the Brookings Institution’s podcast “The Current,” Valerie Wirtschafter, a fellow in Brookings’ Foreign Policy program and Artificial Intelligence and Emerging Technology Initiative, explained how “that may filter into this sort of exaggeration, especially in these kinds of uncertain times when the information is incomplete.”

“What we don’t know at the moment is whether there are sort of more deliberate, state driven strategies,” she continued. For example, a false story that alleged Ukraine supplied Hamas with weapons could’ve been spread deliberately by Russian groups – but this hasn’t been confirmed, so it’s currently just a rumor.

Insufficient content moderation, especially on X, provides no filter for misinfo and disinfo. After purchasing the company, Elon Musk changed X’s verification system; blue check marks are no longer reserved for journalists, politicians, and experts, but available to anyone willing to pay $8 a month for an X Blue membership. Posts by X Blue members get better promotion and engagement, and accounts are often paid based on the amount of engagement they receive. “Recent research has found that [verified accounts] were responsible for spreading… something in the upwards of 70% of the misleading claims,” Wirtschafter reported.

X primarily relies on Community Notes to contextualize “potentially misleading posts.” If a user signs up to be a contributor, they can flag posts containing falsehoods and write corrections, which appear under the posts in a box headed “Readers added context they thought people might want to know.” Community Notes work – but only to a certain extent. More posts have been flagged since the feature’s release, but the number is minuscule compared to what could be accomplished with a centralized, non-crowdsourced content moderation system (such as X’s sparse Trust and Safety Team).
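Part of why Community Notes moves slowly is that a note is only surfaced once contributors who usually disagree with each other both rate it helpful. The toy sketch below illustrates that “bridging” idea; X’s published system is far more involved (it scores notes with matrix factorization over rating data), and the clusters, votes, and 0.8 threshold here are invented for illustration:

```python
# Toy sketch of bridging-based note scoring: a correction is surfaced
# only when contributors from *different* viewpoint clusters agree it
# is helpful. A simplification, not X's actual algorithm.

def note_is_shown(ratings: dict[str, list[bool]], threshold: float = 0.8) -> bool:
    """ratings maps a contributor viewpoint cluster to helpful/unhelpful votes."""
    if len(ratings) < 2:  # require agreement across at least two clusters
        return False
    return all(sum(votes) / len(votes) >= threshold
               for votes in ratings.values())

# A note praised by only one cluster stays hidden:
print(note_is_shown({"left": [True, True]}))                          # False
# Mixed reviews from one cluster block it:
print(note_is_shown({"left": [True, True], "right": [True, False]}))  # False
# Broad agreement across clusters surfaces it:
print(note_is_shown({"left": [True, True], "right": [True, True]}))   # True
```

Requiring cross-cluster agreement makes the notes that do appear more trustworthy, but it also means many misleading posts never accumulate enough diverse ratings to be flagged at all.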

Therefore, false information still lingers, especially since “a post with a Community Note will not be labeled, removed, or addressed by X unless it is found to be violating the X Rules, Terms of Service, or [the company’s] Privacy Policy.” This allows fabricated footage to spread rapidly across the platform. Much of the on-the-ground footage from the conflict originates on the messaging service Telegram, where hundreds of Hamas militants post propaganda masked as primary-source videos. In an interview with PBS NewsHour, Emerson Brooking, a senior resident fellow at the Atlantic Council’s Digital Forensic Research Lab, explained that “because Telegram is not a U.S. company, it’s not easily subject to international law. It’s extremely unlikely that content moderation action will ever be taken against Telegram itself.”

Al Jazeera reported on two notable cases of false footage reaching a massive audience. The first came from a pro-Hamas account, which posted a video purporting to show Hamas fighters shooting down an Israeli military helicopter; the footage was actually a clip from the video game Arma 3. A different account posted a photo from 2021 featuring Taliban soldiers in Afghanistan, claiming the soldiers belonged to Hamas and were attacking Israel with weapons the US “left behind” in Afghanistan. Both posts stayed up on X for at least three days after the October 7 attacks; the latter collected over 10 million views.

Musk himself spread misinfo on his own X account in the aftermath of the October 7 attacks, including suggesting his 150 million followers “get news on the attack from two verified accounts that have a clear history of sharing false information,” according to The Washington Post.

The EU opened an investigation into X, along with Meta and TikTok, concerning “the alleged spreading of illegal content and disinformation, in particular the spreading of terrorist and violent content and hate speech.” The European Commission requested that the companies explain the measures they’re taking to prevent the spread of disinformation.

How to Identify Misinfo and Disinfo

Both Wirtschafter and Ohlheiser stress the importance of looking at posts critically. Media that feels sensationalized, targets emotions, and neatly confirms all of its audience’s beliefs should be investigated before being reposted; posts that lean on pathos rather than ethos get better engagement, but their primary purpose is not to inform. “Chances are there’s potentially context that’s missing,” Wirtschafter stated. “Wait a little bit for the full picture or the uncertainty around the whole picture to reveal itself.”

“Approaching this type of information with a little bit more deliberate care, particularly given the fact that not doing so or kind of running with an idea can really lead to real world violence potentially,” she added. “And so I think having that sort of pulse check is really, really important.”

Mr. Hellkamp encourages proactively searching for news outside of social media instead of waiting to see what social media algorithms curate for your feed. “For the first time in history, the news is coming to us instead of us going to the news,” he wrote. He recommends the Associated Press and Reuters as objective “news organizations… [that] tend to focus on reporting the story as opposed to the narrative of a story.”

Mr. Hellkamp also suggests reading news with a variety of perspectives. “When we consume a variety of different perspectives on the same topic it not only helps us to see the larger picture by understanding where multiple sides of the issue are coming from, but it can also help us strengthen our own beliefs and convictions by better understanding the perspective of those that differ from us,” he explained. Consumers can use MediaBiasFactCheck and similar resources to evaluate how a source leans and how factual its reporting is.

Not posting on social media can be helpful in its own right. “I have read a lot of posts, and I try to read them critically and figure out the bias. But I myself have not posted anything,” DHS Spanish teacher Lili Chamberlain told me in an email. “It is a complicated situation, and I do not feel that I have enough expertise to put anything online. I have had discussions with friends and family members, but again, I try to listen more than express opinions.”

Wirtschafter backed up her point. “Having these deeper conversations,” she said, “[and] doing this type of different research can substitute for the kind of cognitive load that comes with that sort of outrage process.”

About the Contributor
Annica Dupre
Annica Dupre, Assistant Editor
Annica Dupre is a sophomore at DHS. This is her second year on The Spectrum and her first year as an assistant editor. She writes about a wide variety of topics, with a recurring focus on environmental issues, education issues, and youth perspectives in media. She's also The Spectrum's unofficial, self-appointed tennis correspondent. Annica is the co-president of the Environmental Club, as well as a member of the Debate Club and the Student Advisory Committee. In the spring, she plays for the DHS Tennis Team.
