Instagram users were left disturbed after an unexpected influx of graphic and explicit content flooded their Reels feeds on February 26. Reports from around the world described the sudden appearance of violent, sexual, and distressing videos, sparking concerns over Meta’s content moderation policies. While Meta has since apologized and claimed to have fixed the issue, the fact that a similar incident occurred on the same date in 2023 has left users questioning the platform’s safeguards.
Users Report Disturbing Content Surge
Casual scrolling turned into a nightmare for many Instagram users on February 26, as their Reels feeds were inundated with disturbing and explicit material. Users reported seeing videos of violent sexual attacks, gruesome injuries, and other NSFW (not safe for work) content.
Many who reached out to friends found that they had experienced the same thing. Social media platforms such as X (formerly Twitter) and Reddit were soon flooded with complaints from users who had noticed the disturbing trend despite having the strictest Sensitive Content Control setting enabled.
A Google search the following day turned up reports that millions of users worldwide had been affected, raising questions about Instagram’s ability to filter out and block harmful content.
A Troubling Recurrence: February 26, 2023
What made this incident even more suspicious was that a similar issue had occurred on exactly the same date two years earlier, on February 26, 2023.
At the time, users reported violent videos, including footage of torture and shootings, appearing randomly in their feeds. Meta acknowledged the issue but downplayed its impact, claiming that such content represented only a small fraction of total views.
However, the recurrence of this issue on the same date suggests deeper problems within Instagram’s content moderation system, leaving many users concerned about the platform’s ability to protect its audience from harmful content.
Meta Responds
Following an outpouring of user complaints, Meta quickly addressed the issue, stating that it had been caused by an internal error.
“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake,” a Meta spokesperson told CNBC.
Meta reiterated its strict policies against graphic imagery, which prohibit content depicting “dismemberment, visible innards, or charred bodies,” among other forms of explicit violence. While some disturbing content is allowed for educational or awareness-raising purposes, it must carry appropriate warning labels.
Despite these assurances, the recurrence of such a major lapse has left many skeptical about Instagram’s ability to maintain a safe browsing experience.
Recent Policy Changes and Their Impact
This incident comes at a time when Meta is undergoing significant changes to its content moderation policies.
On January 7, 2025, the company announced a shift in its moderation strategy, prioritizing serious violations such as terrorism, child exploitation, fraud, and scams, while relying more on user reports for less severe infractions.
Additionally, CEO Mark Zuckerberg recently announced changes regarding political content, stating that Meta would allow more of it while implementing a “Community Notes” fact-checking system, similar to X’s approach.
Some experts believe these policy changes are an attempt to align Meta more closely with political figures, including U.S. President Donald Trump, who has previously criticized the company’s moderation efforts. Earlier this month, Zuckerberg even visited the White House to discuss Meta’s role in U.S. technological leadership.
Layoffs and Moderation Challenges
Another factor that may have contributed to this lapse is Meta’s recent wave of large-scale layoffs.
Between 2022 and 2023, Meta laid off over 21,000 employees—nearly a quarter of its workforce—including many from its trust and safety teams. These job cuts have raised concerns about whether the company has enough resources to effectively monitor and regulate content across its platforms.
Some commentators on X argued that Meta’s reduced investment in moderation has led to an increase in harmful content. One user wrote, “Detailed metrics indicated that the circulation of violent imagery and explicit content spiked by nearly 500%, a dramatic escalation linked to Meta’s recent policy changes.”
Another user added, “Meta notably scaled back its content moderation efforts, including ending its third-party fact-checking program and reducing proactive oversight.”
A Wake-Up Call for Social Media Users
Although Meta has assured users that the issue has been resolved, the incident has reignited debate over how reliably social media platforms can curate safe content.
For many users, myself included, the experience was a stark reminder of the vulnerabilities inherent in social media. The next time we open Instagram, we may think twice before scrolling blindly, knowing that explicit and disturbing content could slip through the cracks again at any moment.
As users demand better safeguards, Meta faces increasing pressure to ensure such incidents do not repeat—especially on February 26 of future years.
(With inputs from agencies)