Bubble Wars
- Alexander Persaud
- Jun 10
- 3 min read

In recent days, the deployment of masked federal forces to Los Angeles has reignited public concern—not just about constitutional overreach, but about how these events are portrayed, distorted, or even weaponized across digital platforms. The bigger question emerging is this: Are algorithms being manipulated to sow division? And if so, who benefits?
The Filter Bubble Effect
Modern digital platforms like Facebook, TikTok, X (formerly Twitter), and YouTube operate using sophisticated recommendation algorithms. These algorithms are designed to maximize engagement by showing users content they’re most likely to react to. Unfortunately, outrage, fear, and tribal narratives generate the most clicks—and that means divisive content gets priority placement.
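To make that concrete, here is a deliberately minimal sketch of an engagement-first ranker. It reflects no platform's actual code; the post fields, weights, and example posts are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimated click-through rate
    predicted_outrage: float  # estimated angry reactions and heated replies

def engagement_score(post: Post) -> float:
    # An engagement-only objective treats an angry comment exactly like
    # a thoughtful one: both count as interactions, so outrage is rewarded.
    return post.predicted_clicks + 1.5 * post.predicted_outrage

feed = [
    Post("Local library extends weekend hours", 0.04, 0.01),
    Post("THEY are destroying YOUR city. Share before it's deleted!", 0.06, 0.30),
]

# Sorting by engagement alone puts the inflammatory post on top.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.3f}  {post.text}")
```

The scoring function is indifferent to why people engage, so content engineered to provoke wins placement by default.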
In a liberal city like Los Angeles, residents are more likely to be exposed to content highlighting human rights concerns, ICE overreach, and civil liberties violations. Conversely, users in more conservative regions are fed content emphasizing illegal immigration, crime, and the need for law-and-order responses.
The result? Two radically different digital realities, each reinforcing its viewers' preexisting beliefs.
Cambridge Analytica: The Blueprint for Digital Psychological Warfare
The Cambridge Analytica scandal, which erupted in 2018, laid bare just how easily data can be harvested and weaponized. The company improperly obtained data from up to 87 million Facebook users through a seemingly innocuous personality quiz whose third-party app also scraped data from quiz-takers' friends. That data was used to create psychographic profiles: detailed psychological maps of users' personalities, fears, and desires.
Once profiled, users were microtargeted with tailored ads and content designed to exploit their cognitive vulnerabilities; a toy version of that matching step is sketched after this list. For example:
Fear-based ads warning about immigration and terrorism were shown to those scoring high in “neuroticism.”
Nationalistic or moral purity content was targeted at those with strong conservative values.
Apathetic voters were bombarded with disinformation to suppress turnout.
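Here is the toy version promised above: given a trait profile, pick the ad variant predicted to hit hardest. The trait names loosely echo the OCEAN personality model Cambridge Analytica reportedly used, but every field, threshold, and message below is invented for illustration.

```python
# Hypothetical psychographic ad matching. Scores run 0-1; all thresholds
# and ad copy are invented for illustration, not taken from any real system.
AD_VARIANTS = [
    # (predicate on the profile, ad message)
    (lambda p: p["neuroticism"] > 0.7,
     "Fear ad: 'Your neighborhood is less safe than you think.'"),
    (lambda p: p["conservatism"] > 0.7,
     "Moral-purity ad: 'Defend the values they want to erase.'"),
    (lambda p: p["political_interest"] < 0.3,
     "Suppression ad: 'Both candidates are the same. Why bother voting?'"),
]

def pick_ad(profile: dict) -> str:
    for matches, message in AD_VARIANTS:
        if matches(profile):
            return message
    return "Generic ad"

voter = {"neuroticism": 0.85, "conservatism": 0.40, "political_interest": 0.60}
print(pick_ad(voter))  # -> the fear-based variant
```

The point is not sophistication but cheapness: once profiles exist, matching millions of users to tailored provocations is a few lines of logic.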
These strategies were deployed in the 2016 U.S. election, the Brexit referendum, and campaigns in Kenya, Nigeria, and Trinidad and Tobago. Multiple whistleblowers, including Christopher Wylie, confirmed that psychological manipulation wasn't accidental; it was the business model.
Proven Cases of Digital Psychological Manipulation
Cambridge Analytica wasn’t the only case. Several documented instances show how digital tools have been used to manipulate public opinion, emotion, and behavior:
1. Facebook's Mood Experiment (2012)
Facebook ran a secret, week-long experiment on roughly 689,000 users, manipulating the emotional tone of their news feeds. Users shown more negative posts became slightly more likely to post negatively themselves, demonstrating emotional contagion through algorithmic curation.
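The mechanism is alarmingly simple to express. The sketch below mimics the study's reduced-positivity condition on synthetic data; it is an illustration of feed skewing, not the researchers' actual methodology.

```python
import random

random.seed(1)

# Synthetic feed: each post carries a sentiment score from some upstream
# classifier (negative values = negative posts).
feed = [{"text": f"post {i}", "sentiment": random.uniform(-1, 1)}
        for i in range(1000)]

def curate(posts, suppress_positive):
    """Drop each positive post with probability `suppress_positive`,
    mimicking the experiment's reduced-positivity condition."""
    return [p for p in posts
            if p["sentiment"] < 0 or random.random() > suppress_positive]

for label, rate in [("control", 0.0), ("treated", 0.5)]:
    shown = curate(feed, rate)
    neg_share = sum(p["sentiment"] < 0 for p in shown) / len(shown)
    print(f"{label}: negative share of feed = {neg_share:.2f}")
```

In the real experiment, users whose feeds were skewed this way went on to write measurably, if slightly, more negative posts of their own.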
2. YouTube's Radicalization Funnel
Internal studies and investigative journalism (e.g., from The New York Times) showed how YouTube’s recommendation algorithm often led users from mainstream content to increasingly extreme material, particularly around politics and conspiracy theories. This effect was particularly strong during election seasons.
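A toy feedback loop captures the funnel dynamic: if a watch-time-maximizing recommender learns that slightly more extreme videos hold a viewer longer, and each video watched nudges the viewer's baseline, the two ratchet each other upward. Every number below is invented to illustrate the loop, not a measurement of YouTube.

```python
# Toy radicalization funnel. All parameters are invented for illustration.
position = 0.10                            # viewer's current "extremity" taste
candidates = [i / 20 for i in range(21)]   # videos at extremity 0.00 .. 1.00

def predicted_watch_time(video, viewer):
    # Illustrative assumption: viewers watch longest the videos slightly
    # MORE extreme than where they already are.
    sweet_spot = min(viewer + 0.1, 1.0)
    return 1.0 - abs(video - sweet_spot)

for step in range(10):
    choice = max(candidates, key=lambda v: predicted_watch_time(v, position))
    position = 0.7 * position + 0.3 * choice   # watching shifts the baseline
    print(f"step {step}: recommended {choice:.2f}, viewer now at {position:.2f}")
```

Each recommendation is individually reasonable; the drift only shows up over the sequence, which is what makes funnels hard to spot from any single suggestion.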
3. TikTok’s Amplification of Polarizing Content
Researchers found that TikTok’s For You Page frequently amplifies divisive or emotionally charged content—especially around race, gender, and politics. Because the algorithm is opaque, disinformation spreads quickly before moderation catches up.
4. Russia’s IRA Bot Farms (2016)
The Internet Research Agency, a Kremlin-linked operation, used thousands of fake accounts to flood American social media with divisive content—creating fake Black Lives Matter groups, anti-immigration pages, and even organizing real-world protests through event invites. Their primary objective: widen racial and ideological divides.
5. WhatsApp and Lynching in India
In India, manipulated videos and false rumors spread rapidly via WhatsApp, triggering mob violence and lynchings. The lack of algorithmic filtering on encrypted platforms created an untraceable yet potent disinformation pipeline.
Is This Happening Again in LA?
All signs point to yes. Recent investigations by researchers at Graphika and the Stanford Internet Observatory have flagged:
Bot networks pushing pro-ICE and anti-Newsom hashtags on X.
AI-generated content farms pumping out divisive memes on Telegram and Facebook.
Psychographic ad targeting by political action committees focused on ZIP codes surrounding Los Angeles.
Foreign disinformation campaigns amplifying footage of violent protests to justify federal intervention.
This isn't speculative—it's recycled strategy from 2016, refined for 2025.
The Incentive to Divide
Why would these tactics resurface now? Because division is politically profitable.
Upcoming elections mean high-stakes narrative control.
Fractured public perception ensures more outrage-driven engagement.
Chaos as cover helps controversial policies slip through unnoticed.
From immigration crackdowns to cuts in social services, a distracted and divided public is easier to govern, and harder to unite around shared grievances.
The Digital Cold War
We are living through a new kind of cold war: one fought not with missiles or tanks, but with memes, metadata, and microtargeting. Cities like Los Angeles are ground zero—not just for protest and policy, but for narrative warfare.
Algorithms weren’t built to divide us, but they are optimized for it. And bad actors—from political operatives to foreign intelligence services—are more than happy to exploit that optimization.
Final Thoughts
Digital media doesn’t just report the news—it shapes the battlefield. As users, we must learn to:
Recognize manipulated narratives.
Cross-check sources across ideological lines (a starting point is sketched below).
Demand transparency from tech companies.
Because if we don’t understand how our feeds are curated, we risk becoming foot soldiers in someone else’s war.
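As a small, practical starting point for the second item above, here is a sketch that pulls headlines on one topic from outlets across the spectrum. The feed URLs are placeholders; substitute real RSS feeds of outlets you choose.

```python
import feedparser  # third-party: pip install feedparser

# Placeholder URLs; swap in real RSS feeds across the ideological spectrum.
FEEDS = {
    "outlet_left":   "https://example.com/left/rss",
    "outlet_center": "https://example.com/center/rss",
    "outlet_right":  "https://example.com/right/rss",
}

def headlines_about(topic):
    """Return each outlet's recent headlines mentioning `topic`."""
    results = {}
    for outlet, url in FEEDS.items():
        parsed = feedparser.parse(url)
        results[outlet] = [entry.title for entry in parsed.entries
                           if topic.lower() in entry.title.lower()]
    return results

for outlet, titles in headlines_about("protest").items():
    print(outlet, titles[:3])
```

Reading the same event through three differently tuned feeds is the fastest way to see your own bubble's edges.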