A few nights ago, I sat down to unwind and watch something on Netflix. I opened the app, flipped through a few tiles, and without thinking much, clicked on a show that looked vaguely interesting. Two hours later, I realized I hadn’t really made a decision at all. The show autoplayed its next episode, the visuals had been tailored to grab my attention, and the recommendations that led me there were carefully curated by an algorithm that knows me better than some of my friends.
This isn’t a glitch in the matrix—it’s the matrix itself.
The new age of “choice”
In a world dominated by AI-powered platforms, the idea of free will is being redefined. From what we watch, to what we read, buy, or even believe, algorithms are the invisible architects shaping our daily experiences.
Take Netflix, for instance. Its recommendation system is responsible for 80% of user watch activity. It doesn’t just suggest shows based on your viewing history. The algorithm considers several factors: what you watched, when you watched it, whether you finished it, how long you hovered over a thumbnail, and more.
Similarly, Amazon’s recommendation engine drives 35% of its total purchases. It knows what you’ve clicked, compared, wishlisted, or even just viewed, and uses this data to subtly influence your next purchase decision.
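Neither company publishes its model, but the general shape is easy to sketch. Here is a toy Python illustration of implicit-signal scoring; the signal names and weights are invented, and a real recommender learns millions of parameters from data rather than using a hand-tuned table.

```python
# Toy illustration of implicit-signal scoring. Signal names and weights are
# invented; a real recommender learns them from data at vastly larger scale.
WEIGHTS = {
    "finished": 3.0,        # completed the show (or bought the item)
    "watch_minutes": 0.05,  # time actually spent watching
    "hover_seconds": 0.2,   # lingered on the thumbnail
}

def score(signals: dict) -> float:
    """Combine one user's implicit signals for one title into a single score."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

# One evening of "not really deciding anything" becomes training data:
candidates = {
    "crime_docuseries": {"finished": 1, "watch_minutes": 120, "hover_seconds": 8},
    "baking_show":      {"finished": 0, "watch_minutes": 4,   "hover_seconds": 1},
}

# The row at the top of the screen is just this dictionary, sorted.
ranked = sorted(candidates, key=lambda title: score(candidates[title]), reverse=True)
print(ranked)  # ['crime_docuseries', 'baking_show']
```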
The platforms we interact with are no longer passive libraries of options. They’re active curators of our lives, making suggestions that feel like choices but are really predictions.
How algorithms learn about you
At the heart of these experiences is predictive personalization. Every scroll, click, pause, like, or share is a data point that feeds a system designed to optimize for one goal: engagement.
Social platforms like TikTok and Instagram measure engagement down to the millisecond. TikTok’s “For You” feed, for instance, doesn’t require likes or follows to tailor your experience—it tracks how long you linger on each video, how many times you rewatch, and how you swipe away. Within hours, the app builds a remarkably accurate picture of your preferences, moods, and even emotional triggers.
Facebook’s algorithms evaluate over 100,000 variables to rank posts in your feed. From the kind of content you interact with to how often you engage with specific users, the algorithm builds a profile of what will keep you scrolling.
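Meta doesn’t disclose those variables, but the ranking step itself is conceptually simple: predict an engagement score for each post, then sort. Here’s a hedged sketch with invented feature names and weights:

```python
from dataclasses import dataclass

# Hedged sketch of an engagement-ranked feed. Feature names and weights are
# invented for illustration; real rankers learn thousands of such weights.
@dataclass
class Post:
    author_affinity: float   # how often you engage with this author (0-1)
    predicted_dwell: float   # predicted seconds you'll spend on the post
    outrage_score: float     # how emotionally charged the content is (0-1)
    age_hours: float         # how old the post is

def engagement_score(post: Post) -> float:
    """Predict how well this post will hold your attention."""
    return (2.0 * post.author_affinity
            + 0.1 * post.predicted_dwell
            + 1.5 * post.outrage_score   # provocative content gets a boost
            - 0.2 * post.age_hours)      # freshness still matters a little

feed = [
    Post(author_affinity=0.9, predicted_dwell=30, outrage_score=0.1, age_hours=2),
    Post(author_affinity=0.2, predicted_dwell=45, outrage_score=0.9, age_hours=1),
]
feed.sort(key=engagement_score, reverse=True)  # what you see first
```

Even with these toy weights, the post engineered to provoke outranks the post from someone you actually engage with; that asymmetry is the business model in miniature.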
These algorithms aren’t evil by design. They’re just optimized for attention. But in doing so, they exploit human psychology, nudging us toward what’s easy, engaging, and familiar, not necessarily what’s balanced, challenging, or true.
Where influence becomes manipulation
The real danger isn’t in the existence of algorithms; it’s in our unawareness of their power. This power manifests in subtle, cumulative ways across every domain of life.
1. In politics and news
Social media algorithms have been shown to reinforce ideological bubbles, exposing users to content that aligns with their beliefs and limiting exposure to opposing views. A 2023 Meta-funded study found that users were less likely to engage with political content from opposing viewpoints, and the algorithm quickly adapted to this behavior by deprioritizing cross-cutting content.
This doesn’t just limit perspective—it creates polarization. Over time, we become more convinced of our worldview because everything we see seems to confirm it.
2. In consumer behavior
You might think you buy what you want when you want it. However, platforms are constantly guiding you with “people also bought”, “customers like you”, and “trending near you” prompts. These aren’t neutral suggestions; they’re statistically engineered nudges designed to maximize conversions.
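Under the hood, “people also bought” is typically some flavor of item-to-item collaborative filtering. Here is a minimal sketch, using made-up order data, of the co-occurrence counting at its core:

```python
from collections import Counter
from itertools import combinations

# Minimal sketch of item-to-item co-occurrence, the core of many
# "people also bought" widgets. Orders below are made up for illustration.
orders = [
    {"laptop", "laptop_sleeve", "usb_hub"},
    {"laptop", "usb_hub"},
    {"laptop", "laptop_sleeve", "mouse"},
    {"mouse", "mousepad"},
]

# Count how often each pair of items shows up in the same cart.
co_bought: Counter = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_bought[(a, b)] += 1
        co_bought[(b, a)] += 1

def people_also_bought(item: str, top_n: int = 3) -> list[str]:
    """Items most often purchased alongside `item`."""
    pairs = [(other, n) for (i, other), n in co_bought.items() if i == item]
    return [other for other, _ in sorted(pairs, key=lambda p: -p[1])[:top_n]]

print(people_also_bought("laptop"))  # ['laptop_sleeve', 'usb_hub', 'mouse']
```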
A 2023 study by Barilliance found that personalized product recommendations account for approximately 31% of e-commerce site revenue, a clear sign of their impact on choices and subsequent sales.
3. In emotional states
Facebook’s infamous emotional-contagion experiment, run in 2012 and published in 2014, revealed that the company could subtly manipulate the emotional tone of users’ posts by adjusting what they saw in their feeds, without users ever knowing it. Although the study took place over a decade ago, its implications still echo today in how emotionally provocative content, often outrage or fear, is prioritized because it fuels engagement.
In short: the algorithm often favors what triggers us.
Why this erodes free will
All of this creates what philosopher Shoshana Zuboff calls “behavioral surplus”. Platforms aren’t just reacting to your preferences; they’re shaping them. Your future choices are being predicted and preconditioned.
This makes us more predictable, more pliable, and ironically, more passive.
If you’re only shown one kind of news, one kind of product, one kind of worldview, then your decision isn’t really a choice; it’s a click along a path the algorithms have already paved.
Can we take back control?
Yes, but it requires awareness, intention, and effort.
1. Recognize the system
Start by acknowledging that your feed, your suggestions, your notifications—they’re all curated. Ask: “Why am I seeing this?” Many social media platforms now offer explanations for their recommendations. Use them.
2. Break the bubble
Follow people and pages that challenge your views. Subscribe to newsletters outside your usual interests. Use browser extensions like NewsGuard or Ground News to compare media bias and coverage.
3. Switch to manual mode
Many platforms still offer a “chronological” or “manual” feed; Twitter (now X), Instagram, and Facebook are among them. These feeds are less sticky by design, but they give you a far more transparent view of what’s actually happening, as the sketch below illustrates.
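The difference between the two modes is easy to state in code: a chronological feed sorts on a timestamp you can verify, while the default feed sorts on a score you can’t inspect. A minimal illustration with hypothetical post objects:

```python
from dataclasses import dataclass

@dataclass
class Post:
    timestamp: float         # epoch seconds when the post was published
    engagement_score: float  # opaque, model-predicted "will this hold you"

posts = [Post(timestamp=100.0, engagement_score=0.9),
         Post(timestamp=200.0, engagement_score=0.2)]

# Chronological mode: a transparent rule anyone can verify.
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# Default mode: an opaque score optimized for your attention.
algorithmic = sorted(posts, key=lambda p: p.engagement_score, reverse=True)
```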
4. Demand transparency
Support legislation that calls for algorithmic accountability. The proposed Filter Bubble Transparency Act in the U.S., for example, would require platforms to offer users an unfiltered, non-curated version of their feed.
The bottom line
Algorithms are neither good nor bad. They are tools. However, tools wielded without transparency and optimized for engagement over ethics can distort our reality.
In the pursuit of convenience, we risk outsourcing our judgment. What to watch. What to believe. Who to vote for. Who to date. What to buy. These are intimate, human decisions. If we allow machines to make them for us, we might forget how to make them for ourselves.
The question isn’t whether algorithms will shape our lives. They already do.
The question is: How do we stay awake in a world that wants us to sleep-scroll through every choice?