Network of Power: The End of Free Information?
How invisible algorithms, mass surveillance, and digital bubbles are redefining democracy, freedom, and truth in 2025
October 2025: The Digital Battlefield
On October 24, 2025, the European Commission announced preliminary findings that TikTok and Meta (Facebook/Instagram) violated transparency obligations under the Digital Services Act, the EU law regulating digital platforms. The potential fines? Up to 6% of annual global turnover — billions of dollars. The response from the other side of the Atlantic was immediate: the Trump administration increased pressure on Europe, with Vice President JD Vance threatening to withdraw support for NATO if Europe continues to “regulate Musk’s platforms.”
At the same time, internal documents reveal that the U.S. Environmental Protection Agency (EPA) is allegedly using artificial intelligence to monitor employee communications, searching for “language hostile to Trump or Musk.” In China, state-linked operators are using ChatGPT to draft proposals for systems of mass surveillance targeting Uyghurs. On TikTok, recent research on more than 51 million accounts confirms the existence of political echo chambers in which users are overwhelmingly exposed to content that confirms their preexisting beliefs.
Welcome to the Platform Society of 2025: a society where digital platforms are no longer neutral tools, but power infrastructures mediating every aspect of social, political, and economic life.
The Platform Society: Living Inside Information
What is the Platform Society? The term, coined by Dutch researchers José van Dijck, Thomas Poell and Martijn de Waal, describes a society where platforms like Facebook, Google, Amazon, TikTok are no longer just services. They have permeated social structures. We don’t “use technology” anymore — we live inside digital ecosystems that shape how we work, communicate, inform ourselves, do politics, and consume culture.
Think about your day: you wake up to an alarm on a smartphone (Apple or Android, both closed ecosystems). You check WhatsApp notifications (Meta). You read news in algorithmic feeds (Facebook, X, TikTok). You work in cloud platforms (Google Workspace, Microsoft 365). At night you watch Netflix, order food on Deliveroo, call an Uber. Every interaction generates data. Every data point feeds algorithms. Every algorithm shapes your next decisions.
This is the digital ecology: an environment in which platforms aren’t isolated but form an interdependent ecosystem, where each element influences and depends on the others. And like in any ecosystem, whoever controls the fundamental resources — in this case data, algorithms, infrastructure — controls the entire system.
Algorithmic Bubbles: Prisoners of Our Own Preferences
In 2025, a study of over 51 million TikTok accounts confirmed what many suspected: the algorithms create distinct political echo chambers. Left-leaning and right-leaning users exist in completely separate networks, exposed to radically different narratives about the same events. Even more worrying: the most radical content gets fewer views but generates higher active engagement (comments, shares), building small but intensely polarized communities.
Algorithmic bubbles work like this: the algorithm observes what you click, how long you watch, what you share. Then it shows you more of the same. That sounds harmless — who doesn’t want “relevant” content? But the result is that you’re progressively insulated from different viewpoints. It’s not censorship in the traditional sense: nobody forbids you from seeking alternative information. It’s subtler: that information simply never reaches you, so you don’t even know it exists.
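To see how little it takes, consider a minimal simulation of that loop. Everything in it is invented (the one-dimensional "viewpoint" axis, the click model, the learning rate), but the dynamic is the one just described: a ranking system that learns only from clicks ends up hiding half of the available content without ever blocking any of it.

```python
import random

random.seed(1)

# Toy model of the feedback loop described above (all numbers are hypothetical).
# Content sits on a one-dimensional "viewpoint" axis from -1 (one camp) to +1 (the other).
USER_LEAN = 0.4         # the user's real, fixed preference
LEARNING_RATE = 0.05

def predicted_engagement(item, belief):
    # The recommender scores items by closeness to its current belief about the user.
    return 1.0 - abs(item - belief)

def user_clicks(item):
    # The user genuinely is more likely to click items near their own lean.
    return random.random() < max(0.05, 1.0 - abs(item - USER_LEAN))

def run_round(belief):
    pool = [random.uniform(-1, 1) for _ in range(20)]             # a diverse pool exists
    shown = sorted(pool, key=lambda it: predicted_engagement(it, belief),
                   reverse=True)[:3]                              # but only 3 items are shown
    for item in shown:
        if user_clicks(item):
            belief += LEARNING_RATE * (item - belief)             # learn from every click
    return belief, shown

belief = 0.0                                                      # the recommender starts neutral
for window in range(5):
    shown_items = []
    for _ in range(200):
        belief, shown = run_round(belief)
        shown_items.extend(shown)
    other_side = sum(1 for it in shown_items if it * USER_LEAN < 0)
    print(f"rounds {window * 200:4d}-{window * 200 + 199}: belief={belief:+.2f}, "
          f"share of feed from the other side={other_side / len(shown_items):.0%}")

# Nothing is censored: opposite-side items enter the pool every single round.
# They simply stop being shown, because the ranking has learned the user's lean.
```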
The problem gets worse when we realize that billions of people now get their news primarily from these algorithmic feeds. There is no longer a “shared reality” to start from. There are parallel realities, each with its own “facts,” driven by algorithms optimized not for truth, but for engagement.
Algorithmic Conformism: The Invisible Network of Power
But algorithmic bubbles are only part of the problem. There’s something more insidious: algorithmic conformism. Knowing we’re being watched — even if we don’t know exactly by whom or when — changes our behavior. We self-censor “controversial” posts. We avoid “problematic” topics. We follow trending formats so we’re not algorithmically punished with reduced visibility.
On platforms like TikTok, Instagram, YouTube, creators have learned to “play the algorithm”: they use substitute words to avoid shadow-bans (“unalive” instead of “dead,” “seggs” instead of “sex”), copy viral formats that don’t reflect what they actually want to say, post at “optimal” hours determined by the system. The outcome? Cultural mass-production, where everyone creates the same content to please an invisible algorithm.
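No platform documents exactly how its visibility filters work, but a toy sketch of a keyword-based downranking rule (the word list, penalty, and reach figures below are all invented for illustration) shows why creators reach for "unalive" and "seggs" in the first place:

```python
# Hypothetical illustration only: a naive keyword-based visibility filter.
# Real platforms use far more complex (and undisclosed) systems, but the
# creator-side incentive is the same: dodge the trigger terms, keep the reach.
FLAGGED_TERMS = {"dead", "kill", "sex"}        # invented word list
BASE_REACH = 10_000                            # hypothetical baseline audience
PENALTY_PER_HIT = 0.8                          # each hit cuts distribution by 80%

def estimated_reach(caption: str) -> int:
    reach = BASE_REACH
    for word in caption.lower().split():
        if word.strip(".,!?") in FLAGGED_TERMS:
            reach *= (1 - PENALTY_PER_HIT)
    return int(reach)

print(estimated_reach("My grandfather is dead."))       # 2000  -> quietly buried
print(estimated_reach("My grandfather is unalive."))    # 10000 -> full reach
```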
This is the digital panopticon in action: nobody needs to watch us constantly. The mere awareness that we are observable is enough to shape our collective behavior. And whoever programs that invisible system wields enormous power without ever being seen.
Big Brother and the Network of Power: Governments and Big Tech United in the Surveillance of Information
October 2025: OpenAI publishes a report documenting how Chinese government-linked operators used ChatGPT to draft proposals for mass surveillance tools that analyze “travel movements and police records of Uyghurs and other high-risk individuals.” In the same month, reports emerge that the U.S. administration is using AI to monitor internal communications in government agencies, searching for “hostile language” toward political leaders.
Big Brother is no longer only a literary dystopia — it’s operational reality, in multiple forms:
Government surveillance, tech-augmented: China has over half of the world’s surveillance cameras, many equipped with AI facial recognition. The “social credit” system punishes “antisocial” behavior by restricting access to services, travel, opportunities. But it’s not just China: in the U.S., the Department of Homeland Security uses AI for border screening, the NSA was one of the earliest mass adopters of AI for surveillance, and the ACLU has sued to obtain transparency over how these technologies are being used.
Corporate surveillance as a business model: Google knows what you search, where you go (Google Maps), what you watch (YouTube), who you are (Android). Meta knows who you talk to (WhatsApp), what you like (Facebook), what you buy (Instagram Shopping). Amazon knows what you purchase, what you read (Kindle), even what you say at home (Alexa). These data aren’t just sold to advertisers — they’re shared with governments on legal request, often without meaningful public oversight.
The public-private convergence: The line between government surveillance and corporate surveillance is dissolving. Companies like Palantir sell predictive surveillance tech to authoritarian governments. NSO Group (creators of the Pegasus spyware) sold tools to regimes that used them against journalists and dissidents. When Elon Musk, CEO of X (Twitter), appears alongside Trump and threatens to cut NATO funding if Europe “censors” his platforms, the distinction between political power and corporate power becomes irrelevant.
The paradox of 2025: We live in the era of the greatest access to information in human history — and yet we are more misinformed, more polarized, more isolated in cognitive bubbles, and more watched by both governments and corporations.
This is not a bug in the system — it is the design of the system. A system optimized for profit, control, and the stability of existing power.
Network of Power: From Diagnosis to Structural Understanding
In 1975, the French philosopher Michel Foucault identified a precise mechanism: modern power operates through observation, not through direct coercion. Today, fifty years later, that mechanism has become code. Proprietary algorithms, interlocked financial ecosystems, platforms that process billions of interactions per second determine what millions of people see, think, believe.
This analysis is about who controls the cognitive infrastructure of our civilization through networks of algorithms, financial ecosystems, interconnected platforms. It is not a moral critique of individual people or their intentions. It is an investigation of how the system itself functions — and why that distributed, opaque control raises fundamental questions for democratic self-governance.
The Digital Panopticon: When Surveillance Becomes Self-Discipline
In 1975, Michel Foucault published Discipline and Punish, a book that changed how we understand power. Foucault analyzed the panopticon, a circular prison designed in the 18th century by English philosopher Jeremy Bentham. The idea was brutally elegant: a central tower where guards could watch every prisoner, without the prisoners ever knowing if they were being watched at that exact moment. The result? Prisoners began to act as if they were always being watched, policing their own behavior. Power functioned without direct force: the mere possibility of being seen was enough.
Foucault’s insight was that this becomes the modern form of power: no longer the king executing punishment in the public square, but a system in which people self-regulate because they know (or believe) they are observed.
As Foucault wrote, modern power “reaches into the very grain of individuals, touches their bodies and inserts itself into their actions and attitudes, their discourses, learning processes and everyday lives.”
Today the panopticon is digital and distributed. It’s no longer a central tower; it’s an ecosystem of proprietary algorithms, personalized feeds, interlocked financial networks. But it works on the same principle: we know that everything we do online leaves a trace. Every search, every click, every second of watch time is recorded. We don’t know exactly who is watching or what will be done with that data, but we change our behavior anyway.
Big Tech platforms — Google, Meta (Facebook/Instagram/WhatsApp), Amazon, TikTok, Netflix — are not run by single omnipotent CEOs, but by complex corporate systems that profit from tracking and influencing user behavior. Surveillance is networked, invisible, planetary. And it operates with a precision Bentham and Foucault would have considered impossible.
Bentham’s panopticon had a visible guard. The digital panopticon has invisible algorithms, written by anonymous teams, validated by boards of directors, financed by hedge funds and sovereign wealth funds.

The Attention Economy: When We Are the Product
Foucault explains the control mechanism: surveillance that produces self-discipline. But to really understand what’s happening, we have to go a step deeper and ask: who controls the controllers? Who owns and governs these platforms?
In the attention economy, there is a core principle most users don’t fully grasp: if an online service is free, you are the product. Platforms don’t “sell software to users” — they sell users. Or more precisely: they sell access to users’ attention and the ability to influence their decisions.
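What "selling attention" means in practice can be sketched in a few lines. In the simplified auction below (advertiser names, bids, and click-probability estimates are all invented), the platform never hands over your data; it sells the right to appear in front of you, priced by what its behavioral model predicts you will do:

```python
# Simplified sketch of an engagement-weighted ad auction (all values invented).
# Advertisers bid a price per click; the platform's behavioral profile of the
# user supplies a predicted click probability; the slot goes to the highest
# expected revenue, not the highest bid.
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    price_per_click: float     # what the advertiser pays if the user clicks
    predicted_ctr: float       # platform's estimate, built from behavioral data

def run_auction(bids: list[Bid]) -> Bid:
    # Expected revenue for the platform = bid x probability the user clicks.
    return max(bids, key=lambda b: b.price_per_click * b.predicted_ctr)

bids = [
    Bid("SneakerBrand",      price_per_click=2.00, predicted_ctr=0.010),
    Bid("PoliticalCampaign", price_per_click=0.50, predicted_ctr=0.080),
    Bid("LocalRestaurant",   price_per_click=1.20, predicted_ctr=0.015),
]

winner = run_auction(bids)
print(f"Slot sold to {winner.advertiser} "
      f"(expected value {winner.price_per_click * winner.predicted_ctr:.3f})")
# The political campaign wins not by bidding more, but because the behavioral
# profile says this particular user is unusually likely to respond to it.
```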
The model works like this: platforms collect enormous amounts of behavioral data about how people think, what they want, and how they make decisions. Then they sell the ability to influence those decisions. But who buys this influence? Not just companies selling products. The Big Tech ecosystem is deeply entangled with global finance:
- BlackRock and Vanguard — primary shareholders of Meta, Google, Amazon, Apple. Together they manage over $20 trillion.
- Sovereign wealth funds — Saudi Arabia’s PIF has poured billions into Uber, Lucid Motors, gaming; Qatar’s QIA holds stakes in Volkswagen, Barclays, European tech.
- Interconnected venture capital — Andreessen Horowitz, Sequoia Capital, Y Combinator finance the startups feeding the ecosystem.
- Data brokers — Acxiom, Epsilon, Oracle Data Cloud buy/sell data on 700+ million people.
This financial network is not a side note. It’s structurally constitutive of digital power. Google is not just an isolated search engine — it is a node in a global financial network:
- Alphabet’s market cap: ~$1.7 trillion
- Top institutional shareholders: BlackRock (6.1%), Vanguard (7.2%), Fidelity
- Advertising partnerships with millions of sites worldwide
- Android OS installed on 2.5+ billion devices
- YouTube with 2+ billion monthly users
- Google Cloud processing data for governments and Fortune 500 companies
Meta (Facebook) is not just Facebook. It’s an integrated ecosystem:
- Facebook: 3 billion monthly active users
- Instagram: 2 billion users
- WhatsApp: 2 billion users
- Oculus VR and metaverse investments: $36B spent on Reality Labs
- Physical infrastructure: undersea cables connecting 200+ countries
- Meta AI being progressively embedded into all products
The attention economy, combined with interlocked global finance, produces distributed control. There is no single “villain,” but a mesh of converging interests, all optimizing for profit. This is where Foucault meets political economy: the digital panopticon is not technologically neutral — it is economically oriented toward maximizing engagement and extracting behavioral value from users.
The Network of Power and Information as a Battlefield: Who Decides What’s True?
In a traditional democracy, information circulated through publicly recognizable and verifiable institutions: independent newspapers, libraries, universities, public broadcasters. Citizens could access a plurality of sources, compare narratives, and form opinions. This is the Enlightenment model: truth emerges from public debate, from rational confrontation of ideas.
But in 2025 this model is fundamentally transformed. Recent data show that 67% of Americans get news primarily from social media. 83% of under-30s in Europe use TikTok or Instagram as their main news source. These platforms, however, are not neutral public squares like a library or a town hall. They are algorithmic recommendation systems optimized for one metric: maximizing engagement, meaning time on platform.
Algorithms of Truth: Code That Decides Reality
What are algorithms? In simple terms, they’re sequences of instructions that computers follow to make decisions. On social media, algorithms decide which content to show you, in what order, with what priority. They’re like invisible filters standing between us and information.
Facebook’s News Feed, for example, processes 4 petabytes of data a day (about 4 million gigabytes) to decide what to show its 3 billion users. The algorithm doesn’t merely reflect reality — it actively constructs it. Each time it calculates which content to show which user, it’s making editorial decisions with political and social consequences. But these decisions are presented as neutral, technical, automatic.
Algorithmic neutrality is an ideological myth. Every algorithm necessarily encodes values and priorities.
When an algorithm decides what to show us, it has to choose: what do we optimize for? Engagement (time spent)? Accuracy? Diversity of perspectives? How do we balance those goals when they conflict? What content do we penalize, and for which reasons? These are fundamentally political decisions — decisions about what matters — disguised as technical “objectivity.”
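One way to make the point concrete is to imagine the ranking function written out. The sketch below is hypothetical (real systems use thousands of signals, and the weights here are invented), but it shows where the politics hide: not in the formula's shape, but in its weights.

```python
# Hypothetical feed-ranking score. The function shape is generic; the weights
# are where the value judgments live. Set them differently and the same
# platform produces a very different public sphere.
def rank_score(post, weights):
    return (weights["engagement"] * post["predicted_engagement"]
            + weights["accuracy"]  * post["source_reliability"]
            - weights["toxicity"]  * post["predicted_toxicity"]
            + weights["diversity"] * post["viewpoint_novelty"])

post = {
    "predicted_engagement": 0.9,   # outrage travels well
    "source_reliability":   0.2,   # dubious source
    "predicted_toxicity":   0.7,
    "viewpoint_novelty":    0.1,
}

engagement_first = {"engagement": 1.0, "accuracy": 0.1, "toxicity": 0.1, "diversity": 0.1}
accuracy_first   = {"engagement": 0.2, "accuracy": 1.0, "toxicity": 0.8, "diversity": 0.4}

print(f"score under engagement-first weights: {rank_score(post, engagement_first):+.2f}")
print(f"score under accuracy-first weights:   {rank_score(post, accuracy_first):+.2f}")
# The same post is boosted under one set of weights and buried under the other.
# Choosing the weights is an editorial act, even if nobody calls it that.
```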
One revealing case: in 2020, before the U.S. presidential election, Facebook temporarily changed its algorithm to reduce the spread of disinformation. Engagement dropped by 30%, but the overall quality of circulating information measurably improved. After the election, the changes were rolled back. Why? Because the drop in engagement hurt ad revenue. A choice between truth and profit. Profit won.
This is not an anomaly or an error. It’s the system’s internal logic. When information is monetized through advertising, the financial imperative becomes: maximize time spent on platform. Emotional, polarizing, shocking content maximizes engagement. Accurate, balanced, verified content often does not — it’s less shareable, less provocative, less “viral.”
The Network of Power and Information Filter Bubbles: When Everyone Lives in Their Own Reality
In 2011, digital activist Eli Pariser coined the term “filter bubble” to describe a disturbing phenomenon: algorithms personalize our feeds based on past behavior, creating information bubbles where we primarily see content that confirms what we already believe. The result is a progressive fragmentation of shared reality.
Concrete example: during the COVID-19 pandemic, one person’s Facebook feed showed scientific articles, virologists, vaccination data. Another user’s feed showed dissenting doctors, alternative theories, testimonials of adverse effects. It’s not that one version was “true” and the other “false” in some absolute sense — it’s that the algorithm optimized both feeds to confirm each user’s existing bias, because that keeps them scrolling and sharing.
This realizes one of George Orwell’s most unsettling predictions: control doesn’t necessarily require totalitarian censorship. It can happen through the fragmentation of truth itself. There is no longer a single public debate about different interpretations of shared facts. There are parallel, incompatible realities, each with its own “alternative facts.”
The Crisis of Authority: When Experts and Conspiracy Theorists Have Equal Weight
For centuries, democratic societies recognized the existence of epistemic authorities — institutions with credibility in determining what is true: universities, scientific academies, investigative journalism. These institutions had authority because they were subject to verification mechanisms: peer review in science, fact-checking in journalism, professional ethics.
The digital ecosystem has deeply eroded that authority. On YouTube, a virologist with 40 years of research competes algorithmically with a conspiracy theorist. Often, the conspiracy theorist wins — not because they’re more accurate, but because they’re more emotionally provocative, more engaging, more “shareable.” The algorithm doesn’t distinguish epistemic credibility from virality. It optimizes for watch time.
Bot Farms: Automated Lying at Scale
“Bot farms” are organized operations (often in countries like Macedonia, Russia, the Philippines) that run millions of fake accounts on social media. These accounts amplify specific narratives in a coordinated way. The cost? Around $100 for 10,000 fake followers, $15 for a coordinated blast of 1,000 retweets. Disinformation is now a purchasable commodity, an industrial service.
These operations hack platform logic: content with strong early engagement gets boosted by recommendation systems. Bot farms manufacture that early engagement, triggering organic amplification by real users. The result? A false narrative looks like it’s “rising from the people,” when in reality it was orchestrated.
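The exploit can be reduced to a few lines of pseudologic. Assume, purely for illustration, a recommender that widens distribution for any post whose first-hour interactions cross a threshold; the system has no way to distinguish purchased interactions from organic ones.

```python
# Toy model of the "early engagement" exploit (thresholds and counts invented).
# Many recommenders weight a post's first-hour interactions heavily when
# deciding whether to push it to a wider audience.
BOOST_THRESHOLD = 200      # hypothetical: first-hour interactions needed for a boost
BOOST_MULTIPLIER = 50      # hypothetical: extra distribution once boosted

def projected_reach(organic_first_hour: int, purchased_first_hour: int,
                    baseline_reach: int = 1_000) -> int:
    # The ranking system cannot tell organic interactions from purchased ones.
    early_engagement = organic_first_hour + purchased_first_hour
    if early_engagement >= BOOST_THRESHOLD:
        return baseline_reach * BOOST_MULTIPLIER   # pushed to wider audiences
    return baseline_reach

honest_post = projected_reach(organic_first_hour=40, purchased_first_hour=0)
seeded_post = projected_reach(organic_first_hour=40, purchased_first_hour=300)

print(f"honest post reach: {honest_post:>7,}")     #   1,000
print(f"seeded post reach: {seeded_post:>7,}")     #  50,000
# A few dollars of fake interactions in the first hour buys the organic
# amplification that makes a narrative look like it "rose from the people".
```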
Deepfakes: The End of “Seeing Is Believing”
For centuries, visual evidence — photos, video — acted as proof. Deepfake technology — hyper-realistic fake audio/video generated with AI — has destroyed that standard. You can now generate convincing video of a politician saying something they never said, in a matter of hours. AI-cloned audio is nearly indistinguishable from the original voice.
This is not just a technical problem. It’s epistemological. If we can’t trust our eyes and ears, what do we build belief on? How do we verify reality?
Cambridge Analytica: The Industrialization of Psychological Manipulation
The Cambridge Analytica scandal, which exploded in 2018, revealed mass-scale psychological manipulation as an industry. The company harvested data from 87 million Facebook profiles without consent, using advanced psychometric models (the OCEAN personality model) to build hyper-targeted political messaging.
The logic: different personality types respond to different emotional triggers. Algorithms identified each individual’s psychological profile, then generated messaging tailored to them — and then delivered that messaging to them, and only them. Two voters in the same district could see completely different campaign narratives from the same candidate, each calibrated to manipulate their specific psychological vulnerabilities.
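Stripped to its skeleton, that routing logic looks something like the sketch below. The trait scores and message texts are invented (the real operation used far richer models and thousands of ad variants), but the structure is the point: one campaign, many privately delivered arguments.

```python
# Hypothetical illustration of psychographic message routing on the OCEAN model
# (Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism).
# Trait scores and message copy are invented; the routing is the point.
MESSAGE_VARIANTS = {
    "neuroticism":       "They are coming for your family's safety. Act before it's too late.",
    "openness":          "A bold new vision for a country that embraces change.",
    "conscientiousness": "A serious plan, fully costed, delivered on schedule.",
}

def pick_variant(profile: dict) -> str:
    # Route each voter to the message aimed at their strongest measured trait.
    dominant_trait = max(profile, key=profile.get)
    return MESSAGE_VARIANTS.get(dominant_trait, "Vote for a better future.")

voters = {
    "voter_A": {"openness": 0.2, "conscientiousness": 0.3, "neuroticism": 0.9},
    "voter_B": {"openness": 0.8, "conscientiousness": 0.4, "neuroticism": 0.1},
}

for name, profile in voters.items():
    print(f"{name}: {pick_variant(profile)}")
# Same campaign, same candidate, yet no two voters need ever see the same
# argument, and nobody outside the ad platform sees the full set of messages.
```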
This is no longer democratic persuasion (a candidate making a public argument and letting citizens decide). This is atomized psychological engineering. A candidate no longer has one public message, but millions of private micro-messages. Democratic transparency collapses when every voter is seeing a different, individually optimized campaign.
Industrialized disinformation is not just a “bad content” problem (which can be fact-checked and debunked). It is an architectural problem.
As long as the economic model rewards engagement over accuracy — and as long as recommendation algorithms amplify emotional, polarizing content — disinformation will always have a competitive advantage in the attention market. The sensational beats the verified, the provocative beats the nuanced — not by accident, but by design.
Inside the Network of Power: Control of Information Networks — Manuel Castells
If Foucault shows us how power operates (through surveillance and self-discipline), and economic analysis shows us who finances that power (global financial ecosystems), Spanish sociologist Manuel Castells gives us the framework to understand where, structurally, that power resides in contemporary society.
Castells is the theorist of the “network society.” His core thesis: we live in an era where power no longer primarily resides in traditional institutions (states, churches, armies), but in networks — and especially in the control of those networks. Whoever controls network architecture controls flows of information, capital, influence.
Castells identifies four forms of power in networks, from the most superficial to the most profound:
- Networking Power — the power of those inside the network over those excluded. Example: if you don’t have a smartphone, you’re cut off from banking, transport, essential communication.
- Network Power — the power that comes from the standards required to participate in the network. Example: if you want to publish an app, you must follow Apple’s or Google’s rules, or you’re out.
- Networked Power — the power some actors in the network exercise over others. Example: an influencer with millions of followers has more power than ordinary users in the same network.
- Network-making Power — the deepest level: the power to design the network itself and decide how it works. Example: whoever decides how Facebook’s algorithm distributes content wields immense, invisible power.
In 2025, Network-making Power — the power to program the networks — is not in the hands of single individuals. It sits with boards of directors, executive committees, engineering teams, strategic alliances that are often invisible to the public. This is the deepest, least visible layer of power: not who uses the network, not who wins inside the network, but who designs it and defines how it must function.
Google and the Android Ecosystem: Making a Network of Power
Networking Power: If you don’t have Android or iOS, you’re excluded from banking, transport, essential and emergency communications. Exclusion from the network is exclusion from functional society.
Network Power: Google Play Store enforces rigid standards. Violate policy? Your app is delisted, your business dies. The network’s standards determine who can participate.
Network-making Power: Google decides which APIs are exposed, how the OS behaves, which apps get access to which sensors/data. All opaque, decided internally. This is where the real power sits — not in “content moderation,” but in defining what is technically possible in the first place.
Meta controls the social graph of 3 billion people on Facebook, 2 billion on Instagram, 2 billion on WhatsApp. Not “a list of friends” — a full dynamic map of human relationships. This simultaneously realizes Foucault’s panopticon (permanent observation) and Castells’ network-making power (control of relational architecture).
With this, Meta can:
- Predict future behavior with high accuracy: divorces, job changes, onset of depression
- Micro-target political messaging at single individuals with precision marketing
- Manipulate collective emotion by tuning the News Feed (2012 experiment: induce sadness or happiness by adjusting the % of positive/negative posts)
Meta’s power is not in “content hosted.” It’s in the ability to modulate social relationships and information flows. This is Network-making Power applied to human relations themselves.
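The 2012 experiment mentioned above worked by quietly filtering the feed's emotional mix. A minimal sketch of that kind of intervention (the posts, labels, and percentages here are invented for illustration):

```python
import random

random.seed(7)

# Toy version of an emotional-contagion-style intervention (all data invented):
# silently drop a fraction of posts with one emotional valence and see what
# remains. The user still sees "their friends' posts"; they never see the filter.
def filtered_feed(posts, suppress_valence, drop_rate):
    return [p for p in posts
            if p["valence"] != suppress_valence or random.random() > drop_rate]

posts = ([{"id": i, "valence": "positive"} for i in range(50)]
         + [{"id": i + 50, "valence": "negative"} for i in range(50)])

def share_positive(feed):
    return sum(p["valence"] == "positive" for p in feed) / len(feed)

control   = filtered_feed(posts, suppress_valence=None,       drop_rate=0.0)
treatment = filtered_feed(posts, suppress_valence="positive", drop_rate=0.4)

print(f"control feed:   {share_positive(control):.0%} positive")
print(f"treatment feed: {share_positive(treatment):.0%} positive")
# Roughly 40% of the positive posts never appear in the treatment feed. No post
# was faked and nothing was deleted at the source, yet the emotional tone of
# that user's visible world has been tuned from above.
```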
Google Search: Gatekeeper of Knowledge
93% of web searches go through Google. It is the de facto gatekeeper of online human knowledge. Google decides what exists epistemically — sites it doesn’t index are practically non-existent. Algorithmic shifts (Panda 2011, Penguin 2012, Helpful Content 2022) wiped out traffic for millions of sites overnight, often without explanation.
Google fuses Search with YouTube (video), Maps (local business), Shopping (e-commerce), Scholar (academia). Vertical control of the entire information journey. This is the highest degree of Network-making Power: not only deciding who participates in the network, but defining what counts as valid knowledge.
Democracy and Algorithmic Information: A Structural Problem
We’ve looked at how power works (Foucault), where it sits (Castells), who funds it (BlackRock, Vanguard), and how it manipulates information (industrialized disinformation). Now we have to face the core question: can democracy function when its information infrastructure is controlled by private ecosystems optimized for profit?
What Democracy Needs to Function
Classical democratic theory — from John Stuart Mill to Jürgen Habermas — assumes the existence of a “public sphere”: a space where citizens can access shared information, deliberate rationally, compare different views, and build consensus. This public sphere requires some basic conditions:
- Pluralistic access: Citizens must be able to access diverse sources and perspectives, not only those that confirm their beliefs.
- Transparency: Citizens must know who is speaking, and with what interests, to evaluate information critically.
- Communicative rationality: Arguments should be judged on merit, not emotional manipulation or psychological triggers.
- Inclusion: Everyone should be able to participate in public debate, not just those with resources or visibility.
The current digital ecosystem systematically violates all of these. Filter bubbles destroy informational pluralism. Psychological micro-targeting hides who is speaking, and why. Algorithms optimized for engagement privilege emotional manipulation over rational discourse. Bot farms and the digital divide drown authentic voices in manufactured noise.
Cambridge Analytica and the Erosion of the Informed Vote
One of democracy’s foundational assumptions is that citizens vote based on informed political preferences — meaning after rationally evaluating candidates and their programs. But what happens if these very preferences are the product of micro-targeted psychological manipulation?
Cambridge Analytica showed that with enough data and sufficiently advanced algorithms, it’s possible to shift political opinions by targeting messages to individuals’ psychological vulnerabilities. A voter anxious about security received fear-heavy messaging about threats. A voter with strong “family values” received something completely different. Same campaign, opposing messages — each engineered to manipulate a specific psychographic cluster.
This is not democratic persuasion — making a public argument in the public sphere. This is algorithmic behavioral engineering at the subconscious level. If voters are being manipulated like that, the concept of an “informed vote” collapses. We’re not choosing candidates freely — we’re reacting to stimuli engineered for us.
The Network of Power and the Moderation Paradox: Who Watches the Watchers?
Democracy requires freedom of expression. But it also requires limits: incitement to violence, hate speech, election disinformation must be moderated to protect the democratic system itself. Historically, that moderation was distributed across the democratic system: laws passed by elected parliaments, enforced by independent courts, with appeal mechanisms.
Today, content moderation is largely privatized. Meta, Google, X (Twitter) unilaterally decide what to remove from their platforms. Policies are opaque, decision-making is increasingly automated (done by algorithms, not humans), and there’s no meaningful appeal process. The result? The boundaries of democratic discourse are being defined by private corporations with no democratic mandate.
The battle over the Digital Services Act (DSA): The European Union attempted to address this with the Digital Services Act, which fully took effect in 2024. The law forces platforms with over 45 million monthly EU users to provide transparency, grants researchers access to platform data, and imposes fines of up to 6% of annual global turnover for violations.
On October 24, 2025, the European Commission announced preliminary findings that TikTok and Meta violated DSA transparency obligations. The accusation? They made it “excessively burdensome” for researchers to access public data, leaving them with “partial or unreliable datasets” that undermine the ability to study whether users — including minors — are exposed to illegal or harmful content.
Big Tech + Trump counterattack: The platforms did not sit still. The response came on multiple fronts:
Political pressure via the Trump administration: Mark Zuckerberg publicly argued that “the U.S. government has a role in defending [the American tech industry] abroad.” Apple’s Tim Cook reportedly asked the administration to intervene against EU fines. Trump responded: he criticized EU regulators for imposing fines on Apple, Google and Meta, and JD Vance threatened that the U.S. could pull NATO support if Europe tries to “regulate Musk’s platforms.”
The “censorship” narrative: Platforms pushed a false line that the DSA is a censorship law. In reality, the DSA is procedural and content-neutral: it requires platforms to flag and remove illegal content, adopt transparent moderation systems, and give users mechanisms to appeal. But branding it as “European censorship” is politically more effective.
Regulatory collision: Meta claimed that DSA data-sharing requirements “place the DSA and the GDPR [General Data Protection Regulation] in direct tension,” and asked regulators to “clarify how these obligations should be reconciled.” Classic strategy: use privacy law to resist transparency about how user data is exploited.
February 2025 — The Trump memorandum: On February 21, 2025, Trump signed a memorandum directing the administration to review EU and UK policies that might force U.S. tech companies to develop or deploy products that “undermine free speech or enable censorship.” The memo also targeted Digital Services Taxes (DST), arguing that foreign governments unfairly tax U.S. tech giants.
In August 2025, the U.S. State Department sent a directive to American diplomats, instructing them to condemn “undue restrictions” imposed by the DSA and specifically to push Europe to: narrow the definition of “illegal content,” revise or remove the Code of Practice on Disinformation, reduce fines, and avoid requiring platforms to respond to “trusted flaggers” (entities chosen by EU governments to report illegal content).
The unsolved paradox: This creates a structural paradox with no obvious solution: if we allow disinformation and hate speech to circulate freely, democracy is threatened by manipulation and violence. But if we allow private corporations to moderate unilaterally, democracy is threatened by unaccountable private censorship. And if democratic governments try to regulate, they’re accused of “censorship” and face economic and geopolitical retaliation.
As one analysis by the German Marshall Fund put it: “The rhetoric across the Atlantic paints a picture of fundamentally divergent visions from increasingly entrenched camps.” While Europe and the U.S. fight over regulation, technology evolves faster than law, and citizens remain trapped in a system nobody truly controls.
Authoritarian Technology: When Digital Infrastructure Serves Dictatorship
If platforms create structural problems for democracies, they are perfect tools for authoritarian regimes. China has shown how to combine total digital surveillance (Foucault’s panopticon scaled to a nation) with algorithmic control of information (Castells’ network-making power in the hands of the state) to build an unprecedented system of social control.
The “Great Firewall” censors undesirable content. WeChat, the dominant messaging app, monitors all private communications. The “social credit” system punishes behavior deemed “antisocial” by the regime, restricting access to services, travel, opportunities. AI facial recognition tracks the movement of 1.4 billion people. It’s an integrated system where technologies originally developed in the West for ad targeting are repurposed for authoritarian governance.
The ChatGPT–Uyghur surveillance case: In October 2025, OpenAI published an alarming report. Operators linked to the Chinese government used ChatGPT to draft proposals for surveillance tools that analyze “travel movements and police records of Uyghurs and other high-risk individuals.” As Ben Nimmo, OpenAI’s lead investigator, explained: “There is a push inside the PRC to enhance the use of artificial intelligence for really big things like surveillance. The Chinese Communist Party was already surveilling its own population, but now they’ve heard about AI and they’re thinking: maybe we can tune it up a bit.”
The EPA–Trump–Musk case: But AI-driven surveillance is no longer just a foreign authoritarian threat. In October 2025, Reuters reported that some managers at the U.S. Environmental Protection Agency (EPA) were told by Trump appointees that “Musk’s team is implementing AI to monitor workers, including scanning communications for language considered hostile to Trump or Musk.” The EPA categorically denied this as “completely false.” But the mere plausibility — and the credibility with which such reports are received — shows how thin the line between “democratic governance” and “authoritarian surveillance practice” has become.
The West condemns China’s system, but exports the same technologies. Companies like Palantir sell predictive surveillance platforms to authoritarian governments. NSO Group (the Israeli firm behind Pegasus spyware) sells interception tools to repressive regimes that deploy them against journalists and dissidents. Meta and Google operate in non-democratic states, adapting their policies to state censorship demands.
The State Department’s AI “catch and revoke” plan: In March 2025, Axios reported on a new U.S. State Department plan, “AI-powered,” to revoke visas from students suspected of “terror sympathies.” Days later, ICE (Immigration and Customs Enforcement) arrested Mahmoud Khalil, a Columbia University student involved in pro-Gaza protests. Trump promised: “This is the first arrest of many more to come.” The use of AI to surveil immigrants and marginalized communities is not new — but in 2025, the scale and aggressiveness reached a level that alarmed civil rights observers.
This raises a fundamental question: does digital technology have an intrinsic drift toward authoritarianism? Or can democracies still reclaim their cognitive infrastructure and govern it democratically? The answer will depend on political and technical choices in the next few years — and on whether citizens can see the structure before it becomes irreversible.
Network of Power: The Human Cost of the System
Theoretical analysis risks feeling abstract unless we connect it to material outcomes — to actual bodies experiencing harm inside this architecture of power.
Take the case of Dylan Mulvaney, a transgender influencer who, after a Bud Light partnership whose backlash cost the company $1.4 billion in lost value, spent months fearing for her physical safety, with inadequate corporate protection. Or Hamish Steele, who receives daily antisemitic hate after targeted harassment campaigns amplified on social media. The problem isn’t just what individual bad actors say online. The problem is the ecosystem that systematically amplifies hate:
- Algorithms optimized for engagement: Polarizing, hostile content naturally gets more reach, more shares, more comments. This is a direct consequence of the business model we’ve analyzed: maximizing engagement means maximizing ad revenue.
- Reactive, not proactive moderation: Hate speech is often only removed after the damage is already done, after mass reporting. Proactive moderation is expensive and reduces margins.
- Coordinated harassment campaigns: Organized on Discord or Telegram, executed on Twitter or Instagram, and almost impossible to stop once triggered. The network itself makes swarm harassment trivial.
- Doxing made easy: Personal information aggregated by data brokers (the same companies that sell our behavioral data) is dumped by anonymous accounts, putting real people in danger.
The Moderators: The Invisible Victims of the Network of Power
Meta, Google, TikTok employ tens of thousands of content moderators, often in countries like the Philippines, India, Kenya. They’re paid $2–$5/hour. Their job? To stare at traumatic material — graphic violence, child abuse, terror propaganda — for eight or more hours a day. PTSD is widespread. Psychological support is often inadequate.
This is the hidden cost of platform moderation. These workers sit inside the network, with privileged access and expertise, yet they are thoroughly exploited: they have no strategic leverage and gain almost nothing from their position. They are simultaneously enforcers of the panopticon (they apply moderation guidelines) and victims of the panopticon (they’re algorithmically monitored, measured on impossible KPIs, instantly replaceable).
The cruelty is not primarily the result of one evil executive. It’s economic design. Moderation costs money. Engagement prints money. The structural imperative is: minimize the first, maximize the second. The bodies of moderators are a cost center to be driven down — not people to be protected.
Information and Power: Algorithms, Capital, Infrastructure
Google Search: Monopoly of Knowledge
93% of web searches go through Google. But “Google” is not Sundar Pichai personally. It’s a layered algorithmic system run by distributed teams: anonymous Quality Raters, Machine Learning Engineers, Policy Committees. Invisible decision-makers. They don’t have public profiles. They operate under NDA. Their aggregate decisions determine what “exists” for billions of people.
This simultaneously realizes Foucault’s panopticon (the surveillance of search behavior) and Castells’ Network-making Power (Google defines the architecture of accessible knowledge). Power without a face. More dangerous than a visible prince, because you can’t confront it. It appears technical, so it appears neutral. It’s distributed, so there is no single accountable person. It’s opaque, so it’s effectively unauditable.
The Meta Algorithm: Automated Social Engineering
Facebook’s News Feed, Instagram’s For You — algorithms that process 4 petabytes of data per day. Who sets the parameters? Entire teams with conflicting incentives: Growth (maximize engagement), Integrity (minimize harm), Monetization (optimize revenue), AI Research (push predictive modeling).
Meta’s final policy at any given moment is an emergent compromise. Meta has admitted (leaked docs, 2021) that it does not fully understand how its own algorithms shape user behavior. They know the system works. They don’t fully know why or how.
This is power that even its nominal owners can’t fully control. Complex systems self-evolve toward certain objectives (engagement, revenue), and the side effects (polarization, radicalization, addiction) are treated as “externalities.” Here we see the limit of human governance: even Network-making Power is partial when systems become too complex for any individual to grasp.
BlackRock & Vanguard: Capital as Invisible Governance
BlackRock manages roughly $10 trillion in assets. Vanguard manages $8.5 trillion. For scale: Italy’s annual GDP is ~€2 trillion. These two asset managers are among the top three shareholders in nearly every major tech company: Apple, Microsoft, Amazon, Google, Meta, Netflix. Also Disney, ExxonMobil, Pfizer, JPMorgan.
Their influence is exercised through shareholder voting and strategic pressure on corporate boards. The result? A strategic convergence among companies that should be competitors. Google, Meta, and Amazon compete for users and ad spend — but they share the same major shareholders. From BlackRock’s point of view, which holds large stakes in all three, a stable oligopoly is better for returns than destructive competition.
But who sets these funds’ strategies? Not Larry Fink (BlackRock’s CEO) alone. Hundreds of anonymous portfolio managers, risk committees, quantitative models. Vanguard is even more opaque: it’s “mutually owned” (owned by its own funds), quasi-cooperative, and governed largely by automated index-tracking logic.
This is where all of our theoretical frameworks merge: these funds exercise Network-making Power at the macro level (steering platforms’ strategic direction through ownership); they act as invisible stewards of the panopticon (financing surveillance infrastructure); and they are almost completely outside public accountability.
The Fundamental Asymmetry of Power
Compare different forms of power in the digital landscape:
| Actor | Transparency | Accountability | Control | Vulnerability |
| --- | --- | --- | --- | --- |
| Visible individuals like Musk | High (every post is public) | Medium (can be publicly criticized) | Low to medium (constrained by boards, shareholders, law) | High (reputation can evaporate quickly) |
| Proprietary algorithms | None (secret code, opaque decisions) | Zero (nobody can sue “the algorithm” directly) | Absolute, over billions of users | Low (slow to regulate, hard to audit) |
| Asset managers like BlackRock | Minimal (internal strategy not public) | Near zero (answer only to investors) | Systemic and indirect (through ownership) | Extremely low (too large to fracture) |
| Cloud infrastructure (AWS, Azure, Google Cloud) | Partial (some technical details public) | Low (private contracts, few viable alternatives) | Absolute, over the physical layer of the internet | Medium (expensive to replace, but theoretically possible) |
The conclusion is clear: the less transparent and less accountable the actor, the more systemic and pervasive the power. The most visible figures (celebrity CEOs) absorb all the media attention — but their actual power is relatively constrained. The invisible structures shape the field.
This is exactly the dynamic Foucault described: the most effective power is invisible, operating through normalization and infrastructures we take for granted. Castells adds: this power operates by programming the networks themselves — not merely using them.
The Invisible Face of Power Over Information
Real power in the network society is not where we look (high-visibility individuals on social media), but where we don’t look (algorithms in data centers, shareholder votes, compliance committee decisions).
As of this writing, the battlefield is clearly drawn:
In Europe, the Commission has just accused TikTok and Meta of violating the Digital Services Act. Twelve member states have sent an urgent letter calling on the Commission to “accelerate ongoing investigations” to safeguard European elections from foreign interference. Spanish Prime Minister Pedro Sánchez told the World Economic Forum in Davos: “The technology that was supposed to free us has become the instrument of our oppression. The social media that were supposed to bring unity, clarity and democracy have instead delivered division, vice, and a reactionary agenda.”
In the United States, the Trump administration is escalating pressure on Europe, threatening tariffs and withdrawal of NATO support. Vice President JD Vance publicly stated that Europe’s tech rules “reflect a much more restrictive approach to online expression and innovation that amounts to censorship.” Simultaneously, reports emerge of AI being used for internal monitoring of federal agencies, while the NSA — the U.S. intelligence agency most aggressively deploying AI — continues to operate with near-total opacity despite ACLU lawsuits demanding transparency.
In China, the government is using ChatGPT to develop proposals for more efficient mass surveillance. The infrastructure of facial recognition, social credit scoring, and communication control continues expanding, serving as a model — admired or feared, depending on who’s looking — for other authoritarian regimes.
On the platforms, research on 51 million TikTok accounts confirms increasingly polarized political echo chambers. Meta has recently changed its hate speech policies to allow language that describes trans people as “mentally ill” and women as “domestic objects” — a genuflection to the Trump agenda, according to critics. Algorithmically optimized engagement continues to reward emotional and polarizing content over accuracy or constructive dialogue.
Netflix will lose some subscribers. Elon Musk will keep tweeting. The flashpoints will come and go. But the structure remains.
The central question is still open: in a democracy, who should set the rules of the digital public sphere?
If the answer is “opaque ecosystems of proprietary algorithms, financed by global capital, managed by anonymous boards, shielded by geopolitical threats,” then the problem is structural, not accidental. It’s not about good or bad intentions of individual actors. It’s about architecture.
Foucault showed us how power becomes invisible through surveillance and self-discipline. Castells showed us how that power operates by programming networks and controlling the standards that make social interaction possible. Political economy showed us who finances this architecture and why: profit maximization through extraction of behavioral data. Information theory showed us how this architecture undermines the very possibility of rational democratic discourse.
The problem is not that Musk tweets or Zuckerberg makes controversial calls. The problem is the ecosystem that allows invisible networks of algorithms, capital, and infrastructure to control the public sphere with no democratic accountability.
Recognizing the mechanisms of the digital panopticon (Foucault), understanding where Network-making Power resides (Castells), and seeing how the political economy of platforms is built on extraction instead of democratic values — that is the first step toward understanding the reality we’re living in.
Technology evolves alongside humans, in a co-evolution where it’s increasingly difficult to tell where one ends and the other begins. Algorithms are not just tools: they’ve become part of the cognitive environment in which we think, decide, form our identities.
The answer is not on social media. It’s in understanding the structure of the networks themselves.
Note: October 2025
This article was written at a moment of exceptionally high tension in the global digital landscape. The European Commission is pursuing its first enforcement cases under the Digital Services Act while the Trump administration escalates geopolitical pressure on EU tech regulation; state-linked actors in China and government offices in the U.S. are testing AI for surveillance; and research confirms increasingly polarized echo chambers on platforms like TikTok.
We don’t know if in six months, one year, five years, the DSA will still exist, if the EU will have resisted U.S. pressure, if new forms of AI-powered surveillance will have been normalized, or if platforms will have found new ways to evade any form of accountability. What we do know is that the struggle over who controls the cognitive infrastructure of civilization is fully underway — and its outcome will determine the future of democracy itself.
Algorithms manufacture distraction by focusing attention on visible individuals. But real power sits in the invisible architecture that decides who speaks, who hears, what circulates, and what counts as truth. Understanding that architecture — how it works, who controls it, why it’s structured this way — is the starting point for any serious reflection on the future of democracy in the digital age.
Whoever controls the cognitive environment in which we think, decide, and form our identities controls the possible future.
Sources and References
Critical Theory and the Philosophy of Power
- Foucault, M. (1975). Surveiller et punir: Naissance de la prison. Gallimard.
- Castells, M. (2009). Communication Power. Oxford University Press.
- Van Dijk, J. A. G. M. (2006). “Digital divide research, achievements and shortcomings.” Poetics, 34(4-5).
Platform Society and Digital Ecology
- Van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society: Public Values in a Connective World. Oxford University Press.
- Li, Y., Cheng, Z., & Gil de Zúñiga, H. (2025). “TikTok’s political landscape: Examining echo chambers.” New Media & Society.
Regulation and the Digital Services Act (2025)
- CNBC (24 October 2025). “EU says TikTok and Meta broke transparency rules under tech law.”
- European Commission. “The Digital Services Act package.”
- German Marshall Fund of the United States (2025). Analysis of the transatlantic regulatory conflict on disinformation, moderation, and the DSA.
AI Surveillance and Governments (2025)
- CNN (7 October 2025). “Chinese-linked actors used ChatGPT to pitch mass surveillance proposals targeting Uyghurs,” citing OpenAI.
- Reuters (October 2025). Report on alleged AI use inside the EPA to monitor “language hostile to Trump or Musk.” (The EPA officially denied this).
- ACLU (2024). “How is one of America’s biggest spy agencies using AI? We’re suing to find out.”
- Axios (March 2025). Internal U.S. State Department notes on an AI-driven “catch and revoke” visa plan and Trump’s statements after the arrest of Mahmoud Khalil.
Political Economy of Platforms
- Khan, L. M. (2017). “Amazon’s Antitrust Paradox.” Yale Law Journal, 126(3).
- Srnicek, N. (2017). Platform Capitalism. Polity Press.
- Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
Disinformation and Algorithmic Manipulation
- Cadwalladr, C. & Graham-Harrison, E. (2018). “Revealed: 50 million Facebook profiles harvested for Cambridge Analytica.” The Guardian.
- Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin Press.
- Vosoughi, S., Roy, D., & Aral, S. (2018). “The spread of true and false news online.” Science, 359(6380).