Network Power: What It Is, Where It Came From, and How It Governs the Digital Age
Introduction: The invisible power that governs our digital life
Why is it so hard to leave WhatsApp even when you want to? The answer is Network Power—a concept that explains how power operates in the digital age.
Unlike traditional power—which works through laws and penalties—Network Power makes some choices effectively mandatory not because alternatives are forbidden, but because they’re too costly. It doesn’t stop you from leaving; it makes leaving so inconvenient that you choose to stay.
In this article you’ll learn what Network Power is, how the concept emerged, how it works today through platforms like Google, Facebook, Apple, and Amazon, and why understanding it is essential to grasp who truly governs the internet.

What is Network Power: definition
Network Power is the ability to influence behavior by controlling the standards, infrastructures, and interconnections that make certain choices inevitable.
When a standard or platform becomes dominant, those who don’t use it become isolated. Individual choice remains formally free, but becomes materially costly. People converge on the same standard not because the law demands it, but because practical life does.
Consider WhatsApp and its 2 billion users. Technically, you can use Signal, a more privacy-respecting alternative. But if everyone you know is on WhatsApp, you won’t receive group messages, you’ll have to convince every contact to install a second app, and you’ll lose conversation history. The switching cost is huge. WhatsApp doesn’t force you to stay—it makes the alternative impoverishing.
This is the nature of Network Power: it operates through the architecture of relationships, not through direct commands. It hides behind convenience, behind “everyone uses it,” behind the practical friction of doing otherwise.
The roots of the concept: Grewal and Castells
David Singh Grewal and the power of standards (2008)
In 2008, philosopher and legal scholar David Singh Grewal published Network Power: The Social Dynamics of Globalization. His question: why do certain practices spread globally even without coercion?
The answer: you don’t need coercion when you can create structural dependency. Grewal identifies network externalities as the key mechanism. A network externality occurs when the value of a good increases as more people use it. The phone is the classic example: if you’re the only one with it, it’s useless; if everyone has it, it becomes indispensable.
This creates a virtuous cycle for those inside the dominant network and a vicious one for those outside. The more users join, the more valuable the network becomes, and the more new users are attracted. Those who stay out progressively lose value: fewer reachable contacts, fewer opportunities, less social relevance. Grewal calls this “coordination power”: it makes a choice increasingly inevitable because everyone else has already made it.
This power is non-coercive but structural. You don’t have to adopt the dominant standard, but if you don’t, you pay a rising price. It’s insidious because it looks like a natural convergence toward the most convenient solution. Behind that apparent spontaneity lies a precise power dynamic: control over coordination points.
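The coordination dynamic Grewal describes can be sketched numerically. The quadratic "Metcalfe-style" value function below is an illustrative assumption, not Grewal's own formalism: it simply encodes the idea that each member's value grows with the number of other members, so the penalty for staying out rises as the network grows.

```python
# Toy model of network externalities: a network's value grows with its
# size, so the cost of staying out rises as others join.
# The quadratic value function is an illustrative assumption.

def network_value(users: int) -> int:
    """Total value of a network: each member can reach every other member."""
    return users * (users - 1)  # number of directed connections

def cost_of_staying_out(adopters: int, total_population: int) -> float:
    """Share of potential contacts an outsider cannot reach."""
    return adopters / total_population

population = 100
for adopters in (10, 50, 90):
    print(adopters,
          network_value(adopters),
          cost_of_staying_out(adopters, population))
```

Note how the value grows faster than linearly: going from 10 to 90 adopters multiplies the network's value almost ninety-fold, while the outsider's unreachable share climbs from 10% to 90%. This is "coordination power" in miniature.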
Manuel Castells and programmable architectures (2009)
Sociologist Manuel Castells expands the analysis in Communication Power (2009). If Grewal explains why people converge, Castells explains who governs that convergence.
We live in a network society, but networks are not neutral environments: someone builds them, programs them, and decides the rules embedded in their architecture. Those who program networks organize attention, define what is visible, and determine how easy or difficult it is to perform certain actions.
When YouTube changes its recommendation algorithm, it’s not only optimizing technically—it’s making a political choice about what should circulate. When Instagram favors Reels in the feed, it steers the cultural production of millions of creators. The platform doesn’t impose formally, but those who don’t adapt lose visibility.

How Network Power works today
Big Tech—Google, Amazon, Meta, Apple, Microsoft—controls critical infrastructures through which the digital life of billions of people flows. This control operates at multiple levels, from what we see on the surface to deeper, invisible layers.
Control over digital identity
Every time you click “Sign in with Google” or “Sign in with Apple,” you outsource the management of your digital identity. That convenience generates structural power: Google and Apple become guarantors of your online identity—mandatory intermediaries between you and hundreds of services.
If your Google account is closed for suspected policy violations, you lose not only Gmail but access to an entire ecosystem: Drive documents, Google Photos, Play Store purchases, and every site registered with Google Sign-In. A single point of failure for your digital existence.
The power asymmetry is radical: Google decides unilaterally—often via automated systems—who keeps access and who doesn’t. Appeal procedures exist, but they’re slow, opaque, and uncertain.
The distribution monopoly: app stores
On iPhone, every app must go through the App Store. Apple decides which apps can exist inside iOS, applies criteria that are formally described but often opaque in practice, and retains the power to remove apps already published.
The Epic Games case (2020–2021) shows how difficult it is to challenge this power. When Epic tried to bypass Apple’s 30% commission with its own payment system, Fortnite was removed from the App Store. The legal dispute largely ended in Apple’s favor: the court upheld Apple’s right to control its ecosystem, ordering only that developers be allowed to point users to external payment options.
Android allows installation from external sources, but Google Play remains dominant: over 90% of users download apps only from there. Google has made the Play Store the default, “safe,” integrated path. Leaving it requires technical competence, acceptance of security risk, and giving up integrated services.
The EU’s Digital Markets Act designated these stores as “gatekeepers” and imposed opening obligations. Apple has enabled alternative app stores in Europe, but with economic conditions that some developers consider prohibitive. Formal compliance, a substantively closed ecosystem.
The algorithm as an invisible governor
When you open YouTube, Facebook, or Google Search, you see what the algorithm decides to show you. Algorithms aren’t neutral: they’re optimized for engagement (time spent) and monetization (ads served).
YouTube’s system analyzes what you watched, how long you watched, what similar users watch, and which videos generate interaction—then recommends content designed to keep you on the platform. Academic research has shown this can produce progressive radicalization: users who consume moderate content may be nudged toward more extreme content because it generates higher engagement.
The algorithm learns that emotionally charged content keeps people glued to the screen longer, so it promotes it. This economic logic produces social consequences: polarization, informational fragmentation, amplification of controversy.
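The logic of engagement optimization can be illustrated with a deliberately simplified ranker. The scoring weights and the `emotional_charge` signal below are invented for illustration; real systems use learned models over far richer signals, but the structural bias is the same: if provocative content predicts engagement, a purely engagement-optimized score will promote it.

```python
# Toy feed ranker: items are scored purely by predicted engagement.
# The weights, and the assumption that emotional charge drives
# engagement, are illustrative inventions.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    relevance: float         # how informative/useful (0..1)
    emotional_charge: float  # how provocative (0..1)

def predicted_engagement(item: Item) -> float:
    # Provocative content holds attention longer, so it dominates the score.
    return 0.3 * item.relevance + 0.7 * item.emotional_charge

feed = [
    Item("Balanced policy analysis", relevance=0.9, emotional_charge=0.2),
    Item("Outrage-bait hot take",    relevance=0.3, emotional_charge=0.95),
    Item("Measured explainer",       relevance=0.8, emotional_charge=0.3),
]

ranked = sorted(feed, key=predicted_engagement, reverse=True)
print([item.title for item in ranked])
# The outrage-bait item ranks first despite being the least informative.
```

No one told the ranker to prefer outrage; the preference falls out of the objective function. That is the sense in which optimization choices are political choices.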
Google Search handles over 90% of global online searches. Appearing on the first page means existing in public debate; not appearing means becoming invisible. 75% of users don’t go beyond the first page, and the top three results capture 60% of clicks. Google decides what is visible—and in the digital world, visibility is reality.
Cloud as infrastructural dependency
Amazon Web Services (AWS) holds 32% of the global cloud market, Microsoft Azure 23%, Google Cloud 10%. Together they provide the computational backbone for much of the internet. Thousands of organizations depend on them to store data, run applications, and process transactions.
A cloud provider can decide to stop serving a client, effectively interrupting operations. This happened to Parler after the January 2021 Capitol attack: AWS terminated hosting for terms-of-service violations. Parler stayed offline for weeks. Migrating to an alternative provider was complex and costly. Whoever controls the rails can shut down access.
Generative AI: from environment to cognitive tutor
Social media functions as an environment that mediates perception, information, and attention, yet remains external to the cognitive process. LLMs operate differently: they act as tutors that participate in the production of thought itself.
A programmer using GitHub Copilot—an AI assistant that suggests code in real time—isn’t just adopting a tool. They’re reshaping a cognitive workflow: thinking in fragments, formulating prompts, verifying outputs, integrating suggestions. After months, that process becomes the norm. Going back to writing code entirely by hand feels inefficient and slow. The habit has sedimented.
This is cognitive lock-in, deeper than technical lock-in. Switching cloud providers requires migrating data. Switching away from an LLM—or stopping its use—requires unlearning habits, slowing down processes, and recalibrating expectations of what “fast” and “complete” mean. Organizations that integrate ChatGPT into customer service and internal documentation standardize not only software but ways of working.
Three cognitive functions of LLMs
When someone uses an LLM, the interaction tends to perform at least three functions that go beyond simple information retrieval:
- It turns intuitions into structured language. A vague idea or poorly framed problem becomes an argument. The user co-produces thought through interaction: the system proposes structures, links, and reasoning that the user accepts, edits, and integrates.
- It narrows the space of options. The answer isn’t merely informative: it proposes interpretive frames and action alternatives, reorganizing the decision space. Some options become salient; others remain out of view.
- It triggers operational trajectories. Each output contains implicit cues about what to do next. Interaction becomes a guided sequence: question → answer → follow-up question suggested by the answer → iteration.
This shift from environment to tutor introduces co-cognition: the system thinks with the user and therefore shapes how the user thinks and decides. Marvin Minsky described the mind as a “society of agents”—specialized cognitive modules that collaborate to produce thought. LLMs externalize and distribute a set of cognitive functions through an interface that sits on top of a global infrastructure.
Immanent mind: intelligence distributed in infrastructure
What emerges is a form of immanent mind: a synthetic intelligence distributed across computing networks, trained on humanity’s textual production, increasingly positioned between thought and expression, intention and action, subject and world.
When you ask an LLM to draft a résumé, a company policy, or a delicate email, you receive a structure that standardizes language and format. If thousands of organizations ask the same thing, convergence emerges: de facto standardization produced by repetition, not by explicit imposition.
Concentration is reinforced by rising economic barriers. Frontier models require massive investment, industrial-scale compute, and scarce expertise. Only a handful of global actors can compete in producing them. Everyone else builds on top, via APIs and licenses: concentrated infrastructure, distributed services.
If LLMs become cognitive infrastructure—a layer between problem and solution, question and usable output—then those who control these models also control the operational forms through which organizations think. Each workflow integration contributes to network effects. And as always with Network Power, individual choice is free; the cost of staying out rises with every new user who enters.
Fragmented perceived reality
Personalized algorithms create what Eli Pariser called the “filter bubble”: informational bubbles in which each user sees a different internet. Facebook’s News Feed doesn’t show the same content to two users—it selects based on past interactions, content type, and predicted engagement.
During the 2020 U.S. elections, conservative Facebook users saw a dominant narrative (election fraud, censorship), while progressive users saw the opposite (threats to democracy, white supremacism). Both groups lived in parallel realities assembled by personalization systems. There was no longer a shared “front page,” no common agenda to start from.
The personalized algorithm creates separate worlds because it optimizes for individual engagement, not collective understanding. Platforms have little economic incentive to fix it: polarization generates engagement, and engagement generates advertising profits.
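Personalization-driven divergence can be sketched with a minimal model: two users draw from the same pool of stories, but interaction-based scoring feeds each one more of what they already clicked. The story pool, topic tags, and click histories are invented for illustration.

```python
# Minimal filter-bubble sketch: one shared pool of stories, personalized
# per user by past interactions. Topics and histories are illustrative.

from collections import Counter

pool = [
    ("fraud claims",      "conservative"),
    ("censorship debate", "conservative"),
    ("democracy at risk", "progressive"),
    ("extremism report",  "progressive"),
    ("local weather",     "neutral"),
]

def personalize(history: list[str], top_k: int = 3) -> list[str]:
    """Rank stories by how often the user engaged with their topic."""
    clicks = Counter(history)
    ranked = sorted(pool, key=lambda story: clicks[story[1]], reverse=True)
    return [title for title, _ in ranked[:top_k]]

user_a = personalize(["conservative", "conservative", "neutral"])
user_b = personalize(["progressive", "progressive"])
print(user_a)  # conservative stories surface first
print(user_b)  # progressive stories surface first
```

Both users started from an identical pool; after a handful of clicks their front pages barely overlap. Scale that to years of interaction history and the "parallel realities" described above follow directly.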
Conclusion: recognizing power to contest it
Network Power is the dominant form of power in the digital age because it controls the very conditions of participation online. Big Tech is not just a service provider: it decides who can speak through moderation, who can be found through ranking, who can access through identity control, who can operate through cloud infrastructure, and increasingly—through LLMs—how problems and solutions are formulated.
Understanding this mechanism changes how you relate to technology. You recognize that convenience has a price: dependency. That apparent “free” services are paid through data and lock-in. That every design choice—from “Sign in with Google” to recommendation ranking, from moderation systems to LLM interfaces—embeds a distribution of power.
The network is not a neutral environment where the best ideas win spontaneously. It is an architecture governed by those who build the rails and write the rules. Until we recognize this dynamic, we will be governed by it without fully realizing it.
Network Power can be contested, but it requires coordinated effort: informed users who make different choices, regulators who enforce interoperability, developers who build open alternatives. No single actor can shift the equilibrium alone. But together, they can create an internet where power is distributed, switching platforms is feasible, and standards serve coordination rather than control.
The game is still open. What’s at stake is whether the internet remains contestable—or becomes a set of digital fiefdoms ruled by a few. The choice is not technological; it’s political. And it starts by recognizing that power exists.
Suggested readings
- David Singh Grewal — Network Power: The Social Dynamics of Globalization (2008)
- Manuel Castells — Communication Power (2009)
- Shoshana Zuboff — The Age of Surveillance Capitalism (2019)
- Tarleton Gillespie — Custodians of the Internet (2018)
- Eli Pariser — The Filter Bubble (2011)
- Marvin Minsky — The Society of Mind (1986)