Machine Learning: The Mathematics of Control
Machine learning is not magic. It’s computational power that replicates and amplifies the inequality structures that already exist in the real world.
The Machine That Learns (But From Whom?)
In 1959, Arthur Samuel defined machine learning as a machine’s ability to learn without being explicitly programmed.
It sounds liberating, almost democratic. Reality is sharper: machines learn from the data we feed them, and that data carries every prejudice, every power asymmetry, every discrimination encoded by the society that produced it.
Technically, machine learning is a subset of artificial intelligence that uses mathematical algorithms to identify patterns in data and make predictions about never-before-seen information. An algorithm is a set of rules. A model is the result of applying that algorithm to a dataset after training. The difference matters: before training you have procedures; after training you have an “autonomous” decision system that affects real lives.
The pipeline is predictable: collect data, prepare it, choose an algorithm, train the model, evaluate performance, tune parameters, deploy to production. Every step hides a political choice disguised as a technical decision. Which data do you collect? Who labels it? Which variables count as relevant? Who decides what “optimal performance” means?
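The technical skeleton of that pipeline fits in a few lines. A minimal, illustrative sketch (synthetic data, a toy nearest-centroid “algorithm” — every name and number here is invented for the example):

```python
# Minimal sketch of the standard ML pipeline using only the standard
# library: collect -> prepare -> train -> evaluate. The "algorithm" is
# the fit() procedure; the "model" is what it produces from the data.
import random
import statistics

random.seed(0)

# 1. Collect: two synthetic 1-D classes (a stand-in for real data).
data = [(random.gauss(0.0, 1.0), "a") for _ in range(100)] + \
       [(random.gauss(3.0, 1.0), "b") for _ in range(100)]

# 2. Prepare: shuffle and split into train/test sets.
random.shuffle(data)
train, test = data[:160], data[160:]

# 3. Train: "learning" here is just computing one centroid per class.
def fit(samples):
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: statistics.mean(xs) for y, xs in by_label.items()}

model = fit(train)  # the algorithm applied to data yields a model

# 4. Evaluate: predict by nearest centroid, measure accuracy on test data.
def predict(model, x):
    return min(model, key=lambda y: abs(x - model[y]))

accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")
```

Note how every “political” question in the text maps to a line of code: which data goes into `data`, who assigned the labels, what `accuracy` is allowed to mean.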
There are three main paradigms. Supervised learning uses labeled data to predict specific outputs, like flagging spam or approving loans. Unsupervised learning searches for hidden structure in unlabeled data, like clustering customers by purchasing behavior. Reinforcement learning learns through trial and error, maximizing rewards over time, like autonomous driving systems or high-frequency trading algorithms.
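The third paradigm is the least intuitive. A hedged sketch of trial-and-error learning (an epsilon-greedy agent on an invented two-action problem, all values illustrative):

```python
# Minimal reinforcement-learning sketch: an epsilon-greedy agent learns by
# trial and error which of two actions pays more on average, without ever
# being told the true reward probabilities.
import random

random.seed(3)

values = {"a": 0.0, "b": 0.0}       # estimated reward per action
counts = {"a": 0, "b": 0}
true_reward = {"a": 0.3, "b": 0.7}  # hidden from the agent

for _ in range(2000):
    # explore 10% of the time, otherwise exploit the current best estimate
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    # incremental running-mean update of the value estimate
    values[action] += (reward - values[action]) / counts[action]

print("learned values:", {a: round(v, 2) for a, v in values.items()})
```

The agent ends up preferring action "b" because it maximizes the reward it was given — which is exactly the point the text makes later: whoever defines the reward defines the behavior.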
The mathematical complexity is real: linear regression, decision trees, neural networks, support vector machines, hierarchical clustering. But technical complexity also serves another purpose—less discussed. It creates an epistemic barrier between those who control algorithms and those who are subjected to them, turning social and political decisions into apparently neutral issues that require “technical expertise.”

The Prediction Market
The numbers tell the story of acceleration. In 2024, Italy’s AI market reached €1.2B (+58%), with a substantial share tied to experimentation that includes generative AI, while the rest remains “traditional” machine learning. In 2025, Big Data & Analytics spending climbed to €4.1B (+20%): the infrastructure that makes data “machine-readable” is expanding—turning interactions into algorithmic raw material.
But behind press releases sits a precise geography of power: investment capacity, access to infrastructure, control of data, bargaining strength with vendors, internal skills. In sectors like Telco & Media, Insurance, Banking, Energy, Retail, data isn’t an “asset.” It’s the lever used to rewrite the market.
Political translation: more data → more prediction → more capacity to intervene in behavior.
Economic translation: more prediction → more optimization → more value extraction.
Social translation: more opacity → less contestability → more asymmetry between those who decide and those who endure.
Automating Discrimination
Amazon had to scrap a machine learning recruiting system because it systematically discriminated against women applicants. The model learned from historical hiring data: if the past is skewed, the model mistakes the skew for “merit.”
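The mechanism is easy to reproduce. A hedged sketch on fully synthetic data (the feature names and probabilities are invented): a model trained on skewed historical hiring decisions reproduces the skew even when the protected attribute is removed, because a correlated proxy feature carries it in.

```python
# Synthetic illustration of proxy discrimination: the group column is
# dropped before "training", but a correlated feature ("club") lets the
# historical skew survive into the learned hiring rates.
import random

random.seed(1)

def make_history(n=2000):
    rows = []
    for _ in range(n):
        group = random.choice(["m", "f"])
        skill = random.random()                          # merit, identical across groups
        club = (group == "m") ^ (random.random() < 0.1)  # proxy: 90% aligned with group
        # historical bias: skilled "f" candidates hired only half the time
        hired = skill > 0.5 and (group == "m" or random.random() < 0.5)
        rows.append((skill, club, group, hired))
    return rows

history = make_history()

# "Train": estimate P(hired) per (skill-bucket, club) cell -- group unused.
def fit(rows):
    cells = {}
    for skill, club, _, hired in rows:
        key = (skill > 0.5, club)
        n, k = cells.get(key, (0, 0))
        cells[key] = (n + 1, k + hired)
    return {key: k / n for key, (n, k) in cells.items()}

model = fit(history)

# Among equally skilled candidates, the learned hiring rate still differs
# sharply by the proxy value: the past has become a criterion.
print("skilled, club=True :", round(model[(True, True)], 2))
print("skilled, club=False:", round(model[(True, False)], 2))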
This is not an isolated case. It’s structural when you deploy machine learning in unequal societies. In biometric systems, errors are not neutral: large-scale testing has documented demographic differences in false positives, and academic research has shown meaningful gaps across gender and skin tone in commercial classifiers.
The problem worsens because many models—especially deep learning—operate as opaque “black boxes.” Even when a system is technically explainable, the decision chain becomes a machine for distributed de-responsibilization: the developer “wrote code,” the manager “adopted a tool,” the model “optimized a metric.” Nobody is responsible—yet someone pays.
- Hiring: the past becomes a criterion for the future.
- Credit and scoring: prediction “measures” trustworthiness, then imposes it as destiny.
- Predictive policing: feedback loops turn surveillance into evidence.
- Biometrics: technical error becomes political risk (and it often hits the same people).
Governance: The Theater of Regulation
The EU has adopted the AI Act: it entered into force on 1 August 2024 and becomes fully applicable on 2 August 2026, with earlier milestones. From 2 February 2025, bans on prohibited practices and AI literacy obligations apply. From 2 August 2025, obligations for General-Purpose AI (GPAI) models kick in.
On paper the framework is clear: unacceptable uses (banned), high-risk systems (heavily regulated), limited risk (transparency), minimal risk (no specific obligations). On paper. In practice, compliance becomes a market filter: those with legal teams, internal audits, and infrastructure can “carry” regulation; everyone else becomes dependent on platforms, providers, and pre-trained models.
Penalties can be significant: up to €35M or 7% of global annual turnover (depending on the type of infringement). But tech regulation history suggests fines often become a predictable cost of doing business.
Machine Learning: The Question That Remains Open
In Italy, estimates suggest a very high theoretical automation potential—close to 50% of “equivalent jobs” could be automatable. But the technical number is not the real story. The political question is: who decides what to automate, by which criteria, for whose benefit.
Every machine learning model embeds a social ontology: a vision of how the world should work, a value hierarchy turned into a mathematical objective function. When an algorithm decides who gets a loan, who gets hired, who gets surveilled, who gets priority, it is operationalizing a theory of justice. The difference is: nobody voted for that theory.
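The claim that an objective function is a value hierarchy can be made concrete. A hedged sketch (synthetic scores, invented groups): the same data, two objectives, two different decision rules — neither is “the neutral one.”

```python
# Same scores, two objective functions, two different loan-approval rules.
# Objective 1 treats the score as neutral merit; objective 2 chooses
# per-group thresholds to equalize approval rates. All data is synthetic.
import random

random.seed(2)

# Group "b" scores are shifted down, e.g. by skewed historical data.
scores = [("a", random.gauss(0.6, 0.15)) for _ in range(500)] + \
         [("b", random.gauss(0.5, 0.15)) for _ in range(500)]

def approval_rate(threshold, group):
    xs = [s for g, s in scores if g == group]
    return sum(s >= threshold for s in xs) / len(xs)

# Objective 1: one global threshold for everyone.
global_t = 0.55
target = approval_rate(global_t, "a")

# Objective 2: search for the "b" threshold matching group "a"'s rate.
equalized_b_t = min((t / 100 for t in range(0, 101)),
                    key=lambda t: abs(approval_rate(t, "b") - target))

print("global rule:    a =", round(approval_rate(global_t, "a"), 2),
      " b =", round(approval_rate(global_t, "b"), 2))
print("equalized rule: b threshold =", equalized_b_t)
```

Both rules are mathematically well-defined; choosing between them is the theory of justice the text describes, and the code cannot make that choice for you.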
Transparency isn’t enough. Explainability isn’t enough. Even if we opened every black box and understood every decision, the core problem would remain: we are delegating moral and political choices to systems designed to optimize quantifiable metrics. But not everything that matters can be counted—and not everything that can be counted truly matters.
Machine Learning: Essential Sources
- AI in Italy: record market numbers — Osservatori Digital Innovation (Politecnico di Milano)
- Big Data market in Italy +20% in 2025 — Osservatori Digital Innovation (Politecnico di Milano)
- AI Act: application timeline and regulatory framework — European Commission (Shaping Europe’s Digital Future)
- Commission welcomes political agreement on AI Act (penalties incl. €35M / 7%) — European Commission (Press Corner)
- AI and work: automation potential and Italy scenarios — AI Observatory (Politecnico di Milano)
- Amazon scraps secret AI recruiting tool that showed bias against women — Jeffrey Dastin (Reuters, Oct 11, 2018)
- Demographic effects in face recognition — NIST (2019)
- Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification — Joy Buolamwini & Timnit Gebru (2018)
- To Predict and Serve? — Kristian Lum & William Isaac (2016)
- Predictably Unequal? The Effects of Machine Learning on Credit Markets — Andreas Fuster et al. (Journal of Finance, 2022)
- Some Studies in Machine Learning Using the Game of Checkers — Arthur L. Samuel (1959)
- Weapons of Math Destruction — Cathy O’Neil (Penguin Random House)
- The Black Box Society — Frank Pasquale (Harvard University Press)