From clicks to heart-rate spikes, AI feeds on harvested lives. A sharp guide to data colonialism, its human costs, and the pushback gaining ground.
Banner image courtesy of Omar Lopez Rincon
We like to talk about AI as magic: apps that anticipate us, feeds that “know” us, models that conjure music and text from thin air. But none of it is thin air. It runs on extraction: clicks, voices, images, locations, intimate behavioural patterns, all harvested at scale and turned into profit and power. That system has a name: data colonialism.

Nick Couldry and Ulises A. Mejías laid the groundwork in The Costs of Connection (2019), then pushed it further in Data Grab (2024). Their claim is sharper than “we’re being watched.” Platforms don’t just collect information about you; they position your life as extractable: your likes, your heart-rate spikes, your midnight clean-outs, all processed into profit and leverage.
Imagine 1492, but with paperwork. No ships, no beach landings, no flag in the sand. Just an “Agree and continue” button you click while half-asleep. Where older empires extracted gold, rubber, labour, and land, today’s extraction is quieter and more scalable: clicks, location pings, voice notes, heart-rate spikes, the little behavioural tells you didn’t even realise you were generating.
This is the shift Couldry and Mejías are trying to get us to see. In The Costs of Connection they describe how society is being rewired into what they call “data relations”: a social arrangement that makes extraction feel normal, even inevitable. Your coffee catch-up becomes engagement. Your flare of anger becomes a measurable signal. Your “private” life turns into machine-readable material. Consent, in practice, is often less a choice than a ritual, buried in legalese that most people never truly get to negotiate.
And the mechanism is boring, which is exactly why it works.
First: datafication. Life gets translated into legible units. A playlist becomes a mood profile. A walk is no longer a walk, it is a route, a pace, a pattern. Once something is measurable, it becomes tradable.
Second: appropriation. What began as your behaviour becomes somebody else’s asset. Companies accumulate the datasets, build the models, own the infrastructure, then sell back the benefits as “personalisation” and “convenience”.
Third: concentration. The winners compound. Platforms do not just attract users, they lock them in. The more people show up, the more value they can extract, the better they get at pulling even more people in. It is network effects with an extraction engine attached.
The human cost is not theoretical, and it is not evenly distributed. A lot of the invisible labour that keeps AI running has been pushed outward, to places where wages are lower and scrutiny is thinner. Investigations have reported Kenyan workers paid less than $2 an hour to label and filter brutal content for OpenAI-related training work, at a serious psychological toll. The product feels frictionless to users, but the supply chain is not.
Then there is the physical footprint. Data does not float, it sits in warehouses full of servers that drink electricity and water. The International Energy Agency estimates data centres consumed around 1.5% of global electricity in 2024, and the trendline is up. Even OpenAI’s CEO has recently had to publicly address the energy questions, because the scrutiny is no longer niche.
So yes, it echoes older colonial logics. Not because history is repeating itself exactly, but because the structure is familiar: extraction framed as progress, value flowing towards the centre, and the costs externalised.
But the story does not end with doom and a Black Mirror soundtrack. Pushback is real, and it is getting more organised.
Indigenous data sovereignty movements, like Te Mana Raraunga in Aotearoa New Zealand, argue that data about Indigenous peoples should be governed with Indigenous authority, not treated as a free-for-all resource. Regulators have also started landing punches that actually sting. The EU’s data protection regime has produced record-scale penalties, including a €1.2bn GDPR fine against Meta’s Irish entity in 2023. Brazil’s LGPD has its own enforcement regime too, with penalties enforceable since 2021.
Creators are pushing from another angle. The legal terrain is messy, but the pressure is mounting around training data, rights, and the boundary between “learning” and “taking.” In the UK, for instance, Getty v Stability AI has already produced a headline-grabbing High Court judgment that shows how undercooked parts of the legal framework still are.
For anyone working in culture, design, or media, this is not abstract politics. It is your taste, your references, your archive, your craft, flattened into a statistical paste and sold back to you as “inspiration.” It is power, but rendered as interface. It does not arrive with a baton, it arrives with a prompt box.
And if rebellion is going to be credible, it has to be practical. Audit your permissions. Use your data access and deletion rights where they actually work. Choose tools that let you opt out of training where possible. Support governance models that treat data as something people can steward collectively, not something platforms get to quietly annex.
Data colonialism is not lurking in the background. It is already in your palm, quietly earning interest.