The ethics of surveillance capitalism
The EPF Atlas

Sep 7, 2025
“Surveillance capitalism”, a concept coined by scholar Shoshana Zuboff, describes a modern economic order in which human experience is treated as free raw material for covert commercial practices of extraction, prediction, and sale. The model embodies the adage: “if the product is free, you are the product.”
In 2021, internal documents leaked by Facebook whistleblower Frances Haugen revealed that Instagram’s engagement-maximising algorithm was systematically pushing harmful content to teenage girls struggling with body image. Because the algorithm optimises screen time by harvesting behavioural signals (likes, shares, clicks), users who had engaged with diet, fitness or beauty content were fed progressively more extreme versions of it. In other words, the algorithm leverages behavioural data to maximise the time users spend on the platform as a proxy for advertising revenue. To achieve this, it progressively “nudges” users toward more emotionally polarising material, because such content is statistically more likely to elicit further engagement. In the context of body image, this often means a gradual shift from generic fitness posts to extreme dieting advice or pro-eating-disorder communities.
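The dynamic described above, where engagement-driven recommendation drifts toward ever more extreme content, can be illustrated with a deliberately simplified sketch. Everything here is a toy assumption (the single “extremity” score, the numbers, the greedy ranking rule), not a description of Instagram’s actual system:

```python
# Toy model of an engagement-maximising feed (hypothetical assumptions
# throughout): each content item has an "extremity" score in [0, 1], and
# predicted engagement peaks for items slightly more extreme than what
# the user last engaged with.

def predicted_engagement(user_profile: float, item_extremity: float) -> float:
    # Assumption: engagement is highest when the item is one notch (0.05)
    # more extreme than the user's current behavioural profile.
    return 1.0 - abs(item_extremity - (user_profile + 0.05))

def recommend(user_profile: float, catalogue: list[float]) -> float:
    # Greedily serve whatever maximises predicted engagement.
    return max(catalogue, key=lambda item: predicted_engagement(user_profile, item))

catalogue = [i / 100 for i in range(101)]  # extremity scores 0.00 .. 1.00
profile = 0.10                             # mild interest (e.g. generic fitness posts)
history = []

for _ in range(10):
    shown = recommend(profile, catalogue)
    history.append(shown)
    profile = shown                        # harvested behaviour feeds back into the profile

print(history)  # drifts monotonically from 0.15 up to 0.60
```

Even in this stripped-down model, no malicious intent is required: greedy engagement maximisation combined with a behavioural feedback loop is enough to produce a steady drift toward the extreme end of the catalogue.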
Empirical evidence demonstrates that this algorithmic feedback loop has a profound psychological impact on social media users. It creates an echo chamber that magnifies vulnerabilities rather than balancing them: overconsumption of hyper-tailored, affectively charged content has been linked to rising rates of anxiety, depression, and self-harm ideation among adolescents. Recent research indicates a strong positive correlation between the intensity of “upward social comparison” on Instagram and depressive symptoms (Yuan et al., 2025), suggesting that the platform is not merely reflecting user preferences but actively shaping mental states. Corporations’ practice of extracting and leveraging users’ data is undoubtedly efficient and profitable; it also shows how personal autonomy is undermined under the surveillance-capitalism model.
Beyond commercial motives, user data also carries political value. Cyber-utopianism refers to the belief that new communication technologies inherently promote freedom, a belief that peaked during the Arab Spring (2011–2012), when social media was hailed as a tool of grassroots mobilisation. Media theorist Evgeny Morozov criticised this view as naïve, pointing out that authoritarian regimes could equally harness the same tools to manipulate citizens’ political beliefs. A stark example is the persecution of the Rohingya, a Muslim minority in Myanmar, in 2016–2017, when military officers and extremist groups created fake accounts and pages to spread disinformation labelling the Rohingya as “terrorists”. Facebook’s algorithm amplified this hateful content to politically active users, gradually normalising ethnic violence in public discourse (UN Human Rights Council, 2018). Institutions can thus exploit surveillance data, and the algorithms built upon it, to reinforce latent racial prejudice in service of a specific political agenda.

When discussing the implications of data harvesting, the Cambridge Analytica scandal is almost always revisited. In 2014, a seemingly innocuous Facebook personality-quiz app developed by Dr. Aleksandr Kogan collected personal information from about 270,000 quiz-takers, including their names, locations, friends lists, liked pages and private messages, as well as data about their friends; in total, data on as many as 87 million users was harvested. This data was used to build “psychographic profiles”, which classify users’ personalities and segment them into distinct voter categories. Kogan then shared the illegitimately obtained data with Cambridge Analytica and its affiliate SCL Group for political purposes, particularly to target political messages at people judged more susceptible to influence. Cambridge Analytica’s work spanned major political events such as the 2016 U.S. Presidential Election and possibly the UK’s Brexit referendum (European Parliament, 2018). In this way, data harvesting under the surveillance capitalism model can be used to manipulate voters’ behaviour, and ultimately the outcome of free, democratic elections, often described as the foundation of democracy itself.

Some argue that political beliefs and self-perception have always been shaped by forces beyond the individual; yet the immense reach and speed of digital media amplify these effects to an unprecedented degree. Scholars today argue for the urgent need to establish robust legal frameworks and return control of data to users, striking a balance between technological progress and individual autonomy. Rebuilding this relationship would allow us to re-centre the human right to privacy and reclaim our agency from the quiet digital determinism embedded in everyday life.
Sources:
European Parliament. (2018) European Parliament resolution of 25 October 2018 on the use of Facebook users’ data by Cambridge Analytica and the impact on data protection (2018/2834(RSP)) [Online]. Strasbourg: European Parliament. <https://www.europarl.europa.eu/doceo/document/TA-8-2018-0433_EN.pdf> (Accessed: 8 June 2025).
United Nations Human Rights Council. (2018) Report of the independent international fact-finding mission on Myanmar (A/HRC/39/64) [Online]. Geneva: UN Human Rights Council. <https://www.ohchr.org/sites/default/files/Documents/HRBodies/HRCouncil/FFM-Myanmar/A_HRC_39_64.pdf> (Accessed: 8 June 2025).
Yuan, M., Sun, Y., Wang, Y. and Yu, L. (2025) ‘Depressive symptoms and upward social comparisons during Instagram use: A vicious circle’, Development and Psychopathology. <https://www.sciencedirect.com/science/article/pii/S0191886923003811> (Accessed: 8 June 2025).