Data solidarity: why sharing is not always caring 


 The following article is authored by Barbara Prainsack.


  • Data governance lies at the core of global problems and can alleviate or exacerbate inequalities.
  • Data solidarity can help to distribute the risks and benefits of digital practices more equitably by facilitating data uses that create public value and by preventing and mitigating harm.
  • Policy makers should analyse the risks and benefits of different forms of data use, outlawing unacceptably risky forms, and should consider proposals for national legislation and rights to non-datafication, ensuring that exercising these rights does not lead to discrimination.

Data governance is a first world problem, some people say. Others do not spell it out, but they are thinking the same. The world has more pressing problems to deal with, they feel.


Unfortunately, they are wrong. Digital practices, and the data they produce, lie at the core of problems such as climate change, poverty and the rise of authoritarianism. And they can form part of the solution. They can help to alleviate inequalities, for example by giving more people access to education and information. At the same time, however, they exacerbate existing inequalities and create new ones. Often, the same digital practices do both.


Take telehealth, for example. As a way of consulting a health professional who is in a different physical location from the patient, telehealth received a boost during the COVID-19 pandemic. Now that regulatory adjustments have made the reimbursement of telehealth consultations easier, telehealth is hailed as a new, “safe” model for medical consultations in much wider contexts. In one sense, this is great – telehealth can go a long way towards improving healthcare services in remote areas, for example. But if virtual consultations become the norm, this deprives people of the human encounters that many depend on: face-to-face, “high-touch” interactions help to fight loneliness, and some of us cannot or do not want to use digital tools (e.g., Warr et al., 2021).


And there is one more point. When you speak to a health professional face-to-face, you can jointly decide what aspects of your encounter are recorded in data. You can decide to keep some things “off the record”. You can correct mistakes. When your medical consultation is digital, in contrast, all aspects of your health and illness are automatically datafied. Admittedly, there is something to be said for everyone’s data being included in the datasets that may later be used to develop software to detect and treat health problems. But when digital health becomes the default for the masses, and high-touch medicine involving direct, face-to-face encounters with health professionals is reserved for those who can pay for it (except in the case of urgent and serious health problems), then privacy becomes a luxury good (Elvy, 2017; see also Prainsack and Forgó, 2022). It is a macabre irony that policies seeking to improve services expose poor people to much more digital surveillance than others (e.g., O’Neil, 2016; Eubanks, 2018; Johnson, 2021).


To solve these problems, we need to think about data governance in new ways. We can no longer assume that asking people to consent to how their data is used is sufficient to prevent harm. In our example of telehealth, and in virtually all data-related scandals of the last decade, from Cambridge Analytica to Robodebt, informed consent did not, or could not, prevent the problem. We all regularly agree to data uses that we know are problematic – not because we do not care about privacy. We agree because this is the only way to get access to benefits, a mortgage, or teachers and health professionals. In a world where face-to-face assessments are unavailable or excessively expensive, opting out of digital practices is no longer an option (Prainsack, 2017, pp. 126-131; see also Oudshoorn, 2011).


Solidarity-based data governance (in short: data solidarity) can help us to distribute the risks and the benefits of digital practices more equitably. The details of the framework are spelled out in full elsewhere (Prainsack et al., 2022a, b). In short, data solidarity seeks to facilitate data uses that create significant public value, and at the same time prevent and mitigate harm (McMahon et al., 2020). One important step towards both goals is to stop ascribing risks to data types, and to distinguish between different types of data use instead. In some situations, harm can be prevented by making sure that data is not used for harmful purposes, such as online tracking. In other contexts, however, harm prevention can require that we do not collect the data in the first place. Not recording something, making it invisible and uncountable to others, can be the most responsible way to act in some contexts.


This means that recording and sharing data should not become a default. More data is not always better. Instead, policymakers need to consider carefully – in a dialogue with the people and communities that have a stake in it – what should be recorded, where it will be stored and who governs the data once it has been collected – if at all (see also Kukutai and Taylor, 2016).


In concrete terms, these steps should be taken: [1]


  1. Regulation should move from ascribing risks to data types towards focusing on different types of data use, regulating each according to the benefits and harms it is likely to yield (Prainsack and El-Sayed, 2022).
  2. Data use that poses unacceptable risks to individuals or communities should be outlawed, with fines high enough to deter even large corporations. Reliance on the self-regulation of the corporate sector, or on the assumption that ethics alone can solve the problem without the help of black-letter law, is a toothless approach in the current political economy.
  3. Initiatives are currently underway to codify new rights for people in the digital era, such as the right to internet access or the right to be offline (e.g., Custers, 2022). Policy makers should consider which of these proposals could and should meaningfully be implemented via national legislation.
  4. Policy makers should also consider a right to non-datafication: for individuals, pertaining to aspects of their body and behaviour, and as a collective right of communities, for commonly held resources such as the environment or cultural practices. Individuals and communities exercising this right must not suffer discrimination as a result. One way to ensure this would be to issue data non-discrimination legislation analogous to genetic non-discrimination legislation (e.g., Joly et al., 2020; Prainsack and Van Hoyweghen, 2020).






Custers, B. (2022) New Digital Rights: Imagining Additional Fundamental Rights for the Digital Era. Computer Law & Security Review 44: 105636.


Elvy, S.A. (2017) Paying for privacy and the personal data economy. Columbia Law Review 117/6: 1369-1454.


Eubanks, V. (2018) Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin's Press.


Johnson, P.A. (2021) Pandemic-Driven Technology Adoption: Public Decision Makers Need to Tread Cautiously. International Journal of E-Planning Research 10/2: 59-65.


Joly, Y., Dupras, C., Pinkesz, M., Tovino, S., and Rothstein, M. (2020) Looking Beyond GINA: Policy Approaches to Address Genetic Discrimination. Annual Review of Genomics and Human Genetics 21/1: 491–507.


Kukutai, T., and Taylor, J. (2016) Indigenous data sovereignty: Toward an agenda. Canberra: ANU Press.


McMahon, A., Buyx, A., and Prainsack, B. (2020) Big data governance needs more collective responsibility: The role of harm mitigation in the governance of data use in medicine and beyond. Medical Law Review 28/1: 155-182.


O’Neil, C. (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishers.


Oudshoorn, N. (2011) Telecare Technologies and the Transformation of Healthcare. New York: Palgrave Macmillan.


Prainsack B., and Van Hoyweghen, I. (2020) Shifting Solidarities: Personalisation in Insurance and Medicine. In: Van Hoyweghen, I., Pulignano, V., and Meyers, G., eds. Shifting Solidarities: Trends and Developments in European Societies. Cham: Springer International Publishing: 127–51.


Prainsack, B. (2017) Personalized Medicine: Empowered Patients in the 21st Century? New York City: New York University Press.


Prainsack, B. and El-Sayed, S. (2022). Success of the European Health Data Space hinges on operationalizing public value, in addition to bridging digital divides. BMJ Rapid Response (8 July). Available at: (Accessed: January 1, 2023).


Prainsack, B. and Forgó, N. (2022). Why paying individual people for their data is a bad idea. Nature Medicine 28: 1989-1991.


Prainsack, B., El-Sayed, S., Forgó, N., Szoszkiewicz, Ł., and Baumer, P. (2022a) Data solidarity: a blueprint for governing health futures. The Lancet Digital Health 4/11: E773-E774.


Prainsack, B., El-Sayed, S., Forgó, N., Szoszkiewicz, Ł., and Baumer, P. (2022b) Data solidarity – A White Paper. Geneva: The Lancet & Financial Times Commission on Governing Health Futures. Available at: (Accessed: January 1, 2023).


Warr, D., Luscombe, G., and Couch, D. (2021) Hype, evidence gaps and digital divides: Telehealth blind spots in rural Australia. Health (online first: doi:10.1177/13634593211060763).


[1] The full White Paper on Data solidarity, including all Recommendations, can be found here:




Barbara Prainsack is a professor at the Department of Political Science at the University of Vienna, where she also directs the Centre for the Study of Contemporary Solidarity (CeSCoS) and the interdisciplinary Research Platform Governance of Digital Practices. Her work explores the social, ethical, and regulatory dimensions of genetic and data-driven practices and technologies. Barbara is also a member of the Austrian National Bioethics Commission and serves as Chair of the European Group on Ethics in Science and New Technologies, which advises the European Commission.


The facts, ideas and opinions expressed in this piece are those of the authors; they are not necessarily those of UNESCO or any of its partners and stakeholders and do not commit nor imply any responsibility thereof. The designations employed and the presentation of material throughout this piece do not imply the expression of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries.