Diversity in technology design – from economic strategy to social justice imperative


The following article is authored by Ruby Pappoe and Laura Schelenz.

 


  • Heightened awareness of biases in data and technology has placed major companies under scrutiny as questions about diversity and its value are raised.

  • Improve the technological inclusion of underrepresented groups through user co-design, better data collection, and critical data analysis.

  • Reframe discourses around diversity in technology so that they centre on social justice rather than economic strategy.
     


Diversity is a hot topic in the technology industry. Given heightened awareness that both the data underlying technologies and the design of those technologies can be biased, companies like Google, Microsoft and Salesforce have established programmes aimed at countering discrimination (Chi et al., 2021; Zou and Schiebinger, 2018). Google has launched 'Responsible AI practices', a set of principles for the ethical design of Artificial Intelligence.[1] The company also laid out 'commitments to racial equity' in 2020, pledging investment in programmes that support Black employees and causes for racial justice.[2] These efforts come as responsibility for inclusive design is increasingly shifted from leadership to individual designers, and as technology companies face criticism that their diversity measures are inadequate (Chi et al., 2021; Wong, 2020). This raises several questions: What does diversity mean in the context of technology development? What is its value, according to companies and designers? And what should its value be from a perspective of social and global justice?

 

Who is the user, really? – stereotypes and rigid classification

 

Imagine an app that recommends products for the somewhat romantic, highly commercialised celebration of Valentine's Day. The user reads a message reminding them of the anticipated event: 'Shop Valentine's Day gifts for him!' The user, being a lesbian woman, is offended by this message and stops using the app. This anecdote is recounted in Sara Wachter-Boettcher's 2017 book 'Technically Wrong', in which she criticises the design of 'toxic tech'. She argues that designers lack a clear idea of who the user is and what their needs are. Attention to the diversity of users has increased in recent years, with products being 'diversified' to attract different user groups (Himmelsbach et al., 2019). Yet the adoption of diversity concepts in technology design is not without pitfalls.

 

In a study analysing diversity concepts in computer science, Schelenz (2022) found that concepts like race, gender, (dis)ability, culture, personality and education are conceptualised in a binary, superficial and uncritical way. For example, gender is operationalised as 'male' and 'female', which ignores the diversity of gender identities and harms queer and transgender users (Keyes, 2019). Another concern is that binary gender classifications obscure diverse experiences within groups: Black women have structurally different experiences from White women because their experiences of gender-based discrimination are compounded by racism (Crenshaw, 1989). The data underlying AI-based technology strongly shapes whether that technology can make fair decisions. Compiling datasets from binary, superficial categories without considering the complexity of lived experiences can thus lead to injustice.
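
 

To make concrete how single-axis categories can mask within-group disparities, the following minimal Python sketch compares error rates reported along a binary gender axis with an intersectional breakdown. The records, field names and numbers are entirely hypothetical illustrations, not data from any of the cited studies.

```python
# Minimal sketch (hypothetical data) of why single-axis group statistics can hide
# intersectional disparities, in the spirit of Crenshaw (1989).
from collections import defaultdict

# Each record: (gender, race, model_prediction_was_wrong) -- illustrative only.
records = [
    ("woman", "white", False), ("woman", "white", False),
    ("woman", "white", False), ("woman", "white", True),
    ("woman", "black", True),  ("woman", "black", True),
    ("woman", "black", False), ("woman", "black", True),
    ("man",   "white", False), ("man",   "white", False),
    ("man",   "black", False), ("man",   "black", True),
]

def error_rates(records, key):
    """Share of wrong predictions per group, where `key` defines the grouping."""
    wrong, total = defaultdict(int), defaultdict(int)
    for gender, race, is_wrong in records:
        group = key(gender, race)
        total[group] += 1
        wrong[group] += is_wrong
    return {group: wrong[group] / total[group] for group in total}

# Single-axis view: 'women' appear as one homogeneous group.
print(error_rates(records, key=lambda g, r: g))        # {'woman': 0.5, 'man': 0.25}

# Intersectional view: the burden falls disproportionately on Black women.
print(error_rates(records, key=lambda g, r: (g, r)))   # {('woman', 'black'): 0.75, ...}
```

In this toy example, the aggregate figure for 'women' hides the fact that most errors fall on Black women, which is precisely the kind of compounded experience an intersectional analysis surfaces.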

 

Diversity as economic strategy

 

Diversity efforts in technology design often lack a social justice-oriented motivation and primarily serve to expand the user (read: consumer) base for economic reasons (Schelenz, 2022, p. 16). If a technology works for more people, and thus achieves higher adoption rates, the profits of technology companies rise. In this vein, diversity is often linked to user satisfaction in computer science discourses (Eskandanian and Mobasher, 2020). From a perspective of social justice, technology design should instead be guided by a concern for ethical and social issues such as supporting well-being and reducing inequalities. This requires working in real partnership with local communities and marginalised groups to understand and tackle injustice. Collecting data from a non-commercial angle also means resisting the urge to always collect more data. Sometimes it is in the best interest of communities to protect the privacy of their members, or to pursue qualitative data collection that represents stories, which can offer richer accounts of community life than a sterile dataset.

 

The flaws of universal design

 

Language is another important issue. At the global level, designers need to approach their work more holistically and include African languages, so that designs adequately reflect local users' experiences and ways of knowing. While Google Translate, Microsoft Office and Chinese companies targeting Africa offer some African languages, this support is restricted to the languages of a few dominant ethnic groups (Aludhilu and Bidwell, 2018). In fact, many web interfaces and applications exclude African languages altogether. In recent years, economic growth on the continent has increased demand for translating content for the African market, and more translation tools have been developed to capture African languages. However, inadequate data and a lack of native-speaker oversight mean that these tools often fail to grasp the nuances of these languages, leading to technical failures and misinformation (O'Brien and Asadu, 2021). Moreover, designers usually build from a universalised system that inhibits language diversity by eliminating or reducing user-specific and contextual linguistic features.

 

Discussing his involvement in a Wikipedia Diarrhoea Project (WDP) focused on translating and localising health information about diarrhoea into Ewe (a West African language), Agbozo (2023) finds that although the Pootle localisation software used for the project was built for use in Africa, it lacked support for the unique characters and tones particular to African languages. In the case of Ewe, the software rejected and excluded characters and tones that are important to meaning-making in the language. As Agbozo notes, this issue arises when software is designed only from the designers' own experience, on the assumption that it will be able to translate and localise all languages (p. 17). This universal approach erases the situated linguistic experiences of local users and ultimately inhibits language diversity in technology production. It also leads designers to privilege standard English over other forms of meaning-making, even when addressing a multilingual or global audience. In addition, despite the many varieties of English spoken across the world, such as African American Vernacular English and African Englishes, some translation and speech recognition technologies do not recognise accented English (Koenecke et al., 2020; Amugongo, 2018).
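
 

The character-handling failure Agbozo describes can be illustrated with a minimal Python sketch. This is not the Pootle code itself; the example word and the two toy text pipelines are assumptions made purely for illustration.

```python
# Minimal sketch (not the actual Pootle code) of an ASCII-centric pipeline
# discarding Ewe characters, versus a Unicode-aware pipeline preserving them.
import unicodedata

def ascii_only_filter(text: str) -> str:
    """Naive pipeline: silently drops every character outside ASCII."""
    return text.encode("ascii", errors="ignore").decode("ascii")

def unicode_aware_normalise(text: str) -> str:
    """Unicode-aware pipeline: normalises the text but keeps all letters and accents."""
    return unicodedata.normalize("NFC", text)

word = "Èʋegbe"  # the Ewe name of the Ewe language, with the letter ʋ and a tonal accent

print(ascii_only_filter(word))        # -> "egbe"   (ʋ and È are lost, the form changes)
print(unicode_aware_normalise(word))  # -> "Èʋegbe" (characters and accent preserved)
```

The ASCII-centric pipeline silently discards the letter ʋ and the tonal accent, altering the word's form and meaning, whereas the Unicode-aware pipeline keeps them intact.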

 

Recommendations: towards a social justice-oriented diversity

 

These research findings suggest that we need to diversify the design of technology to include the contextual experiences of users, that is, their particular lived experiences and cultural orientations. This requires investing in more and better data collection and analysis about underrepresented groups. That said, in creating datasets and working with data, we should be aware of the history of and power relations in classification processes, and involve native speakers in the data review process. Questioning datasets and adopting a critical perspective such as data feminism (D'Ignazio and Klein, 2020) can help refocus the intention of data scientists to 'do good' with data. Centring the contextual experiences of users is key to diversity-aware design, as is involving users as co-designers throughout the design process. Further, we need more critical pedagogies in technology education to help overcome Western-centric paradigms in technology, starting in the classroom. Intersectional analysis, intercultural technology ethics and critical theory can help rethink (global) relationships and address power dynamics between those who design technology and those who are framed as its beneficiaries. Most importantly, discourses around diversity in technology development should centre social justice rather than treat diversification as an economic strategy to expand consumer markets.

 


 

References

 

Agbozo, G.E. (2023) ‘Software-Mediated Diarrhea Localization: Reflections from a Transnational Locus’, Technical Communication and Social Justice, 1(1), pp. 8–23. 

 

Aludhilu, H.N. and Bidwell, N.J. (2018) ‘Home is not Egumbo: Language, Identity and Web Design’, Second African Conference for Human Computer Interaction: Thriving Communities. Edited by A. Peters et al. Windhoek, Namibia, 3 December. New York, NY: ACM, pp. 1-11.

 

Amugongo, L.M. (2018) ‘Understanding what Africans say’, 2018 CHI Conference on Human Factors in Computing Systems. Edited by D. Mulligan et al. Montreal, Canada, 21 – 26 April. New York, NY: ACM, pp. 1-6.

 

O'Brien, M. and Asadu, C. (2021) In Africa, rescuing the languages that Western tech ignores, AP News. Available at: https://apnews.com/article/coronavirus-pandemic-technology-science-business-health-24d3789e1a87b212d61e80f0fe2e89b1 (Accessed: 08 September 2023).

 

Chi, N., Lurie, E. and Mulligan, D.K. (2021) ‘Reconfiguring Diversity and Inclusion for AI Ethics’, Conference on AI, Ethics, and Society. Edited by D. Mulligan et al. Virtual event, USA, 19-21 May. New York, NY: ACM, pp. 447–457.

 

Crenshaw, K. (1989) ‘Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics’, University of Chicago Legal Forum, 1989(1), pp. 139–167.
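
 

D’Ignazio, C. and Klein, L.F. (2020) Data Feminism. Cambridge, MA: MIT Press.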

 

Eskandanian, F. and Mobasher, B. (2020) ‘Using Stable Matching to Optimize the Balance between Accuracy and Diversity in Recommendation’. 28th ACM Conference on User Modeling, Adaptation and Personalization. Edited by C. Gena et al., Genoa, Italy, 14 - 17 July, New York, NY: ACM, pp. 71–79.

 

Himmelsbach, J. et al. (2019) ‘Do We Care About Diversity in Human Computer Interaction: A Comprehensive Content Analysis on Diversity Dimensions in Research’. 2019 CHI Conference on Human Factors in Computing Systems. Edited by S. Brewster et al., Glasgow, Scotland, 4-9 May, New York, NY: ACM, pp. 1–16.

 

Keyes, O. (2019) Counting the countless, Real Life Magazine. Available at: https://reallifemag.com/counting-the-countless/ (Accessed: 08 September 2023).

 

Koenecke, A. et al. (2020) ‘Racial disparities in automated speech recognition’, Proceedings of the National Academy of Sciences, 117(14), pp. 7684–7689. doi:10.1073/pnas.1915768117. 

 

Schelenz, L. (2022) ‘Diversity Concepts in Computer Science and Technology Development: A Critique’, Science, Technology, & Human Values, pp. 1-26. doi:10.1177/01622439221122549.

 

Susser, D., Roessler, B. and Nissenbaum, H.F. (2018) ‘Online Manipulation: Hidden Influences in a Digital World’, Georgetown Law Technology Review, 4. doi:10.2139/ssrn.3306006.

 

Wachter-Boettcher, S. (2017) Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech. New York, NY: W.W. Norton and Company.

 

Wong, J.C. (2020) More than 1,200 Google Workers Condemn Firing of AI Scientist Timnit Gebru, The Guardian. Available at: https://www.theguardian.com/technology/2020/dec/04/timnit-gebru-google-ai-fired-diversity-ethics (Accessed: 08 September 2023).

 

Zou, J. and Schiebinger, L. (2018) ‘AI can be sexist and racist — it’s time to make it fair’, Nature, 559(7714), pp. 324–326. doi:10.1038/d41586-018-05707-8.

 


 

Notes: 

[1]  See the Google principles here: https://ai.google/responsibilities/responsible-ai-practices/

 

[2]  The tool is called “Playing with AI Fairness”: https://pair-code.github.io/what-if-tool/ai-fairness.html

 

….

 

Ruby Pappoe is a Teaching Assistant Professor in Technical Writing at the University of North Carolina. Her research focuses on Cultural Rhetorics, Visual/Digital Rhetorics, and Technical Communication.

 

Laura Schelenz is a researcher at the University of Tübingen and an ethics researcher for the EU project "WeNet – The Internet of Us" as well as the German project "Digilog@BW". Her research deals with ethical and feminist perspectives on technology development.

 

The facts, ideas and opinions expressed in this piece are those of the authors; they are not necessarily those of UNESCO or any of its partners and stakeholders and do not commit nor imply any responsibility thereof. The designations employed and the presentation of material throughout this piece do not imply the expression of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries.
