Who decides which content to include in curricula? Where does this content come from, and who was involved in producing the respective type of knowledge? These questions point to biases in knowledge production as well as dissemination, which can favor one perspective over another (for example, in history) or neglect topics of global relevance (such as the climate crisis in geography).
A few authors have pointed out that biases in curricula do exist and need to be addressed. More often than not, knowledge is treated as objective and rather rigid or fixed, external to political processes and thus unaffected by (biased) decision-making. However, some authors take a critical stance on content and school curricula by inquiring into the social purpose of education at large, and by asking whom this kind of knowledge structure serves.
Especially with the advent of digital technologies in education, some articles highlight that the issue of bias in teaching and learning becomes even more pressing, as digitization relies heavily on algorithms which, in turn, learn from the data they are provided. Imbalances in the data between groups of students might be taken up and reproduced. These authors therefore point to the danger of discriminatory mechanisms and (unintended) practices in the development and use of digital learning tools and curricula.
Excerpts from the literature
“Critical conceptualizations of scientific literacy reject notions of value-neutral knowledge production and claims of objectivity based on normative frameworks. Instead, these approaches view scientific literacy as comprised of complex interrelationships between the natural and physical world, humans, and society. This conceptualization of scientific literacy is particularly relevant to education for sustainability, as sustainability decisions are characterized by complexity, uncertainty, and ambiguity. What constitutes knowledge and learning within this arena is constantly in flux, and traditional scientific and other forms of situated knowledge (e.g., local, traditional ecological, relational) are particularly relevant considerations. Stetsenko (2015, 2017) calls for the development of a “transformative activist stance” that results in deliberate and goal-directed action and a commitment to social transformation.”
“[The] agentive conceptualizations of scientific literacy require that literate thinkers have experiences that build the knowledge, skills, and competencies to engage in sociopolitical action that moves beyond traditional ways of meeting a challenge. Through this view, teacher educators, PSTs, and P-12 students can begin to understand the drivers of their own behaviors, participate persuasively in advocacy efforts, and ultimately become involved in transdisciplinary arrangements to address challenges characterized by complexity, uncertainty, and ambiguity.”
- These quotes are extracted from a report titled “Looking inward, outward and forward: exploring the process of transformative learning in teacher education for a sustainable future,” written by Weinberg, Trott, Wakefield, Merritt, and Archambault and published in 2020. This study explores the process of transformative learning among 67 preservice elementary teachers (PSTs) enrolled in a sustainability science course by analyzing six assignments submitted throughout the semester.
“Dixon-Roman, Nichols and Nyame-Mensah (2020), in this issue, draw critical attention to the ‘racializing forces’ of commercial AI in education, through a detailed case study of one commercial AIed application. They argue that AI applications in education may ‘inherit socio-political relations’ and reproduce racializing processes and educational inequities through the ‘cloak of algorithmic objectivity’. Their argument reflects the ways that AI and data systems are implicated in race-based profiling and ‘discriminatory designs’ that reinforce and normalize racial hierarchies and may enforce ‘racial correction’ (Benjamin 2016, 148). Critical EdTech research, for example, has highlighted the role of ‘digital redlining’ in excluding certain groups from access to knowledge and information based on gender, class and race: Digital redlining arises out of policies that regulate and track students’ engagement with information technology. … Digital redlining is not a renaming of the digital divide. It is a different thing, a set of education policies, investment decisions, and IT practices that actively create and maintain class boundaries through structures that discriminate against specific groups. … Digital redlining takes place when policymakers and designers make conscious choices about what kinds of tech and education are ‘good enough’ for certain populations.”
- This quote is extracted from a report titled “Historical threads, missing links, and future directions in AI in education,” written by Williamson and published in 2020. Such research draws urgent attention to the ways that the architectures of technologies are involved in new forms of exclusion and discrimination.
“In such a volatile, uncertain, complex, and ambiguous world, the question of what schools ought to be teaching our young people to navigate and thrive in such a century has been written about and discussed for at least the last 20 years. But the questions about who gets to decide what is learned, the kinds of conditions that enable this kind of learning, and how we can build together the kind of education culture and infrastructure that will be responsive to continually changing needs have not received as much attention.”
- This quote is extracted from an essay titled “Learning to Shape the Future: Reflections on 21st Century Learning,” written by Chung, which is part of the book “Future Frontiers: Education for an AI world,” published in 2017.