IB DP Digital Society 5.2 Governance and Human Rights: A Comprehensive Study Guide
- lukewatsonteach
Digital technologies are fundamentally reshaping governance structures, democratic participation, and human rights protection globally. In the IB DP Digital Society HL Extension (5.2 Governance and Human Rights), students explore this transformation through three interconnected domains:
Conflict, Peace and Security
Participation and Representation
Diversity and Discrimination
This study guide provides theoretical frameworks, contemporary case studies, and research directions that connect to Digital Society's core concepts, content areas, and contextual applications. Use it as a springboard for deeper investigation into how digital systems challenge traditional governance models and reshape human rights in the 21st century.
1. Conflict, Peace and Security
Key Terms
Algorithmic warfare · Lethal autonomous weapons systems (LAWS) · Cyber deterrence · Digital surveillance · Crisis mapping · Social media intelligence · Internet shutdowns · Digital resistance · AI-powered targeting · Deepfake warfare
Key Themes
Dual Nature of Digital Technology: ICTs enable conflict prevention through early warning systems and crisis mapping, while simultaneously creating new forms of warfare through autonomous weapons and cyber attacks.
Governance Challenges: Traditional international law struggles to address AI weapons, cyber warfare, and digital surveillance, creating regulatory gaps that affect civilian protection.
Democratic Tension: The balance between national security imperatives and citizens' digital rights becomes increasingly complex as surveillance capabilities expand.
Contemporary Theoretical Frameworks
Algorithmic Warfare Theory (Scharre, 2018): Military AI systems operate at speeds beyond human decision-making, fundamentally altering warfare's ethical and strategic dimensions.
Cyber Deterrence Framework (Libicki, 2009; Nye, 2017): Unlike nuclear deterrence, cyber deterrence faces attribution problems, proportionality challenges, and unclear escalation dynamics.
Digital Authoritarianism (Diamond, 2019): Authoritarian regimes use digital technologies for social control, creating new forms of oppression that transcend borders.
Critical Security Studies (Peoples & Vaughan-Williams, 2021): Security extends beyond state-centric military threats to include human security, environmental security, and digital rights.
Key Thinkers
Mary Kaldor (2012): "New Wars" theory demonstrates how globalisation and ICT create conflicts that blur distinctions between war, organised crime, and human rights violations.
Zeynep Tufekci (2017): Digital technologies create new protest capabilities but also new vulnerabilities, as seen in government surveillance and platform manipulation.
Paul Scharre (2018): Autonomous weapons represent a "Third Revolution in Warfare" after gunpowder and nuclear weapons, requiring new ethical frameworks and international regulation.
Shoshana Zuboff (2019): "Surveillance capitalism" extends beyond commercial exploitation to state surveillance, creating new forms of social control.
Cathy O'Neil (2016): "Weapons of Math Destruction" framework applies to military AI systems that perpetuate bias and lack accountability mechanisms.
Contemporary Case Studies (2020-2025)
Israel-Gaza War (2023-present):
AI-powered targeting systems like "Lavender" and "Gospel" automate target selection with minimal human oversight (Haaretz, 2024)
Social media warfare includes deepfake propaganda and coordinated disinformation campaigns
Platform content moderation becomes a battlefield for narrative control
Russia-Ukraine Conflict Evolution (2022-present):
Autonomous drone swarms demonstrate AI warfare capabilities (Defense One, 2024)
Starlink satellite networks become critical infrastructure and military targets
AI-generated disinformation campaigns target Western audiences through social media platforms
Myanmar Digital Coup (2021-present):
Military uses surveillance apps and internet shutdowns as weapons of oppression (Human Rights Watch, 2023)
Digital resistance networks use mesh networking and encrypted communications
International sanctions target digital infrastructure and surveillance technology exports
Iran Protests (2022-2023):
Government deploys facial recognition and social media monitoring for protester identification
Starlink and mesh networks enable communication during internet shutdowns
Platform policies on content moderation become human rights issues
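The mesh networks mentioned in the Myanmar and Iran cases route messages device-to-device rather than through centrally controlled infrastructure, which is why shutting down the ISP layer does not silence them. A minimal sketch of the underlying idea — naive message flooding over hypothetical nodes, not the actual protocol of any real app such as Briar or Bridgefy:

```python
from collections import deque

def flood(adjacency, origin):
    """Simulate naive flooding over a mesh: each node relays the
    message to every neighbour it has not yet reached."""
    reached = {origin}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for neighbour in adjacency.get(node, []):
            if neighbour not in reached:
                reached.add(neighbour)
                queue.append(neighbour)
    return reached

# A toy five-phone mesh with no central infrastructure: the message
# spreads hop by hop, with no single point a government can switch off.
mesh = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(sorted(flood(mesh, "A")))  # every phone receives the message
```

Because A reaches E only through intermediate relays, the network keeps working even when individual devices drop out — the resilience property the case studies describe.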
IB Connections
Concepts:
Change (2.1) – warfare's digital transformation;
Power (2.4) – algorithmic control systems;
Values & Ethics (2.7) – autonomous weapons accountability;
Space (2.5) – cyberspace as conflict domain
Content:
AI (3.6) – machine learning in weapons systems;
Data (3.1) – surveillance and targeting;
Robots (3.7) – autonomous weapons;
Networks (3.4) – cyber warfare infrastructure
Contexts:
Political (4.6C) – warfare regulation;
Global contexts – international law gaps;
Environmental (4.3) – critical infrastructure vulnerabilities
2. Participation and Representation
Key Terms
Digital democracy · Algorithmic amplification · Platform governance · Echo chambers · Filter bubbles · Computational propaganda · Astroturfing · Digital literacy · E-governance · Blockchain voting
Key Themes
Democratic Enhancement vs. Undermining: Digital platforms simultaneously enable grassroots organising and facilitate misinformation campaigns that undermine democratic processes.
Platform Power: Private technology companies increasingly control public discourse, raising questions about democratic accountability and content governance.
Digital Divides: Unequal access to digital technologies creates new forms of political exclusion, potentially undermining representative democracy.
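The amplification dynamics behind these themes have a simple mechanical core: if a feed ranks posts by predicted engagement, and predicted engagement is driven by a user's existing interests, the ranking narrows what the user sees. A toy illustration — the topic names and scores are invented, not any platform's actual algorithm:

```python
def rank_feed(posts, user_interests):
    """Rank posts by a crude engagement prediction: posts matching the
    user's existing interests score higher, so the feed self-reinforces."""
    def score(post):
        overlap = len(set(post["topics"]) & user_interests)
        return overlap * post["base_engagement"]
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topics": ["politics_left"], "base_engagement": 10},
    {"id": 2, "topics": ["politics_right"], "base_engagement": 10},
    {"id": 3, "topics": ["politics_left", "memes"], "base_engagement": 8},
]

# A user who has only engaged with one side sees that side amplified,
# while equally engaging content from the other side sinks to the bottom.
feed = rank_feed(posts, {"politics_left"})
print([p["id"] for p in feed])  # → [1, 3, 2]
```

Iterated over time — interests updated from what was just shown — this loop produces exactly the echo chambers and filter bubbles listed in the key terms above.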
Evolving Theoretical Frameworks
Platform Constitutionalism (Suzor, 2019): Private platforms function as quasi-governmental entities, requiring constitutional-like frameworks for content governance and user rights.
Algorithmic Governance (Katzenbach & Ulbricht, 2019): Algorithms increasingly mediate political participation, from news curation to voter targeting, reshaping democratic processes.
Digital Rights Framework (UN Special Rapporteur, 2021): Human rights principles must be actively protected in digital spaces, not assumed to transfer automatically from offline contexts.
Liquid Democracy Theory (Blum & Zuber, 2016): Digital technologies enable new forms of direct and delegated democracy that could supplement representative systems.
Key Thinkers
Cass Sunstein (2017): "#Republic" framework demonstrates how algorithmic filtering creates political polarisation and undermines democratic discourse.
Zeynep Tufekci (2017): "Twitter and Tear Gas" reveals both the power and fragility of digitally-mediated social movements.
Clay Shirky (2008): "Here Comes Everybody" framework shows how digital technologies reduce coordination costs for collective action.
danah boyd (2017): "Data and Society" research demonstrates how algorithmic manipulation exploits cognitive biases to influence political behaviour.
Kate Crawford (2021): "Atlas of AI" framework reveals how AI systems concentrate power and require democratic oversight.
Timnit Gebru (2021): AI ethics research highlights how algorithmic bias in content moderation affects marginalised communities' political participation.
Contemporary Case Studies (2020-2025)
Brazil's 2022 Presidential Election:
WhatsApp misinformation campaigns target specific demographic groups (Reuters, 2022)
Telegram channels coordinate voter suppression efforts in favelas
Supreme Court orders platform content removal, raising free speech concerns
EU Digital Services Act Implementation (2024):
Platform transparency requirements for algorithmic content curation
"Illegal content" definitions vary across member states, creating enforcement challenges
Impact on political advertising and election manipulation
India's Digital Democracy Experiments:
Blockchain-based voting pilots in local elections (Economic Times, 2023)
Aadhaar integration with political processes raises privacy concerns
Digital literacy programs target rural communities for electoral participation
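Blockchain voting pilots like the one cited above rest on hash chaining: each ballot record embeds the hash of its predecessor, so altering any stored ballot invalidates every later link. The sketch below shows only this tamper-evidence property — real systems also need distributed consensus, voter anonymity, and coercion resistance, which hashing alone does not provide:

```python
import hashlib
import json

def add_vote(chain, ballot):
    """Append a ballot record chained to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"ballot": ballot, "prev": prev_hash}, sort_keys=True)
    chain.append({
        "ballot": ballot,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

def verify(chain):
    """Recompute every hash; any edited ballot breaks the chain."""
    prev = "0" * 64
    for record in chain:
        payload = json.dumps({"ballot": record["ballot"], "prev": prev},
                             sort_keys=True)
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

chain = []
add_vote(chain, {"voter": "anon-1", "choice": "A"})
add_vote(chain, {"voter": "anon-2", "choice": "B"})
print(verify(chain))                # True: chain is intact
chain[0]["ballot"]["choice"] = "B"  # tamper with the first ballot
print(verify(chain))                # False: tampering is detected
```

Note what the sketch cannot do: it proves a ledger was altered, but says nothing about who cast each vote or whether they did so freely — the gaps critics of blockchain voting point to.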
Iran's Digital Resistance (2022-2023):
Mesh networking enables coordination during internet shutdowns
Platform policies on content removal become international relations issues
Digital documentation of human rights violations challenges state narratives
US Social Media and 2024 Election:
AI-generated content in political advertising
Platform policies on deepfakes and synthetic media
State-level legislation on political bot disclosure
IB Connections
Concepts:
Expression (2.2) – digital political speech;
Identity (2.3) – online political communities;
Power (2.4) – platform control over discourse;
Space (2.5) – digital public spheres
Content:
Media (3.5) – social platforms and democracy;
AI (3.6) – algorithmic content curation;
Networks (3.4) – communication infrastructure;
Data (3.1) – voter profiling and targeting
Contexts:
Political (4.6A/B) – electoral processes and governance;
Cultural (4.1D) – youth political movements;
Social (4.7) – digital community formation
3. Diversity and Discrimination
Key Terms
Algorithmic bias · Intersectional discrimination · Digital redlining · Facial recognition bias · Predictive policing · AI fairness · Digital inclusion · Assistive technology · Accessibility · Data colonialism
Key Themes
Systemic Bias Amplification: AI systems often perpetuate and amplify existing social inequalities rather than acting as the neutral technological tools they are presented as.
Digital Human Rights: Traditional human rights frameworks require adaptation for digital contexts, particularly regarding algorithmic decision-making and data privacy.
Global Digital Divides: Technology access and design often reflect and reinforce global inequalities based on geography, race, gender, and economic status.
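The bias-amplification theme has a mechanical core worth seeing directly: a model fitted to discriminatory historical decisions will faithfully reproduce them. The toy "model" below simply learns the majority outcome per group — the group names and data are hypothetical, and real models are far more complex, but the failure mode is the same:

```python
from collections import Counter, defaultdict

def train_majority_predictor(history):
    """'Learn' the most common historical outcome for each group —
    a stand-in for how models fit patterns present in training data."""
    outcomes = defaultdict(Counter)
    for group, outcome in history:
        outcomes[group][outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

# Historical decisions that were themselves discriminatory:
history = (
    [("group_a", "approve")] * 8 + [("group_a", "deny")] * 2
  + [("group_b", "approve")] * 3 + [("group_b", "deny")] * 7
)

model = train_majority_predictor(history)
print(model)  # {'group_a': 'approve', 'group_b': 'deny'}
```

Nothing in the code mentions race, gender, or class, yet past discrimination becomes an automated future rule — O'Neil's and Benjamin's core argument in miniature.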
Contemporary Theoretical Frameworks
Algorithmic Justice (Benjamin, 2019): "Race After Technology" framework demonstrates how seemingly neutral algorithms reproduce racial hierarchies through biased data and design choices.
Intersectional AI Ethics (Crenshaw, 1989; Noble, 2018): Multiple identity categories intersect in algorithmic systems, creating complex discrimination patterns that single-axis approaches cannot address.
Data Justice Framework (Global Data Justice, 2021): Examines how data collection, processing, and application perpetuate social inequalities and proposes community-centred alternatives.
Surveillance Capitalism and Marginalised Communities (Zuboff, 2019; Browne, 2015): Surveillance technologies disproportionately target and harm marginalised communities while extracting value from their data.
Key Thinkers
Ruha Benjamin (2019): "Race After Technology" demonstrates how algorithmic systems perpetuate racial inequality through the "New Jim Code."
Safiya Umoja Noble (2018): "Algorithms of Oppression" reveals how search engines and recommendation systems reflect and amplify societal biases.
Joy Buolamwini (2018): "Algorithmic Justice League" research exposes racial and gender bias in facial recognition systems.
Timnit Gebru & Emily Bender (2021): "Stochastic Parrots" framework critiques large language models' environmental costs and bias amplification.
Cathy O'Neil (2016): "Weapons of Math Destruction" framework demonstrates how algorithmic decision-making perpetuates inequality in criminal justice, employment, and education.
Simone Browne (2015): "Dark Matters" framework connects historical surveillance of Black bodies to contemporary digital surveillance systems.
Virginia Eubanks (2018): "Automating Inequality" shows how algorithmic systems in social services harm poor and working-class communities.
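Audits like Buolamwini and Gebru's "Gender Shades" quantify discrimination as a gap in per-group error rates. A simplified version of that audit metric, using hypothetical data chosen to be roughly in the range of the published disparities:

```python
from collections import defaultdict

def error_rates_by_group(results):
    """Per-group misclassification rate from (group, predicted, actual)
    triples — a simplified form of the 'Gender Shades' audit metric."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit of a face-classification system: 100 test images
# per group, with very different accuracy for each.
results = (
    [("lighter_male", "m", "m")] * 99 + [("lighter_male", "f", "m")] * 1
  + [("darker_female", "f", "f")] * 65 + [("darker_female", "m", "f")] * 35
)
print(error_rates_by_group(results))
# {'lighter_male': 0.01, 'darker_female': 0.35}
```

An overall accuracy figure would average these groups together and hide the disparity — which is why intersectional, per-group auditing is the method these researchers insist on.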
Contemporary Case Studies (2020-2025)
Generative AI Bias Studies (2023-2024):
ChatGPT perpetuates racial and gender stereotypes in content generation (Stanford HAI, 2023)
Image generation models demonstrate racial bias in professional representation
Language models show cultural bias in translation and content recommendation
Facial Recognition Controversies (2020-2024):
Amazon, IBM, and Microsoft suspend facial recognition sales to police following Black Lives Matter protests
EU considers facial recognition bans in public spaces
China's Uyghur surveillance system demonstrates ethnic targeting capabilities
India's Aadhaar System (2018-2024):
Digital identity requirements exclude marginalised communities from services
Biometric authentication failures disproportionately affect manual labourers
Supreme Court cases balance welfare efficiency against privacy rights
Platform Content Moderation and Marginalised Communities:
Instagram and TikTok algorithms suppress content from Black creators (2021-2023)
WhatsApp's India fact-checking partnerships face cultural bias criticism
YouTube's LGBTQ+ content demonetisation policies affect community organising
Climate Justice and Digital Rights:
Indigenous communities use digital mapping to document land rights violations
Environmental racism documented through air quality sensors and data analysis
Climate migration tracking raises privacy concerns for vulnerable populations
IB Connections
Concepts:
Identity (2.3) – intersectional digital identities;
Values & Ethics (2.7) – algorithmic fairness principles;
Power (2.4) – technological power concentration;
Systems (2.6) – interconnected discrimination systems
Content:
AI (3.6) – bias in machine learning;
Data (3.1) – discriminatory data practices;
Algorithms (3.2) – biased decision-making systems;
Media (3.5) – representation in digital content
Contexts:
Social (4.7) – race, gender, class intersections;
Health (4.4) – healthcare access disparities;
Economic (4.2) – employment discrimination;
Cultural (4.1) – digital preservation and representation
Contemporary Governance Frameworks (2020-2025)
International Initiatives
UNESCO AI Ethics Recommendation (2021): First global framework for AI ethics, emphasising human rights, inclusion, and environmental sustainability.
EU AI Act (2024): The world's first comprehensive AI regulation, establishing a risk-based approach to AI governance with extraterritorial implications.
UN Secretary-General's AI Advisory Body (2023): Develops recommendations for international AI governance, focusing on Global South perspectives.
Council of Europe AI Convention (2024): First international treaty on AI, emphasising human rights protection in AI development and deployment.
Emerging Research Areas
AI Governance: Algorithmic accountability, explainable AI, AI auditing methodologies, participatory AI development
Platform Regulation: Content moderation transparency, algorithmic amplification disclosure, platform worker rights, data portability
Digital Rights: Right to explanation, right to algorithmic contestation, collective data rights, environmental data justice
Bibliography
Foundational Texts
Aquinas, T. (1265–1274). Summa Theologica. Translated by the Fathers of the English Dominican Province. Christian Classics.
Augustine. (c. 426). The City of God. Translated by Henry Bettenson. London: Penguin Classics, 2003.
Arnstein, S. R. (1969). A ladder of citizen participation. Journal of the American Institute of Planners, 35(4), 216-224.
Butler, J. (1990). Gender Trouble: Feminism and the Subversion of Identity. New York: Routledge.
Castells, M. (1996). The Rise of the Network Society. Oxford: Blackwell.
Castells, M. (2009). Communication Power. Oxford: Oxford University Press.
Crenshaw, K. (1989). Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. University of Chicago Legal Forum, 1989(1), 139-167.
Habermas, J. (1962/1989). The Structural Transformation of the Public Sphere. Cambridge: Polity Press.
Haraway, D. (1985/1991). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In Simians, Cyborgs, and Women: The Reinvention of Nature (pp. 149-181). New York: Routledge.
Kaldor, M. (2012). New and Old Wars: Organized Violence in a Global Era (3rd ed.). Cambridge: Polity Press.
Lyon, D. (2003). Surveillance as Social Sorting: Privacy, Risk, and Digital Discrimination. London: Routledge.
McLuhan, M. (1964). Understanding Media: The Extensions of Man. New York: McGraw-Hill.
Nussbaum, M. C. (2011). Creating Capabilities: The Human Development Approach. Cambridge, MA: Harvard University Press.
Olson, M. (1965). The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press.
Rheingold, H. (2002). Smart Mobs: The Next Social Revolution. Cambridge, MA: Basic Books.
Sen, A. (1999). Development as Freedom. Oxford: Oxford University Press.
Shirky, C. (2008). Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin Press.
Sunstein, C. R. (2001). Republic.com. Princeton: Princeton University Press.
Sunstein, C. R. (2017). #Republic: Divided Democracy in the Age of Social Media. Princeton: Princeton University Press.
Contemporary Critical Texts
Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity Press.
Blum, C., & Zuber, C. I. (2016). Liquid democracy: Potentials, problems, and perspectives. Journal of Political Philosophy, 24(2), 162-182.
boyd, d. (2017). The manipulation of public opinion is accelerating. Data & Society Research Institute. https://datasociety.net/
Browne, S. (2015). Dark Matters: On the Surveillance of Blackness. Durham, NC: Duke University Press.
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press.
Diamond, L. (2019). The road to digital unfreedom: The threat of postmodern totalitarianism. Journal of Democracy, 30(1), 20-24.
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin's Press.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).
Katzenbach, C., & Ulbricht, L. (2019). Algorithmic governance. Internet Policy Review, 8(4). https://policyreview.info/concepts/algorithmic-governance
Libicki, M. C. (2009). Cyberdeterrence and Cyberwar. Santa Monica, CA: RAND Corporation.
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.
Nye, J. S. (2017). Deterrence and dissuasion in cyberspace. International Security, 41(3), 44-71.
O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.
Peoples, C., & Vaughan-Williams, N. (2021). Critical Security Studies: An Introduction (3rd ed.). London: Routledge.
Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. New York: W. W. Norton.
Suzor, N. (2019). Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge: Cambridge University Press.
Tufekci, Z. (2017). Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven: Yale University Press.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.
Recent Reports and Policy Documents
Council of Europe. (2024). Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. Opened for signature 5 September 2024. Council of Europe.
Defense One. (2024, January 15). Ukraine is pioneering the use of AI-powered drone swarms. Defense One. https://www.defenseone.com/
Global Data Justice. (2021). Data Justice Research Handbook. University of Edinburgh.
Haaretz. (2024, January 8). Inside Israel's AI military targeting system. Haaretz. https://www.haaretz.com/
Human Rights Watch. (2023). "No Such Thing as a Free Phone": How China's Surveillance Tech Spreads Globally. Human Rights Watch.
Project Ploughshares. (2024). Tracking human rights violations with no certain access to satellite data. The Ploughshares Monitor, Spring 2024.
UN General Assembly. (2024). Global Digital Compact. Adopted 22 September 2024 as annex to the Pact for the Future (A/RES/79/1). United Nations.
UN Office of the High Commissioner for Human Rights. (2024). Technical standards and human rights: A report on integrating human rights considerations into technical standard-setting processes. United Nations.
UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. UNESCO Publishing.
Recent Academic Research and News Coverage
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of Machine Learning Research (Vol. 81, pp. 77-91).
Economic Times. (2023, March 22). India pilots blockchain voting in local elections. The Economic Times. https://economictimes.indiatimes.com/
Reuters. (2022, October 28). WhatsApp misinformation surges ahead of Brazil election. Reuters. https://www.reuters.com/
Stanford HAI. (2023). Artificial Intelligence Index Report 2023. Stanford University Human-Centered AI Institute.
International Legal Documents
UN General Assembly. (1948). Universal Declaration of Human Rights (A/RES/217 A). United Nations.
UN General Assembly. (1966). International Covenant on Civil and Political Rights (A/RES/2200A). United Nations.
UN General Assembly. (1966). International Covenant on Economic, Social and Cultural Rights (A/RES/2200A). United Nations.
United Nations Development Programme. (1994). Human Development Report 1994: New Dimensions of Human Security. Oxford University Press.
