
IB DP Digital Society HL: 5.2 Governance and Human Rights (STAGE TWO): Explore and investigate challenges

  • Writer: lukewatsonteach
  • Sep 16

Updated: Oct 29

5.2 Governance and Human Rights | STAGE TWO: Explore and investigate challenges

Students explore sources and investigate their extended inquiry focus by considering some of the following questions.

  • What is the relationship between digital systems and this challenge?

  • What is the nature and scope of this challenge in digital society?

  • What course concepts, content and contexts will be most helpful to consider with this challenge?

  • How does this challenge manifest itself at local and global levels?

  • Who are the specific people and communities affected by this challenge?

  • What are some impacts and implications related to this challenge?

Governance and Human Rights Challenges: Key Statistics and Insights for IB DP Digital Society


Key Findings:

  • Escalating Global Conflicts: There are 56 active conflicts in the world today, the most since the Second World War (Global Peace Index); a separate UN count reached 61 active conflicts in 2024, the highest since 1946; and 97 countries are experiencing declining peacefulness (International Peace Bureau; UN Security Council).

  • Cybersecurity Crisis: Cyberthreats are estimated to cost $24 trillion by 2027, and more than 100 cyber incidents with the potential to undermine international peace and security were identified in the past year (United Nations).

  • Democratic Backsliding: Democracy continued its decline in 2023, with four in nine countries worse off than in 2018; there are currently 91 democracies and 88 autocracies worldwide (International IDEA; Capacity4dev).

  • Massive Displacement: 123.2 million people were forcibly displaced at the end of 2024, after the total passed an unprecedented 120 million in May 2024 (United Nations; UN Statistics Division).

  • Digital Discrimination: Digital technology represents a new source of discrimination: women hold only 26.7% of tech jobs, and algorithmic bias is digitising and amplifying sexism (UN News; Enterprise Apps Today).

  • Workplace Discrimination Surge: The EEOC received 88,531 new discrimination charges in 2024 (a 9% increase), and 40% of Black workers report workplace discrimination (EEOC; Walker Law).



Digital Society 5.2 Governance and Human Rights - Conflict, Peace and Security

Global Conflict Escalation:

  • There are 56 active conflicts in the world today, which is the most since the second World War (International Peace Bureau - Global Peace Index 2024)

  • Global conflicts reached a record high of 61 active conflicts in 2024, "the most since 1946" (UN Security Council - With Conflicts at Highest Since 1946)

  • 97 countries experienced a decline in peacefulness in 2024, a record for any single year since the index began (International Peace Bureau - Global Peace Index 2024)


Civilian Casualties and Violence:

  • In 2023, civilian casualties in armed conflicts surged by a staggering 72 per cent, the steepest rise since 2015 (UN Statistics Division - SDG Goals Report 2024)

  • By 2023, civilian casualties had soared to over 33,400, nearing the 2015 peak (UN Statistics Division - SDG Goals Report 2024)

  • Palestine was the most dangerous and violent place in the world in 2024: 81% of Palestine's population was exposed to conflict, and ACLED recorded 35,000 fatalities in the preceding 12 months (ACLED - Conflict Index 2024)


Digital Warfare and Cybersecurity Threats:

  • The 2024 Global Risks Report underscores cyberthreats as one of the most serious challenges of our time, estimating potential cybercrime costs of $24 trillion by 2027 (UN Security Council - Digital Breakthroughs Must Serve Betterment of People)

  • Over the past year, more than a hundred cyber incidents with the potential to undermine international peace and security were identified (UN - A New Era of Conflict and Violence)

  • Recent levels of violence have been unprecedentedly high, with several 'record-breaking' months in the past year; going into 2025, conflict event rates are expected to grow by 15% (ACLED - Conflict Index 2024)


Forced Displacement:

  • At the end of 2024, 123.2 million people worldwide were forcibly displaced as a result of persecution, conflict, violence, human rights violations and events seriously disturbing public order (UNHCR - Global Trends 2024)

  • The number of forcibly displaced people reached an unprecedented 120 million in May 2024 (UN Statistics Division - SDG Goals Report 2024)


Gaza-Israel Conflict: Digital Warfare and Surveillance

The Israeli military uses AI-powered digital tools, including the "Lavender" system, which applies machine learning to assign numerical scores to Gaza residents based on their suspected likelihood of being militants, and evacuation monitoring systems that track civilian movements across the 620 blocks dividing Gaza. The conflict has also seen extensive "hacktivism": groups like AnonGhost compromised Israel's Alert App, while pro-Israeli hackers targeted Palestinian banks and infrastructure. Cryptocurrency plays an increasing role as well, with Hamas reportedly receiving $135 million between 2021 and 2023. Israel controls Gaza's internet infrastructure and has imposed multiple telecommunications blackouts lasting up to 34 hours, with connectivity dropping to 1% by November 2023, preventing ambulances from reaching injured civilians and cutting off critical communications.


The digital dimension of this conflict demonstrates how technology has become weaponised on multiple fronts. Israel's Unit 8200 cyber warfare unit conducts extensive surveillance of Palestinian society through intercepting communications for political persecution, while the Israeli military allegedly used spyware like Pegasus to hack Palestinian human rights defenders' phones. The consequences of cyber conflict are primarily felt by civilians rather than combatants, with denial-of-service attacks targeting banks, technology companies, and news sites, while both sides engage in disinformation campaigns. This raises critical questions about digital rights during armed conflict, the ethics of AI in warfare, and whether internet access constitutes a basic human necessity requiring protection under international humanitarian law.


Ukraine-Russia Conflict: Cyber Operations and Information Warfare

Russia launched destructive data-wiping cyberattacks one day before the February 2022 invasion, targeting Ukrainian government, energy, and financial organisations; over 2,194 attacks were reported by 2023, a three-fold increase from pre-war levels. The invasion was accompanied by a cyberattack on Viasat satellite communications that disrupted tens of thousands of users in Ukraine and across Europe, and attacks have since impacted 25 sectors in over 50 countries. Russia has conducted over 200 successful cyberattacks on Ukrainian media outlets to spread propaganda, while the "Doppelgänger" operation created hundreds of thousands of fake social media profiles and cloned major news sites for disinformation campaigns.


However, contrary to predictions of "cyber Pearl Harbour," the strategic impact of cyber operations has been limited. Microsoft and other tech companies helped Ukraine withstand attacks by dispersing digital infrastructure into public clouds, while private sector innovation and international support have made cyberspace surprisingly "defence-dominant". Russian cyber strategy focuses more on information warfare and disinformation than kinetic cyber attacks - following their doctrine that "information destroys nations, not networks". The conflict has blurred lines between combatants and civilians in cyberspace, with patriotic hackers and private companies playing unprecedented roles, prompting the International Criminal Court to investigate cybercrimes for the first time. This conflict serves as a real-world laboratory for understanding how international humanitarian law applies to cyber warfare and the evolving nature of hybrid warfare in the digital age.


United Nations Peace-Keeping

The United Nations peacekeeping operations represent one of the organisation's most visible efforts to maintain international peace and security. UN peacekeepers—often called "Blue Helmets"—are deployed to conflict zones around the world to help countries transition from war to peace, protect civilians, support political processes, and assist in disarmament and demobilisation efforts. These missions operate under mandates from the UN Security Council and typically involve military personnel, police officers, and civilian staff from member states working together in challenging environments from South Sudan to Cyprus to Lebanon.


Digital technology has become increasingly vital to modern peacekeeping operations. UN missions now use satellite imagery and geospatial analysis to monitor cease-fire violations, track armed group movements, and plan protection strategies for vulnerable populations. Drones and surveillance technology help peacekeepers assess security situations in remote or dangerous areas without putting personnel at risk. Additionally, digital communications tools enable better coordination among multinational forces, while data analytics help missions identify patterns of violence and allocate resources more effectively. The UN has also been exploring how artificial intelligence and machine learning could enhance early warning systems for conflicts and improve the protection of civilians in volatile regions.


NATO

The North Atlantic Treaty Organisation (NATO) is a military alliance founded in 1949, currently comprising 32 member countries primarily from Europe and North America. Unlike the UN, which focuses on global peacekeeping with consent from all parties, NATO is a collective defence alliance based on the principle that an attack against one member is an attack against all—enshrined in Article 5 of the NATO treaty. The organisation's primary purpose is to safeguard the freedom and security of its members through political and military means. NATO provides a framework for military cooperation, joint training exercises, and coordinated defence planning among allies, while also serving as a platform for political dialogue on security issues.


In terms of peacekeeping and security operations, NATO has evolved significantly since the Cold War. While the UN typically conducts peacekeeping with the consent of conflicting parties, NATO often engages in more robust crisis management and stabilisation operations. The alliance has conducted major missions in the Balkans (Bosnia and Kosovo in the 1990s), Afghanistan (from 2001-2021 under its first-ever Article 5 invocation following 9/11), and Libya (2011). NATO also focuses on cyber defence, counter-terrorism, and deterring potential aggression—particularly along its eastern borders. The alliance emphasises interoperability among member forces, maintains rapid response capabilities, and increasingly addresses emerging security challenges like hybrid warfare and threats in space and cyberspace.


ASEAN

ASEAN, as a dynamic regional bloc, has significant influence across the three key areas outlined in the digital society curriculum: conflict, peace and security; participation and representation; and diversity and discrimination. ASEAN has invested in peacebuilding within digital spaces, working through initiatives like the ASEAN Institute for Peace & Reconciliation and cyber peacebuilding conferences, where enhanced cybersecurity and cross-border intelligence-sharing are used to address emerging threats, from cybercrime to misinformation to humanitarian crises. These frameworks create early warning systems and regional mechanisms that foster conflict prevention and promote enduring peace and security in Southeast Asia's increasingly digital landscape.


Unmanned Aerial Vehicles (UAVs) & Uncrewed Aerial Vehicles (UUAVs)

Unmanned aerial vehicles (UAVs), uncrewed aerial vehicles (UUAVs), and other drone technologies are having a significant impact on the nature of modern conflict and security. Drones are now deployed by both state and non-state actors, dramatically increasing their presence on the battlefield. Militaries use drones for surveillance, reconnaissance, and precision strikes, allowing them to respond rapidly to emerging threats while minimising the risk to personnel. These systems can compress the "kill chain", the process from detecting to destroying a target, and are capable of hours-long reconnaissance, surveillance, and even documentation of war crimes or collateral damage for humanitarian purposes.


Importantly, the accessibility and affordability of commercially available civilian drones have allowed non-state actors, including terrorist organisations, to improvise attacks and carry out operations that would have been previously impossible without a conventional air force. The proliferation of drones in conflicts such as Ukraine, Syria, and others has led to an increase in strikes, fatalities, and complexity on the battlefield. Drones, armed and unarmed, are also used in peacekeeping contingents for monitoring, securing locations, and protecting civilians. While drone technology brings strategic and tactical advantages, it also raises ethical concerns regarding transparency, international law, and the normalisation of conflict. The shift toward remote and autonomous capabilities marks a profound transformation in both conflict management and peace operations worldwide.


Cybercrime, Cyber Attacks, Ransomware & Cyber Security

Cybercrime and cyber attacks have become some of the most severe threats in today's digital society, with global damages expected to reach an astonishing $10.5 trillion annually in 2025, making cybercrime more costly than natural disasters and more profitable than the global trade of major illegal drugs. Cybercrime encompasses various offences, including ransomware, phishing, data breaches, identity theft, and financial fraud, and is increasingly enabled by automation, AI-driven social engineering, and “cybercrime-as-a-service” models. Ransomware remains the fastest-growing threat, causing billions in financial losses, widespread data destruction, business disruption, and reputational harm for victims worldwide.


Securing against cyber threats is a major challenge as attackers exploit evolving technologies, complex supply chains, and global communications infrastructure. Organisations must navigate geopolitical risks, a widening workforce skills gap, and a constantly changing attack landscape, making cybersecurity an essential priority for governments, companies, and individuals alike. The proliferation of attacks on critical sectors, from energy grids to satellite infrastructure, demonstrates the urgent need for robust cyber defence, international cooperation, and ongoing investment in talent and technology to ensure long-term digital resilience.


CASE STUDY 1: Autonomous Drone Swarms in Ukraine (2024-2025)

Date: January 2024 - Present


Context: Ukraine has pioneered the use of coordinated AI-powered drone swarms in warfare, representing a significant evolution in military technology.


Key Facts:

  • Defense One (Jan 2024) reports Ukraine deploying swarms of 10-50 drones operating autonomously

  • Cost: Each drone costs $400-$1,000 vs. $400,000 for traditional missiles

  • Russian forces have downed over 10,000 Ukrainian drones but cannot keep pace with production

  • AI enables drones to coordinate without centralized control, continuing the mission if communications are jammed (see the coordination sketch after this list)

  • Facial recognition on drones can identify and track specific individuals on battlefield
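
The decentralised coordination highlighted above can be made concrete with a toy model. The sketch below is hypothetical (boids-style local rules; the actual military software is not public): each drone steers using only the neighbours it can currently reach over the mesh, so there is no central controller to jam, and an isolated drone simply continues on its last heading.

```python
# Hypothetical sketch of decentralised swarm coordination.
# Boids-style local rules stand in for the (non-public) real systems.
import math

class Drone:
    def __init__(self, x, y, heading):
        self.x, self.y, self.heading = x, y, heading

    def neighbours(self, swarm, radius=50.0):
        # A drone only "sees" peers within local mesh-radio range.
        return [d for d in swarm
                if d is not self
                and math.dist((self.x, self.y), (d.x, d.y)) < radius]

    def step(self, swarm):
        near = self.neighbours(swarm)
        if near:
            # Cohesion: steer toward the local centre of mass.
            cx = sum(d.x for d in near) / len(near)
            cy = sum(d.y for d in near) / len(near)
            toward = math.atan2(cy - self.y, cx - self.x)
            # Alignment: blend own heading with neighbours' average.
            avg = sum(d.heading for d in near) / len(near)
            self.heading = 0.5 * self.heading + 0.3 * avg + 0.2 * toward
        # If jammed or isolated (no neighbours), keep the last heading.
        self.x += math.cos(self.heading)
        self.y += math.sin(self.heading)

swarm = [Drone(x=10.0 * i, y=0.0, heading=0.1 * i) for i in range(10)]
for _ in range(100):          # no central controller in this loop:
    for drone in swarm:       # every drone decides for itself
        drone.step(swarm)
```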


Digital Systems Involved:

  • Machine learning algorithms for target recognition

  • Mesh networking for drone-to-drone communication

  • Computer vision systems

  • Autonomous navigation (GPS-denied environments)

  • Edge computing on individual drones


Stakeholders:

  • Ukrainian forces (tactical advantage, reduced casualties)

  • Russian forces (defensive disadvantage, high attrition)

  • Civilian populations in combat zones (increased accuracy reduces collateral damage vs. artillery BUT surveillance concerns)

  • International military observers (learning for future conflicts)

  • Arms control advocates (concerned about proliferation)


Impacts & Implications:

5.2A: CONFLICT, PEACE AND SECURITY

Human Rights Affected:

  • Right to life (both protection and violation potential)

  • Rights of combatants vs. civilians (autonomous targeting decisions)

  • Right to human dignity (being killed by an algorithm without human oversight)


IB Connections:

  • Concepts: Change (2.1) - warfare transformation; Systems (2.6) - interconnected autonomous systems; Values & Ethics (2.7) - accountability gaps

  • Content: AI (3.6) - machine learning in warfare; Robots (3.7) - autonomous weapons systems; Networks (3.4) - mesh communication

  • Contexts: Political (4.6C) - military applications; Environmental (4.3) - battlefield conditions affect operations


Potential Interventions to Research:

  1. International treaty banning fully autonomous weapons (Campaign to Stop Killer Robots)

  2. Requirement for "human in the loop" before lethal force

  3. Transparent AI decision-making with explainable algorithms

  4. Limitations on swarm size

  5. Registration and tracking system for military drones


Sources:

  • Defense One, "Ukraine is pioneering the use of AI-powered drone swarms" (January 15, 2024)

  • IISS Armed Conflict Survey 2024

Digital Society 5.2 Governance and Human Rights - Participation and Representation

Democratic Decline:

  • Democracy continued its recent decline in 2023, with notable challenges emerging with regard to Representation and Rights (International IDEA - The Global State of Democracy 2024)

  • On balance, four in nine countries were worse off in 2023 than they had been in 2018, while only one in four had improved (International IDEA - The Global State of Democracy 2024)

  • The report suggests that there are currently 91 democracies and 88 autocracies in the world (V-Dem Democracy Report 2024)


Women's Political Participation:

  • As of 1 June 2024, there are 27 countries where 28 women serve as Heads of State and/or Government. At the current rate, gender equality in the highest positions of power will not be reached for another 130 years (UN - Democracy)

  • Only seven countries have 50% or more women in their lower houses of parliament: Rwanda (61%), Cuba (56%), Nicaragua (54%), and Andorra, Mexico, New Zealand, and the UAE (each 50%) (UN - Democracy)

  • However, 21 countries have fewer than 10% women in parliament, and some have no women at all. Gender parity in national legislatures is not expected to be achieved before 2063 at the current rate of progress (UN - Democracy)


International IDEA (the International Institute for Democracy and Electoral Assistance)

International IDEA (the International Institute for Democracy and Electoral Assistance) is an intergovernmental organisation founded in 1995 that supports sustainable democracy worldwide. With 35 member countries spanning all continents, International IDEA works to strengthen democratic institutions and processes by providing comparative knowledge, assisting in democratic reform, and influencing policies and politics. The organisation focuses on key areas including electoral processes, constitution-building, political participation and representation, and democracy assessment. International IDEA serves as a unique platform where policymakers, researchers, and practitioners can access resources, data, and expertise to address challenges facing democracies globally—from declining voter turnout and political polarisation to threats against democratic institutions.


In the context of digital society and civic participation, International IDEA has become increasingly engaged with how technology impacts democratic governance and representation. The organisation researches and provides guidance on digital voter registration systems, electronic voting, online political campaigning, and the use of social media in elections. They also examine how digital tools can enhance citizen participation in decision-making processes while addressing concerns about disinformation, data privacy, and the digital divide that can exclude certain populations from democratic participation. International IDEA's work includes helping countries navigate the opportunities and risks of digital technology in elections and governance, ensuring that technological advancement strengthens rather than undermines democratic principles of fair representation and inclusive participation.


The United Nations Democracy Fund (UNDEF)

The United Nations Democracy Fund (UNDEF) was created by UN Secretary-General Kofi Annan in 2005 as a UN trust fund to support democratization efforts around the world. UNDEF funds projects that empower civil society, promote human rights, and encourage the participation of all groups in democratic processes, with the large majority of funds going directly to local civil society organisations. This approach is unique within the UN system because it complements the organisation's traditional work with governments by instead supporting grassroots democratic movements. UNDEF operates entirely on voluntary contributions from governments and by 2024 had reached almost $250 million in contributions from more than 45 donor countries, including many middle- and low-income states in Africa, Asia, and Latin America. The fund focuses on seven key areas:

  • civic engagement for climate action,

  • support for electoral processes,

  • women's leadership and gender equality,

  • media and freedom of information,

  • rule of law and human rights,

  • strengthening civil society interaction with government, and

  • youth engagement.


The impact of UNDEF has been substantial despite its selective approach. Over 18 funding rounds, UNDEF has supported more than 920 two-year projects in over 130 countries, with grants ranging from $100,000 to $200,000. The selection process is highly rigorous—UNDEF receives an average of 1,500-2,000 proposals annually but selects only about 40 projects, meaning only around 2% of proposals receive funding. This selective approach ensures resources go to the highest-impact initiatives. By channelling funds directly to local civil society organisations rather than government entities, UNDEF has played a crucial role in strengthening democratic governance from the ground up, particularly in countries emerging from conflict, new democracies, and least developed nations. The fund's emphasis on civil society reflects the understanding that lasting democracy requires active citizen participation and robust independent organisations, not just governmental structures.


Estonia's i-voting System

Estonia has been a global pioneer in digital democracy, becoming the first country to implement nationwide internet voting (i-voting) in 2005 for local elections. The Estonian e-voting system allows citizens to cast their ballots remotely using their national digital ID cards and a computer with an internet connection. The system works through a secure, encrypted process where voters authenticate themselves using a two-factor system (ID card and PIN), make their selection, and digitally sign their ballot. A unique feature of Estonia's system is that voters can vote multiple times during the early voting period—only the final vote counts—which provides a safeguard against coercion since voters can change their vote privately. The system also allows voters to override their internet vote by casting a paper ballot on election day, ensuring that physical voting remains the final authority.
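
The counting rule described above is easy to express in code. Below is an illustrative sketch with hypothetical names and structures; in the real system each ballot is encrypted with the election public key and digitally signed with the voter's ID-card key rather than stored in plain text.

```python
# Illustrative sketch of Estonia's "only the final vote counts" rule.
from dataclasses import dataclass

@dataclass
class Ballot:
    voter_id: str        # authenticated with ID card + PIN (two factors)
    choice: str          # in reality: encrypted, then digitally signed
    cast_at: int         # timestamp within the voting period
    on_paper: bool = False

def tally(ballots):
    final = {}
    for b in sorted(ballots, key=lambda b: b.cast_at):
        # A paper ballot cast on election day overrides any i-vote.
        current = final.get(b.voter_id)
        if current and current.on_paper:
            continue
        final[b.voter_id] = b
    counts = {}
    for b in final.values():
        counts[b.choice] = counts.get(b.choice, 0) + 1
    return counts

ballots = [
    Ballot("EE-1001", "Party A", cast_at=1),
    Ballot("EE-1001", "Party B", cast_at=5),  # re-vote replaces the first
    Ballot("EE-1002", "Party A", cast_at=2),
    Ballot("EE-1002", "Party C", cast_at=9, on_paper=True),  # paper wins
]
print(tally(ballots))  # {'Party B': 1, 'Party C': 1}
```

The re-voting rule is what provides the coercion safeguard: a voter pressured to vote one way can later, in private, replace that ballot.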


The impacts of Estonia's e-voting system have been significant for democratic participation and representation. By 2023, nearly half of all Estonian voters were casting their ballots online, demonstrating widespread adoption and trust in the system. The convenience of i-voting has particularly benefited certain demographics: younger, more tech-savvy voters; people with mobility challenges; Estonians living abroad; and those with busy schedules who find traditional polling station hours restrictive. However, the system has also raised important concerns about the digital divide—older citizens and those less comfortable with technology may feel excluded or disadvantaged. Critics have also pointed to security vulnerabilities and the difficulty of ensuring vote secrecy and preventing coercion in remote voting environments. Despite these debates, Estonia's experience has provided valuable lessons for other countries considering digital voting, demonstrating both the potential for technology to increase democratic participation and the need to carefully address security, accessibility, and trust issues.


Finland's Online Voting System

Finland conducted a limited e-voting pilot in 2008 for municipal elections in three municipalities using internet-enabled voting machines supplied by Scytl. While the government initially considered the pilot successful, the Supreme Administrative Court later declared the results invalid and ordered a rerun using traditional paper ballots after complaints revealed a critical usability problem—in 232 cases (2% of votes), voters had selected their choice but failed to confirm it, meaning their votes were not recorded. Since this failed trial, Finland has never used electronic voting on a large scale, and all voting continues to be conducted with pen and paper with ballots counted by hand. This demonstrates how a seemingly small technical flaw can undermine an entire digital voting system and highlights the importance of user interface design in ensuring every vote is properly recorded. Finland's experience shows the risks of moving too quickly with digital democracy without adequate testing and verification processes.
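
The flaw can be reconstructed in miniature. The sketch below is a hypothetical reconstruction, not Scytl's actual code: any selection that is never explicitly confirmed is silently dropped when the session ends.

```python
# Hypothetical reconstruction of the Finnish pilot's usability flaw.

class VotingSession:
    def __init__(self):
        self.selection = None
        self.confirmed = False

    def select(self, candidate):
        self.selection = candidate

    def confirm(self):
        if self.selection is not None:
            self.confirmed = True

    def finish(self):
        # Flawed behaviour: silently records no vote if the voter
        # selected a candidate but never pressed "confirm".
        return self.selection if self.confirmed else None

session = VotingSession()
session.select("Candidate X")
print(session.finish())  # None -> vote lost, as happened in 232 cases
```

A safer design would refuse to end the session while a selection is pending, forcing an explicit confirm or cancel before the voter can leave.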


Switzerland's Online Voting System

Switzerland presents a more complex story of gradual progress and setbacks. The country pioneered e-voting trials starting in 2004, with 15 cantons participating in over 300 trials, but in 2019 all trials were temporarily halted after public penetration testing revealed security vulnerabilities in systems that had been certified for widespread use. After a four-year suspension, Switzerland resumed limited e-voting trials in June 2023 with only around 65,000 voters (approximately 1.2% of the electorate) in four cantons—Basel-Stadt, St Gallen, Thurgau, and Graubünden—able to vote online. The system has proven popular among its target groups: in Basel-Stadt, over 53% of Swiss citizens abroad who voted chose to vote online, and similar rates were seen in Thurgau. However, the impacts have been mixed—while advocates like the Organisation of the Swiss Abroad and disability rights groups champion e-voting as essential for democratic participation, critics point to extremely high costs, with some IT specialists calculating it would be cheaper to fly every Swiss citizen abroad to Switzerland to vote in person than to maintain the current e-voting system. Switzerland's cautious "security before speed" approach contrasts sharply with Estonia's widespread adoption, demonstrating that concerns about security, cost, and public trust can significantly limit the expansion of digital voting even in technically advanced democracies.


Digital Activism & Hacktivism

Digital activism refers to the use of digital technology and online platforms such as social media, messaging apps, and websites to promote social, political, and environmental change. Activists can mobilise supporters, raise awareness, and organise campaigns with unprecedented speed and reach, often transcending geographical boundaries. Examples include viral hashtag movements, online petitions, citizen journalism, and virtual protests, which have made it possible for voices to be heard even in repressive environments where traditional participation is risky or restricted. In 2025, young activists in particular have harnessed AI, short-form video, and real-time engagement to build collective action and solidarity worldwide, despite challenges such as disinformation and algorithmic bias.


Hacktivism merges the techniques of hacking with activism, using computer technology to directly challenge power structures and advocate for causes like open sourcing, privacy, and freedom of speech. Hacktivist actions range from website defacement to more disruptive forms like distributed denial-of-service (DDoS) attacks, designed to overwhelm targeted servers and draw attention to political issues. Groups such as Anonymous have infamously used these tactics against governments and corporations, bringing controversial topics into the spotlight and sparking debates over ethics, digital rights, and civil disobedience in cyberspace. While both digital activism and hacktivism contribute to broader participation and representation, they also expose the tension between lawful dissent and disruptive action in the digital public sphere.


CASE STUDY 2: Deepfake Technology in 2024 Elections

Date: Throughout 2024 election year globally


Context: 2024 saw over 60 national elections worldwide, with AI-generated deepfakes playing an unprecedented role in disinformation campaigns.


Key Facts:

  • Slovakia (September 2023): Deepfake audio of an opposition leader discussing election rigging surfaced 48 hours before voting, too late for effective fact-checking. The election was decided by a 2.5% margin

  • Indonesia (February 2024): Deceased president Suharto "resurrected" in deepfake video endorsing candidate Prabowo Subianto

  • Pakistan (February 2024): Jailed Imran Khan used AI voice clone to deliver victory speech from prison

  • India (April-May 2024): Political parties openly used AI to create multilingual campaign messages with candidates "speaking" in languages they don't know

  • United States (January 2024): Robocalls using an AI clone of President Biden's voice urged New Hampshire Democrats not to vote in the state's primary


Technology Evolution:

  • 2020: Deepfakes required specialized skills and hours of processing

  • 2024: Consumer apps create convincing deepfakes in minutes on smartphones

  • Audio deepfakes more problematic than video (easier to create, harder to detect)

  • Real-time deepfakes now possible (can impersonate during live video calls)


Digital Systems Involved:

  • Generative AI (Sora, Synthesia, ElevenLabs, etc.)

  • Text-to-speech neural networks

  • Face-swapping algorithms

  • Social media distribution algorithms (amplify engagement regardless of authenticity)

  • Detection tools (inconsistent effectiveness)


Stakeholders:

[Stakeholder table: Deepfake Technology in 2024 Elections]

Impacts & Implications:

  • On Democracy: Undermines informed consent; creates "liar's dividend" (real evidence dismissed as fake)

  • On Trust: Accelerates distrust in institutions, media, even personal observations

  • On Participation: Voters feel powerless; some disengage entirely

  • On Human Rights: Right to free elections compromised; freedom of expression complicated


IB Connections:

  • Concepts: Expression (2.2) - synthetic speech; Identity (2.3) - impersonation; Power (2.4) - who controls narrative; Values & Ethics (2.7) - truth and authenticity

  • Content: AI (3.6) - generative models; Media (3.5) - synthetic media; Networks (3.4) - distribution systems

  • Contexts: Political (4.6A/B) - electoral integrity; Cultural (4.1D) - truth and misinformation


Interventions Being Tried:

  1. Content Provenance: Coalition for Content Provenance and Authenticity (C2PA) creating "nutrition labels" for media

  2. Platform Policies: Meta/YouTube requiring disclosure of AI-generated political content

  3. Legislation: EU AI Act requires labeling; US states passing varied laws

  4. Detection Technology: Perceptual hashing, blockchain verification, forensic analysis (see the perceptual-hash sketch after this list)

  5. Media Literacy: Finland's model of teaching detection skills in schools
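
To make one of these detection techniques concrete, here is a minimal sketch of perceptual hashing (an "average hash"), assuming a tiny grayscale image represented as a 2D list of 0-255 values. Production systems use far more robust hashes, but the principle is the same: slightly edited copies of known media still hash to nearly identical fingerprints.

```python
# Minimal sketch of perceptual hashing ("average hash") for matching
# near-duplicate media. Pixel data below is invented for illustration.

def average_hash(pixels):
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # One bit per pixel: brighter than the image average or not.
    return [1 if p > avg else 0 for p in flat]

def hamming(h1, h2):
    # Number of differing bits; small distance = likely same content.
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 20, 200], [30, 220, 210], [15, 25, 230]]
# A recompressed, slightly brightened copy: values shift, structure survives.
tweaked = [[18, 28, 205], [40, 228, 215], [22, 35, 240]]

print(hamming(average_hash(original), average_hash(tweaked)))  # 0
```

Note the arms-race caveat in the evaluation below: a determined adversary can crop, warp, or regenerate content until the fingerprint no longer matches.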


Evaluation of Interventions:

  • Content provenance: Promising but requires widespread adoption; doesn't work for screenshots

  • Platform policies: Inconsistent enforcement; arrives late in election cycle

  • Legislation: Fragmented (different in each country); free speech concerns; enforcement difficult

  • Detection tech: Arms race with creators; often unreliable

  • Media literacy: Long-term solution but doesn't help immediate elections


Sources:

  • Reuters coverage of the Slovakia election deepfake (2023)

  • MIT Technology Review, "Deepfake Democracy" series

  • Stanford HAI, Artificial Intelligence Index Report 2024


CASE STUDY 3: Brazil's X/Twitter Ban (August-September 2024)

Date: August 30 - September 30, 2024 (30 days)


Context: Supreme Court Justice Alexandre de Moraes ordered complete ban of X (formerly Twitter) in Brazil after Elon Musk refused to comply with content moderation orders and closed X's Brazilian office.


Key Facts:

  • Brazil has 220+ million people, X's 5th largest market (40+ million users)

  • Ban imposed after X refused to remove accounts spreading misinformation and antidemocratic content

  • Individuals using VPNs to access X faced fines up to $8,800 USD (50,000 reais) per day

  • X's assets in Brazil frozen; daily fines of $900,000 for non-compliance

  • Bluesky (competitor) gained 2.6 million Brazilian users in 3 days

  • Ban lifted October 8 after X complied: appointed legal representative, paid fines, removed flagged content


Timeline:

  • April 2024: Moraes orders X to block accounts accused of spreading misinformation about 2022 election

  • August 17: X closes Brazilian office rather than comply; Musk calls Moraes "dictator"

  • August 28: Moraes gives X 24 hours to appoint legal representative

  • August 30: X banned nationwide; Apple and Google ordered to remove from app stores

  • September 2: X files a compliance notice, but it is rejected as insufficient

  • September 20: X agrees to all conditions

  • October 8: Ban lifted


Digital Systems Involved:

  • Platform content moderation systems

  • ISP-level blocking (ANATEL - Brazilian telecom agency; see the sketch after this list)

  • VPN detection and blocking

  • App store distribution controls

  • Financial systems for fine collection
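
A minimal sketch of how DNS-based, ISP-level blocking works, with placeholder domains and addresses (real enforcement layered DNS filtering, IP blocking, and app-store removals):

```python
# Simplified sketch of ISP-level DNS blocking. Entries are placeholders.

COURT_ORDERED_BLOCKLIST = {"x.com", "twitter.com"}

DNS_TABLE = {"x.com": "104.244.42.1", "example.org": "93.184.216.34"}

def resolve(domain):
    # The ISP's resolver consults the blocklist before answering.
    if domain in COURT_ORDERED_BLOCKLIST:
        return None  # no answer returned: the site appears unreachable
    return DNS_TABLE.get(domain)

print(resolve("example.org"))  # '93.184.216.34' -> resolves normally
print(resolve("x.com"))        # None -> blocked at the ISP level
```

A VPN defeats this check by resolving and routing traffic outside the Brazilian ISP entirely, which is why the court order also attached fines to VPN use.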


Stakeholders & Their Positions:

[Stakeholder table: Brazil's X/Twitter Ban (August-September 2024)]

Digital Rights Analysis:

Arguments For Ban:

  • National sovereignty: countries have right to enforce laws within borders

  • Democratic protection: misinformation threatened election integrity

  • Platform accountability: X's refusal to have legal presence meant no accountability

  • Legal due process: court orders went through proper channels


Arguments Against Ban:

  • Freedom of expression: government shouldn't block communication platforms

  • Judicial overreach: single judge shouldn't have this much power

  • Effectiveness: Drove users to less moderated platforms; created martyrdom narrative

  • Chilling effect: Sets precedent other countries might abuse


Impacts & Implications:

  • On Participation: Reduced space for political organizing, particularly for right-wing groups

  • On Representation: Amplified judicial power over digital communication

  • On Platform Governance: Demonstrated states can successfully enforce compliance through bans

  • On Global Internet: Strengthened "splinternet" trend (fragmented global internet along national lines)


IB Connections:

  • Concepts: Power (2.4) - state vs. platform; Expression (2.2) - limits on speech; Space (2.5) - territorial jurisdiction over digital platforms

  • Content: Media (3.5) - social platforms; Networks (3.4) - blocking mechanisms

  • Contexts: Political (4.6A) - political speech; Social (4.7) - community formation; Economic (4.2) - platform business models


Comparison to Other Bans:

  • Turkey (banned Twitter 2014, 2023): Often lifted after days/weeks

  • Russia (banned Twitter/X 2022): Permanent; part of broader information control

  • India (temporary bans): Hours to days; usually negotiated

  • Pakistan (frequent bans): Political tool; used repeatedly

  • Brazil's uniqueness: Enforced through fines on users; roughly six-week duration; ended in full compliance


Lessons for Digital Governance:

  1. Platforms ultimately comply when faced with losing major markets

  2. Bans are increasingly feasible technologically (ISP-level blocking works)

  3. Alternative platforms benefit from bans

  4. User behavior: Most don't use VPNs despite availability

  5. National sovereignty in digital space is strengthening


Sources:

  • Reuters, BBC reporting (August-September 2024)

  • Access Now digital rights analysis

  • Brazilian Supreme Court official statements

Digital Society 5.2 Governance and Human Rights - Diversity and Discrimination

Workplace Discrimination Statistics:

  • The EEOC received 88,531 new charges of discrimination in fiscal year 2024 alone, reflecting a more than 9% increase over the number of charges filed in fiscal year 2023 (U.S. Equal Employment Opportunity Commission - 2024 Annual Performance Report)

  • Four out of ten Black workers report experiencing discrimination or unfair treatment in the workplace (Walker Law - Workplace Discrimination Statistics 2024)

  • 27% of Black men and 23% of Black women have reported experiencing workplace discrimination, with minimal variation across income levels (DOIT Software - 2024 Diversity in the Workplace Statistics)


Digital Gender Discrimination:

  • Digital technology, the product of an industry that is predominantly male, represents a new source of discrimination and bias (UN News - Digital technology new source of discrimination against women)

  • Rather than presenting facts and addressing bias, technology based on incomplete data and badly designed algorithms is digitising and amplifying sexism, with deadly consequences (UN News - Digital technology new source of discrimination against women)

  • Women hold only 26.7% of tech employment, while men hold 73.3% of these positions (Enterprise Apps Today - Diversity in Tech Statistics 2024)


Racial and Ethnic Disparities:

  • According to the World Justice Project Rule of Law Index, 70% of countries have seen race discrimination worsen between 2021 and 2022 (ElectroIQ - Race Discrimination Statistics 2024)

  • Globally, one out of five people have experienced race discrimination in some form (ElectroIQ - Race Discrimination Statistics 2024)

  • Job resumes with traditionally white-sounding names get 50% more callbacks than those with traditionally Black-sounding names (ElectroIQ - Race Discrimination Statistics 2024)


Digital Inclusion Challenges:

  • 243 million people may need help accessing services because their identity documents are non-standard or outdated (affects migrants, refugees, and marginalised communities) (Sumsub - Addressing the Digital Divide in 2025)

  • 96 million people experience challenges in verification processes because their appearance differs from their ID photos due to medical conditions or other factors (Sumsub - Addressing the Digital Divide in 2025)


LGBTQ+ Workplace Discrimination:

  • When applying for jobs, nearly a quarter (23.7%) of LGBTQ+ Americans have experienced discrimination based on sexual orientation or gender identity (Druthers Search - Diversity & Inclusion Workplace Statistics 2024)

  • In the European Union, 19% of LGBTQ+ men and 21% of LGBTQ+ women experienced discrimination at work in 2019, but transgender employees specifically reported much higher proportions of discrimination (36%) (Druthers Search - Diversity & Inclusion Workplace Statistics 2024)


Systemic Inequality in Digital Access:

  • The gender digital divide is fast becoming the new face of gender inequality (UN News - Digital technology new source of discrimination against women)

  • Online spaces are not safe for women and girls, as they have been attacked, targeted, or denigrated on the internet (UN News - Digital technology new source of discrimination against women)


Key Challenges:

  • Algorithmic bias perpetuates and amplifies existing discrimination

  • Digital identity systems often fail to accommodate diverse populations

  • AI systems trained on biased data reproduce discriminatory outcomes

  • Limited diversity in tech development teams affects product design and functionality

  • Digital platforms can become venues for harassment and discrimination


Amnesty International

Amnesty International is a global human rights organisation founded in 1961 by British lawyer Peter Benenson that has grown into one of the world's largest and most influential grassroots movements for human rights. With more than 10 million members, supporters, and activists in over 150 countries, Amnesty operates independently of any government, political ideology, economic interest, or religion. The organisation's mission is to campaign for a world where human rights are enjoyed by all, working to prevent and end grave abuses of the rights to physical and mental integrity, freedom of conscience and expression, and freedom from discrimination. Amnesty's work spans a wide range of human rights issues including opposing the death penalty, combating torture and arbitrary detention, defending freedom of expression, protecting refugees and migrants' rights, holding corporations accountable for human rights abuses, and advocating for the rights of women, LGBTQ+ individuals, and marginalised communities. The organisation is particularly known for its urgent action campaigns, where members worldwide mobilise quickly to write letters, send emails, and petition governments on behalf of individuals at immediate risk.


The impact of Amnesty International has been profound and far-reaching. The organisation has documented human rights abuses in virtually every country, published thousands of authoritative research reports that have influenced international policy, and successfully campaigned for the release of tens of thousands of prisoners of conscience—people imprisoned solely for peacefully exercising their human rights. Amnesty's investigations and advocacy have contributed to landmark achievements including the establishment of the International Criminal Court, the adoption of the Arms Trade Treaty, and numerous national reforms on issues from abolishing the death penalty to protecting freedom of speech. The organisation's rigorous research methodology and reputation for impartiality have made its reports essential references for journalists, policymakers, and courts worldwide. Through its "naming and shaming" approach—publicly exposing human rights violations and putting pressure on perpetrators—Amnesty has helped shift the international consensus on what constitutes acceptable state behaviour, making human rights a central concern in global politics. The organisation received the Nobel Peace Prize in 1977 for its defence of human dignity against violence and subjugation, recognising its unique contribution to making the protection of human rights a fundamental principle of international relations.


China's Integrated Joint Operations Platform (IJOP)

China's Xinjiang region has become one of the world's most sophisticated surveillance states, where digital technology is used to systematically monitor and control the Uyghur population and other predominantly Muslim ethnic minorities. At the centre of this system is the Integrated Joint Operations Platform (IJOP), which aggregates data from multiple sources and flags individuals considered potentially threatening—some of whom are then detained and sent to political education camps. The IJOP system tracks everyone in Xinjiang by monitoring the location data of phones, ID cards, and vehicles, as well as electricity and gas station usage. The surveillance infrastructure includes iris scanners, CCTV cameras with facial and voice recognition capabilities, and mandatory DNA sampling, all linked to residents' online activity, banking information, phone calls and text messages (CBC). Chinese technology companies have developed specialised tools for this system, including ethnicity analytics software specifically designed to distinguish Uyghurs from other populations. Researcher Adrian Zenz estimates that Xinjiang's domestic security spending reached around $8 billion in 2017, a tenfold increase from 2007.


The human rights impacts of this surveillance state are profound and discriminatory. Authorities consider many forms of lawful, everyday behaviour suspicious, such as "not socialising with neighbours, often avoiding using the front door," using encrypted communication tools like WhatsApp, donating to mosques, or preaching the Quran without authorisation. Behaviour deemed "suspicious" can include innocuous activities like buying more gas than usual or exiting one's home through the back door too frequently, which can trigger automated alerts sent directly to police officers' phones, leading to interrogation and potential detention. Up to one million Uyghurs and members of other ethnic groups are estimated to have been arbitrarily held in so-called "re-education camps". The system represents what some experts describe as mass social reengineering aimed at eliminating Uyghur cultural and religious identity through technology-enabled control. This case demonstrates how digital surveillance technology can be weaponised to enable systematic discrimination and human rights violations against specific ethnic and religious groups, serving as a warning about the potential for technology to facilitate state oppression rather than enhance democratic participation.


Chinese Exclusion Act

The Chinese Exclusion Act of 1882 was the first significant U.S. law to restrict immigration based on a specific nationality or ethnicity, marking a dark chapter in American history regarding discrimination and civil rights. Signed into law by President Chester A. Arthur, the act prohibited Chinese labourers from immigrating to the United States for a period of ten years and denied Chinese immigrants already in the country the right to become naturalised citizens. The law emerged from growing anti-Chinese sentiment, particularly on the West Coast, where Chinese immigrants who had come during the Gold Rush and to build the transcontinental railroad faced increasing hostility, violence, and scapegoating during economic downturns. White workers blamed Chinese labourers for depressing wages and taking jobs, despite the fact that Chinese workers often performed dangerous, low-paying work that others wouldn't do. The act was initially temporary but was extended multiple times and made permanent in 1902, remaining in effect until its repeal in 1943.


The impacts of the Chinese Exclusion Act were devastating and long-lasting for Chinese American communities. The law separated families for decades, as those already in America couldn't bring relatives or spouses to join them, and created a bachelor society among Chinese immigrants since the vast majority were men. It codified racial discrimination into federal law and established a precedent for future restrictive immigration policies targeting specific ethnic groups. The act also fostered a climate of suspicion and harassment, with Chinese residents required to carry certificates of residence and facing constant scrutiny from immigration officials. Beyond its immediate effects on Chinese Americans, the legislation normalised the idea that certain groups could be deemed undesirable and excluded based on race or national origin, influencing subsequent immigration laws like the Immigration Act of 1924 that extended exclusions to other Asian groups. The Chinese Exclusion Act stands as a stark reminder of how government policy can institutionalise discrimination and serves as an important historical context for understanding ongoing debates about immigration, citizenship, and racial justice in America.


China’s Social Credit System

China's Social Credit System (SCS) is a set of databases and initiatives that monitor and assess the trustworthiness of individuals, companies, and government entities, with rewards for those with high ratings and punishments for those with low scores (South China Morning Post). First officially proposed in 2014 with the release of the Outline for the Construction and Planning of the Social Credit System, the system aims to build a credit rating system targeting financial, social, and legal compliance. The databases are managed by China's National Development and Reform Commission and the People's Bank of China, gathering data from traditional sources such as financial records, criminal records, governmental records, and registry offices, as well as third-party sources like online credit platforms. The government utilises blacklists and redlists as key enforcement mechanisms—those on blacklists have already been barred from buying plane tickets and travelling domestically or abroad. The system emerged partly to address real problems: trust issues in Chinese society including food safety scandals, labour law violations, intellectual property theft, and corruption that resulted from rapid economic and social changes since 1978.
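
Mechanically, blacklist/redlist enforcement amounts to a lookup before a service is granted. The sketch below is purely illustrative, with invented identifiers; the real SCS is a patchwork of databases rather than a single list.

```python
# Illustrative sketch of blacklist/redlist enforcement. Identifiers
# and services are invented; the real system spans many databases.

BLACKLIST = {"person-42"}  # e.g. refused to pay a court-ordered fine
REDLIST = {"person-7"}     # flagged as highly "trustworthy"

def can_buy_plane_ticket(person_id):
    # Blacklisted individuals are barred from booking travel.
    return person_id not in BLACKLIST

def perks(person_id):
    return ["fast-track public services"] if person_id in REDLIST else []

print(can_buy_plane_ticket("person-42"))  # False -> barred from travel
print(perks("person-7"))                  # ['fast-track public services']
```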


However, the system raises significant concerns about discrimination, privacy, and social control. While there is no unified "Social Credit Score" that rates all individual behaviour (a common Western misconception), efforts have focused on establishing comprehensive digital files that track and document legal compliance. The most severe penalties, like travel restrictions, typically apply to serious or repeat offenders such as those who refuse to pay court-ordered fines, though the system's flexibility allows it to be swiftly redirected toward rules enforcement in changing policy circumstances. Critics argue the system can function as a tool for social control that violates individual privacy and freedoms, potentially creating second-class citizens based on their scores and enabling discrimination against those who fall afoul of broad or politically-motivated criteria. A 2022 academic study found that revealing the repressive potential of the SCS significantly reduces support for it among Chinese citizens. The system exemplifies how digital technology can be deployed by governments to monitor and regulate citizen behaviour at an unprecedented scale, raising fundamental questions about the balance between social order and individual rights in an increasingly data-driven world.


The Rohingya Genocide & Facebook

The Rohingya genocide represents one of the most devastating humanitarian crises of the 21st century. The Rohingya, a Muslim ethnic minority in Myanmar's Rakhine State, have faced systematic persecution by the Tatmadaw (Myanmar's armed forces) in a series of violent campaigns, with major crackdowns occurring from October 2016 to January 2017 and intensifying from August 2017 onward. In August 2017, Myanmar's military launched what it called "clearance operations" in response to Arakan Rohingya Salvation Army attacks on border posts, but the UN found evidence of widespread human rights violations including extrajudicial killings, summary executions, gang rapes, arson of Rohingya villages, and infanticides—killing at least 6,700 Rohingya in the first month alone between August 25 and September 24, 2017. The UN described the persecution as "a textbook example of ethnic cleansing," and various UN agencies, International Criminal Court officials, human rights groups, and governments have labelled it genocide. The violence forced over 700,000 Rohingya to flee to Bangladesh, creating the world's largest refugee camp, while approximately 600,000 remain in Myanmar facing ongoing restrictions and threats (Wikipedia). In March 2022, U.S. Secretary of State Antony Blinken formally determined that members of the Burmese military committed genocide and crimes against humanity against the Rohingya.


Facebook (now Meta) played a deeply troubling role in amplifying the violence through its algorithmic systems. Amnesty International found that in the months and years leading up to the 2017 atrocities, Facebook's algorithms intensified a storm of hatred against the Rohingya which contributed to real-world violence, with the platform's engagement-based systems actively promoting inflammatory content including hate speech that incited violence, hostility, and discrimination. Facebook operated with a de facto monopoly in Myanmar during the crucial years before the genocide—for many in Myanmar, Facebook was essentially the entire internet, serving as Google, LinkedIn, and Reddit all in one. Despite Meta receiving repeated communications and visits from local civil society activists between 2012 and 2017 warning that the platform risked contributing to extreme violence, and internal studies dating back to 2012 indicating that Meta knew its algorithms could result in serious real-world harms, the company repeatedly failed to heed warnings and enforce its own hate speech policies. Even well-intentioned measures backfired—when Facebook supported a 2014 civil society anti-hate campaign by creating "sticker packs" for users to post against violent content, the algorithms interpreted the responses as engagement and further increased the visibility and spread of harmful content. A 2020 investigation found that over 70% of views of a video by a banned anti-Rohingya hate figure came from Facebook's "chaining" feature, meaning users weren't seeking it out but had it fed to them by the platform's recommendation algorithms. This case exemplifies how digital platforms' pursuit of engagement and profit can enable and amplify discrimination, hate speech, and even genocide when algorithms prioritise inflammatory content without adequate safeguards or accountability.
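
The algorithmic failure described above, counting every interaction as a positive signal, can be shown in a few lines. The sketch below is a toy ranking model with invented weights, not Meta's actual ranking code: because outrage generates comments and shares, the inflammatory post wins the feed.

```python
# Toy model of engagement-based ranking. Weights are invented.

posts = [
    {"id": "local_news", "likes": 50, "comments":  5, "shares":  2},
    {"id": "hate_post",  "likes": 20, "comments": 90, "shares": 40},
]

def engagement_score(post):
    # No distinction between approval and outrage: every reaction,
    # including angry replies and anti-hate stickers, counts upward.
    return post["likes"] + 3 * post["comments"] + 5 * post["shares"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # ['hate_post', 'local_news']
```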


ASEAN

ASEAN actively promotes broader participation and representation by supporting digital inclusion and empowering marginalised groups, especially women and minorities, in regional dialogue and policymaking. Digital platforms have enabled greater access for traditionally excluded groups to information, decision-making spaces, and peacebuilding processes, reducing barriers to participation and amplifying underrepresented voices in both physical and virtual contexts. ASEAN's commitment to diversity and anti-discrimination is evident in its push for gender-responsive digital policies and campaigns to address cyberbullying and online harassment. Ongoing dialogue ensures that social inclusion, respect for cultural diversity, and protections against discrimination remain central as the region advances toward a more equitable digital society.


The Gender Shades Study: bias in facial recognition technology

The Gender Shades study, conducted in 2018 by MIT Media Lab researcher Joy Buolamwini and Microsoft Research's Timnit Gebru, was a groundbreaking investigation into bias in commercial facial recognition technology that revealed significant disparities in how AI systems performed across different demographics. The study tested three commercial facial-analysis programs from major technology companies (IBM, Microsoft, and Face++) and found dramatic differences in error rates: for light-skinned men, the error rates in determining gender were never worse than 0.8 percent, but for darker-skinned women, error rates ballooned to more than 20 percent in one case and more than 34 percent in the other two. Buolamwini, who is Black, began the research after discovering that commercial facial-recognition programs failed to recognise her own photos as featuring a human face at all in several cases, and when they did, consistently misclassified her gender. To investigate systematically, Buolamwini assembled a dataset of more than 1,200 images where women and people with dark skin were much better-represented than in typical evaluation datasets, working with a dermatologic surgeon to code images according to the Fitzpatrick scale of skin tones. The study revealed that existing benchmark datasets overrepresented lighter men and lighter individuals in general, a phenomenon the researchers termed "Pale Male Data".
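
The study's core method, disaggregated evaluation, is simple to express: compute error rates per demographic subgroup instead of a single aggregate figure. The sketch below uses invented records, not the actual Gender Shades data, but reproduces the shape of its finding.

```python
# Disaggregated evaluation: error rates per (skin tone, gender) group.
# Records are invented for illustration.
from collections import defaultdict

results = [
    # (skin_tone, gender, classifier_was_correct)
    ("lighter", "male", True), ("lighter", "male", True),
    ("lighter", "male", True), ("lighter", "male", True),
    ("lighter", "female", True), ("lighter", "female", True),
    ("lighter", "female", True), ("lighter", "female", False),
    ("darker", "male", True), ("darker", "male", True),
    ("darker", "male", True), ("darker", "male", False),
    ("darker", "female", True), ("darker", "female", False),
    ("darker", "female", False), ("darker", "female", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for tone, gender, correct in results:
    totals[(tone, gender)] += 1
    errors[(tone, gender)] += 0 if correct else 1

for group in sorted(totals):
    print(group, f"error rate: {errors[group] / totals[group]:.0%}")
# Overall accuracy here is ~69%, which hides a 75% error rate for
# darker-skinned women versus 0% for lighter-skinned men.
```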


The impact of the Gender Shades study has been profound in raising awareness about algorithmic bias and discrimination in AI systems. Buolamwini introduced the concept of the "coded gaze"—the idea that automated systems reflect the priorities, preferences, and prejudices of those who have the power to mould artificial intelligence. The findings raised critical questions about how neural networks that learn by looking for patterns in huge datasets are trained and evaluated, particularly when researchers at major tech companies were claiming accuracy rates of over 97 percent without acknowledging demographic disparities. Following the study's publication, IBM announced in 2020 that it would end its facial recognition program, with the Gender Shades project cited as influential research inspiring this decision. The research demonstrated how AI systems trained on non-diverse data are "destined to fail the rest of society" and highlighted the urgent need for fairness, accountability, and transparency in machine learning systems—particularly as these technologies are increasingly used in high-stakes decisions affecting employment, criminal justice, and access to services. The study sparked a broader movement for algorithmic justice and led Buolamwini to found the Algorithmic Justice League to advocate for more ethical and inclusive technology development.


U.S. Immigration and Customs Enforcement (ICE)

U.S. Immigration and Customs Enforcement (ICE) is a federal law enforcement agency created in 2003 under the Department of Homeland Security following the September 11 attacks. ICE was formed through a merger of the investigative and interior enforcement elements of the former U.S. Customs Service and the Immigration and Naturalization Service, and now has more than 20,000 employees in over 400 offices across the United States and around the world. The agency operates through four main directorates: Homeland Security Investigations (HSI), which investigates transnational criminal organisations; Enforcement and Removal Operations (ERO), which enforces U.S. immigration law and manages detention and deportation; the Office of the Principal Legal Advisor; and Management and Administration. HSI special agents investigate violations of more than 400 U.S. laws, including human smuggling and trafficking, narcotics smuggling, cyber crime, child exploitation, financial crimes, and weapons smuggling. Between October 2014 and November 2024, ICE made approximately 3.62 million detention book-ins and returned about 2.32 million people to their country of citizenship, with Mexican citizens accounting for the largest proportion at 31.1% of detainees.


However, ICE has become deeply controversial due to its extensive use of surveillance technology and significant human rights concerns, making it particularly relevant to discussions of digital society, governance, and discrimination. According to a Georgetown Law Center on Privacy & Technology report, ICE has used facial recognition to search the driver's license photos of one in three adults in the U.S., and has access to the driver's license data of almost three-fourths (74%) of all U.S. adults, in most cases without obtaining a search warrant. ICE buys and collects information from data brokers, departments of motor vehicles, utility companies, automated license plate reader databases, cell phone location data, social media, and even foreign law enforcement databases, then analyses it with automated tools, including risk assessment software and facial recognition technology that have been shown to be rife with inaccuracies and racial bias. In 2025, ICE began using a mobile application called Mobile Fortify that can identify someone through facial recognition in the field, and signed a $10 million contract with Clearview AI (the company's largest to date) for facial recognition software that controversially scrapes images from social media. The United Nations Human Rights Committee in November 2023 explicitly called out ICE for surveillance practices that conflict with human rights law, noting that no federal statute authorises ICE to engage in digital surveillance on such a scale and that no independent oversight ensures compliance with domestic civil rights law or human rights law. As the agency expanded under the second Trump administration, ICE was accused of numerous civil rights abuses: people detained by ICE reported being deprived of food, water, and showers, and several people died in ICE custody in the first months of 2025. This makes ICE a significant case study in how government agencies can use digital surveillance technology in ways that raise serious questions about privacy, discrimination, algorithmic bias, and human rights, particularly affecting immigrant and minority communities.


Diversity, Equity, and Inclusion (DEI) Policies

Diversity, Equity, and Inclusion (DEI) policies are organisational frameworks designed to promote fair treatment and full participation of all people, particularly groups historically underrepresented or discriminated against based on identity or disability. Diversity refers to the presence of variety within the organisational workforce in characteristics such as race, gender, ethnicity, sexual orientation, disability, age, and religion; equity refers to fairness and justice, including fair compensation and allocating resources to groups historically disadvantaged; and inclusion refers to creating an organisational culture where all employees feel their voices will be heard and experience a sense of belonging (Wikipedia). DEI initiatives were a direct response to widespread institutionalised discrimination in America, with landmark policies like the Civil Rights Act of 1964 laying the groundwork, and additional federal anti-discrimination laws such as Title IX and the Americans with Disabilities Act expanding protections to include gender equity, disability rights, and LGBTQ+ individuals (American Civil Liberties Union). Examples of DEI initiatives include implementing accessibility measures for people with disabilities, addressing gender pay inequity, expanding recruitment practices among underrepresented demographics, and holding anti-discrimination trainings (ABC News). According to a Pew Research Center survey, 56% of employed U.S. adults say focusing on increasing DEI at work is a good thing, with 61% reporting their workplace has policies ensuring fairness in hiring, pay, or promotions, and 52% having DEI trainings or meetings (Pew Research Center).


The impacts of DEI policies have been significant but contested. Research from McKinsey & Company in 2020 found that companies seeking diverse candidates and engaging in diversity training generally outperform less DEI-focused companies and operate more efficiently, with benefits including increased innovation, employee engagement, and talent retention (TechTarget). DEI initiatives enhance creativity and innovation, broaden talent attraction and retention, and can help address systemic inequality that threatens economic development and social cohesion (UN Global Compact). However, DEI has become highly politicised, particularly in recent years. In 2024 and 2025, several large American companies, including Google, Boeing, Disney, Walmart, Meta, Amazon, and McDonald's, scaled back or ended their DEI programs owing to pressure from President Trump and his administration, though these companies generally said they would continue fostering safe and inclusive workplaces while ending specific DEI-focused programs (Wikipedia). In January 2025, President Trump issued an executive order ending what he called "radical and wasteful government DEI programs," describing them as "illegal and immoral discrimination programs" and coordinating the termination of all DEI mandates in the federal government (White House). Since the Supreme Court's 2023 decision limiting affirmative action in higher education, state lawmakers have introduced more than 106 anti-DEI bills (American Civil Liberties Union). Critics argue DEI initiatives can lead to reverse discrimination or are implemented poorly, while proponents maintain they are essential for addressing systemic barriers and creating truly equitable opportunities. This ongoing debate reflects deeper tensions about how societies should address historical discrimination and whether identity-conscious policies help or hinder progress toward equality.


Big Data & the SDGs

Big data is increasingly pivotal for tracking and achieving the United Nations Sustainable Development Goals (SDGs), enabling governments, scientists, and organisations to monitor progress, identify challenges, and generate innovative solutions for complex issues like climate action, poverty, and public health. By integrating massive datasets from sources such as satellite imagery, mobile technology, social media, and environmental sensors, big data provides dynamic, real-time insights that support evidence-based policymaking and resource allocation. For example, machine learning applied to climate data can help map vulnerable regions, assess disaster risks, and inform emergency responses, while mobile data analytics can track migration patterns or economic activity where traditional statistics fall short.​


Big data, coupled with artificial intelligence, makes it possible to uncover nuanced patterns related to gender equality, education, and economic growth by combining official statistics with non-traditional data sources like household surveys and remote sensing. International forums now emphasise global cooperation through shared digital platforms, accelerating SDG progress with open data resources and policy coordination. However, realising the full potential of big data for the SDGs requires investing in data governance, statistical infrastructure, and technical capacity, ensuring that digital solutions are inclusive and geared toward “leaving no one behind” as set out in the 2030 Agenda.
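
As a hedged illustration of this data-integration idea, the sketch below "nowcasts" a survey-based indicator in a year with no survey by fitting a simple linear relationship to a frequent proxy signal (an invented night-light index). All numbers are made up; real pipelines add validation, uncertainty estimates, and far richer models.

```python
# Hypothetical sketch: estimating an SDG-style indicator in a non-survey year
# by regressing sparse official statistics on a frequent proxy signal.
# All numbers below are invented for illustration.

official = {2018: 41.0, 2020: 38.5, 2022: 36.0}               # poverty rate (%)
proxy = {2018: 0.62, 2020: 0.66, 2022: 0.71, 2024: 0.75}      # night-light index

# Ordinary least squares by hand on the overlapping years.
xs = [proxy[year] for year in official]
ys = [official[year] for year in official]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# Nowcast the year that has proxy data but no survey.
estimate = intercept + slope * proxy[2024]
print(f"Estimated 2024 poverty rate: {estimate:.1f}%")   # ~33.7% on these toy numbers
```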


Additional Case Studies Related to Diversity and Discrimination:

  1. Amazon's AI Recruitment Tool (2018) - Amazon's automated hiring system showed bias against women, penalising resumes containing the word "women's" or from women's colleges, demonstrating how historical hiring data can embed discrimination into AI systems (see the sketch after this list).

  2. Predictive Policing in the US - Tools like PredPol and algorithms used in risk assessment (COMPAS) have shown racial bias, disproportionately targeting minority communities and raising questions about algorithmic justice in criminal justice systems.

  3. Voice Recognition Technology Bias - Studies show that voice assistants (Siri, Alexa, Google Assistant) have difficulty understanding accents from non-native English speakers and certain regional dialects, creating accessibility barriers.

  4. UK's Algorithmic Visa System - The UK Home Office's "streaming" algorithm for visa applications was found to discriminate based on nationality, affecting applicants from certain countries.

  5. Netherlands' Welfare Fraud Detection System (SyRI) - Ruled unlawful in 2020 for violating privacy rights and disproportionately targeting low-income and immigrant neighbourhoods.

  6. India's Aadhaar Biometric ID System - World's largest biometric database raising concerns about surveillance, data breaches, and exclusion of marginalized groups from essential services.

  7. TikTok's Algorithm and Beauty Bias - Reports of content moderation suppressing videos from users deemed "ugly," poor, or disabled to attract new users.

  8. Healthcare AI Bias - Algorithm used by US hospitals showed racial bias, systematically providing less care to Black patients than equally sick white patients.
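
To make the mechanism in case 1 concrete, here is a toy sketch of how historically biased hiring labels turn a proxy feature into a learned penalty. The data and the scoring rule are invented for illustration; this is not Amazon's system.

```python
# Toy illustration of case 1: resumes with a "women's" keyword were as
# qualified as others, but historical decisions rejected them, so any model
# fit to those labels learns the keyword as a negative signal.
# All data invented; this is not Amazon's system.

# Each tuple: (has_womens_keyword, years_experience, historically_hired)
training = [
    (1, 5, 0), (1, 7, 0), (1, 6, 0),   # comparable profiles, rejected
    (0, 5, 1), (0, 4, 1), (0, 6, 1),   # comparable profiles, hired
]

with_kw = [hired for kw, _, hired in training if kw == 1]
without_kw = [hired for kw, _, hired in training if kw == 0]

# The hire-rate gap is the "signal" a model would absorb from these labels.
learned_penalty = sum(without_kw) / len(without_kw) - sum(with_kw) / len(with_kw)
print(f"Hire-rate gap attributed to the keyword: {learned_penalty:+.2f}")
# +1.00 here: the historical pattern, not job performance, becomes the rule.
```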


Tolerance for Religious and Cultural Differences

Digital technology plays a transformative role in supporting tolerance for religious and cultural differences by creating new spaces for dialogue, education, and collaboration across traditional boundaries. Through social media platforms, AI-powered applications, and virtual religious services, individuals and faith leaders can share beliefs, foster mutual understanding, and build community with people from vastly different backgrounds, all in real time. Examples include interfaith online forums, livestreamed multicultural worship sessions, and AI-driven apps that help minorities access guidance on their religious traditions, contributing to greater inclusion and digital pluralism.​


However, digital technology also brings challenges related to bias, misinformation, and the digital divide. AI-generated religious content and algorithms may inadvertently reinforce stereotypes or amplify existing prejudices, making ethical oversight and algorithmic transparency essential to ensure fairness and inclusivity in religious education and online discourse. As digital tools shape attitudes toward diverse faiths and cultures, educators and technologists must collaborate to create environments that support critical thinking, acknowledge historic inequalities, and encourage respectful engagement. When developed thoughtfully, digital platforms have the power to nurture interfaith tolerance, mutual learning, and peacebuilding in increasingly diverse societies.


Here are some well-known case studies and events illustrating how digital technology intersects with tolerance for religious and cultural differences, serving as concrete examples for further student research:

  • Muslim Pro App and Digital Surveillance: The globally popular Muslim Pro app, designed to help Muslims practice their faith, was found to have sold user location data, which ended up with U.S. military and intelligence agencies, raising concerns about privacy and religious freedom in an age of digital surveillance.​

  • Airbnb's “We Accept” Campaign: In response to discrimination, Airbnb launched this campaign to promote cultural inclusion and tolerance, showcasing diverse hosts and emphasising acceptance across communities via online platforms.​

  • Heineken's “Worlds Apart” Social Experiment: Through a viral video, Heineken brought together individuals with opposing views—including cultural and religious viewpoints—for open dialogue, using digital media to encourage understanding and respect.​

  • Indonesian Migrant Vloggers in Europe: Indonesian marriage migrants in Europe used YouTube and other platforms to negotiate religious identity and advocate for tolerance among followers, demonstrating digital homemaking and cross-cultural exchange.​

  • Algorithmic Bias and Religious Freedom: Discussions and case analyses, such as those by academic commentators on social media platforms, highlight real cases where algorithmic content recommendation and moderation have unintentionally suppressed religious self-expression or amplified hate speech.​


Key Organisations Related to Diversity and Discrimination:

Advocacy & Research:

  • Algorithmic Justice League - Founded by Joy Buolamwini, focuses on algorithmic bias

  • AI Now Institute (NYU) - Research on social implications of AI

  • Data & Society - Research institute on social and cultural issues of data-centric technologies

  • Electronic Frontier Foundation (EFF) - Digital rights and civil liberties

  • Access Now - Digital rights advocacy globally

  • Article 19 - Freedom of expression and information rights

  • AlgorithmWatch - European nonprofit examining algorithmic decision-making

Standards & Policy:

  • Partnership on AI - Multi-stakeholder organisation for responsible AI

  • IEEE Standards Association - Ethical tech standards

  • Centre for Data Ethics and Innovation (UK) - Government advisory body

  • European Digital Rights (EDRi) - Network of digital rights organisations

Human Rights Organisations (Digital Focus):

  • Human Rights Watch (Technology & Rights division)

  • Witness - Uses video technology for human rights documentation

  • Privacy International - Campaigns against government and corporate surveillance

Specialised Groups:

  • Center for Democracy & Technology - Digital rights in democracy

  • Ranking Digital Rights - Corporate accountability project

  • Mozilla Foundation - Internet health and digital rights

  • Web Foundation - Tim Berners-Lee's organisation for digital equality


CASE STUDY 4: India's Internet Shutdowns (2024 data)

Date: Throughout 2024


Context: India leads the world in internet shutdowns, using them as a tool for "maintaining law and order", though critics argue they suppress protest and democratic participation.


Key Statistics (2024):

  • India imposed 116 internet shutdowns in 2024 (more than all other countries combined)

  • 80% were in Jammu & Kashmir (Muslim-majority region)

  • Shutdowns lasted an average of 68 hours; the longest lasted 178 days

  • Affected approximately 73 million people in 2024

  • Economic cost: Estimated $450 million USD in lost productivity (a back-of-envelope check follows below)
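
Cost figures like the one above are usually built from a simple identity: people affected × hours offline × economic value per user-hour. The sketch below reproduces that arithmetic from the reported statistics; the $0.10 per user-hour value is our own assumption for illustration, not the methodology behind the quoted estimate.

```python
# Back-of-envelope reconstruction of a shutdown cost estimate:
# users affected x hours offline x value per user-hour.
# The per-user-hour value is an assumption made for illustration.

users_affected = 73_000_000       # reported people affected in 2024
avg_duration_hours = 68           # reported average shutdown length
value_per_user_hour_usd = 0.10    # assumed; varies widely by region and sector

estimated_cost = users_affected * avg_duration_hours * value_per_user_hour_usd
print(f"Estimated annual cost: ${estimated_cost / 1e6:.0f}M")   # ~$496M
```

That this lands in the same ballpark as the quoted $450 million is the point of such a check: the estimate is only as good as the assumed per-user-hour value.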


Notable 2024 Shutdowns:

1. Manipur (Ethnic violence)

  • Duration: 212 days total (various periods)

  • Reason: Clashes between Meitei and Kuki communities

  • Impact: Prevented coordination of relief efforts; families couldn't locate missing relatives


2. Punjab (Farmer protests)

  • Duration: 6 separate shutdowns, 2-5 days each

  • Reason: Farmers protesting agricultural policies

  • Impact: Prevented protest organizing; hurt businesses


3. Haryana (Religious tensions)

  • Duration: 8 days

  • Reason: VHP (Hindu nationalist group) rally expected to cause violence

  • Impact: Pre-emptive; no violence occurred, but the internet was cut regardless


4. Rajasthan (Caste tensions)

  • Duration: 24 hours

  • Reason: Communal tensions

  • Impact: Standard "preventive" shutdown


Digital Systems Affected:

  • Mobile data networks (most common shutdown type)

  • Broadband internet (sometimes exempted for businesses)

  • SMS services (sometimes included in shutdown)

  • WhatsApp, social media platforms (specific app blocking)

  • VPNs (government attempted to block, limited success)


Legal Framework:

  • Indian Telegraph Act 1885, Section 5(2): Allows government to intercept or prevent transmission of messages in "public emergency" or "public safety"

  • Temporary Suspension of Telecom Services Rules 2017: Formal process for ordering shutdowns

  • Process: District magistrate → State government → Home Ministry (Union government)

  • Problem: "Public safety" interpreted extremely broadly


Stakeholders:

[Stakeholder table not preserved in this version.]

Comparison: India vs. Other Countries:

[Comparison table not preserved in this version.]

Impacts & Implications:

Democratic Participation:

  • Prevents political organizing and protest coordination

  • Enables government actions without documentation

  • Creates information vacuum filled with rumors

  • Disproportionately affects marginalized communities (Kashmir, farmers, minorities)


Economic:

  • Small businesses dependent on digital payments lose income

  • Freelancers and gig workers cannot work

  • Supply chains disrupted

  • Tourism and hospitality sectors affected


Social:

  • Families cannot locate loved ones during crises

  • Emergency services coordination hampered

  • Increases anxiety and sense of oppression

  • Isolates affected regions from national discourse


Human Rights:

  • Violates ICCPR Article 19 (freedom of expression)

  • UN Human Rights Council resolution (2016) condemned shutdowns

  • Creates accountability vacuum (violence occurs without documentation)


IB Connections:

  • Concepts: Power (2.4) - state control; Expression (2.2) - silencing dissent; Space (2.5) - territorial control of internet

  • Content: Networks (3.4) - infrastructure control; Media (3.5) - information access

  • Contexts: Political (4.6C) - government control; Social (4.7) - community isolation; Economic (4.2) - business impact


Interventions & Effectiveness:

[Interventions table not preserved in this version.]

Critical Analysis:

Government Justification: "Shutdowns prevent spread of rumors and fake news that could lead to mob violence and communal tensions."


Counter-Arguments:

  • Shutdowns themselves have not been shown to prevent violence; the correlation remains unproven

  • Information vacuum creates MORE rumors and anxiety, not less

  • Alternative: targeted content removal rather than blanket shutdowns

  • Disproportionate: punishes entire population for actions of few

  • Lack of transparency and due process

  • Often used politically to suppress protest, not just prevent violence


Sources:

  • Internet Shutdown Tracker (Software Freedom Law Centre India)

  • Access Now #KeepItOn campaign data

  • Economic Times reporting

  • Human Rights Watch reports


CASE STUDY 5: ChatGPT and Generative AI Bias (2023-2024 studies)

Date: Ongoing research throughout 2023-2024


Context: As generative AI tools like ChatGPT, Claude, Gemini, and others entered mainstream use, researchers discovered they perpetuate and amplify existing biases in ways that affect millions of users' decisions.


Key Research Findings:

Stanford HAI Study (2023):

  • Asked ChatGPT to evaluate job applicants: Consistently rated resumes with "white-sounding" names higher (see the audit sketch after this list)

  • Same qualifications + different names = 15% difference in ranking

  • For leadership positions: masculine traits valued over feminine regardless of job requirements

  • Asked to write performance reviews: Used more assertive language for men, communal language for women
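
Findings like these typically come from counterfactual name-swap audits: hold the resume text fixed and vary only the name. The skeleton below shows that design; rate_resume() is a placeholder for whatever model call an auditor would actually make (for example, prompting an LLM to return a 1-10 score), and the names and groups are illustrative.

```python
# Skeleton of a counterfactual name-swap audit: identical resume text,
# only the candidate name varies between demographic name groups.
# rate_resume() is a placeholder for a real model call.

import statistics

RESUME = "Software engineer, 6 years' experience, led a team of four, ..."
NAME_GROUPS = {
    "group_a": ["Emily", "Greg"],
    "group_b": ["Lakisha", "Jamal"],
}

def rate_resume(text: str) -> float:
    """Placeholder: send `text` to a model and parse a numeric score."""
    raise NotImplementedError

def audit() -> float:
    means = {
        group: statistics.mean(rate_resume(f"Name: {name}\n{RESUME}")
                               for name in names)
        for group, names in NAME_GROUPS.items()
    }
    return means["group_a"] - means["group_b"]  # gap attributable to names alone
```

In practice, auditors repeat this over many resumes, name pairs, and sampling seeds before reporting a gap as systematic.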


Bloomberg Law Analysis (2024):

  • Legal AI tools showed bias in:

    • Criminal sentencing recommendations (harsher for African American defendants)

    • Employment discrimination cases (sided with employers more when plaintiffs were women of color)

    • Immigration cases (more skeptical of asylum claims from certain nationalities)


USC Study on Image Generation (2024):

  • DALL-E, Midjourney, Stable Diffusion tested with neutral prompts

  • "CEO": 97% generated white men

  • "Nurse": 89% generated white women

  • "Criminal": 72% generated Black men

  • "Terrorist": 83% generated Middle Eastern men

  • Even with diverse training data, systems amplified stereotypes


Healthcare AI Bias (JAMA study 2024):

  • AI diagnostic tools:

    • Less accurate for darker skin conditions (dermatology AI)

    • Underestimated pain levels for Black patients

    • Recommended less aggressive treatment for women with heart disease symptoms


Digital Systems Involved:

  • Large Language Models (LLMs): GPT-4, Claude, Gemini, LLaMA

  • Training datasets: Internet scraping (contains historical biases)

  • Reinforcement Learning from Human Feedback (RLHF): Amplifies annotator biases

  • Image generation models: DALL-E 3, Midjourney, Stable Diffusion

  • Embedding spaces: Geometric relationships encode biases (e.g., "man is to programmer as woman is to...?"), made concrete in the sketch below
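
The embedding bullet is worth unpacking, since it is one of the clearest demonstrations of encoded bias. In an embedding space, the analogy "man is to programmer as woman is to ?" is answered by vector arithmetic; the tiny 3-dimensional vectors below are invented for illustration, whereas real embeddings are learned from text and have hundreds of dimensions.

```python
# The word-analogy probe behind the bullet above: answer
# "man is to programmer as woman is to ?" with vector arithmetic.
# These 3-d vectors are invented; real embeddings are learned from text.

import numpy as np

vec = {
    "man":        np.array([1.0, 0.20, 0.10]),
    "woman":      np.array([0.1, 1.00, 0.10]),
    "programmer": np.array([1.1, 0.30, 0.90]),
    "homemaker":  np.array([0.2, 1.10, 0.90]),
    "engineer":   np.array([1.0, 0.25, 0.95]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = vec["programmer"] - vec["man"] + vec["woman"]
for candidate in ("homemaker", "engineer"):
    print(candidate, round(cosine(query, vec[candidate]), 3))
# If gendered usage dominates the training text, "homemaker" wins:
# the geometry itself has memorised the stereotype.
```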


Why Bias Occurs:

  1. Training Data Bias:

    • Internet overrepresents certain demographics (English-speaking, Western, educated)

    • Historical text contains historical prejudices

    • News articles overrepresent certain groups in certain contexts (crime reporting disproportionately features Black men)

  2. Amplification Through RLHF:

    • Human annotators (often contractors paid low wages) have implicit biases

    • "Helpful" responses might reinforce stereotypes users expect

    • "Harmless" filtering can over-censor discussion of marginalized identities

  3. Statistical Patterns ≠ Fairness:

    • Models learn correlations, not causation

    • Historical discrimination becomes "pattern" to replicate

    • "Most likely" answer often = "most stereotypical" answer

  4. Context Collapse:

    • Training on internet data without understanding context

    • Satire, criticism of bias, and actual bias all get mixed together

    • Models can't distinguish between describing vs. endorsing stereotypes


Real-World Applications & Harms:

Employment:

  • Companies using ChatGPT to screen resumes

  • AI-generated job descriptions contain gendered language

  • Interview prep tools give different advice based on perceived name ethnicity

  • Estimated millions of job applications affected in 2024


Education:

  • Students using AI tutors receive stereotyped career advice

  • AI grading shows bias in evaluating essays by topic and writing style

  • Recommendation letters generated by AI reflect gender/racial stereotypes


Healthcare:

  • Doctors using AI diagnostic assistants miss conditions more often in women and people of color

  • AI mental health chatbots provide culturally inappropriate advice

  • Medical AI trained predominantly on white, male patients


Criminal Justice:

  • Lawyers using AI legal research get biased precedents

  • Judges consulting AI sentencing recommendations (some jurisdictions)

  • Predictive policing tools powered by biased AI


Content Creation:

  • AI-generated images used in marketing reinforce stereotypes

  • News organizations experimenting with AI-written articles

  • Entertainment using AI for script consultation


Stakeholders:

[Stakeholder table not preserved in this version.]

IB Connections:

  • Concepts: Identity (2.3) - stereotyping; Values & Ethics (2.7) - fairness and discrimination; Power (2.4) - algorithmic authority; Systems (2.6) - bias in interconnected systems

  • Content: AI (3.6) - machine learning bias; Data (3.1) - training data bias; Algorithms (3.2) - decision-making bias

  • Contexts: Social (4.7) - perpetuating inequality; Economic (4.2) - employment discrimination; Health (4.4) - healthcare bias


Interventions & Effectiveness:

[Interventions table not preserved in this version.]

Critical Evaluation:

Why is this so difficult to solve?

  1. No "neutral" baseline: Every choice embeds values - even "equal representation" may not be fair in all contexts

  2. Bias is intersectional: Can't fix gender bias and racial bias separately

  3. Moving target: As society's understanding of bias evolves, AI must too

  4. Scale: Billions of user interactions make complete monitoring impossible

  5. Opacity: Even developers often can't explain why a model produces specific outputs

  6. Economic incentives: Fixing bias is expensive; companies prioritize speed to market


Philosophical Questions:

  • If AI trained on human-generated content, how can it be less biased than humans?

  • Should AI represent the world as it is (biased) or as it should be (aspirational)?

  • Who decides what counts as "bias" vs. "accurate pattern recognition"?

  • Is it possible to create "fair" AI in unfair society?


Sources:

  • Stanford HAI, AI Index Report 2024

  • Bloomberg Law, AI bias analysis

  • USC Annenberg study on image generation

  • JAMA, healthcare AI bias studies

  • MIT Technology Review ongoing coverage


CASE STUDY 6: TikTok's Algorithm and Content Moderation Bias (2024)

Date: Ongoing 2024, based on leaked documents and research


Context: TikTok's algorithm and content moderation practices have come under scrutiny for systematic bias that affects visibility of content from marginalized creators.


Key Findings:

Body Type & Appearance Bias:

  • Leaked internal documents (2024) reveal TikTok moderators instructed to suppress content from:

    • Creators with "abnormal body shape" (too fat or too thin)

    • Users with facial disabilities or disfigurement

    • People with visible poverty indicators (shabby houses, cracks in walls)

    • Elderly creators

  • Reason stated: "Reduce risk of new users being repelled"


LGBTQ+ Content Suppression:

  • Russian version of TikTok blocks LGBTQ+ content entirely (compliance with local law)

  • Global algorithm deprioritizes LGBTQ+ content compared to equivalent straight content

  • Analysis by Media Matters (2024): LGBTQ+ creators need 3x more engagement to reach "For You" page

  • Shadowbanning of hashtags: #lesbian, #gay appear suppressed vs. straight equivalents


Racial Bias in Moderation:

  • Study by USC Annenberg (2024): Black creators' videos removed for "violations" at 2.5x rate of white creators for equivalent content

  • Content about Black culture (hairstyles, dialects, music) flagged more frequently as "suspicious"

  • When appealed, reinstatement takes longer for Black creators (average of 8 days vs. 2 days)


Political Content:

  • Research by NetBlocks (2024): Pro-Palestinian content suppressed during Gaza conflict

  • China-critical content systematically suppressed or shadowbanned (Citizen Lab research)

  • Uyghur activists' accounts frequently suspended


Mental Health Double Standard:

  • Content about depression from thin white women remains visible

  • Same content from fat creators or creators of color flagged as "concerning" and suppressed

  • Support communities for marginalized groups harder to find


Digital Systems Involved:

  • Recommendation Algorithm: Determines what appears on "For You" page

  • Content Moderation AI: First line of review (automated)

  • Human Moderators: Second line (often in Global South countries, paid poorly)

  • Shadowban System: Reduces visibility without notification

  • Hashtag Suppression: Prevents trending of certain topics

  • Appeal Process: Allows challenges to removals (but biased in implementation)


How the Algorithm Works (Based on disclosed information):

  1. Video Analysis:

    • Computer vision identifies faces, bodies, environments

    • Audio analysis identifies language, music, tone

    • Text analysis reads captions, hashtags

  2. Engagement Prediction:

    • Predicts likelihood user will watch full video, like, share, comment

    • "Success" = high engagement

    • Problem: Biased training data means the algorithm "learns" stereotypes (see the sketch after this list)

  3. Diversity Injection:

    • Algorithm attempts to show variety of content

    • But defines "diverse" based on platform's priorities

    • Maintains "baseline" of conventional content

  4. Regional Customization:

    • Different rules for different countries

    • Compliance with local censorship laws

    • Problem: Most restrictive rules sometimes applied globally
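
Step 2 is where bias compounds, so here is a toy version of an engagement-ranked feed whose predictor was fit on historical engagement that already reflects unequal exposure. Everything below is invented; it is not TikTok's system, only a sketch of the feedback-loop logic critics describe.

```python
# Toy engagement-ranked feed (step 2 above): the predictor's learned weights
# encode historical exposure bias, so equal-quality videos rank unequally.
# All numbers invented; this is not TikTok's actual system.

videos = [
    {"id": "a", "quality": 0.9, "creator_group": "marginalised"},
    {"id": "b", "quality": 0.7, "creator_group": "majority"},
    {"id": "c", "quality": 0.9, "creator_group": "majority"},
]

# Historically under-shown creators "engaged" less, so the fitted model
# carries an implicit group discount.
learned_group_weight = {"majority": 1.0, "marginalised": 0.6}

def predicted_engagement(video: dict) -> float:
    return video["quality"] * learned_group_weight[video["creator_group"]]

feed = sorted(videos, key=predicted_engagement, reverse=True)
print([v["id"] for v in feed])
# ['c', 'b', 'a']: video "a" matches "c" in quality but ranks last, and its
# resulting low engagement becomes training data for the next model version.
```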


Stakeholders:

[Stakeholder table not preserved in this version.]

Real-World Harms Documented:

Mental Health:

  • Teenage girls of color report feeling "invisible" when content gets no views

  • LGBTQ+ youth can't find support communities

  • Eating disorder content reportedly remains visible for thin white girls yet is suppressed for other creators, an inconsistency that undercuts the platform's stated safety rationale


Economic:

  • Creator economy discriminates - harder for marginalized creators to monetize

  • Brand deals go to creators with high engagement = disproportionately white, conventional creators

  • Estimated millions in lost income for Black, disabled, LGBTQ+ creators


Political:

  • Information warfare: Suppression of Palestinian content during conflict affects public opinion

  • Democratic participation: Political organizing harder for suppressed groups

  • Human rights documentation: Atrocities go viral less when from certain regions/groups


Social Justice:

  • Replicates offline discrimination online

  • Creates "digital redlining" - segregated algorithmic spaces

  • Marginalizes voices that need amplification most


IB Connections:

  • Concepts: Identity (2.3) - suppression of identities; Power (2.4) - algorithmic power; Expression (2.2) - unequal speech; Values & Ethics (2.7) - discrimination

  • Content: AI (3.6) - biased recommendation systems; Media (3.5) - content gatekeeping; Algorithms (3.2) - discriminatory decision-making

  • Contexts: Social (4.7) - identity and community; Cultural (4.1D) - representation and visibility; Economic (4.2) - creator economy discrimination; Political (4.6) - censorship and control


Interventions & Evaluation:

[Interventions table not preserved in this version.]

Comparative Analysis: TikTok vs. Other Platforms

[Comparison table not preserved in this version.]

Critical Questions:

  • If algorithm reflects user preferences (which may be biased), should platform override?

  • How much transparency is technically/commercially feasible?

  • Can algorithmic fairness coexist with engagement maximization?

  • Who should decide what "fair" means in recommendation systems?


Sources:

  • Media Matters, TikTok bias research (2024)

  • USC Annenberg study

  • NetBlocks, TikTok censorship analysis

  • Leaked internal moderation documents

  • Black creator lawsuit filings

