3.6 Artificial Intelligence: Digital Society Content Deep Dive
- lukewatsonteach
- Apr 9
- 25 min read
This is a three-level study resource for IB DP Digital Society topic 3.6 Artificial Intelligence, designed to help students prepare for their exams.
Level 1: Quick Study (Time-Poor Students)
Comprehensive Glossary of Critical AI Terms with Definitions and Examples
Types of AI
Narrow/Weak AI: AI systems designed and trained for a specific task without general cognitive abilities.
Example 1: Gmail's smart reply feature that suggests short responses to emails based on content analysis
Example 2: Chess programs like Stockfish that can defeat grandmasters but cannot perform any other tasks
General/Strong AI: Hypothetical AI with human-like cognitive abilities across multiple domains.
Example 1: Research projects like DeepMind's Gato, which can perform over 600 different tasks using a single neural network, though it remains far from human-level generality
Example 2: OpenAI's GPT-4, which demonstrates broad abilities across language, reasoning, and image interpretation, but is not considered true general AI
Superintelligent AI: Theoretical AI that surpasses human intelligence across all domains.
Example 1: HAL 9000 from "2001: A Space Odyssey," a fictional AI that demonstrates cognitive abilities far beyond humans
Example 2: The concept of "intelligence explosion" proposed by I.J. Good, where AI systems capable of improving themselves create increasingly superior versions
Types of Machine Learning
Supervised Learning: Training algorithms on labeled data to make predictions or categorizations.
Example 1: Credit card fraud detection systems trained on millions of labeled transactions to identify suspicious activities
Example 2: Automatic image captioning systems trained on pairs of images and human-written descriptions
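For students curious how "learning from labelled data" looks in practice, here is a minimal sketch: a one-nearest-neighbour classifier. The data points and labels are invented for illustration, but the core idea is real, a prediction for a new input is made by consulting labelled training examples.

```python
# Minimal supervised learning sketch: a 1-nearest-neighbour classifier.
# Each training example is a labelled (features, label) pair; prediction
# assigns a new point the label of its closest training example.
import math

training_data = [
    ((1.0, 1.0), "spam"),
    ((1.2, 0.9), "spam"),
    ((4.0, 4.2), "not spam"),
    ((3.8, 4.0), "not spam"),
]

def predict(point):
    # Find the labelled example closest to the new, unlabelled point.
    def distance(example):
        features, _ = example
        return math.dist(features, point)
    _, label = min(training_data, key=distance)
    return label

print(predict((1.1, 1.0)))   # "spam": close to the spam cluster
print(predict((4.1, 3.9)))   # "not spam": close to the other cluster
```

Real systems like fraud detectors use far more sophisticated models, but the principle is the same: labelled history drives predictions about new cases.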
Unsupervised Learning: Algorithms that identify patterns in unlabeled data without explicit instruction.
Example 1: Retail store algorithms that identify purchasing patterns to create product placement strategies without predefined categories
Example 2: Medical research tools that cluster patient data to discover previously unknown disease subtypes
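The clustering idea behind these examples can be sketched with a tiny pure-Python k-means. The numbers are invented, but notice that the algorithm is never told which group any point belongs to; it discovers the two groups itself.

```python
# Minimal unsupervised learning sketch: k-means clustering on unlabelled
# 1-D data. Points are grouped around k centroids without any labels.
data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]

def kmeans(points, k=2, steps=10):
    centroids = points[:k]  # naive initialisation: first k points
    for _ in range(steps):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans(data)
print(sorted(round(c, 1) for c in centroids))  # two discovered group centres
```

A retailer running this on purchase data would get back customer segments it never defined in advance.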
Reinforcement Learning: Training algorithms through trial and error using reward/penalty systems.
Example 1: Boston Dynamics robots that learn to navigate complex terrain through trial and error with rewards for successful movement
Example 2: OpenAI's DOTA 2 bots that learned to defeat professional players by playing millions of games against themselves
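The trial-and-error loop at the heart of reinforcement learning can be illustrated with an epsilon-greedy "bandit" agent. The payout probabilities below are invented; the point is that the agent starts knowing nothing and learns which action earns more reward purely from feedback.

```python
# Minimal reinforcement learning sketch: an epsilon-greedy agent learns,
# by trial and error, which of two slot-machine "arms" pays out more.
import random

random.seed(0)
true_payouts = [0.3, 0.7]   # arm 1 rewards more often (unknown to the agent)
estimates = [0.0, 0.0]      # agent's learned value for each arm
counts = [0, 0]

for step in range(2000):
    # Explore a random arm 10% of the time, otherwise exploit the best known.
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = max(range(2), key=lambda a: estimates[a])
    reward = 1 if random.random() < true_payouts[arm] else 0
    counts[arm] += 1
    # Incrementally update the running average reward for that arm.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(max(range(2), key=lambda a: estimates[a]))  # agent's preferred arm
```

Systems like OpenAI's DOTA 2 bots scale this same explore/exploit idea up to vastly larger action spaces over millions of self-play games.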
Artificial Neural Networks
Convolutional Neural Networks (CNNs): Neural networks specialized for visual pattern recognition and image processing.
Example 1: Google Lens, which identifies objects in photos taken by smartphone cameras
Example 2: Dermatology AI systems that can identify potential skin cancers from photographs with accuracy comparable to specialists
Recurrent Neural Networks (RNNs): Neural networks designed to recognize patterns in sequential data.
Example 1: Smart reply systems in messaging apps that suggest contextually appropriate responses
Example 2: Weather prediction models that incorporate sequential time-series data to forecast conditions
Generative Adversarial Networks (GANs): Neural network architecture in which two networks compete, one generating synthetic content and the other judging how realistic it is.
Example 1: ThisPersonDoesNotExist.com, which generates photorealistic images of non-existent people
Example 2: Style transfer applications that can transform photographs to mimic the style of famous artists
Evolution of AI
Symbolic AI: Early AI approach based on explicit rules and logic programming.
Example 1: Tax preparation software that follows explicit rules to calculate deductions
Example 2: Flight booking systems that use logical rules to determine optimal routes and connections
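Symbolic AI is easy to illustrate because its knowledge is written down explicitly. The sketch below uses hypothetical tax-style rules (echoing the example above) with simple forward chaining: rules fire whenever their conditions are met, and every conclusion can be traced back to a rule.

```python
# Minimal symbolic AI sketch: a hand-written rule base with forward
# chaining. Knowledge is explicit, and the reasoning is fully traceable.
rules = [
    ({"has_dependents"}, "eligible_for_family_deduction"),
    ({"income_below_threshold", "eligible_for_family_deduction"},
     "apply_tax_credit"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire any rule whose conditions are all satisfied.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = forward_chain({"has_dependents", "income_below_threshold"})
print("apply_tax_credit" in result)  # True: derived via two chained rules
```

Contrast this transparency with the neural approaches below, where the "rules" are implicit in millions of learned weights.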
Connectionism: AI approach using interconnected networks inspired by human neural systems.
Example 1: Speech recognition systems like those in smartphones that learn patterns from millions of voice samples
Example 2: Recommendation engines that learn connection patterns between user behaviors and content preferences
Hybrid Systems: AI that combines multiple approaches for more flexible problem-solving.
Example 1: Autonomous vehicles that combine rule-based systems for traffic laws with neural networks for object recognition
Example 2: Modern chess engines that combine tree search algorithms with neural network evaluation functions
AI Dilemmas
Algorithmic Bias: Systematic errors in AI systems that create unfair outcomes for certain groups.
Example 1: COMPAS criminal risk assessment tool found to incorrectly flag Black defendants as high risk at nearly twice the rate of white defendants with similar profiles
Example 2: Gender Shades research that found commercial facial recognition systems had error rates of up to 34.7% for darker-skinned women, compared with under 1% for lighter-skinned men
AI Accountability Gap: The challenge of determining responsibility when AI systems cause harm.
Example 1: Uber's self-driving car fatality in Arizona in 2018, which raised questions about responsibility between the company, the safety driver, and the software developers
Example 2: Automated welfare eligibility systems that make decisions without clear appeal processes for affected individuals
Black Box Problem: The inability to explain or interpret how complex AI systems reach decisions.
Example 1: AI-powered loan approval systems where applicants cannot understand why they were denied
Example 2: Medical diagnosis systems that recommend treatments without providing the reasoning that doctors can verify
Regulatory Lag: The gap between rapidly advancing AI technology and the development of appropriate governance.
Example 1: Clearview AI collecting billions of facial images from social media before comprehensive regulations were in place
Example 2: Deepfake technology advancing rapidly while legal frameworks addressing synthetic media misuse are still developing
Technological Unemployment: Job loss due to automation and AI replacing human roles.
Example 1: Automated checkout systems in retail stores reducing cashier positions
Example 2: Autonomous trucking technology that threatens millions of professional driving jobs
AI Augmentation: Technology designed to enhance rather than replace human capabilities.
Example 1: GitHub Copilot assisting programmers by suggesting code snippets based on context
Example 2: AI-enhanced microscopes that highlight potential cancer cells to help pathologists make more accurate diagnoses
Technical Terms
Algorithm: A step-by-step procedure or formula for solving a problem or accomplishing a task, forming the basis of all AI systems.
Example 1: PageRank algorithm that determines Google search result rankings
Example 2: Dijkstra's algorithm used in GPS navigation to find the shortest path between locations
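Since Dijkstra's algorithm is cited above, here is a compact version of it on a small, hypothetical road network: repeatedly expand the unvisited node with the smallest known distance, using a priority queue.

```python
# Dijkstra's algorithm: repeatedly settle the node with the smallest
# known distance, using a heap-based priority queue.
import heapq

graph = {  # hypothetical road network: node -> [(neighbour, distance)]
    "A": [("B", 5), ("C", 2)],
    "B": [("D", 1)],
    "C": [("B", 1), ("D", 7)],
    "D": [],
}

def shortest_distance(start, goal):
    distances = {node: float("inf") for node in graph}
    distances[start] = 0
    queue = [(0, start)]
    while queue:
        dist, node = heapq.heappop(queue)
        if dist > distances[node]:
            continue  # stale queue entry; a shorter path was already found
        for neighbour, weight in graph[node]:
            new_dist = dist + weight
            if new_dist < distances[neighbour]:
                distances[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return distances[goal]

print(shortest_distance("A", "D"))  # 4: A -> C -> B -> D (2 + 1 + 1)
```

A GPS system runs essentially this procedure (with optimisations) over a graph of millions of road segments.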
Dataset: Collection of information used to train AI systems, with the quality and diversity directly affecting system performance.
Example 1: ImageNet, a collection of over 14 million annotated images used to train visual recognition systems
Example 2: The Common Crawl dataset containing petabytes of web page data used to train large language models
Deep Learning: A subset of machine learning using neural networks with multiple layers, enabling more complex pattern recognition.
Example 1: DeepMind's AlphaFold that predicts protein structures with unprecedented accuracy
Example 2: Tesla's Autopilot system that uses deep learning to interpret camera inputs for driver assistance
Machine Learning (ML): AI approach where systems learn from data rather than being explicitly programmed.
Example 1: Spotify's music recommendation system that learns from user listening patterns
Example 2: Gmail's spam filter that continuously improves based on user feedback
Neural Network: Computing system inspired by biological neural networks, forming the foundation of modern AI.
Example 1: The neural networks in Apple's Face ID system that authenticate users by recognising facial features
Example 2: Google Translate's neural machine translation system that considers entire sentences for more natural translations
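To make "computing system inspired by biological neurons" concrete, here is a single artificial neuron. The weights below are made-up placeholder values; in a real network they would be learned during training, and millions of such neurons would be wired together in layers.

```python
# Minimal neural network sketch: one artificial neuron. It multiplies
# each input by a weight, adds a bias, and squashes the total through
# an activation function.
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))   # maps any number into (0, 1)

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Hypothetical weights: real networks learn these values from data.
output = neuron(inputs=[0.5, 0.8], weights=[1.2, -0.4], bias=0.1)
print(round(output, 3))  # 0.594
```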
Training: Process of teaching an AI system by exposing it to examples and adjusting its parameters to improve performance.
Example 1: Amazon's process of training Alexa on millions of voice commands to improve speech recognition
Example 2: Training radiology AI systems on thousands of labeled medical images to identify conditions
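"Adjusting parameters to improve performance" can be shown end to end in a few lines. This toy training loop uses gradient descent to fit one parameter, w, so that the model y = w * x matches labelled examples generated by the rule y = 2x (the data and learning rate are invented for the demo).

```python
# Minimal training sketch: gradient descent adjusts a single parameter w
# so the model y = w * x fits examples whose true rule is y = 2x.
examples = [(1, 2), (2, 4), (3, 6)]  # (input, correct output) pairs
w = 0.0            # the model's one trainable parameter
learning_rate = 0.05

for epoch in range(200):
    for x, y_true in examples:
        y_pred = w * x
        error = y_pred - y_true
        # Nudge w in the direction that reduces the squared error.
        w -= learning_rate * error * x

print(round(w, 3))  # 2.0: training has recovered the underlying rule
```

Training GPT-4 or Alexa is the same loop in principle, repeated over billions of parameters and examples.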
Inference: Process where a trained AI model makes predictions or decisions based on new inputs.
Example 1: Smart doorbell cameras identifying package deliveries in real-time using trained models
Example 2: Sentiment analysis tools analyzing customer reviews to determine positive or negative opinions
Parameters: Internal variables in AI models that are adjusted during training to improve performance.
Example 1: GPT-4's parameters that are adjusted during training to improve language generation
Example 2: Instagram's algorithm parameters that determine what content appears in a user's feed
Overfitting: When an AI model learns patterns specific to training data but fails to generalize to new data.
Example 1: A facial recognition system that works perfectly in lab conditions but fails in real-world lighting
Example 2: Medical diagnosis systems that perform well on training hospital data but poorly when deployed in different facilities
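An extreme caricature of overfitting is a model that simply memorises its training data. The toy data below is invented, but the pattern it shows is the real danger: perfect training accuracy, useless on anything new, while a simpler rule generalises.

```python
# Overfitting sketch: a "memoriser" scores perfectly on its training data
# but cannot generalise, while a simpler rule transfers to unseen inputs.
train = [(1, "low"), (2, "low"), (8, "high"), (9, "high")]
test = [(3, "low"), (7, "high")]   # unseen inputs

lookup = dict(train)               # overfit model: memorise exact inputs

def memoriser(x):
    return lookup.get(x, "unknown")    # no answer for anything unseen

def simple_rule(x):
    return "low" if x < 5 else "high"  # generalises beyond the examples

train_acc = sum(memoriser(x) == y for x, y in train) / len(train)
test_acc = sum(memoriser(x) == y for x, y in test) / len(test)
print(train_acc, test_acc)   # 1.0 on training data, 0.0 on new data
rule_acc = sum(simple_rule(x) == y for x, y in test) / len(test)
print(rule_acc)              # 1.0: the simpler model generalises
```

This is why the hospital example above matters: high accuracy on the training hospital's data says little about performance at a new facility.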
Feature Extraction: Process of identifying relevant characteristics in data for AI analysis.
Example 1: Shazam extracting audio fingerprints from songs for music identification
Example 2: LinkedIn extracting key skills and experiences from resumes to match with job opportunities
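A simple, classic form of feature extraction is turning raw text into word counts ("bag of words"). The review text below is made up, but this kind of preprocessing is a genuine first step in many NLP pipelines before any learning happens.

```python
# Feature extraction sketch: converting raw text into word-count
# features that a downstream model could learn from.
from collections import Counter
import re

def extract_features(text):
    # Lowercase, strip punctuation, and count word frequencies.
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

review = "Great product, great price. Shipping was slow."
features = extract_features(review)
print(features["great"])   # 2: the word appears twice
print(features["slow"])    # 1
```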
Practical Applications
Computer Vision: AI technology that enables machines to interpret and understand visual information from the world.
Example 1: Amazon Go stores using computer vision to track items taken from shelves for checkout-free shopping
Example 2: Agricultural drones that monitor crop health by analyzing field imagery
Natural Language Processing (NLP): AI technology focused on enabling computers to understand, interpret, and generate human language.
Example 1: Customer service chatbots that can understand and respond to customer inquiries
Example 2: Language translation apps that can translate speech in real-time between multiple languages
Predictive Analytics: Use of data, statistical algorithms, and machine learning techniques to identify the likelihood of future outcomes.
Example 1: Weather forecasting systems that predict storm patterns and intensity
Example 2: Retail inventory management systems that predict demand to optimize stock levels
Robotics: Field that combines AI with physical systems to create machines that can interact with the physical world.
Example 1: Warehouse robots that autonomously pick and pack orders
Example 2: Surgical robots that assist surgeons with precise movements during operations
Autonomous Systems: Self-governing machines capable of operating without human intervention based on AI decision-making.
Example 1: Autonomous vacuum cleaners that navigate homes and clean without supervision
Example 2: Self-driving delivery vehicles that navigate city streets to deliver packages
Expert Systems: AI programs designed to replicate the decision-making abilities of human experts in specific domains.
Example 1: Medical diagnostic systems that suggest potential diagnoses based on symptoms
Example 2: Financial advisory systems that recommend investment strategies based on client profiles
Pattern Recognition: Technology that recognizes patterns and regularities in data, fundamental to many AI applications.
Example 1: Handwriting recognition software used in postal sorting systems
Example 2: Music streaming services that identify patterns in listening habits to recommend new artists
Decision Support Systems: AI systems that assist human decision-makers by analyzing complex data and suggesting potential actions.
Example 1: Healthcare systems that help doctors select optimal treatment plans based on patient data
Example 2: Urban planning tools that model traffic patterns to optimize road design and traffic light timing
Basic AI Comparison Charts

2-Mark AO1 Practice Exam Questions
Define the term "Narrow AI." (2)
Outline two characteristics of supervised learning. (2)
State two examples of Generative Adversarial Networks in everyday applications. (2)
Identify two differences between Symbolic AI and Connectionism. (2)
Describe what a Convolutional Neural Network is used for. (2)
List two potential consequences of algorithmic bias. (2)
Define the black box problem in AI systems. (2)
State two examples of reinforcement learning applications. (2)
Identify two characteristics of technological unemployment. (2)
Outline the concept of AI augmentation. (2)
Describe what overfitting means in machine learning. (2)
List two applications of Recurrent Neural Networks. (2)
Define the term "feature extraction" in AI systems. (2)
State two examples of AI accountability gaps. (2)
Identify two characteristics of superintelligent AI. (2)
Outline the concept of regulatory lag in AI governance. (2)
Describe the purpose of neural network parameters. (2)
List two applications of computer vision technology. (2)
Define what an autonomous system is in AI. (2)
Identify two examples of decision support systems in different industries. (2)
Level 2: Medium Study Time
AI Case Studies with 4-Mark AO2 Questions, AI Timeline & Characteristics, Advantages & Disadvantages
Case Study 1: Amazon's AI Recruitment Tool Bias
In 2018, Amazon abandoned its AI recruitment tool after discovering it discriminated against women. The system was trained using resumes submitted to the company over a 10-year period, most of which came from men, reflecting the male dominance in the tech industry. The AI learned to penalize resumes that included terms associated with women, such as "women's chess club captain" and even downgraded graduates from all-women's colleges. Amazon's engineers attempted to edit the algorithm to be neutral toward these terms, but ultimately couldn't guarantee the system wouldn't find other ways to discriminate, leading to the project's termination.
4-Mark Question: Explain two ways algorithmic bias in AI recruitment tools could impact workforce diversity in technology companies. (4)
Case Study 2: AlphaGo vs. Lee Sedol
In March 2016, DeepMind's AlphaGo defeated world champion Lee Sedol in the ancient board game Go, winning 4 of the 5 games in the match. This achievement was considered a milestone in AI development, as Go has significantly more possible positions than chess and requires intuition along with calculation. AlphaGo combined neural networks with tree search techniques, initially learning from human expert games (supervised learning) before improving through self-play (reinforcement learning). Most notably, in Game 2, AlphaGo made a highly unconventional move (Move 37) that commentators initially thought was a mistake but proved to be brilliant, demonstrating that AI could develop novel strategies beyond human conventions.
4-Mark Question: Analyze how AlphaGo's victory demonstrates both the capabilities and limitations of reinforcement learning in AI. (4)
Case Study 3: COMPAS Recidivism Algorithm
The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is an algorithm used in the U.S. justice system to predict the likelihood that a defendant will reoffend. In 2016, ProPublica investigated the system and found that it was twice as likely to incorrectly flag Black defendants as high risk compared to white defendants, while white defendants were more likely to be incorrectly flagged as low risk. The tool's developer, Northpointe (now Equivant), disputed these findings, stating their tool was equally accurate across racial groups in terms of overall accuracy. The case highlighted how different mathematical definitions of "fairness" can yield contradictory results, raising profound questions about how justice systems should implement AI tools.
4-Mark Question: Discuss two ethical challenges that arise when predictive algorithms are used in criminal justice systems. (4)
Case Study 4: Boston Dynamics' Robot Evolution
Boston Dynamics has developed a series of increasingly sophisticated robots over the past two decades. Their evolution shows the rapid advancement of physical AI systems: from BigDog (2005), a quadruped robot that could navigate difficult terrain while carrying heavy loads; to Atlas (2013), a humanoid robot that can run, jump, and perform backflips; to Spot (2020), a commercially available quadruped robot being used in industrial settings for tasks like remote inspection. These robots combine various AI approaches, including reinforcement learning for movement and computer vision for navigation. While initially developed with military funding, Boston Dynamics (now owned by Hyundai) has shifted toward commercial and industrial applications, sparking discussions about how autonomous robots may transform workplaces.
4-Mark Question: Examine two potential impacts of increasingly autonomous robots like those from Boston Dynamics on the future of industrial work. (4)
Case Study 5: Cambridge Analytica Data Scandal
In 2018, it was revealed that Cambridge Analytica had harvested personal data from millions of Facebook users without consent. The company used this data to build psychological profiles of voters, which were then used to target political advertisements during the 2016 US presidential election and the UK Brexit referendum. The company employed machine learning algorithms to analyze user data and predict personality traits, political leanings, and susceptibility to different types of messaging. The scandal highlighted how AI systems can leverage vast datasets to influence decision-making at scale, raising concerns about privacy, consent, and the potential for algorithmic manipulation of democratic processes.
4-Mark Question: Explain two ways in which AI-powered data analytics could threaten democratic processes. (4)
Case Study 6: IBM Watson for Oncology
IBM Watson for Oncology was launched with the promise of revolutionising cancer treatment by using AI to recommend personalised treatment plans for patients. The system was trained using case data and treatment guidelines from Memorial Sloan Kettering Cancer Center. However, by 2018, reports emerged of "multiple examples of unsafe and incorrect treatment recommendations" made by the system. Investigation revealed several problems: the training data was not diverse enough to be globally applicable, there was a lack of transparency in how recommendations were generated, and doctors found it difficult to understand or verify Watson's reasoning. IBM subsequently scaled back its healthcare ambitions, illustrating the challenges of deploying AI in high-stakes medical environments.
4-Mark Question: Analyse two implications of the black box problem when AI systems are used in healthcare diagnostics. (4)
Simple Timeline of AI Development
1950s-1960s: Birth of AI
1950: Alan Turing proposes the "Turing Test"
1956: Dartmouth Conference coins the term "Artificial Intelligence"
1958: Frank Rosenblatt invents the Perceptron, an early neural network
1970s-1980s: AI Winter and Expert Systems
1970s: Development of expert systems like MYCIN
1980s: Rise of Symbolic AI and rule-based systems
1990s-2000s: Revival and Machine Learning
1997: IBM's Deep Blue defeats chess champion Garry Kasparov
Early 2000s: Machine learning becomes more practical with increased computing power
2010s-Present: Deep Learning Revolution
2012: AlexNet breakthrough in image recognition using deep learning
2016: AlphaGo defeats world champion Lee Sedol
2020s: Large language models and multimodal AI systems emerge
Your task: expand this simple timeline, especially from 2020 to the present.
Artificial Intelligence (AI) Key Terminology: Characteristics and Advantages/Disadvantages
3.6A Types of AI
Narrow/Weak AI
Characteristics:
Performs specific, predefined tasks
Cannot transfer learning from one domain to another
Has no self-awareness or contextual understanding
Requires human programming and supervision
Operates within strict parameters
Advantages:
Highly efficient for specialised tasks
More easily controlled and understood by designers
Lower computational requirements than broader AI systems
Easier to implement for specific business needs
Presents fewer ethical concerns than general AI
Disadvantages:
Limited to its programmed domain
Lacks adaptability to new situations
Cannot understand context beyond its programming
Requires separate systems for different tasks
Limited creative problem-solving abilities
General/Strong AI
Characteristics:
Capable of performing any intellectual task a human can
Transfers learning across different domains
Demonstrates common sense reasoning
Can understand context and nuance
Adapts to new situations without specific programming
Advantages:
Would revolutionize problem-solving across all fields
Could tackle complex, multifaceted challenges
Would possess creative thinking capabilities
Potentially more efficient than many narrow AI systems combined
Could understand human needs more comprehensively
Disadvantages:
Currently theoretical and not fully realized
Raises profound ethical and existential questions
Could be unpredictable or difficult to control
Presents significant security and safety concerns
Could lead to human dependency on AI systems
Superintelligent AI
Characteristics:
Intelligence far surpassing human capabilities
Potential for recursive self-improvement
Ability to analyze and understand concepts beyond human comprehension
Potential autonomy beyond human control
Operates across all intellectual domains simultaneously
Advantages:
Could solve humanity's most complex problems
Might advance science and technology exponentially
Could potentially discover new knowledge beyond human discovery capabilities
Might optimize resource allocation globally
Could potentially help address existential threats to humanity
Disadvantages:
Presents profound control and alignment problems
Could pose existential risk if misaligned with human values
Unpredictable consequences for society and power structures
Might render human intellectual contribution obsolete
Philosophical implications for human purpose and meaning
3.6B Types and Uses of Machine Learning
Supervised Learning
Characteristics:
Trained using labelled datasets
Learns from known input-output pairs
Requires human annotation of training data
Makes predictions based on historical patterns
Performance can be measured against known correct answers
Advantages:
Produces highly accurate models for specific tasks
Results are measurable and can be validated
Well-suited for classification and regression problems
Clear training methodology
Easier to understand why certain predictions are made
Disadvantages:
Requires large amounts of labelled data
Human biases can be encoded in training data
Limited to patterns present in training data
Can be expensive and time-consuming to create labeled datasets
May struggle with novel situations not represented in training
Unsupervised Learning
Characteristics:
Works with unlabeled data
Identifies inherent structures and patterns
Discovers relationships without prior knowledge
Self-organises data into meaningful clusters
No "correct answers" to validate against
Advantages:
Doesn't require expensive labelled datasets
Can discover unexpected patterns and relationships
Useful for exploratory data analysis
Can work with diverse data sources
Adaptable to a wider range of problems
Disadvantages:
Results can be difficult to validate objectively
Patterns discovered may not be relevant to intended purpose
Often requires human interpretation of results
May identify spurious correlations
Generally less precise than supervised learning for specific tasks
Reinforcement Learning
Characteristics:
Learns through trial and error
Uses reward/penalty feedback systems
Involves an agent interacting with an environment
Optimises for maximum cumulative reward
Balances exploration of new strategies with exploitation of known good strategies
Advantages:
Can learn complex behaviours with minimal human guidance
Adapts to changing environments
Develops novel solutions humans might not consider
Well-suited for sequential decision-making problems
Can optimise for long-term rather than immediate rewards
Disadvantages:
Often requires massive computational resources
Can be unstable during training
May learn unexpected or undesirable behaviours if reward function is poorly designed
Difficult to apply in scenarios where mistakes are costly
Challenging to design appropriate reward structures
3.6C Uses of Artificial Neural Networks
Convolutional Neural Networks (CNNs)
Characteristics:
Specialised for grid-like data (images, video)
Uses convolutional filters to detect spatial features
Hierarchical feature extraction
Shared weights across the spatial dimension
Preserves spatial relationships between pixels
Advantages:
Excellent performance on visual recognition tasks
Reduces computational requirements through parameter sharing
Built-in translation invariance (recognises objects regardless of position)
Effective feature extraction without manual engineering
Scales well with data and computational resources
Disadvantages:
Requires large training datasets
Computationally intensive to train
Can be prone to overfitting without proper regularisation
Limited interpretability of intermediate features
Struggles with understanding context beyond visual patterns
Recurrent Neural Networks (RNNs)
Characteristics:
Processes sequential data using internal memory
Maintains state information between inputs
Can handle variable-length input sequences
Information cycles through feedback loops
Designed to find patterns over time or sequence
Advantages:
Well-suited for time series, text, and speech data
Can capture long-range dependencies in sequences
Remembers context from earlier in a sequence
Flexible architecture for many sequence-based tasks
Can generate new sequential content
Disadvantages:
Prone to vanishing/exploding gradient problems
Difficulty learning very long-range dependencies
Computationally expensive for long sequences
Training can be unstable
Now often outperformed by transformer architectures on many tasks
Generative Adversarial Networks (GANs)
Characteristics:
Consists of generator and discriminator networks in competition
Generator creates content, discriminator evaluates authenticity
Unsupervised learning approach
Trains through adversarial process
Learns data distributions implicitly
Advantages:
Can generate highly realistic synthetic data
Learns complex data distributions without explicit modelling
Creates novel content rather than just classifying
Continues improving through competitive training
Useful for data augmentation and simulation
Disadvantages:
Notoriously difficult to train and tune
Can suffer from mode collapse (generating limited varieties)
Training instability and convergence issues
Difficult to evaluate objectively
Can be used to create misleading deepfakes
3.6D Evolution of AI
Symbolic AI
Characteristics:
Based on explicit rules and logic
Uses symbols to represent knowledge
Relies on human-encoded knowledge
Transparent reasoning process
Strong logical inference capabilities
Advantages:
Highly interpretable decision-making
Can explain its reasoning process
Encodes expert knowledge directly
Works well with limited data
Precise control over behaviour
Disadvantages:
Struggles with ambiguity and uncertainty
Requires extensive manual knowledge engineering
Difficulty handling exceptions to rules
Cannot easily learn from experience
Inflexible in novel situations
Connectionism
Characteristics:
Inspired by brain neural structures
Learns from data rather than explicit rules
Distributed representation of knowledge
Parallel processing of information
Emergent intelligence from simple connected units
Advantages:
Learns patterns directly from data
Better handles noisy or incomplete information
Can discover patterns humans might miss
Generalizes well to similar but unseen examples
Adaptable to changing environments
Disadvantages:
"Black box" nature limits interpretability
Can require massive amounts of training data
May encode biases present in training data
Difficult to incorporate prior knowledge
Challenging to debug when errors occur
Hybrid Systems
Characteristics:
Combines multiple AI approaches
Integrates symbolic reasoning with neural networks
Uses different techniques for different subtasks
Balances knowledge-based and data-driven components
Often employs modular architecture
Advantages:
Leverages strengths of multiple approaches
Can be both interpretable and adaptable
Often performs better than single-approach systems
More robust across varied problems
Can incorporate both human expertise and learned patterns
Disadvantages:
Increased system complexity
More difficult to design and implement
May have conflicting internal representations
Challenging to integrate different learning paradigms
Often requires more careful engineering
Level 3: AI Dilemmas, Key Theories with AO3 Exam Questions
AI Dilemmas with AO3 Questions
Dilemma 1: Automation and Employment
The rapid advancement of AI and automation technologies is fundamentally transforming labor markets worldwide, challenging traditional employment patterns that have existed for generations. While technological revolutions throughout history have always displaced certain jobs, the current AI revolution is unique in its pace, scope, and ability to affect both manual and cognitive labor simultaneously. McKinsey Global Institute estimates that between 400 and 800 million jobs could be automated by 2030, representing up to 30% of the global workforce. This transition raises profound questions about economic systems, income distribution, education, and the very nature of work in society. Unlike previous technological shifts, AI may not create sufficient new jobs to offset those eliminated, potentially leading to structural unemployment in certain sectors and deepening existing socioeconomic inequalities.
Viewpoint A: AI automation will create technological unemployment, displacing millions of workers across sectors. The unprecedented speed and breadth of AI-driven automation will outpace our ability to retrain workers or create new jobs, leading to permanent structural unemployment. This will disproportionately impact vulnerable populations with less education and fewer resources for retraining.
Viewpoint B: AI will create new job categories and augment human capabilities, similar to previous technological revolutions. Historical evidence shows that technological revolutions initially displace jobs but ultimately create more employment in new sectors. AI will eliminate routine tasks while enhancing human creativity and interpersonal skills, creating entirely new industries and work opportunities.
Viewpoint C: The impact will be uneven, requiring targeted retraining programs and potential universal basic income policies. AI will transform rather than eliminate work, but this transition requires proactive policies including education reform, mid-career retraining, and potentially new social safety nets like universal basic income to ensure equitable distribution of AI-created prosperity.
The automation dilemma ultimately represents a pivotal moment in human history where technology's relationship to labor must be fundamentally reconsidered. The outcome will depend not only on technological capabilities but on policy choices, business decisions, and societal values. Whatever path emerges will redefine not just employment but how we conceptualize work, purpose, and economic participation in the 21st century.
8-Mark Question: To what extent is the fear of widespread technological unemployment due to AI justified? Evaluate multiple perspectives in your response. (8)
Dilemma 2: Regulation of AI Development
As artificial intelligence becomes increasingly powerful and integrated into critical systems, the question of how to regulate its development has emerged as one of the most consequential policy challenges of our time. AI regulation must balance competing interests: fostering innovation that can solve pressing global problems while preventing harmful applications and ensuring systems remain beneficial to humanity. The issue is complicated by the global nature of AI development, where regulatory fragmentation between nations could lead to "regulatory arbitrage" with companies relocating to jurisdictions with minimal oversight. Moreover, the technical complexity of AI systems makes effective governance particularly challenging, as regulators may lack the expertise to evaluate increasingly sophisticated technologies, while the rapid pace of innovation can quickly render regulations obsolete.
Viewpoint A: Strict regulation is needed to prevent harmful AI applications and ensure safety. The unprecedented risks posed by advanced AI—from autonomous weapons to social manipulation at scale—require robust governmental oversight. Proactive regulation should include mandatory safety testing, transparency requirements, liability frameworks, and possibly development moratoria for particularly dangerous capabilities.
Viewpoint B: Excessive regulation will hamper innovation and beneficial developments. Heavy-handed regulation risks stifling the development of technologies that could address humanity's greatest challenges, from climate change to disease. The complexity and unpredictability of AI development means regulators cannot foresee all consequences, making flexible, industry-led standards more appropriate than rigid legal frameworks.
Viewpoint C: A balanced, adaptive regulatory framework focusing on high-risk applications is optimal. Regulation should be proportionate to risk, with stricter oversight for AI systems deployed in critical domains like healthcare or criminal justice. "Regulatory sandboxes" can test governance approaches, and international coordination can prevent regulatory fragmentation while still allowing for cultural and regional differences in AI governance.
The resolution of this dilemma will significantly shape AI's development trajectory and its impact on society. Regulatory choices made today will influence whether AI primarily amplifies existing power structures or democratizes access to technology's benefits. They will also determine whether AI systems uphold or undermine fundamental human values like privacy, autonomy, and dignity, setting precedents that may endure for generations.
8-Mark Question: Evaluate the claim that self-regulation by the AI industry is sufficient to address ethical concerns about AI development. (8)
Dilemma 3: Transparency vs. Performance
Modern artificial intelligence systems, particularly deep learning models, present a fundamental tension between performance and transparency. As AI systems have grown more powerful, they have also become more opaque, with the most capable systems often functioning as "black boxes" whose decision-making processes cannot be easily explained or interpreted. This opacity becomes particularly problematic as AI increasingly makes or influences high-stakes decisions in domains like healthcare, criminal justice, and financial services. While transparency supports important values like accountability, fairness, and user trust, the most accurate and powerful AI approaches often sacrifice explainability for performance, creating a technical and ethical dilemma that cuts to the heart of responsible AI development.
Viewpoint A: All AI systems should be fully explainable, even at the cost of performance. Citizens have a right to understand how decisions affecting their lives are made. In high-stakes domains, transparency should be non-negotiable, as accountability and fairness outweigh marginal performance improvements. If a system cannot explain its decisions, it should not be deployed in sensitive applications regardless of its accuracy.
Viewpoint B: Performance should be prioritized, with post-hoc explanation methods developed. The benefits of highly capable AI systems—even if partially opaque—outweigh the costs of using less capable but more transparent alternatives. Research should focus on developing better techniques to explain complex models after the fact, rather than limiting AI capabilities for the sake of inherent transparency.
Viewpoint C: Different standards should apply to different domains based on risk and impact. A nuanced approach balances transparency and performance based on context. Critical applications like medical diagnosis or judicial decision-making warrant higher transparency requirements, while lower-risk applications like entertainment recommendations can prioritise performance with minimal explanation.
This dilemma reflects a deeper question about the relationship between humans and increasingly autonomous systems. How we resolve the tension between transparency and performance will ultimately determine whether AI systems remain tools under meaningful human oversight or evolve into autonomous agents whose decisions we must accept without fully understanding. The technical challenge of creating explainable yet powerful AI may ultimately require new approaches that fundamentally reconcile these seemingly opposed values.
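To make the idea of "post-hoc explanation" concrete for students, here is a minimal sketch of one common technique, permutation importance: we treat a model as a black box we can only query, then shuffle one input feature at a time and see how much accuracy drops. The loan-approval model, feature names, and data here are entirely hypothetical, invented for illustration; they are not part of the IB syllabus or any real system.

```python
import random

# Hypothetical "black box": we can query predictions but (pretend we)
# cannot inspect its internals. Income and debt drive approval; the
# postcode feature is deliberately irrelevant.
def black_box(income, debt, postcode):
    return 1 if income - 2 * debt > 0 else 0

random.seed(42)
data = [(random.uniform(0, 100), random.uniform(0, 40), random.randint(1000, 9999))
        for _ in range(200)]
labels = [black_box(*row) for row in data]  # baseline accuracy is 1.0 by construction

def accuracy(rows):
    return sum(black_box(*r) == y for r, y in zip(rows, labels)) / len(rows)

# Permutation importance: shuffle one feature column across the dataset
# and measure the drop in accuracy. A large drop means the black box
# relies heavily on that feature; near zero means it ignores it.
def permutation_importance(col):
    shuffled_col = [row[col] for row in data]
    random.shuffle(shuffled_col)
    perturbed = [tuple(s if i == col else v for i, v in enumerate(row))
                 for row, s in zip(data, shuffled_col)]
    return 1.0 - accuracy(perturbed)

for name, col in [("income", 0), ("debt", 1), ("postcode", 2)]:
    print(f"{name}: importance drop = {permutation_importance(col):.2f}")
```

Running this shows large accuracy drops for income and debt and a drop of exactly zero for postcode, illustrating how an explanation can be extracted after the fact without ever opening the model, which is the core of Viewpoint B's position.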
8-Mark Question: Discuss whether algorithmic transparency should be legally required for AI systems used in public services. Consider multiple ethical perspectives. (8)
Key Theories and Their Practical Implications
Computational Theory of Mind
Core concept: The mind operates like a computer, with thoughts being computational processes
Implications: Supports the possibility of creating true artificial general intelligence by replicating computational structures
Applied in: Cognitive architectures for AI systems attempting to model human reasoning
Embodied Cognition
Core concept: Intelligence requires a body and physical interaction with the environment
Implications: Questions whether disembodied AI can achieve true intelligence
Applied in: Robotics and physical AI systems that interact with the world
Distributed Representation Theory
Core concept: Knowledge is encoded across networks of connections rather than specific locations
Implications: Supports neural network approaches to AI rather than symbolic representations
Applied in: Deep learning systems that distribute knowledge across millions of parameters
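The contrast between local and distributed representations can be shown in a few lines of Python. In a one-hot (local) encoding every word gets its own slot, so every pair of words is equally dissimilar; in a distributed encoding, meaning is spread across shared dimensions, so related words end up with similar vectors. The three-dimensional "feature" vectors below are hand-made toy values for illustration, not learned embeddings.

```python
import math

# Local (one-hot) representation: one dedicated slot per word.
one_hot = {"cat": [1, 0, 0], "dog": [0, 1, 0], "car": [0, 0, 1]}

# Distributed representation: knowledge spread across shared dimensions
# (hypothetical hand-made features: furriness, animacy, has-wheels).
distributed = {"cat": [0.9, 0.8, 0.0],
               "dog": [0.8, 0.9, 0.1],
               "car": [0.0, 0.1, 0.9]}

def cosine(u, v):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# One-hot vectors encode no relatedness: cat/dog is as dissimilar as cat/car.
print(cosine(one_hot["cat"], one_hot["dog"]))        # 0.0
# Distributed vectors: cat and dog come out far more similar than cat and car.
print(cosine(distributed["cat"], distributed["dog"]))
print(cosine(distributed["cat"], distributed["car"]))
```

Real deep learning systems learn such vectors (with hundreds of dimensions) from data rather than by hand, but the principle is the same: no single parameter "stores" the concept of a cat; it is encoded across the whole pattern of values.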
Profiles of Influential Thinkers
Alan Turing (1912-1954)
Key contributions: Proposed the Turing Test; conceptualised machine intelligence
Perspective: Intelligence should be judged by behaviour, not internal processes
Influence: Established foundational concepts for evaluating artificial intelligence
Marvin Minsky (1927-2016)
Key contributions: Co-founded MIT's AI laboratory; developed early neural networks
Perspective: Intelligence requires multiple approaches, including symbolic reasoning
Influence: Pioneered both symbolic AI and connectionist approaches
Yoshua Bengio (1964-present)
Key contributions: Pioneer in deep learning and neural networks
Perspective: Neural networks can learn hierarchical representations to solve complex problems
Influence: Helped spark the current deep learning revolution
Advanced Questions on AI Theories and Thinkers
Key Theories Questions
Computational Theory of Mind
8-Mark Questions:
Evaluate the claim that human consciousness can be fully replicated through computational models. (8)
To what extent does the computational theory of mind provide a useful framework for developing artificial general intelligence? (8)
Discuss how the computational theory of mind influences modern approaches to natural language processing in AI systems. (8)
12-Mark Questions:
Discuss the ethical implications of creating artificial general intelligence based on computational theory of mind. Consider multiple perspectives and implications for human uniqueness. (12)
To what extent is the computational theory of mind sufficient to explain human cognitive functions? Evaluate its strengths and limitations for AI development. (12)
"The computational theory of mind fundamentally misunderstands human intelligence." Evaluate this claim with reference to both supporters and critics of the theory. (12)
Embodied Cognition
8-Mark Questions:
Compare and contrast the implications of embodied cognition theory with computational theory of mind for AI development. (8)
Examine the limitations of disembodied AI systems through the lens of embodied cognition theory. (8)
To what extent is physical interaction with the environment necessary for developing truly intelligent AI systems? (8)
12-Mark Questions:
Evaluate the claim that truly intelligent AI systems must have physical bodies. Consider the implications for different types of artificial intelligence applications. (12)
Discuss how embodied cognition theory challenges traditional approaches to AI development. Consider both theoretical and practical implications. (12)
To what extent should embodied cognition theory influence the development of AI systems for different applications? Consider contexts where embodiment may or may not be necessary. (12)
Distributed Representation Theory
8-Mark Questions:
Evaluate how distributed representation theory has transformed our approach to knowledge representation in AI systems. (8)
Discuss the advantages and limitations of distributed representations compared to symbolic approaches in AI. (8)
To what extent does distributed representation theory explain both the capabilities and limitations of current large language models? (8)
12-Mark Questions:
Evaluate the claim that distributed representation theory has solved the knowledge representation problem in artificial intelligence. Consider both successes and limitations. (12)
Discuss how distributed representation theory has influenced both the capabilities and limitations of modern AI systems. Consider applications in various domains. (12)
"Distributed representations in neural networks create inherently uninterpretable systems." Evaluate this claim with reference to the tension between performance and explainability in AI. (12)
Influential AI Thinkers Questions
Alan Turing
8-Mark Questions:
Examine how Alan Turing's conception of machine intelligence continues to influence AI evaluation methods today. (8)
To what extent is the Turing Test still relevant for evaluating modern AI systems? (8)
Discuss the strengths and limitations of Turing's behavioral approach to defining machine intelligence. (8)
12-Mark Questions:
"The Turing Test fundamentally misunderstands the nature of intelligence." Evaluate this claim with reference to both historical and contemporary perspectives on AI. (12)
Discuss how Turing's ideas about machine intelligence have shaped both technological development and philosophical debates about the mind. Consider multiple perspectives. (12)
To what extent was Turing's vision of artificial intelligence ahead of its time? Evaluate with reference to both historical context and contemporary developments. (12)
Marvin Minsky
8-Mark Questions:
Evaluate Minsky's contribution to the development of symbolic AI and its lasting impact on the field. (8)
To what extent does Minsky's perspective on multiple approaches to intelligence provide a useful framework for contemporary AI research? (8)
Compare and contrast Minsky's vision of AI with current developments in neural network-based systems. (8)
12-Mark Questions:
Evaluate Minsky's claim that intelligence requires multiple diverse approaches rather than a single unified system. Consider the implications for modern AI development. (12)
Discuss how Minsky's work has influenced both the symbolic and connectionist traditions in artificial intelligence. Consider areas of success and limitation in each approach. (12)
"Marvin Minsky's vision of AI has been superseded by modern deep learning approaches." Evaluate this claim with reference to both historical developments and current AI systems. (12)
Yoshua Bengio
8-Mark Questions:
Examine how Bengio's work on deep learning has transformed practical applications of AI in society. (8)
To what extent has Bengio's approach to neural networks addressed previous limitations in AI systems? (8)
Discuss the relationship between Bengio's technical contributions and the ethical challenges posed by deep learning systems. (8)
12-Mark Questions:
Evaluate the claim that Bengio's work on deep learning represents a paradigm shift in artificial intelligence. Consider both technical and philosophical implications. (12)
Discuss the extent to which Bengio's approach to neural networks addresses or perpetuates longstanding challenges in AI, such as the black box problem. (12)
"Deep learning as pioneered by researchers like Bengio will ultimately lead to artificial general intelligence." Evaluate this claim from multiple theoretical perspectives. (12)
IB DP Digital Society Exam Questions - Artificial Intelligence
2-Mark AO1 Command Term Questions
Define (2 marks each)
Define the term "narrow AI."
Define what is meant by "supervised learning."
Define "algorithmic bias."
Define the term "Convolutional Neural Network."
Define "technological unemployment" in the context of AI.
Define what is meant by a "black box problem" in AI systems.
Define "reinforcement learning."
Define "AI augmentation."
Define "Generative Adversarial Networks."
Define "hybrid AI systems."
Describe (2 marks each)
Describe two key characteristics of machine learning.
Describe how weak AI differs from strong AI.
Describe two purposes of artificial neural networks.
Describe two ways AI systems can demonstrate bias.
Describe two challenges related to AI accountability.
Describe two characteristics of symbolic AI.
Describe two applications of unsupervised learning.
Describe two features of recurrent neural networks.
Describe two aspects of the regulatory lag in AI governance.
Describe two ways AI might displace human workers.
Outline (2 marks each)
Outline two advantages of narrow AI systems.
Outline two limitations of current machine learning approaches.
Outline two ethical concerns related to facial recognition systems.
Outline two ways in which neural networks process information.
Outline two implications of the black box problem for society.
Outline two reasons why AI transparency is important.
Outline two differences between symbolic AI and connectionist approaches.
Outline two challenges in developing general AI.
Outline two potential impacts of AI on future work.
Outline two approaches to addressing algorithmic bias.
State (2 marks each)
State two industries significantly transformed by AI applications.
State two characteristics of supervised learning.
State two applications of Convolutional Neural Networks.
State two milestones in the historical evolution of AI.
State two ethical dilemmas created by autonomous AI systems.
State two features of unsupervised learning.
State two potential impacts of AI automation on employment.
State two reasons why regulatory frameworks for AI often lag behind technological development.
State two examples of how AI is being used to address global challenges.
State two challenges in developing fair AI systems.
Identify (2 marks each)
Identify two types of machine learning.
Identify two characteristics of reinforcement learning systems.
Identify two applications of Generative Adversarial Networks.
Identify two ways AI systems can learn from data.
Identify two approaches to making AI more transparent.
Identify two key differences between human intelligence and artificial intelligence.
Identify two potential risks of superintelligent AI.
Identify two ways that algorithmic bias can manifest in AI systems.
Identify two examples of AI augmentation in professional contexts.
Identify two stakeholders responsible for ethical AI development.
Suggest (2 marks each)
Suggest two ways educational institutions could prepare students for an AI-transformed job market.
Suggest two methods that could help reduce bias in machine learning systems.
Suggest two potential applications of AI in environmental conservation.
Suggest two approaches to governing AI development responsibly.
Suggest two ways AI might be used to improve healthcare accessibility.
Suggest two potential benefits of AI augmentation versus full automation.
Suggest two ways transparency could be improved in AI systems.
Suggest two potential applications of AI that could help address social inequalities.
Suggest two strategies for ensuring human values are reflected in AI systems.
Suggest two ways individuals can protect their data from AI-powered analytics.
Examine (2 marks each)
Examine two ways in which GANs have changed creative industries.
Examine two challenges in developing ethical guidelines for AI.
Examine two implications of AI for personal privacy.
Examine two ways AI development reflects existing power structures in society.
Examine two potential consequences of regulatory lag in AI governance.
Examine two limitations of current neural network approaches.
Examine two reasons why AI accountability is difficult to establish.
Examine two ways AI might change human-computer interaction.
Examine two aspects of the tension between AI performance and explainability.
Examine two impacts of automation on labor markets.
Explain (2 marks each)
Explain the difference between supervised and unsupervised learning.
Explain how reinforcement learning systems improve over time.
Explain the significance of the black box problem in AI ethics.
Explain how algorithmic bias can be introduced into AI systems.
Explain how AI systems can both create and eliminate jobs.
Explain how symbolic AI differs from connectionist approaches.
Explain how AI augmentation differs from complete automation.
Explain how neural networks process information differently from traditional computing.
Explain two challenges in regulating rapidly evolving AI technologies.
Explain how the evolution of AI has shifted from rule-based to data-driven approaches.