
ITGS + Digital Society  IBDP

Digital Society Blog

IB DP Digital Society HL Paper 3: Answering the Questions (Part 2)

  • Writer: lukewatsonteach
  • Feb 28
  • 14 min read

About This HL Digital Society Guide

This is Part 2 of your Paper 3 preparation guide. Part 1 covered the Paper 3 source booklet. This guide focuses on the questions themselves: what each one is asking, how to structure your answer, and what the examiner needs to see to award marks at each band.


Before you write anything

Read all four sources before attempting a single question. This is not optional. Strategic reading takes five minutes. It saves twenty.


As you read, ask:

  • Which pre-release stakeholders do I recognise in the scenario?

  • What subject-specific terms appear in Source B that I should use in my answers?

  • What is the central tension between Sources C and D?

  • What is my evaluation thesis and what is the one sentence that captures the core trade-off? (Write your thesis sentence at the top of your answer booklet before you begin Q3 or Q4. It will anchor everything that follows.)


Paper 3 Structure:

The answer space provided is a signal. The IB prints a fixed number of lines for each question. Counting them across past papers reveals a consistent pattern:

  • Q1: 6 lines each, enough for one or two precise sentences

  • Q2: a 3-mark question gets 9 lines; a 2-mark question gets 6

  • Q3 [8 marks]: approximately 60 lines across two pages

  • Q4 [12 marks]: approximately 90 lines across three pages

Q1: Identify / Outline Mastery [2 marks]

What the question is asking: Q1 tests your ability to read the source booklet accurately and connect it to your Stage 2 investigation of the challenge. The answers are in the sources in front of you. This is the most straightforward mark on the paper, and the most commonly dropped through misreading the command term.


The content connection: Before the exam, look at your pre-release additional terminology list and identify which DS content area it belongs to. That is almost certainly the content area Q1 will test. In May 2025, data points and qualitative/quantitative data pointed to 3.1 Data. In November 2025, voting apps and social media campaigns pointed to 3.3 Computers and 3.5 Media. In May 2024, wearable devices and satellite phones pointed to 3.3 Computers. The pattern is consistent.


Know your command terms:

  • Identify = name it. One word or one phrase. No explanation needed.

  • Outline = name it and add one meaningful development. One sentence is usually enough.


The formula:

  • Identify: Provide an answer from a number of possibilities. [name the thing]

  • Outline: Give a brief account or summary. [name the thing] + [one development that shows you understand it]


Answer space: 6 lines per sub-question, confirmed across multiple past papers. That is enough for two precise sentences at most. Do not overwrite.


What the examiner needs: Accuracy and precision. The answer is either there or it isn't. A vague or generic response scores zero even if it is broadly correct. Use the exact language from the source where appropriate.


Stage connection — Stage 2: Q1 tests whether you explored and investigated the challenge thoroughly enough to recognise its features instantly in a new scenario.


Discovery prompts — ask yourself:

  • Which DS content area does this question connect to?

  • Have I used the precise technical term, not a description of it?

  • If the command term is outline, have I added a genuine development?

  • Did Source B contain this vocabulary? If so, have I used it?

  • Would an examiner who knows the DS content area recognise my answer as technically correct?

Q2: Explain Mastery [4–6 marks]

What the question is asking: Q2 tests your ability to explain the technical mechanisms behind the interventions. It draws on DS Content, specifically the uses, mechanisms, and early dilemma sub-sections of the relevant content area. Where Q1 asked you to identify or describe, Q2 asks you to explain how and why. It also consistently splits across both interventions (one sub-question per intervention), which is the IB beginning to build the evaluative comparison that Q3 and Q4 will complete.


The content connection: Q2 operates one level deeper into the content areas than Q1. It moves from types and characteristics into mechanisms, uses, and early dilemmas.


The formula — every mark works the same way: Point + development in the specific context of the scenario. A point with no development earns half the available marks.


Example of the difference:

  • Weak: "The app uses blockchain technology, which makes it secure."

  • Strong: "The voting app uses blockchain technology, which creates an immutable and decentralised record of each vote cast, meaning that in Danton's local election, individual votes cannot be altered or deleted after submission, reducing the risk of vote tampering."

The second answer uses subject-specific terminology, explains the mechanism, and anchors it in the specific scenario; this gives the examiner the "development" they are seeking.


What the examiner needs:

  • Subject-specific terminology used precisely (from Source B and your pre-release preparation)

  • A clear explanation of the mechanism (not just a description of the feature)

  • Explicit connection to the scenario in the source booklet

  • For multi-mark explain questions: each point must have its own development


Discovery prompts — ask yourself:

  • Which DS content area is this question testing (and which sub-section specifically)?

  • Have I explained the mechanism (the how and why)?

  • Have I used at least one subject-specific term from Source B?

  • Is my development anchored in the specific scenario?

  • Does my answer fill the available space without padding?

  • Is there an ethical or evaluative dimension to this question that points toward Q3?


Q3: Evaluation Mastery [8 marks]

What the question is asking: Q3 is a Stage 4 evaluation essay focused on one specific content dilemma from the DS curriculum. It asks you to evaluate that dilemma from both sides using two analytical tools working together: the six evaluation criteria and the DS concepts.


The content dilemma connection: Every Q3 activates a specific dilemma from the E sub-sections of the content areas. In May 2024 it was 3.1I data privacy. In May 2025 it was 3.2E algorithmic bias and 3.6E AI transparency. In November 2025 it was 3.5D digital media dilemmas and 3.4G internet privacy. The paper walks students systematically through content knowledge (Q1), mechanisms (Q2), and dilemmas (Q3).


The six evaluation criteria (your analytical framework): The mark scheme tags every model answer with criteria in brackets. These are the lenses through which the examiner reads your response. You must use them deliberately.

  • Equity: Does the intervention fairly address everyone affected? Who is excluded or disadvantaged?

  • Acceptability: Do affected communities find this acceptable? Is it transparent and accountable?

  • Cost: What are the financial, social, cultural and environmental costs? Do they outweigh benefits?

  • Feasibility: Is it technically, socially and politically workable? What are the barriers?

  • Innovation: Is this approach genuinely new? What does it change?

  • Ethics: Is it ethically sound? Who decides? What safeguards are needed?


You do not have time to use all six. Choose two or three that cut deepest into the specific dilemma being asked about, and develop them fully on both sides. Two criteria explored with genuine insight will always outperform six criteria listed superficially.


How to choose your criteria? Let the question tell you:

  • Feasibility appears most frequently across all Q3 mark schemes — almost every intervention has workability questions.

  • Ethics and Values appear in virtually every bullet point. Equity and Acceptability dominate questions about access, exclusion, and trust.


  • A question about access and exclusion → Equity and Feasibility

  • A question about trust, privacy, or harm → Ethics and Acceptability

  • A question about empowerment or participation → Equity, the Power concept, Acceptability

  • A question about whether something is worth doing → Cost and Feasibility


Using DS concepts: At least one concept should frame your evaluation, not decorate it. Name it, define it briefly in context, and show how it applies to this specific dilemma and scenario. Power and Values & Ethics appear in almost every Q3 mark scheme. The concept provides the theoretical lens; the criteria provide the evaluative structure.


Using sources and independent research: Every evaluative claim should be anchored in a named source or your own independent research. "As Source C shows..." or "According to [named report/study]..." signals sustained evaluation. Generic claims signal description. The mark scheme explicitly flags generic pre-rehearsed responses as a concern for examiners.


The mark bands (what the examiner is actually judging): Q3 is assessed using mark bands applied holistically. Three dimensions are judged simultaneously: understanding of the question, quality of evaluation, and organisation. Here is what each band looks like in practice:

  • 1–2: The response describes rather than evaluates. Points are unsupported and generic. Little sense of structure.

  • 3–4: Some evaluation appears but it is not sustained. The response drifts back into description. Partially organised, ideas present but not developed fully.

  • 5–6: Evaluation is present and relevant, and the response is adequately organised. Both sides are represented, but the evaluation is not consistent throughout: it may appear in some paragraphs and disappear in others.

  • 7–8: Evaluation is sustained throughout the entire response, not just in the conclusion. Every paragraph evaluates. The response is well-structured, well-supported, and focused on the specific demands of the question from the first sentence to the last.


The two words that separate a 6 from a 7: "sustained" and "throughout." These words appear only at band 7–8. A student can use criteria, reference sources, and develop both sides, and still score a 6 if their evaluation drops away mid-response. Sustained means every paragraph evaluates. Throughout means from the opening sentence to the final line — not just the conclusion.


The core distinction:

  • Describing what the dilemma is → band 3–4

  • Evaluating what it means for one side → band 5–6

  • Evaluating what it means for both sides, sustained throughout → band 7–8


A rough structure for 20 minutes:

  1. Opening sentence: state what you are evaluating and signal your evaluative lens — name a criterion or concept immediately (1–2 minutes)

  2. Evaluate the first side using 2 criteria, anchored in sources and stakeholders (7 minutes)

  3. Evaluate the second side using 2 criteria, anchored in sources and stakeholders (7 minutes)

  4. Tentative conclusion: weigh the two sides and state a reasoned, qualified judgment (3 minutes)


A Q3 structure that works

  1. Opening (2–3 sentences): Frame the central tension. State which two or three criteria you will use and why they are most relevant to this specific question. Name the DS concept that will frame your evaluation. This signals to the examiner from the first sentence that you are evaluating, not describing.

  2. Body (two or three developed arguments — both sides): For each criterion, apply the R-E-S cycle from the PRESTO framework below. Each criterion block should develop both the case for and the case against before moving on.

  3. Conclusion (2–3 sentences): A tentative, reasoned position. Not a verdict, an honest acknowledgement of complexity. "To a significant extent... however... this depends on..." Return to the concept named in your opening. This creates the structural coherence the band 7–8 descriptor requires.


The PRESTO Evaluation Framework

PRESTO stands for: Parameters → Research → Examine → Scrutinise → Thoughtful synthesis → Overall implications.


INTRODUCTION

  • P — Parameters: Define which criteria you are using and why. This happens in your opening paragraph — do it once, clearly.


BODY

Then for each criterion, repeat the R-E-S cycle:

  • R — Research integration: What do the sources show? Cite specific sources and your own independent research — not vague gestures. "Source C shows..." or "According to the UNEP 2024 report..."

  • E — Examine stakeholder perspectives: Who gains? Who loses? Whose voice is centred in this source — and whose is absent?

  • S — Scrutinise trade-offs: What tension or limitation does this criterion reveal? This is where genuine evaluation happens. Without S, you have description with evidence — which is band 5–6. With S, you have evaluation — which is band 7–8.


CONCLUSION

  • T — Thoughtful synthesis: This is your conclusion — weigh the criteria against each other. Which matters most in this specific scenario and why?

  • O — Overall implications: What does your evaluation suggest about the intervention's value, limitations, or conditions for success?


Why PRESTO maps to the mark bands:

  • R gives the "well-supported" evidence band 7–8 requires.

  • E gives the stakeholder awareness that prevents generic responses.

  • S gives the evaluative tension that makes evaluation "sustained" rather than one-sided.

  • A response with R and E but no S will almost always land at band 5–6 — evidence and perspectives are present, but without genuine trade-off analysis, the evaluation is not sustained.


The core distinction:

  • Describing what the dilemma is → band 3–4

  • Evaluating what it means for one side → band 5–6

  • Evaluating what it means for both sides, with R-E-S sustained throughout → band 7–8


Answer space: approximately 60 lines across two pages — verified from past papers.


Discovery prompts — ask yourself:

  • Which content dilemma is this question activating?

  • Which two or three criteria cut deepest into this specific dilemma?

  • Have I opened by naming my criteria and a DS concept?

  • Am I applying R-E-S for each criterion, or am I only doing R and E and skipping S?

  • Is my evaluation sustained throughout, or does it only appear in my conclusion?

  • Have I referenced at least one named source and one piece of independent research?

  • Does my conclusion return to the concept named in my opening?

  • Have I used the available 60 lines?


Q4: Evaluate and Recommend Mastery [12 marks]

What the question is asking: Q4 is the complete Stage 4 task. It requires everything Q3 requires: sustained evaluation using criteria and concepts, anchored in sources and research, plus a clear recommendation with explicitly stated trade-offs and implications. Both interventions must be evaluated. The recommendation must name which intervention to choose, why, what it costs, and what conditions it depends on.


This is the most demanding question on the paper. It also gives you the most space: approximately 90 lines across three pages.


The three sources every Q4 must draw on: The mark scheme explicitly states that responses should reference the pre-release, the sources, and independent research. These are not suggestions; they are the three evidential pillars the examiner is looking for. A response that uses only the source booklet cannot access the top band. A response that uses only independent research and ignores the sources will be flagged as pre-rehearsed. All three must be present and integrated.


  1. Pre-release: the challenge, the interventions, the stakeholder list, the additional terminology

  2. Sources A–D: specific named references — "As Source C shows..." not vague gestures

  3. Independent research: external academic, professional, or organisational sources studied before the exam


The six evaluation criteria, applied to both interventions: The mark scheme tags every Q4 bullet point with criteria labels, exactly as it does for Q3. The difference is that Q4 requires the criteria to be applied to both interventions comparatively, not just to one side of a dilemma. For each criterion you choose, you must evaluate how both interventions perform against it, and then make a reasoned judgment about which performs better and why.

  • Equity — Which intervention more fairly addresses all affected stakeholders?

  • Acceptability — Which is more likely to gain trust and community acceptance?

  • Cost — Which offers better value when financial, social, and environmental costs are weighed?

  • Feasibility — Which is more workable given the specific constraints of this scenario?

  • Innovation — Which represents a more meaningful or sustainable advance?

  • Ethics — Which raises fewer ethical concerns, or handles them more responsibly?


What the examiner is actually judging: Q4 uses a 12-mark band scale. Four dimensions are assessed simultaneously: understanding of the question, quality and accuracy of knowledge, strength of recommendation, and organisation.

  • 1–3: Limited understanding. Unsupported generalisations. No recommendation, or one with minimal support. Limited organisation.

  • 4–6: Some understanding. Knowledge present but not always relevant or accurate. Recommendation present but not sustained or only partially effective. Partially organised.

  • 7–9: Adequate understanding. Well-supported with relevant and accurate knowledge. Recommendation effectively supported. Adequately organised.

  • 10–12: In-depth understanding. Well-supported throughout. Recommendation presented with a clear consideration of possible trade-offs and implications. Well-structured and effectively organised.


The single phrase that separates a 9 from a 10: "A clear consideration of possible trade-offs and implications."


This phrase appears only at band 10–12. A student who recommends one intervention and supports it well will reach band 7–9. To access band 10–12, they must also: acknowledge the strongest argument for the intervention they did not choose, state what their recommendation costs or risks, and name the conditions under which their recommendation holds. Without trade-offs, the ceiling is 9.


A Q4 Structure That Works (30 minutes, 90 lines):

The five parts below are the skeleton of a band 10–12 Q4 response. PRESTO's R-E-S cycle is what you do inside each evaluation section; it is the muscle that fills the skeleton.


Opening paragraph (3–4 minutes): State your recommendation and briefly signal why. Do not leave the examiner guessing where you stand. Name the intervention, give one criterion-based reason, and signal the DS concept that will frame your evaluation. The examiner needs to know your position from the first sentence.


Evaluate Intervention 1 (8 minutes): Evaluate through two or three criteria (advantages and disadvantages), with specific source references and stakeholder perspectives. For each criterion, apply R-E-S: cite your evidence (R), examine who gains and who loses (E), scrutinise the trade-off or limitation (S). Include both strengths and weaknesses even if this is not your recommended intervention.


Evaluate Intervention 2 (8 minutes): Same approach, same depth. Apply R-E-S per criterion. Include both strengths and weaknesses even if this is your recommended intervention. The mark scheme provides a full for/against analysis of each intervention; the examiner expects the same structure from you.


Trade-offs paragraph (6 minutes): This is where band 10–12 is won or lost. Acknowledge the strongest counterargument to your recommendation. Name specifically what is being sacrificed by not choosing the other intervention. Then explain why, despite this, your recommendation still holds, with stated conditions. "The strongest argument for [other intervention] is [X]. However, in this specific context, [your recommendation] remains more appropriate because [Y], provided that [condition]." Without this paragraph, the ceiling is band 9.


Conclusion (3 minutes): Reinforce your recommendation. State the conditions under which it will succeed and what safeguards are needed. Return to the DS concept named in your opening. A concept that frames both your opening and your conclusion creates the structural coherence band 10–12 requires. Your conclusion should address three specific things:

  • The conditions under which your recommendation will succeed: "This recommendation is workable provided that [specific condition from the scenario]..."

  • The safeguards needed to address the risks identified in your evaluation: "To mitigate the [ethics / equity / feasibility] concerns identified, [specific safeguard] would need to be in place..."

  • The implications for specific stakeholders: "For [named stakeholder], this recommendation means [specific implication]..."


What makes Q4 different from Q3: Q3 evaluates one dilemma from both sides and reaches a tentative conclusion. Q4 evaluates two interventions against each other, commits to a recommendation, and explicitly accounts for trade-offs and implications. The evaluative thinking is the same. The addition is the recommendation: stated early, supported throughout, and qualified honestly.


A note on Stage 3 and the MIER categories: Your Stage 3 investigation asked you to categorise each intervention as mitigating, interceding, enhancing, or resolving (MIER) the challenge. These categories matter most in Q4. An intervention that intercedes to change an underlying cause is doing fundamentally different work from one that only mitigates a symptom, and that difference is exactly the kind of precise, evaluative distinction that a trade-offs paragraph needs. If you have done your Stage 3 work thoroughly, you already know which category each intervention belongs to. Use that knowledge in your answers. Do not name the categories explicitly because examiners are asked to watch for pre-rehearsed responses, and framework labelling is one of the clearest signals of one. Let the thinking be visible. Keep the framework invisible.


Discovery prompts — ask yourself:

  • Have I stated my recommendation in the opening paragraph?

  • Have I evaluated both interventions (not just the one I recommend)?

  • Have I used all three evidential sources: pre-release, named sources, and independent research?

  • Have I applied at least two evaluative criteria to each intervention comparatively?

  • Have I named the strongest argument against my recommendation and addressed it directly?

  • Have I stated what my recommendation costs, risks, or depends on (i.e. the trade-offs and implications)?

  • Does my conclusion return to a concept named in my opening?

  • Have I used the available 90 lines?

The Examiner's Mind: Avoiding Fatal Errors

The "Superficial Trap" (Kills Scores)

Avoid: Listing pros and cons without framework connection

Do: Apply consistent evaluation criteria with evidence integration


Source Integration Excellence

Poor: "The source mentions privacy concerns"

Good: "Source A explicitly states that 'only 23% of residents trust the system with medical data,' highlighting significant acceptability barriers"


Independent Research Excellence

Poor: "Studies show telemedicine works"

Good: "Andre's 2023 comparative analysis of rural telemedicine programs demonstrates that community-led training increases adoption rates by 34%"


Quick Reference Cards for Exam Room

Q3 Success Checklist (Evaluation)

  •  Framework clearly stated and applied

  •  Explicit source citations ("Source A states...")

  •  Multiple stakeholder perspectives

  •  Counter-arguments addressed

  •  Evidence-based synthesis

  •  Clear evaluative conclusion


Q4 Success Checklist (Recommendation)

  •  Both interventions evaluated

  •  Trade-offs explicitly discussed

  •  Independent Research cited

  •  Specific implementation details

  •  Timeline and resource realism

  •  Success metrics identified


Excellence Indicators

  • Theoretical sophistication

  • Evidence mastery from multiple sources

  • Recognition of complexity and nuance

  • Strategic thinking addressing root causes

  • Global perspective on digital transformation

  • Future-oriented considerations


The Grade 7 Standard — What It Actually Means

The Grade 7 descriptor requires: conceptual awareness, precise use of subject-specific terminology, the ability to analyse and evaluate evidence, awareness of alternative perspectives and ideological biases, and the ability to come to reasonable, albeit tentative, conclusions.

In practice:

  • Concepts (power, systems, values, ethics, change...) are your analytical language; use them to move from observation to insight

  • Terminology from Source B is your technical vocabulary; use it precisely

  • Tentative conclusions are not weakness; they are the mark of a student who understands complexity

  • Alternative perspectives means naming whose interests are served and whose are not, including voices that are absent from the sources



IB DP Digital Society HL students doing very well on paper 3

2 Comments


Gökay Yılmaz
Sep 30, 2025

Hey,

Your blog says the Paper 3 exam is 2 hours 15 min long. But the Digital Society guide says the opposite. It says 1 hour 15 min. Has there been a change in the guide that we might not be aware of?

lukewatsonteach
Oct 27, 2025

Oh! Well spotted! This is an error and I will fix it ASAP.

Paper 3 = 1 hour 15 minutes


2025 IBDP DIGITAL SOCIETY | LUKE WATSON TEACH
