Practice 3.5 Media with authentic IB Digital Society (DS) exam questions for both SL and HL students. This question bank mirrors the structure of Papers 1, 2 and 3, covering key topics such as systems and structures, human behavior and interaction, and digital technologies in society. Get instant solutions and detailed explanations, and build exam confidence with questions in the style of IB examiners.
It is paradoxical for a technological innovation to be branded as tainted goods by its very name. 'Deepfake' is a victim of its own capabilities: negative connotations and recent incidents have pigeonholed the innovation in the taboo zone. Yet the rise of deepfake technology has also ushered in intriguing possibilities and challenges. This synthetic media, created through sophisticated artificial intelligence algorithms, has begun to infiltrate various sectors, raising questions about its potential impact on education and employability.
The dawn of deepfake technology introduces a realm of possibilities in education. Imagine medical students engaging in lifelike surgical simulations or language learners participating in authentic conversations. The potential for deepfake to revolutionise training scenarios is vast and could significantly enhance the educational experience. Beyond simulations, deepfake can transport students to historical events through realistic reenactments or facilitate virtual field trips, transcending the boundaries of traditional education. The immersive nature of deepfake content holds the promise of making learning more engaging and memorable.
However, while these potential abuses of the technology are real and concerning, that doesn't mean we should turn a blind eye to the technology's potential when it is used responsibly, says Jaime Donally, a well-known immersive learning expert.
“Typically, when we're hearing about it, it's in terms of the negative – impersonation and giving false claims,” Donally says. “But really, the technology has the power of bringing people from history alive through old images that we have using AI.”
Donally, a former math teacher and instructional technologist, has written about how Deep Nostalgia, a deepfake technology that went viral in 2021, can allow students to form a stronger connection with the past and their personal family heritage. The technology, available on the MyHeritage app, allows images to be uploaded and then turned into short animations using AI.
Teachers have used the deepfake technology in the MyHeritage app in several ways, such as bringing historical figures like Amelia Earhart and Albert Einstein to life. One teacher Donally has communicated with used an animation of Frederick Douglass to help students connect with Douglass's famous 1852 speech about the meaning of the Fourth of July to Black enslaved Americans. Another teacher plans to use the app to have students interview a historical figure, write dialogue for them, and then match the dialogue to the animation.
Donally herself has paired animations she's created with other types of immersive technology. “I layered it on top in augmented reality,” she says. “When you scan the photo of my grandfather, it all came to life. And it became something that was much more relevant to see in your real-world space.”
With proper supervision, students can use the technology to animate images of family members or local historical figures, and can experiment with augmented reality (AR) in the process. "It makes you want to learn more," Donally says of animations created using deepfake technology. "It drives you into kind of the history and understanding a bit more, and I think it also helps you identify who you are in that process."
Education platforms are harnessing deepfake technology to create AI tutors that provide customised support to students. Rather than a generic video lecture, each learner can get tailored instruction and feedback from a virtual tutor who speaks their language and adjusts to their level.
For example, Anthropic built Claude, an AI assistant designed specifically for education. Claude can answer students’ natural language questions, explain concepts clearly, and identify knowledge gaps.
Such AI tutors make learning more effective, accessible, and inclusive. Students feel like they have an expert guide helping them master new skills and material.
AI and deepfake technology also have enormous potential to enhance workforce training and education in immersive new ways. As Drew Rose, CSO and founder of cybersecurity firm Living Security, explains, "educators can leverage deepfakes to create immersive learning experiences. For instance, a history lesson might feature a 'guest appearance' by a historical figure, or a science lesson might have a renowned scientist explaining complex concepts." Ivana Bartoletti, privacy and data protection expert at Wipro and author of An Artificial Revolution – On Power, Politics and AI, envisions similar applications.
“Deepfake technologies could provide an easier and less expensive way to train and visualise,” she says. “Students of medicine and nursing currently train with animatronic robots. They are expensive and require special control rooms. Generative AI and augmented or virtual reality headsets or practice rooms will be cheaper and allow for the generalisation, if not the gamification, of simulation.”
Medical students could gain experience diagnosing and treating simulated patients, while business students could practice high-stakes scenarios like negotiations without real-world consequences. These immersive, gamified environments enabled by AI and deepfakes also have vast potential for corporate training.
Bartoletti notes, "A similar use case could be made for other types of learning that require risky and skill-based experiences. The Air Force uses AI as adversaries in flight simulators, and humans have not beaten the best AIs since 2015."
With reference to Source A, identify three harmful uses of deepfakes.
With reference to Source B and one other real-world example you have studied, explain why deepfakes may be used for beneficial purposes in today's world.
Compare what Source C and Source D reveal about perspectives on deepfakes in the education sector.
With reference to the sources and your own knowledge, discuss whether the use of deepfakes in the educational sector is an incremental or transformational change.
Source A
Source B (MetroStream website excerpt)
MetroStream is Harbor City’s publicly funded streaming channel, created when the city’s local broadcaster merged its TV newsroom with its radio team and a small digital studio. MetroStream publishes each major story in several formats: short vertical clips optimized for phones, longer segments for the streaming channel, and audio recaps for commuters. The service also runs interactive ads in the mobile app, where viewers can tap to save a coupon or follow a sponsor link. MetroStream states that sponsors do not choose which stories appear, but the app prioritizes “viewer-friendly” formats such as short clips and captioned highlights. For transparency, MetroStream labels sponsored segments and keeps an archive of full-length interviews to “reduce selective quoting.” Critics argue that the shift toward short clips changes what counts as news.
Source C (local newspaper report statistics)
In 2026, MetroStream published 2.8× more items than the old broadcaster, largely due to short clips and audio recaps.
Average watch time: vertical clips 19 seconds, long segments 6.5 minutes.
62% of mobile viewers watch with captions on; 18% of viewers report they “rarely” open the full interview archive.
Sponsored segments account for 14% of total mobile views but generate 53% of mobile revenue.
Complaints about “misleading context” rose by 27% after the app introduced auto-generated highlight reels.
Source D (same as above)
MetroStream calls itself "modern public media," but it is drifting toward the logic of platforms: compress the story, maximize the scroll, and monetize attention through interactivity. The newsroom may be converged, yet the audience experience is fragmented. One group gets 19-second clips with captions, another gets long segments, and almost nobody checks the interview archive that is supposed to provide accountability. Sponsored segments are labelled, yes, but the deeper influence is structural: when half your mobile revenue comes from a small portion of ad-friendly content, the system quietly teaches editors what to prioritize. Public media is meant to create shared understanding, not just shared metrics. The danger is not only misinformation; it is "thin information": news that is technically accurate but stripped of context, uncertainty, and depth. MetroStream should not confuse multi-format distribution with democratic communication.
Describe how Source A shows MetroStream using multiple media formats to distribute a single story.
Explain how MetroStream’s use of interactive ads and short clips in Source B could influence what audiences understand as “news.”
Compare what Source C and Source D suggest about the impact of MetroStream’s mobile-first approach on trust and context.
Discuss whether MetroStream demonstrates the benefits or the problems of digital media convergence. Answer this with reference to all the sources (A–D) and your own knowledge of the Digital Society course. Consider how media are created and distributed across platforms, the role of advertising and attention, and the ethical dilemmas of compression, framing, and sponsorship.
Wildfire modelling
The fire control centre in the Kinakora National Park in New Zealand often has to cope with the natural phenomenon of wildfires. Staff have been collecting data about wildfires since 1970.
The size of each wildfire is measured, and the vegetation types affected are recorded. Data on the weather conditions is collected from sensors in the park. The staff at the fire control centre use this information to fight the wildfire.
A new computer modelling system is being developed using data collected from previous wildfires. This new system will improve the quality of the information available when fighting future wildfires.
The new system will also enable staff at Kinakora National Park to send information to tourists in the park to warn them when they are in danger from a wildfire.
Identify two measurements that could be taken by the weather sensors in Kinakora National Park.
Identify two methods that could be used to train the staff to use the new computer modelling system.
Identify two methods of visualization that could be used to present information from the new computer modelling system.
Two methods for informing tourists about wildfires in Kinakora National Park are:
Analyse these two methods.
Evaluate Kinakora National Park’s decision to use computer modelling to develop strategies for dealing with wildfires.
Online learning
TailorEd is a free online learning system that personalizes students’ learning by providing teachers with data about how students are progressing in their courses. Students create a personal profile and work through the assignments at their own pace. Teachers can log in to the learning system to see how the students are progressing. However, concerns have been expressed about the amount of data that is being collected.
The school has found that when students access the course platform, some content is being blocked. The network administrator has been asked to investigate the situation. Teachers believe that it would be more appropriate to train the students to use the platform responsibly, rather than use technology to block their access to certain websites.
Identify two ways in which the TailorEd system could provide feedback to students.
Identify two ways in which the data collected about students' academic progress could be used by TailorEd.
Outline how a firewall functions.
There are two possible methods for ensuring students use the TailorEd online learning system responsibly. They are:
Analyse these two methods.
To what extent do the benefits of collecting students’ academic progress data outweigh the concerns of the students, teachers and parents?
Discuss the decision of an art gallery owner to develop a virtual tour that is accessible online.
User interfaces (UI) are critical in making devices accessible to a diverse range of users. For example, voice-activated interfaces, like those on smartphones, allow individuals with limited mobility to use devices effectively. While these interfaces promote inclusivity, there are challenges, such as accuracy and user privacy, that can affect their effectiveness.
Evaluate the effectiveness of user interfaces, such as voice and graphic interfaces, in promoting accessibility in computing, considering both the benefits for users with disabilities and the associated technical challenges.
Facial recognition algorithms, used for security in airports, rely on large datasets and are sometimes criticized for algorithmic bias. For instance, these algorithms have been known to misidentify individuals of certain racial backgrounds, raising fairness and transparency issues.
Identify two issues related to algorithmic bias in facial recognition software.
Explain why transparency is essential for accountability in facial recognition algorithms used in security.
Discuss one risk associated with “black box” algorithms in facial recognition systems.
Evaluate the impact of algorithmic bias on fairness in facial recognition, particularly concerning racial and ethnic disparities.
Virtual reality (VR) and augmented reality (AR) have transformed entertainment, gaming, and education by creating immersive experiences. For instance, VR gaming offers players a simulated environment, while AR enhances the real-world experience by overlaying digital elements, as seen in games like Pokémon GO.
Identify two types of immersive digital media and describe their applications.
Outline one benefit of VR technology in gaming.
Explain how AR enhances real-world interactions in educational settings.
Evaluate the potential of immersive media, such as VR and AR, in transforming learning experiences, considering both engagement and accessibility challenges.
Fake news
We see and hear news every day and trust that the information provided is accurate. That belief may soon end.
Artificial intelligence (AI) software is now being developed that can produce fake video footage of public figures using recordings of their own voices. Using as little as one minute of user-generated content (data), it can reproduce a particular person’s voice. The developer of this software demonstrated the results by using the voices of Bill Clinton, George Bush and Barack Obama in a computer-generated conversation.
Once a person's voice has been reproduced, a fake video can be created by processing hundreds of videos of the person's face. Video footage of politicians is often used, as there is so much data available online.
Law professor John Silverman commented that, as humans, we tend to believe what we see, and the growing number of tools that make fake media indistinguishable from real media is going to prove a major challenge in the future.
Discuss the claim that companies that develop software capable of creating fake videos of politicians should be accountable for the fake videos posted by users of their software on social media platforms.
Should we completely automate journalism?
Some of the news articles that you read are written by automated journalism software. This software uses algorithms and natural language generators to turn facts and trends into news stories.
Narrative Science, a company that produces automated journalism software, predicts that by 2026 up to 90% of news articles could be generated by machine learning algorithms.
Discuss whether it is acceptable for news articles to be generated by automated journalism software.