Practice 4.2 Economic with authentic IB Digital Society (DS) exam questions for both SL and HL students. This question bank mirrors the structure of Papers 1, 2 and 3, covering key topics such as systems and structures, human behavior and interaction, and digital technologies in society. Get instant solutions, detailed explanations, and build exam confidence with questions written in the style of IB examiners.
Sentencing criminals using artificial intelligence (AI)
In 10 states in the United States, artificial intelligence (AI) software is used for sentencing criminals. Once criminals are found guilty, judges need to determine the lengths of their prison sentences. One factor used by judges is the likelihood of the criminal re-offending*.
The AI software uses machine learning to determine how likely it is that a criminal will re-offend. This result is presented as a percentage; for example, the criminal has a 90 % chance of re-offending. Research has indicated that AI software is often, but not always, more reliable than human judges in predicting who is likely to re-offend.
There is general support for identifying people who are unlikely to re-offend, as they do not need to be sent to prisons that are already overcrowded.
Recently, Eric Loomis was sentenced by the state of Wisconsin using proprietary AI software. Eric had to answer over 100 questions to provide the AI software with enough information for it to decide the length of his sentence. When Eric was given a six-year sentence, he appealed and wanted to see the algorithms that led to this sentence. Eric lost the appeal.
On the other hand, the European Union (EU) has passed a law that allows citizens to challenge decisions made by algorithms in the criminal justice system.
* re-offending: committing another crime in the future
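For illustration of how such software might arrive at a percentage, the sketch below trains a supervised machine learning classifier on a handful of invented, labelled historical cases and outputs a re-offending probability for a new case. The features, data and library choice (scikit-learn) are assumptions made for the example only; they do not describe the proprietary system referred to in the passage.

```python
# Minimal sketch of a risk-score model of the kind described above.
# All feature names and data are hypothetical; this is NOT the actual
# sentencing software, only an illustration of supervised learning
# producing a re-offending probability.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical cases: [age, prior_offences, months_employed]
X_train = [
    [19, 4, 2],
    [45, 0, 120],
    [30, 2, 24],
    [52, 1, 200],
    [23, 6, 0],
    [38, 0, 60],
]
# Labels from past records: 1 = re-offended, 0 = did not re-offend
y_train = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)

# Score a new case: the model outputs a probability, which can be
# reported as a percentage chance of re-offending
new_case = [[27, 3, 12]]
probability = model.predict_proba(new_case)[0][1]
print(f"Estimated chance of re-offending: {probability:.0%}")
```

Any estimate of this kind depends entirely on how representative and unbiased the labelled training data is, which is one reason access to the data and algorithms matters in appeals such as the one described above.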
Identify two characteristics of artificial intelligence (AI) systems.
Outline one problem that may arise if proprietary software rather than open-source software is used to develop algorithms.
The developers of the AI software decided to use supervised machine learning to develop the algorithms in the sentencing software.
Identify two advantages of using supervised learning.
The developers of the AI software used visualizations as part of the development process.
Explain one reason why visualizations would be used as part of the development process.
Explain two problems the developers of the AI system could encounter when gathering the data that will be input into the AI system.
To what extent should the decisions of judges be based on algorithms rather than their knowledge and experience?
It is a paradox to brand a technological innovation as tainted goods by its very name. ‘Deepfake’ is a victim of its own capabilities. Negative connotations and recent incidents have pigeonholed this innovation in the taboo zone. The rise of deepfake technology has ushered in very interesting possibilities and challenges. This synthetic media, created through sophisticated artificial intelligence algorithms, has begun to infiltrate various sectors, raising intriguing questions about its potential impact on education and employability.
The dawn of deepfake technology introduces a realm of possibilities in education. Imagine medical students engaging in lifelike surgical simulations or language learners participating in authentic conversations. The potential for deepfake to revolutionise training scenarios is vast and could significantly enhance the educational experience. Beyond simulations, deepfake can transport students to historical events through realistic reenactments or facilitate virtual field trips, transcending the boundaries of traditional education. The immersive nature of deepfake content holds the promise of making learning more engaging and memorable.
However, while these potential abuses of the technology are real and concerning, that doesn't mean we should turn a blind eye to the technology’s potential when it is used responsibly, says Jaime Donally, a well-known immersive learning expert.
“Typically, when we're hearing about it, it's in terms of the negative – impersonation and giving false claims,” Donally says. “But really, the technology has the power of bringing people from history alive through old images that we have using AI.”
Donally, a former math teacher and instructional technologist, has written about how a type of deepfake technology called Deep Nostalgia, which went viral in 2021, can allow students to form a stronger connection with the past and their personal family heritage. The technology, available on the MyHeritage app, allows images to be uploaded that are then turned into short animations using AI.
Here are some of the ways in which teachers can use deepfake technology in the classroom with the MyHeritage app. Teachers have used the deepfake technology in the MyHeritage app to bring historical figures such as Amelia Earhart and Albert Einstein to life. One teacher Donally has communicated with used an animation of Frederick Douglass to help students connect with Douglass’ famous 1852 speech about the meaning of the Fourth of July to enslaved Black Americans. Another teacher plans to use the app to have students interview a historical figure and create dialogue for them, then match the dialogue to the animation.
Donally herself has paired animations she's created with other types of immersive technology. “I layered it on top in augmented reality,” she says. “When you scan the photo of my grandfather, it all came to life. And it became something that was much more relevant to see in your real-world space.”
With proper supervision, students can use the technology to animate images of family members or local historical figures, and can experiment with augmented reality (AR) in the process. “It makes you want to learn more,” Donally says of animations created using deep fake technology. “It drives you into kind of the history and understanding a bit more, and I think it also helps you identify who you are in that process.”
Education platforms are harnessing deepfake technology to create AI tutors that provide customised support to students. Rather than a generic video lecture, each learner can get tailored instruction and feedback from a virtual tutor who speaks their language and adjusts to their level.
For example, Anthropic built Claude, an AI assistant that can be applied in education. Claude can answer students’ natural language questions, explain concepts clearly, and identify knowledge gaps.
Such AI tutors make learning more effective, accessible, and inclusive. Students feel like they have an expert guide helping them master new skills and material.
AI and deepfake technology have enormous potential to enhance workforce training and education in immersive new ways too. As Drew Rose, CSO and founder of cybersecurity firm Living Security, explains: “Educators can leverage deepfakes to create immersive learning experiences. For instance, a history lesson might feature a ‘guest appearance’ by a historical figure, or a science lesson might have a renowned scientist explaining complex concepts.” Ivana Bartoletti, privacy and data protection expert at Wipro and author of An Artificial Revolution – On Power, Politics and AI, envisions similar applications.
“Deepfake technologies could provide an easier and less expensive way to train and visualise,” she says. “Students of medicine and nursing currently train with animatronic robots. They are expensive and require special control rooms. Generative AI and augmented or virtual reality headsets or practice rooms will be cheaper and allow for the generalisation, if not the gamification, of simulation.”
Medical students could gain experience diagnosing and treating simulated patients, while business students could practice high-stakes scenarios like negotiations without real-world consequences. These immersive, gamified environments enabled by AI and deepfakes also have vast potential for corporate training.
Bartoletti notes, “A similar use case could be made for other types of learning that require risky and skill-based experiences. The Air Force uses AI as adversaries in flight simulators, and humans have not beaten the best AIs since 2015.”
With reference to Source A, identify three harmful uses of deepfakes.
With reference to Source B and one other real-world example you have studied, explain why deepfakes may be used for beneficial purposes in today's world.
Compare what Source C and Source D reveal about the perspectives on deepfakes in the education sector.
With reference to the sources and your own knowledge, discuss whether the use of deepfakes in the educational sector is an incremental or transformational change.
Cloud networks allow for data storage and access over the internet, making data accessible from anywhere. This accessibility supports remote work, file sharing, and collaboration but also raises concerns about data security and control over personal information.
Evaluate the impact of cloud networks on data accessibility, considering the benefits for remote work and the potential security risks.
Students should be provided with the pre-release document ahead of the May 2018 HL paper 3 examination. This can be found under the 'Your tests' tab > supplemental materials > May 2018 HL paper 3 pre-release document: Accessibility.
Improving the accessibility of the curriculum for children with special educational needs and disabilities (SEND)
Source 1: Tayton School
Tayton School is a primary school that teaches 500 children aged between 5 and 12. There are three classes in each year group, with a maximum of 24 students in each class. The school’s motto is “Education for Everyone”, and inclusion is at the heart of the school’s mission.
The school’s Inclusion Department consists of five full-time staff, led by Sandra, and 10 learning support assistants who are active in working with the children. Sandra has recently produced a report on the students with special educational needs and disabilities (SEND) in the school, in which she found that the increasing number of students, and the types of SEND, mean that the school needs to invest in expanding the amount of support for the students (see Table 1).
Table 1: SEND at Tayton School

Sandra’s report argues that, next year, the work of the Inclusion Department would be more effective if the school purchased educational digital technologies, such as social robots and assistive technologies.
Source 2: Social robots in education
Sandra researched social robots and came back to the department meeting with this information:
In 2020, a report on the use of social robots in education was published by a prestigious university professor, who concluded that social robots have the potential to be a key player in education in the way textbooks and whiteboards have been in the past. A social robot has the potential to support students in ways that could never have been envisaged 20 years ago. However, there are significant technical limitations, particularly linked to social robots’ ability to interact with students, that will restrict their usability for the next few years.
Source 3: Mary sees the positives
Mary, one of the learning assistants at Tayton School, says:
“As a parent of two school-age children, I think the potential introduction of social robots has both advantages and disadvantages. My children thought the idea of having a robot that sits with them was very exciting, and I think they would do what the robot asks without questioning it. The robot will also be much more patient while they are learning their times tables!” (See Figure 1).
Figure 1: Students interacting with a social robot

Source 4: James has doubts
James, another learning assistant at Tayton School, is wary of the overuse of digital technology in schools for children with special needs based on his experiences in other schools. He has found some research that supports his ideas.

Machine learning (ML) allows systems to learn from data, enabling applications like image recognition in social media and fraud detection in finance. These applications rely on different types of machine learning, such as supervised learning, where algorithms are trained on labeled data, and unsupervised learning, where systems find patterns without labels.
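To make the distinction concrete, the sketch below uses invented data and scikit-learn (an assumed tool choice, not something specified above): a supervised classifier is fitted on labelled examples, while an unsupervised clustering algorithm groups unlabelled transactions so that points far from every cluster could be flagged for review.

```python
# Illustrative sketch of the two types of machine learning named above.
# Data, features and thresholds are invented for the example.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# --- Supervised learning: labelled examples map inputs to known outputs ---
# Tiny stand-in for image recognition: [brightness, edge_count] -> label
X_labelled = [[0.9, 12], [0.8, 10], [0.2, 3], [0.1, 2]]
y_labels = ["cat", "cat", "dog", "dog"]
classifier = LogisticRegression().fit(X_labelled, y_labels)
print(classifier.predict([[0.85, 11]]))  # predicts one of the trained labels

# --- Unsupervised learning: no labels, the system finds structure itself ---
# Grouping transactions by [amount, hour_of_day]; a point that sits far from
# every cluster centre could be flagged for fraud review
transactions = [[12, 9], [15, 10], [11, 11], [14, 9], [950, 3]]
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(transactions)
print(kmeans.labels_)  # cluster membership discovered without any labels
```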
Identify two types of machine learning and describe their uses.
Outline how supervised learning is applied in image recognition.
Explain how unsupervised learning helps in detecting fraud in financial transactions.
Evaluate the challenges of using machine learning for high-stakes decisions, such as in financial fraud detection, considering both accuracy and accountability.
Online learning
TailorEd is a free online learning system that personalizes students’ learning by providing teachers with data about how students are progressing in their courses. Students create a personal profile and work through the assignments at their own pace. Teachers can log in to the learning system to see how the students are progressing. However, concerns have been expressed about the amount of data that is being collected.
The school has found that when students access the course platform, some content is being blocked. The network administrator has been asked to investigate the situation. Teachers believe that it would be more appropriate to train the students to use the platform responsibly, rather than use technology to block their access to certain websites.
Identify two ways in which the TailorEd system could provide feedback to students.
Identify two ways in which the data collected about students’ academic progress could be used by TailorEd.
Outline how a firewall functions.
There are two possible methods for ensuring students use the TailorEd online learning system responsibly. They are: using technology to block students’ access to certain websites, and training the students to use the platform responsibly.
Analyse these two methods.
To what extent do the benefits of collecting students’ academic progress data outweigh the concerns of the students, teachers and parents?
Can digital technologies be used sustainably?
Many organizations claim that the most efficient use of information technology (IT) equipment, such as laptops and printers, is to replace them on a regular basis. For example, an organization’s strategy may be to do this every three years.
Other organizations purchase IT equipment that can easily be upgraded by increasing the storage and memory or upgrading the processing capabilities only when required. They claim they do not need to replace their IT equipment on such a regular basis and believe this is a more sustainable practice.
Evaluate the sustainability of these two strategies.
Discuss the decision of an art gallery owner to develop a virtual tour that is accessible online.
Using a Segway with machine learning capabilities?
The Segway Patroller is a two-wheeled, battery-powered electric vehicle. Recently, Segway Patrollers have been used for security purposes in cities as well as in public spaces such as concerts, railway stations and shopping malls.
The Segway Patroller can travel up to a speed of 20 kilometres per hour (about 12 miles per hour) and travel about 40 kilometres (25 miles) in distance before the battery needs to be recharged.
Figure 3: A Segway Patroller

Each Segway Patroller can be customized by adding features such as a GPS navigation system and machine learning capabilities.
The managers at Oliverstadt Station claim the introduction of upgraded Segways that have a GPS navigation system and machine learning capabilities would lead to improvements in the customer service provided.
Discuss whether the Segway Patrollers at Oliverstadt Station should be upgraded to include machine learning capabilities.
To what extent are employers responsible and accountable for employees’ health issues caused by the use of computers in the workplace, and when working from home?