Practice 2.6 Systems with authentic IB Digital Society (DS) exam-style questions for both SL and HL students. This question bank mirrors the Paper 1, 2, and 3 structure, covering key topics such as systems and structures, human behaviour and interaction, and digital technologies in society. Get instant solutions and detailed explanations, and build exam confidence with questions in the style of IB examiners.
Sentencing criminals using artificial intelligence (AI)
In 10 states in the United States, artificial intelligence (AI) software is used for sentencing criminals. Once criminals are found guilty, judges need to determine the lengths of their prison sentences. One factor used by judges is the likelihood of the criminal re-offending*.
The AI software uses machine learning to determine how likely it is that a criminal will re-offend. This result is presented as a percentage; for example, the criminal has a 90% chance of re-offending. Research has indicated that AI software is often, but not always, more reliable than human judges in predicting who is likely to re-offend.
There is general support for identifying people who are unlikely to re-offend, as they do not need to be sent to prisons that are already overcrowded.
Recently, Eric Loomis was sentenced by the state of Wisconsin using proprietary AI software. Eric had to answer over 100 questions to provide the AI software with enough information for it to decide the length of his sentence. When Eric was given a six-year sentence, he appealed and wanted to see the algorithms that led to this sentence. Eric lost the appeal.
On the other hand, the European Union (EU) has passed a law that allows citizens to challenge decisions made by algorithms in the criminal justice system.
* re-offending: committing another crime in the future
Identify two characteristics of artificial intelligence (AI) systems.
Outline one problem that may arise if proprietary software rather than open-source software is used to develop algorithms.
The developers of the AI software decided to use supervised machine learning to develop the algorithms in the sentencing software.
Identify two advantages of using supervised learning.
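To make the idea of supervised learning concrete, here is a minimal sketch of how a risk score like the one in the stimulus could be produced from labelled examples. The features, data, and k-nearest-neighbours method are illustrative assumptions, not the actual sentencing software's design.

```python
# Minimal sketch of supervised learning for a re-offending risk score.
# Each training example pairs input features with a known outcome label
# (1 = re-offended, 0 = did not); learning from labelled examples is
# exactly what makes the approach "supervised".

def knn_risk_score(train, query, k=3):
    """Estimate re-offence probability as the share of the k nearest
    labelled neighbours (squared Euclidean distance) that re-offended."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in train
    )
    nearest = [label for _, label in dists[:k]]
    return sum(nearest) / k

# Hypothetical features: (age at first offence, number of prior offences)
train = [
    ((17, 5), 1), ((19, 4), 1), ((22, 3), 1),
    ((35, 0), 0), ((40, 1), 0), ((50, 0), 0),
]
score = knn_risk_score(train, (18, 4))
print(f"Estimated chance of re-offending: {score:.0%}")
```

Because the labels come with the training data, the model's predictions can be checked directly against known outcomes, which is one of the advantages the question asks about.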
The developers of the AI software used visualizations as part of the development process.
Explain one reason why visualizations would be used as part of the development process.
Explain two problems the developers of the AI system could encounter when gathering the data that will be input into the AI system.
To what extent should the decisions of judges be based on algorithms rather than their knowledge and experience?
It is a paradox to brand a technological innovation as tainted goods by its very name. ‘Deepfake’ is a victim of its own capabilities. Negative connotations and recent incidents have pigeonholed this innovation in the taboo zone. The rise of deepfake technology has ushered in very interesting possibilities and challenges. This synthetic media, created through sophisticated artificial intelligence algorithms, has begun to infiltrate various sectors, raising intriguing questions about its potential impact on education and employability.
The dawn of deepfake technology introduces a realm of possibilities in education. Imagine medical students engaging in lifelike surgical simulations or language learners participating in authentic conversations. The potential for deepfake to revolutionise training scenarios is vast and could significantly enhance the educational experience. Beyond simulations, deepfake can transport students to historical events through realistic reenactments or facilitate virtual field trips, transcending the boundaries of traditional education. The immersive nature of deepfake content holds the promise of making learning more engaging and memorable.
However, while these potential abuses of the technology are real and concerning, that doesn't mean we should turn a blind eye to the technology's potential when it is used responsibly, says Jaime Donally, a well-known immersive learning expert.
“Typically, when we're hearing about it, it's in terms of the negative – impersonation and giving false claims,” Donally says. “But really, the technology has the power of bringing people from history alive through old images that we have using AI.”
Donally, a former math teacher and instructional technologist, has written about how a type of deepfake technology called Deep Nostalgia, which went viral in 2021, can allow students to form a stronger connection with the past and their personal family heritage. The technology, available on the MyHeritage app, allows images to be uploaded that are then turned into short animations thanks to AI technology.
Here are some of the ways in which teachers can use deepfake technology in the classroom through the MyHeritage app. Teachers have used the deepfake technology in the MyHeritage app to bring historical figures such as Amelia Earhart and Albert Einstein to life. One teacher Donally has communicated with used an animation of Frederick Douglass to help students connect with Douglass's famous 1852 speech about the meaning of the Fourth of July to enslaved Black Americans. Another teacher plans to use the app to have students interview a historical figure and create dialogue for them, then match the dialogue to the animation.
Donally herself has paired animations she's created with other types of immersive technology. “I layered it on top in augmented reality,” she says. “When you scan the photo of my grandfather, it all came to life. And it became something that was much more relevant to see in your real-world space.”
With proper supervision, students can use the technology to animate images of family members or local historical figures, and can experiment with augmented reality (AR) in the process. “It makes you want to learn more,” Donally says of animations created using deep fake technology. “It drives you into kind of the history and understanding a bit more, and I think it also helps you identify who you are in that process.”
Education platforms are harnessing deepfake technology to create AI tutors that provide customised support to students. Rather than a generic video lecture, each learner can get tailored instruction and feedback from a virtual tutor who speaks their language and adjusts to their level.
For example, Anthropic built Claude, an AI assistant designed specifically for education. Claude can answer students’ natural language questions, explain concepts clearly, and identify knowledge gaps.
Such AI tutors make learning more effective, accessible, and inclusive. Students feel like they have an expert guide helping them master new skills and material.
AI and deepfake technology have enormous potential to enhance workforce training and education in immersive new ways too. As Drew Rose, CSO and founder of cybersecurity firm Living Security, explains, “educators can leverage deepfakes to create immersive learning experiences. For instance, a history lesson might feature a ‘guest appearance’ by a historical figure, or a science lesson might have a renowned scientist explaining complex concepts.” Ivana Bartoletti, privacy and data protection expert at Wipro and author of An Artificial Revolution – On Power, Politics and AI envisions similar applications.
“Deepfake technologies could provide an easier and less expensive way to train and visualise,” she says. “Students of medicine and nursing currently train with animatronic robots. They are expensive and require special control rooms. Generative AI and augmented or virtual reality headsets or practice rooms will be cheaper and allow for the generalisation, if not the gamification, of simulation.”
Medical students could gain experience diagnosing and treating simulated patients, while business students could practice high-stakes scenarios like negotiations without real-world consequences. These immersive, gamified environments enabled by AI and deepfakes also have vast potential for corporate training.
Bartoletti notes, “A similar use case could be made for other types of learning that require risky and skill-based experiences. The Air Force uses AI as adversaries in flight simulators, and humans have not beaten the best AIs since 2015.”
With reference to Source A, identify three harmful uses of deepfakes.
With reference to Source B and one other real-world example you have studied, explain why deepfakes may be used for beneficial purposes in today's world.
Compare what Source C and Source D reveal about perspectives on deepfakes in the education sector.
With reference to the sources and your own knowledge, discuss whether the use of deepfakes in the educational sector is an incremental or transformational change.
Cloud networks allow for data storage and access over the internet, making data accessible from anywhere. This accessibility supports remote work, file sharing, and collaboration but also raises concerns about data security and control over personal information.
Evaluate the impact of cloud networks on data accessibility, considering the benefits for remote work and the potential security risks.
Define the term “finite” in the context of algorithms.
Identify two reasons why an algorithm should have well-defined inputs and outputs.
Explain why an algorithm must be unambiguous to function correctly.
Describe one example where the feasibility of an algorithm impacts its use in a real-world application.
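A single worked example can illustrate all of the properties the questions above ask about. The following sketch uses Euclid's greatest-common-divisor algorithm, a standard illustration chosen here for its simplicity; it is not drawn from the exam source.

```python
# Euclid's algorithm illustrates the key properties of an algorithm:
# - finite: the remainder strictly decreases each pass, so the loop must end;
# - well-defined inputs/outputs: two positive integers in, one integer out;
# - unambiguous: every step is a single, precisely defined operation.

def gcd(a: int, b: int) -> int:
    """Greatest common divisor of two positive integers."""
    while b != 0:          # terminates because b strictly decreases
        a, b = b, a % b    # one unambiguous step, no interpretation needed
    return a

print(gcd(48, 18))  # → 6
```

Feasibility matters too: this runs in a handful of steps even for very large numbers, whereas an algorithm that tried every possible divisor would be correct but far too slow for real-world use such as cryptography.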
Define the term ‘autonomous vehicle’.
Identify and explain the function of two sensors on an autonomous vehicle.
Explain how sensors would be used by autonomous vehicles to avoid obstacles in the road.
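As a concrete illustration of the sensor-driven decision-making this question describes, here is a hypothetical sketch of an obstacle-avoidance control decision. The sensor names, thresholds, and actions are illustrative assumptions, not any real vehicle's specification.

```python
# Hypothetical sketch: combining a LiDAR range reading with a camera
# detection to decide how an autonomous vehicle reacts to an obstacle.
# All thresholds below are illustrative assumptions.

def avoidance_action(lidar_distance_m: float, camera_sees_obstacle: bool) -> str:
    """Decide an action from a LiDAR distance and a camera detection flag."""
    if camera_sees_obstacle and lidar_distance_m < 5:
        return "emergency brake"          # obstacle confirmed and very close
    if camera_sees_obstacle and lidar_distance_m < 20:
        return "slow down and steer around"  # obstacle confirmed, time to react
    return "continue"                     # no confirmed obstacle in range

print(avoidance_action(3.0, True))    # → emergency brake
print(avoidance_action(15.0, True))   # → slow down and steer around
print(avoidance_action(50.0, False))  # → continue
```

The key idea, which an answer to the question should bring out, is sensor fusion: the camera identifies *what* is ahead while the LiDAR measures *how far away* it is, and the control software combines both to choose an action.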
Firewalls are critical for network security, acting as barriers between internal networks and external threats. They control incoming and outgoing traffic, protecting against unauthorized access and cyber attacks. However, configuring firewalls effectively can be challenging, especially in large organizations.
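The traffic control the stimulus describes can be made concrete with a small sketch of first-match packet filtering. The rules, addresses, and ports below are illustrative assumptions, not a real organisation's configuration.

```python
# Minimal sketch of firewall packet filtering: rules are checked in order,
# and the first rule matching the packet's source address and destination
# port decides whether the traffic is allowed. Rules here are illustrative.

import ipaddress

RULES = [
    {"src": "10.0.0.0/8", "port": 22,   "action": "allow"},  # internal SSH only
    {"src": "any",        "port": 443,  "action": "allow"},  # public HTTPS
    {"src": "any",        "port": None, "action": "deny"},   # default deny
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    for rule in RULES:
        src_ok = (rule["src"] == "any"
                  or ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"]))
        port_ok = rule["port"] is None or rule["port"] == dst_port
        if src_ok and port_ok:
            return rule["action"]
    return "deny"

print(filter_packet("10.1.2.3", 22))      # → allow (internal host)
print(filter_packet("203.0.113.9", 443))  # → allow (public web traffic)
print(filter_packet("203.0.113.9", 22))   # → deny  (external SSH blocked)
```

Even in this toy version, rule ordering matters: a misplaced rule can silently allow or block the wrong traffic, which hints at why firewall configuration is challenging at the scale of a large organisation.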
Evaluate the role of firewalls in securing organizational networks, considering their effectiveness and potential challenges in implementation.
Malicious software (malware) is a significant threat to users of personal devices, as it can steal sensitive information, disrupt services, or even cause financial losses. With increased connectivity, devices are more vulnerable to these attacks, raising ethical questions about responsibility in cybersecurity.
Evaluate the ethical responsibilities of software developers and users in preventing the spread of malicious software on personal devices.
Cameras in school
The principal at Flynn School has received requests from parents saying that they would like to monitor their children’s performance in school more closely. He is considering extending the school’s IT system by installing cameras linked to facial recognition software that can record student behaviour in lessons.
The facial recognition software can determine a student’s attention level and behaviour, such as identifying if they are listening, answering questions, talking with other students, or sleeping. The software uses machine learning to analyse each student’s behaviour and gives them a weekly score that is automatically emailed to their parents.
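To see what such an automated weekly score might involve, here is a hypothetical sketch of aggregating behaviour labels into a single number. The behaviour categories, weights, and scale are illustrative assumptions, not the actual software's method.

```python
# Hypothetical sketch: turning per-lesson behaviour labels into the weekly
# score the stimulus describes. Weights and scale are illustrative only.

WEIGHTS = {"listening": 1.0, "answering": 1.0, "talking": -0.5, "sleeping": -1.0}

def weekly_score(observations: list) -> int:
    """Average the weighted behaviour labels and map them onto a 0-100 scale."""
    raw = sum(WEIGHTS[b] for b in observations) / len(observations)  # in [-1, 1]
    return round((raw + 1) / 2 * 100)

print(weekly_score(["listening", "answering", "talking", "listening"]))  # → 81
```

Note how much judgement is hidden in the weights: deciding that "talking" is half as bad as "sleeping" is a design choice with real consequences for students, which is exactly the kind of issue the discussion question below invites.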
The principal claims that monitoring students’ behaviour more closely will improve the teaching and learning that takes place.
Discuss whether Flynn School should introduce a facial recognition system that uses machine learning to analyse each student’s behaviour and give them a score that is automatically emailed to their parents.
Moore’s Law has driven rapid advancements in technology by predicting that the number of transistors on a chip doubles approximately every two years. This trend has influenced the affordability, size, and power of devices like smartphones and laptops, though some predict Moore’s Law may be slowing down.
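The doubling trend described above can be sketched numerically. The starting transistor count is an illustrative assumption used only to show how quickly exponential doubling compounds.

```python
# Sketch of Moore's Law as stated above: transistor count doubles roughly
# every two years. The starting count of 1 billion is illustrative.

def projected_transistors(start_count: float, years: float, doubling_period: float = 2) -> float:
    """Project a transistor count forward, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

for years in (2, 10, 20):
    print(f"After {years:2d} years: {projected_transistors(1e9, years):.2e} transistors")
```

Starting from one billion transistors, the projection reaches roughly a trillion after twenty years, which illustrates why even a modest slowdown in the doubling period would compound into a large gap over a decade or two.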
Discuss the significance of Moore’s Law in shaping the development of personal computing devices, including potential consequences if the law’s trend no longer holds true.
In criminal justice, "black box" algorithms are increasingly used to make decisions about bail, parole, and sentencing. However, the lack of transparency and potential for bias raise serious ethical concerns about fairness and accountability.
Evaluate the challenges of implementing algorithmic transparency and accountability in criminal justice, particularly with “black box” algorithms.