Practice Topic 2 - Concepts - Digital Society: authentic IB Digital Society (DS) exam-style questions for both SL and HL students. This question bank mirrors the structure of Papers 1, 2 and 3, covering key topics such as systems and structures, human behavior and interaction, and digital technologies in society. Get instant solutions and detailed explanations, and build exam confidence with questions written in the style of IB examiners.
Sentencing criminals using artificial intelligence (AI)
In 10 states in the United States, artificial intelligence (AI) software is used for sentencing criminals. Once criminals are found guilty, judges need to determine the lengths of their prison sentences. One factor used by judges is the likelihood of the criminal re-offending*.
The AI software uses machine learning to determine how likely it is that a criminal will re-offend. This result is presented as a percentage; for example, the criminal has a 90% chance of re-offending. Research has indicated that AI software is often, but not always, more reliable than human judges in predicting who is likely to re-offend.
There is general support for identifying people who are unlikely to re-offend, as they do not need to be sent to prisons that are already overcrowded.
Recently, Eric Loomis was sentenced by the state of Wisconsin using proprietary AI software. Eric had to answer over 100 questions to provide the AI software with enough information for it to decide the length of his sentence. When Eric was given a six-year sentence, he appealed and wanted to see the algorithms that led to this sentence. Eric lost the appeal.
On the other hand, the European Union (EU) has passed a law that allows citizens to challenge decisions made by algorithms in the criminal justice system.
* re-offending: committing another crime in the future
Identify two characteristics of artificial intelligence (AI) systems.
Outline one problem that may arise if proprietary software rather than open-source software is used to develop algorithms.
The developers of the AI software decided to use supervised machine learning to develop the algorithms in the sentencing software.
Identify two advantages of using supervised learning.
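To make the idea concrete, here is a minimal Python sketch of supervised learning applied to a scenario like the one in the stimulus. It assumes scikit-learn is available, and the features (age, prior offences, employment status) and all data values are entirely hypothetical:

```python
# Minimal supervised-learning sketch: estimating re-offending risk.
# Assumes scikit-learn; features and data values are hypothetical.
from sklearn.linear_model import LogisticRegression

# Labelled training data: 1 = re-offended, 0 = did not re-offend.
X_train = [[25, 2, 0], [48, 0, 1], [31, 5, 0], [60, 1, 1],
           [19, 3, 0], [40, 0, 1], [23, 4, 0], [55, 1, 1]]
y_train = [1, 0, 1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X_train, y_train)

# predict_proba gives the probability of each class; index 1 is the
# probability of re-offending, reported as a percentage as in the stimulus.
risk = model.predict_proba([[30, 3, 0]])[0][1]
print(f"Estimated chance of re-offending: {risk:.0%}")
```

Because the model is trained on labelled outcomes, its predictions can be checked against known results, which is one reason supervised learning suits this kind of task.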
The developers of the AI software used visualizations as part of the development process.
Explain one reason why visualizations would be used as part of the development process.
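One way this could look in practice: plotting the distribution of the model's predicted risk scores during development can reveal skewed or implausible outputs before deployment. A minimal sketch, assuming matplotlib and hypothetical scores:

```python
# Sketch: visualizing the spread of predicted risk scores during development.
# Assumes matplotlib; the scores below are hypothetical.
import matplotlib.pyplot as plt

predicted_risks = [0.12, 0.35, 0.41, 0.55, 0.58, 0.62, 0.71, 0.90]

plt.hist(predicted_risks, bins=5, edgecolor="black")
plt.xlabel("Predicted probability of re-offending")
plt.ylabel("Number of offenders")
plt.title("Distribution of model risk scores")
plt.show()
```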
Explain two problems the developers of the AI system could encounter when gathering the data that will be input into the AI system.
To what extent should the decisions of judges be based on algorithms rather than their knowledge and experience?
It is a paradox that a technological innovation is branded as tainted goods by its very name: ‘deepfake’ is a victim of its own capabilities. Negative connotations and recent incidents have pigeonholed the innovation in a taboo zone. Yet the rise of deepfake technology has ushered in fascinating possibilities and challenges. This synthetic media, created through sophisticated artificial intelligence algorithms, has begun to infiltrate various sectors, raising intriguing questions about its potential impact on education and employability.
The dawn of deepfake technology introduces a realm of possibilities in education. Imagine medical students engaging in lifelike surgical simulations or language learners participating in authentic conversations. The potential for deepfake to revolutionise training scenarios is vast and could significantly enhance the educational experience. Beyond simulations, deepfake can transport students to historical events through realistic reenactments or facilitate virtual field trips, transcending the boundaries of traditional education. The immersive nature of deepfake content holds the promise of making learning more engaging and memorable.
However, while the potential abuses of the technology are real and concerning, that doesn't mean we should turn a blind eye to its potential when it is used responsibly, says Jaime Donally, a well-known immersive learning expert.
“Typically, when we're hearing about it, it's in terms of the negative – impersonation and giving false claims,” Donally says. “But really, the technology has the power of bringing people from history alive through old images that we have using AI.”
Donally, a former math teacher and instructional technologist, has written about how a type of deepfake technology called Deep Nostalgia, which went viral in 2021, can allow students to form a stronger connection with the past and with their personal family heritage. The technology, available in the MyHeritage app, allows images to be uploaded and then turned into short animations by AI.
Teachers have used the deepfake technology in the MyHeritage app in the classroom in several ways, for example to bring historical figures such as Amelia Earhart and Albert Einstein to life. One teacher Donally has communicated with used an animation of Frederick Douglass to help students connect with Douglass's famous 1852 speech about the meaning of the Fourth of July to enslaved Black Americans. Another teacher plans to use the app to have students interview a historical figure and create dialogue for them, then match the dialogue to the animation.
Donally herself has paired animations she's created with other types of immersive technology. “I layered it on top in augmented reality,” she says. “When you scan the photo of my grandfather, it all came to life. And it became something that was much more relevant to see in your real-world space.”
With proper supervision, students can use the technology to animate images of family members or local historical figures, and can experiment with augmented reality (AR) in the process. “It makes you want to learn more,” Donally says of animations created using deepfake technology. “It drives you into kind of the history and understanding a bit more, and I think it also helps you identify who you are in that process.”
Education platforms are harnessing deepfake technology to create AI tutors that provide customised support to students. Rather than a generic video lecture, each learner can get tailored instruction and feedback from a virtual tutor who speaks their language and adjusts to their level.
For example, Anthropic built Claude, an AI assistant that is also used in education. Claude can answer students’ natural language questions, explain concepts clearly, and identify knowledge gaps.
Such AI tutors make learning more effective, accessible, and inclusive. Students feel like they have an expert guide helping them master new skills and material.
AI and deepfake technology have enormous potential to enhance workforce training and education in immersive new ways too. As Drew Rose, CSO and founder of cybersecurity firm Living Security, explains, “educators can leverage deepfakes to create immersive learning experiences. For instance, a history lesson might feature a ‘guest appearance’ by a historical figure, or a science lesson might have a renowned scientist explaining complex concepts.” Ivana Bartoletti, privacy and data protection expert at Wipro and author of An Artificial Revolution: On Power, Politics and AI, envisions similar applications.
“Deepfake technologies could provide an easier and less expensive way to train and visualise,” she says. “Students of medicine and nursing currently train with animatronic robots. They are expensive and require special control rooms. Generative AI and augmented or virtual reality headsets or practice rooms will be cheaper and allow for the generalisation, if not the gamification, of simulation.”
Medical students could gain experience diagnosing and treating simulated patients, while business students could practice high-stakes scenarios like negotiations without real-world consequences. These immersive, gamified environments enabled by AI and deepfakes also have vast potential for corporate training.
Bartoletti notes, “A similar use case could be made for other types of learning that require risky and skill-based experiences. The Air Force uses AI as adversaries in flight simulators, and humans have not beaten the best AIs since 2015.”
With reference to Source A, identify three harmful uses of deepfakes.
With reference to Source B and one other real-world example you have studied, explain why deepfakes may be used for beneficial purposes in today's world.
Compare what Source C and Source D reveal about perspectives on deepfakes in the education sector.
With reference to the sources and your own knowledge, discuss whether the use of deepfakes in the educational sector is an incremental or transformational change.
Cloud networks allow for data storage and access over the internet, making data accessible from anywhere. This accessibility supports remote work, file sharing, and collaboration but also raises concerns about data security and control over personal information.
Evaluate the impact of cloud networks on data accessibility, considering the benefits for remote work and the potential security risks.
The UN secretary-general has called for states to conclude, by 2026, a new international treaty to prohibit weapons systems that operate without human control or oversight and that cannot be used in compliance with international humanitarian law. The treaty should regulate all types of autonomous weapons systems. The report reflects 58 submissions from over 73 countries, as well as 33 submissions from the International Committee of the Red Cross and civil society groups. The UN General Assembly is considered a venue for inclusive discussions on autonomous weapons systems, given the international peace and security concerns they raise.
On 27 March 2020, the Prime Minister of Libya, Faiez Serraj, announced the commencement of Operation PEACE STORM, which moved GNA-AF to the offensive along the coastal littoral. The combination of the Gabya-class frigates and Korkut short-range air defence systems provided a capability to place a mobile air defence bubble around GNA-AF ground units, which took Haftar Affiliated Forces (HAF) air assets out of the military equation. Libya classifies HAF as a terrorist rebel organization. The enhanced operational intelligence capability included Turkish-operated signal intelligence and the intelligence, surveillance and reconnaissance provided by Bayraktar TB-2 and probably TAI Anka S unmanned combat aerial vehicles. This allowed for the development of an asymmetrical war of attrition designed to degrade HAF ground unit capability. The GNA-AF breakout of Tripoli was supported with Firtina T155 155 mm self-propelled guns and T-122 Sakarya multi-launch rocket systems firing extended range precision munitions against the mid-twentieth century main battle tanks and heavy artillery used by HAF.
Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability. The unmanned combat aerial vehicles and the small drone intelligence, surveillance and reconnaissance capability of HAF were neutralized by electronic jamming from the Koral electronic warfare system.
In the autumn of 2001, however, the United States was unwilling to launch a full-scale land invasion in a region 7000 miles from home. Instead, a plan evolved to send into Afghanistan a small number of CIA agents and Special Forces in support of anti-Taliban militias, with the aid of the US Air Force. That first October night was a powerful display of coordination involving laser-guided munitions dropped from the air and Tomahawk cruise missiles launched from the sea. General Tommy Franks, who then led the US Central Command (CENTCOM), the military command overseeing operations in Afghanistan, wrote in his memoir American Soldier that the assault involved in total some 40,000 personnel, 393 aircraft, and 32 ships. But one aircraft did not feature at all in the Air Force’s complex planning: a tiny, CIA-controlled, propeller-driven spy drone, Predator, tailfin number 3034, which had crept into Afghanistan some hours earlier. It now hangs suspended in the Smithsonian Air and Space Museum in Washington, D.C., its place in history assured. Yet its actions that first night of the war – in which numerous agencies in the vast US military-intelligence machine each played sharply contradictory roles – remain steeped in controversy.
Human Rights Watch released a report stating that representatives from around 50 countries will meet in the summer of 2021 at the UN to discuss worldwide policy alignment on ‘killer robots’, or ‘lethal autonomous weapons systems’. In the report, Human Rights Watch objected to delegating lethal force to machines in the absence of meaningful human control. Bonnie Docherty, senior arms researcher at Human Rights Watch, said: ‘The fundamental moral, legal and security concerns raised by autonomous weapons systems warrant a strong and urgent response in the form of a new international treaty ... International law needs to be expanded to create new rules that ensure human control and accountability in the use of force.’ Human Rights Watch proposes a treaty covering all weapons that operate autonomously, with limitations and restrictions such as a ban on killer robots and a requirement that meaningful human control be involved in the selection and engagement of targets. The report goes on to define the scope of ‘meaningful human control’, to ensure that humans have access to the data, risks and potential impacts before authorizing an attack.
With reference to Source A, identify two different or unexpected impacts of ‘killer robots’.
With reference to Source D, explain why it may be difficult to reach global agreement on ‘killer robot’ policy.
Compare and contrast how Sources B and C present their accounts of events involving unmanned combat aerial vehicles.
With reference to the sources and your own knowledge, evaluate the decision to ban automated military technology.
Drones are widely used for surveillance in law enforcement and border control. While they enhance monitoring capabilities and can improve public safety, drones also raise concerns about privacy, consent, and the potential misuse of surveillance technology in public and private spaces.
Discuss the impact of drone technology on public surveillance and privacy, considering both the benefits for security and the ethical implications for individual privacy rights.
Define the term “finite” in the context of algorithms.
Identify two reasons why an algorithm should have well-defined inputs and outputs.
Explain why an algorithm must be unambiguous to function correctly.
Describe one example where the feasibility of an algorithm impacts its use in a real-world application.
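These properties can be seen together in a short example. In the binary search sketch below (Python; the data are hypothetical), the inputs and output are well defined, every step is unambiguous, the loop is finite because the search range halves on each pass, and each step uses only basic, feasible operations:

```python
def binary_search(sorted_items, target):
    """Well-defined input: a sorted list and a target value.
    Well-defined output: the index of target, or -1 if absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:                   # finite: the range shrinks each pass
        mid = (low + high) // 2          # unambiguous: one exact rule per step
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(binary_search([2, 5, 8, 12, 21], 12))  # -> 3
```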
Define the term ‘autonomous vehicle’.
Identify two sensors on an autonomous vehicle and explain the function of each.
Explain how sensors would be used by autonomous vehicles to avoid obstacles in the road.
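As a simplified illustration of the decision logic such a vehicle might apply (the sensor names, thresholds, and readings below are hypothetical, not any manufacturer's actual system):

```python
# Hedged sketch of obstacle-avoidance logic an autonomous vehicle might use.
# Sensor names, threshold, and readings are hypothetical.

BRAKING_DISTANCE_M = 10.0

def react_to_obstacle(lidar_distance_m, camera_detects_obstacle):
    """Combine two sensor inputs to decide on an action."""
    if camera_detects_obstacle and lidar_distance_m < BRAKING_DISTANCE_M:
        return "brake"          # obstacle confirmed and close: stop
    if camera_detects_obstacle:
        return "slow_down"      # obstacle seen but still distant
    return "maintain_speed"     # no obstacle detected

print(react_to_obstacle(lidar_distance_m=6.2, camera_detects_obstacle=True))   # brake
print(react_to_obstacle(lidar_distance_m=40.0, camera_detects_obstacle=False)) # maintain_speed
```

Combining readings from more than one sensor, as here, reduces the chance that a single faulty reading causes a wrong decision.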
Firewalls are critical for network security, acting as barriers between internal networks and external threats. They control incoming and outgoing traffic, protecting against unauthorized access and cyber attacks. However, configuring firewalls effectively can be challenging, especially in large organizations.
Evaluate the role of firewalls in securing organizational networks, considering their effectiveness and potential challenges in implementation.
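The core idea, ordered rules applied to each packet with a default-deny fallback, can be sketched in a few lines; the rules and fields below are hypothetical and far simpler than a real firewall such as iptables:

```python
# Minimal sketch of the rule-matching idea behind a firewall.
# The rules and fields are hypothetical; real firewalls apply
# ordered rules to every packet and stop at the first match.

RULES = [
    {"port": 22,  "source": "10.0.0.0/8", "action": "allow"},  # SSH from internal net
    {"port": 22,  "source": "any",        "action": "deny"},   # SSH from elsewhere
    {"port": 443, "source": "any",        "action": "allow"},  # HTTPS from anywhere
]

def filter_packet(port, source_is_internal):
    for rule in RULES:                       # first matching rule wins
        if rule["port"] != port:
            continue
        if rule["source"] == "any" or source_is_internal:
            return rule["action"]
    return "deny"                            # default-deny if nothing matches

print(filter_packet(22, source_is_internal=False))   # deny
print(filter_packet(443, source_is_internal=False))  # allow
```

The configuration challenge mentioned above follows directly from this design: in a large organization the rule list grows long, rule order matters, and a single misplaced rule can silently block legitimate traffic or admit hostile traffic.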
Machine learning (ML) allows systems to learn from data, enabling applications like image recognition in social media and fraud detection in finance. These applications rely on different types of machine learning, such as supervised learning, where algorithms are trained on labeled data, and unsupervised learning, where systems find patterns without labels.
Identify two types of machine learning and describe their uses.
Outline how supervised learning is applied in image recognition.
Explain how unsupervised learning helps in detecting fraud in financial transactions.
Evaluate the challenges of using machine learning for high-stakes decisions, such as in financial fraud detection, considering both accuracy and accountability.
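For the unsupervised case in particular, here is a minimal sketch (assuming scikit-learn's IsolationForest; the transaction data are hypothetical) of flagging outlying transactions without any fraud labels:

```python
# Hedged sketch: unsupervised anomaly detection for fraud-like outliers.
# Assumes scikit-learn; the transaction data are hypothetical.
from sklearn.ensemble import IsolationForest

# Each row: [amount_usd, hour_of_day]; no fraud labels are provided.
transactions = [[12, 9], [30, 14], [25, 11], [18, 16],
                [22, 13], [15, 10], [9500, 3], [27, 15]]

model = IsolationForest(contamination=0.1, random_state=0).fit(transactions)

# predict() returns -1 for points the model treats as anomalies;
# here it would likely flag the large 3 a.m. transaction for review.
for tx, flag in zip(transactions, model.predict(transactions)):
    if flag == -1:
        print("Flag for review:", tx)
```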
Malicious software (malware) is a significant threat to users of personal devices, as it can steal sensitive information, disrupt services, or even cause financial losses. With increased connectivity, devices are more vulnerable to these attacks, raising ethical questions about responsibility in cybersecurity.
Evaluate the ethical responsibilities of software developers and users in preventing the spread of malicious software on personal devices.