Practice 5.2 Governance and human rights with authentic IB Digital Society (DS) exam questions for both SL and HL students. This question bank mirrors the structure of Papers 1, 2 and 3, covering key topics such as systems and structures, human behaviour and interaction, and digital technologies in society. Get instant solutions and detailed explanations, and build exam confidence with questions written in the style of IB examiners.
It is a paradox for a technological innovation to be branded as tainted goods by its very name, yet ‘deepfake’ is a victim of its own capabilities: negative connotations and recent incidents have pigeonholed the innovation as taboo. Even so, the rise of deepfake technology has ushered in fascinating possibilities as well as challenges. This synthetic media, created through sophisticated artificial intelligence algorithms, has begun to infiltrate various sectors, raising intriguing questions about its potential impact on education and employability.
The dawn of deepfake technology introduces a realm of possibilities in education. Imagine medical students engaging in lifelike surgical simulations or language learners participating in authentic conversations. The potential for deepfake to revolutionise training scenarios is vast and could significantly enhance the educational experience. Beyond simulations, deepfake can transport students to historical events through realistic reenactments or facilitate virtual field trips, transcending the boundaries of traditional education. The immersive nature of deepfake content holds the promise of making learning more engaging and memorable.
However, while these potential abuses of the technology are real and concerning, that does not mean we should turn a blind eye to the technology’s potential when it is used responsibly, says Jaime Donally, a well-known immersive learning expert.
“Typically, when we're hearing about it, it's in terms of the negative – impersonation and giving false claims,” Donally says. “But really, the technology has the power of bringing people from history alive through old images that we have using AI.”
Donally, a former math teacher and instructional technologist, has written about how Deep Nostalgia, a type of deepfake technology that went viral in 2021, can allow students to form a stronger connection with the past and with their personal family heritage. The technology, available on the MyHeritage app, allows uploaded images to be turned into short animations by AI.
Teachers have used the deepfake technology in the MyHeritage app to bring historical figures such as Amelia Earhart and Albert Einstein to life. One teacher Donally has communicated with used an animation of Frederick Douglass to help students connect with Douglass’s famous 1852 speech about the meaning of the Fourth of July to enslaved Black Americans. Another teacher plans to use the app to have students interview a historical figure and create dialogue for them, then match the dialogue to the animation.
Donally herself has paired animations she's created with other types of immersive technology. “I layered it on top in augmented reality,” she says. “When you scan the photo of my grandfather, it all came to life. And it became something that was much more relevant to see in your real-world space.”
With proper supervision, students can use the technology to animate images of family members or local historical figures, and can experiment with augmented reality (AR) in the process. “It makes you want to learn more,” Donally says of animations created using deepfake technology. “It drives you into kind of the history and understanding a bit more, and I think it also helps you identify who you are in that process.”
Education platforms are harnessing deepfake technology to create AI tutors that provide customised support to students. Rather than a generic video lecture, each learner can get tailored instruction and feedback from a virtual tutor who speaks their language and adjusts to their level.
For example, Anthropic built Claude, an AI assistant designed specifically for education. Claude can answer students’ natural language questions, explain concepts clearly, and identify knowledge gaps.
Such AI tutors make learning more effective, accessible, and inclusive. Students feel like they have an expert guide helping them master new skills and material.
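To make the idea concrete, here is a minimal sketch of how an education platform might wire such a virtual tutor to a language model. It assumes the Anthropic Python SDK; the model name, tutoring prompt, and sample question are illustrative placeholders rather than details taken from the source.

```python
# Minimal sketch of a personalised AI tutor backed by a language model.
# Assumes the Anthropic Python SDK (pip install anthropic) with an API key
# in the ANTHROPIC_API_KEY environment variable; the model name and the
# tutoring prompt below are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def tutor_reply(student_question: str, level: str = "beginner") -> str:
    """Answer at the student's level and point out knowledge gaps."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=500,
        system=(
            "You are a patient tutor. Explain concepts clearly at a "
            f"{level} level and identify gaps in the student's understanding."
        ),
        messages=[{"role": "user", "content": student_question}],
    )
    return response.content[0].text

print(tutor_reply("Why can't you divide by zero?"))
```

The personalisation in this sketch lives entirely in the prompt; a real platform would also track each learner's history so that the level setting adapts over time.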
AI and deepfake technology also have enormous potential to enhance workforce training and education in immersive new ways. As Drew Rose, CSO and founder of cybersecurity firm Living Security, explains, “educators can leverage deepfakes to create immersive learning experiences. For instance, a history lesson might feature a ‘guest appearance’ by a historical figure, or a science lesson might have a renowned scientist explaining complex concepts.” Ivana Bartoletti, privacy and data protection expert at Wipro and author of An Artificial Revolution: On Power, Politics and AI, envisions similar applications.
“Deepfake technologies could provide an easier and less expensive way to train and visualise,” she says. “Students of medicine and nursing currently train with animatronic robots. They are expensive and require special control rooms. Generative AI and augmented or virtual reality headsets or practice rooms will be cheaper and allow for the generalisation, if not the gamification, of simulation.”
Medical students could gain experience diagnosing and treating simulated patients, while business students could practice high-stakes scenarios like negotiations without real-world consequences. These immersive, gamified environments enabled by AI and deepfakes also have vast potential for corporate training.
Bartoletti notes, “A similar use case could be made for other types of learning that require risky and skill-based experiences. The Air Force uses AI as adversaries in flight simulators, and humans have not beaten the best AIs since 2015.”
With reference to Source A, identify three harmful uses of deepfakes.
With reference to Source B and one other real-world example you have studied, explain why deepfakes may be used for beneficial purposes in today's world.
Compare what Source C and Source D reveal about perspectives on deepfakes in the education sector.
With reference to the sources and your own knowledge, discuss whether the use of deepfakes in the educational sector is an incremental or transformational change.
The UN Secretary-General has called for states to conclude, by 2026, a new international treaty to prohibit weapons systems that operate without human control or oversight and that cannot be used in compliance with international humanitarian law; the treaty should also regulate all other types of autonomous weapons systems. The report reflects 58 submissions made on behalf of over 73 countries, together with 33 submissions from the International Committee of the Red Cross and civil society groups. The UN General Assembly is regarded as a suitable venue for inclusive discussions on autonomous weapons systems, given the international peace and security concerns they raise.
On 27 March 2020, the Prime Minister of Libya, Faiez Serraj, announced the commencement of Operation PEACE STORM, which moved the Government of National Accord Affiliated Forces (GNA-AF) onto the offensive along the coastal littoral. The combination of the Gabya-class frigates and Korkut short-range air defence systems provided a capability to place a mobile air defence bubble around GNA-AF ground units, which took Hafter Affiliated Forces (HAF) air assets out of the military equation. Libya classifies HAF as a terrorist rebel organization. The enhanced operational intelligence capability included Turkish-operated signals intelligence and the intelligence, surveillance and reconnaissance provided by Bayraktar TB-2 and probably TAI Anka S unmanned combat aerial vehicles. This allowed for the development of an asymmetrical war of attrition designed to degrade HAF ground unit capability. The GNA-AF breakout of Tripoli was supported with Firtina T155 155 mm self-propelled guns and T-122 Sakarya multi-launch rocket systems firing extended-range precision munitions against the mid-twentieth century main battle tanks and heavy artillery used by HAF.
Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability. The unmanned combat aerial vehicles and the small drone intelligence, surveillance and reconnaissance capability of HAF were neutralized by electronic jamming from the Koral electronic warfare system.
In the autumn of 2001, however, the United States was unwilling to launch a full-scale land invasion in a region 7,000 miles from home. Instead, a plan evolved to send into Afghanistan a small number of CIA agents and Special Forces in support of anti-Taliban militias, with the aid of the US Air Force. That first October night was a powerful display of coordination involving laser-guided munitions dropped from the air and Tomahawk cruise missiles launched from the sea. General Tommy Franks, who then led the US Central Command (CENTCOM), the military command overseeing operations in Afghanistan, wrote in his memoir American Soldier that the assault involved in total some 40,000 personnel, 393 aircraft, and 32 ships. But one aircraft did not feature at all in the Air Force’s complex planning: a tiny, CIA-controlled, propeller-driven spy drone, Predator tailfin number 3034, which had crept into Afghanistan some hours earlier. It now hangs suspended in the Smithsonian Air and Space Museum in Washington, D.C., its place in history assured. Yet its actions that first night of the war – in which numerous agencies in the vast US military-intelligence machine each played sharply contradictory roles – remain steeped in controversy.
Human Rights Watch released a report stating that representatives from around 50 countries would meet in the summer of 2021 at the UN to discuss worldwide policy alignment on ‘killer robots’, or ‘lethal autonomous weapons systems’. In its report, Human Rights Watch expressed objections to delegating lethal force to machines in the absence of meaningful human control. Bonnie Docherty, senior arms researcher at Human Rights Watch, said: ‘The fundamental moral, legal and security concerns raised by autonomous weapons systems warrant a strong and urgent response in the form of a new international treaty ... International law needs to be expanded to create new rules that ensure human control and accountability in the use of force.’ Human Rights Watch proposes a treaty covering all weapons that operate autonomously, with limitations and restrictions such as a ban on killer robots, and with repeated insistence that meaningful human control be involved in the selection and engagement of targets. The report goes on to define the scope of ‘meaningful human control’ so as to ensure that humans have access to the data, risks and potential impacts prior to authorizing an attack.
With reference to Source A, identify two different or unexpected impacts of ‘killer robots’.
With reference to Source D, explain why it may be difficult to reach global agreement on ‘killer robot’ policy.
Compare and contrast how Sources B and C present events involving unmanned combat aerial vehicles.
With reference to the sources and your own knowledge, evaluate the decision to ban automated military technology.
Drones are widely used for surveillance in law enforcement and border control. While they enhance monitoring capabilities and can improve public safety, drones also raise concerns about privacy, consent, and the potential misuse of surveillance technology in public and private spaces.
Discuss the impact of drone technology on public surveillance and privacy, considering both the benefits for security and the ethical implications for individual privacy rights.
Firewalls are critical for network security, acting as barriers between internal networks and external threats. They control incoming and outgoing traffic, protecting against unauthorized access and cyber attacks. However, configuring firewalls effectively can be challenging, especially in large organizations.
Evaluate the role of firewalls in securing organizational networks, considering their effectiveness and potential challenges in implementation.
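As an illustration of the mechanism being evaluated, the sketch below models the core of firewall behaviour: each connection is checked against an ordered rule list, the first matching rule wins, and unmatched traffic is denied by default. It is a simplified toy model; the rules, ports and field names are invented and do not come from any real firewall product.

```python
# Simplified model of stateless firewall rule evaluation: first matching
# rule wins, and anything unmatched is denied by default. The rules and
# ports below are illustrative, not taken from a real product.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str               # "allow" or "deny"
    protocol: str             # "tcp", "udp", or "*" for any
    dest_port: Optional[int]  # None matches any port

    def matches(self, protocol: str, dest_port: int) -> bool:
        return (self.protocol in ("*", protocol)
                and self.dest_port in (None, dest_port))

# Ordered rule list: block a risky legacy service, permit web and mail.
RULES = [
    Rule("deny",  "tcp", 23),   # block Telnet explicitly
    Rule("allow", "tcp", 443),  # HTTPS
    Rule("allow", "tcp", 25),   # SMTP
]

def check(protocol: str, dest_port: int) -> str:
    """Return the action for a connection: first match, else default deny."""
    for rule in RULES:
        if rule.matches(protocol, dest_port):
            return rule.action
    return "deny"  # default deny: unlisted traffic is dropped

print(check("tcp", 443))   # allow
print(check("udp", 5353))  # deny, since no rule matches
```

The configuration challenge mentioned above follows directly from this model: in a large organization the ordered list grows to thousands of rules, where ordering mistakes or overly broad matches can silently change what is allowed.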
In criminal justice, "black box" algorithms are increasingly used to make decisions about bail, parole, and sentencing. However, the lack of transparency and potential for bias raise serious ethical concerns about fairness and accountability.
Evaluate the challenges of implementing algorithmic transparency and accountability in criminal justice, particularly with “black box” algorithms.
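One concrete form of the transparency at issue is the ability to decompose a risk score into per-feature contributions that a defendant could inspect and contest. The sketch below contrasts an opaque score with an auditable one; the feature names and weights are invented for illustration and do not describe any real criminal-justice tool.

```python
# Contrast between a black-box style score (a bare number) and a
# transparent one (the same number decomposed by feature). The feature
# names and weights are invented; no real sentencing tool is depicted.

WEIGHTS = {"prior_offences": 0.6, "age_under_25": 0.3, "employed": -0.4}

def opaque_score(features: dict) -> float:
    """Black-box style: only the final number is exposed."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def transparent_score(features: dict):
    """Auditable style: the score plus each feature's contribution."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

defendant = {"prior_offences": 2, "age_under_25": 1, "employed": 1}

print(opaque_score(defendant))   # 1.1, with nothing for the defence to contest
score, reasons = transparent_score(defendant)
print(score, reasons)            # same score, but every factor is visible
```

Real black-box tools are far harder to open up than this linear example, which is precisely the accountability problem the question raises: when contributions cannot be extracted, neither courts nor defendants can check the decision.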
On 23 December 2011, an e-card with the subject ‘Merry Christmas!’ was supposedly sent by the US President’s office (from ‘jeff.jones@whitehouse.org’) to a massive number of recipients. Recipients who clicked to download and open the card (a .zip file) saw an animated Christmas tree while a trojan virus accessed their saved documents and passwords, and uploaded them to a server in Belarus.
Outline four steps in the process of how victims opening the e-card resulted in their files being uploaded to servers in Belarus.
In response to the news about the e-card trojan virus, some employees decided to search for, download and install FREE email protection software for their work computers instead of waiting for instructions from their employer. Evaluate this decision.
To what extent are employers responsible and accountable for employees’ health issues caused by the use of computers in the workplace, and when working from home?
Fake news
We see and hear news every day and trust that the information provided is accurate. That belief may soon end.
Artificial intelligence (AI) software is now being developed that can produce fake video footage of public figures using recordings of their own voices. Using as little as one minute of user-generated content (data), it can reproduce a particular person’s voice. The developer of this software demonstrated the results by using the voices of Bill Clinton, George Bush and Barack Obama in a computer-generated conversation.
Once a person’s voice has been reproduced, a fake video can be created by processing hundreds of videos of the person’s face. Video footage of politicians is often used, as there is so much data available online.
Law professor John Silverman commented that, as humans, we tend to believe what we see, and that the growing number of tools for making fake media indistinguishable from real media is going to prove a major challenge in the future.
Discuss the claim that companies that develop software capable of creating fake videos of politicians should be held accountable for the fake videos posted by users of their software on social media platforms.
AI-driven algorithms on social media platforms play a significant role in shaping public opinion by curating political content and advertisements based on users' behavior. While this personalization can engage voters, it also raises concerns about the concentration of power in the hands of tech companies, the spread of misinformation, and the potential manipulation of public opinion.
Evaluate the extent to which AI-driven content personalization influences political power dynamics, considering both the benefits of increased voter engagement and the risks of biased information and manipulation.
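The mechanism driving these concerns, ranking political content by predicted engagement, can be shown in miniature. The following toy model uses invented topics and click data purely to illustrate how engagement-driven ranking narrows what a user sees.

```python
# Toy model of engagement-driven feed ranking: items on topics the user
# already clicked are boosted, which narrows the feed over time.
# Topics, items and click data are invented for illustration.
from collections import Counter

# Click history: this user has mostly engaged with one political topic.
clicks = ["tax_policy", "tax_policy", "immigration", "tax_policy"]
affinity = Counter(clicks)  # Counter({'tax_policy': 3, 'immigration': 1})

candidate_items = [
    ("Op-ed on tax policy", "tax_policy"),
    ("Report on a new climate law", "climate"),
    ("Immigration debate recap", "immigration"),
    ("Tax policy explainer", "tax_policy"),
]

def rank_feed(items, topic_affinity):
    """Order items by the user's measured affinity for their topic."""
    return sorted(items, key=lambda item: topic_affinity[item[1]], reverse=True)

for title, topic in rank_feed(candidate_items, affinity):
    print(f"affinity={affinity[topic]}  {title}")
# Tax-policy items dominate the top of the feed, while the climate story
# sinks: not because the user rejected it, but because it was never shown.
```

Counter returns 0 for topics never clicked, so unseen content is always ranked last; this is a tiny stand-in for the feedback loop the question asks about, where engagement data shapes future exposure.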
Evaluate one intervention used in the music industry to advocate for better conditions for new music artists.