Topic 5 - Challenges and Interventions - Digital Society (HL)
Practice Topic 5 - Challenges and Interventions - Digital Society (HL) with authentic IB Digital Society (DS) exam questions for both SL and HL students. This question bank mirrors the structure of Papers 1, 2 and 3, covering key topics like systems and structures, human behaviour and interaction, and digital technologies in society. Get instant solutions and detailed explanations, and build exam confidence with questions in the style of IB examiners.
It is a paradox for a technological innovation to be branded as tainted goods by its very name, yet that is the fate of ‘deepfake’, a victim of its own capabilities. Negative connotations and recent incidents have pigeonholed the innovation in a taboo zone, even as the rise of deepfake technology has ushered in fascinating possibilities alongside its challenges. This synthetic media, created through sophisticated artificial intelligence algorithms, has begun to infiltrate various sectors, raising pressing questions about its potential impact on education and employability.
The dawn of deepfake technology introduces a realm of possibilities in education. Imagine medical students engaging in lifelike surgical simulations or language learners participating in authentic conversations. The potential for deepfake to revolutionise training scenarios is vast and could significantly enhance the educational experience. Beyond simulations, deepfake can transport students to historical events through realistic reenactments or facilitate virtual field trips, transcending the boundaries of traditional education. The immersive nature of deepfake content holds the promise of making learning more engaging and memorable.
However, while the potential abuses of the technology are real and concerning, that doesn't mean we should turn a blind eye to the technology's potential when it is used responsibly, says Jaime Donally, a well-known immersive learning expert.
“Typically, when we're hearing about it, it's in terms of the negative – impersonation and giving false claims,” Donally says. “But really, the technology has the power of bringing people from history alive through old images that we have using AI.”
Donally, a former math teacher and instructional technologist, has written about how a type of deepfake technology called Deep Nostalgia, which went viral in 2021, can allow students to form a stronger connection with the past and their personal family heritage. The technology, available on the MyHeritage app, allows images to be uploaded that are then turned into short animations by AI.
Here are some of the ways in which teachers can use deepfake technology in the classroom with the MyHeritage app. Teachers have used it to bring historical figures such as Amelia Earhart and Albert Einstein to life. One teacher Donally has communicated with used an animation of Frederick Douglass to help students connect with Douglass's famous 1852 speech about the meaning of the Fourth of July to enslaved Black Americans. Another teacher plans to use the app to have students interview a historical figure and create dialogue for them, then match the dialogue to the animation.
Donally herself has paired animations she's created with other types of immersive technology. “I layered it on top in augmented reality,” she says. “When you scan the photo of my grandfather, it all came to life. And it became something that was much more relevant to see in your real-world space.”
With proper supervision, students can use the technology to animate images of family members or local historical figures, and can experiment with augmented reality (AR) in the process. “It makes you want to learn more,” Donally says of animations created using deepfake technology. “It drives you into kind of the history and understanding a bit more, and I think it also helps you identify who you are in that process.”
Education platforms are harnessing deepfake technology to create AI tutors that provide customised support to students. Rather than a generic video lecture, each learner can get tailored instruction and feedback from a virtual tutor who speaks their language and adjusts to their level.
For example, Anthropic built Claude, an AI assistant that is increasingly applied in education. Claude can answer students' natural language questions, explain concepts clearly, and identify knowledge gaps.
Such AI tutors can make learning more effective, accessible, and inclusive. Students feel as though they have an expert guide helping them master new skills and material.
AI and deepfake technology have enormous potential to enhance workforce training and education in immersive new ways too. As Drew Rose, CSO and founder of cybersecurity firm Living Security, explains, “educators can leverage deepfakes to create immersive learning experiences. For instance, a history lesson might feature a ‘guest appearance’ by a historical figure, or a science lesson might have a renowned scientist explaining complex concepts.” Ivana Bartoletti, privacy and data protection expert at Wipro and author of An Artificial Revolution: On Power, Politics and AI, envisions similar applications.
“Deepfake technologies could provide an easier and less expensive way to train and visualise,” she says. “Students of medicine and nursing currently train with animatronic robots. They are expensive and require special control rooms. Generative AI and augmented or virtual reality headsets or practice rooms will be cheaper and allow for the generalisation, if not the gamification, of simulation.”
Medical students could gain experience diagnosing and treating simulated patients, while business students could practice high-stakes scenarios like negotiations without real-world consequences. These immersive, gamified environments enabled by AI and deepfakes also have vast potential for corporate training.
Bartoletti notes, “A similar use case could be made for other types of learning that require risky and skill-based experiences. The Air Force uses AI as adversaries in flight simulators, and humans have not beaten the best AIs since 2015.”
With reference to Source A, identify three harmful uses of deepfakes.
With reference to Source B and one other real-world example you have studied, explain why deepfakes may be used for beneficial purposes in today's world.
Compare what Source C and Source D reveal about perspectives on deepfakes in the education sector.
With reference to the sources and your own knowledge, discuss whether the use of deepfakes in the educational sector is an incremental or transformational change.
Cloud networks allow for data storage and access over the internet, making data accessible from anywhere. This accessibility supports remote work, file sharing, and collaboration but also raises concerns about data security and control over personal information.
Evaluate the impact of cloud networks on data accessibility, considering the benefits for remote work and the potential security risks.
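To ground the stimulus, here is a minimal sketch of the kind of cloud workflow it describes, written with the real AWS boto3 SDK for Python. The bucket name, file names, and expiry period are invented for illustration, and credentials are assumed to be configured in the environment; this is not a model answer, just a concrete picture of accessibility versus control.

```python
# Minimal sketch: store a file in the cloud and share it for a limited time.
# The bucket and object names are hypothetical, and AWS credentials are
# assumed to be configured in the environment.
import boto3

s3 = boto3.client("s3")

# Upload a local report so colleagues can reach it from anywhere.
s3.upload_file("quarterly_report.pdf", "example-team-bucket", "reports/q3.pdf")

# Rather than making the bucket public, issue a link that expires after
# one hour, a simple control that trades some accessibility for security.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-team-bucket", "Key": "reports/q3.pdf"},
    ExpiresIn=3600,
)
print(url)
```

The presigned URL is the interesting design choice here: access is granted to one specific object for a limited time, rather than by handing out account credentials.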
The UN Secretary-General has called for states to conclude, by 2026, a new international treaty to prohibit weapons systems that function without human control or oversight and that cannot be used in compliance with international humanitarian law. The treaty should regulate all types of autonomous weapons systems. The report reflects 58 submissions from over 73 countries and 33 submissions from the International Committee of the Red Cross and civil society groups. The UN General Assembly is considered a venue for inclusive discussions on autonomous weapons systems, given concerns about international peace and security.
On 27 March 2020, the Prime Minister of Libya, Faiez Serraj, announced the commencement of Operation PEACE STORM, which moved the Government of National Accord Affiliated Forces (GNA-AF) onto the offensive along the coastal littoral. The combination of the Gabya-class frigates and Korkut short-range air defence systems provided a capability to place a mobile air defence bubble around GNA-AF ground units, which took Haftar Affiliated Forces (HAF) air assets out of the military equation. Libya classifies HAF as a terrorist rebel organization. The enhanced operational intelligence capability included Turkish-operated signals intelligence and the intelligence, surveillance and reconnaissance provided by Bayraktar TB-2 and probably TAI Anka S unmanned combat aerial vehicles. This allowed for the development of an asymmetrical war of attrition designed to degrade HAF ground unit capability. The GNA-AF breakout from Tripoli was supported with Firtina T155 155 mm self-propelled guns and T-122 Sakarya multi-launch rocket systems firing extended-range precision munitions against the mid-twentieth-century main battle tanks and heavy artillery used by HAF.
Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability. The unmanned combat aerial vehicles and the small drone intelligence, surveillance and reconnaissance capability of HAF were neutralized by electronic jamming from the Koral electronic warfare system.
In the autumn of 2001, however, the United States was unwilling to launch a full-scale land invasion in a region 7,000 miles from home. Instead, a plan evolved to send into Afghanistan a small number of CIA agents and Special Forces in support of anti-Taliban militias, with the aid of the US Air Force. That first October night was a powerful display of coordination involving laser-guided munitions dropped from the air and Tomahawk cruise missiles launched from the sea. General Tommy Franks, who then led the US Central Command (CENTCOM), the military command overseeing operations in Afghanistan, wrote in his memoir American Soldier that the assault involved in total some 40,000 personnel, 393 aircraft, and 32 ships. But one aircraft did not feature at all in the Air Force's complex planning: a tiny, CIA-controlled, propeller-driven spy drone, Predator tailfin number 3034, which had crept into Afghanistan some hours earlier. It now hangs suspended in the Smithsonian Air and Space Museum in Washington, D.C., its place in history assured. Yet its actions that first night of the war – in which numerous agencies in the vast US military-intelligence machine each played sharply contradictory roles – remain steeped in controversy.
Human Rights Watch released a report stating that representatives from around 50 countries will meet in the summer of 2021 at the UN to discuss worldwide policy alignment on ‘killer robots’, or ‘lethal autonomous weapons systems’. In the report, Human Rights Watch objected to delegating lethal force to machines without meaningful human control. Bonnie Docherty, senior arms researcher at Human Rights Watch, said: ‘The fundamental moral, legal and security concerns raised by autonomous weapons systems warrant a strong and urgent response in the form of a new international treaty ... International law needs to be expanded to create new rules that ensure human control and accountability in the use of force.’ Human Rights Watch proposes a treaty covering all weapons that operate autonomously, with limitations and restrictions such as a ban on the use of killer robots and a requirement that meaningful human control be involved in the selection and engagement of targets. It goes on to define the scope of ‘meaningful human control’ to ensure that humans have access to the data, risks and potential impacts prior to authorizing an attack.
With reference to Source A, identify two different or unexpected impacts of ‘killer robots’.
With reference to Source D, explain why it may be difficult to reach global agreement on ‘killer robot’ policy.
Compare and contrast how Sources B and C present events involving unmanned combat aerial vehicles.
With reference to the sources and your own knowledge, evaluate the decision to ban automated military technology.
Drones are widely used for surveillance in law enforcement and border control. While they enhance monitoring capabilities and can improve public safety, drones also raise concerns about privacy, consent, and the potential misuse of surveillance technology in public and private spaces.
Discuss the impact of drone technology on public surveillance and privacy, considering both the benefits for security and the ethical implications for individual privacy rights.
Firewalls are critical for network security, acting as barriers between internal networks and external threats. They control incoming and outgoing traffic, protecting against unauthorized access and cyber attacks. However, configuring firewalls effectively can be challenging, especially in large organizations.
Evaluate the role of firewalls in securing organizational networks, considering their effectiveness and potential challenges in implementation.
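The configuration difficulty the stimulus mentions is easier to discuss with a concrete model in mind. Below is a toy first-match packet filter in Python; the rules, networks, and ports are invented, and real firewalls are far more capable, but the rule-ordering and default-deny logic are the parts that make large rule sets hard to manage.

```python
# Toy first-match packet filter. The rules, networks, and ports are
# invented for illustration; real firewalls are far more sophisticated.
from ipaddress import ip_address, ip_network

RULES = [
    ("allow", ip_network("10.0.0.0/8"), 443),     # internal HTTPS
    ("deny",  ip_network("0.0.0.0/0"), 23),       # block Telnet from anywhere
    ("allow", ip_network("203.0.113.0/24"), 22),  # SSH from one partner range
]
DEFAULT = "deny"  # default-deny: anything not explicitly allowed is dropped

def check(src: str, port: int) -> str:
    """Return the action for a packet from src addressed to port."""
    for action, network, rule_port in RULES:
        if ip_address(src) in network and port == rule_port:
            return action  # first matching rule wins
    return DEFAULT

print(check("10.1.2.3", 443))       # allow
print(check("198.51.100.7", 23))    # deny (explicit rule)
print(check("198.51.100.7", 8080))  # deny (default policy)
```

Note that swapping the order of an allow rule and a deny rule can silently change behaviour, which is one reason firewall changes in large organizations go through formal review.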
Cameras in school
The principal at Flynn School has received requests from parents saying that they would like to monitor their children’s performance in school more closely. He is considering extending the school’s IT system by installing cameras linked to facial recognition software that can record student behaviour in lessons.
The facial recognition software can determine a student’s attention level and behaviour, such as identifying if they are listening, answering questions, talking with other students, or sleeping. The software uses machine learning to analyse each student’s behaviour and gives them a weekly score that is automatically emailed to their parents.
The principal claims that monitoring students’ behaviour more closely will improve the teaching and learning that takes place.
Discuss whether Flynn School should introduce a facial recognition system that uses machine learning to analyse each student’s behaviour and give them a score that is automatically emailed to their parents.
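When discussing this question, it helps to see how much judgement hides inside the ‘weekly score’. The sketch below is entirely hypothetical: it assumes the classifier emits one behaviour label per lesson, and the labels and weights are invented rather than taken from any real product.

```python
# Hypothetical sketch of the weekly scoring step. The behaviour labels
# and weights are invented, not taken from any real product.
from statistics import mean

WEIGHTS = {"listening": 1.0, "answering": 1.0, "talking": 0.4, "sleeping": 0.0}

def weekly_score(observations: list[str]) -> float:
    """Average the weight of each per-lesson behaviour label, as a percentage."""
    return round(100 * mean(WEIGHTS[label] for label in observations), 1)

week = ["listening", "answering", "talking", "listening", "sleeping"]
print(f"Weekly attention score: {weekly_score(week)}%")  # 68.0, emailed to parents
```

Even in this toy version, whoever chooses the weights is deciding what counts as good behaviour, which is exactly the kind of hidden value judgement a strong answer would interrogate.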
In criminal justice, “black box” algorithms are increasingly used to make decisions about bail, parole, and sentencing. However, the lack of transparency and potential for bias raise serious ethical concerns about fairness and accountability.
Evaluate the challenges of implementing algorithmic transparency and accountability in criminal justice, particularly with “black box” algorithms.
Can digital technologies be used sustainably?
Many organizations claim that the most efficient use of information technology (IT) equipment, such as laptops and printers, is to replace them on a regular basis. For example, an organization’s strategy may be to do this every three years.
Other organizations purchase IT equipment that can easily be upgraded by increasing the storage and memory or upgrading the processing capabilities only when required. They claim they do not need to replace their IT equipment on such a regular basis and believe this is a more sustainable practice.
Evaluate the sustainability of these two strategies.
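A worked comparison can anchor the evaluation. The Python sketch below compares the two strategies over nine years using invented figures for the manufacturing footprint of a laptop and of an upgrade kit; the point is the structure of the calculation, not the specific numbers.

```python
# Illustrative comparison of embodied manufacturing carbon per seat over
# nine years. All figures are invented assumptions, not measured data.
YEARS = 9
EMBODIED_KG_CO2 = 300   # assumed manufacturing footprint of one laptop
UPGRADE_KG_CO2 = 30     # assumed footprint of one RAM/storage upgrade kit

# Strategy 1: buy a new laptop every three years (3 devices in 9 years).
strategy_replace = (YEARS // 3) * EMBODIED_KG_CO2

# Strategy 2: keep one laptop and upgrade it twice when required.
strategy_upgrade = EMBODIED_KG_CO2 + 2 * UPGRADE_KG_CO2

print(f"Replace every 3 years: {strategy_replace} kg CO2e")  # 900
print(f"Upgrade when needed:   {strategy_upgrade} kg CO2e")  # 360
```

Under these assumptions the upgrade strategy embodies far less manufacturing carbon, though a full evaluation would also weigh the energy efficiency of newer hardware, disposal and recycling practices, and cost.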
In healthcare, algorithms are employed for predictive diagnostics by analyzing patient data to predict diseases or suggest treatments. While these algorithms can increase efficiency, a lack of transparency and accountability in cases of misdiagnosis or bias raises ethical concerns.
Evaluate the ethical implications of relying on algorithms for health diagnoses, particularly in terms of transparency and accountability for patient outcomes.
To what extent should users rely on the results of online mental health screening tools, such as online depression screening tests, and the result of web searches on health symptoms?