Practice 2.2 Expression with authentic IB Digital Society (DS) exam questions for both SL and HL students. This question bank mirrors the Paper 1, 2, and 3 structure, covering key topics such as systems and structures, human behaviour and interaction, and digital technologies in society. Get instant solutions and detailed explanations, and build exam confidence with questions in the style of IB examiners.
Algorithmic recommendation systems
Algorithmic recommendation systems are used across a wide range of digital platforms, including YouTube, Spotify, Amazon, and dating applications. These systems analyze large volumes of user data such as viewing history, clicks, likes, and purchases in order to suggest content, products, or connections that are predicted to be relevant. Recommendations are continuously updated as users interact with the platform.
These systems consist of interconnected components, including users, algorithms, data inputs, and content providers. Changes in one part of the system can affect other parts, creating feedback loops that influence both individual behaviour and wider cultural trends. While recommendation systems are designed to improve user experience and engagement, they can also produce unintended consequences such as filter bubbles, bias amplification, and reduced diversity of content. This case highlights how algorithmic systems operate as complex and interdependent digital systems.
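The feedback loop described above can be sketched in a few lines of code. This is a toy model for illustration only, not how a real platform works; the catalogue, categories, and scoring are hypothetical. It shows how recommending more of what a user already watches can narrow suggestions to a single category, a minimal "filter bubble".

```python
# Toy recommendation loop: suggest unseen items from the categories the
# user interacts with most. Catalogue contents are invented examples.
from collections import Counter

CATALOGUE = {
    "cooking": ["pasta basics", "knife skills", "bread baking"],
    "gaming": ["speedrun recap", "strategy guide", "patch notes"],
    "news": ["daily briefing", "market update", "weather report"],
}

def recommend(watch_history, n=3):
    """Suggest up to n unseen items, favouring the most-watched categories."""
    category_counts = Counter(category for category, _ in watch_history)
    suggestions = []
    for category, _ in category_counts.most_common():
        for item in CATALOGUE[category]:
            if (category, item) not in watch_history:
                suggestions.append((category, item))
    return suggestions[:n]

# After two gaming clicks, only gaming content is suggested: each click
# reinforces the dominant category and reduces diversity of content.
history = [("gaming", "speedrun recap"), ("gaming", "strategy guide")]
print(recommend(history))  # [('gaming', 'patch notes')]
```

Because only watched categories appear in the counter, a user who interacts with one topic never sees the others, which is the unintended consequence the case study describes.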
Identify three elements that make up an algorithmic recommendation system.
Identify three ways elements within an algorithmic recommendation system interact with each other.
Analyze the intended and unintended consequences of changes within algorithmic recommendation systems.
Evaluate how systems thinking and complexity theory help explain the behaviour of algorithmic recommendation systems.
Case study: Virtual worlds and the construction of digital space
Virtual world platforms such as Minecraft, Second Life, and emerging metaverse environments allow users to create, organize, and inhabit digital spaces. These environments are often built using user-generated content and governed by social norms, community rules, and platform design choices. Virtual spaces serve a wide range of purposes, including education, business meetings, creative expression, and social interaction.
Unlike physical spaces, virtual environments are not constrained by geography, distance, or physical laws. Users can move instantly between locations, interact across global networks, and design spaces that reflect cultural values or social structures. However, access to virtual spaces depends on factors such as technology, cost, and digital literacy. As virtual worlds expand, questions arise about how space is organized, who controls access and movement, and how virtual environments operate at different scales from small communities to global platforms.
Define the term “digital space” in the context of virtual worlds.
Outline two ways humans organize and construct virtual spaces using cultural or social features.
Analyze how access, movement, and flows influence the design and experience of virtual spaces.
Evaluate the extent to which virtual spaces operate differently across local and global scales.
It is a paradox to brand a technological innovation as tainted goods by its very name. ‘Deepfake’ is a victim of its own capabilities. Negative connotations and recent incidents have pigeonholed this innovation in the taboo zone. The rise of deepfake technology has ushered in very interesting possibilities and challenges. This synthetic media, created through sophisticated artificial intelligence algorithms, has begun to infiltrate various sectors, raising intriguing questions about its potential impact on education and employability.
The dawn of deepfake technology introduces a realm of possibilities in education. Imagine medical students engaging in lifelike surgical simulations or language learners participating in authentic conversations. The potential for deepfake to revolutionise training scenarios is vast and could significantly enhance the educational experience. Beyond simulations, deepfake can transport students to historical events through realistic reenactments or facilitate virtual field trips, transcending the boundaries of traditional education. The immersive nature of deepfake content holds the promise of making learning more engaging and memorable.
However, while these potential abuses of the technology are real and concerning, that doesn't mean we should turn a blind eye to the technology's potential when it is used responsibly, says Jaime Donally, a well-known immersive learning expert.
“Typically, when we're hearing about it, it's in terms of the negative – impersonation and giving false claims,” Donally says. “But really, the technology has the power of bringing people from history alive through old images that we have using AI.”
Donally, a former math teacher and instructional technologist, has written about how Deep Nostalgia, a type of deepfake technology that went viral in 2021, can allow students to form a stronger connection with the past and their personal family heritage. The technology, available on the MyHeritage app, allows images to be uploaded and then turned into short animations using AI.
Here are some of the ways teachers can use deepfake technology in the classroom with the MyHeritage app. Teachers have used the technology to bring historical figures such as Amelia Earhart and Albert Einstein to life. One teacher Donally has communicated with used an animation of Frederick Douglass to help students connect with Douglass' famous 1852 speech about the meaning of the Fourth of July to enslaved Black Americans. Another teacher plans to use the app to have students interview a historical figure and create dialogue for them, then match the dialogue to the animation.
Donally herself has paired animations she's created with other types of immersive technology. “I layered it on top in augmented reality,” she says. “When you scan the photo of my grandfather, it all came to life. And it became something that was much more relevant to see in your real-world space.”
With proper supervision, students can use the technology to animate images of family members or local historical figures, and can experiment with augmented reality (AR) in the process. “It makes you want to learn more,” Donally says of animations created using deepfake technology. “It drives you into kind of the history and understanding a bit more, and I think it also helps you identify who you are in that process.”
Education platforms are harnessing deepfake technology to create AI tutors that provide customised support to students. Rather than a generic video lecture, each learner can get tailored instruction and feedback from a virtual tutor who speaks their language and adjusts to their level.
For example, Anthropic offers Claude, an AI assistant that has been adapted for use in education. Claude can answer students' natural language questions, explain concepts clearly, and help identify knowledge gaps.
Such AI tutors make learning more effective, accessible, and inclusive. Students feel like they have an expert guide helping them master new skills and material.
AI and deepfake technology have enormous potential to enhance workforce training and education in immersive new ways too. As Drew Rose, CSO and founder of cybersecurity firm Living Security, explains, “educators can leverage deepfakes to create immersive learning experiences. For instance, a history lesson might feature a ‘guest appearance’ by a historical figure, or a science lesson might have a renowned scientist explaining complex concepts.” Ivana Bartoletti, privacy and data protection expert at Wipro and author of An Artificial Revolution – On Power, Politics and AI envisions similar applications.
“Deepfake technologies could provide an easier and less expensive way to train and visualise,” she says. “Students of medicine and nursing currently train with animatronic robots. They are expensive and require special control rooms. Generative AI and augmented or virtual reality headsets or practice rooms will be cheaper and allow for the generalisation, if not the gamification, of simulation.”
Medical students could gain experience diagnosing and treating simulated patients, while business students could practice high-stakes scenarios like negotiations without real-world consequences. These immersive, gamified environments enabled by AI and deepfakes also have vast potential for corporate training.
Bartoletti notes, “A similar use case could be made for other types of learning that require risky and skill-based experiences. The Air Force uses AI as adversaries in flight simulators, and humans have not beaten the best AIs since 2015.”
With reference to Source A, identify three harmful uses of deepfakes.
With reference to Source B and one other real-world example you have studied, explain why deepfakes may be used for beneficial purposes in today's world.
Compare what Source C and Source D reveal about perspectives on deepfakes in the education sector.
With reference to the sources and your own knowledge, discuss whether the use of deepfakes in the educational sector is an incremental or transformational change.
Students should be provided with the pre-release document ahead of the May 2018 HL paper 3 examination. This can be found under the 'Your tests' tab > supplemental materials > May 2018 HL paper 3 pre-release document: Accessibility.
Improving the accessibility of the curriculum for children with special educational needs and disabilities (SEND)
Source 1: Tayton School
Tayton School is a primary school that teaches 500 children aged between 5 and 12. There are three classes in each year group, with a maximum of 24 students in each class. The school’s motto is “Education for Everyone”, and inclusion is at the heart of the school’s mission.
The school’s Inclusion Department consists of five full-time staff, led by Sandra, and 10 learning support assistants who are active in working with the children. Sandra has recently produced a report on the students with special educational needs and disabilities (SEND) in the school, in which she found that the increasing numbers of students, and the types of SEND, mean that the school needs to invest in expanding the support available for these students (see Table 1).
Table 1: SEND at Tayton School

Sandra’s report argues that, next year, the work of the Inclusion Department would be more effective if the school purchased educational digital technologies, such as social robots and assistive technologies.
Source 2: Social robots in education
Sandra researched social robots and came back to the department meeting with this information:
In 2020, a report on the use of social robots in education was published by a prestigious university professor, who concluded that social robots have the potential to be a key player in education in the way textbooks and whiteboards have been in the past. A social robot has the potential to support students in ways that could never have been envisaged 20 years ago. However, there are significant technical limitations, particularly linked to the social robot’s ability to interact with students, that will restrict their usability for the next few years.
Source 3: Mary sees the positives
Mary, one of the learning assistants at Tayton School, says:
“As a parent of two school-age children, I think the potential introduction of social robots has both advantages and disadvantages. My children found the idea of having a robot sit with them very exciting, and I think they would do what the robot asks without questioning it. The robot will also be much more patient while they are learning their times tables!” (See Figure 1).
Figure 1: Students interacting with a social robot

Source 4: James has doubts
James, another learning assistant at Tayton School, is wary of the overuse of digital technology in schools for children with special needs based on his experiences in other schools. He has found some research that supports his ideas.

Source A
Source B (excerpt from platform community rules)
ClipCast exists to help people express ideas, creativity and lived experience. We allow political commentary, parody and artistic experimentation, including remixing with built-in tools. However, expression has limits when it targets others. Content may be removed if it includes harassment, hate speech, doxxing, or deceptive impersonation. Some posts may be made less visible if they use sensationalized claims without context, even if they are not removed. We also respond to copyright notices from rights-holders, which can result in muted audio or blocked uploads. Users can appeal moderation decisions, but repeated violations may lead to account restrictions. ClipCast aims to balance open participation with safety, lawful use of media, and the well-being of creators and communities.
Source C (excerpt from moderation and engagement statistics)
Of all removals in Lydora (last 90 days): 41% harassment, 24% hate speech, 18% impersonation, 10% copyright-related, 7% other. Average views by content type: satire posts receive 2.3× the views of "personal storytelling" posts. In posts using the tag #languageback, 58% are remixes/duets (often adding pronunciation tips). 12% of users who posted political content received at least one "mass report" (50+ reports within 1 hour). Appeals success rate: 31% of appealed removals are reinstated.
Source D (excerpt from a newspaper editorial)
ClipCast claims to "help people express ideas," but the platform profits from what expression does to attention. The remix tools are not neutral art supplies; they are engagement engines that reward speed, spectacle, and conflict. Look at what spreads: satire outperforms personal storytelling because it is easier to react to, easier to share, and easier to misunderstand. Meanwhile, creators working on community projects like language revival are forced to fit culture into the platform's formats and time limits. Even moderation becomes part of expression: when political posts are "mass reported," the loudest groups can temporarily silence others, regardless of truth. ClipCast says it limits "sensationalized claims," yet who decides what counts as context? The result is a privatized speech environment where rules change without public debate and where creative work is constantly scanned for copyright, turning culture into a permissions problem. Expression is happening but on terms designed primarily for growth, not for communities.
Describe what Source A suggests about the ways users in Lydora express themselves on ClipCast.
Distinguish between content removal and reduced visibility as approaches to managing expression, with reference to Source B.
Compare and contrast the implications of ClipCast’s moderation and engagement patterns shown in Source C with the critique in Source D.
Evaluate whether ClipCast is more likely to support or undermine meaningful expression for individuals and communities in Lydora. With reference to all the sources (A–D) and your own knowledge of the Digital Society course, consider storytelling, artistic innovation, political activism, and ethical dilemmas.
Define the term “finite” in the context of algorithms.
Identify two reasons why an algorithm should have well-defined inputs and outputs.
Explain why an algorithm must be unambiguous to function correctly.
Describe one example where the feasibility of an algorithm impacts its use in a real-world application.
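The algorithm properties these questions target can be made concrete with a classic example. The sketch below uses Euclid's algorithm, chosen here purely as an illustration rather than taken from any source above: it has well-defined inputs and outputs (two positive integers in, one integer out), its steps are unambiguous, and it is finite because the remainder strictly decreases on every pass.

```python
# Euclid's algorithm: a compact illustration of well-defined I/O,
# unambiguous steps, and guaranteed termination (finiteness).
def gcd(a: int, b: int) -> int:
    """Greatest common divisor of two positive integers."""
    while b != 0:
        a, b = b, a % b  # the remainder shrinks each pass, so the loop ends
    return a

print(gcd(48, 18))  # 6
```

Feasibility enters when inputs grow: an algorithm like this runs in a handful of steps even for huge numbers, whereas an algorithm whose steps multiply exponentially with input size may be correct yet unusable in practice.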
Define the term ‘autonomous vehicle’.
Identify and explain the function of two sensors on an autonomous vehicle.
Explain how sensors would be used by autonomous vehicles to avoid obstacles in the road.
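As a study aid for the questions above, the decision logic that sensor readings might feed can be sketched as follows. This is a simplified, hypothetical model: the sensor names, distances, and threshold are invented for illustration and bear no resemblance to a production control system.

```python
# Hypothetical obstacle check combining two distance sensors.
# Taking the minimum of the readings is a conservative form of sensor
# fusion: trust whichever sensor reports the nearest obstacle.
def should_brake(lidar_distance_m: float,
                 radar_distance_m: float,
                 safe_gap_m: float = 15.0) -> bool:
    """Return True if either sensor detects an object inside the safe gap."""
    nearest = min(lidar_distance_m, radar_distance_m)
    return nearest < safe_gap_m

print(should_brake(30.0, 12.5))  # True: radar reports an object at 12.5 m
print(should_brake(30.0, 20.0))  # False: both readings exceed the safe gap
```

In a real vehicle many more inputs (cameras, ultrasonic sensors, vehicle speed) would feed a far more sophisticated planner, but the same principle applies: sensor data is continuously compared against safety margins to trigger avoidance actions.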
Gaming and digital identity formation
Online gaming platforms such as Discord, Roblox, and multiplayer online games have become important spaces where young people explore and construct their identities. Through avatars, usernames, voice chat, and in-game behaviour, players can present different aspects of themselves and experiment with alternative personas. Gaming environments also allow users to join communities based on shared interests, skill levels, or play styles, which can influence how identity is performed and recognised.
For many players, gaming identities are shaped through interaction with others, including collaboration, competition, and social norms within specific communities. These identities are not fixed and may change over time as players develop new skills, move between games, or participate in different social groups. In some cases, relationships formed in gaming environments extend into offline friendships, further influencing how young people understand and express their identities.
Define the term “digital identity” in the context of online gaming.
Outline two ways online gaming environments support identity exploration for young people.
Analyze how gaming identities can change over time and across different gaming contexts.
Evaluate the extent to which conformity, stereotype, and deviance theories help explain behaviour and identity within gaming communities.
Discuss the decision for an owner of an art gallery to develop a virtual tour that is accessible online.
User interfaces (UI) are critical in making devices accessible to a diverse range of users. For example, voice-activated interfaces, like those on smartphones, allow individuals with limited mobility to use devices effectively. While these interfaces promote inclusivity, there are challenges, such as accuracy and user privacy, that can affect their effectiveness.
Evaluate the effectiveness of user interfaces, such as voice and graphic interfaces, in promoting accessibility in computing, considering both the benefits for users with disabilities and the associated technical challenges.
Practice 2.2 Expression with authentic IB Digital Society (DS) exam questions for both SL and HL students. This question bank mirrors Paper 1, 2, 3 structure, covering key topics like systems and structures, human behavior and interaction, and digital technologies in society. Get instant solutions, detailed explanations, and build exam confidence with questions in the style of IB examiners.
Algorithmic recommendation systems
Algorithmic recommendation systems are used across a wide range of digital platforms, including YouTube, Spotify, Amazon, and dating applications. These systems analyze large volumes of user data such as viewing history, clicks, likes, and purchases in order to suggest content, products, or connections that are predicted to be relevant. Recommendations are continuously updated as users interact with the platform.
These systems consist of interconnected components, including users, algorithms, data inputs, and content providers. Changes in one part of the system can affect other parts, creating feedback loops that influence both individual behaviour and wider cultural trends. While recommendation systems are designed to improve user experience and engagement, they can also produce unintended consequences such as filter bubbles, bias amplification, and reduced diversity of content. This case highlights how algorithmic systems operate as complex and interdependent digital systems.
Identify three elements that make up an algorithmic recommendation system.
Identify three ways elements within an algorithmic recommendation system interact with each other.
Analyze the intended and unintended consequences of changes within algorithmic recommendation systems.
Evaluate how systems thinking and complexity theory help explain the behaviour of algorithmic recommendation systems.
Case study: Virtual worlds and the construction of digital space
Virtual world platforms such as Minecraft, Second Life, and emerging metaverse environments allow users to create, organize, and inhabit digital spaces. These environments are often built using user-generated content and governed by social norms, community rules, and platform design choices. Virtual spaces serve a wide range of purposes, including education, business meetings, creative expression, and social interaction.
Unlike physical spaces, virtual environments are not constrained by geography, distance, or physical laws. Users can move instantly between locations, interact across global networks, and design spaces that reflect cultural values or social structures. However, access to virtual spaces depends on factors such as technology, cost, and digital literacy. As virtual worlds expand, questions arise about how space is organized, who controls access and movement, and how virtual environments operate at different scales from small communities to global platforms.
Define the term “digital space” in the context of virtual worlds.
Outline two ways humans organize and construct virtual spaces using cultural or social features.
Analyze how access, movement, and flows influence the design and experience of virtual spaces.
Evaluate the extent to which virtual spaces operate differently across local and global scales.
It is a paradox to brand a technological innovation as tainted goods by its very name. ‘Deepfake’ is a victim of its own capabilities. Negative connotations and recent incidents have pigeonholed this innovation in the taboo zone. The rise of deepfake technology has ushered in very interesting possibilities and challenges. This synthetic media, created through sophisticated artificial intelligence algorithms, has begun to infiltrate various sectors, raising intriguing questions about its potential impact on education and employability.
The dawn of deepfake technology introduces a realm of possibilities in education. Imagine medical students engaging in lifelike surgical simulations or language learners participating in authentic conversations. The potential for deepfake to revolutionise training scenarios is vast and could significantly enhance the educational experience. Beyond simulations, deepfake can transport students to historical events through realistic reenactments or facilitate virtual field trips, transcending the boundaries of traditional education. The immersive nature of deepfake content holds the promise of making learning more engaging and memorable.
However, while these potential abuses of the technology are real and concerning that doesn't mean we should turn a blind eye to the technology’s potential when using it responsibly, says Jaime Donally, a well-known immersive learning expert.
“Typically, when we're hearing about it, it's in terms of the negative – impersonation and giving false claims,” Donally says. “But really, the technology has the power of bringing people from history alive through old images that we have using AI.”
Donally, a former math teacher and instructional technologist, has written about how a type of deep fake technology called deep nostalgia technology that went viral in 2021 can allow students to form a stronger connection with the past and their personal family heritage. The technology, available on the MyHeritage app, allows images to uploaded that are then turned into short animations thanks to AI technology.
Here are some of the ways in which teachers can utilize deep fake technology in the classroom utilizing the MyHeritage app. Teachers have used the deep fake technology in the My Heritage app to bring historical figures such as Amelia Earhart and Albert Einstein to life. One teacher Donally has communicated with used an animation of Frederick Douglass (above) to help students connect with Douglass’ famous 1852 speech about the meaning of the Fourth of July to Black enslaved Americans. Another teacher has plans to use the app to have students interview a historical figure and create dialogue for them, then match the dialogue to the animation.
Donally herself has paired animations she's created with other types of immersive technology. “I layered it on top in augmented reality,” she says. “When you scan the photo of my grandfather, it all came to life. And it became something that was much more relevant to see in your real-world space.”
With proper supervision, students can use the technology to animate images of family members or local historical figures, and can experiment with augmented reality (AR) in the process. “It makes you want to learn more,” Donally says of animations created using deep fake technology. “It drives you into kind of the history and understanding a bit more, and I think it also helps you identify who you are in that process.”
Education platforms are harnessing deepfake technology to create AI tutors that provide customised support to students. Rather than a generic video lecture, each learner can get tailored instruction and feedback from a virtual tutor who speaks their language and adjusts to their level.
For example, Anthropic built Claude, an AI assistant designed specifically for education. Claude can answer students’ natural language questions, explain concepts clearly, and identify knowledge gaps.
Such AI tutors make learning more effective, accessible, and inclusive. Students feel like they have an expert guide helping them master new skills and material.
AI and deepfake technology have enormous potential to enhance workforce training and education in immersive new ways too. As Drew Rose, CSO and founder of cybersecurity firm Living Security, explains, “educators can leverage deepfakes to create immersive learning experiences. For instance, a history lesson might feature a ‘guest appearance’ by a historical figure, or a science lesson might have a renowned scientist explaining complex concepts.” Ivana Bartoletti, privacy and data protection expert at Wipro and author of An Artificial Revolution – On Power, Politics and AI envisions similar applications.
“Deepfake technologies could provide an easier and less expensive way to train and visualise,” she says. “Students of medicine and nursing currently train with animatronic robots. They are expensive and require special control rooms. Generative AI and augmented or virtual reality headsets or practice rooms will be cheaper and allow for the generalisation, if not the gamification, of simulation.”
Medical students could gain experience diagnosing and treating simulated patients, while business students could practice high-stakes scenarios like negotiations without real-world consequences. These immersive, gamified environments enabled by AI and deepfakes also have vast potential for corporate training.
Bartoletti notes, “A similar use case could be made for other types of learning that require risky and skill-based experiences. The Air Force uses AI as adversaries in flight simulators, and humans have not beaten the best AIs since 2015.
With reference to Source A identify 3 harmful uses of deep fakes.
With reference to Source B and one other real-world example you have studied, explain why deepfakes may be used for beneficial purposes in today's world
Compare what Source C and Source D reveal about the perspectives of deepfakes in the education sector.
With reference to the sources and your own knowledge, discuss whether the use of deepfakes in the educational sector is an incremental or transformational change.
Students should be provided with the pre-release document ahead of the May 2018 HL paper 3 examination, this can be found under the 'Your tests' tab > supplemental materials > May 2018 HL paper 3 pre-release document: Accessibility.
Improving the accessibility to the curriculum for children with special educational needs and disabilities (SEND)
Source 1: Tayton School
Tayton School is a primary school that teaches 500 children aged between 5 and 12. There are three classes in each year group, with a maximum of 24 students in each class. The school’s motto is “Education for Everyone”, and inclusion is at the heart of the school’s mission.
The school’s Inclusion Department consists of five full-time staff, led by Sandra, and 10 learning support assistants who are active in working with the children. Sandra has recently produced a report on the students with special educational needs and disabilities (SEND) in the school, in which she found that the increasing numbers of students, and the types of SEND, means that the schools needs to invest in expanding the amount of support for the students (see Table 1).
Table 1: SEND at Tayton School

Sandra’s report argues that, next year, the work of the Inclusion Department would be more effective if the school purchased educational digital technologies, such as social robots and assistive technologies.
Source 2: Social robots in education
Sandra researched social robots and came back to the department meeting with this information:
In 2020, a report on the use of social robots in education was published by a prestigious university professor, who concluded that social robots have the potential to be a key player in education in the way textbooks and whiteboards have been in the past. A social robot has the potential to support students in ways that could never have been envisaged 20 years ago. However, there are significant technical limitations, particularly linked to the social robot’s ability to interact with students, that will restrict their usability for the next few years
Source 3: Mary sees the positives
Mary, one of the learning assistants at Tayton School, says:
“As a parent of two school-age children, I think the potential introduction of social robots has both advantages and disadvantages. My children thought the idea of having a robot that sits with them very exciting, and I think they would do what the robot asks without questioning it. The robot will also be much more patient while they are learning their times tables!” (See Figure 1).
Figure 1: Students interacting with a social robot

Source 4: James has doubts
James, another learning assistant at Tayton School, is wary of the overuse of digital technology in schools for children with special needs based on his experiences in other schools. He has found some research that supports his ideas.

Source A
Source B (excerpt from platform community rules)
ClipCast exists to help people express ideas, creativity and lived experience. We allow political commentary, parody and artistic experimentation, including remixing with built-in tools. However, expression has limits when it targets others. Content may be removed if it includes harassment, hate speech, doxxing, or deceptive impersonation. Some posts may be made less visible if they use sensationalized claims without context, even if they are not removed. We also respond to copyright notices from rights-holders, which can result in muted audio or blocked uploads. Users can appeal moderation decisions, but repeated violations may lead to account restrictions. ClipCast aims to balance open participation with safety, lawful use of media, and the well-being of creators and communities.
Source C (excerpt from moderation and engagement statistics)
Of all removals in Lydora (last 90 days): 41% harassment, 24% hate speech, 18% impersonation, 10% copyright-related, 7% other. Average views by content type: satire posts receive 2.3× the views of "personal storytelling" posts. In posts using the tag #languageback, 58% are remixes/duets (often adding pronunciation tips). 12% of users who posted political content received at least one "mass report" (50+ reports within 1 hour). Appeals success rate: 31% of appealed removals are reinstated.
Source D (excerpt from newspaper /editorial, 176 words)
ClipCast claims to "help people express ideas," but the platform profits from what expression does to attention. The remix tools are not neutral art supplies; they are engagement engines that reward speed, spectacle, and conflict. Look at what spreads: satire outperforms personal storytelling because it is easier to react to, easier to share, and easier to misunderstand. Meanwhile, creators working on community projects like language revival are forced to fit culture into the platform's formats and time limits. Even moderation becomes part of expression: when political posts are "mass reported," the loudest groups can temporarily silence others, regardless of truth. ClipCast says it limits "sensationalized claims," yet who decides what counts as context? The result is a privatized speech environment where rules change without public debate and where creative work is constantly scanned for copyright, turning culture into a permissions problem. Expression is happening but on terms designed primarily for growth, not for communities.
Describe what Source A suggests about the ways users in Lydora express themselves on ClipCast.
Distinguish between content removal and reduced visibility as approaches to managing expression, with reference to Source B.
Compare and contrast the implications of ClipCast’s moderation and engagement patterns shown in Source C with the critique in Source D.
Evaluate whether ClipCast is more likely to support or undermine meaningful expression for individuals and communities in Lydora. With reference to all the sources (A–D) and your own knowledge of the Digital Society course, consider storytelling, artistic innovation, political activism, and ethical dilemmas.
Define the term “finite” in the context of algorithms.
Identify two reasons why an algorithm should have well-defined inputs and outputs.
Explain why an algorithm must be unambiguous to function correctly.
Describe one example where the feasibility of an algorithm impacts its use in a real-world application.
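As a study aid for the questions above, the properties of a well-formed algorithm (well-defined inputs and outputs, unambiguous steps, and finiteness) can be illustrated with a short sketch. The example below uses Euclid's greatest-common-divisor algorithm; the function name and the validation behaviour are illustrative choices, not part of any prescribed syllabus answer.

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor of two positive integers.

    Well-defined input: two positive integers.
    Well-defined output: a single positive integer.
    """
    if a <= 0 or b <= 0:
        raise ValueError("inputs must be positive integers")
    # Unambiguous: at every step exactly one action is specified.
    while b != 0:
        # Finite: the remainder strictly decreases each pass,
        # so the loop is guaranteed to terminate.
        a, b = b, a % b
    return a


print(gcd(12, 18))  # 6
print(gcd(7, 5))    # 1
```

Each property maps onto a visible feature of the code: the type hints and validation make inputs and outputs explicit, each loop step admits only one interpretation, and the decreasing remainder guarantees the algorithm is finite and therefore feasible to run.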
Define the term ‘autonomous vehicle’.
Identify and explain the function of two sensors on an autonomous vehicle.
Explain how sensors would be used by autonomous vehicles to avoid obstacles in the road.
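One way to reason about the obstacle-avoidance question is to trace how distance readings from two sensors (for example, lidar and radar) could feed a braking decision. The sketch below is a heavily simplified illustration, not a real autonomous-vehicle control stack; the function name, the min-based sensor fusion, and the deceleration figure are all assumptions made for the example.

```python
def should_brake(lidar_m: float, radar_m: float, speed_mps: float,
                 reaction_s: float = 1.0, decel_mps2: float = 6.0) -> bool:
    """Decide whether to brake, given two independent distance estimates."""
    # Simplified sensor fusion: trust the nearer of the two estimates,
    # since underestimating the obstacle distance is the safer error.
    distance_m = min(lidar_m, radar_m)
    # Stopping distance = reaction distance + braking distance (v^2 / 2a).
    stopping_m = speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)
    return distance_m <= stopping_m


# At 20 m/s the vehicle needs roughly 53 m to stop under these assumptions,
# so an obstacle reported at 40 m triggers braking while one at 100 m does not.
print(should_brake(lidar_m=40, radar_m=60, speed_mps=20))    # True
print(should_brake(lidar_m=100, radar_m=120, speed_mps=20))  # False
```

The point of the sketch is the structure of the answer examiners look for: sensors produce distance data, the system combines readings from multiple sensors for reliability, and a control rule converts the fused estimate into an action.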
Case study: Gaming and digital identity formation
Online gaming platforms such as Discord, Roblox, and multiplayer online games have become important spaces where young people explore and construct their identities. Through avatars, usernames, voice chat, and in-game behaviour, players can present different aspects of themselves and experiment with alternative personas. Gaming environments also allow users to join communities based on shared interests, skill levels, or play styles, which can influence how identity is performed and recognised.
For many players, gaming identities are shaped through interaction with others, including collaboration, competition, and social norms within specific communities. These identities are not fixed and may change over time as players develop new skills, move between games, or participate in different social groups. In some cases, relationships formed in gaming environments extend into offline friendships, further influencing how young people understand and express their identities.
Define the term “digital identity” in the context of online gaming.
Outline two ways online gaming environments support identity exploration for young people.
Analyze how gaming identities can change over time and across different gaming contexts.
Evaluate the extent to which conformity, stereotype, and deviance theories help explain behaviour and identity within gaming communities.
Discuss the decision of an art gallery owner to develop a virtual tour that is accessible online.
User interfaces (UI) are critical in making devices accessible to a diverse range of users. For example, voice-activated interfaces, like those on smartphones, allow individuals with limited mobility to use devices effectively. While these interfaces promote inclusivity, there are challenges, such as accuracy and user privacy, that can affect their effectiveness.
Evaluate the effectiveness of user interfaces, such as voice and graphic interfaces, in promoting accessibility in computing, considering both the benefits for users with disabilities and the associated technical challenges.