King's Bruton Safeguarding Newsletter, February 2024


Safer Internet Day Activities, Deepfake Technology and AI Bias

This month we are looking at ways the pupils (and parents) can be more critical and careful about what they are watching and accessing on the internet. We will mark Safer Internet Day with an Assembly on Monday 5th February, followed by discussion tasks and a competition to be held during Tutor time. We will also explore some of the more recent topics that might pose a threat to the pupils. There are three main points that the pupils and tutors can explore:

1. How has technology improved my life?
2. How is AI used for my benefit?
3. Is social media positive in my life?

More on these topics below.

Topic 1

How has technology improved my life?

(Algorithms and Phone Addiction)

Pupils will watch the film below and then explore the topic through discussion.

Watch “Six Easy Steps to Get Us Addicted to Our Phones” here:

https://vimeo.com/503021412

Advice for all phone users is to assess their usage, take control of all apps and their notifications, clear browsing history regularly, and delete any apps that are not used often.

Topic 2

How is AI used for my benefit?

(Facial Recognition)

Pupils will watch the film below and then explore the topic through discussion.

Watch “The Real Life of Your Selfie” here:

https://vimeo.com/503021673

Advice for pupils and all phone users is to be careful about posting pictures and tagging friends in photographs, and to set up two-factor authentication wherever possible.

Topic 3

Is social media positive in my life?

Pupils will watch the film below and then explore the topic through discussion.

Watch #TheFullPicture here:

https://vimeo.com/457720222

Advice for all phone users is to think critically, be honest when creating your social media profile, unfollow accounts or content you think might be unhealthy, and actively block any online bullying or harassment.


What is deepfake AI?

Deepfake AI is a type of artificial intelligence used to create convincing images, audio and video hoaxes. The term describes both the technology and the resulting bogus content. Deepfakes often transform existing source content where one person is swapped for another. They also create entirely original content where someone is represented doing or saying something they didn't do or say.

The greatest danger posed by deepfakes is their ability to spread false information that appears to come from trusted sources. For example, in 2022 a deepfake video was released of Ukrainian president Volodymyr Zelenskyy asking his troops to surrender. The Government is introducing new laws to better protect victims of abuse.

The most recent example of this phenomenon is the explicit deepfake images of Taylor Swift circulated on social media; they were viewed over 47 million times before being removed from X and Telegram. Click here to read the BBC story. Click here to see a video on ending intimate image abuse.

AI Bias: the risks and possible consequences

AI bias, also referred to as machine learning bias or algorithm bias, refers to AI systems that produce biased results which reflect and perpetuate human biases within a society. Left unaddressed, it can hinder people’s ability to participate in the economy and society, produce distorted results, and foster mistrust among people of colour, women, people with disabilities, the LGBTQ community and other marginalised groups.

What is the source of bias in AI?

AI bias takes several forms. Without getting too technical, it can occur through:

• Training data bias: For example, training data for a facial recognition algorithm that over-represents white people may create errors when attempting facial recognition for people of colour.

• Algorithmic bias: Algorithmic bias can also be caused by programming errors, such as a developer unfairly weighting factors in algorithm decision-making based on their own conscious or unconscious biases. For example, indicators like income or vocabulary might be used by the algorithm to unintentionally discriminate against people of a certain race or gender.

• Cognitive bias: When people process information and make judgments, we are inevitably influenced by our experiences and our preferences. As a result, people may build these biases into AI systems.
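For readers who want to see training data bias in action, here is a toy sketch in Python. All the numbers are invented for illustration: a pretend “face detector” reduces each image to a single feature value, with group A faces centring on 0.7, group B faces on 0.3 and non-faces on 0.0, and it “learns” one threshold from a training set that contains far more group A faces than group B faces.

```python
import random

random.seed(42)

def sample(mean, n):
    """Draw n hypothetical feature values clustered around a mean."""
    return [random.gauss(mean, 0.05) for _ in range(n)]

# Skewed training data: group A is heavily over-represented.
faces_a = sample(0.7, 950)    # 950 group A faces
faces_b = sample(0.3, 50)     # only 50 group B faces
non_faces = sample(0.0, 1000)

# "Training": put the threshold halfway between the average face value
# and the average non-face value seen in the (skewed) training data.
train_faces = faces_a + faces_b
mean_face = sum(train_faces) / len(train_faces)
mean_non = sum(non_faces) / len(non_faces)
threshold = (mean_face + mean_non) / 2

def is_face(x):
    return x > threshold

# Evaluate on balanced, unseen samples: 1000 faces from each group.
test_a = sample(0.7, 1000)
test_b = sample(0.3, 1000)
acc_a = sum(is_face(x) for x in test_a) / 1000
acc_b = sum(is_face(x) for x in test_b) / 1000
print(f"threshold={threshold:.2f}  group A accuracy={acc_a:.0%}  group B accuracy={acc_b:.0%}")
```

Because group A dominates the training set, the learned threshold sits close to group A's feature values, so nearly all group A faces are recognised while most group B faces are rejected, even though nothing in the code mentions either group by name. The bias comes entirely from what the system was shown during training.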

Examples of AI bias in real life

• Healthcare - Computer-aided diagnosis (CAD) systems have been found to return lower accuracy results for black patients than white patients.

• Applicant tracking systems - Amazon stopped using a hiring algorithm after finding it favoured applicants based on words like “executed” or “captured,” which were more commonly found on men’s CVs.

• Online advertising - Google’s online advertising system displayed high-paying positions to men more often than to women.

There is much research and debate surrounding this issue. Unfortunately, one of the problems is that the technology is advancing at a faster rate than the legislation to control it. Please click the link to access the UK interim report on The Governance of Artificial Intelligence.
