Exploring How AI Could Help Safeguard Young Internet Users in the UK


The landscape of the internet is constantly shifting as technology advances and ever-younger users connect to the web. As concerns grow over the misuse of artificial intelligence (AI) for malicious online activity, the UK’s communications regulator is now weighing if and how AI can be used to protect these younger users.

Under the UK’s Online Safety Act, Ofcom, the regulator in charge of enforcing it, recently announced plans to launch a consultation on the use of AI and other automated tools to proactively detect and remove illegal content online. Specifically, it wants to protect children from harmful content and to identify child sexual abuse material that has previously been hard to detect.

This proposed use of AI would be a key component of a larger set of proposals focused on online child safety. Consultations on the comprehensive proposals are set to begin in the coming weeks, Ofcom said, with the AI-specific consultation to follow later this year.

“But to detect deepfakes, it’s necessary to have not just one, but multiple images of a person,” said Mason Liang, an assistant professor at the Chinese University of Hong Kong, whose team has developed an image-manipulation tool that can detect AI-generated images. “This is especially important in today’s social-media-connected world, where trivial videos of people are readily available and AI-generated pictures could allow anyone to generate plausible-looking fake content.”

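Liang’s point about needing multiple reference images can be made concrete with a small sketch. The snippet below is purely illustrative and is not his team’s tool: it assumes some face-embedding model has already turned each photo into a vector, and it flags a suspect image whose average similarity to several known-genuine references falls too low. The function names and the 0.75 threshold are assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def looks_suspect(candidate: np.ndarray,
                  references: list[np.ndarray],
                  threshold: float = 0.75) -> bool:
    """Flag a candidate image whose embedding drifts away from several
    known-genuine photos of the same person. A single reference gives a
    noisy baseline; averaging over many is what makes the check useful."""
    scores = [cosine_similarity(candidate, ref) for ref in references]
    return float(np.mean(scores)) < threshold
```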

Mark Bunting, a director in Ofcom’s Online Safety Group, explains that their focus on AI begins with assessing its current effectiveness as a screening tool. “Some services already use these tools to identify and shield children from harmful content,” he states. “But there isn’t much information available on their accuracy and efficacy. We want to ensure that industry is properly evaluating these tools and managing risks to free expression and privacy.”

The eventual results of this evaluation may lead Ofcom to recommend specific assessments and requirements for platforms, with potential fines for non-compliance. Bunting notes, “As with many online safety regulations, the responsibility lies with companies to take appropriate steps and utilize appropriate tools to protect users.”
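The kind of evaluation Bunting describes ultimately comes down to comparing a tool’s automated decisions against human judgments on the same content. The sketch below is a generic illustration of that idea, not Ofcom’s methodology; the metric names are standard, but everything else here is an assumption.

```python
def evaluate_screening_tool(flagged: list[bool], harmful: list[bool]) -> dict:
    """Compare a tool's flag/ignore decisions with human labels for the
    same items and report headline accuracy metrics."""
    tp = sum(f and h for f, h in zip(flagged, harmful))          # harmful content caught
    fp = sum(f and not h for f, h in zip(flagged, harmful))      # benign content blocked (free-expression risk)
    fn = sum(not f and h for f, h in zip(flagged, harmful))      # harmful content missed
    tn = sum(not f and not h for f, h in zip(flagged, harmful))  # benign content left alone
    return {
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
    }
```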

There will surely be both supporters and critics of these moves. While AI researchers continue to develop sophisticated methods for detecting deepfakes and verifying users online, skeptics argue that AI is far from foolproof.
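The skeptics’ concern is easiest to see with a back-of-the-envelope base-rate calculation. Every number below is an assumption chosen for illustration, not an Ofcom or platform figure, but the shape of the result holds: when harmful content is rare, even a detector with seemingly strong accuracy produces far more false alarms than true hits.

```python
# Illustrative figures only; none of these are Ofcom or platform statistics.
posts_per_day = 1_000_000     # hypothetical daily volume on one platform
harmful_rate = 0.001          # assume 1 in 1,000 posts is actually harmful
detector_recall = 0.95        # assume 95% of harmful posts get flagged
false_positive_rate = 0.01    # assume 1% of benign posts get flagged anyway

harmful = posts_per_day * harmful_rate          # 1,000 harmful posts
benign = posts_per_day - harmful                # 999,000 benign posts

caught = harmful * detector_recall              # ~950 correctly flagged
missed = harmful - caught                       # ~50 slip through
false_alarms = benign * false_positive_rate     # ~9,990 benign posts flagged

print(f"caught: {caught:.0f}, missed: {missed:.0f}, false alarms: {false_alarms:.0f}")
```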

Along with its announcement of the AI consultation, Ofcom also published its latest research on how children in the UK engage online. The study found that more young children are connected to the internet than ever before, prompting the regulator to break its data down into younger age brackets.

The research found that nearly a quarter, 24%, of 5-7 year-olds in the UK now own their own smartphones, rising to 76% when tablets are included. This younger age group is also using media on these devices more heavily, with 65% making voice and video calls (up six percentage points on the previous year) and 50% streaming media (compared to 39% the year before).

Of course, social media platforms set minimum age requirements precisely to keep out younger children, but those limits are often ignored in the UK: 38% of 5-7 year-olds were found to be using social media. The most popular app among this age group is Meta’s WhatsApp at 37%, followed by TikTok at 30% and Instagram at 22%, with Discord the least used at just 4%.

Additionally, 32% of these young children go online on their own, and 30% of their parents say they are comfortable with them having social media profiles. Across all the platforms surveyed, YouTube Kids remains the top choice for younger users, at 48%.

Gaming continues to be a favorite online activity, with 41% of 5-7 year-olds now playing video games and 15% of those playing shooter titles. While 76% of parents said they have talked to their young children about staying safe online, Ofcom points to a potential disconnect between what children see online and what they report to their parents. When Ofcom interviewed older children aged 8-17 directly, only 20% said they had reported worrying content they came across, even though 32% said they had seen it. That gap underscores the challenge of deepfakes, with 25% of 16-17 year-olds saying they lack confidence in distinguishing fake from real online.

Dylan Williams

Dylan Williams is a multimedia storyteller with a background in video production and graphic design. He has a knack for finding and sharing unique and visually striking stories from around the world.
