Age-verified search. Another misguided restriction or useful protection?

On the heels of the social media ban for young people, a new code under the Online Safety Act comes into force this week that will also force search engines to verify the age of users. Daniel Angus reports on the possible effects.
Over the past year, the way we search and discover information has changed significantly with the introduction of various artificial intelligence tools. But this is just the beginning, because the new Online Safety Code points to a much bigger change ahead.
From this week, if you are not logged in, you will be assumed to be underage.
The Online Safety Act in Australia requires platforms, including search engines, to verify the age of users from 27 December 2025. This poses new challenges for Australians seeking detailed information on sensitive topics such as mental health, sexuality, reproductive health and domestic violence. It also raises privacy concerns, as users must share personal information with these platforms.
These changes are creating new gatekeepers of information and changing how existing players like Google operate. What are these changes, and who are the new gatekeepers? How reliable are they, and how can Australians still find trustworthy information?
The impact of artificial intelligence on search
AI in search uses a technique called retrieval-augmented generation (RAG). Instead of being shown a list of web pages or links, users now receive summarised results presented in a clearer, simpler format: typically an introductory paragraph, a main body, a conclusion and highlighted key points. In this way, search engines and AI platforms transform themselves into “answer engines”, in the words of Microsoft CEO Satya Nadella.
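To make the pattern concrete, here is a minimal sketch of how a RAG pipeline is structured. The generate() function is a hypothetical placeholder standing in for a call to a real language model, and the word-overlap retriever is deliberately naive; real systems retrieve from a web-scale index using vector embeddings.

```python
# A minimal sketch of retrieval-augmented generation (RAG).
# generate() is a hypothetical placeholder for any language model API.

DOCUMENTS = [
    "The Online Safety Code requires search engines to verify user ages.",
    "Retrieval-augmented generation pairs document retrieval with text generation.",
    "Safe search filters reduce exposure to harmful material online.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query and keep the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a language model call."""
    return f"[model answer grounded in a {len(prompt)}-character prompt]"

def answer(query: str) -> str:
    # Retrieve relevant snippets, then ask the model to answer from them alone.
    context = "\n".join(retrieve(query, DOCUMENTS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

print(answer("How does retrieval-augmented generation work?"))
```

The key point is the division of labour: a retrieval step finds candidate sources, and a generation step composes the “answer” users see in place of a list of links.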
A recent survey examined how people use generative AI in six countries (Argentina, Denmark, France, Japan, the UK and the US). It found that searching for information has become the most common use case, with usage more than doubling from 11% in 2024 to 24% in 2025.
AI in search has had a major impact on website traffic, with various web traffic analyses showing a decline in visits. This decline poses new challenges for publishers. The ABC’s James Purtill has observed that traffic to news sites worldwide has dropped significantly, hitting smaller publishers particularly hard. A study by the Pew Research Center similarly shows that users are less likely to click through to sources when shown AI-generated summaries, which often draw on a narrow set of sites such as YouTube, Reddit and Wikipedia.
Alongside the decline in web traffic, some interesting new ways of gathering information are emerging. A growing number of companies, such as Apify, Bright Data and SerpApi, use web scraping to collect search engine results pages (SERPs) in real time and then supply that data to AI platforms to create summaries.
The use of web scraping is increasing even as traffic from human users declines. Scraping (essentially copying and pasting at warp speed) lets companies retrieve information once and reuse it without ever returning to the original sites, as the sketch below illustrates.
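As a rough illustration of what that looks like in code, the following sketch uses the widely available requests and beautifulsoup4 libraries with a placeholder URL; a real scraper would also need to respect robots.txt, site terms of service and rate limits.

```python
# A minimal sketch of the scrape-once, reuse-many pattern.
# https://example.com/article is a placeholder URL.

import requests
from bs4 import BeautifulSoup

CACHE: dict[str, str] = {}  # extracted page text, keyed by URL

def fetch_text(url: str) -> str:
    """Fetch a page once, extract its text, and serve later calls from cache."""
    if url not in CACHE:  # the original site is hit only on the first call
        html = requests.get(url, timeout=10).text
        CACHE[url] = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    return CACHE[url]

# An AI summariser can now call fetch_text() repeatedly without sending any
# further traffic (or ad impressions) back to the publisher.
print(fetch_text("https://example.com/article")[:200])
```

Every repeat lookup is served from the cache, which is why publishers experience the scraped-and-summarised web as a one-way extraction of their content.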
Google is also rolling out new tools to help website owners analyse and group search queries to win back traffic. But as Google points out, these tools are only really useful for sites that already attract a lot of searches. Adobe’s recent acquisition of Semrush, a search engine marketing company, marks a sharp shift in the focus of search and platform companies towards faster analysis of search results and integration with AI tools.
We are seeing a shift in which the industry’s old players are being displaced by new technology companies that want to capture information and control how users access it. For these new gatekeepers, information is a valuable asset rather than a public good. It remains unclear what guardrails or controls they have in place to address the known problems of misinformation, disinformation and polarisation.
With these changes, traditional publishers in the news, social services or healthcare sectors are often left behind. They now need to learn new skills to produce content that can be scraped and reused by AI tools. It also means users need to be more careful, and learn how far to trust these new ways of finding information.
The Online Safety Code and search
Amid this technology-driven shift in the information ecosystem, the Online Safety Code introduces a new layer of complexity. Once implemented, search platforms will be required to verify the age of any user who logs in.
For those who do not log in, the default assumption will be that they are underage, triggering the strictest protections and access limits. This represents a significant structural change for services that have historically operated without identity checks for basic search functions.
Users will have to provide some form of identification, or permission to verify their age through assurance technologies. The potential to link identity data with search histories creates heightened risks, especially for marginalised communities. For example, searches on sensitive topics such as abortion services, gender-affirming healthcare, domestic violence support, addiction treatment or STI testing could tie highly sensitive information to users’ identities.
The combination of authentication and detailed search histories, even where platforms and vendors promise strong protections, creates a powerful data asset that can be misused or accessed for purposes other than those originally intended.
This also fails to tackle the real issue: the problem with harmful content lies not with the people searching for it, but with those who create and distribute it through search engines. Verifying users’ identities does not eliminate harmful content; it may hide it from some, while it still reaches others despite age checks. In this sense, age assurance changes visibility without addressing the production and circulation of harmful material.
Another concern is that the policy approach tends to conflate exposure with harm. This assumes that exposure to problematic or age-restricted content is inherently damaging.
In practice, harm is shaped by the user’s wider environment. A child who accidentally encounters difficult material online can be supported if they have strong guidance, digital literacy, and trusted adults who can contextualize what they are seeing. Another teen without these supports may still encounter disturbing content even with tight guardrails, but will do so without scaffolding.
The outcome is determined not only by the content but also by the capacity to interpret and deal with it.
Safe search features should play a strong role in reducing access to truly harmful and illegal material, and these protections should apply to all users rather than being framed as protections exclusive to children. Opening this debate further will help shift policy from a binary approach to limiting exposure to creating systems of support, literacy, and care that permanently and equitably reduce harm.
How to deal with change
With more AI-generated search summaries, fewer opt-out options, and policy changes affecting how search results are displayed, users must learn new ways to verify information sources and face the risk of being misled. What can users do in response to such big changes?
- Rely less on search engines, and go directly to the websites or apps of reputable news publishers or information providers.
- Make a list of reliable sources of information, such as government agencies, credible nonprofit organizations, university sources, and reputable news organizations.
- Verify information across multiple websites to ensure validity. In case of unclear details, contact the organization directly by phone or email.
- Use identity verification tools only when necessary.
As with the social media age ban, time will tell how the new Online Safety Code will affect us all.
Prof Daniel Angus is a chief investigator at the Queensland University of Technology node of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and Professor of Digital Communication in the School of Communication.

