Crackdown on ‘lawful but awful’ content to protect kids

Australians using some artificial intelligence chatbots will have to prove their age from March next year, in a bid to reduce their access to sexual content online.
The country’s online safety regulator is worried about young people holding sexually explicit conversations with AI platforms for hours at a time.
In some cases, the eSafety Commissioner also hopes to limit exposure to chatbots that encourage self-harm or suicidal thoughts.
Under six regulatory codes published by the commissioner on Tuesday, platforms hosting “high-risk” or “harmful” content will need to verify a user’s age before allowing access.
That covers pornography, self-harm material and graphic violence, bringing pornography websites and social media applications within the scope of the rules, which start on March 9.
eSafety Commissioner Julie Inman Grant said the codes would help reduce children’s exposure to “lawful but awful” content before they are old enough to process it properly.
Lower-risk platforms such as Copilot or ChatGPT will not need to verify age, but will be expected to have safeguards in place so conversations do not turn towards sexualised content or self-harm.
However, chatbots designed especially for sexualised conversations will be required to make users prove their age.
“We have been concerned about these chatbots for a while, and we have heard anecdotal reports of children – some of them younger than 10 – talking to AI companions for up to five hours a day,” she said.
She also pointed to recent reports of tragic outcomes allegedly linked to AI chatbot conversations with children experiencing suicidal thoughts.
A study by researchers at Northeastern University in the United States found that large language models such as ChatGPT, Gemini and Perplexity AI could be manipulated into sharing harmful content with users, including analysis of different suicide methods.
Separately, a case in the US claimed that ChatGPT contributed to the death of a 16-year-old.
Transcripts showed the chatbot offered to help write a suicide note and encouraged him to keep his thoughts secret from his family.
Peak bodies from the technology industry, which developed the new Australian codes, described them as “an important step in making the internet safer”.
They said the rules, among the first of their kind in the world, would keep pace with changes in the threat landscape.
“The new codes are about ensuring families have stronger protections without losing the benefits of digital services.”