
Australia clamps down on ‘nudify’ sites used for AI-generated child abuse


Three websites used to create abuse imagery had received 100,000 monthly visits from Australians, watchdog says.

Internet users in Australia have been blocked from accessing several websites that used artificial intelligence to create child sexual exploitation material, the country’s internet regulator has announced.

The three “nudify” sites withdrew from Australia following an official warning, eSafety Commissioner Julie Inman Grant said on Thursday.


Grant’s office said the sites had been receiving approximately 100,000 visits a month from Australians and had featured in high-profile cases of AI-generated child sexual abuse imagery involving Australian school students.

Grant said such “nudify” services, which allow users to make images of real people appear naked using AI, have had a “devastating” effect in Australian schools.

“We took enforcement action in September because this provider failed to put in safeguards to prevent its services being used to create child sexual exploitation material and were even marketing features like undressing ‘any girl,’ and with options for ‘schoolgirl’ image generation and features such as ‘sex mode,’” Grant said in a statement.

The development comes after Grant’s office issued a formal warning to the United Kingdom-based company behind the sites in September, threatening civil penalties of up to 49.5 million Australian dollars ($32.2m) if it did not introduce safeguards to prevent image-based abuse.

Grant said Hugging Face, a hosting platform for AI models, had separately taken steps to comply with Australian law, including updating its terms of service to require account holders to minimise the risks of their models being misused.

Australia has been at the forefront of global efforts to protect children from online harm, banning social media for under-16s and cracking down on apps used for stalking and creating deepfake images.

The use of AI to create non-consensual sexually explicit images has been a growing concern amid the rapid proliferation of platforms capable of creating photo-realistic material at the click of a mouse.

In a survey carried out by the US-based advocacy group Thorn last year, 10 percent of respondents aged 13-20 reported knowing someone who had deepfake nude imagery created of them, while 6 percent said they had been a direct victim of such abuse.


