AI tools trained using images of children

A report finding that images of Australian children were being used to train AI without their consent or knowledge has raised significant concern.

Images of Australian children are being used to train artificial intelligence (AI) programs, a global human rights group has found.

Human Rights Watch (HRW) claims it found photos of at least 190 young people in a dataset used to train AI programs, without their knowledge or consent.

The images allegedly include identifying information about the children, such as their names and locations. HRW said the images originally appeared online via platforms including YouTube.


HRW analysed a tiny section of a dataset of nearly six billion images and captions being used for AI training.

In that section, it found hundreds of photos of children from across Australia, taken from personal photo and video-sharing platforms. HRW said some of these images were intended for limited viewing.

HRW raised concerns about the potential misuse of these images, such as the creation of realistic deepfakes.

The program's operators said they would remove the images after HRW brought the issue to their attention.

Because HRW reviewed less than 0.0001% of the entire dataset, it has warned that the scale of non-consensually collected images being used to train AI could be significantly higher.

It also noted privacy concerns for the children pictured, such as future data breaches and the leaking of personal information.

Privacy concerns

HRW said it found some examples of images being used to train AI that had originally been posted online with some privacy restrictions.

This included content uploaded to YouTube as ‘unlisted’ — a setting that means the video can’t be found through a basic search, and can only be accessed by those with a direct web link.

Hosting unlisted content on AI platforms is a breach of YouTube's guidelines. The platform didn't respond to HRW's findings when asked for comment.

Dreyfus’ response

Attorney-General Mark Dreyfus said he was “deeply concerned” by the findings in the report, saying they “raise serious privacy concerns about the lawful handling and storage of personal information”.

The sharing of non-consensual deepfake child abuse material is illegal under existing laws in Australia. Last month, the Federal Government introduced a bill to Parliament that would also protect adults from the practice.

Next month, the Government will introduce a bill to enhance protection of children’s data online.

If passed, it's hoped these laws will improve online safety for children and add protections against potential safety threats posed by emerging technologies.
