‘It’s beyond human scale’: AFP defends use of artificial intelligence to search seized phones and emails

The Australian federal police says it had “no choice” but to lean into artificial intelligence and is increasingly using the technology to search seized phones and other devices, given the vast amounts of data examined in its investigations.

The AFP’s manager for technology strategy and data, Benjamin Lamont, said investigations conducted by the agency involve an average of 40 terabytes of data. That workload includes material from the 58,000 referrals a year received by its child exploitation centre, while a cyber incident is reported every six minutes.

“So we have no choice but to lean into AI,” he told a Microsoft AI conference in Sydney on Wednesday.

“It’s beyond human scale, so we need to start to lean in heavily on AI, and we’re using it across a number of areas.”

Aside from taking part in the federal government’s trial of Microsoft’s Copilot AI assistant, the AFP is using the company’s tools to develop its own custom AI for use within the agency, including work to translate 6m emails written entirely in Spanish and to examine 7,000 hours of video footage.

“Having … a human sitting there going through 7,000 hours – it’s just not possible. So AI is playing a heavy role in that,” Lamont said.

One dataset the AFP is now working on is 10 petabytes (10,240TB), and a single seized phone can involve 1TB of data. Lamont said much of the work the AFP was seeking to use AI for involved structuring the files it obtains to make them easier for officers to process.

“When we do a warrant at someone’s house now, there’s drawers full of old mobile phones,” Lamont said. “Now, how do we know that those mobile phones haven’t been used in the commission of an offence? We have to go through them and then identify those components and see if there was … any criminality in there.”

The AFP is also developing AI to detect deepfake images and is working out how to quarantine, clean and analyse data obtained during investigations by operating in a secure, fully disconnected environment.

The agency is also exploring whether generative AI could be used to create text summaries of images or videos before they are viewed by officers, to prevent them being unexpectedly exposed to graphic imagery. It is also looking at whether AI could modify such content by converting images to greyscale or removing audio.

The AFP has faced criticism over its use of the technology, most notably when its officers used Clearview AI, a facial recognition service built on photos scraped from the internet.

Lamont said the AFP “haven’t always got it right”.

“We’ve had to strengthen our processes internally and I think this … has been really key, because it’s not just a set and forget,” he said. “As technology evolves and as the processes evolve … we have to continually look at how we’re making sure that it’s ethical and responsible, and so we’ve created a responsible technology committee within the organisation to assess emerging technology.”

He said it was also important for the AFP to discuss its use of AI publicly and to ensure there was always a human in the loop making the decisions that flow from its use of AI.

This article was amended on 11 December 2024 to correct a reference to the terabyte equivalent of 10 petabytes.
