Can AI really help fix a healthcare system in crisis?

What if AI isn’t that great? What if we’ve been overstating its potential to a frankly dangerous degree? That’s the concern of leading cancer experts in the NHS, who warn that the health service is obsessing over new tech to the point that it’s putting patient safety at risk. From our story yesterday:

In a sharply worded warning, the cancer experts say that ‘novel solutions’ such as new diagnostic tests have been wrongly hyped as ‘magic bullets’ for the cancer crisis, but ‘none address the fundamental issues of cancer as a systems problem’.

A ‘common fallacy’ of NHS leaders is the assumption that new technologies can reverse inequalities, the authors add. The reality is that tools such as AI can create ‘additional barriers for those with poor digital or health literacy’.

‘We caution against technocentric approaches without robust evaluation from an equity perspective,’ the paper concludes.

Published in the Lancet Oncology journal, the paper instead argues for a back-to-basics approach to cancer care. Its proposals focus on solutions such as hiring more staff, redirecting research to less trendy areas including surgery and radiotherapy, and creating a dedicated unit for technology transfer, to ensure that treatments already proven to work actually become part of routine care.

Against those much-needed improvements, AI can be an appealing distraction. The promise of the technology is that, within a few short years, a radical increase in capability will let AI do jobs in the health service that can’t currently be done, or at least ones that take up hours of a highly trained specialist’s time. The experts’ fear is that this promise about the future is distracting from changes needed today.

It effectively casts AI as the latest example of “bionic duckweed”, a term coined by Stian Westlake in 2020 for the invocation, deliberate or otherwise, of technology that may or may not arrive in the future as an argument against investment in the present. Elon Musk’s Hyperloop is perhaps the most famous example of bionic duckweed, first proposed more than a decade ago explicitly to try to discourage California from going ahead with plans to construct a high-speed rail line.

(The term comes from a real instance in the wild, in which the UK government was advised against electrifying railways in 2007 because “we might have … trains using hydrogen developed from bionic duckweed in 15 years’ time … we might have to take the wires down and it would all be wasted”. Seventeen years on, the UK continues to run diesel engines on non-electrified lines.)

But the paper’s fears about AI – and about the general technophilia of the health service – go beyond the worry that it might never arrive. Even if AI does start making headway in fighting cancer, without the right groundwork it may be less useful than it could be.

Back to the piece, a quote from the lead author, oncologist Ajay Aggarwal:

AI is a workflow tool, but actually, is it going to improve survival? Well, we’ve got limited evidence of that so far. Yes, it’s something that could potentially help the workforce, but you still need people to take a patient’s history, to take blood, to do surgery, to break bad news.

Even if AI is as good as we hope it will be, in the short term that might mean little for healthcare in general. Say AI can meaningfully speed up the work of a radiologist, diagnosing cancer earlier or faster after a scan: that means little if there are bottlenecks in the rest of the health service. In the worst-case scenario, you may even see a sort of AI-enabled denial-of-service attack, with the tech-powered sections of the workflow overwhelming the rest of the system.
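To make that bottleneck concrete, here is a toy simulation of a two-stage care pipeline, with diagnosis feeding treatment. It is my own sketch, not the paper’s, and the weekly rates are invented purely for illustration:

# A toy model of a two-stage care pipeline: diagnosis feeds treatment.
# All rates are invented; the point is the shape, not the numbers.

def simulate(weeks, diagnoses_per_week, treatments_per_week):
    """Return (patients treated, patients still waiting for treatment)."""
    backlog = 0.0   # diagnosed patients queueing for treatment
    treated = 0.0
    for _ in range(weeks):
        backlog += diagnoses_per_week              # new diagnoses join the queue
        done = min(backlog, treatments_per_week)   # treatment capacity is the cap
        backlog -= done
        treated += done
    return treated, backlog

# Baseline: diagnosis and treatment capacity are matched.
print(simulate(52, 100, 100))  # (5200.0, 0.0)

# AI speeds up diagnosis by 50%; treatment capacity is unchanged.
print(simulate(52, 150, 100))  # (5200.0, 2600.0) – same number treated, huge backlog

The number of patients treated is identical in both runs: end-to-end throughput is capped by the slowest stage, so speeding up diagnosis alone just moves the queue downstream. That is the paper’s “systems problem” in miniature.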

In the long term, AI boosters hope, systems will adapt to incorporate the technology well. (Or, if you’re a true believer, then perhaps it’s simply a case of waiting until AI can staff a hospital end-to-end.) But in the short term, it’s important not to assume that just because AI can do some medical tasks, it can help fix a health system in crisis.

Digital government

New DSIT secretary Peter Kyle at Downing Street.

Last week we looked at some ideas for what the new government could do around tech, and it’s looking good for at least one of those suggestions. The new secretary of state for science, innovation and technology, Peter Kyle, has been in office for just a few days but is already hitting my inbox. DSIT, he says, will:

Become the centre for digital expertise and delivery in government, improving how the government and public services interact with citizens.

We will act as a leader and partner across government, with industry and the research communities, to boost Britain’s economic performance and power-up our public services to improve the lives and life chances of people through the application of science and technology.

Specifically, DSIT will “help to upskill civil servants so they are better at using digital and AI in their frontline work”. Last week, we called on Labour to “take AI government seriously”; it looks as if they already are.


Digital colleagues

Will your next new colleague be digital?

On the one hand, look, this is obviously a publicity stunt:

Lattice is going to bring an AI employee through all the same processes a human employee goes through when they start a new role.

We’ll be adding them into the employee record and integrating them into our HRIS; we’ll be adding them to the org chart so you can see where they fall within a team and department; we will be onboarding the AI employee and ensuring they undergo the training necessary for their role.

And we will be assigning goals to this digital worker, to make sure we are holding them accountable to certain standards – just like we would with any other employee. This is going to be a huge learning moment for us, and for the industry.

That’s Sarah Franklin, the chief executive of HR platform Lattice, talking about the company’s plans to take an AI employee through all the same steps as a human one. But if you want a peek at what AI success would look like, it’s not far off this.

Businesses are bad at bringing in new technology. If something works well enough, they tend to stick with it for years – decades, even – and it’s a huge hurdle to encourage them to switch to a different way of doing things even if the gains seem great.

But they’re much better at bringing in new staff. They have to be: staff quit, retire, have children or die. If you can make the process of bringing in an AI worker more like the latter, and less like the former, you may well end up greatly expanding the pool of businesses that feel they can deploy AI in their own world.

The wider TechScape

Hackers have stolen digital tickets for Taylor Swift’s Eras tour.
