Artificial intelligence: ‘Like a friend over your shoulder’
While there is an ongoing ethical debate around artificial intelligence, many safe and highly effective AI tools have already found their way into healthcare. Doctors who use them explain that, with the right safeguards, they can bring huge benefits.
Akriti Nanda is no data scientist or wonk – but then she doesn’t need to be.
As a clinical fellow in AI (artificial intelligence), she is using her specific expertise as a doctor to ensure cutting-edge technologies are safe, fit for purpose, and – importantly – make it from the lab to the patient.
She is one of a growing band of doctors who are enthusiastic about the benefits AI can bring – and who believe it is vital doctors are at the table at every stage, from design to implementation to deployment.
‘We don’t need people making more algorithms,’ she says. ‘But there almost seems to be this bottleneck between the research and getting it actually implemented in the NHS. I’m never going to be a data scientist who is writing the code, but [as doctors] we can use our expertise to be that bridge and help things get into the NHS.’
There is – rightly – a lot of debate about AI, its potential for good and its capacity for harm. Even some of the pioneers in the field have warned clearly that it could lead to disastrous scenarios, from the development of chemical weapons to the destabilisation of society.
The use of [AI] in healthcare is evolving and I’m very optimistic about the future
Dr Lip
There is little doubt safeguards are required – as the BMA set out clearly in a report on AI published last month (see Key principles, below). But in the meantime, almost under the radar, AI tools and solutions are making their way into healthcare, and some of them are already making a huge, positive difference.
Not surprisingly, radiology is at the forefront. Gerald Lip, a consultant radiologist and clinical director of the North East Scotland Breast Screening Programme, gives tangible examples. A case in point is a recently completed evaluation of the breast-screening programme in Scotland, where an AI tool was used as a ‘safety net’ to look at scans.
‘The AI was able to pick up an additional 10 per cent of cancers in this screening population,’ he says. ‘That’s a real-life example where it made a difference.’
Early intervention
Another example Dr Lip gives is an AI tool to spot lung nodules on chest X-rays. ‘You can pick them up sooner and smaller, before they have a chance to spread, and as a result they’re caught at an earlier, more treatable stage,’ he says. ‘There’s a benefit to patients, there’s less chemotherapy, and they’ve got a higher survival rate as well.’
As a doctor, he really appreciates the assistance of this technology, especially in a busy work environment. ‘It’s like having a friend over your shoulder pointing at one or two things and saying, “Would you like to have another look at that?”’
Dr Lip, who also chairs the AI committee of the British Institute of Radiology, says there are risks, such as automation bias – where people trust the AI too much – or bias in the AI tool itself, for example discriminating based on ethnicity or gender. ‘We need to maintain a belief in our own medical skills,’ he stresses. ‘Having an understanding of AI means educating yourself about how it works, to ensure you don’t have bias.’
And while AI is very helpful in administrative tasks such as transcribing patient notes or writing reports, doctors should remember their names are at the bottom, he adds. ‘We have a duty as reporters or physicians to ensure that what is written is the true word.’
AI is here to stay, says Dr Lip. ‘We’re already using it in our phones, in our chatbots, in our cars – but the use of it in healthcare is evolving and I’m very optimistic about the future.’
Ms Nanda, a specialty trainee 4 in general surgery in south London, was first drawn to learning more about AI around the time ChatGPT started to gain traction. ‘I think it was the first time that the general public had really been exposed to the power of AI and how easily accessible it is – you can have it on your phone or your laptop; you don’t need some supercomputer in the hospital to use this powerful technology.’
Her clinical fellowship in artificial intelligence is a year-long programme hosted by Guy’s and St Thomas’ NHS Foundation Trust, which is open to clinicians across the NHS. The fellows – mostly doctors – work on a project involving real-life application of AI in healthcare, as well as learning about the topic more broadly.
‘The fellowship is really good because it upskills you – it doesn’t require you to have a master’s in artificial intelligence or a PhD, just a real interest in it.’
She is working on a project that uses AI to improve radiotherapy planning – potentially saving hours of clinician time. Rather than an oncologist looking at the hundreds of ‘slices’, or images, that make up a CT scan and manually drawing around the tumour and the surrounding organs that need to be protected, the technology does it for them. ‘It’s a very long and laborious process [for the oncologist] but the software we’re implementing does it automatically, based on lots of CT scans that it’s been trained on, so the oncologist just has to sit and edit them.’
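For the curious, here is a minimal sketch, in Python, of the auto-contouring idea she describes. The segmentation ‘model’ below is a hypothetical placeholder: a real radiotherapy tool would use a network trained on many annotated CT scans and would sit inside a regulated clinical workflow.

```python
# A toy sketch of AI-assisted contouring, not a clinical implementation.
import numpy as np

def segment_slice(ct_slice: np.ndarray) -> dict:
    """Stand-in for a trained segmentation model: one binary mask per
    structure. Real tools learn contours from annotated scans; here we
    simply threshold pixel intensity as a placeholder."""
    return {
        "tumour": ct_slice > 0.8,        # placeholder logic only
        "organ_at_risk": ct_slice > 0.5,
    }

def auto_contour(ct_volume: np.ndarray) -> list:
    """Propose contours for every slice; an oncologist reviews and edits."""
    return [segment_slice(s) for s in ct_volume]

# In reality a scan is hundreds of 512x512 slices; a small stack here.
scan = np.random.rand(12, 64, 64).astype(np.float32)
proposals = auto_contour(scan)
print(f"{len(proposals)} slices contoured, awaiting clinician sign-off")
```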
Saving time
There are still many sets of human eyes on it, she adds. ‘It’s got lots of humans in the loop – but the feedback we’re getting is that it’s saving almost an hour and a half per scan.’
She believes it’s important for doctors from all specialties to have an understanding of AI – and crucially, to know the right questions to ask. ‘We’re about to be hit by hundreds of companies knocking on the NHS’s door and they’re going to say, “Hey, we’ve got AI, we can solve this problem for you.” We need people in the NHS to understand what they’re doing, to be more cautious but also to get the best out of the technologies that there are.’
Bharadwaj Chada is also undertaking the clinical AI fellowship programme. A GP trainee in London, he has already seen first-hand the effects AI can have on the day-to-day workings of general practice.
He and his colleagues were perhaps most excited about ambient note-taking (see Super scribe, below) for automatic transcription. But an AI-enabled stethoscope which can diagnose heart abnormalities and disease in 15 seconds impressed him, too.
What he really liked about the tool, however, was that it also helped with workflow in terms of what to do if a patient is incidentally found to have heart failure, for example. ‘It tells you who to refer to and what medications to start. Often, when AI is rolled out, it’s done in isolation without really an understanding of how it affects people’s workflows and what it means for service redesign. But this was packaged as something you can actually use, and tells you what you should be doing about whatever the AI solution might be throwing up.’
We need to maintain a belief in our own medical skills
Dr Lip
As part of the fellowship, he is working on a project with South London and Maudsley NHS Foundation Trust to use predictive modelling to identify adults at high risk of needing mental healthcare, based on their first presentation to mental health services. The idea is that the modelling will suggest who would benefit most from interventions such as timely visits in the community and ensuring they are on the right drug therapy.
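As an illustration only – this is not the project’s actual method – a risk model of that general shape could be sketched with a standard library such as scikit-learn, with synthetic data standing in for real clinical features:

```python
# Illustrative sketch of predictive risk modelling; synthetic data
# stands in for features drawn from a first presentation to services.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical features: age, prior contacts, symptom scores, etc.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]   # probability of high need

print(f"AUC: {roc_auc_score(y_test, risk):.2f}")
# Patients above a chosen threshold might be flagged for timely
# community visits and a medication review - with clinicians, not the
# model, making the final call.
```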
Doctor involvement
Dr Chada believes clinicians and patients should be educated in AI, in part so they can ensure the solutions deployed in the NHS are ones which are needed. ‘With some tech solutions, they’re hammers in search of a nail,’ he says.
‘Having doctors who are conversant with the use of these technologies, who know the questions to ask and who know the context in which it’s being used is important – as is knowing how things were done before, what data is needed, the ethics and governance, the explainability of AI, and change management: if you’re introducing an AI solution, what else needs to happen?’
There is an enthusiasm and appetite for AI, especially among his younger colleagues, he says. ‘Doctors need to be front and centre of discussions – along with patients. There’s no question the tech is great, but it’s the implementation we need to get right. We need to think about the impact on training, on staff, integrating into workflows, the unanticipated consequences of change. All these things can only be figured out if doctors are involved throughout the life cycle of the design and deployment of AI.’
Key principles
The BMA is behind AI adoption as long as it meets set criteria
In October, the BMA published Principles for artificial intelligence (AI) and its applications in healthcare. The report says the BMA supports the adoption of new technologies that are safe, effective, ethical and equitable, and that support doctors to deliver the best possible care, improve care quality and improve job quality.
- AI must be robustly assessed for safety and efficacy in clinical settings
- Governance and regulation to protect patient safety are vital
- Staff and patient involvement throughout the development and implementation process is necessary
- Staff must be trained on new technologies (initially and continuously), and the technologies must be integrated into workflows
- AI requires a robust and functioning NHS to be effective
- Existing IT infrastructure and data must be improved
- Legal liability must be clarified.
Super scribe
AI has the ability to cut through admin tasks, allowing doctors to give more time and attention to patients
Surrey GP Dave Triska is an enthusiastic early adopter of AI technologies. He’s also now a consultant to AI manufacturer TORTUS and to NHS England on AI in healthcare.
He uses AI tools daily, for text generation for administrative tasks such as writing practice policies, and for ‘ambient transcription’ (note-taking) during patient consultations.
The time and cost savings of using AI to do the heavy lifting of administrative functions are obvious. He helped another practice save £1,800 a year by teaching them to use AI to review their policies rather than paying a consultant.
Importantly for Dr Triska, a form of ‘medical AI scribe’ such as TORTUS frees him up to concentrate fully on the patient, rather than having to take notes and write up a summary. He allows time to check AI transcriptions for accuracy but now regularly finishes a session 10 to 15 minutes early.
‘If you’re taking notes yourself, you are using two different bits of your brain: the memory bit is working really hard to structure and organise [information], while your creative thinking and processing bit is trying to come up with novel solutions,’ he says.
‘Now my energies are totally focused on problem-solving, finding creative solutions for the patient.’
Patients are much less worried about AI than healthcare professionals often are, he finds. He’s done more than 4,900 consultations using TORTUS and only one patient asked him not to use it.
In the short term, he is adamant that adopting some of AI’s applications could help the NHS out of its administrative quagmire, while insisting that the types of AI used need regulating.
‘Let’s not all go chasing AI unicorns: we need to be looking at the lower-hanging fruit. The functional admin side of things is the easiest thing where we can have a dramatic impact on care delivery almost overnight.’
He wants to see more doctors involved in advising not only manufacturers, but also central bodies such as NHS England. ‘You need to have clinicians on board who are actually using products day in day out, and have an understanding of what they need,’ he says. ‘Otherwise, we end up with the delivery of stuff that’s completely hopeless.’
Fellow champions
Doctors can join a programme to speed up their understanding of AI
The NHS Clinical Fellowship in AI is a 12-month programme integrated part-time alongside clinical work. Applications for the fourth cohort are expected to open this month, with the programme running from August 2025 to August 2026.
Beatrix Fletcher, programme manager at Guy’s and St Thomas’ in London, says it is expanding year on year – from 11 fellows in the first cohort to a likely 35 in the fourth. Most of the fellows in the first three cohorts have been doctors, but applications are also invited from other clinicians. It’s important, she says, that all members of the multidisciplinary team have some understanding of AI technologies.
Now my energies are totally focused on problem-solving, finding creative solutions for the patient
Dr Triska
‘It’s absolutely essential the people who act as the decision-makers about what is used in patient care should include all stakeholders, including patients,’ she says. ‘But doctors are really well-placed to understand their patients. They’re the ones who interact with them, more than someone from an IT team or from a company that’s developing a product. Doctors understand their patient population intimately – for example, what is acceptable risk, what’s not acceptable, and how to quantify the harm of a potential product.’
Doctors also understand their workflow, know how other workforce groups are impacted by their decisions, and have a good awareness of local governing politics and policies. ‘Without that perspective, there’s a danger that products are brought in that don’t focus on patient benefit and might create a system that’s more burdensome.’
Find out more about the fellowship
(Main picture credit: Sarah Turton)