Large Language Models (LLMs): Is this AI?

Data Science team, Strategy Unit

Oct 10, 2024

Generative AI ✨

  • Creates new content
  • Trained on lots of examples
  • Can mimic creativity

Modalities 🖼️

A doctor, drawn in a cartoony style with isometric projection, performing a procedure on a patient using a large pair of tweezers on their nose. A sign on the wall reads 'chronic nose hair'.

Text 🔤

  • Spam filters
  • Sentiment
  • Topic detection
  • Word prediction
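
As a toy illustration of one of these tasks (sentiment), here is a minimal sketch using the Hugging Face transformers pipeline; it assumes the transformers package is installed and will download a small pretrained model on first run:

    from transformers import pipeline

    # Loads a small pretrained sentiment classifier (downloaded on first use)
    classifier = pipeline("sentiment-analysis")

    print(classifier("The clinic staff were wonderful"))
    # Illustrative output: [{'label': 'POSITIVE', 'score': 0.9998}]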

Large Language Models (LLMs) 🦜

  • A (fancy?) parrot
  • Learns from lots of text
  • Predicts next word

A sentence that says 'the cat sat on the' followed by a blank. The words 'jumper', 'cushion', 'mat', 'grass' and 'dog' are shown to the right with bars next to them. 'Mat' has the longest bar, indicating the highest probability of it being the next word.
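
To make 'predicts the next word' concrete, here is a minimal sketch that asks a small, openly available model (GPT-2, via the transformers and torch packages, both assumed installed) for its most probable next words; the probabilities it prints will differ from the bars in the illustration above:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("the cat sat on the", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (batch, sequence, vocabulary)

    # Convert the scores at the final position into next-word probabilities
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    for p, token_id in zip(top.values.tolist(), top.indices.tolist()):
        print(f"{tokenizer.decode(token_id).strip()!r}: p = {p:.3f}")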

In healthcare 🏥

IRL models 🤖

Case study 🧠

‘Nearly 40% of NHS Talking Therapies already trust Limbic to improve their services’

Case study 🧠

Patient

A user interface to the Limbic Access system, which shows a chatbot asking the user if they need help. There are buttons saying 'yes' and 'no thanks'.

Clinician

A clinician user interface to the Limbic Access system, which summarises an interaction with a patient.

  • From the NHS-E Transformation Directorate write-up:
    • ~99% of patients who left feedback said that Limbic was helpful
    • the service saw a 30% increase in referrals, and initial evidence indicates that Limbic improved out-of-hours access
    • on a pro-rata basis, a saving of 3,000 hours (4 psychological wellbeing practitioners)
    • nearly 20% of referrals were identified as ineligible and signposted to a more appropriate service

But… ⚠️

The effect of using a large language model to respond to patient messages (The Lancet Digital Health)

‘…raises the question of the extent to which LLM assistance is decision support versus LLM-based decision making’

‘…a minority of LLM drafts, if left unedited, could lead to severe harm or death’

Pros ➕

  • For providers: could reduce pressure
  • For users: could increase service accessibility
  • Can be trained for domain specificity

Cons ➖

  • Ethical issues, like:
    • bias
    • computational cost
    • data origins
    • privacy
  • Not human
  • It lies (‘hallucinates’ plausible falsehoods)

Consider 🤔

To ponder ❓

  • Is this AI?
  • Are LLMs an appropriate tool in healthcare?
  • How might you feel interacting with an LLM-driven service?
  • How can we protect patient privacy?
  • How do we deal with LLMs as tools for decision support vs decision making?
  • Who is responsible for errors, or even death?

Further reading 📚