What’s your official title?
I’m co-Chair of openEHR, so I represent individual members, but I also work for Dedalus – who are an Industry Partner – as their Chief Industry Advisor. Alongside this, I also work part-time academically and in policy for healthcare AI.
Why are you an advocate of openEHR?
I’ll tell you the whole story. I first became interested in openEHR in 2011 when I met a doctor who was writing a GP system based on it. I will willingly say now that I didn’t get it immediately, but that’s when the concept was first introduced to me. I’d previously worked on open standards with Linux, and with the Apache Foundation, and the idea of building things once, building them well – as a minimum platform for other things to be done globally – seemed then an infinitely wise thing to me. Other industries have done it. The World Wide Web is built on that. It just makes sense, right?
But it can be overwhelming for people who are new to this space to understand why this is important and what it means. So I didn’t immediately go out there and start saying ‘we must do this’. I was kind of curious, but it took me a good couple of years to get my head around it fully.
I was working as a CIO at the time and I kind of figured that our data was unmanageable. It was in so many formats, in so many places. It was either unmanageable already or was going to become unmanageable as we produced new data. And as I looked at openEHR, I realised that in banking and in the military and in other industries, they defined what the data was, and it was standardised across different countries and different organisations. An apple here meant an apple there. You didn’t have to create contrived interoperability to try and bridge the gap and lose some of the meaning. So – as CIO – I had to actually look at how bad the unwarranted variance in our data was. It was probably 2014 or 2015 when I really started to engage with this and to pose questions in the organisations I worked for about why we couldn’t converge on standards that were being used across the world.
It’s a long journey because healthcare is complicated. We’ve built ourselves a whole set of technical data debt by doing things the cheapest way, or the quickest and easiest way, or the local way. And sadly we now need to pay off that debt. But my next realisation probably came two or three years later, when I started really getting into intelligent automation and AI, and I realised that AI absolutely needs standardised data in a way that humans don’t. We can read something and interpret it in context, but AI and intelligent automation need on-point data in a standardised format to work reliably. It is possible to pull things out with data mining and NLP, but not consistently. To do it reliably, it needs that data quality. And that’s the point at which I thought, if healthcare is going to embrace the next cognitive revolution, we have to get our house in order. For me, that was the lightbulb moment that we needed to standardise.
Why did you choose openEHR over, say, FHIR or Snomed?
I didn’t. HL7 and FHIR are about interoperating data – shifting data from A to B – and also about workflows. FHIR also covers simple data that can be used for simple storage. openEHR and OMOP are used for storing data – openEHR for care, OMOP for research – and then Snomed and LOINC are used within these as codes that tell you the diagnosis, the procedure and so on.
They all live happily on this diagram. So I’m a broad church because not all of them do everything. All of them do something. And yes, in the Venn diagram you might ask ‘does it do that or that?’ There’s a little bit of an overlap, but we need them all. It’s not about interoperating between system A and system B. It’s about standard data storage for the human condition. They’re all different and they fit together to cover all our needs.
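To make that division of labour concrete, here is a minimal, purely illustrative Python sketch (not part of the interview): the same heart-rate reading expressed as a FHIR resource built for exchanging data, and as an openEHR-style flattened composition built for storing it, with a LOINC code carrying the shared meaning. The openEHR template name and flat paths are hypothetical placeholders, not real archetype paths.

```python
# Illustrative only: the same clinical fact expressed two ways.

# FHIR Observation: a resource shaped for shifting data from A to B.
fhir_observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [
            # LOINC 8867-4 is the standard code for heart rate.
            {"system": "http://loinc.org", "code": "8867-4", "display": "Heart rate"}
        ]
    },
    "valueQuantity": {
        "value": 72,
        "unit": "/min",
        "system": "http://unitsofmeasure.org",
        "code": "/min",
    },
}

# openEHR-style flat composition: data modelled once and stored as part of the
# longitudinal record. Template name and paths below are hypothetical.
openehr_flat_composition = {
    "template_id": "vital_signs_encounter",      # hypothetical template
    "vital_signs/pulse/rate|magnitude": 72,      # hypothetical flat path
    "vital_signs/pulse/rate|unit": "/min",
}

# The terminology code is what keeps "an apple here" meaning "an apple there".
shared_code = {"terminology": "LOINC", "code": "8867-4", "label": "Heart rate"}

if __name__ == "__main__":
    print(fhir_observation["code"]["coding"][0]["display"])              # Heart rate
    print(openehr_flat_composition["vital_signs/pulse/rate|magnitude"])  # 72
```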
But should we try to reinvent it?
No, we should just make it as good as we can. If you’re going to go back 100 years and say, “what is the ideal engineering for data, for healthcare and research?” you may not draw this diagram, but that represents millions of hours of work and expertise and knowledge from clinicians and experts. That diagram is better than what we’ve currently got. If we adhere to that, it’s way better than me buying a cheap system that stores data free-form, as many people are still doing.
How do you get that message out?
Like this! I’m just writing a blog for Tech UK on the same subject, too. I make myself heard and I talk a lot about unwarranted variation in data. Unwarranted variation in treatment causes harm and death – that is established knowledge in medicine – and unwarranted variation in data causes exactly the same thing. Harm and death. Data saves lives!
But this can be a hard sell. We’re talking about something abstract: it’s not a building or an MRI scanner, or even a person. It’s not something you can see or meet, shake its hand. The only way you can do it is using metaphors and stories.
Is the tide turning, do you think?
People are understanding now that, in the same way we engineer a physical hospital so the roof doesn’t cave in, the air conditioning works, the electricity is the right voltage and the water isn’t polluted, we need to engineer data – the future infrastructure – just as professionally. It’s a utility that we need, and we can’t half-bake it, because otherwise we’d have a hospital or health system that is badly engineered; if it were a building, it would look medieval.
Do you find that some health systems are more ready to adopt it than others?
I think it’s people who understand data, or who have found issues in their own data and want to achieve more with it. But imagine the banks in the UK: if every bank recorded transactions and balances and ATM payments differently, and then everything was point-to-point with spaghetti wires, think of the mess we’d be in. You couldn’t transmit all of the data because it wouldn’t be accurate anyway, and you wouldn’t know how much money you had.
So in that sense, you’re actually almost anti-interoperability?
No, I’m against unwarranted interoperability, because interoperability can be used as a sticking plaster for bad data structures, but it is necessary in some places. If you’re using it because you’ve got this really old, bad system and you’re trying to transport data, then first of all, you can bet that the schema here will not map to the schema there. You’ll have to leave some data behind, and then it probably won’t have everything you need to fill this schema, and on it goes. You’ll end up diluting the meaning of the data – I’m against that. But I really don’t mind interoperability when it’s between two really good schemas and it’s warranted. Otherwise, I guess I’m against the wrong use of interoperability.
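As a purely hypothetical sketch of that ‘leaving data behind’ problem – the field names, schemas and values below are invented for illustration and come from no real system – a best-effort mapping between two mismatched schemas might look like this:

```python
# Toy example of contrived interoperability between two mismatched schemas.
# Everything here is hypothetical.

legacy_record = {
    "bp": "120/80",             # systolic/diastolic mashed into one string
    "bp_position": "standing",  # context the target schema has nowhere to hold
    "noted_by": "Dr A",
}

# The target schema only knows these fields; anything else is left behind.
TARGET_FIELDS = {"systolic", "diastolic", "recorded_by"}


def map_legacy(record: dict) -> dict:
    """Best-effort mapping; context such as measurement position is silently lost."""
    systolic, diastolic = record["bp"].split("/")
    mapped = {
        "systolic": int(systolic),
        "diastolic": int(diastolic),
        "recorded_by": record["noted_by"],
    }
    left_behind = set(record) - {"bp", "noted_by"}
    print(f"Left behind: {left_behind}")  # e.g. {'bp_position'}
    return {k: v for k, v in mapped.items() if k in TARGET_FIELDS}


if __name__ == "__main__":
    print(map_legacy(legacy_record))
```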
How did you formally get involved with openEHR?
I’ve got a funny way of just landing in places, and I don’t naturally step forward to lead unless there’s a space for me. With openEHR I absolutely believed in it and I felt leaders were needed, so I stepped forward. That’s unlike a lot of the other standards bodies, where I’m a supporter but have never felt there was the need for me to step forward.
How are you finding it?
Amazing. I’ve been reflecting recently on the responsibility it carries, in terms of the work that I did with Data Saves Lives and how I’ve seen that the standardisation of data literally saves and improves people’s lives.
There are a couple of moments when I’ve thought to myself, ‘I do this as a member. It’s not a job’. I’m a member and I’m representing members, but there’s a moral duty to do that. And that really, in a good way, feels like quite a heavy responsibility, which I need to take really seriously.
It’s a bit like working on the board of a charity or a not-for-profit. We could be saving lives directly. It’s a very similar feeling, I think, because in a way it’s easy to get lost in the informatics and forget the point of it all: ultimately you’re looking at patient outcomes and better healthcare all around.
And I’ve definitely got to name-check Data Saves Lives for that, because they used to keep a running tally of the lives saved and improved through the data work we did when we were in Manchester and the North. That, for me, is always a reference point I can go back to – a reality of what the work means when people say you can’t really quantify what you’re doing. And it’s huge. We can have a huge impact here.
What do you think the next move should be for openEHR?
I think it’s really exciting that we set up the clinical programme board and the new clinical governance so that we can scale up. That’s immediately a 2023 job. But what should we be moving into? There are areas – part of the article that I’m currently writing on life sciences – around patient-reported outcomes and research datasets. I think it’s very important that this data can fully extend to the patient contributing, and that it can be used safely in life sciences. Not that it isn’t used safely now; what I mean is use without friction in life sciences.
I think it’s really exciting that we’re setting up affiliate programmes. We’ve got the launch of the UK chapter coming up on the 14th of March. It’s not only about the dissemination of information, but also about driving what is important as we build further. I think that openEHR needs to engage actively with members in every region and every country to allow two-way participation.