How to harness the power of AI and machine learning in healthcare


Artificial intelligence (AI) and machine learning technologies are rapidly revolutionising the medical industry around the world.

The Department of Health and Social Care has recognised this potential, pledging £250m of investment in AI to help solve healthcare’s toughest challenges. As part of its Grand Challenges missions, the government set a target to ‘use data, AI and innovation to transform the prevention, early diagnosis and treatment of chronic diseases by 2030’.

The arrival of the first COVID-19 lockdown triggered several years of digital transformation seemingly overnight, as the health and care sector was forced to rapidly adapt its practice.

With so much government enthusiasm for AI and willingness across the health and care sector to adopt digital innovation, are we going to see a period of great opportunity and growth for AI and machine learning?

To delve into this topic more deeply, SETsquared gathered leading industry scientists and entrepreneurs for an in-depth roundtable debate. The session was skilfully led by Richard Vize, the Guardian columnist and writer for the British Medical Journal.

COVID-19 and the pace of digital transformation

The pandemic brought great technological change into many aspects of our everyday lives. For the health service, rapid change had to happen and barriers to the deployment of digital innovation came crashing down in the pursuit of delivering the best outcomes for patients.

Laurence Pearce, CEO of xim, has developed Lifelight – an AI-driven solution for performing remote clinical observations, such as measuring heart rate, blood pressure and respiration, within 40 seconds, all via a smartphone or other mobile device.
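For technically minded readers, camera-based vital-sign estimation rests on detecting tiny periodic changes in a video signal. The sketch below is purely illustrative and is not Lifelight’s actual method – it simulates a per-frame brightness trace with a clean 72 bpm pulse and recovers the rate by counting rising zero-crossings; real systems need heavy filtering, motion compensation and clinical calibration that this toy omits.

```python
import math

# Toy stand-in for a camera signal: mean face-region brightness per video frame
# at 30 fps, simulated here as a clean 72 bpm pulse (invented numbers)
fps, true_bpm, seconds = 30, 72, 10
signal = [
    math.cos(2 * math.pi * (true_bpm / 60) * (i / fps))
    for i in range(fps * seconds)
]

# Each rising zero-crossing marks one cardiac cycle
crossings = sum(1 for a, b in zip(signal, signal[1:]) if a < 0 <= b)
estimated_bpm = crossings * 60 / seconds   # 12 cycles in 10 s -> 72 bpm
```

On this noise-free input the estimate recovers the simulated rate exactly; on real video the same idea only works after substantial signal processing.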

“It’s certainly been a rollercoaster,” recalled Laurence. “During the pandemic, there was a massive peak in demand for our remote patient monitoring technology. It was an explosion of interest; suddenly it was no longer a case of trying to persuade GPs to take remote consultations, it was a question of why do we need to see anybody face-to-face?

“The idea of patients looking at their mobile device while having a video consultation and having their vital signs monitored and sent to their GP is what we’ve been working toward, and suddenly everyone was saying we want your technology now. Can you roll it out to millions of customers this week?

“It’s given us a kind of rocket boost in terms of our process, but we had to push back on the demand because we want to be evidence-based. Our roll-out has to be done properly. However, what this period has done is transform people’s awareness of the opportunity to use mobile technology for this sort of point of care delivery.

“COVID has brought AI and the use of technology much more into the mainstream and made its application and implications more visible,” added Laurence Pearce.

Pahini Pandya, CEO and Founder of Panakeia Technologies, which has created an AI-based platform for cancer diagnosis, has seen a similar shift. “There’s a lot of openness and increased demand for AI-based solutions among clinicians. I think the route to market still remains pretty slow for those, like my company, in secondary care. But this greater awareness has allowed us to establish the right sort of clinical partnerships, so that we’re in the process of generating the right sort of clinical evidence for our medical device.

“One of the big changes we’ve noticed is when we go and speak to specialists a lot of them are more aware of AI, but the information that they have is often incomplete, so they’re either too excited or too concerned about AI-based technologies.

“I’d like companies like mine, academics and others in the industry to work together to educate the health and care sector about the pros and cons of AI, what it can actually do versus the misconceptions, and explain its limitations in a more coherent way,” suggested Pahini Pandya.

AI technology is just one part of a bigger picture, and awareness needs to increase across all its aspects, as Professor Michael Boniface, Director of the IT Innovation Centre at the University of Southampton, explained: “Awareness and understanding of AI needs to be wider; it has to be about the use of predictive analytics in all of its guises.

“During COVID, there’s been a lot of success around system modelling in pandemics; we’ve helped locally to provide hospital impact models and simple digital innovations, like remote oximetry, have been rolled out extremely quickly – all of which will lead to new data sets.

“We can’t stop the roll-out and it needs to happen at the pace it’s happening, but we need to ensure that we’re not faced with excited consultants and clinicians making decisions from a back-of-the-envelope service evaluation in Excel. We have to inform them that it’s not going to stand up to the level of evidence required for science and encourage them to proceed with caution. Make sure the data is described in the right way; inch them in the direction of providing the sorts of data that can feed machine learning and AI in the future. We’re not able to solve all the problems right now because we’re having to run in the fast track and the deeper track at the same time,” explained Professor Boniface.

“I agree there’s a lot of opportunity for AI and machine learning, but there are also significant challenges in the regulatory space when thinking about how we validate these new devices and technologies,” added Dr Benjamin Metcalfe from the Department of Electronic and Electrical Engineering at the University of Bath. “AI is a really important component, but it often fits into other much more complex systems, and integrating AI into these pre-existing processes is itself very complex.

“During COVID, within the University of Bath we’ve seen a huge increase in our level of collaboration with clinicians, partly driven by the rapid prototyping and uptake of new devices, diagnostic tools and remote pulse oximeters. But how we now capitalise on that and move forwards within a sensible regulatory framework is a big challenge, and it’s not an obvious one for us to address at the moment.”

The other academic on the panel, Professor Ian Craddock, Institutional Lead for Digital Health at the University of Bristol, agreed: “It’s definitely not just about AI, and it’s not about algorithms operating in isolation. In the short term, what the NHS has looked for is digital innovation to address particular pinch points in service delivery during COVID, and some of those applications have had an AI component.

“There’s a fast track, a very urgent need to address particular things, but in truth the NHS’ capacity to adopt new innovation is limited because it often involves changing clinical pathways, creating organisational change, and training staff in the use of new technologies. So, some things will be fast, but I think in the long-term what we’ll see is that yes, doors have been opened, eyes have been opened but it’ll be a medium-speed transformation.

“What’s encouraging is the growing amount of new talent interested in a career in AI and machine learning. They see the opportunity that COVID has created to transform the way we do things. And we need to do things differently, bringing new people into the industry and the innovation ecosystem to bring about large-scale technological change,” said Professor Craddock.

Nick Allott, CEO of Nquiringminds, has developed an IoT and data analytics platform which works in a range of sectors, including social care. He saw a massive uptake of his technology during the pandemic but cites a number of challenges moving forward.

“I think we need to temper our excitement slightly with a couple of reality checks,” said Nick Allott. “Although there’s a lot of interest in new technologies, the capacity of the health and care sector to act on it is not quite there yet. A gap has opened between a lot of publicity, a lot of cash and, in many senses, a lot of interest, and the ability to actually deliver on it.

“AI has been on a constant loop of overhype and reaction, and there’s contention between people who want to generate solid evidence to support what they’re doing and clinicians desperate to buy. This obviously favours the opportunists, and opportunists exist when there’s lots of money around – we see evidence of this happening all the time. Some are taking new technologies or devices to market that we know don’t work yet or, at the very least, they haven’t got the evidence to support them,” said Nick Allott.

Validating the right technologies

There is a risk of poor practice when creating AI and machine learning technologies in the rush to develop and provide a solution. It’s difficult to take advantage of the demand for a quick solution while ensuring the quality and rigour of your development.

“This was the biggest learning curve for us,” said Laurence Pearce. “We built the first demos and proof of concepts, and every clinician we showed them to asked: where’s the evidence? Does it work? What’s the accuracy level? Can you prove it? Lots of testing and evaluation. It’s a very painful process but a necessary one, and if you haven’t started that process as a tech company, you need to learn as early as possible what will be required; otherwise, you’re going to have to start from scratch and build everything with that rigour.”
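One concrete way to answer the “what’s the accuracy level, can you prove it?” question is to report every headline metric with a confidence interval rather than a bare number. A minimal sketch, using entirely invented validation results and a percentile bootstrap:

```python
import random

random.seed(1)
# Hypothetical validation outcomes: 1 = model prediction matched clinical ground truth
outcomes = [1] * 88 + [0] * 12   # 88% observed accuracy over 100 patients (invented)

def bootstrap_ci(data, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean of `data`."""
    means = sorted(
        sum(random.choices(data, k=len(data))) / len(data)
        for _ in range(n_boot)
    )
    return means[int(n_boot * alpha / 2)], means[int(n_boot * (1 - alpha / 2))]

low, high = bootstrap_ci(outcomes)
# Quoting accuracy together with its 95% interval shows clinicians how much
# the point estimate could move on a different sample of patients
```

The bootstrap is only one option; a binomial exact interval would serve the same purpose, and the wider point is that evidence means uncertainty-aware reporting, not a single figure.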

Engaging with the clinicians is important and it needs to happen early, while also tempering their expectations because the development of AI technologies takes time.

“During the development phase, the design and conversations about data need to be interdisciplinary, with endless conversations taking place about what the data means,” said Professor Boniface. “Having clinicians and others, including patients, involved in the design is also key for adoption.

“AI is a complex software development paradigm. It’s about building software systems from data, and that comes with processes and methodologies for how you develop models through the pipeline, how you integrate them into systems, operate them and deal with things like concept drift and other factors that arise when the data changes or the context of those models changes.

“That whole process doesn’t fit well with actually asserting that something works at a point in time when we know in the future it’s going to change. It’s critically important to bring teams together at the right points in the development cycle and assert the knowledge needed to make sure the model is effective for the purpose it’s designed for,” explained Professor Boniface.
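As a toy illustration of the kind of ongoing monitoring Professor Boniface describes, a deployed model’s incoming data can be compared against its training-time distribution. The sketch below hand-rolls a two-sample Kolmogorov–Smirnov statistic over simulated blood-pressure readings; all numbers and thresholds are invented for illustration, not taken from any panellist’s system.

```python
import bisect
import random

random.seed(0)
# Hypothetical model input: systolic blood pressure readings (mmHg)
baseline = [random.gauss(120, 15) for _ in range(500)]   # training-time data
incoming = [random.gauss(132, 15) for _ in range(500)]   # later data, shifted

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two samples' empirical cumulative distribution functions."""
    a, b = sorted(a), sorted(b)
    def ecdf(sample, x):
        # Fraction of the (sorted) sample at or below x
        return bisect.bisect_right(sample, x) / len(sample)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

d = ks_statistic(baseline, incoming)
# A statistic well above a pre-agreed threshold flags the model for human review
drift_flagged = d > 0.1
```

Production systems would use a tested statistics library and a properly calibrated threshold, but the principle is the same: a model asserted to work at one point in time needs machinery that notices when its inputs stop looking like its training data.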

But the availability of data is a huge part of the validation process, and it’s arguably the biggest hurdle to the adoption of AI in healthcare.

Data availability and governance

“Access to data is hugely important and we’re still awful at it,” said Nick Allott. “The NHS is incredibly fragmented; the data sets are really poor, and they won’t let you work with them. There are lots of genuine reasons why that might be the case, but the reality is the innovation isn’t going to happen until that door is unlocked.

“The NHS is also an impenetrable culture. There are technology-centric companies like us attempting to break down the door to the NHS but it’s really difficult because they speak a different language and they have different processes. Unless you’ve got somebody that can translate for you it’s like banging your head against a brick wall.

“Information governance is the fundamental problem. We can handle data interoperability, even data noise problems. What we can’t solve is access to the data in the first place.

“Small companies like us find it almost impossible to access data, while I’ve read about the likes of Google seemingly being handed it on a plate. Every NHS Trust is autonomous, so unless there’s some central leadership sending a directive, change isn’t going to happen,” said Nick Allott.

Professor Ian Craddock added: “Solving the governance issues is a big problem. Companies struggle to access data, but universities also struggle for exactly the same reasons; there’s no understanding of where the decision lies and there’s a fragmented process of decision-making in the NHS.

“Having money on the table is a great enabler but the money itself doesn’t actually address any of the issues because if it’s no-one’s job within the NHS Trust to make the data available, then giving the NHS money to make it happen isn’t going to solve anything.

“Responsibility is at the core of what we do. Involving clinicians and patients in coming up with systems that are understood, which comply with regulation and have the necessary quality and independence of evaluation, is extremely important. However, progress requires long-term partnerships. Just throwing money at the problem and expecting it to be solved in a year is not realistic,” argued Professor Craddock.

There have been some advances toward making data more available, as Pahini Pandya explained: “During COVID, there was access to data in a much more regulated and controlled manner. One of the recent changes we’ve observed is open test data sets, which are relatively standardised, anonymised and available for companies to tap into. They have representation across multiple sites, which is often a critical component when building technologies that scale.

“I think there would be benefits to us all in working together on standardisation initiatives and establishing pools of data that are disease-specific or tackle a specific challenge faced by the NHS, which can then be shared to help develop solutions faster.

“There are a lot of people applying AI just because it’s hot. The only time an innovation will be adopted in any system, whether that’s healthcare or otherwise, is if it solves a real, immediate problem. I think one of the things we should think more about is which acute problems need solving, rather than some other more preventative solutions,” suggested Pahini Pandya.

Dr Metcalfe would also like to see more collaboration and sharing of data, with a greater focus on what issues need the most attention: “Data is critically important and how that’s shared, managed and the governance of that data is really key. It’s something we don’t have right. We’re not even close to getting it right.

“One issue that we come across quite often is whether the data that’s available actually addresses an unmet clinical need. So, there could be vast data sets that represent one population group with one disease, but that might not be where there’s a strong clinical need. There’s often huge research effort invested in areas where data already exists – because it’s easier to do that research but it’s not addressing an unmet clinical need.

“I think we need to bring clinicians and researchers together and have enhanced leadership in terms of generating data and making it accessible. This way we can identify and address unmet clinical needs in a much more coherent manner,” said Dr Metcalfe.

Communicating complexities on the journey to adoption

AI and machine learning technologies have the potential to power a new generation of medical devices and systems that make clinicians more efficient, resulting in better patient outcomes. But how can we ensure clinicians understand AI and its applications?

“Clinicians may be very familiar with traditional statistical analysis, presentation of significance and hypothesis testing, but when you present a similar analysis using machine learning – they look at you as if you’re from another planet,” said Professor Boniface.

“And that’s even for some of the most highly trained people in the NHS, and if you go down to some of the digital teams who are responsible for transformation and implementation of solutions you find that their literacy is even lower. In fact, they don’t even see the data; they see the system, how the components are bolted together, and the data is rarely viewed.

“I feel there needs to be a literacy programme within the NHS, and there are various initiatives trying to address this. It’s a key part, as is researchers making sure their algorithms can present results in a way that’s understandable.”
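One practical bridge is to translate a model’s raw outputs into measures clinicians already use, such as sensitivity and specificity. A minimal sketch with invented validation counts:

```python
# Hypothetical validation outcomes for a binary diagnostic model:
# (true_label, predicted_label), with 1 meaning "disease present"
results = [(1, 1)] * 45 + [(1, 0)] * 5 + [(0, 0)] * 90 + [(0, 1)] * 10

tp = sum(t == 1 and p == 1 for t, p in results)   # true positives
fn = sum(t == 1 and p == 0 for t, p in results)   # false negatives
tn = sum(t == 0 and p == 0 for t, p in results)   # true negatives
fp = sum(t == 0 and p == 1 for t, p in results)   # false positives

# "Of the patients who had the disease, how many did the model catch?"
sensitivity = tp / (tp + fn)
# "Of the healthy patients, how many did the model correctly clear?"
specificity = tn / (tn + fp)
```

Framing an AI result as “sensitivity 90%, specificity 90% on this validation set” puts it in the same language as any conventional diagnostic test, rather than machine-learning jargon.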

Professor Ian Craddock commented: “Maintaining a very close link to research funders and understanding what they’re doing in terms of horizon scanning – the policies, priorities and strategies – is a consultative process. Funders don’t have a crystal ball; they talk to their community, and so I think one thing is to actually engage more with them to shape their strategies.”

“I think the answer is very simple and very complicated at the same time,” said Pahini Pandya. “Design needs to be a consultative approach with a range of stakeholders, and it shouldn’t just focus on one specific set. Through that understanding, investigators are then in a position to identify gaps and specific areas to tackle. Consultative processes are often very good at identifying short-term problems – for example, the key problems we’ll face as the healthcare service recovers from the pandemic. That’s fine, but there definitely needs to be a better thought process and consultation put in place for long-term planning.

“When planning long term, it needs to be treated as an iterative process rather than a one-off, so the idea should technically be that you’re planning for 10-15 years down the line. There needs to be basic infrastructure and practices in place which allow you to collect the right sort of data, evaluate and assess the situation and see if those long-term goals are still relevant when you’re closer to that point in time.”

Wide consultation as a means of development was echoed by Laurence Pearce: “Hearing all the voices and concerns, so we know what’s needed and not just going to the usual suspects for horizon scanning is important.

“One big eye-opener for us during the early stages of development was the level of digital awareness among NHS staff. It’s easy to assume that staff all have smartphones, but they don’t actually. There’s a large proportion of NHS staff who have barely touched anything digital, and we just make the wrong assumptions.

“It’s easy for us in our tech bubble to think everybody is using Uber as a taxi and ordering food on apps. No, they’re not! And, we have to get real. It’s vital to listen more broadly to a wider audience. Take into account the needs of patients and frontline staff, understanding usability and what pathways could actually work.

“Also, we mustn’t allow COVID to blur our view of the long-term. The big healthcare issues are still there: cardiovascular disease, cancer and more; they’ve not gone away and they’re still going to exist until better treatments are available. We might have to find novel and innovative ways to manage them in the new normal, but let’s not get completely distorted by the short-term; we still need long-term preventative solutions,” said Laurence.

Communicating is all well and good but the data still needs to be there to develop the right applications. As Nick Allott explained: “Data is a fundamental impediment. Even if we speak to lots of stakeholders, develop a specific product – even from a horizon scanning perspective, if it’s not data informed it’s worthless. We’re just putting our finger in the air and guessing which direction we need to go in.

“To know which direction to go in we need performance metrics and the underlying data such as clinical records. We need both because there’s two problems to solve: Firstly, is this innovation technically viable? And secondly, can it be applied effectively? If it’s a tick in both boxes then it’s possible to take that technology successfully to launch, and to help identify whether it’s a viable opportunity and worth digging into, we need the metrics data and the underlying health record data.

“This overwhelming need for data is at the heart of the case for whether a new technology is successful,” said Nick Allott.

Dr Metcalfe is involved in active and implanted medical devices and cites regulation as the main obstacle: “Regulation has only become more burdensome and cumbersome over the past couple of years, and it’s going to become more difficult moving forward.

“There’s a real danger of ‘killing the goose that lays the golden eggs’. Currently, we have a regulatory approval system that’s risk-averse rather than risk-aware. We’re already seeing products that don’t use AI – standard medical innovations – never making it to market because the regulatory burden is so extreme, and the cost of going through that regulatory process outweighs any possible commercial benefit at the end of it. And I think as we bring more and more AI into these technologies, that regulatory process is only going to become even more complicated.

“There’s been some improvement recently, but I worry about the long-term sustainability of those changes. A lot of it has been reactive, to help get some new devices and technologies through in response to COVID. How much of that will continue long-term is yet to be seen. There’s also the question of what happens post-Brexit to our regulatory framework: how well we can maintain alignment with our partners in the EU and with the FDA in America, and how that might affect our ability to collaborate on devices with a good level of interoperability across different healthcare systems. I think it’s an opportunity, but it’s also a potential issue for us to be aware of moving forwards,” said Dr Metcalfe.


AI and machine learning have huge potential to save lives. If we take diagnostics alone, there have already been large-scale developments in rapid image recognition, symptom checking and risk stratification. There is more development on the horizon, and public funding bodies and private investors have recognised the opportunity to bring these applications to fruition.

The panellists discussed how demand for, and acceptance of, AI and digital health technologies has grown rapidly during the COVID pandemic, driving growth in the sector. However, there is still a lack of comprehensive information and education around AI, its benefits and its shortcomings, which the panellists agreed should be addressed. Growth in the industry needs to be matched by changes within health and care organisations, ensuring that capacity and digital capabilities enable the adoption and usability of new technologies.

The panellists also discussed the need to balance the speed of development with ensuring AI solutions are robustly developed and tested before they reach the clinic, to avoid launching solutions that do not work or have not been shown to be effective. Companies in this sector are advised to collaborate with clinicians and academics during development, to ensure that products are designed to solve actual clinical problems and can fit with clinical and care practice.

A lack of data to train algorithms and develop software was seen as one of the biggest issues facing companies operating in the AI space. There is a need to create more open, standardised, disease-specific data sources, and clinicians and researchers should partner to improve data governance and access within the areas of clinical need.

The panellists saw the benefits of moving away from purely reactive clinical problem-solving with AI technologies. There is a need to form multidisciplinary teams – spanning government, health and care providers, academics and technology companies – to identify and develop solutions to the issues the NHS will face in the long term.

To accelerate the development and safe adoption of AI to improve the health and lives of patients, the government launched the AI Lab and new funding streams. SETsquared is supporting companies through its Scale-Up Programme to bid for these new funding opportunities, including the AI in Health and Care Award.

The Scale-Up Programme is designed to provide high-growth SMEs with support around the key challenges of R&D, raising investment and accessing new talent. It includes a direct link to leading expertise at six scale-up partner universities – Bath, Bristol, Cardiff, Exeter, Southampton and Surrey.

Universities play a key role in the innovation ecosystem, and the vast majority of public funding for R&D in the UK is invested in universities. By working with SMEs in industry, academics benefit because their research informs industrial challenges and clinical needs; more importantly, their collaboration ensures that the fruits of R&D benefit our society.

Collaborations between companies and universities are becoming increasingly important as investors seek to fund companies that also win public funding grants to reduce the risks involved, and this is particularly true in healthcare. The resulting academic publications can bring real validation to many new technologies.

As part of the Scale-Up Programme, SETsquared brokers the relationship between SMEs and leading academics. It also funds professional bid writers to prepare tailored bids for Innovate UK funding streams, industrial challenge funds and NIHR funding, including the AI in Health and Care Award.

Find out how the Scale-Up Programme could support your business

