Artificial Intelligence for clinical decision support
Discover Artificial Intelligence-based clinical decision support systems, their benefits and challenges
A clinical decision support system (CDSS) is a health information technology system designed to provide health professionals with clinical decision support (CDS), that is, to assist with clinical decision-making tasks.
CDSS have been around for some years, but many of them have been standalone solutions, poorly integrated into the point of care (POC). A poorly integrated CDSS can generate unnecessary or imprecise alerts, which often leads to alarm fatigue and clinician burnout and can, in turn, threaten patient safety and worsen outcomes.
Over the past years, researchers and developers have worked to overcome these issues, aiming to design solutions that are informative, intuitive, and efficient. Today, thanks to advances in Artificial Intelligence (AI) and Machine Learning (ML) and the growing volume of available clinical data, CDSS are becoming more efficient and precise, enabling more comprehensive and better-optimized patient care.
Let’s analyze the advantages and potential applications that can be realized with the synergy between artificial intelligence and clinical decision support systems.
How Artificial Intelligence is transforming clinical decision support systems
AI-based CDSS can analyze large volumes of data, suggest next steps in treatment, flag potential problems, enhance efficiency, and facilitate the work of healthcare providers. They can leverage information collected from a patient’s health data, for example data recorded in electronic health records (EHR).
This new state of affairs is possible for two main reasons. First, a large amount of clinical data is now available, much of it obtainable in real time from medical devices and records. Second, advances in AI make it possible to analyze this vast amount of data and put it to optimal use.
AI clinical decision support systems can improve the diagnosis, treatment, and prognosis of a particular medical condition, for example by predicting the probability of a medical outcome or the risk of a certain disease from biomedical imaging data. AI CDSS can analyze past, current, and new patient data to identify or suggest safety concerns, errors, or care pathway improvements to the user. Their ability to predict with high relevance and precision opens new ways of optimizing patient care.
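To make the idea of outcome prediction concrete, here is a minimal, purely illustrative sketch of how a CDSS might turn patient features into a risk probability and an alert level. The feature names, weights, and thresholds are hypothetical assumptions for illustration, not clinically validated values; a real system would learn them from data.

```python
import math

# Hypothetical feature weights -- illustrative only, not clinically validated.
WEIGHTS = {"age": 0.04, "heart_rate": 0.03, "lactate": 0.9}
BIAS = -7.0

def risk_probability(patient: dict) -> float:
    """Map patient features to a probability via a logistic function."""
    score = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

def triage(patient: dict, low: float = 0.2, high: float = 0.6) -> str:
    """Bucket the predicted risk so the CDSS can surface an alert level."""
    p = risk_probability(patient)
    if p >= high:
        return "high"
    return "medium" if p >= low else "low"

# Example: an elderly patient with elevated heart rate and lactate.
print(triage({"age": 80, "heart_rate": 120, "lactate": 4.0}))  # -> high
```

In practice the scoring function would be a trained model and the alert thresholds would be tuned with clinicians, but the overall flow - features in, probability out, alert level surfaced at the point of care - is the same.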
Benefits of AI clinical decision support systems
Enhancing diagnostic accuracy
One of the most critical issues affecting medical professionals and healthcare organizations is the risk of misdiagnosing patients. In the healthcare industry, diagnostic errors can be not only costly but also deadly. According to a study, diagnostic errors contribute directly to a higher mortality rate and longer hospital stays - especially among emergency room (ER) patients. Fortunately, AI can help diagnose life-threatening conditions early and inform doctors in time for patients to receive immediate attention. One such example is Sepsis Watch, a deep-learning tool used in the emergency department of the Duke University Health System. It was designed to help doctors identify early signs of sepsis, one of the leading causes of hospital deaths globally. Sepsis Watch can flag patients who are at medium or high risk, as well as those who already meet the criteria for the infection. It has massively reduced sepsis-induced deaths and is part of a federally registered clinical trial expected to share its results in 2021.
AI can drastically improve the accuracy of a patient’s diagnosis through automated symptom analysis. For example, it can detect early signs of leukemia (blast blood cells) in children by analyzing microscopic images of patients’ blood.
Making more informed decisions
Artificial Intelligence CDSS can help medical professionals make better-informed decisions in less time. In an advisory role, an AI-based CDSS can suggest best practices for post-surgical patient discharge, recommend medications and doses, and propose periodic follow-up checks and tests to ensure optimal patient care. It can help patients decide between alternative treatment or rehabilitation options (in the safest and most cost-effective way possible) and can play an important role in reducing medical errors. For example, AI can offer medical professionals a support system to double-check their decisions and/or request a recommendation.
Helping and assisting physicians
AI can also relieve the common problem of stress-related clinical burnout, which occurs when physicians must stay alert to a wide range of potential complications in patient outcomes on a daily basis. By assisting with the identification, prevention, and resolution of health-related problems, AI-based CDSS can help clinicians perform at their best while leveraging the maximum amount of information available to them. For example, AI can help healthcare professionals better understand the day-to-day patterns and needs of the patients they care for.
What are the potential challenges?
Getting users on board
Implementing artificial intelligence is nothing less than a change management project, and in change management, the human dimension is central. In hospitals and medical practices, adopting a new AI solution can mean changing processes and protocols, which is a challenge in itself. Training physicians, nurses, and other concerned staff to use and support AI helps ensure understanding of and support for the solution. Involving them in the design and delivery of AI solutions can also bring considerable benefits, both to the AI solution provider and to the clinical team. For example, including healthcare professionals in the process of building an AI solution can lead to a better-designed AI use case, better performance and quality of the AI algorithms, and better, more complete use of data.
Reaching sufficient performance to build trust
When it comes to AI in health, concerns about unintended and negative consequences can be high. Medical professionals often have high expectations for the performance of AI models, particularly the sensitivity and specificity of the model results. The sensitivity/specificity level and the error clinicians are willing to tolerate can depend on the complexity of the medical task and the risk of making (or not making) a certain decision. This can be particularly challenging, for example, when predicting a medical outcome that doctors themselves have difficulty understanding.
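Sensitivity and specificity are simple to compute once a model’s predictions are compared against known outcomes. The sketch below shows the standard calculation from true/false positives and negatives; the labels are toy data, assumed for illustration.

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity (true positive rate) and specificity
    (true negative rate) from binary labels (1 = disease present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

# Toy example: 8 patients, model misses one case and raises one false alarm.
sens, spec = sensitivity_specificity([1, 1, 1, 1, 0, 0, 0, 0],
                                     [1, 1, 1, 0, 0, 0, 1, 0])
print(sens, spec)  # 0.75 0.75
```

Which trade-off between the two is acceptable is exactly the clinical judgment call described above: a screening tool may prioritize sensitivity, while a tool triggering invasive treatment may demand high specificity.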
Recently, Kantify attended the AI for Health conference, where medical professionals discussed what counts as sufficient performance for an AI model. One oncologist explained: “the error we can tolerate from an AI system depends very much on the medical task we are aiming for. For example, for segmenting an organ in a 3D medical imaging dataset, errors on the mm scale might be OK for most applications. But when it comes to treatment decisions, it is very difficult to define sufficient performance. It’s all about validation, validation...”
Improving data quality to build quality algorithms
As most healthcare professionals know, medical data is not always stored in a standardized way. Data inaccuracies and missing information are all too common, meaning organizations need to take a close look at their data before preparing for AI adoption. The synergy of AI and CDSS is most effective when the machine learning algorithms are fed with sufficient, high-quality data.
Fortunately, AI can also be used to improve data quality, ensuring that all the necessary information is captured, standardized, and trustworthy.
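A first, pre-AI step in any data quality effort is a simple automated audit of each record for missing or implausible values. The sketch below illustrates this idea; the field names and plausibility ranges are assumptions chosen for illustration, and a real audit would cover many more fields and rules.

```python
# Hypothetical plausibility ranges per field -- an assumption for illustration.
RANGES = {"heart_rate": (20, 250), "temperature_c": (30.0, 45.0)}

def audit_record(record: dict) -> list:
    """Return a list of data-quality issues found in one patient record."""
    issues = []
    for field, (lo, hi) in RANGES.items():
        value = record.get(field)
        if value is None:
            issues.append(f"{field}: missing")
        elif not (lo <= value <= hi):
            issues.append(f"{field}: out of range ({value})")
    return issues

# A record with an implausible heart rate and no temperature recorded:
print(audit_record({"heart_rate": 300}))
```

Running such checks before model training surfaces the inaccuracies and gaps mentioned above, so they can be corrected or at least accounted for instead of silently degrading the algorithms.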
The challenge with ‘explainable AI’
Issues such as the explainability of AI decisions can cause skepticism among healthcare professionals. This skepticism may arise when medical professionals do not understand how the data is processed, how the algorithms learn unintuitive relationships, or why an algorithm proposes a given decision.
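For simple model families, explainability can be as direct as showing which features drove a given score. The sketch below ranks the per-feature contributions of a linear risk score; the feature names and weights are hypothetical, and more complex models require more elaborate explanation techniques.

```python
# Hypothetical weights of a linear risk score -- illustrative only.
WEIGHTS = {"age": 0.04, "creatinine": 0.8, "on_anticoagulants": 0.5}

def explain(patient: dict) -> list:
    """Rank features by the magnitude of their contribution to the score,
    so a clinician can see *why* the model flagged this patient."""
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

# For this patient, age contributes most to the risk score:
for feature, contribution in explain({"age": 70, "creatinine": 2.0,
                                      "on_anticoagulants": 1}):
    print(feature, round(contribution, 2))
```

Surfacing this kind of breakdown alongside a recommendation lets clinicians check whether the model’s reasoning matches their own, which directly addresses the skepticism described above.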
Learn more about Explainable AI (EAI), what can make some AI models unexplainable, and how we create EAI algorithms.
Let’s get in touch!
Are you interested in finding out more about how you can use Artificial Intelligence for clinical decision support? We have developed expertise in helping healthcare companies and practitioners use AI.
Let’s get in touch to discuss your challenge in more detail!