Objectives: This section outlines essential actions health IT executives can take to approach digital transformation safely and sustainably and to address the risks that come with AI.
We will focus on avoiding some of the most common risks associated with artificial intelligence in healthcare.
The goal is to give healthcare professionals practical examples of safe and effective AI implementation while challenging them to reflect carefully on what this revolution actually involves. We want to cut through the hype and focus on a mature understanding of how to build this future, and we invite everyone in the health industry to join us in that effort.
Things Not to Do
Hastily launched AI projects are already showing signs of failure, both inside and outside the healthcare industry.
For example, Air Canada’s customer-facing chatbot misled a passenger into believing they were purchasing a discounted ticket. The company then tried to disclaim responsibility, arguing that the AI was a distinct legal entity “responsible for its own actions.” A Canadian tribunal rejected the “it wasn’t us, it was the AI” defence, and the airline is now required to honour the discount it inadvertently offered.
Last year, the National Eating Disorders Association announced that Tessa, a chatbot created to support people seeking guidance on eating disorders, would take over from its highly experienced hotline staff. Days before Tessa was due to launch, however, the bot was found to be giving harmful advice, including suggestions to restrict caloric intake, weigh in frequently, and set strict weight-loss goals. Although Tessa was never put into service, the episode is a stark reminder of the disastrous outcomes that can follow from deploying AI technologies too quickly.
A study recently published in JAMA Network Open documents several examples of biased algorithms that reinforce “racial and ethnic disparities in health and healthcare.” The authors describe multiple harmful, biased algorithms that have been developed and deployed, negatively affecting “the allocation of resources, and access to, or eligibility for, interventions and services.”
What makes this especially worrisome is that many of these biased algorithms remain in use.
Put simply, some AI time bombs have already gone off, and more will follow unless proactive steps are taken to defuse them.
How to Proceed
The following guidelines are intended to help healthcare executives address the risks of AI transformation, build a safe and durable strategy, and get the most out of their investments:
- Prioritize explainability and transparency. Choose AI systems whose algorithms are transparent and whose outputs clinicians can understand and interrogate.
- Implement strong data governance. High-quality, diverse, and well-labelled data is essential (a simple data-quality check is sketched after this list).
- Engage ethics and legal experts early. Understanding and meeting legal and ethical requirements from the outset avoids costly rework and helps safeguard patient safety.
- Foster multidisciplinary collaboration. An interdisciplinary approach helps ensure the AI tools that are built are useful, ethical, and patient-centred.
- Ensure interoperability and scalability. AI tools should integrate smoothly with existing healthcare IT systems and should scale across departments or even institutions.
- Invest in ongoing training and education. Staff who receive continuous education can use AI effectively, interpret its outputs, and make sound decisions.
- Build a patient-centric strategy. Adopt AI approaches that improve patient engagement, personalize care delivery, and avoid unintentionally widening gaps in health outcomes.
- Continuously monitor impact and performance. Put feedback channels in place for staff and patients so AI tools can be refined over time to better serve everyone involved (a subgroup monitoring sketch follows this list).
- Establish clear accountability frameworks. Define who is responsible for decisions made with AI assistance.
- Cultivate an ethical AI culture. Promote responsible AI use, encourage open conversations about AI ethics, and make sure the interests of all stakeholders are considered in decision-making.
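To make the data-governance point concrete, here is a minimal sketch of the kind of basic data-quality audit a team might run before training begins. It assumes a hypothetical patient-level file (patients.csv) with an outcome label column ("readmitted") and a demographic column ("ethnicity"); the file name, column names, and 5% threshold are illustrative, not a prescribed standard.

```python
# A minimal data-quality audit sketch, assuming a hypothetical training
# dataset (patients.csv) with columns "readmitted" and "ethnicity".
import pandas as pd

df = pd.read_csv("patients.csv")

# 1. Completeness: flag columns with a high share of missing values.
missing_share = df.isna().mean()
print("Columns with more than 5% missing values:")
print(missing_share[missing_share > 0.05].sort_values(ascending=False))

# 2. Label quality: check that the outcome label is not severely imbalanced.
print("\nLabel distribution (readmitted):")
print(df["readmitted"].value_counts(normalize=True))

# 3. Representativeness: confirm each demographic group is present in
#    meaningful numbers before the data is used to train a model.
print("\nRecords per ethnicity group:")
print(df["ethnicity"].value_counts())
```

Checks like these do not replace a governance programme, but they give data stewards an early, repeatable signal about gaps and imbalances before a model ever reaches patients.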
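Similarly, continuous monitoring can start with something as simple as reviewing a model's accuracy by patient subgroup, so that a decline affecting one group is not hidden inside an aggregate number. The sketch below assumes predictions and outcomes are logged to a hypothetical file (predictions_log.csv) with columns "y_true", "y_pred", and "sex"; these names and the five-percentage-point alert threshold are assumptions for illustration only.

```python
# A minimal subgroup performance monitoring sketch, assuming a hypothetical
# prediction log (predictions_log.csv) with columns y_true, y_pred, and sex.
import pandas as pd

log = pd.read_csv("predictions_log.csv")

# Overall accuracy for the current review window.
overall_acc = (log["y_true"] == log["y_pred"]).mean()

# Accuracy broken out by subgroup, so a drop affecting only one group
# is visible rather than averaged away.
by_group = (
    log.assign(correct=log["y_true"] == log["y_pred"])
       .groupby("sex")["correct"]
       .mean()
)

print(f"Overall accuracy: {overall_acc:.3f}")
print("Accuracy by subgroup:")
print(by_group)

# Flag any subgroup trailing the overall figure by more than five
# percentage points -- a simple, illustrative escalation trigger.
gaps = overall_acc - by_group
for group, gap in gaps.items():
    if gap > 0.05:
        print(f"ALERT: review performance for subgroup '{group}' (gap: {gap:.3f})")
```

In practice the alert would feed a human review process, which is where the accountability and feedback recommendations above come into play.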
Let these recommendations guide you as you explore AI. Use them to shape the guidelines, policies, practices, and protocols that get artificial intelligence implemented correctly the first time and that handle the inevitable missteps gracefully. Applying them proactively, at the start of the AI revolution, will ultimately save time, money, and lives.