5 ways the world can foster a culture of AI in healthcare
It’s become increasingly clear that artificial intelligence (AI) and other data-led technologies comprise the next frontier of capable, adaptable and affordable healthcare. In fact, these innovative solutions have already begun to link arms with the transformational capabilities of clinicians, steadily nurturing a culture of AI in the healthcare space with incredible potential to improve patient outcomes and rewrite the possibilities of personalised care.
AI technologies excel in previously human-led areas of healthcare, namely the analysis and actioning of big data, diagnostics and repetitive tasks. They’ve proven able to adapt to the data age, thrive in the visual space and even streamline administration in real time. As AI slowly slots in alongside clinicians and positively augments their everyday practice, the industry has grown to realise that, together, man and machine can revolutionise the quality of care.
But it’s still early days. That’s why, to ensure this increasingly capable reality of healthcare continues to flourish, the world must foster a curious, pragmatic and adaptable culture of AI. It will require a multidisciplinary approach that involves all stakeholders – clinicians, researchers, policymakers and the patients themselves.
Alongside IntelliHQ, let’s explore five ways a beneficial culture of AI can be cemented within the healthcare space.
Five ways a culture of AI can be fostered in healthcare
- Education and training
At the crossroads of healthcare, big data and AI technology is a need for ongoing education, training and skill development to ensure their convergence is effectively and ethically realised.
AI’s capabilities are only as strong as those who develop, monitor and deploy them, so healthcare leaders, clinicians, researchers and technicians must be confident in their use and able to pragmatically analyse their efficacy in a variety of situations. This involves understanding AI’s potential benefits, limitations and ethical considerations, and challenging common misconceptions.
Initiatives such as IntelliHQ’s AI in Healthcare Training Program extend these transformational insights to those ready to amplify next-gen capabilities within their field. Through focused training in AI and applied EMR analytics for clinicians, nurses, researchers, scientists and health executives, such programs empower participants at all levels of the healthcare space to lead a revolution in patient outcomes and face healthcare challenges with vigour.
- Secure data access and management
It’s become increasingly clear that quality and accessible healthcare data is the integral currency enabling the development of healthcare AI. These technologies thrive on the information contained within electronic health records, clinical results, patient demographics, patient-generated data, research findings and even administrative billing data.
With healthcare data being so crucial to AI’s expanding capabilities, robust data management policies and systems are essential. These require data privacy, security and quality to be prioritised, and the fine line between data protection and data access to be trodden carefully.
Without steadfast data security measures, the databanks and networks hosting and transmitting sensitive health data are at significant risk of breaches and mishandling. Forgoing quality control measures also leaves room for inaccurate, irrelevant or non-inclusive data to influence the capabilities of data-led technologies.
Capable security infrastructures simultaneously quell the anxieties of both health professionals and patients as they build confidence in data-led technologies.
- Multidisciplinary collaboration
It’s up to all stakeholders – clinicians, researchers, executives, technologists, patients and policymakers – to establish a responsible and effective culture of AI in healthcare. This means collaboration between all parties must be prioritised to ensure insights at every level are captured, considered and implemented as technologies are employed.
Danny Tobey, a partner at DLA Piper – a law firm working to establish best practices for responsible AI deployment in the United States – recently stressed in a Health IT Analytics article that although AI is advancing rapidly, it’s not completely self-sufficient and requires all hands on deck to ensure it functions positively.
“It’s not just regulatory, and it’s not just data science, and it’s not just machine learning,” said Tobey. “The more collaboration we have, the better we can help people figure this out as both regulations and industry standards evolve.
“My experience in this field is that everybody wants to do the right thing. They’re just looking for a little bit of guidance about what that right thing means.”
The Healthcare Datathon is a collaborative, challenge-based event run by IntelliHQ where teams uniting clinicians, data scientists, researchers and technologists construct data-based solutions to real-world health challenges. Initiatives like this bridge the gap between commonly disconnected voices in the healthcare space and beyond, building a multidisciplinary groundwork for AI innovation and culture.
- Appropriate governance and regulation
AI will likely never be a hands-off solution in healthcare. That’s why, to ensure its enduring effectiveness and ability to cater to the widest patient population, ongoing consideration at the decision-making level is essential. This requires steadfast governance frameworks that promise perpetual monitoring, diverse voices in executive processes and unwavering ethical standards.
Specialised regulation and policymaking is the other piece of the overseeing puzzle. Without robust laws and regulation, there is room for the potential of AI solutions to be compromised and the rights and interests of patients to be put at risk. This also needs to be applied to areas of data collection and management to ensure sensitive data remains in the right hands, for the right reasons.
In Australia, health information is covered under the Privacy Act, in the United States under HIPAA and in the UK under the Data Protection Act. However, most of this legislation was not established with AI technologies and their incessant need for data in mind, so there is a great need for new frameworks to be put into place.
Peter Leonard, Professor of Practice within UNSW Sydney Business School, founding partner of the Gilbert+Tobin law firm and Data Synergies consultant, describes the current regulation and governance culture of AI as complicated.
“Regulation is important, even if its outdated policies do sometimes get in the way of good applications of shared health data,” admits Professor Leonard. “But, often the problem is not regulation.
“The issue is the lack of trustworthiness in the key stakeholders proposing the ways in which data is shared. Steadfast governance isn’t always there.”
This exposes the importance of progressive developments within the spaces of data regulation and AI governance in the formation of a beneficial culture of AI in healthcare.
- Ongoing patient engagement
A positive culture of AI in healthcare includes the patients receiving AI-augmented care and those whose healthcare data is used for technology development. Their consent and input are crucial to effective health outcomes and the ethical development of AI tools.
This means patients must be continually engaged in the development and implementation of AI solutions to ensure their potential benefits, limitations and risks are understood and aligned with the needs of even the most diverse populations. Patient voices in decision-making processes are a crucial part of this puzzle.
Ongoing engagement is therefore crucial to increasing the adoption of AI technologies within the healthcare space and democratising access to them.
Fostering a bright future for AI-augmented healthcare
While a beneficial culture of AI in healthcare is steadily blossoming, it’s evidently not a smooth process, nor does its establishment lie in the hands of a select few. Its success requires a multidisciplinary approach: all voices across the healthcare spectrum must be heard, ongoing education and training prioritised, robust data security measures maintained, and specialised developments in data regulation pursued.
Together, we can all foster a culture of AI in healthcare that drives toward a more accessible, affordable and efficient industry and improves patient outcomes across the board. It’s a vision that empowers IntelliHQ’s cutting-edge mission and programs, and is setting the groundwork for a healthier, happier world.