The Ethics of AI

March 19, 2024
An AEP team member in conversation with A/Prof. Mangor Pedersen

WHO recently released AI ethics and governance guidelines for Large Multimodal Models (LMMs). The guidance outlines five broad applications of LMMs in health, including scientific research, and highlights risks such as bias and data security.

What impacts, if any, do the publication's findings and recommendations have for research studies, such as the AEP, that use AI to create prognostic and diagnostic models?

MP: First, I think it is fantastic that WHO has released these guidelines; they will make AI safer for researchers, clinicians, and the public. Because a core part of the AEP is using novel AI tools to improve epilepsy care, we have also published our own ethical guidelines for the use of AI in the AEP (https://osf.io/preprints/osf/kag75). It is worth pointing out that the AEP uses a range of AI tools to improve epilepsy care.

Is this guidance welcomed by the scientific research field? If so, how, and if not, why?

MP: This is very welcome in the scientific research field, as it is intended to safeguard researchers.

What are some of the challenges of using LMMs in research?

MP: Although the AEP is a groundbreaking project aiming to prospectively collect a large clinical dataset, it remains to be seen how recent LMMs can be used in this context. However, recent studies suggest that contemporary generative AI approaches are promising in delivering clinically reliable information (https://www.nature.com/articles/s41586-023-06291-2), so I guess this is a case of watch this space …

What can be done to mitigate introducing bias and prejudice into training data? 

MP: To me, this is one of the greatest challenges facing clinical AI research, and one that we are ideally placed to tackle in the AEP. We aim to reduce data bias and increase fairness in AI models by collecting data from various locations and capturing a cross-section of the great diversity we have in Australia.

If AI is used in diagnosis, is that communicated to the patient? If not, why not? Is it something that should be, or will be, in the future? And how might this be done to maintain trust?

MP: This is undoubtedly the aim as we move towards a model with more digital care. For us as researchers, the immediate goal is to understand how and why AI algorithms work the way they do. This ensures that we do not treat the algorithms as black boxes but as part of a symbiotic process in which human knowledge is paramount to AI advancement.

Are ethics certifications of LMMs and/or transparency about the data used to train models the way ahead for their use in research?

MP: I think these approaches are all part of good science and will lead to progress within the field. The more significant impact on AI safety, however, will probably come from emerging legislative approaches to AI, such as the EU AI Act (https://artificialintelligenceact.eu/the-act/), which is an excellent step towards safe AI.


Ask an Epilepsy Expert: What can I do to help my condition?

We've had such great feedback from our first episode of Ask an Epilepsy Expert that we've dropped episode two early. Here, Imaging Lead Dr David Vaughan answers the question: what can I do to help my condition? You can watch more episodes on our social channels. Read more to find out how.

Ask an Epilepsy Expert: How is epilepsy diagnosed?

We've launched our Ask an Epilepsy Expert series, where our AEP team answers common questions about epilepsy. In episode 1, AEP Imaging Lead Dr David Vaughan answers the question: how is epilepsy diagnosed?

AEP Participant: Kylie Staats shares her story

Hi, my name is Kylie Staats, I’m 37 years old and I have had epilepsy for almost my entire life. I had my first seizure when I was four years old, and at that time, nobody knew why it was happening.