
Our research

Attack-agnostic Adversarial Detection on Medical Data Using Explainable Machine Learning

 

It has been widely shown that machine learning (ML) models are vulnerable to adversarial attacks, in which a malicious user modifies the input to a model (e.g. an image, or electronic health record data) in such a way that the changes are small enough to be imperceptible to the human eye, yet cause the ML model to produce an incorrect output. A model’s susceptibility to these attacks reduces people’s trust in machine learning, and is a significant barrier to wider adoption of ML in sensitive settings such as healthcare.
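
As a concrete illustration of the kind of perturbation involved (not the specific attacks studied in this work), the sketch below applies the well-known Fast Gradient Sign Method (FGSM) to a stand-in PyTorch model: a single small step along the sign of the input gradient can change the model's output while the pixel values barely move. The model, data and epsilon value are placeholders.

```python
# Illustrative sketch only: FGSM perturbs an input by a small step in the
# direction of the sign of the loss gradient, a change that is typically
# imperceptible but can flip a trained model's prediction.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier; in practice this would be a trained medical model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(model, x, y, epsilon=0.01):
    """Return x perturbed by epsilon * sign(d loss / d x)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x = torch.rand(1, 1, 28, 28)   # placeholder "image"
y = torch.tensor([0])          # placeholder label
x_adv = fgsm_attack(model, x, y)

print("max pixel change:", (x_adv - x).abs().max().item())  # bounded by epsilon
print("clean prediction:", model(x).argmax(1).item(),
      "| adversarial prediction:", model(x_adv).argmax(1).item())
```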

We develop two novel, state-of-the-art explainability-based techniques for detecting adversarial attacks, allowing us to create machine learning pipelines that are robust to them. Our detection methods work by inspecting the parts of the input that the ML model deems important when making its decision.
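
The sketch below shows the general idea of explanation-based detection, not the two methods developed in this work: an input-gradient saliency map is summarised into a few statistics per example, and a simple secondary classifier is trained to separate clean inputs from perturbed ones. The model, data and perturbation are placeholders.

```python
# Generic sketch of explanation-based attack detection (not this work's methods):
# summarise each input's saliency map and train a small detector on the summaries.
import torch
import torch.nn as nn
import numpy as np
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))  # stand-in model

def saliency_stats(x):
    """Summary statistics of |d(max logit)/dx| for a batch of inputs."""
    x = x.clone().detach().requires_grad_(True)
    model(x).max(dim=1).values.sum().backward()
    s = x.grad.abs().flatten(1)
    return torch.stack([s.mean(1), s.std(1), s.max(1).values], dim=1).numpy()

# Placeholder data: clean inputs and crudely perturbed stand-ins for attacks.
x_clean = torch.rand(64, 1, 28, 28)
x_adv = (x_clean + 0.05 * torch.randn_like(x_clean)).clamp(0, 1)

X = np.vstack([saliency_stats(x_clean), saliency_stats(x_adv)])
y = np.array([0] * len(x_clean) + [1] * len(x_adv))  # 1 = adversarial
detector = LogisticRegression().fit(X, y)
print("training accuracy of the toy detector:", detector.score(X, y))
```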

We test our adversarial attack detection models on medical datasets, achieving accuracies of 77% on the MIMIC-III electronic health record dataset and 100% on the MIMIC-CXR chest x-ray dataset. We also develop a method that can detect new, unseen adversarial attacks, achieving an accuracy of 87% on the MIMIC-CXR dataset. Integrating these techniques into machine learning pipelines could greatly increase the robustness of ML models to attacks, which is needed for ML to be adopted in healthcare.

 

Improving Current Glycated Hemoglobin Prediction in Adults: Use of Machine Learning Algorithms with Electronic Health Records

 

Predicting the risk of glycated hemoglobin (HbA1c) elevation can help identify patients at risk of developing serious chronic conditions such as diabetes and cardiovascular disease, and early prediction of such elevation can help improve patient outcomes.

This study utilises both conventional machine learning and deep learning models to predict a patient’s current HbA1c level from their electronic health record (EHR). We also apply explainable machine learning techniques to these models, allowing practitioners to verify that the models are utilising the correct EHR features when making their predictions; this helps confirm that a model is not relying on spurious correlations. We find that a multi-layer perceptron is the best-performing model, achieving an accuracy of 74.52%.
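
As an illustrative sketch only (the feature names and data below are hypothetical placeholders, not the study's EHR variables), a pipeline of this kind can be assembled with scikit-learn: a multi-layer perceptron predicts elevated HbA1c from tabular features, and permutation importance then acts as a simple check on which features the model actually relies upon.

```python
# Hedged sketch: MLP on tabular EHR-style features, plus permutation importance
# as an explainability check. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["age", "bmi", "fasting_glucose", "systolic_bp", "ldl"]  # hypothetical
X = rng.normal(size=(1000, len(feature_names)))
# Synthetic "elevated HbA1c" label driven mainly by fasting_glucose and bmi.
y = (X[:, 2] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))

# Large drops in score when a feature is shuffled indicate the model uses it.
imp = permutation_importance(mlp, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:>16s}: {score:.3f}")
```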

 

Agree to Disagree: When Deep Learning Models With Identical Architectures Produce Distinct Explanations

 

This work investigates why a model’s explanations (i.e. the parts of an input it places the most importance on) change significantly when the model is retrained with only small changes to its training procedure (for example, the order in which the training data is presented to the model, or the random seed used for training).

 

We introduce a new metric, explanation consistency, to quantify the (in)consistency of explanations, and evaluate it on the MIMIC-CXR chest x-ray dataset. This matters for healthcare applications: an ML model that can significantly change its behaviour despite very little changing has an adverse effect on the trust people (especially clinicians) place in it, and thus poses a barrier to more widespread use of ML in healthcare.
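
The paper's exact definition of explanation consistency is not reproduced here; the sketch below uses an illustrative proxy instead: the same small architecture is trained twice with different random seeds, input-gradient saliency maps are computed for both models on the same inputs, and their mean per-example correlation is reported. All data and hyperparameters are placeholders.

```python
# Illustrative proxy for explanation consistency (not the paper's metric):
# retrain an identical architecture with different seeds and correlate the
# saliency maps the two models assign to the same inputs.
import torch
import torch.nn as nn

def train_model(seed, X, y, epochs=50):
    """Train an identical small network, varying only the random seed."""
    torch.manual_seed(seed)
    model = nn.Sequential(nn.Flatten(), nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(X), y).backward()
        opt.step()
    return model

def saliency(model, x):
    """Input-gradient saliency maps, one per example."""
    x = x.clone().detach().requires_grad_(True)
    model(x).max(dim=1).values.sum().backward()
    return x.grad.abs().flatten(1)

torch.manual_seed(0)
X = torch.rand(256, 1, 8, 8)              # placeholder "images"
y = (X.flatten(1).mean(1) > 0.5).long()   # placeholder labels

model_a, model_b = train_model(1, X, y), train_model(2, X, y)

# Consistency proxy: mean per-example correlation between the two saliency maps.
sal_a, sal_b = saliency(model_a, X), saliency(model_b, X)
a = sal_a - sal_a.mean(1, keepdim=True)
b = sal_b - sal_b.mean(1, keepdim=True)
corr = (a * b).sum(1) / (a.norm(dim=1) * b.norm(dim=1) + 1e-8)
print("mean explanation consistency (correlation proxy):", corr.mean().item())
```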

 
