Project Details
Layman's description
Recent studies have shown that AI techniques can be unfairly biased against particular populations and lead to machine discrimination. A well-known example is the substantial disparity in gender-classification accuracy between people with darker and lighter skin tones. In healthcare, fairness is crucial, as unequal algorithmic treatment of different demographic subgroups violates bioethical norms. So far, however, the community has paid little attention to assessing and mitigating bias in medical imaging.
Unfair AI techniques in healthcare can lead to severe problems. For example, a model with high overall performance but no fairness control might achieve 95% accuracy for men but only 70% for women, impairing the health and well-being of women with the disease. The worst case is that such a model is deployed without any fairness warning to physicians, so the disadvantaged subgroup is treated unfairly without anyone knowing. For diseases that rely heavily on early-stage detection, a missed diagnosis can cause substantial harm and even threaten patients' lives.
The first challenge is defining what it means to be fair in medical applications. Previous studies have proposed many fairness metrics based on different fairness criteria. However, because imposing multiple fairness criteria over-constrains the solution space, the trade-off between criteria becomes crucial.
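To illustrate how such criteria can conflict, here is a minimal sketch (assuming binary predictions, binary labels, and a binary sensitive attribute; all names are hypothetical, not part of the project) that computes two common group fairness metrics and shows a classifier satisfying one while violating the other:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive prediction rates between the two groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true positive rates (recall) between the two groups."""
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

# Toy example: equal positive-prediction rates, but unequal recall across groups.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))        # 0.0 (demographic parity holds)
print(equal_opportunity_gap(y_true, y_pred, group)) # 0.5 (equal opportunity violated)
```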
The second challenge is how to reason about the source of the bias. Sources of bias are varied, including imbalanced datasets (e.g. the lack of non-white skin tones in skin-cancer imaging datasets), noise in labels or features (e.g. one subgroup's distribution varying much more than the others'), and differing feature distributions (e.g. anatomical differences between chest X-ray scans of women and men). Identifying the cause of the bias can, in turn, guide bias mitigation.
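A simple first diagnostic along these lines is a subgroup audit of the training data. The following sketch (with hypothetical column names; the real metadata schema would come from the imaging dataset) flags imbalance in subgroup size and label prevalence:

```python
import pandas as pd

def subgroup_audit(df, group_col="skin_tone", label_col="malignant"):
    """Summarize sample count and positive-label rate per subgroup."""
    summary = df.groupby(group_col)[label_col].agg(n="size", positive_rate="mean")
    summary["share_of_dataset"] = summary["n"] / summary["n"].sum()
    return summary

# Toy metadata table standing in for real dataset metadata.
df = pd.DataFrame({
    "skin_tone": ["light"] * 90 + ["dark"] * 10,
    "malignant": [1] * 30 + [0] * 60 + [1] * 2 + [0] * 8,
})
print(subgroup_audit(df))
```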
In addition, we would like to pursue bias identification in a probabilistic learning manner. More specifically, we aim to disentangle sensitive attributes in the representation space. As recent papers suggest, sensitive information such as gender and race is encoded in medical images. Our hypothesis is that if the sensitive attributes can be isolated in the representation space, a fair model can be obtained by disentangling them from the task-relevant representation.
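One common way to operationalise this idea (offered here only as an illustration, not as the project's chosen method) is adversarial disentanglement: an auxiliary head tries to predict the sensitive attribute from the learned representation, and a gradient-reversal layer pushes the encoder to remove that information. A minimal PyTorch sketch, with all module and variable names hypothetical:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class FairClassifier(nn.Module):
    def __init__(self, feat_dim=512, n_classes=2, n_groups=2, lam=1.0):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())  # stand-in for an imaging backbone
        self.task_head = nn.Linear(128, n_classes)  # disease prediction
        self.adv_head = nn.Linear(128, n_groups)    # tries to recover the sensitive attribute

    def forward(self, x):
        z = self.encoder(x)
        y_logits = self.task_head(z)
        a_logits = self.adv_head(GradReverse.apply(z, self.lam))
        return y_logits, a_logits

# One hypothetical training step: minimise the task loss while making the
# representation uninformative about the sensitive attribute.
model = FairClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

x = torch.randn(8, 512)        # image features
y = torch.randint(0, 2, (8,))  # disease labels
a = torch.randint(0, 2, (8,))  # sensitive attribute (e.g. sex)

y_logits, a_logits = model(x)
loss = ce(y_logits, y) + ce(a_logits, a)
opt.zero_grad()
loss.backward()
opt.step()
```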
When adopting advanced machine learning methods in medical imaging, fairness is vital, non-negligible, and at the same time challenging. Our vision is to deliver fair and robust AI for physicians that provides clear, transparent fairness analysis and mitigates bias reliably. With this in place, AI techniques could be widely applied in healthcare and improve human welfare.
| Status | Active |
|---|---|
| Effective start/end date | 01/03/2023 → 28/02/2026 |
Keywords
- Bias
- Fairness
- Medical Imaging
- Machine Learning