TY - GEN
T1 - Visual Context-Aware Person Fall Detection
AU - Nagaj, Aleksander
AU - Li, Zenjie
AU - Papadopoulos, Dim P.
AU - Nasrollahi, Kamal
PY - 2025
Y1 - 2025
N2 - As the global population ages, the number of fall-related incidents is on the rise. Effective fall detection systems, specifically in the healthcare sector, are crucial to mitigate the risks associated with such events. This study evaluates the impact of visual context, including background objects, on the accuracy of fall detection classifiers. We present a segmentation pipeline to semi-automatically separate individuals and objects in images. Well-established models such as ResNet-18, EfficientNetV2-S, and Swin-Small are trained and evaluated. During training, pixel-based transformations are applied to segmented objects, and the models are then evaluated on raw images without segmentation. Our findings highlight the significant influence of visual context on fall detection. Applying Gaussian blur to the image background notably improves the performance and generalization capabilities of all models. Background objects such as beds, chairs, or wheelchairs can challenge fall detection systems, leading to false positive alarms. However, we demonstrate that object-specific contextual transformations during training effectively mitigate this challenge. Further analysis using saliency maps supports our observation that visual context is crucial in classification tasks. We provide both a dataset processing API and a segmentation pipeline, available at https://github.com/A-NGJ/image-segmentation-cli.
KW - computer vision
KW - data augmentation
KW - fall detection
KW - visual context
DO - 10.1007/978-981-97-7419-7_19
M3 - Article in proceedings
SN - 978-981-97-7418-0
VL - 411
T3 - Smart Innovation, Systems and Technologies
SP - 215
EP - 226
BT - Proceedings of the 16th International KES Conference on Intelligent Decision Technologies, KES-IDT 2024
PB - Springer
T2 - 16th International KES Conference on Intelligent Decision Technologies
Y2 - 19 June 2024 through 21 June 2024
ER -