Enhancing fall risk assessment: instrumenting vision with deep learning during walks
Journal of NeuroEngineering and Rehabilitation volume 21, Article number: 106 (2024)
Abstract
Background
Falls are common in a range of clinical cohorts, where routine risk assessment often comprises subjective visual observation only. Typically, observational assessment involves evaluation of an individual’s gait during scripted walking protocols within a lab to identify deficits that potentially increase fall risk, but subtle deficits may not be (readily) observable. Therefore, objective approaches (e.g., inertial measurement units, IMUs) are useful for quantifying high resolution gait characteristics, enabling more informed fall risk assessment by capturing subtle deficits. However, IMU-based gait instrumentation alone is limited, failing to consider participant behaviour and details within the environment (e.g., obstacles). Video-based eye-tracking glasses may provide additional insight to fall risk, clarifying how people traverse environments based on head and eye movements. Recording head and eye movements can provide insights into how the allocation of visual attention to environmental stimuli influences successful navigation around obstacles. Yet, manual review of video data to evaluate head and eye movements is time-consuming and subjective. An automated approach is needed but none currently exists. This paper proposes a deep learning-based object detection algorithm (VARFA) to instrument vision and video data during walks, complementing instrumented gait.
Method
The approach automatically labels video data captured in a gait lab to assess visual attention and details of the environment. The proposed algorithm uses a YOLOv8 model trained on a novel lab-based dataset.
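Training a YOLOv8 detector of the kind described here requires annotations in the standard YOLO txt label format, where each line stores a class ID followed by a normalized centre/size box. As a minimal sketch (the image size matches the 1920 × 1080 video described later; class IDs and the example line are illustrative assumptions, not taken from the paper's dataset), a label line can be converted to pixel coordinates like so:

```python
def parse_yolo_label(line: str, img_w: int, img_h: int):
    """Convert one YOLO-format label line to pixel coordinates.

    YOLO txt annotations store: class_id x_center y_center width height,
    all normalized to [0, 1] relative to the image dimensions.
    Returns (class_id, (x1, y1, x2, y2)) in corner (xyxy) form.
    """
    parts = line.split()
    cls = int(parts[0])
    cx, cy, w, h = (float(v) for v in parts[1:5])
    # Scale the normalized centre/size to pixels, then convert to corners.
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return cls, (x1, y1, x2, y2)


# Hypothetical label: class 0 (e.g., an obstacle), centred in a 1920x1080 frame.
cls, box = parse_yolo_label("0 0.5 0.5 0.2 0.4", 1920, 1080)
```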
Results
VARFA achieved excellent evaluation metrics (0.93 mAP50), identifying and localizing static objects (e.g., obstacles in the walking path) with an average accuracy of 93%. Similarly, a U-Net-based track/path segmentation model achieved good metrics (IoU 0.82), suggesting that the predicted tracks (i.e., walking paths) align closely with the actual track, with an overlap of 82%. Notably, both models achieved these metrics while processing at real-time speeds, demonstrating efficiency and effectiveness for pragmatic applications.
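Both reported metrics rest on intersection-over-union (IoU): mAP50 is mean average precision counting a detection as correct when its box overlaps ground truth with IoU ≥ 0.5, and the segmentation IoU of 0.82 is the per-pixel analogue over predicted and true track masks. A minimal sketch of box IoU (a generic definition, not the paper's evaluation code):

```python
def box_iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes, each (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus the double-counted intersection.
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Under an mAP50-style criterion, a predicted box would be accepted whenever `box_iou(pred, truth) >= 0.5`.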
Conclusion
The instrumented approach improves the efficiency and accuracy of fall risk assessment by evaluating the allocation of visual attention (i.e., information about when and where a person is attending) during navigation, broadening instrumentation in this area. VARFA could better inform fall risk assessment by providing behavioural and contextual data to complement instrumented gait data (e.g., from IMUs) during gait tasks. That may have notable (e.g., personalized) rehabilitation implications across a wide range of clinical cohorts where poor gait and increased fall risk are common.
Introduction
Falls can lead to loss of independence and even death [1, 2]. Identifying those at risk of falling is an important clinical task often conducted in, e.g., people with visual impairment [3] and older adults [4,5,6]. Fall risk assessment is also notably important and pragmatically useful in people with a movement disorder, such as Parkinson’s disease (PD) [7,8,9] or stroke [10,11,12,13], due to observable functional deficits in motor control. Additionally, assessing fall risk is equally important during pregnancy [14], where a third of pregnant women may fall [15]. In fact, there is a significant increase in falls from pre-pregnancy to the 3rd trimester that cannot be fully explained by morphological [16] or biomechanical [17] changes.
A comprehensive fall risk assessment is multifactorial and time-consuming, including (but not limited to) medication review, cognitive screening, detailing a history of falls, and evaluating gait, balance [18], and environmental hazards or hazardous activities, which in some cases have been documented as responsible for 50% of falls [19]. For timeliness, many settings assess gait alone to evaluate intrinsic fall risk [20]. That is convenient, as gait is a good marker of global health [21] and fundamental to many activities of daily life [1]. Consequently, a gait assessment with positive outcomes from subjective evaluation (by an assessor) provides insight into the patient’s independence and ability to ambulate with minimal fall risk. As described, an assessment is typically conducted by manual observation alone, where an assessor examines a person’s gait during a scripted task (i.e., walking protocol). Often, a protocol may include navigating (walking around or over) obstacles [22,23,24,25], deliberately challenging the person by increasing gait demands [26]. Yet, that also places extra burden on the assessor, challenging them to carefully observe the person’s gait during a more complex task. Instrumentation is needed to optimize assessment protocols while providing high-resolution, objective fall risk data.
The integration of digital technology as an objective standard in fall risk assessment is not yet routine. While digital tools may provide clinicians with high-resolution data to potentially aid in determining a patient’s fall risk, work remains to understand their full utility and to develop appropriate methods. In recent years, technology has matured to include a wide selection of digital tools. Of course, 3D motion capture systems are perceived as the gold/reference standard for human movement analysis, but they lack practicality and are rarely deployable in habitual settings. Moreover, reflective markers require time-consuming application. In contrast, wearable devices (i.e., inertial measurement units, IMUs) are quickly attached and provide clinically relevant gait characteristics at millisecond resolution in any environment [27,28,29,30].
An objective gait assessment to inform fall risk is usually conducted within a laboratory with a single IMU on the lower back [30]. Typically, participants are then asked to undertake a protocol representing walking challenges in daily life [31, 32], like obstacle crossing [25]. However, a key IMU limitation is that it provides inertial gait data only, with no insight into navigation behaviour or the allocation of visual attention to environmental/extrinsic details. Accordingly, there is no absolute clarity on how gait and fall risk are influenced by other intrinsic (e.g., visual attention) or extrinsic (e.g., obstacles) factors. For example, a comprehensive instrumented assessment would clarify how those being assessed allocate visual attention along their walking path for safe navigation, while also determining the role of attention when, e.g., peripheral obstacles cause a distraction. Supplementing IMU data with video data from video-based eye-tracking wearable glasses could better define intrinsic and extrinsic factors, providing a contemporary and pragmatic approach to fall risk assessment with easily attached wearables. (Indeed, eye tracking offers an avenue for exploring neurocognitive changes as a reason for increased falls incidence.)
Commercial eye-tracking glasses capture high-quality video data, often in the standardized MP4 format at a resolution of 1920 × 1080. The video contains a superimposed crosshair displaying the eye location. Accordingly, videos contain data on the general environment and the specific objects the wearer is looking at, but processing eye-tracker videos is extremely time consuming and needs to be automated to allow clinical application [33]. Including eye tracking (to identify objects/obstacles of interest) with IMU data during a range of simulated free-living tasks (e.g., obstacle crossing) would provide a novel approach for simultaneously instrumenting visual attention during gait within a fall risk assessment. To accomplish this, a suitable methodology to instrument visual attention from video data must first be established, as none currently exists. Accordingly, a novel vision-aided fall risk assessment (VARFA) is proposed in this study.
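Once objects in a frame are detected, the crosshair's pixel coordinates can be associated with the detected object (if any) the wearer is fixating. A minimal per-frame sketch, assuming the gaze point and detection boxes share the same frame coordinates (the object labels below are hypothetical examples, not the paper's class set):

```python
def object_at_gaze(gaze, detections):
    """Return the label of the first detected object whose bounding box
    contains the gaze point, or None if the gaze falls on no object.

    gaze: (x, y) pixel coordinates of the eye-tracker crosshair.
    detections: list of (label, (x1, y1, x2, y2)) boxes from the detector.
    """
    gx, gy = gaze
    for label, (x1, y1, x2, y2) in detections:
        if x1 <= gx <= x2 and y1 <= gy <= y2:
            return label
    return None


# Hypothetical frame: an obstacle box and the walking track region.
frame_detections = [
    ("obstacle", (800, 400, 1100, 700)),
    ("track", (0, 600, 1920, 1080)),
]
```

Aggregating such per-frame associations over a walk would yield when and for how long attention was allocated to, e.g., obstacles versus the walking path.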