Changing stroke rehab and research worldwide now. Time is Brain! Trillions and trillions of neurons DIE each day because there are NO effective hyperacute therapies besides tPA (only 12% effective). I have 523 posts on hyperacute therapy, enough for researchers to spend decades proving them out. These are my personal ideas and blog on stroke rehabilitation and stroke research. Do not attempt any of these without checking with your medical provider. Unless you join me in agitating, when you need these therapies they won't be there.

What this blog is for:

My blog is not meant to help survivors recover; it is meant to have the 10 million yearly stroke survivors light fires underneath their doctors, stroke hospitals, and stroke researchers to get stroke solved: 100% recovery. The stroke medical world is completely failing at that goal; they don't even have it as a goal. Shortly after getting out of the hospital with NO information on the process or protocols of stroke rehabilitation and recovery, I started searching the internet and found that no other survivor had received useful information either. This blog is an attempt to cover all the stroke rehabilitation information that should be readily available to survivors, so they can talk with informed knowledge to their medical staff. It lays out what needs to be done to get stroke survivors closer to 100% recovery. It's quite disgusting that this information is not available from every stroke association and doctors' group.

Saturday, September 28, 2024

Enhancing stroke rehabilitation with whole-hand haptic rendering: development and clinical usability evaluation of a novel upper-limb rehabilitation device

 Useless, doesn't tell us one goddamn thing about actually getting recovered!

Enhancing stroke rehabilitation with whole-hand haptic rendering: development and clinical usability evaluation of a novel upper-limb rehabilitation device

Abstract

Introduction

There is currently a lack of easy-to-use and effective robotic devices for upper-limb rehabilitation after stroke. Importantly, most current systems lack the provision of somatosensory information that is congruent with the virtual training task. This paper introduces a novel haptic robotic system designed for upper-limb rehabilitation, focusing on enhancing sensorimotor rehabilitation through comprehensive haptic rendering.

Methods

We developed a novel haptic rehabilitation device with a unique combination of degrees of freedom that allows the virtual training of functional reach and grasp tasks, where we use a physics engine-based haptic rendering method to render whole-hand interactions between the patients’ hands and virtual tangible objects. To evaluate the feasibility of our system, we performed a clinical mixed-method usability study with seven patients and seven therapists working in neurorehabilitation. We employed standardized questionnaires to gather quantitative data and performed semi-structured interviews with all participants to gain qualitative insights into the perceived usability and usefulness of our technological solution.

Results

The device demonstrated ease of use and adaptability to various hand sizes without extensive setup. Therapists and patients reported high satisfaction levels, with the system facilitating engaging and meaningful rehabilitation exercises. Participants provided notably positive feedback, particularly emphasizing the system’s available degrees of freedom and its haptic rendering capabilities. Therapists expressed confidence in the transferability of sensorimotor skills learned with our system to activities of daily living, although further investigation is needed to confirm this. 

(But you don't tell us any FACTUAL RECOVERY RESULTS! Bad research, people need to be fired!)

Conclusion

The novel haptic robotic system effectively supports upper-limb rehabilitation post-stroke, offering high-fidelity haptic feedback and engaging training tasks. Its clinical usability, combined with positive feedback from both therapists and patients, underscores its potential to enhance robotic neurorehabilitation.

Introduction

Stroke is a major contributor to long-term disability and mortality worldwide, with over twelve million incidents annually [1]. Amongst the consequences of surviving a stroke, the loss of upper-limb functions such as reaching, grasping, and fine object manipulation—critical to carrying out activities of daily living (ADL)—is particularly prevalent, affecting around 55–85% of stroke survivors [2,3,4,5,6,7]. To promote recovery, patients should actively engage [8] in highly intense [8,9,10,11] task-specific exercises [10], which require the constant one-to-one involvement of clinical personnel. This leads to a heavy burden on society and healthcare institutions. The expected increase in stroke incidents due to an aging society [12, 13] and the foreseen global health staff shortage [14] call for a profound transformation in current clinical practice towards more sustainable, accessible, efficient, and effective stroke rehabilitation.

Robotic devices have become increasingly popular in neurorehabilitation research as they hold the potential to support therapists in providing highly intense, effective, and motivating rehabilitation training with minimal physical effort from the therapists [15,16,17]. To enable the use of robotic devices to their full potential, they are often combined with gamified virtual training tasks [18], which have been reported to enhance patient enjoyment [19]. Moreover, the provision of real-time performance feedback allows therapists to adapt the training to the patient’s individual needs [20]. When compared to dose-matched conventional therapy, some studies point to improved outcomes with robotic interventions [17, 21, 22], while a large body of research indicates that robotic therapy is currently non-inferior to conventional interventions [15, 23,24,25,26].

Vast efforts are currently being put into further enhancing the benefits of rehabilitation robotics. An emerging line of research focuses not only on motor functions but also on providing meaningful somatosensory information during robot-assisted training [16, 27,28,29,30]. This information, acquired through our skin and muscle mechanoreceptors during physical interactions with tangible (real-life or virtual) objects, is crucial for successful movement execution, as highlighted by a myriad of studies. For example, Pettypiece et al. reported that even in a predominantly visual task, somatosensory information greatly influences motor performance [31], and Scott et al. found that the sensorimotor system can be described as an optimal feedback controller whose state estimation depends on the availability of sensory information [32]. The importance of somatosensory information is also exemplified by clinical syndromes such as tactile apraxia [33] or sensory ataxia [34]. Although some stroke survivors with somatosensory impairments (observed in more than half of stroke survivors [35]) succeed in relearning fine object manipulation to a certain degree—e.g., by compensating with vision [36]—somatosensory impairments such as limited tactile sensibility remain a considerable cause of inconvenience in daily living for affected patients [37,38,39]. Indeed, it has been suggested that the recovery of somatosensory impairments might even be imperative for the full recovery of a paretic upper limb [40].

Therefore, it is recommended that meaningful sensory information be incorporated into robotic training, for example, by physically representing the interaction forces with tangible objects from virtual environments—a process known as haptic rendering [16, 28, 41, 42]. A few robotic devices have been developed for haptic rendering in upper-limb neurorehabilitation. They are engineered to accurately generate forces from virtual training tasks and to provide low resistance to self-initiated movements in the absence of such forces. This can be achieved through mechanical characteristics such as low inertia, high backdrivability, low backlash, and/or appropriate control methods. Their mechanical structure can vary, influencing their application in haptic rendering and their clinical practicability [16].
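
For readers unfamiliar with the term, the core idea of haptic rendering can be shown with a minimal spring-damper (impedance) sketch: push back when the end-effector penetrates a virtual surface, command nothing otherwise. All names, gains, and geometry below are illustrative assumptions, not the authors' implementation or any particular device's controller.

```python
import numpy as np

# Minimal spring-damper (impedance) sketch of haptic rendering against a
# flat virtual surface. Gains and geometry are illustrative only, not
# taken from the paper or any particular device.
STIFFNESS = 800.0  # N/m, virtual surface stiffness
DAMPING = 5.0      # N*s/m, dissipates contact energy

def surface_force(position, velocity, surface_z=0.0):
    """Force commanded to the device end-effector when it penetrates a
    horizontal virtual surface at z = surface_z."""
    penetration = surface_z - position[2]
    if penetration <= 0.0:
        # No contact: command zero force so self-initiated movements feel
        # free (the low-resistance / backdrivable case described above).
        return np.zeros(3)
    # Penalty force along +z, damped on the normal velocity component.
    normal_force = STIFFNESS * penetration - DAMPING * velocity[2]
    return np.array([0.0, 0.0, max(normal_force, 0.0)])

# Example: end-effector 2 mm below the surface and still moving down.
print(surface_force(np.array([0.1, 0.0, -0.002]), np.array([0.0, 0.0, -0.05])))
```

The zero-force branch is exactly what the low-inertia, backdrivable hardware described above is meant to preserve: free movement should feel free.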

Among grounded arm exoskeletons, i.e., solutions where the robot joints are coincident with the anatomical arm and shoulder joints, several devices support haptic rendering during the manipulation of virtual objects, e.g., ALEx-RS [43], ARMin [44], and ANYexo [45, 46]. However, while grounded exoskeletons generally provide high levels of support and can control individual joints of the patient, they come at the cost of high complexity and a rather complicated setup due to the necessary joint alignments. In contrast, grounded end-effector devices interact only at an end-point of the patient, e.g., the hand or wrist. They tend to offer more flexibility and an easier setup, together with inherently low mechanical inertia, as they usually incorporate the actuators in the base. Popular designs include planar five-bar manipulanda with parallel kinematics (e.g., MIT-Manus [47], InMotion® ARM/HAND [48], WristBot [49], or the device of Qian et al. [50]), devices that combine two linear axes to achieve planar movements (e.g., H-Man [51] and ArmMotus M2 Pro [52]), and serial kinematic designs that offer arm movements in three-dimensional space (e.g., Burt® [53] and ArmMotus EMU [54]).

Such devices, which predominantly target the proximal joints, often either use a simple cylindrical handle or lack hand-related interfaces to interact with virtual objects. Yet, when grasping and manipulating objects, the distal body parts, such as the wrist and hand, also play a crucial role in gathering somatosensory information. Thus, efforts have been made to develop devices that provide haptic rendering at more distal joints [55], such as grounded devices (e.g., HEXORR-II [56], ReHapticKnob [57], HandyBot [58], FINGER [59], the portable hand trainer from Van Damme et al. [60, 61], or the OpenWrist [62]) and more wearable devices like gloves or hand and wrist exoskeletons (e.g., [63,64,65]). The latter usually generate forces only within the (distal) attachment points, so their applicability for haptic rendering can be limited by the lack of force generation at the proximal joints.

When reviewing the literature, we found a relatively limited number of robotic solutions that provide haptic rendering in both the arm and the hand. This is a limitation since functional reach and grasp movements are typically composed of coordinated movements of proximal and distal joints [66,67,68]. We found only a few examples in the literature that allow virtual training of reaching and grasping while also providing haptic rendering that targets hand functions. For example, Buongiorno et al. combined the ALEx arm exoskeleton [69] with wrist and hand exoskeletons, resulting in a 12 DoF device, and presented haptic rendering of a virtual stick in a box, although reaching out and grasping the stick was not reported [70]. A reach and grasp task with haptic cues was presented by Loureiro et al. with the nine DoF Gentle/G system [71]. In the reachMAN2, three DoF were combined to train simple reach and grasp movements [72]. Finally, the CyberTeam® system consists of a five DoF glove and a six DoF robotic base [73]. However, these devices still tend to be highly complex and/or bulky, hampering their potential clinical applicability. To the best of our knowledge, only the Gentle/G has been tested with patients [74].

Moreover, the vast majority of haptic devices in neurorehabilitation follow classical haptic rendering approaches, where either virtual walls [75] or one or multiple predetermined interaction points in the form of spherical colliders (i.e., virtual representations used to compute collisions) are used to compute interaction forces [43, 46]. These methods can only simulate interactions at predetermined locations on objects and thus fail to represent arbitrary hand-object interactions. This might result in visuo-haptic incongruencies during virtual haptic reach and grasp exercises, which might hinder motor performance [76] and increase cognitive load [77, 78]. Importantly, more realistic whole-hand interactions, where the haptic rendering reflects the entire visual hand representation, may lead to more natural interactions with virtual objects [79], enhancing the ecological validity of the training and potentially facilitating the transfer of the acquired skills to ADL [80].
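
The difference between the two approaches can be sketched in a few lines. This is a toy penalty model with invented positions, radii, and stiffness; the paper's actual method uses a physics engine rather than this simplification. A single predetermined collider can only react at one spot, while summing contacts over colliders spread across the palm and fingertips approximates whole-hand interaction.

```python
import numpy as np

# Toy contrast between a single predetermined interaction point and a
# whole-hand approximation built from several colliders. All positions,
# radii, and gains are hypothetical; the authors' actual method relies
# on a physics engine rather than this simple penalty model.
STIFFNESS = 500.0  # N/m penalty stiffness per contact

def sphere_contact_force(point, point_radius, obj_center, obj_radius):
    """Penalty force on one spherical collider against a virtual sphere."""
    offset = point - obj_center
    dist = np.linalg.norm(offset)
    penetration = (point_radius + obj_radius) - dist
    if penetration <= 0.0 or dist == 0.0:
        return np.zeros(3)
    return STIFFNESS * penetration * (offset / dist)  # push outward

virtual_object = (np.array([0.0, 0.0, 0.0]), 0.04)  # center, radius (m)

# Classical approach: one predetermined contact point (e.g., the palm).
palm = (np.array([0.0, 0.0, 0.045]), 0.010)
single_point = sphere_contact_force(*palm, *virtual_object)

# Whole-hand approximation: sum contributions from colliders spread over
# the palm and fingertips, so arbitrary hand-object contacts produce
# forces that stay congruent with the visual hand representation.
hand_colliders = [
    palm,
    (np.array([0.030, 0.010, 0.020]), 0.008),   # thumb tip
    (np.array([0.030, -0.010, 0.020]), 0.008),  # index tip
]
whole_hand = sum(
    (sphere_contact_force(p, r, *virtual_object) for p, r in hand_colliders),
    np.zeros(3),
)
print(single_point, whole_hand)
```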

Hence, we have identified a clear need for a device for the training of coordinated proximal and distal movements alongside high-fidelity whole-hand haptic rendering that provides meaningful haptic information during reaching and grasping. Importantly, the device should be simple to use yet sophisticated enough to meet the aforementioned requirements. To maximize clinical usefulness and acceptance, the involvement of different stakeholders (e.g., therapists, patients, engineers, and physicians) is essential for the development of rehabilitation devices [81,82,83]. We thus followed a clinically driven and human-centered approach with four phases: i) understand the context of use; ii) specify the clinically driven requirements; iii) develop the solution; and iv) evaluate against the requirements. Results from steps i) and ii) were reported in [55, 84], and intermediate development steps from iii) in [55, 85, 86]. Here, we present the final robotic system and the final clinical usability evaluation with therapists and stroke patients. We followed a mixed-method approach for the usability evaluations, combining quantitative methods, i.e., standardized questionnaires with scales and performance-related measures, with qualitative methods, i.e., semi-structured interviews that balance structured queries and personalized dialogues, to obtain a holistic assessment [83, 87].
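
The excerpt does not name which standardized questionnaires were used. Purely as an assumption for illustration, here is how the System Usability Scale (SUS), a common choice in device usability studies, converts ten 1-5 responses into the familiar 0-100 score.

```python
def sus_score(responses):
    """Convert ten SUS item responses (1-5 Likert) into a 0-100 score.
    Odd-numbered items are positively worded, even-numbered negatively.
    Illustrative only: the study's actual questionnaires are not named
    in this excerpt."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even -> odd item
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

# Example: a fairly positive response set lands above the commonly cited
# ~68-point average-usability benchmark.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```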

The rest of the paper is structured as follows: First, we present the development of the novel haptic upper-limb rehabilitation system, addressing the limitations of current robotic rehabilitation devices. This includes a whole-hand haptic rendering approach, two virtual rehabilitation exercises, and a graphical user interface (GUI) for therapists. Then, we present the experimental procedure and the results of a mixed-method clinical usability study with 14 participants (seven therapists and seven sub-acute stroke patients) and discuss our findings.

More at link.
