Changing stroke rehab and research worldwide now. Time is Brain! Trillions of neurons DIE each day because there are NO effective hyperacute therapies besides tPA (only 12% effective). I have 523 posts on hyperacute therapy, enough for researchers to spend decades proving them out. These are my personal ideas and blog on stroke rehabilitation and stroke research. Do not attempt any of these without checking with your medical provider. Unless you join me in agitating, when you need these therapies they won't be there.

What this blog is for:

My blog is not to help survivors recover; it is to have the 10 million yearly stroke survivors light fires underneath their doctors, stroke hospitals and stroke researchers to get stroke solved. 100% recovery. The stroke medical world is completely failing at that goal; they don't even have it as a goal. Shortly after getting out of the hospital, having received NO information on the process or protocols of stroke rehabilitation and recovery, I started searching on the internet and found that no other survivor received useful information either. This is an attempt to cover all the stroke rehabilitation information that should be readily available to survivors so they can talk with informed knowledge to their medical staff. It lays out what needs to be done to get stroke survivors closer to 100% recovery. It's quite disgusting that this information is not available from every stroke association and doctors' group.

Tuesday, May 8, 2012

Interactive Sonification of Human Movements for Stroke Rehabilitation

This is a commercial site but our therapists could use this stuff to create research projects. And then maybe we could get objective evaluations of our movement problems.

Interactive Sonification of Human Movements for Stroke Rehabilitation


Introduction

After a stroke, patients often suffer from dramatically constrained mobility and partial paralysis of the limbs. Movements of the upper extremities, such as grasping, are especially often hampered. Grasping movements of stroke patients are typically characterized by reduced accuracy and speed and by high variability. In common rehabilitative approaches, improvement is achieved through massed practice of a given motor task supervised by a therapist.

In this project, complex sonification of arm movements is used to generate movement-based auditory feedback. Sonification in general is the display of non-speech information through audio signals [1]. Here, we use sonification to acoustically enhance the patient’s perception and to support motor processes, aiming at a smooth, target-oriented movement without dysfunctional motions. Stroke rehabilitation focuses mainly on grasping tasks because regaining them is central to the patient's self-reliance.
The project’s research goals are 1) evaluating primary movement patterns for motor learning, 2) designing an informative acoustic feedback, 3) testing its efficiency, achieved consciously as well as unconsciously, when performing grasping movements, and 4) designing and evaluating a low-power, low-latency hardware platform for inertial sensor data processing and sound synthesis.
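To make goal 2) concrete, here is a minimal sketch of what a movement-to-sound mapping could look like. Everything in it is my own assumption for illustration (the pitch range, the 2 m/s speed ceiling, which axis drives which parameter); the project's actual mapping is not published here.

```python
import math

def sonify_sample(pos, prev_pos, dt):
    """Map one 3-D wrist position sample to sound parameters.

    pos, prev_pos: (x, y, z) in metres (x lateral, z vertical);
    dt: seconds between the two samples.
    Returns (pitch_hz, pan, gain) -- an illustrative mapping only.
    """
    x, y, z = pos
    # Pitch: vertical position mapped linearly onto 220-880 Hz.
    pitch = 220.0 + 660.0 * max(0.0, min(1.0, z))
    # Pan: lateral position clipped to [-1, 1] (left/right speaker).
    pan = max(-1.0, min(1.0, x))
    # Gain: movement speed, so a faster reach sounds louder
    # (2 m/s is arbitrarily treated as full scale).
    speed = math.dist(pos, prev_pos) / dt
    gain = min(1.0, speed / 2.0)
    return pitch, pan, gain

# One sample of a reaching movement, 10 ms after the previous one.
print(sonify_sample((0.2, 0.4, 0.5), (0.19, 0.39, 0.48), 0.01))
```

Feeding such parameters to a real-time synthesizer at the sensor rate is what turns the arm into the "instrument" described below.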

Sonification based motor learning research

Sonification based on human movement data has proven effective with healthy subjects in enhancing motor perception as well as motor control and motor learning: perception of gross motor movements became more precise, motor control became more accurate, variance was reduced [2], and motor learning was accelerated [3]. These benefits arise from recruiting additional audiomotor as well as multisensory functions within the brain [4], apparently independent of whether attention is drawn to the sonification. Here we optimize 3D movement sonification for basic movement patterns of the upper limbs to support relearning of everyday movement patterns in stroke rehabilitation. Using the movement data directly for 3D sound modulation achieves a high degree of structural equivalence to visual and proprioceptive percepts. Whereas the music-based movement sonification of the co-project group aims at conscious processing and music-like sound, the movement-data-defined sonification described here can be effective without drawing attention to it. For stroke rehabilitation, a synthesized sonification merging musical and movement-data-defined components could be used.

Acoustic Feedback Design

To be helpful in stroke rehabilitation, the acoustic feedback should be easily understandable and pleasant, meaning there should be a clear and easily learnable correspondence between limb positions and a three-dimensional sound mapping. This is achieved via a sonification that has essentially the same features as a real acoustic instrument. These features are connected to the three-dimensional movement trajectories, so the instrument can be played in real time simply by moving a limb. Furthermore, the sound design of this sonification should be agreeable; only then can positive emotional valence and joy while playing the instrument be elicited.

In previous studies we demonstrated that music-supported training (MST) in stroke patients facilitates rehabilitation of fine motor skills of the paretic hand [5, 6, 7]. About six weeks after the stroke, patients were asked to replay simple tunes on a keyboard or on an electronic set of drum pads and were trained to systematically increase their sensorimotor skills during a three-week training period. The recovery of motor skills was more pronounced than in a control group undergoing constraint-induced therapy (CIT). Extending this work, we now aim to reproduce these effects in patients using real-time feedback of reaching movements of the arm in space.

Hardware Architecture Design

A key research area is the design of a portable hardware demonstrator for use in stroke rehabilitation. Technical requirements are low-latency processing, low power consumption and a small form factor, to enable portable usage in home-based rehabilitation sessions. The overall hardware and software latency budget was set to 30 ms, since higher values produce a delay that is clearly noticeable and confusing for users [8].
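A quick back-of-the-envelope check shows why 30 ms is tight: sensor sampling, processing and the audio output buffer all eat into it. The stage figures below (100 Hz sensor, 5 ms processing, 256-frame buffer at 44.1 kHz) are my own assumed examples, not the project's measured numbers.

```python
SAMPLE_RATE = 44_100        # Hz, assumed audio output rate
BUDGET_MS = 30.0            # end-to-end constraint from the text

def audio_buffer_latency_ms(frames):
    """Latency contributed by one output buffer of `frames` samples."""
    return 1000.0 * frames / SAMPLE_RATE

def total_latency_ms(sensor_ms, processing_ms, buffer_frames):
    """Sum the illustrative pipeline stages."""
    return sensor_ms + processing_ms + audio_buffer_latency_ms(buffer_frames)

# ~10 ms worst-case sensor interval at 100 Hz, 5 ms processing,
# 256-frame output buffer (~5.8 ms).
t = total_latency_ms(10.0, 5.0, 256)
print(round(t, 1), "ms, within budget:", t <= BUDGET_MS)
```

Doubling the audio buffer to 1024 frames alone adds about 17 ms more, which is why small buffers and fast processing matter on the embedded platform.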
To allow the capturing of complex arm movements, such as drinking or tooth brushing, a design goal of both the PC-based demonstrator and the hardware demonstrator is to support up to ten MTx sensors and an Xbus Master device. For basic grasping tasks the inertial sensor setup can be chosen according to [9], with one sensor on the upper arm and one attached to the forearm. In [9], sonification acoustically displays the wrist position captured by the inertial sensors, which provides information about grasping movement performance.
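With one orientation sensor per arm segment, the wrist position follows from simple forward kinematics. As a rough illustration (a 2-D simplification with assumed segment lengths; the real system works from full 3-D orientations of the MTx sensors):

```python
import math

# Assumed segment lengths, metres.
UPPER_ARM = 0.30  # shoulder to elbow
FOREARM = 0.25    # elbow to wrist

def wrist_position(upper_angle, forearm_angle):
    """2-D forward kinematics: angles in radians from horizontal,
    shoulder at the origin; returns (x, z) of the wrist."""
    # Elbow position from the upper-arm orientation...
    ex = UPPER_ARM * math.cos(upper_angle)
    ez = UPPER_ARM * math.sin(upper_angle)
    # ...then the wrist from the forearm orientation.
    return ex + FOREARM * math.cos(forearm_angle), ez + FOREARM * math.sin(forearm_angle)

# Arm hanging straight down: both segments point along -z.
print(wrist_position(-math.pi / 2, -math.pi / 2))
```

The wrist trajectory recovered this way is what the sonification then displays acoustically.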
Using movement sonification in sports or rehabilitation requires completely mobile and portable sonification systems. Depending on the selected mapping parameters, sample-based sound synthesis becomes quite computationally intensive, so powerful processors are required. PC-based hardware platforms [10], [11] demand a high power budget and are limited to stationary usage.
This project group explores hardware platforms for real-time sonification of complex movements captured by inertial sensors. The system has to provide the flexibility to scale the number of MTx sensors to the motion-capturing demands. The generated stereo audio signal is played back through speakers. The basic system structure is shown in Figure 1: sensor data acquisition, sonification parameter calculation and audio synthesis are all handled on the hardware platform with the attached motion capture system [12].
Figure 1: Hardware demonstrator structure
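The three stages in Figure 1 amount to a tight acquire-map-synthesize loop. The sketch below stubs out the sensor and synthesizer ends (the real system reads MTx sensors via an Xbus Master and renders audio on the embedded platform); the stub behavior and numbers are my assumptions for illustration.

```python
def read_sensor_frame(t):
    """Stub for sensor acquisition: pretend the wrist rises over time."""
    return (0.0, 0.0, 0.1 * t)  # (x, y, z) in metres

def to_sound_params(pos):
    """Stub sonification mapping: pitch rises with wrist height."""
    return {"pitch_hz": 220.0 + 660.0 * pos[2]}

def synthesize(params, frames=256):
    """Stub synthesizer: return a silent buffer tagged with its pitch."""
    return [0.0] * frames, params["pitch_hz"]

pitches = []
for t in range(5):  # five consecutive sensor frames
    pos = read_sensor_frame(t)
    buf, pitch = synthesize(to_sound_params(pos))
    pitches.append(pitch)
print(pitches)  # pitch climbs as the simulated wrist rises
```

Keeping this whole loop on one low-power platform, inside the 30 ms budget, is exactly the hardware challenge the section describes.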
