Friday, August 11, 2017
How Machine Learning Is Helping Neuroscientists Crack Our Neural Code
Whenever you move your hand, finger, or eyeball, the brain sends the relevant muscles a signal containing the information that makes the movement possible. This information is encoded in a special way that allows it to be transmitted through neurons and then acted on correctly by the muscles.
Exactly how this code works is something of a mystery.
Neuroscientists have long been able to record these signals as they
travel through neurons. But understanding them is much harder. Various
algorithms exist that can decode some of these signals, but their
performance is patchy. So a better way of decoding neural signals is
desperately needed.
Today, Joshua Glaser at Northwestern University in Chicago and a
few pals say they have developed just such a technique using the
new-fangled technology of machine learning. They say their decoder
significantly outperforms existing approaches. Indeed, it is so much
better that the team says it should become the standard method for
analyzing neural signals in future.
First some background. Information travels along nerve fibers in the form of voltage spikes, or action potentials. Neuroscientists believe that the pattern of spikes encodes data about external stimuli, such as touch, sight, and sound, and that the brain encodes information about muscle movement in the same way.
Understanding this code is an important goal. It allows
neuroscientists to better understand the information that is sent to and
processed by the brain. It also helps explain how the brain controls muscles.
Engineers would dearly love to have better brain-machine interfaces
for controlling wheelchairs, prosthetic limbs, and video games.
“Decoding is a critical tool for understanding how neural signals relate
to the outside world,” say Glaser and co.
Their method is straightforward. They’ve trained macaque monkeys to
move a screen cursor toward a target using a kind of computer mouse. In
each test, the cursor and target appear on a screen at random
locations, and the monkey has to move the cursor horizontally and
vertically to reach the goal.
Having trained the animals, Glaser and co recorded the activity of
dozens of neurons in the parts of their brains that control movement:
the primary motor cortex, the dorsal premotor cortex, and the primary
somatosensory cortex. The recordings lasted for around 20 minutes,
which is about the attention span of the monkeys … and the
experimenters.
The job of a decoding algorithm is to determine the horizontal and
vertical distance that the monkey moves the cursor in each test, using
only the neural data.
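To make that setup concrete, here is a rough Python sketch of how this kind of data is usually organized for decoding. The array names, sizes, and random numbers are mine, for illustration only; they are not the researchers' actual data or code.

    import numpy as np

    # Hypothetical sizes: 50 neurons recorded over 12,000 time bins, with the
    # cursor's x and y velocity known in each bin. The random numbers below are
    # placeholders for real recordings.
    n_bins, n_neurons, n_lags = 12000, 50, 10
    spikes = np.random.poisson(1.0, size=(n_bins, n_neurons))   # binned spike counts
    velocity = np.random.randn(n_bins, 2)                       # x and y cursor velocity

    # The decoder's input for each time bin is the spike counts from that bin
    # plus a short window of history (here, the previous 9 bins).
    X = np.stack([spikes[i - n_lags + 1:i + 1].ravel()
                  for i in range(n_lags - 1, n_bins)])
    y = velocity[n_lags - 1:]
    print(X.shape, y.shape)   # (11991, 500) and (11991, 2)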
Glaser and co’s goal was to find out which kind of decoding
algorithm does this best. So they fed the data into a variety of
conventional algorithms and several new machine-learning algorithms.
The conventional algorithms rely on a statistical technique known as linear regression. This involves fitting a curve to the data by minimizing the error between the fit and the observations. It is widely used in neural decoding, in techniques such as Kalman filters and Wiener cascades.
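In this setting, a Wiener-filter decoder amounts to multiple linear regression from the windowed spike counts to the movement variable. A minimal sketch with scikit-learn, assuming the X and y arrays from the earlier sketch (again, illustrative only; a proper evaluation on held-out data comes further down, where the train/test split is described):

    from sklearn.linear_model import LinearRegression

    # One weight per output (x or y velocity) per lagged spike count of every
    # neuron; fitting chooses the weights that minimize the squared error.
    wiener = LinearRegression().fit(X, y)
    print(wiener.coef_.shape)   # (2, 500)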
Glaser and co compared these techniques to a variety of
machine-learning approaches based on neural networks. These included a
Long Short Term Memory Network, a recurrent neural network, and a
feedforward neural network.
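A recurrent decoder sees the same information, but as a sequence of time steps rather than one long flattened vector. Here is a minimal sketch of an LSTM decoder in Keras; the layer size and dropout value are illustrative choices of mine, not the paper's exact settings, and the arrays come from the first sketch above.

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Input, LSTM, Dense

    # Reuse the 'spikes', 'velocity', n_bins, n_neurons, n_lags placeholders,
    # but keep each lag bin as its own time step instead of flattening.
    X_seq = np.stack([spikes[i - n_lags + 1:i + 1]
                      for i in range(n_lags - 1, n_bins)])   # (samples, n_lags, n_neurons)

    lstm = Sequential([
        Input(shape=(n_lags, n_neurons)),
        LSTM(100, dropout=0.25),
        Dense(2),   # predicted x and y velocity
    ])
    lstm.compile(optimizer="adam", loss="mse")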
All these learn from annotated data sets, and the bigger the data
set, the better they learn. This generally involves dividing the data
set in two—80 percent being used to train the algorithm and the other 20
percent used to test it.
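With time-series data like this, the split should respect time order: train on the first 80 percent of the recording and test on the last 20 percent, rather than shuffling bins. A sketch continuing the toy example above (the scores are meaningless on random placeholder data; the point is the pipeline):

    from sklearn.metrics import r2_score

    split = int(0.8 * len(X))                       # first 80% of bins for training
    X_tr, X_te = X[:split], X[split:]               # flattened inputs for the Wiener filter
    Xs_tr, Xs_te = X_seq[:split], X_seq[split:]     # sequence inputs for the LSTM
    y_tr, y_te = y[:split], y[split:]

    wiener.fit(X_tr, y_tr)
    lstm.fit(Xs_tr, y_tr, epochs=10, batch_size=128, verbose=0)

    r2_wiener = r2_score(y_te, wiener.predict(X_te))
    r2_lstm = r2_score(y_te, lstm.predict(Xs_te))
    print(r2_wiener, r2_lstm)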
The results are convincing. Glaser and co say that the
machine-learning techniques significantly outperformed the conventional
analyses. “For instance, for all of the three brain areas, a Long Short
Term Memory Network decoder explained over 40% of the unexplained
variance from a Wiener filter,” they say. “These results suggest that
modern machine-learning techniques should become the standard
methodology for neural decoding.”
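One way to read that 40 percent figure: take the variance the Wiener filter fails to explain, and ask how much of it the LSTM recovers. With made-up R-squared values (not the paper's numbers):

    # Illustrative values only; the paper reports its own scores per brain area.
    r2_wiener, r2_lstm = 0.55, 0.75
    fraction_recovered = (r2_lstm - r2_wiener) / (1.0 - r2_wiener)
    print(round(fraction_recovered, 2))   # 0.44, i.e. ~44% of the previously unexplained variance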
In some ways, it’s not surprising that machine-learning techniques
do so much better. Neural networks were originally inspired by the
architecture of the brain, so the fact that they can better model how it
works is expected.
The downside of neural nets is that they generally need large
amounts of training data. But Glaser and co deliberately reduced the
amount of training data they fed to the algorithms and found the neural
nets still outperformed the conventional techniques.
That’s probably because the team used smaller networks than are
conventionally used for techniques such as face recognition. “Our
networks have on the order of 100 thousand parameters, while common
networks for image classification can have on the order of 100 million
parameters,” they say.
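For scale, a back-of-the-envelope parameter count for the sketch LSTM above (using my layer sizes, not necessarily theirs):

    # An LSTM layer has 4 * units * (units + inputs + 1) weights; the small
    # output layer adds (units + 1) * 2 more.
    n_units, n_inputs = 100, 50
    total = 4 * n_units * (n_units + n_inputs + 1) + (n_units + 1) * 2
    print(total)   # 60602: order 10^4 to 10^5, versus ~10^8 for large image classifiers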
The work opens the way for others to build on this analysis. Glaser
and co have made their code available for the community so that
existing neural data sets can be reanalyzed in the same way.
There is plenty to do. Perhaps the most significant task will be in
finding a way to carry out the neural decoding in real time. All of
Glaser and co’s work was done offline after the recordings had been
made. But it would clearly be useful to be able to learn on the fly and
predict movement as it happens.
This is a powerful approach that has significant potential. In
other areas of science where machine learning has been applied for the
first time, researchers have stumbled across much low-hanging fruit. It
would be a surprise if the same weren’t true of neural decoding.
Ref: arxiv.org/abs/1708.00909: Machine Learning for Neural Decoding