Changing stroke rehab and research worldwide now. Time is Brain! Trillions and trillions of neurons DIE each day because there are NO effective hyperacute therapies besides tPA (only 12% effective). I have 523 posts on hyperacute therapy, enough for researchers to spend decades proving them out. These are my personal ideas and blog on stroke rehabilitation and stroke research. Do not attempt any of these without checking with your medical provider. Unless you join me in agitating, when you need these therapies they won't be there.

What this blog is for:

My blog is not meant to help survivors recover; it is meant to get the 10 million yearly stroke survivors to light fires underneath their doctors, stroke hospitals and stroke researchers to get stroke solved. 100% recovery. The stroke medical world is completely failing at that goal; they don't even have it as a goal. Shortly after getting out of the hospital with NO information on the process or protocols of stroke rehabilitation and recovery, I started searching the internet and found that no other survivor received useful information either. This is an attempt to cover all the stroke rehabilitation information that should be readily available to survivors so they can talk with informed knowledge to their medical staff. It lays out what needs to be done to get stroke survivors closer to 100% recovery. It's quite disgusting that this information is not available from every stroke association and doctors' group.

Monday, July 31, 2017

Breakthrough software teaches computer characters to walk, run, even play soccer

With this we could start from the incorrect muscle movements you currently have and gradually show them being corrected. Action observation at its finest. But this will never occur. Video at link.
https://news.ubc.ca/2017/07/31/breakthrough-software-teaches-computer-characters-to-walk-run-even-play-soccer/
Computer characters and eventually robots could learn complex motor skills like walking and running through trial and error, thanks to a milestone algorithm developed by a University of British Columbia researcher.
“We’re creating physically-simulated humans that learn to move with skill and agility through their surroundings,” said Michiel van de Panne, a UBC computer science professor who is presenting this research today at SIGGRAPH 2017, the world’s largest computer graphics and interactive techniques conference. “We’re teaching computer characters to learn to respond to their environment without having to hand-code the required strategies, such as how to maintain balance or plan a path through moving obstacles. Instead, these behaviors can be learned.”
The work, called DeepLoco, offers an alternative way to animate human movement in games and film instead of the current method, which uses actors and motion-capture cameras or animators. DeepLoco allows characters to automatically move in ways that are both realistic and attentive to their surroundings and goals. In the future, two- or four-legged robots could learn to navigate through their environment without needing to hand-code the appropriate rules.
Using his algorithm, simulated characters have learned to walk along a narrow path without falling off, to avoid running into people or other moving obstacles, and even to dribble a soccer ball towards a goal.
The method makes advanced use of deep reinforcement learning, a type of machine learning algorithm in which experience is gained through trial and error and is informed by rewards. Over time, the system progressively identifies better actions to take in given situations.
“It’s like learning a new sport,” said van de Panne. “Until you try it, you don’t know what you need to pay attention to. If you’re learning to snowboard, you may not know that you need to distribute your weight in a particular way between your toes and heels. These are strategies that are best learned, as they are very difficult to code or design in any other way.”
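To make the trial-and-error idea concrete, here is a minimal sketch of a reward-driven learning loop in the REINFORCE policy-gradient style, applied to a toy balancing problem. This is not the DeepLoco algorithm or its code; the toy environment, the linear softmax policy, and every number in it are my own illustrative assumptions.

```python
# Minimal sketch of reward-driven trial-and-error learning (REINFORCE policy
# gradient) on a toy 1-D balancing task. NOT the DeepLoco code -- the
# environment, policy, and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def step(state, action):
    """Toy balance task: push left (0) or right (1) to keep position near 0."""
    pos, vel = state
    force = -1.0 if action == 0 else 1.0
    vel = vel + 0.1 * force + rng.normal(0.0, 0.02)   # noisy dynamics
    pos = pos + 0.1 * vel
    done = abs(pos) > 1.0                             # "fell over"
    reward = 1.0 if not done else 0.0                 # reward for staying up
    return np.array([pos, vel]), reward, done

def policy_probs(weights, state):
    """Linear softmax policy over the two actions."""
    logits = state @ weights                          # shape (2,)
    logits -= logits.max()                            # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum()

weights = np.zeros((2, 2))                            # 2 state features x 2 actions
lr, gamma = 0.05, 0.99

for episode in range(500):
    state = np.array([rng.uniform(-0.1, 0.1), 0.0])
    states, actions, rewards = [], [], []
    for _ in range(200):                              # trial ...
        probs = policy_probs(weights, state)
        action = rng.choice(2, p=probs)
        next_state, reward, done = step(state, action)
        states.append(state); actions.append(action); rewards.append(reward)
        state = next_state
        if done:                                      # ... and error
            break

    # Compute discounted returns; the reward signal informs the update.
    G, returns = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        returns.append(G)
    returns.reverse()

    # Shift the policy toward actions that were followed by higher returns.
    for s, a, ret in zip(states, actions, returns):
        probs = policy_probs(weights, s)
        grad_log = np.outer(s, -probs)                # d log pi / d weights
        grad_log[:, a] += s
        weights += lr * ret * grad_log

    if episode % 100 == 0:
        print(f"episode {episode}: balanced for {len(rewards)} steps")
```

The point is only the shape of the loop: act, observe rewards, and nudge the policy toward actions that led to higher returns. Scaled up with deep networks and a full physics simulation, that is the basic mechanism the article describes for teaching characters to balance, walk and dribble.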
The motion of humans and animals is governed not just by physics but also by control. While humans learn motor control through trial and error, van de Panne says it's hard to tell how much the algorithm mimics the human learning process. After all, the computer program still learns much more slowly than a human. He began working on this type of motor learning problem when he had children; they are now 17 and 20.
“I distinctly remember wondering who will learn agile walking and running skills first: my son, daughter or the algorithm?” he said. “My son and daughter beat me by a long shot.”
