"There is no question that knowing PCT could help organize their efforts and make the development of such prostheses much more coherent and efficient."
Now, how can we get them to understand this? IOW, all, how can we get PCT out of our silo?
From: Richard Marken
Sent: Thursday, December 26, 2019 10:22 AM
To: csgnet email@example.com
Subject: Re: New prosthetic limbs go beyond the functional to allow people to 'feel' again
[Rick Marken 2019-12-26_09:16:08]
[Bruce Nevin 20191225.16:30ET]
Ted Cloak Dec 24, 2019, 6:13 PM –
Are these guys applying PCT without knowing it? This was reprinted in today's Albuquerque Journal.
Richard Marken Dec 24, 2019, 6:48 PM –
BN: The input side has nice subtlety, and they
talk about "intention signals", but they have it "read the user's neural activity and generate a command to control a prosthetic limb," and they're putting a neural net into that limb to perform functions of the spinal cord and motor areas of the brain:
RM: It’s hard to tell exactly what they are doing based on the description in the article. But I do think they are “applying PCT without knowing it” simply because they are providing sensory feedback based on motor output. Based on this
quote from the article:
"Participants can feel over 100 different locations and types of sensation coming from their missing
hand," Clark said. "They can also feel the location and the contraction force of their muscles — even
when muscles aren't there. That's because we can send electrical signals up the sensory fibers from the
muscles, so the brain interprets them as real."
It looks like they are providing sensory feedback regarding the actual consequences of the motor “commands” (how the prosthetic arm has moved) but also simulated sensory feedback about the muscle contraction forces that, if the muscles
were actually there, would produce that result.
RM: I think they are unknowingly applying the basic principle of PCT which is that behavior is the control of perception. Think about it from the point of view of the wearer of the prosthetic. Before the development of this device all the
wearer of an arm prosthesis could do was try to control the visual position of the arm. The prosthesis just generated movement in proportion to neural output signals; there were no proprioceptive sensory consequences of that output, which are the main perceptions
we control when we move our arms.
RM: This new device allows the person wearing the prosthesis to control proprioceptive perceptions of their arm, which is the usual way it's done. Of course, they have to do some engineering to make sure the variations in these perceptual
signals are in the "correct" polarity relative to the effect of the neural output signals on movement of the prosthesis. This is necessary so that the feedback loop is negative; so when I intend for the perception to increase it increases, and vice versa.
I believe this kind of "tuning" of the loop is what is going on with the development of the neural net architectures. But apparently they have the device tuned pretty well because, as they say, in a tracking task "the prosthetic finger was able to follow the
cursor in a smooth, continuous path", something that would be impossible if all that could be controlled was the visual perception of finger position – which was all they could control when the prosthesis provided no proprioceptive sensory feedback.
RM: There is no question that knowing PCT could help organize their efforts and make the development of such prostheses much more coherent and efficient. But they are doing pretty well without it.
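To make the PCT idea in the paragraphs above concrete, here is a minimal sketch (mine, not anything from the article) of a negative feedback loop in which a system varies its output so that its *perception* matches a reference value. The parameter names and numbers are illustrative assumptions; the point is that the loop converges only when the feedback polarity is "correct", as RM notes about the engineers' tuning.

```python
# Minimal PCT-style control loop: behavior as the control of perception.
# The "environment" function converts output into a perceptual consequence.
# If feedback_sign were -1 (wrong polarity), error would grow and the
# loop would be positive, i.e., it would diverge instead of tracking.

def run_loop(reference, gain=2.0, slowing=0.1, feedback_sign=+1.0, steps=200):
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        perception = feedback_sign * output   # environment: output -> perception
        error = reference - perception        # compare perception to reference
        output += slowing * gain * error      # leaky integration of the error
    return perception

print(run_loop(10.0))  # settles very close to the reference, 10.0
```

Note that the system never computes the output needed to reach the target; it simply keeps reducing the discrepancy between what it perceives and what it intends to perceive, which is the sense in which "behavior is the control of perception."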
Just as our native limbs are trained to perform various actions — basic ones such as grasping or walking, to precision ones such as neurosurgery or ballet — prosthetics, too,
have to be calibrated for specific uses. Engineers at the lab of Joseph Francis, associate professor of biomedical engineering at the University of Houston, have been working on a BCI that can autonomously update using implicit feedback from the user.
It's pretty clear that updating the BCI is done by training a neural net. The 2018 London/Göttingen hand "interpreted the wearer's intentions and sent commands to the artificial
limb… it used machine learning-based techniques
[=neural nets] to interpret neural signals from the brain to improve the performance of prosthetic hands.” This is why they say the same prosthetic would serve an amputee and someone with incapacitated
motor neural functions.
If they make more of that subtlety available on the output side and let the wearer’s brain do the reorganizing they’d be closer.
And however fine and diverse and subtle the inputs and outputs, it's still a very pixelated channel:
"For all its merits, the LUKE Arm contains only 19 sensors, and generates six different types of movements. Similarly, the neural interface we use can capture or convey hundreds of different electrical signals from or to the brain," Clark
said. "That's a lot, but both are impoverished compared with the thousands of motor and sensory channels of the human body, or its natural functional capabilities."
I’ve attached a PDF that those of us without gigantic screens can read.
Richard S. Marken
"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away.â€?
--Antoine de Saint-Exupery