- ok so i saw a realworld person with handmade feet attached to amputations right below the knees, and i got a rockawesome idea for a puzzle: free homebrew robotic prosthetic limbs that respond to user intention accurately. i think i've solved big chunks of the puzzle in outline form. the core unique part of interest is responding to user intention.

1. assume a shared community pretrained model. for the initial users, this is a blank model.
2. prompt the user to try to move their limb while indicating what they are trying to do.
3. size the model for this data only and train it maximally (a sketch of this calibration step follows this note).
4. now the user can use the poorly-trained limb so long as conditions precisely match the training conditions. get them using it, and collect more data as they do.
5. augment the data: take the times the prosthetic is functioning effectively and use its own outputs as labels, but mask the inputs it was using to produce them. increase the size of the model to match the amount of newly available data, and continue training online (also sketched below).

the point of 5 is to make the model highly robust in new conditions. similar to dropout during pretraining, it forces the model to learn many different indicators of what is correct. doing the training online hopefully helps the user learn ways to control the prosthesis at the same time as it is learning to infer what they want. [some further consideration could engage here to find other ways to do this]

for further effectiveness, since so much training is going on, a metamodel can be trained to make this adaptation more rapid (last sketch below).

not all of the idea reached concept writing, but what did reach it seems reasonable to start with. the open concerns are what the channels of information are and what the actuation is that responds to the user. conveniently, these can be pretty much anything.
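here's a minimal sketch of the calibration in steps 1-4, assuming EMG-style sensor windows in and joint-velocity targets out. every name, shape, and sizing rule here is a hypothetical placeholder, not a settled design:

```python
import torch
import torch.nn as nn

N_CHANNELS, WINDOW, N_ACTUATORS = 8, 200, 3  # assumed sensor/actuator layout

def make_model(n_examples: int) -> nn.Module:
    # step 3: size the model to the data we actually have, so the first
    # model is tiny and can grow as recordings accumulate
    hidden = max(8, min(256, n_examples // 4))
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(N_CHANNELS * WINDOW, hidden),
        nn.ReLU(),
        nn.Linear(hidden, N_ACTUATORS),
    )

def calibrate(windows: torch.Tensor, intents: torch.Tensor, epochs: int = 500):
    # steps 2-3: supervised fit on the prompted-movement recordings;
    # "train it maximally" is read here as simply training to convergence
    # on this small dataset
    model = make_model(len(windows))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(windows), intents)
        loss.backward()
        opt.step()
    return model
```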
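and a minimal sketch of the step-5 augmentation. it assumes some outside signal has already decided the prosthetic is "functioning effectively" for this window; the random channel mask is a dropout-style stand-in for masking the inputs the model actually relied on (a saliency-based mask would be closer to the text):

```python
import torch

def self_label_step(model, window, opt, mask_frac: float = 0.5):
    # use the model's own working output as the label...
    with torch.no_grad():
        pseudo_label = model(window)
    # ...then hide a random subset of input channels so the model must
    # reproduce that output from other indicators of the same intent
    keep = (torch.rand(window.shape[1]) > mask_frac).float()
    masked = window * keep.view(1, -1, 1)
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(masked), pseudo_label)
    loss.backward()
    opt.step()
    return loss.item()
```

run online, one call per effective-use window, this is the part that hopefully lets the user and the model adapt to each other at the same time.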
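the metamodel part is the least worked out; one concrete reading is first-order meta-learning (Reptile-style), training the shared community weights so any one user's calibration converges in a few steps. note this sketch assumes a fixed shared architecture, unlike the grow-with-data sizing above, and the per-user dataset list is hypothetical:

```python
import copy
import torch

def reptile_round(meta_model, user_datasets, inner_steps=10,
                  inner_lr=1e-3, meta_lr=0.1):
    for windows, intents in user_datasets:
        # adapt a copy of the shared model to one user's calibration data
        fast = copy.deepcopy(meta_model)
        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            opt.zero_grad()
            torch.nn.functional.mse_loss(fast(windows), intents).backward()
            opt.step()
        # nudge the shared weights toward the user-adapted weights
        with torch.no_grad():
            for p_meta, p_fast in zip(meta_model.parameters(),
                                      fast.parameters()):
                p_meta.add_(meta_lr * (p_fast - p_meta))
```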