Bobby jacks into the digital mindwaves. It's breakfast.
They used to have nicotine, alcohol, caffeine, and sugar for breakfast. That was decades ago.
They flipped their tongue a little to experience the doses of nicotine, alcohol, caffeine, and sugar they expected and resumed their work designing a new form of interplanetary travel based on propulsion systems that redesigned themselves for nearby resources.
They had a potted plant in their window with a magical spirit. The magical spirit of their potted plant watched them as they sat at their breakfast table, muscles twitching, their eyes and fingers darting around the empty space in front of them as they engaged an augmented reality.
The magical spirit of the potted plant didn't just watch them. She was also watching the birds wake up and celebrate the morning outdoors, the magical spirits of the wind and rain race their fronts of pressure and humidity, a number of communities of insects, bacteria, and fungi she was in symbiosis with, and also Bobby's pet dog, Hypercookie, who was playing with a robot in the other room.
[I ran into a cognitive issue here and wrote the issue down:
Bobby was debugging an error in his model for environments needing new engine designs. It kept failing in
I'm trying to write that the model is for choosing the general patterns for newly created designs based on patterns of newly encountered environments. My brain is refusing to simplify this expression for me. In the story, they have already finished the nn-based hyperculture that businesses are planning. The bug was going to be with a subset of these pattern groups, which Bobby had already broken into a roughly finite set. It was going to pick sets that were clearly wrong, often avoiding a subset of the correct sets in a weighted way, and it was going to be due to a training error that stemmed from the architecture of the model combined with randomness in the data. He would go in and resolve the error by understanding the cause and manually redirecting the impact of the training data (roughly the kind of reweighting sketched below). This was going to be all simple and concise and only partly described.
][why doesn't he use an ai that understands the issue?][the space of meaning here is outside the distribution of commonly available nn models for him, and it would fix it wrong.]
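[not part of the story, just a toy sketch of the kind of manual fix I mean above: a made-up multi-label classifier over a small set of pattern groups that keeps under-selecting some correct groups, nudged back by hand-chosen per-group weights on the training loss. every name and number here is invented for illustration; PyTorch only because it's convenient]

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

n_sets = 8                                      # the roughly finite set of pattern groups
x = torch.randn(512, 16)                        # made-up environment features
y = (torch.rand(512, n_sets) < 0.2).float()     # made-up "which groups apply" labels

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, n_sets))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# hand-chosen weights: after understanding why groups 0 and 3 keep getting
# skipped, upweight the positive examples for just those groups
neglected = [0, 3]
pos_weight = torch.ones(n_sets)
pos_weight[neglected] = 4.0                     # redirect the training signal by hand
loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

[the point is just that the fix lives in understanding which groups are being avoided and why, then redirecting the training signal by hand rather than trusting an off-distribution model to repair it]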
[what are your thoughts on changing how nn models are designed so that they can handle issues like this?]
[it feels like something doesn't want that to happen, that it fears somebody would use the information, even accidentally, to do something horrible. but. I'll try to think on it a little. ...]
[no it's ok you don't have to]
[well I think about this a lot I just have to remember it. Ummm .... nn models seem to function really slowly. I might consider training a hierarchy of them around choosing architectures and mapping learning rates to individual weights, as a research focus. Assuming that then made fast training, I might make a second one around designing architectural information for new environments. I guess I'm expecting this to eventually learn it needs logical reasons and write code / design architectures around investigating the rules that cause things and using those directly. I think there's a lot of existing research around that, maybe I'd even make it the core and just use nn models to handle fluffy stuff quickly? fluffy being things where guessing based on lots of things less complex than reality is helpful, I guess. Just ideas. No real experience here. I dunno if that is helpful or ridiculous, don't really know how to tell]
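[again not story text: the smallest runnable illustration I can think of for "mapping learning rates to individual weights" — a sign-based per-weight rule in the spirit of Rprop / delta-bar-delta, not the hierarchy-of-models idea itself. toy data, made-up numbers, PyTorch for convenience]

```python
import torch

torch.manual_seed(0)

# toy regression problem standing in for the "fluffy stuff"
x = torch.randn(256, 4)
true_w = torch.tensor([1.0, -2.0, 0.5, 3.0])
y = x @ true_w + 0.1 * torch.randn(256)

w = torch.zeros(4, requires_grad=True)
lr = torch.full((4,), 1e-2)      # one learning rate per individual weight
prev_grad = torch.zeros(4)

for step in range(500):
    loss = ((x @ w - y) ** 2).mean()
    loss.backward()
    with torch.no_grad():
        g = w.grad.clone()
        # grow a weight's rate while its gradient sign stays stable,
        # shrink it when the sign flips (a sign of overshooting)
        stable = (g * prev_grad) > 0
        lr = torch.where(stable, lr * 1.2, lr * 0.5).clamp(1e-5, 1.0)
        w -= lr * g
        prev_grad = g
        w.grad.zero_()
```

[a trained hierarchy would presumably replace the hand-written grow/shrink rule with a model that proposes the per-weight rates, but that's beyond a sketch]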