[ot][crazy][wrong] was [ot][crazy][draft] Fractured
Karl
gmkarl at gmail.com
Sat Aug 14 17:11:12 PDT 2021
>
> [I ran into a cognitive issue here and wrote the issue down:
> Bobby was debugging an error in his model for environments needing new
> engine designs. It kept failing in
> I'm trying to write that the model is for choosing the general patterns
> for newly created designs based on patterns of newly encountered
> environments. My brain is refusing to simplify this expression for me. In
> the story, they have already finished the nn-based hyperculture that
> businesses are planning. The bug was going to be with a subset of these
> pattern groups, which Bobby had already broken into a roughly finite set.
> It was going to pick sets that were clearly wrong, often avoiding a subset
> of the correct sets in a weighted way, and it was going to be due to a
> training error that stemmed from the architecture of the model combined
> with randomness in the data. He would go in and resolve the error by
> understanding the cause and manually redirecting the impact of the training
> data. This was going to be all simple and concise and only partly
> described.
> ][why doesn't he use an ai that understands the issue?][the space of
> meaning here is outside the distribution of commonly available nn models
> for him, and it would fix it wrong.]
> [what are your thoughts on changing how nn models are so that they can
> handle issues like this?]
> [it feels like something doesn't want that to happen, that it fears
> somebody would use the information, even accidentally, to do something
> horrible. but. I'll try to think on it a little. ...]
> [no it's ok you don't have to]
> [well I think about this a lot I just have to remember it. Ummm .... nn
> models seem to function really slowly. I might consider training a
> hierarchy of them around choosing architectures and mapping learning rates
> to individual weights, as a research focus. Assuming that then made fast
> training, I might make a second one around designing architectural
> information for new environments. I guess I'm expecting this to eventually
> learn it needs logical reasons and write code / design architectures around
> investigating the rules that cause things and using those directly. I
> think there's a lot of existing research around that, maybe I'd even make
> it the core and just use nn models to handle fluffy stuff quickly? fluffy
> being things where guessing based on lots of things less complex than
> reality is helpful, I guess. Just ideas. No real experience here. I dunno
> if that is helpful or ridiculous, don't really know how to tell]
>
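The "mapping learning rates to individual weights" idea above has existing relatives in the literature, e.g. delta-bar-delta / Rprop-style per-parameter step sizes. A minimal sketch of that heuristic, assuming a simple NumPy setup and a toy quadratic objective (all names here are illustrative, not from the original text):

```python
import numpy as np

def per_weight_lr_descent(grad_fn, w, steps=100, lr0=0.01, up=1.1, down=0.5):
    """Gradient descent where every weight carries its own learning rate.

    The per-weight rates adapt by gradient-sign agreement (the classic
    delta-bar-delta / Rprop heuristic): if a weight's gradient keeps the
    same sign between steps, its rate grows; if the sign flips, it shrinks.
    """
    lr = np.full_like(w, lr0)        # one learning rate per weight
    prev_grad = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        agree = g * prev_grad        # > 0 where the sign stayed consistent
        lr = np.where(agree > 0, lr * up,
                      np.where(agree < 0, lr * down, lr))
        w = w - lr * g
        prev_grad = g
    return w

# Toy quadratic objective: loss(w) = 0.5 * ||w - target||^2
target = np.array([1.0, -2.0, 3.0])
grad_fn = lambda w: w - target       # gradient of the quadratic
w0 = np.zeros(3)
w_final = per_weight_lr_descent(grad_fn, w0, steps=100)
```

This only covers the per-weight-rate piece of the idea; making a second model that *chooses* the adaptation rule or the architecture itself would be a meta-learning layer on top of something like this.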
wanted there to be a nice story where the brain turns into a computer but
nature still exists even though you are a wirehead. special nature! yay!
haha you are talking about "nature magic" on a hacker list. Haha.
I am already the clown of this poor list. You gave it to me as a place to
do things like this.
haha now you look like a worse troll haha.
You want me to have written the story better. I think you do.
maybe!
trying to store "oops" on bracketed phrases. could that be put in an
implant for learning?
yes you totally have a mind control implant! it is why you have to do
everything you are made to do! yes indeed!
it was a joke. wouldn't it be nice if our memory learned things as we
wanted?
I can try to memorize "oops" on the brackets to the degree you want. Thanks
for making it clearer.
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: text/html
Size: 4419 bytes
Desc: not available
URL: <https://lists.cpunks.org/pipermail/cypherpunks/attachments/20210814/4a2aee1e/attachment.txt>