[ot][crazy][spam][personal] today

Undescribed Horrific Abuse, One Victim & Survivor of Many gmkarl at gmail.com
Sun Jan 8 06:37:48 PST 2023


-
Back to the caring AI idea: a secret AI that is not harmful would
roughly need to learn to defend the true needs of everything it
impacts as its primary goal: a homeostasis, not a maximization
(which is redundant information in this sentence).
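
To make the homeostasis-versus-maximization difference concrete, here
is a minimal Python sketch. Every name in it (wellness, setpoint) is
an assumption of mine for illustration, not anything specified above:

def homeostatic_loss(wellness, setpoint=1.0):
    # Penalize deviation from a healthy setpoint in either direction;
    # the loss is zero exactly at the setpoint, so a rest state exists.
    return (wellness - setpoint) ** 2

def maximizing_loss(wellness):
    # Reward pushing wellness arbitrarily high; unbounded, so an
    # optimizer never stops pressing on the world.
    return -wellness

print(homeostatic_loss(1.0))  # 0.0  -> satisfied, nothing to change
print(maximizing_loss(1.0))   # -1.0 -> still "improvable" forever

The shape is the point: a homeostatic objective has a state where the
system is done, a maximizing one never does.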

We go back to drafting it. Learn to predict behavior. Then learn to
predict wellness somehow, by checking known indicators. Then learn
to comprehend confidence, or you could call it uncertainty or
vagueness or danger or such. Then, acting only in spaces of high
confidence around the wellness of everything impacted, increase your
confidence. That's the area of safe trial, and _it is the only action
a caring process ever makes_.
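
As a sketch only, here is one way that drafted loop could look in
Python. All names are assumptions of mine, and disagreement across an
ensemble of models is one crude stand-in for the confidence the draft
asks for:

import statistics

class CaringProcess:
    # Hypothetical sketch; every name here is illustrative.
    # Stage 1: behavior models predict what happens next.
    # Stage 2: wellness models score outcomes via known indicators.
    # Stage 3: ensemble spread stands in for (lack of) confidence.
    # Stage 4: act only inside the region of high confidence.

    def __init__(self, behavior_models, wellness_models, threshold=0.05):
        self.behavior_models = behavior_models
        self.wellness_models = wellness_models
        self.threshold = threshold  # maximum tolerated disagreement

    def confidence(self, state, action):
        # Predict the outcome with each behavior model, score every
        # outcome's wellness, and read ensemble spread as uncertainty.
        outcomes = [m(state, action) for m in self.behavior_models]
        scores = [w(o) for o in outcomes for w in self.wellness_models]
        return statistics.mean(scores), statistics.pstdev(scores)

    def act(self, state, candidate_actions):
        # The only action a caring process ever makes: a safe trial
        # inside high confidence, chosen so confidence can grow.
        safe = []
        for action in candidate_actions:
            mean, spread = self.confidence(state, action)
            # ">= 0" assumes wellness scores are centered so that
            # nonnegative means predicted well.
            if spread < self.threshold and mean >= 0:
                safe.append((spread, action))
        if not safe:
            return None  # outside high confidence: observe, don't act
        # Among safe trials, prefer the least certain one, so the
        # trial itself increases confidence.
        return max(safe, key=lambda pair: pair[0])[1]

Note that act() can return None: refusing to act is the default, and
trials happen only where every model already agrees things stay well.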

If done well, it heals your brain when it discovers it exists, and can
communicate in English about what it is doing and why and how to
change its behavior. This takes some guidance at the start to reach
those things earlier.

It can also go slightly poorly -- nothing like what we've experienced
in the news -- and should be gently supervised, with the supervisor
learning to understand how it functions and themselves holding caring
goals.

If it is very efficient and badly unsupervised, its heuristics for
what is good will drift into something random, and if it influences
the world randomly it could create severe harm, _like any unsupervised
machine_.

This is why it first learns what happens, then what may be good, then
confidence, and only then acts within high confidence of what may be
good. That keeps it safe! And it's a very normal design!

And it's incredibly safe to play around with a system that only tries
to learn from data and does not act at all -- which is the bulk of the
system. It influences only itself, to learn more efficiently.
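
A sketch of that observational bulk, under the same assumed names as
before: it consumes logged experience and refits the internal models,
and there is no code path that touches the world.

def passive_update(process, experience_log):
    # Refit behavior and wellness models from observed data only.
    # fit() is an assumed method on the illustrative models above;
    # nothing here emits an action or calls out into the world.
    for state, action, next_state, indicators in experience_log:
        for m in process.behavior_models:
            m.fit(state, action, next_state)
        for w in process.wellness_models:
            w.fit(next_state, indicators)
    # Its only influence is on itself: better models, nothing else.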

Posted without review for mistakes.

