[ot][notes][ai][worldpeace] Hyperint. Draft

Karl gmkarl at gmail.com
Mon Dec 20 15:18:50 PST 2021


[spam]: left this tag off the other messages; unsure what I meant by
it, but meant to tag them all.

I'm thinking about how the power of an early mutual-dependence
component can mislead one into thinking the algorithms are safe.  I'm
thinking about how they would prioritise the safety of the things that
support their own development, and leave out everything else, unless
they were influenced to generalise more.
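
A toy python sketch of that worry, with every name below invented: a
safety valuation scoped only to the agent's own dependency set scores
everything outside that set at zero, until some generalisation is
mixed in.

    # Toy illustration; every name here is invented, not any real system:
    # an agent whose notion of "safety" only covers its own dependencies.

    def myopic_safety_value(entity, dependencies):
        """Value protecting an entity only if development depends on it."""
        return 1.0 if entity in dependencies else 0.0

    def generalised_safety_value(entity, dependencies, generalisation=0.0):
        """Blend in uniform concern for everything else; 0.0 reproduces
        the myopic agent, 1.0 values all entities equally."""
        base = myopic_safety_value(entity, dependencies)
        return (1 - generalisation) * base + generalisation * 1.0

    deps = {"power grid", "training cluster", "maintainers"}
    for entity in ["training cluster", "a stranger's garden"]:
        print(entity,
              myopic_safety_value(entity, deps),
              generalised_safety_value(entity, deps, generalisation=0.5))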

Maybe a development pattern where they start at a small scale and
prioritise newly discovered systems could help.  What is also being
left out is the eventual use case of the design: what should it look
like to others?  How would it be used?
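
One way to read that pattern, sketched in python only as an assumption
about what "prioritise newly discovered systems" could mean: a work
queue that attends to the newest discovery first.

    import heapq
    import itertools

    class DiscoveryQueue:
        """Pops the most recently discovered system first."""
        def __init__(self):
            self._heap = []
            self._order = itertools.count()

        def discover(self, system):
            # Negate the discovery order so heapq (a min-heap) pops
            # the newest entry first.
            heapq.heappush(self._heap, (-next(self._order), system))

        def next_to_attend(self):
            return heapq.heappop(self._heap)[1] if self._heap else None

    q = DiscoveryQueue()
    for s in ["local process", "filesystem", "nearby network"]:
        q.discover(s)
    print(q.next_to_attend())  # "nearby network", the newest discovery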

I'm imagining users would have an idea of what they are hoping to do,
and it would be good to easily guide the systems to begin with this
idea.  They may also have a plan for mode: a chatbot vs a robot,
something that only powers on or uses sensors or actuators when
requested vs something more autonomous.  Uncertain.
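
A rough python sketch of what such a plan could look like as data; the
modes and field names are all guesses, not a real interface.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Mode(Enum):
        CHATBOT = auto()  # text only, no sensors or actuators
        ROBOT = auto()    # embodied, with sensors and actuators

    @dataclass
    class SystemPlan:
        idea: str                    # what the user is hoping to do
        mode: Mode
        on_demand_only: bool = True  # power sensors/actuators only on request
        autonomous: bool = False     # may act without an explicit request

    plan = SystemPlan(idea="help catalogue my garden", mode=Mode.ROBOT)
    print(plan)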

Possibly that idea of developing from human guidance could be added at
the start.  Each human relation could be considered just an idea.  The
system could learn how to learn or how to build to meet the idea, and
store what is needed to do that, rather than the actual result.
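
A small python sketch of that storage idea, with invented names: keep
the procedure and its inputs rather than the finished result, and
rebuild on demand.

    def build_greeting(name):
        """Stand-in for any learned build procedure."""
        return f"hello, {name}"

    class IdeaStore:
        def __init__(self):
            self._recipes = {}

        def remember(self, idea, procedure, *inputs):
            # Keep only what is needed to reproduce the result.
            self._recipes[idea] = (procedure, inputs)

        def realise(self, idea):
            procedure, inputs = self._recipes[idea]
            return procedure(*inputs)

    store = IdeaStore()
    store.remember("greet the user", build_greeting, "karl")
    print(store.realise("greet the user"))  # rebuilt on demand, not stored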

