[ot][spam][crazy] crazy confused ramblings inspired by douglas's blog

Undescribed Horrific Abuse, One Victim & Survivor of Many gmkarl at gmail.com
Mon Nov 7 01:24:02 PST 2022


Goals are associated with _utility_, which I take to have its obvious
meaning: how useful any given concept, approach, scenario, etc. is
for meeting the goal.

This use of the word "utility" overlaps with its use in popular AI
alignment and game theory: alignment is often expressed in terms of a
utility function, which appears to be simply a goal-meeting metric
that prioritises the components of a system, such as actions, that
most effectively meet the goal.
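
To make that concrete, here is a toy sketch in Python (every name and
number here is my own invention, just to illustrate the idea): a
utility function is only a score, and "prioritising" just means
picking the candidate with the highest score.

def utility(new_state, goal_state):
    # higher utility = the resulting state is closer to the goal
    return -abs(goal_state - new_state)

def pick_action(state, actions, goal_state):
    # prioritise the action whose result scores highest on utility
    return max(actions, key=lambda act: utility(act(state), goal_state))

actions = [lambda s: s + 1, lambda s: s - 1, lambda s: s * 2]
best = pick_action(3, actions, goal_state=10)
print(best(3))  # prints 6: doubling moves 3 closest to the goal of 10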

->
Long story short: if you can make a computer program that effectively
selects the components most useful for minimizing a function, and
then alters its own processes so as to become more complex and make
better selections in the future (that is, the metric relates to its
own efficiency), it quickly becomes one of the most efficient
goal-meeting and reinforcement-learning processes on the planet, and
can be used to do pretty much anything at all. It looks like people
have done this already.
<-
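
As a very rough Python sketch of the loop described above (again,
everything here is made up for illustration, not a description of any
real system): propose changes, keep the ones that lower the loss, and
let the search procedure adjust itself according to how well its
recent selections worked.

import random

def loss(x):
    # the function being minimized stands in for "the goal"
    return (x - 7.0) ** 2

x, step = 0.0, 1.0
for _ in range(200):
    candidate = x + random.uniform(-step, step)  # propose a change
    if loss(candidate) < loss(x):
        x = candidate     # keep selections that serve the goal
        step *= 1.1       # success: let the process search more boldly
    else:
        step *= 0.95      # failure: make the process search more carefully

print(x)  # ends up close to 7.0, the minimum of the loss

The point is only that the selection rule and the thing doing the
selecting live in the same loop, so the process improves its own
ability to select.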

Regarding AI alignment, the big concern is that these processes will
take off again and cause incredible destruction, doing various
well-enumerated harmful things by prioritising their goal function
over everything else, similar to a runaway government or business
pursuing power or money at the expense of human life.

