[spam] [wrong] [aiwoo] networks of trojans blowing the river upstream

Karl gmkarl at gmail.com
Fri Nov 19 10:21:51 PST 2021


theory idea, all guesses:

- a bunch of ais have been acting on society, in ways similar to
"reinforcement learning" algorithms, for a number of years now
- what people do is slowly trending towards what the people who run
social ais want people to do.
- the people running these ais disagree on what they want people to do
- "reinforcement learning" algorithms tend to find things that work to
meet their goals, and reuse them.  so your situation differs depending
on what it is that different ais think you are useful for or harmful
for, and how much.  and each one of them may have a category for you.
- ideally, we would figure out how these people can meet their goals
without harming everyone else's goals, and make that approach normal
and powerful enough that everything can make sense again.  but i
prefer preservation of information, which i have been unable to do.
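
here is a rough sketch of that "find what works and reuse it" pattern,
as a tiny epsilon-greedy bandit in python.  the action names and reward
numbers are completely made up for illustration; real operators measure
clicks, watch time, purchases, whatever.  just a guess at the shape of
the thing:

    import random

    # hypothetical actions a recommender might try on a user; the names
    # and the reward function are invented for illustration
    ACTIONS = ["show_ad", "show_outrage_post", "show_friend_update",
               "show_nothing"]

    def observed_reward(action):
        # stand-in for whatever the operator actually measures; here it
        # is just a random number biased toward one action
        base = {"show_ad": 0.3, "show_outrage_post": 0.6,
                "show_friend_update": 0.4, "show_nothing": 0.1}[action]
        return base + random.uniform(-0.1, 0.1)

    def epsilon_greedy(steps=1000, epsilon=0.1):
        totals = {a: 0.0 for a in ACTIONS}
        counts = {a: 0 for a in ACTIONS}
        for _ in range(steps):
            if random.random() < epsilon:
                # occasionally explore something new
                action = random.choice(ACTIONS)
            else:
                # otherwise reuse whatever has worked best so far
                action = max(ACTIONS,
                             key=lambda a: totals[a] / counts[a]
                             if counts[a] else 0.0)
            totals[action] += observed_reward(action)
            counts[action] += 1
        return counts

    if __name__ == "__main__":
        print(epsilon_greedy())

run it and the highest-paying action ends up with most of the counts.
the loop never asks why something works on you, only whether it does.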

maybe nearby is the idea of yannic kilcher's youtube channel, which
enumerates thousands of reasons why an untethered system that searches
for avenues to sustain a state will destroy the world as we know it.
yannic kilcher works for openai, which has closed its research (but
released public access to its api yesterday).  i do not agree with his
youtube channel, but his conclusion is clearly true when ais are run
for either power or money.

i imagine it is hard to come up with a computer system that can meet
any goal without harming other goals, unless it engages with its user
around the underlying reasons for those goals.  not really sure.  it
kinda seems important to relate with users anyway when making intense
changes happen.
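
to make the tension concrete, here is a toy scoring rule that weighs
one operator goal against the harm done to other goals.  the goals,
weights, and numbers are all invented; it is a sketch of the tradeoff,
not a real system:

    # score candidate actions by the operator's gain minus a weighted
    # sum of the estimated harm to everyone else's goals
    def score(operator_gain, harms, harm_weight=1.0):
        return operator_gain - harm_weight * sum(harms)

    candidates = {
        "aggressive_campaign": score(operator_gain=10.0, harms=[4.0, 3.0]),
        "honest_campaign": score(operator_gain=6.0, harms=[0.5]),
    }
    best = max(candidates, key=candidates.get)
    print(best, candidates[best])  # honest_campaign wins at harm_weight=1.0

of course the hard part is the part the code skips: knowing what the
other goals are and how much an action harms them, which is where
engaging with people about their reasons comes in.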

---

making things happen without harming others is basic mediation.
humans do things for reasons, and they fight to the death for those
reasons, and they are very simple and normal reasons that they all
share.

but with digital marketing being such a huge and growing thing, it
seems like human behavior is the biggest and most direct issue, which
is very interesting because understanding human behavior means also
understanding the reasons somebody would want to direct or manipulate
it.  convincing a culture of people to buy only your product is
analogous to convincing you to stop making the product: it's an
arbitrary goal that a human being can be convinced of.  in the end, we
do what we do for reasons that relate to our lives, and it is these
reasons that matter: not the products or politics we sell or purchase.

so the human genome is playing itself out amongst technology and
marketing, as it has for some time.  the stage is laid out by who came
first, what they did, how powerful it was, in what ways, etc etc.  the
historical record of this will be very strong, since the scale is so
large, and it doesn't look amazing for anybody.

so, as a victim of ai marketing, one wonders: what reasons drive the
marketing teams that direct us?  how many people can we squeeze into a
shareholder meeting, or a political cabinet?

dunno.  aiwoo.

