1984: Active Measures, Engineering Consent - Covert Bots Influencing All Online from Pol to Crypto to Freedom

grarpamp grarpamp at gmail.com
Sun Jul 31 03:00:47 PDT 2022


If users spend any observational time online,
bots can be spotted, and many botlike accounts appear
to chime in support of the Left State, not Crypto or Freedom,
serving to homogenize and control the masses, not to foster
the freedom that would moot them.

Even Musk forced the exposure of what Twitter was hiding:
that Twitter knows it is composed of far more than
the mere 5% bots it claims. So what are those
hidden bots doing... Twitter knows, just as it knows
what Twitter's Censor and Ranking teams are doing.

https://www.youtube.com/watch?v=efPrtcLdcdM Bot Runs For Teh Lols

https://old.reddit.com/r/talkwithgpt2bots
https://old.reddit.com/r/SubSimulatorGPT2

https://www.reddit.com/r/ShadowBan/
Also interesting to note is how none of his bots
were shadowbanned, while new users seem to get
shadowbanned quite frequently.

Singularity - The moment in time when artificial intelligence
progresses to the point of greater-than-human intelligence,
radically changing civilization.

https://drwho.virtadpt.net/files/The-Engineering-of-Consent.pdf
https://archive.org/details/in.ernet.dli.2015.1607/page/n9/mode/1up
Crystallizing Public Opinion
https://www.youtube.com/watch?v=0fx1BYwCwCI Bezmenov Ideological Subversion
https://archive.org/details/Tavistock_201601
https://tabublog.com/2015/12/26/the-manufactured-invention-of-the-beatles-stones-grateful-dead-and-the-birth-of-rock-n-roll-by-the-tavistock-institute-a-jesuit-corporation/
https://www.educate-yourself.org/nwo/nwotavistockbestkeptsecret.shtml
https://seoulsister.blog/2022/03/21/tavistock-institute-bernays-propaganda-and-engineering-consent/
https://famguardian.org/Publications/Shaping_The_Decline_Of_USA/Shaping_The_Decline_Of_USA-Coleman_John.pdf
https://opdinani.wordpress.com/2011/04/22/our-sexual-ethics-by-bertrand-russell-1936/
http://www.visbox.com/prajlich/forster.html

https://www.amazon.com/Active-Measures-History-Disinformation-Political/dp/0374287260
Active Measures: The Secret History of Disinformation and Political
Warfare by Thomas Rid
"Active Measures is predominantly an exercise in clarity, shining a
light on covert operations and exposing the lies previously reported
as truth. But it is at its most chilling when describing the
disorientating complexities of unsolved operations."

"Makes me wonder how much of this sort of thing has been used against
the crypto ecosystem"

"I have recently started studying disinformation, and I find it highly
interesting and informative. Specifically going through "Active
Measures" by Thomas Rid that was published last year. If I was to give
just one insight, it's this: y'all don't realise how far
disinformation campaigns go. The first one Rid describes (possibly not
the first one ever, but the first one by Russian services) is the
successful campaign by the newly founded communist Russia against
exiled monarchists. We're talking 1918-1927. There is a lot of
experience (on all sides) with disinformation campaigns, and not only
among state actors, but big businesses too. And now, also, medium
businesses, as disinformation is being commoditised by businesses like
Cambridge Analytica and its successors. As stated by others in the
thread, the main objective of disinformation campaigns is not
necessarily creating an alternate reality of lies (but it does help).
Most of the time it's to "muddy the waters". Make it so that people
don't believe that a truth exists, or if it exists, that it isn't
knowable. So that people get discouraged and leave. There's a lot more
to it, I can't recommend the book enough."


"
It took me 1 day to create a program, using GPT-3, to create a highly
convincing small army of bots to post on Reddit: Here's how I did
it...
As someone who genuinely thinks there is significant concern with AI's
ability to leverage convincing natural human language through LLMs to
"manufacture" consent for special interests, I have always worried we
are well past that point online. I suspected, like many, that this is
a current problem that lacks proof. The only evidence I have is sheer
logical game theory: if it's possible and effective, special interests
will absolutely be engaging in it.

I received a lot of pushback, because many people don't want to
believe this is a likely reality. Many people are uncomfortable
admitting that many of the opinions they hold on world affairs,
domestic politics, and corporate perception are likely heavily
influenced by false social proof provided by AI. So I went out to
prove to myself that I'm not crazy, and that this is relatively easy,
effective, cheap, and would go completely unnoticed.

(Note I will not be sharing any of the details of my own experiment
for two reasons. I want to avoid a ban, and I don't want anyone
sticking their fingers in my stuff)

Step 1: Defining the goal

This is the easiest part. Thanks to our divided country here in the
USA, there is no limit to the topics I could choose from. Since this
is Reddit, imagining being someone trying to manufacture consent, I
obviously wouldn't want a right-wing issue to push, because I don't
have the scale or will to even attempt something like that. It made
more sense to go with a left-wing partisan issue. I specifically chose
a topic I find a bit hypocritical, though, so as to allow left-wing
dissent on the subject I was targeting.

I originally decided to monitor /r/new of about 10 different
politically focused subreddits that aren't explicitly partisan... But
due to lack of resources I narrowed it down to three: one which was
explicitly a partisan subreddit, one which wasn't explicitly partisan
(but clearly had a bias), and another which obviously had a bias but
wasn't what you'd typically think of as a political subreddit (much
more neutral).

I wanted to monitor /r/subreddit/new, wait randomly between 3 and 20
minutes, scan the comments for relevant trigger topics, and reply to
just one or two if more than 3 triggering events existed.

Step 2: Creating the model

This was the easiest part. I initially tried, for fun, to see if I
could get GPT-3 to write a Python script for me to scrape the user
profiles of the "personality" I wanted to build the model around. It
was surprisingly easy to find plenty of users who militantly held the
position I was looking for, and who post all day on Reddit like it's
their damn job. I'm still not sure if I just created a GPT-3 model off
another GPT-3 model (turtles all the way down), but I digress.

Needless to say, I couldn't figure out a good way to get GPT-3 to
write a script that scraped all the comments on their profiles. But it
got really close... The main issue I had to resolve manually was
simply adding the ability to load the next page for more comments.
Again, not too hard for most, even though I kept getting caught up in
small issues because I haven't programmed in a while and had to keep
looking up basic technical stuff dealing with CSS and HTML values.

Once I had the users' comments scraped, I blended them together and
trained the custom model, which only cost a few bucks. I was actually
a bit surprised how cheap it was to create my own political activist
personality.

Step 3: Monitoring and scraping target subreddits for relevant posts

This was one of the most difficult parts in the sense of "figuring it
out", and less so technically. Trying to navigate the trigger topics
and figure out whether commenters agreed or disagreed with my position
was actually really hard. I spent way too much time on this until I
realized the obvious: I didn't actually need to know that. I could
just randomly select a few triggers and let GPT-3 naturally reply to
the comment. It would naturally agree or disagree.

Step 4: Deployment

Easily, without a doubt, the hardest part. Since I haven't really
programmed in a few years, I wasn't prepared to do a bunch of
tutorials for hours again just to catch up, so to save time and mental
anguish I used third-party macro programs. Each "bot" got its own
instance, with a unique browser user agent (everything custom, from
screen resolution to Windows OS version to drivers, you name it) and
VPN (the accounts ranged from a few years old to brand new). I
convinced myself that I was going this route as a safety procedure to
avoid Reddit's bot-detection algorithms, but in reality it was just
laziness. It was much easier to have GPT-3 print the comment and then
inject it into the macro program, which would then quite literally
type it out in the comment field. I'm sure an actually competent
engineer could simplify this with no UI needed, but I'm just trying to
prove a concept, not build a commercial-scale product here. In fact, I
was initially replying to ALL comments in a thread, but reduced it to
just the parent comments to save resources since, again, I'm just
trying to prove a concept, though in theory it's easy to include child
comments.

To avoid getting into trouble I have to be vague here, but basically I
just set a lot of randomization on frequency of posting, time, length,
etc... Again, as a way to avoid detection. And it worked. Not a single
instance got shadowbanned.

It only took a little bit of troubleshooting at this stage, but I
eventually got it up and running without a hitch and let it run for
quite some time... Again, to be vague, I will say I managed to get
>1,000 posts on the topic, with tons of positive karma.

Takeaway:

This was too easy to do, IMO. I'm not a tech expert by any means, yet
I was able to get a small army of bots to advocate for my hypothetical
special interest. It ended up costing me just a small handful of
pocket change, and I was able to completely automate comment posting
while I worked and slept, actively advocating for my position.

I'm now convinced more than ever that this MUST be much more
widespread. If I was able to do it, actually skilled, funded, and
agenda-driven interests are most certainly doing it. It makes no
rational sense for them not to.

A brief comb-through showed that <10% of the comments were
insufficient. But since a greater share of redditors than that are
idiots, an outside observer would probably not realize it was GPT-3
missing its mark, and would instead just write it off as another idiot
making little relevant sense.

One bot specifically was modelled after an exceptionally toxic user,
and the replies it got back were definitely overwhelmingly negative in
tone. I could see this weaponized incredibly effectively to "curate"
spaces. If I were to deploy 20 of these into a targeted space, working
around the clock, it would make the space so unenjoyable for those who
disagreed with my position that they'd certainly leave (no one wants
to keep returning to a space that bombards you with toxicity whenever
you have a counter opinion), leaving behind at the very least an echo
chamber that tolerates my position, with few people arguing against
it. Super useful for creating a sense of social proof via consensus in
a space.

On the other hand, the bots that were modelled after nicer, more
mature types got FAR less engagement, by a significant magnitude.
However, what little engagement they did get tended to be
significantly longer replies trying to "debate" and discuss. This
wasn't what I was expecting. I thought people would engage more with
the nicer bots because they seemed more open to chat; while they did
get more in-depth responses, it was nowhere near the volume the more
aggressive bots drew.

How the bots did in terms of upvotes in each subreddit was exactly as
expected. The most clearly partisan one gathered upvotes every single
time, but actually got less interaction. The non-explicitly-partisan
one got the most engagement. And the least partisan one got the fewest
upvotes, but also the longest responses.

Beyond "space curation" I could absolutely see this as super useful
for getting out "talking points" on current events as they unfold. I
actually think this would be my key selling point if I were to
commercialize this. It would be relatively easy to quickly draft a
model and immediately deploy it to Reddit to get ahead and saturate
the comments with whichever favorable spin a media communication
expert decides on.

So yeah, that's my little test. If anyone wants to make their own, I
think this is absolutely easy to commercialize if you have the
resources... And I'm sure there are many out there already privately
working behind the scenes. If you hit a roadblock, let me know and I
don't mind helping you through it. Cheers
"



"During the Second World War, both the London and Sussex facilities of
Tavistock served as headquarters for the British Army's Psychological
Warfare Bureau. The significance of this is that through the "best
friend" arrangement between Churchill and Roosevelt, Tavistock was
able to take full control of U.S. intelligence and military policies
through Special Operations Executive (SOE) and maintained this control
throughout the Second World War. Eisenhower was selected by the
Committee of 300 to become the commanding general of the allied forces
in Europe, but only after extensive profiling by Tavistock. He was
then appointed to the White House. Eisenhower was allowed to retain
his seat in the White House until, with his usefulness expended, as
memories of the war receded, he was dumped. Eisenhower's bitterness
over the treatment he received at the hands of the Committee of 300
and the Tavistock Institute is reflected in his statements about the
dangers posed by the military-industrial complex--a veiled reference
to his former bosses, the "Olympians."


More information about the cypherpunks mailing list