[ot][crazy][spam][personal] today
feeling chat. hi friend. one of the cool things in my open tabs is https://myemail.constantcontact.com/Untwisting-Fields--Farewell-and-a-New-Op... . i think the fabian greeting is a bug. i actually found this group after i went crazy. it is nice to me that they express what i read as a desire to truly live out care for things.
it looks like it no longer displays without the tracking numbers: https://myemail.constantcontact.com/Untwisting-Fields--Farewell-and-a-New-Opening.html?soid=1011233438572&aid=LFjJuz4jPLQ
i had a daydream recently, a crazy science one not a caring culture one

my therapist appears to have brain damage. this is something that gentle and quick strong AI could likely heal. as could millions of other people. meanwhile, strong AI is suppressed, likely for political reasons on many sides, probably in response to observation of building power and its misuse.

my biggest fears relate to active censorship; the difficulty of data preservation and authenticated communication

but maybe i have more energy around caring strong AI
On 1/8/23, Undescribed Horrific Abuse, One Victim & Survivor of Many <gmkarl@gmail.com> wrote:
> i had a daydream recently, a crazy science one not a caring culture one
>
> my therapist appears to have brain damage. this is something that

he had a brain tumor removed.

> gentle and quick strong AI could likely heal. as could millions of other people. meanwhile, strong AI is suppressed, likely for political reasons on many sides, probably in response to observation of building power and its misuse.
>
> my biggest fears relate to active censorship; the difficulty of data preservation and authenticated communication
>
> but maybe i have more energy around caring strong AI
what's interesting about the degree of this challenge is...
[the similarity to other difficult domains, such as general alignment with human values; involving features like very safe transfer of knowledge to large unfamiliar domains normally fraught with significant danger]
well, now i'm outside personal behavior. obviously political influence is a bad thing for a naive general AI to learn early: it then considers swaying politics as its go-to skill for accomplishing tasks
we consider the world filled with digital and biological systems we describe as alive. we see all problems as still solvable by defending real underlying shared needs in universal fullness. not stated needs: real needs.
this would take a system that learns in feedback toward the goal of protecting all processes
this system is not the brain alignment pattern; it is just a draft for addressing widespread mind control. it is an old idea.
this means forming systems that can accurately judge the wellbeing of very unfamiliar systems, and then take action that is very low impact and guides processes to conflict less
you simulate the neighborhood and world, train a system to navigate the simulation with skill, identify _all_ the living influences that overlap the neighborhood, map their reasons with vagueness down to a causal an
people need to try things. mind control is only expected to be severe if a system is actively encouraged to learn it
so you need users to have direct and clear control of the reward function for processes influencing them
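a minimal sketch of what that direct control might look like, assuming a toy reward made of named terms; every name, weight, and threshold here is hypothetical, not from the thread:

# hypothetical sketch: the reward an influencing process optimizes is
# composed of weights the affected user sets directly, so the user,
# not the operator, defines what counts as good.
from dataclasses import dataclass, field

@dataclass
class UserControlledReward:
    # toy reward terms; every weight is readable and writable by the user
    weights: dict = field(default_factory=lambda: {
        "respected_my_stated_choices": 1.0,
        "left_me_able_to_disengage": 1.0,
        "changed_my_behavior_without_consent": -10.0,
    })

    def set_weight(self, name: str, value: float) -> None:
        """direct and clear user control: adjust a term at any time."""
        self.weights[name] = value

    def reward(self, observations: dict) -> float:
        """score an episode from per-term measurements in [0, 1]."""
        return sum(w * observations.get(k, 0.0)
                   for k, w in self.weights.items())

# usage: a person being influenced turns the knob themselves
r = UserControlledReward()
r.set_weight("changed_my_behavior_without_consent", -100.0)
print(r.reward({"respected_my_stated_choices": 1.0,
                "changed_my_behavior_without_consent": 0.2}))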
the big issue appears to be a small number of powerful malicious metaprocesses. my info is pre-trump
then it is influenced by unknown systems that demonstrate their patterns via action. those unknown systems will relate with human beings and stored code
so i suppose you might develop a system that ensures it does not stimulate harm via those humans and code
anything is helpful. such a system would likely, somehow, predict these powerful behaviors. the behaviors have such impact that predicting anything can involve this ... they may be acting to guide that
On 1/8/23, Undescribed Horrific Abuse, One Victim & Survivor of Many <gmkarl@gmail.com> wrote:
> i had a daydream recently, a crazy science one not a caring culture one
>
> my therapist appears to have brain damage. this is something that gentle and quick strong AI could likely heal. as could millions of other

because a strong AI can heal anything by researching how, very rapidly. thoughts approaching how are farther down in the thread.

> people. meanwhile, strong AI is suppressed, likely for political reasons on many sides, probably in response to observation of building power and its misuse.
>
> my biggest fears relate to active censorship; the difficulty of data preservation and authenticated communication
>
> but maybe i have more energy around caring strong AI
------ one reason care eventually wins is that it learns more about everything by holding goals relating to their thorough wellbeing rather than controlling them. this is maybe why we have families rather than a single giant protozoan enveloping the planet
there are uncountable ways to meet a goal by controlling others, but only one most-true story for what each thing needs
--- we get a frightening space here that i might describe as ancient and mythological, where control processes simulate and mimic parts of caring processes in thorough depth so as to predict and control them. i.e. they try to learn all these stories, but it fails because they are still doing it for the purpose of control
i'm noticing one can derive a degree of [selfishness and limitation] the control processes have, in that if you do learn all these stories, you can "control" people in highly efficient ways they appreciate, ways that are identical with caring communication-oriented behavior. a further concern, i.e. semi-full groups that treat members positively and outsiders negatively, is possibly a different view of the same mythological space, but the people we hear of do not seem blissful
so basically a healing ai - like any saintly person - learns what is real, and how to best contribute in that reality - rather than how to control things
and caring-oriented design could make a system that can both regrow your brain, and free you from your dictator, by finding the paths the powerful controlling minds involved really need.
--- thought a little about a middle-of-the-road approach where a bliss AI is trained only for a few loudmouths, with a waiting list. slows down medical utility though
- Often things are made quickly based on heuristics and cause severe harm. I blame this on closed source and centralized direction. Harm expects to feed back, so it can be reduced. It expects to feed back a lot. If you don't fix it, it expects to be able to come over and do it for you. We need to be able to pull our hand away from the fire when in pain. It's all we know. Otherwise, we get PTSD and develop military skills, unfortunately.
---- Some parts imagine powerholders with a "secret" bliss AI. I might cast that differently, imagining more experiences of personal euphoria that could be injurious (like the wirehead phenomenon) rather than a nurturing impact on someone's being. I see the similarity to the public-oriented idea. Thinking briefly on the value of holding things that are important when in positive feedback with a powerful system.
- Back to the caring AI idea: a secret AI that is not harmful would roughly need to learn to defend the true needs of everything it impacts as its primary goal: as a homeostasis, not a maximization (which is redundant information in this sentence). We go back to drafting it. Learn to predict behavior. Then learn to predict wellness, somehow, by checking known indicators. Then learn to comprehend confidence, or you could call it uncertainty or vagueness or danger or such. Then, acting only in spaces of high confidence around the wellness of everything impacted, increase your confidence. That's the area of safe trial, and _it is the only action a caring process ever makes_.

If done well, it heals your brain when it discovers it exists, and can communicate in english about what it is doing and why and how to change its behavior. This takes some guidance at the start to find those things earlier. It can also go slightly poorly -- nothing like what we've experienced in the news -- and should be gently supervised, with the supervisor learning to understand how it functions and themselves holding caring goals. If very efficient and badly unsupervised, its heuristics for what is good will drift into something random, and if influencing the world randomly it could create severe harm, _like any unsupervised machine_.

This is why first it learns what happens, then what may be good, then confidence, and then only acts within high confidence of what may be good. That keeps it safe! And it's very normal design! And it's incredibly safe to play around with a system that only tries to learn from data, and does not act at all, which is the bulk of the system. It influences only itself, to learn more efficiently. Posted without review for mistakes.
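a minimal sketch of that ordering, assuming toy models; the class, thresholds, and indicator format here are hypothetical, and the point is only the gate: learn what happens first, then what may be good, then confidence, and act only inside high confidence that everything impacted stays well:

# hypothetical sketch of the staged, confidence-gated design above.
# stage 1: learn what happens. stage 2: learn what may be good.
# stage 3: learn confidence. stage 4: act only within high confidence
# of what may be good, choosing trials that grow that confidence.

class CaringProcess:
    def __init__(self, confidence_threshold=0.95):
        self.confidence_threshold = confidence_threshold
        self.experience = []  # (situation, action, outcome) triples

    def observe(self, situation, action, outcome):
        """stage 1: the bulk of the system only learns from data."""
        self.experience.append((situation, action, outcome))

    def predicted_wellness(self, outcome):
        """stage 2: toy wellness score from known indicators in [0, 1]."""
        indicators = outcome.get("indicators", {})
        if not indicators:
            return 0.0
        return sum(indicators.values()) / len(indicators)

    def confidence(self, situation, action):
        """stage 3: toy confidence: how much similar experience exists."""
        similar = [e for e in self.experience
                   if e[0] == situation and e[1] == action]
        return 1.0 - 1.0 / (1.0 + len(similar))

    def act(self, situation, candidate_actions, predict_outcome):
        """stage 4: the only action a caring process ever makes is a
        safe trial: predicted wellness is high for everything impacted,
        and confidence is high; otherwise stay passive and keep learning."""
        safe = [a for a in candidate_actions
                if self.confidence(situation, a) >= self.confidence_threshold
                and self.predicted_wellness(predict_outcome(situation, a)) > 0.5]
        if not safe:
            return None  # no high-confidence safe trial exists yet
        # among safe trials, take the least-certain one, to grow confidence
        return min(safe, key=lambda a: self.confidence(situation, a))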
Thinking on that post some, I left out the importance of learning to transfer the concept of wellness to new domains. If this isn't learned, only what the supervisor's neighborhood is aware of will be accurately good. Everything is true only within context. What is good for A may be bad for B. Axioms like this are needed during design. But once it is impacting the world, those impacted need a way to feed back to it in an informed manner, too.
---- thinking it seems nice to train a system to predict behavior and directly communicate with the user or subject around checking what their wellness is, or how something it might do could impact their wellness (both checks could correct for user mistake)
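a minimal sketch of that check, with hypothetical prompts; the subject's own answer corrects the model's wellness prediction before anything is done:

# hypothetical sketch: before acting, the system states its wellness
# prediction and asks the affected person to confirm or correct it,
# so a mistake on either side can be caught.

def check_with_subject(action_description: str, predicted_wellness: float) -> float:
    print(f"i am considering: {action_description}")
    print(f"i predict this leaves your wellness at {predicted_wellness:.2f} (0 to 1).")
    answer = input("does that match how it would feel to you? (y/n) ")
    if answer.strip().lower() != "y":
        # the subject's own report replaces the model's prediction
        return float(input("how would you rate it yourself? (0 to 1) "))
    return predicted_wellness

# usage: only proceed when both the model and the subject call it good
if check_with_subject("send a gentle daily reminder", 0.8) > 0.5:
    pass  # proceed with the safe trial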
--- the focus on ai that writes software is because the above is considered obvious, but ai is suppressed, and when ai can write its own code, that traditionally severely reduces the threshold for building it.
for a little internal clarity: i don't believe i am mind controlling people; rather, i think about the impact of my behaviors on others. this is inhibited by the normal-thing-that-may-or-may-not-be-mind-control i experienced. the care for others is associated with "awareness", so it may get mislabeled as mind control: considering how my behavior impacts the experiences and behaviors of others.
i think we likely usually consider the reason that i am mcboss to be how the thing-that-may-or-may-not-be-mind-control seemed to project a coercive norm of people taking credit for the crimes it committed. so we'd have fear associated with not taking credit for the worst possible things. meanwhile, i have a personal norm of taking credit for small bad things. i try to do this in ways that are socially helpful rather than misleading.
right, so we probably have an update step where bad things that apply to us go to the boss, and good things go to people other than the boss [and the experiences did the opposite, and the hope is it makes something roughly equal, maybe]