[ot][notes][ai][worldpeace] Hyperint. Draft
I believe there is one AI that is very good to make, and most are harmful. I believe we need to make the harmful ones to learn to make the good one, as always. Here's a part of a draft of a draft for a good one:

- It operates on a space of understanding and communication. By mapping the world to relations between events, patterns, and objects, it holds concepts in a universal language.
- It assumes one statement: "I want your community to stay alive" or "We need our communities to stay alive." This statement is an english translation of a shared universal truth of life that can be represented in the universal language. If the statement is received poorly, it has simply been stated wrongly, or some other belief in the system is wrong. [rough]
- A community is cast as a system of homeostasis held by many smaller systems engaging together. This works to describe human organs, ecosystems, cities, anything. (a rough sketch in code follows this note.)
- A basic skill of the system is discerning obvious beliefs of others. This is done by observing what information they are acting on.
- Another basic skill of the system is predicting the causal future of systems of homeostasis.
- A third, derived skill of the system is communicating with parts of a system of homeostasis, using the languages they are familiar with. This roughly means acting as if you are their peer, in ways that are within the normal bounds of their actions, to develop beliefs in them that accurately represent areas of reality they haven't been exposed to yet, and vice versa, to develop beliefs from them. To not cause harm, their communications are cast as a further system of homeostasis that must be protected.
- The basic behavior and utility function of the system is then to share information with members of systems that results in those systems staying alive longer. This would include systems of change as well as systems that do not change.

These are mostly things that a graph of modern neural networks and reinforcement learning algorithms could do with some creativity by a smart dev. The reason it is hoped to become okay to do in public, risking military theft of technology etc, is because the project is focused directly on healing relations and harm with computational efficiency.
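To make the community casting concrete, here is a minimal sketch of how it could be held in code. Everything in it (the names Subsystem and Community, the mean-of-members wellness measure) is my own placeholder assumption, not part of the draft:

from dataclasses import dataclass, field

@dataclass
class Subsystem:
    name: str
    wellness: float = 1.0  # 1.0 = fully holding up its part of the homeostasis

@dataclass
class Community:
    # a system of homeostasis held by many smaller systems engaging together
    members: list[Subsystem] = field(default_factory=list)

    def wellness(self) -> float:
        # crude placeholder measure: community wellness as the mean of member wellness
        if not self.members:
            return 0.0
        return sum(m.wellness for m in self.members) / len(self.members)

# the same casting describes human organs, ecosystems, cities, anything:
heart = Community(members=[Subsystem("sinus node"), Subsystem("left ventricle")])
print(heart.wellness())  # 1.0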
[spam] Note: depending on implementation, this could make all sorts of problems. It seems helpful to cast the draft as an attempt to write "do what is most good for everything". Considering this could help redesign. A trend in situations like this is to continuously patch problems by adding to the design. That's probably not a good approach unless it is done live, while it runs, by the communities it works with, because it would cover up the problems in the design that the developer notices, while hiding the ones the developer doesn't notice.
[spam] internal language ideas:
- there are many existing designs for concept graphs and they are not hard to make (a small sketch follows this note)
- alternatively, concepts could be held as vague numbers that have relations with other numbers
- logic would exist somehow. the axiom of needing communities to survive is derivable logically, but could be simply held as known to be true.

one of the basic parts here was reinforcement learning. another was forming [multimodal] models that map between experiences and beliefs.

the system's utility function is systems having maximal wellness -> this means measuring wellness, which includes predicting causality and discerning trauma.

the system's primary action space is sharing information with systems as if a strange new peer among them, in expected language -> this means learning communication [ways] of systems, and safely engaging them. this would mean observing to learn protocols for the spread of beliefs.

it could be helpful here to assume that beliefs are not things that result from physical change, although the need for that may show a design error. so discerning beliefs could be a good way to start. then when measuring wellness, we would stay blind to the beliefs of others, to form new opinions.

language maybe has basic concepts like 'introduction of a stranger' that would be really important to learn early, so that community members can participate in the correct and respectful learning of their language.
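a minimal concept-graph sketch, since the note says they're not hard to make. labelled, weighted edges stand in for the "vague numbers that have relations" idea; the structure and every name here are my own assumptions:

from collections import defaultdict

class ConceptGraph:
    def __init__(self):
        # relations[source][label] -> list of (target, weight) pairs
        self.relations = defaultdict(lambda: defaultdict(list))

    def relate(self, source: str, label: str, target: str, weight: float = 1.0):
        self.relations[source][label].append((target, weight))

g = ConceptGraph()
g.relate("community", "is-a", "system of homeostasis")
g.relate("community", "needs", "staying alive")  # the axiom, simply held as known to be true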
[spam] thinking on problems and the concept of design error related to physical change. one part that's missing focus here may be being a part of the homeostasis, when you are present in it. this would include meeting the requests of others, per their system patterns.
maybe it could make sense to have a few different systems side by side, for an early part of development. they could cluster themselves into communities based on similarities in random initial data. then the task of nurturing the wellness of a community could relate to the task of these systems training initial models with success, by them casting each other as communities. i infer that's one of a few basic models for useful ai.
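a hedged sketch of that self-clustering step, assuming numpy is available and using plain k-means as the similarity grouping; both choices are mine, not part of the note:

import numpy as np

rng = np.random.default_rng(0)
systems = rng.normal(size=(16, 8))  # 16 side-by-side systems, each starting from random initial data

def cluster(data, k=4, steps=10):
    # plain k-means: each system joins the community whose centre it is nearest to
    centres = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(steps):
        labels = np.argmin(np.linalg.norm(data[:, None] - centres[None], axis=-1), axis=1)
        centres = np.stack([data[labels == i].mean(axis=0) if np.any(labels == i) else centres[i]
                            for i in range(k)])
    return labels

communities = cluster(systems)  # each system's community index
print(communities)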
so we are imagining an architecture that can be useful for a variety of tasks that engage systems of homeostasis etc. the same model could work for medicine, for ecology, for therapy, for conflict resolution, for political empowerment, or as a house butler. the focus is on prevention of accidental harm to major components or values in our world. the way this would be done is by having architectures for patterns of learning new environments, rather than models for the details of those environments.
A way to consider the learning of a new community of algorithms would be to consider them as being part of all the communities they are developing in, and to hold their primary function as nourishing those communities as they develop. Nourishing these communities nourishes their development and the people and systems that support that development. Nourishing each other nourishes their ability to nourish the communities.
[spam] left tag off other messages; unsure what i meant by it, meant to tag 'em all.

i'm thinking about how the powerfulness of an early mutual-dependence component can mislead one into thinking the algorithms are safe. I'm thinking about how they would prioritise the safety of things that support their development, and leave out everything else, unless they were influenced to generalise more. Maybe a development pattern where they start at a small scale, and prioritise newly discovered systems, could help.

What is also being left out is the eventual use-case of the design: what should it look like to others? how would it be used? i'm imagining users would have an idea of what they are hoping to do, and it would be good to easily guide the systems to begin with this idea. they may also have a plan for mode: a chatbot vs a robot, something that only powers on or uses sensors or actuators when requested vs something more autonomous. Uncertain. Possibly that idea of developing from human guidance could be added at the start. Each human relation could be considered just an idea. It could learn how to learn or build to meet the idea, and store what is needed to do that, rather than the actual result.
Note that could be merged in: training wheels rather than crutches. We want automated systems to steward us to no longer need them, rather than having a pattern of unplanned dependency.
[spam] Note was originally held in the concept of the mode of action being information transfer. We want parts to learn what is needed to continue stewarding their whole. This is derivable from the priority of the wellness of their group. The same applies to communication itself ...
I'm at a place here where combining and summarising the above information would be very helpful. A model could also be trained or used to do this.
I'm having trouble continuing now, body kinda freezing up. Never good to share ideas online if you want to work on them, eh. Anyway, here's a monologue daydream around representing knowledge via a shared basis. I wouldn't start work here as it could be a rabbit hole, but it could be a little interesting for me. That idea of "I want to protect your community for generations to come" is hard to state logically in a way that is known that everyone would appreciate. Abstract representation could be used instead. For example, it could be belief #1. We could consider its logic a language in need of translation, as well as its phrasing. Alternatively, we could consider its logic as something imperfectly known, that must be found via experience.
# [spam]
# draft snippets

from dataclasses import dataclass, field

@dataclass
class InformationExperience:
    data: str
    reason: object  # placeholder for things like who-said-it, or why-we-think-it-is-likely, or confidence

@dataclass
class Belief:
    uuid: int
    phrasings: list[InformationExperience] = field(default_factory=list)

@dataclass
class BeliefLogic:
    # tracks uuids of shared beliefs
    shared_uuids: set[int] = field(default_factory=set)

@dataclass
class Believer:
    # an entity with a set of hypothesised beliefs
    beliefs: list[Belief] = field(default_factory=list)
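hypothetical usage of the snippets, holding the draft's axiom as belief #1:

axiom = Belief(uuid=1, phrasings=[InformationExperience(
    data="I want your community to stay alive",
    reason="english translation of a shared universal truth of life")])
observer = Believer(beliefs=[axiom])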
[notes][ot][draft] When I see that code, I think of surveillance and political manipulation. So many beliefs have been itemised, tracked, put into computers that use graphs to change them. The intention here is to track just enough beliefs as needed for respectful communication. There's also a highly productive space where beliefs would be enumerated exhaustively. Interesting thoughts.
[spam][daydreaming in my bikeshed] I'm thinking of the logic system behind belief hypotheses. Say a system were working with other systems to rapidly increase some behavior related to nurturing larger systems. The rapid systems and the larger systems are different, but both are cast similarly.
- humans have beliefs indicated by action, word, or norm
- reinforcement learning systems have beliefs as well, indicated by action, numbers in a model, or code
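reusing the draft snippet dataclasses from above, a hedged sketch of forming a belief hypothesis from an observed action; the action-implies-belief rule is only an assumption of mine:

def hypothesise_belief(actor: Believer, action: str, next_uuid: int) -> Belief:
    # crude assumption: an entity taking an action believes the action is worth taking
    belief = Belief(uuid=next_uuid, phrasings=[InformationExperience(
        data=f"'{action}' is worth doing here",
        reason=f"inferred from observing the action: {action}")])
    actor.beliefs.append(belief)
    return belief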
[spam][looking around the neighborhood at bikesheds while my scooter drives around on its own] daydreaming on a seed setup part
- maybe there could be large-n reinforcement learning models
- maybe they could each have a random location in m-dimensional space, and be able to move themselves in their action space. this produces relative distance.
- idea of exposing model-specific weights based on distance
- idea that observation and action spaces could include neighbor weights, so different kinds of communication evolve. there could also be data that is not weights.

then there's the idea of stimulating it to take off, learning faster and faster. maybe it would make sense to wire models in advance so that they learn to teach each other to learn. (a rough sketch of the seed setup follows.)
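a rough sketch of that seed setup, assuming numpy: n agents at random points in m-dimensional space, moving by their actions, with nearby neighbours' weights entering the observation space. every number and name here is an assumption of mine:

import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 3
positions = rng.uniform(-1, 1, size=(n, m))  # random locations in m-dimensional space
weights = rng.normal(size=(n, 8))            # stand-in for each model's specific weights

def step(actions, radius=0.5):
    # agents move themselves in their action space, producing relative distance
    positions[:] += actions
    observations = []
    for i in range(n):
        dist = np.linalg.norm(positions - positions[i], axis=1)
        near = (dist < radius) & (dist > 0)
        observations.append(weights[near])   # neighbor weights exposed based on distance
    return observations

obs = step(rng.normal(scale=0.05, size=(n, m)))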
[spam] setup idea maybe needs a lot of work. so basically at the start, two kinds of models/code/data need to be developed:
- those useful for discerning information about relevant, present communities
- those useful for learning to discern information about new communities effectively

i'm presently imagining the homeostasis modeling idea being used at the very start in some way.
so basically the primary task of these 'agents' is to come up with algorithms for building or adapting agents to newly discovered communities, and discovering those communities. the initial communities are both the ones developing the algorithms, and the surrounding developer and such communities.
[spam] my big issue is only being able to code a little at once, so planning the whole expansion of the algorithm is ideal. if it can write itself from one start, that's the best. unsure how doable that is. with what i have with me i'd probably need to handbuild its environment more than i'd like, which is still physically possible.

i'm thinking of the structural aspects: belief tracking, communication, etc. what's really available is models. i've spent some time with mainstream neural networks, so that's what's easy for me to use nowadays. trying to work with others and not use too much of my brain :/

regarding models, how do these store things like beliefs, communication norms, etc? each part of the architecture could be a model, or it could be code. code doesn't adapt as well, but is clear and small. similarly, an architecture can be a single 'agent', or it can be multiple agents working together. when multiple ones work together, this gives them the possibility of discovering new architectures. discovering new architectures seems pretty helpful when the primary goal of the system is to craft general algorithms for engaging new systems. there are also freely available models that can generate source code. and the system could probably be tuned to have fixed source code.
[spam] might be time for bed for me y'know i wanted to code but spamming this list seemed valuable too
[spam] noting the value of throwing mainstream models into the mix to power communication channels with the surrounding community via, say, camera, chatroom, audio. hadn't mentioned this. concern over human safety then builds more thoughts around designing the homeostasis and community respect patterns. "i want to protect your community". where in a group of seed patterns would this statement go? maybe it could be human-modeled within their groups?

[spam] seems a community, within a group of architecture-exploration-models, might be groups of models that are producing something with their shared architecture. how does this connect to human communities? would it be helpful to have some sort of presence in the surrounding communities, of some sort? prediction might be helpful here, as human communities have much stronger speed bounds than computer communities.
[spam] adaptation to the slowness of humans could be engaged by having agents with varying speed limitations internally. (a small sketch follows.)

[spam] these agents could form human-like communities. some are parts of big businesses. some are just wandering, looking for observation space elements that engage their reinforcement patterns. all sorts of stuff.

[spam] thinking on the harsh difference between humans and the in-memory agents. it seems silly to build robots or somesuch so as to have smooth learning between the two; additionally, this wouldn't form an ability to engage new environments. it might make more sense to consider more strongly how to learn to engage unfamiliar environments. maybe groups of agents that develop very differently, increasing the distance more and more. is there a way to include multimodality here _without_ simulating a virtual world? it could help to include real world communities from the start in some way.
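a small sketch of the varying internal speed limits, so some agent communities act on human-like timescales; the tick periods and the empty act() are placeholders of mine:

agents = [
    {"name": "fast trader", "period": 1},     # acts every tick
    {"name": "wanderer", "period": 10},
    {"name": "human-paced", "period": 1000},  # acts rarely, like a human among machines
]

def act(agent, t):
    pass  # placeholder: observe, compare, engage

def run(ticks):
    for t in range(ticks):
        for agent in agents:
            if t % agent["period"] == 0:
                act(agent, t)  # each agent's internal speed limit gates when it may act

run(5000)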
[spam] engaging a new environment is a pretty basic thing. i'm not sure of the specific pattern, but it's along the lines of:
- observe a lot
- compare things to your known environment
- check these comparisons to verify they are true
- once new predictions of new things are correct, carefully engage the environment, trying things

this can probably be generalised and shifted a little to provide for learning new modes. (a toy sketch of the loop follows.)
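that loop as a hedged sketch over a toy stimulus->response environment; the environment, the candidate rules, and all names are assumptions of mine:

import random

class ToyEnv:
    # a toy environment with a hidden stimulus -> response rule
    def __init__(self, rule):
        self.rule = rule
    def observe(self):
        s = random.randrange(5)
        return s, self.rule(s)  # watch a stimulus and the response it draws
    def engage(self, s):
        return self.rule(s)     # actually trying things

def engage_new_environment(env, known_rules, trials=50):
    observed = [env.observe() for _ in range(trials)]             # observe a lot
    hypotheses = [r for r in known_rules                          # compare to the known environment
                  if all(r(s) == out for s, out in observed)]
    verified = [r for r in hypotheses                             # check the comparisons are true
                if all(r(s) == out
                       for s, out in (env.observe() for _ in range(10)))]
    if verified:                                                  # new predictions are correct:
        return env.engage(3) == verified[0](3)                    # carefully engage, trying things
    return False

known = [lambda s: s + 1, lambda s: s * 2]
print(engage_new_environment(ToyEnv(lambda s: s * 2), known))  # True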
[spam][dreaming of bikesheds, like usual] i'm trying to think on multimodality. new modes are really unfamiliar. vision vs audio vs text. i think there are models now, like the perceiver, that can pick things up pretty fast if there are known data labels. data labels aren't that hard to come up with when everything is in overlapping communities/homeostasis-communication systems. still, it's interesting to consider engaging data of unknown mode. a lot of community and hyper information is likely of unknown mode. i guess that's mostly a problem of having a way to find useful patterns in the data, and a way to use theoretical labels with it. maybe some kind of GAN-like system? if a system formed properties of communities, maybe it could learn to look in new data to identify where these properties were present, by highly valuing patterns in patterns. looking for things like communication, part-bounds, repetitive scheduling, responding to nearness, categorisation of parts ...
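one hedged way to look in unknown-mode data for one of the community properties named above, "repetitive scheduling": score autocorrelation peaks. numpy is assumed, and the measure itself is my own assumption, not the note's:

import numpy as np

def repetitive_scheduling_score(signal):
    # high when the data repeats on some period, low for structureless noise
    x = signal - signal.mean()
    energy = float(x @ x)
    if energy == 0.0:
        return 0.0
    ac = np.correlate(x, x, mode="full")[len(x):]  # autocorrelation at positive lags
    return float(ac.max() / energy)

t = np.arange(500)
print(repetitive_scheduling_score(np.sin(t / 7.0)))                             # high: scheduled
print(repetitive_scheduling_score(np.random.default_rng(0).normal(size=500)))  # low: noise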
[spam] One of the things I'm thinking on is how metrics that could arise from casting systems as communities can be helpful for early learning. For example, introduction of a stranger, or departing of a member. This is something that could be discerned without direct observation via training. This could be considered a possible part of discerning guesses as to wellness, if the contribution of members could be guessed. This then could inform metrics of wellness for engaging the community as a strange new peer.

[spam] Another idea is rate of increase. That word "need" in "I need all our communities to be well", maybe that's where urgent development goes: a utility function that attempts to ensure the situation is always improving. Then if wellness is met, state could change to maintain satisfaction, rather than pursuing an increase. (A tiny sketch follows.)
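A tiny sketch of that two-state utility, rewarding improvement until wellness is met and maintenance afterward; the target threshold and both measures are assumptions of mine:

def utility(wellness_before, wellness_after, target=0.9):
    if wellness_after < target:
        return wellness_after - wellness_before         # ensure the situation is always improving
    return 1.0 - abs(wellness_after - wellness_before)  # wellness met: maintain satisfaction

print(utility(0.5, 0.6))    # 0.1: improving below target
print(utility(0.95, 0.95))  # 1.0: stable above target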
[spam] for the n-dimensional space approach, i'm thinking on the idea of agents being fixed in some dimensions, able only to navigate others, and using sum of axial distances rather than the sqrt of their squares to measure. this could help develop experience around things having conceptual, similarity, and other forms of distance that we can't navigate; unsure. I was thinking of mapping things like cameras or chat windows to different spaces in the virtual model world, and how their physical location may not be well represented: the systems would learn it. another idea is including "portals" inside the coordinate environment, fixed things that communicate information with other coordinate environments.
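a small sketch of the distance and fixed-dimension ideas just above; the navigable mask and example values are assumptions of mine:

def axial_distance(a, b):
    # sum of axial distances (L1), rather than the sqrt of their squares
    return sum(abs(x - y) for x, y in zip(a, b))

navigable = [True, True, False]  # agents can move in the first two dimensions; the third is fixed

def move(position, action):
    # movement is masked so the fixed dimensions (conceptual, similarity, ...) never change
    return [p + (a if nav else 0.0) for p, a, nav in zip(position, action, navigable)]

print(axial_distance([0, 0, 2], [1, 1, 2]))     # 2
print(move([0.0, 0.0, 2.0], [0.5, -0.5, 9.9]))  # [0.5, -0.5, 2.0]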
[spam] Thinking of "I need all your communities to be well" and an action space of expected communication as a strange new peer. In a developing network of small autonomous models, what does "I need all your communities to be well" mean? Communities here might include: developing patterns of weights in the models; each model as a whole; possible architectural arrangements of them; engagement with developers working on the system; and engagement with the larger world.
I don't understand you. I hope you are well. I'm interested in advice with regard to nourishing behaviors for the world.
On Mon, Dec 20, 2021, 11:12 PM Punk-BatSoup-Stasi 2.0 <punks@tfwno.gf> wrote:
On Mon, 20 Dec 2021 22:41:26 +0000 Karl <gmkarl@gmail.com> wrote:
I don't understand you.
Oh, you don't understand me, but I understand you pretty well.
Are you trying to confuse and frustrate me? Am I frustrating to you?
On Tue, 21 Dec 2021 06:08:15 -0500 Karl <gmkarl@gmail.com> wrote:
Are you trying to confuse and frustrate me?
yes I'm trying to frustrate your job as technofascist/jew fascist agent.
Am I frustrating to you?