[ot][random][crazy][spam] Out of it, Video Game Design Project
I am totally out of it. I said some things to my therapist that sent my psychosis into convulsions. I can barely consider goals. I found a nice one: I spend so much time without good motor control, painfully associated with my goals and values, that I have a lot of experience daydreaming around pointless stuff to pass the time. One cool pointless project is the idea of a time travel video game. I've started it a few times, though it's hard to get anywhere with. Current behavior may be changed to boss story activity or something productive.

The current attempt to draft this video game design is at https://github.com/xloem/multiplayertimetravel . The repo is basically empty. The idea is to get multiple players into an experience where they can time travel and interact with each other in each others' past and future, using techniques such as simulation, prediction, vagueness, multiple timelines, etc. It's interesting because it sounds impossible when you propose it, but is not too complicated to fake.
From the repo readme:
Basically the experience is a simulation that can predict its state at different points in time, based on known state at other points in time.
Usually I start by designing structures for a simulation like that. The interface and norms end up being different than for conventional simulations, and there are a handful of different ways to approach it, which can appear to impact what's reasonable to implement later. I have a lot of trouble with refactoring nowadays, and can also have a lot of trouble with remembering things I am not immediately looking at. But design concerns are usually blocks: the best way to create something is to actually work on it!
here's the last draft doc that reached github:

# for designing an interface of system simulation
# a simple system where some property is a function of time.
# goal 1: user has access to a state containing a number, and can produce
#         another state based on a time change from the first
#         number = constant * time
# goal 2: model user in a way that can be simulated, to mutate
#         the constant. the simulation can be simply a uniform
#         distribution of chance of change.
# goal 3: simulate a single timeline, letting the user move along it arbitrarily,
#         using the user's model of behavior to predict state.
# goal 4: simulate multiple timelines, with an interface as to whether to bind
#         actions to one or spawn a new one. be nice to have that be a float
#         probability of binding.
# note: user and sim could share the same interface for predicting state given environment and time
# note: user and number could both be equal peers
# design choices/ideas:
# - update functions take new time, not time change. expected to be more refactorable if needed.
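A minimal sketch of goal 1, with hypothetical names: a state holds a number that is a linear function of time (number = constant * time), and can produce another state from a time change.

```python
# Hypothetical sketch of goal 1: state = number that is a linear
# function of time, shiftable by a time change.

class NumberState:
    def __init__(self, time, constant=2.0):
        self.constant = constant
        self.time = time
        self.number = constant * time    # number = constant * time

    def shifted(self, time_change):
        # produce another state based on a time change from this one
        return NumberState(self.time + time_change, self.constant)

s0 = NumberState(time=1.0)    # number == 2.0
s1 = s0.shifted(3.0)          # number == 8.0
```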
i've started code elsewhere that is unlikely to be completed. right now i'm [doing things I try not to do] away from other code. I wonder if following the ideas below is workable or if they make some unrealistic assumption. On Sat, Feb 12, 2022, 3:02 PM Undiscussed Horrific Abuse, One Victim of Many <gmkarl@gmail.com> wrote:
here's the last draft doc that reached github:
# for designing an interface of system simulation
# a simple system where some property is a function of time.
# goal 1: user has access to a state containing a number, and can produce
#         another state based on a time change from the first
#         number = constant * time
class NumberAgent(TimeShiftable):
    def __init__(self,
                 prev_states: List[Tuple[TimePoint, float]] = [],
                 avg_changes_per_second=0.25):
        ..

Okay, I'm thinking here that normalising an interface for functions of probability is really important for things guided by simple rules of chance. I don't know much about probability at this stage of my life. I have some random variable, and I have records of how it's behaved in the past. I want to be able to sample its possible state after a given duration in the future (or the past) from a known state. Y'know, I'm sure that's been well studied and solved, but maybe it's cognitively simplest to just procedurally write down how something like that behaves, and brute force it, for now. Additionally, that approach would let agents be programmed intuitively, rather than in a new language based on probability primitives. It would likely later be possible to translate to probability primitives by sending tracer values through: numbers that track their mathematical operations. Doing that means using a pure conditional function rather than "if" block control flow primitives.
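One brute-force shape that sampler could take, as a sketch (the function names and record format here are invented): replay randomly chosen per-second changes observed in the past records.

```python
import random

# Hypothetical brute-force sampler: given records of past (time, value)
# pairs, sample a possible value some duration after a known state by
# replaying randomly chosen observed per-second changes.

def sample_future(known_value, duration, past_records, steps=100):
    # per-second changes seen in the records
    changes = [
        (v2 - v1) / (t2 - t1)
        for (t1, v1), (t2, v2) in zip(past_records, past_records[1:])
        if t2 != t1
    ]
    value = known_value
    dt = duration / steps
    for _ in range(steps):
        value += random.choice(changes) * dt
    return value

records = [(0, 0.0), (1, 1.0), (2, 1.5), (3, 2.5)]
value = sample_future(2.5, duration=4.0, past_records=records)
# observed rates are 0.5..1.0 per second, so value lands in about [4.5, 6.5]
```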
drafting a number update function:

class Number(DiscretelyUpdatedComponent):
    def update_step(self, old_state, old_time):
        next_time = old_time + self.stability_duration()
        next_state = old_state + self.value_change()
        return next_state, next_time

class DiscretelyUpdatedComponent(TimeShiftable):
    def update(self, old_state, old_time, new_time):
        # this would loop, calling update_step, until new_time was passed

So, this approach starts to look more complicated if we have known state+time pairs in both the past and the future. If we naively walk forward from the past, we may end up somewhere different than the known future. Another approach could be to treat values as abstract distributions rather than precise numbers. This is a lot more work, but it seems pretty fun. The problem could then maybe shift to taking an abstract distribution value and placing bounds on it, such that one of the values that led to it can be sampled given the later bounds. This avenue is likely a red herring, too much work, more efficient to just fudge it with some heuristic, but the project is just for fun, anyway.
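Here's what the "fudge it with some heuristic" option could look like, as a sketch: walk forward naively many times, and keep only a run that lands on the known future state (plain rejection sampling; all names here are invented).

```python
import random

# Hypothetical rejection-sampling fudge for known past AND future states.

def walk(start, steps):
    path = [start]
    for _ in range(steps):
        path.append(path[-1] + random.choice([-1, 0, 1]))
    return path

def conditioned_walk(start, end, steps, tries=10000):
    for _ in range(tries):
        path = walk(start, steps)
        if path[-1] == end:
            # a possible history consistent with both known states
            return path
    return None    # no consistent run found within the budget

history = conditioned_walk(start=0, end=3, steps=10)
```

This gets slow as the future state becomes unlikely, which is part of why the distribution ideas below are tempting.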
Futzing around thinking of distribution trees.

class UniformValue:
    min: float
    max: float

UniformValue(1, 2) + UniformValue(0, 4)

Now we're back in probability again. If we use uniform distributions, then summing them will sum their ranges, but the output won't be uniform any more ... it'll be, uh ... (websearches ...) a trapezoidal distribution here, since the widths differ (summing equal-width uniforms gives the Irwin-Hall distribution). Regardless, it's a function that can be looked up. So say we have such a sum distribution from 1 to 6 with a specific PDF and CDF. And suddenly we know that its value is 3.

UniformValue(1, 2) + UniformValue(0, 4) == 3

I don't know probability, but there must be some way to give new distributions here. Basically the two values completely depend on each other. I guess it simplifies to: UniformValue(1,2) + Dependent_Value == 3. Is this a ridiculous avenue? Or is it easily generalisable?
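For this specific case the dependence can actually be sampled exactly: flat densities stay flat under conditioning, so the first operand given the sum is uniform on whatever interval keeps both operands in range. A sketch (function name invented):

```python
import random

# Conditioning UniformValue(a_min, a_max) + UniformValue(b_min, b_max)
# on the sum being exactly `total`. Both densities are flat, so the
# conditional distribution of the first operand is uniform on the
# interval where both operands stay inside their own ranges.

def sample_given_sum(a_min, a_max, b_min, b_max, total):
    lo = max(a_min, total - b_max)   # a can't be so small that b > b_max
    hi = min(a_max, total - b_min)   # a can't be so large that b < b_min
    a = random.uniform(lo, hi)
    return a, total - a

a, b = sample_given_sum(1, 2, 0, 4, total=3)
# a stays in [1, 2], b stays in [0, 4], and a + b == 3
```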
I'm guessing I'm not prepared to think about this avenue because it's very hard for me to tell whether it's easily generalisable or not. I see it might be helpful to use symbolic solving. I don't know too much about such things.

uniform_a + uniform_b = constant

Originally we had 1 equation and 3 unknowns. With the constant known, we now have 1 equation and 2 unknowns. If we consider these uniform distributions variables instead of distributions, we could use a math package to solve for one or the other, either symbolically or numerically. That could be part of the approach. Let's make it more interesting:

uniform_a + uniform_b + uniform_c + uniform_d = dist_e
dist_e = 3
uniform_c = 2

One equation, originally with 5 unknowns, now with 3 unknowns. Here we go. Since an equation is solved when there is only one unknown, we need one less sampled value than the total number of distributions to guess a state. It would be nice to guess a distribution rather than just a sample, but that could be a challenge for later. It seems reasonable to start by describing distributions with functions that could sample them.
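That counting rule can be sketched directly: sample every free distribution except one, then solve for the last by subtraction. The names and dict format here are invented, and note that nothing forces the solved value into its own range, which is exactly the incorrectness these notes keep flagging.

```python
import random

# Hypothetical "guess a state" heuristic for one linear equation:
# sample all free distributions but one, solve the last by subtraction.
# uniform_a + uniform_b + uniform_c + uniform_d = 3, with uniform_c = 2.

def guess_state(ranges, known, total):
    values = dict(known)
    free = [name for name in ranges if name not in known]
    for name in free[:-1]:                 # one less sample than unknowns
        values[name] = random.uniform(*ranges[name])
    # the remaining unknown is solved by subtraction; it may fall
    # outside its own range -- this is the heuristic's known flaw
    values[free[-1]] = total - sum(values.values())
    return values

state = guess_state(
    ranges={"a": (0, 1), "b": (0, 1), "c": (0, 5), "d": (0, 5)},
    known={"c": 2},
    total=3,
)
```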
Writing a distribution algebra tracer or some such sounds hard, but it doesn't have to be, since this is only a proof of concept. We could restrict ourselves entirely to summation of uniform distributions, for example. This approach seems workable, so maybe I can add a bullet about it to that draft doc.
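A sketch of such a restricted tracer: uniform distributions and "+" only. Sums build a little tree, and sampling with a known total solves the last leaf by subtraction. Class names are invented.

```python
import random

# Hypothetical "+"-only tracer: Uniform leaves, Sum nodes.

class Uniform:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Sum(self, other)
    def leaves(self):
        return [self]
    def sample(self):
        return random.uniform(self.lo, self.hi)

class Sum:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def __add__(self, other):
        return Sum(self, other)
    def leaves(self):
        return self.left.leaves() + self.right.leaves()
    def sample_given(self, total):
        # heuristic: sample every leaf but the last, solve the last
        leaves = self.leaves()
        values = [leaf.sample() for leaf in leaves[:-1]]
        values.append(total - sum(values))
        return values

expr = Uniform(1, 2) + Uniform(0, 4)
parts = expr.sample_given(3)    # two operand values that sum to 3
```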
Something that's missing is bounds tracking. The constants place range bounds on the distributions, changing them. Tracking bounds is fun. But it might be more efficient to start by simply solving for the extra uniform value that has the widest range. Later, instances of states or whatnot could have known constraints placed on them. I'm a little worried that solving for the widest range could alter the distribution sampled from, making it incorrect. A more accurate approach might be to sample from the distribution of the sum, then from the distributions that could produce that value, until the final sample is identified. Code that does that could be optimized for different encountered equations, or reviewed for a general optimisation. So "solving for the widest range" can be considered a heuristic, providing a value-tracking structure that opens up design options.
Let's see if I can simplify these ideas. Do they make a good bullet point to later pursue implementing?

Parts:
- tracking simulated agents is transformable to tracking trees of probability distributions, choice processes, or simulation processes
- in a simple example, if everything is a sum of uniform distributions, a possible state can be found by solving for the widest distribution in an equation, based on sampling the others. this is simple subtraction, but likely produces an incorrect output distribution and should be treated as a heuristic.
- so each agent has a function that calculates state at a time point
- I came up with a class of agent where state is decided based on discrete difference from previous state, using uniform distributions
- the proposal is to make a uniform distribution class that produces trees when summed. these trees can then sample their state given constraints on some of their operands.

It's still big for a bullet point, but sounds meaningful to pursue.
glancing at it, it seems good for nearby timepoints, but not so great for very distant timepoints.
for distant timepoints, transforming the described discrete brownian motion into a smooth distribution seems meaningful. this is probably pretty important in general. basically there are crux timeline junctures where interactions between simulation components need to be handled as trees because they interact, e.g. two asteroids colliding, but between them are huge swaths of solvable timeline where sentient beings appear basically brownian. i'm inclined to do the trees of individual decisions anyway, while keeping the larger brownian ideas present and important.
trying to stabilise this idea: a big component of simulating time is considering outcome and causality trees. a simple useful outcome tree might be sums of uniform distributions. an example could handle them with a heuristic class that can solve them given known values for some of them. however, real probability transformation to brownian-like distributions provides the correct answer here.
thinking a little about discrete sums: if the amount of time passed is itself sampled from a uniform distribution, then there is a range of possible tree heights, and a number of options. a quick heuristic example could pick a specific tree height. a correct example would use the distribution of sums to work with all of them.
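A sketch of the smooth replacement for distant timepoints: by the central limit theorem, many small independent steps are close to Gaussian, so a long stretch of the walk can be one normal draw whose variance grows linearly with elapsed time (function name invented).

```python
import random

# Hypothetical brownian shortcut: one gaussian draw stands in for
# summing `elapsed` unit-time steps, since variance adds linearly
# and many independent steps are approximately normal (CLT).

def brownian_sample(start, elapsed, step_std=1.0):
    return random.gauss(start, step_std * elapsed ** 0.5)

# one draw instead of looping ten thousand discrete update steps
x = brownian_sample(0.0, elapsed=10_000)
```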
attempt to summarise again:
- it seems fun to make value tracers for uniform sums. this might involve three classes: uniform distributions, scalars, and sums.
- example heuristic tracers could produce a possible value with an incorrect distribution
- later, real tracers could consider a range of trees and calculate distributions backward from the distribution of sums

simplify:
- heuristic "+" expression value tracers for uniform distributions could open design options up. all they would do is produce a possible sample given known values.
- it might not end up being productive to do, but it gives an avenue of work. another consideration is brownian distributions.
added to file using web interface:

# - a simple uniform distribution class that can do a quick heuristic sample after its "+" operator could open avenues for demo material
# look up / review?
# - brownian distribution
# - irwin hall distribution?
# - bernoulli distribution?
from 2 days ago: I ended up looking a little into the brownian distribution. It's an equation with a value called D inside a square root and an exponent.

this morning: I'm looking into implementing the simple uniform distribution tracing idea. I'd like to do it with some generality, but with my short term memory issues it's very hard to discern how much generality I can add without expanding the problem space too much. Right now I'm stashing some tiny work with a class called UniformDistributionSum, to instead use a class called maybe Expr. The reason for a more general class is partly to relate with the introduction of untracked values like ints or ndarrays. Base classes that can be shared across parts help make code workable. I don't actually have the topical working memory at the moment to make something that functions at all, so this is an exploration.
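For reference, an equation with a value D inside a square root and an exponent matches the standard Gaussian solution of the diffusion equation, giving the density of a Brownian displacement x after time t with diffusion constant D:

```latex
p(x, t) = \frac{1}{\sqrt{4 \pi D t}} \exp\!\left( -\frac{x^2}{4 D t} \right)
```

The variance is 2Dt, growing linearly with time, which is what makes the one-draw shortcut for distant timepoints work.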
One of the issues I run into, when trying to generalise with severe working memory issues, is generalising incorrectly. I won't remember all the parts and uses of what I'm working with, and may do work generalising it into a category that is unworkable later. This can make a possibly-frustrating refactoring situation where things are repeatedly refactored between different norms, to handle the bit I'm thinking about in the moment. If I keep the different refactorizations, I can then often review later to identify what makes sense, but it can be quite hard to get through. The frustration worsens the issues, and the idea of how much there is to review being larger than what I seem able to consider in the moment, can make the mental situation more delicate. Since it's at least partly dissociative amnesia going on, what and how much I can consider is heavily related to how I respond to other things involved.
now I spam to handle my inhibitions

As a computer programmer, generalisation and reuse are among my highest design values.
Object oriented design is an old approach where code modules are grouped into "objects" that represent their "concepts." When judgement fails, the reason this works becomes apparent: concepts of human language already have hierarchy. A pet is an animal, is a member of a biological species, is a biological organism, and is a friend. Each of these categories has normal properties: things like having a physical location, or feelings we hold for them, or needing common things to survive. When we associate our code with concepts we are familiar with, inheritance provides for reuse without our having to think about it. The categories the concept fits into already hold the information about what kinds of things are likely to derive from it in a useful, reusable way.
I switched to python some years ago because:
- it is used by mainstream open source ai research and many hobbyists
- it has language support to reference, mutate, and recompile its own bytecode while executing it, and has some syntax mutability
- it's concise

There are downsides to python. Notably it is hard to debug interpreter problems stemming from chip or kernel misbehavior; C is much better there, in my opinion.

Anyway, I've been using python for some time without understanding __new__, which is an important construct. Here's how the manual says __new__ works. It's not complicated.
1. python calls Class.__new__(cls, ...) when Class(...) is called.
2. Unlike other methods, __new__ is implicitly a static method: it is never passed a self object, and is always passed the class object, no matter how it is decorated.
3. If and only if the return value of __new__ is an instance of the class, it is passed on to __init__ after __new__ returns. Either way, __new__'s return value is what the caller receives.
4. New objects can be constructed via super().__new__(cls) and returned. This lets one write constructors that, for example, instantiate a child class selected from arguments, among other tricks.
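A minimal sketch of rule 4's trick, with invented class names: the base class inspects its argument and redirects construction to a child class.

```python
# Hypothetical example of __new__ selecting a child class from arguments.

class Shape:
    def __new__(cls, sides):
        if cls is Shape:                      # only redirect base-class calls
            cls = {3: Triangle, 4: Square}.get(sides, Shape)
        return super().__new__(cls)           # rule 4: build via super()
    def __init__(self, sides):
        # rule 3: runs because __new__ returned an instance of the class
        self.sides = sides

class Triangle(Shape):
    pass

class Square(Shape):
    pass

s = Shape(3)
# type(s) is Triangle, and s.sides == 3
```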
Sometimes when I look at codebases I experience frustration at what I perceive as a lack of generalisation. An example of this is sympy, I think it's called: it looked to me like it contained no mapping between operators and their inverses, for writing solvers more efficiently.

I've bumped into an interesting situation playing with this code. I'm not sure how to identify that a part of an expression can be a component of a linear combination. I'm considering modeling linear combinations as sums of terms with scalar coefficients. I'm guessing this isn't actually complicated, and I just need to consider a lot of things that feel inhibited or missing for me.

1. When considering a linear combination, we ask what it is combining. For the uniform distribution sum, we want sums of either scalars or uniform distributions, nothing else. So we need some way to parameterise that for other parts.
2. When flattening expression trees, one often walks them recursively. I've drafted a construct I call an "interpretation" that holds the idea of applying a mixin to an object that is already constructed. Basically it just supports "new" and "isinstance", to give some way to work with objects intuitively while prototyping. So question #2 is how to write code that reinterprets an expression tree as a flat linear combination, by recursively interpreting its parts that way ... basically the interpretation needs to be parameterised on what the linear combination is of, as described in #1.

Whew, I think ... There's always something you haven't thought of. If you "fully" align your borg with humanity, they spawn a nanite species on the dark side of the moon.
Here are pastes of draft base classes. These are design level, not work level. They don't relate to the project and would, in a larger project, go in a dependent library. They are small and simple, and I've at other times made forms of them that I prefer over these. They're intended to be refactorable, changing the concepts around for other uses. Notably, in this implementation there is no way yet to add further methods to "Interpretations". I'm still learning effective uses of __new__; this is the first time I've used it. I made a prototype norm of leaving @staticmethod off of the Interpretation method implementations. This is poor style but saves typing.

import operator as op

# OpExpr is a prototype base class for expressions
class OpExpr:
    def __init__(self, op, *operands):
        self.op = op
        self.children = operands
    def __getitem__(self, idx):
        return self.children[idx]
    def __add__(left, right):
        return OpExpr(op.add, left, right)
    def __radd__(right, left):
        return OpExpr(op.add, left, right)
    # note: python looks __instancecheck__ up on the type of the second
    # isinstance argument, so for isinstance(expr, SomeInterpretation) to
    # reach Interpretation.test, the hook belongs on a metaclass of
    # Interpretation, not here on OpExpr

# Interpretation acts like a function associated with a form, that can either create a form of an existing object, or a new object of the form.
# I think of it kind of like a proxy without the proxying implemented yet.
class InterpretationMeta(type):
    # lets isinstance(expr, SomeInterpretation) consult the interpretation
    def __instancecheck__(cls, obj):
        return cls.test(obj)

class Interpretation(metaclass=InterpretationMeta):
    def __new__(cls, obj, *params):
        if len(params) == 0:
            try:
                return cls.form(obj)
            except Exception:
                return None
        else:
            return cls.new(obj, *params)
    def new(*params):
        raise NotImplementedError()
    def form(obj):
        raise NotImplementedError()
    @classmethod
    def test(cls, obj):
        try:
            return cls.form(obj) is not None
        except Exception:
            return False

# Reordered is an interpretation of an expression that changes operand order
def _Reordered(*order):
    class Reordered(Interpretation):
        def form(expr):
            return OpExpr(expr.op, *[expr[idx] for idx in order])
    Reordered.__name__ += '_' + '_'.join(str(idx) for idx in order)
    return Reordered

Reordered10 = _Reordered(1, 0)

# Op is an interpretation of an expression that can just be used to check if
# it has a given operator, without having to type "." and "==" on my phone
# with my finger issues.
def _Op(op):
    class Op(Interpretation):
        def new(*operands):
            return OpExpr(op, *operands)
        def form(expr):
            assert expr.op is op
            return expr
    Op.__name__ = op.__name__.title()
    return Op

Add = _Op(op.add)
Mul = _Op(op.mul)
I want to add this idea to the game: I expect there'd be an eerie feeling if people traveled back in time, and then accidentally caused events that they experienced later in the game. I think it would be coolest to do that environmentally, by providing a number of different ways for it to happen. I'm not sure if that's numerically reasonable, but it could be explored. You could also nudge the player via in-game events. I'll at least try to add the idea to the notes in the repo. If you did it well, you could even get players to cause events that happen simultaneously in real-world time! It might take a very simple system to do that kind of player prediction though.