this is a crazy journal i made around coping strategies for optimizing and repurposing advanced algorithms while experiencing severe inhibition in doing so: process-understanding

this ai-daydream-part is valued for its use in doing logical consideration. when a process is reviewed and understood, the results that can be produced from its parts are enumerated. for example, when understanding something like a = bx + c, we learn basic relations of proportionality and increasing/decreasingness, such that we can predict one variable from a change in another and compare the relation to other relations we see in the world. we often transform the relation to a line graph, then transform other relations to the line graph, and review the line graph to consider specific variable values. this is analogous to simply having the relation available to compare proportionality of related amounts: where proportionality is an idea of how those amounts will change when other amounts change, and an idea of how confident and likely and accurate that is.

we learn that when b goes up, a goes up by the same amount, if c is held constant. we learn that when b goes down, a goes down by the same amount, if c is held constant. we learn that the relation between a and b inverts with negative values, by it being held by multiplication, and we test that. we also try holding b and x constant, and learn that c is related to a in a similar way, but not as clearly. we consider individual relations between the parts that seem clear and useful; each relation depends on other conditions of the other parts. then, when using it, we apply these individual relations to other situations. [burnt out quickly trying to write] [there are 4 variables, not 3, here: a, b, c, x. meant to write y = mx + b] (a small numeric check of these relations is sketched below.)

how to generalise to other processes? you're saying the _parts_ of the processes, the _things_ it depends on, the _variables_ and _equations or simple relations of things between them_ are _enumerable_. _deriving_ relations between the different variables gives us utility and a great portion of understanding. we then hold these derivations near where they are useful, associated with the processes for deriving them, to verify results. [algebra being very similar to intuition, surprisingly]

like, if someone is a postman, this means they will work every day, i will see them near mailboxes, they will be driving a car with the driver seat on the wrong side. then say i want to wave at the postman. to do so i will need to assume that they are on the wrong side of the driver seat. this involves part of my waving process, where i place _myself_ where they can see me ... yeah, stuff like arithmetic and waving-inference-behavior and stuff like that.

we have a big habit of reviewing things for their future utility, for example optimizing algorithms. this means enumerating the _utility_ and _construction_ of the parts. _how_ was this constructed? _why_ were choices made? where can other approaches for the same reasons meet differing goals for the process? when considering how something was constructed and why, we often imagine we were constructing it. we figure out how the choices involved in selecting its parts helped its goals.
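back to the a = bx + c / y = mx + b relation above: here is a tiny numeric check of those 'hold this constant, move that' relations. just a sketch in python with made-up values, to make the habit concrete:

    # checking the y = mx + b relations numerically (illustrative values only)
    def y(m, x, b):
        return m * x + b

    m, x, b = 2.0, 3.0, 1.0
    base = y(m, x, b)

    # raise b: y goes up by the same amount (m and x held constant)
    assert y(m, x, b + 5.0) - base == 5.0

    # raise x: y goes up by m times that amount (m and b held constant)
    assert y(m, x + 5.0, b) - base == m * 5.0

    # make m negative: the relation between x and y inverts
    assert y(-m, x + 5.0, b) < y(-m, x, b)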
in order to develop skill constructing it ourselves, we make sure we can find other choices that would also meet the goals. we review the entire system until we know we can make the same thing a completely different way at all the points in the process of interest. we don't actually do that, we just review it enough to be able to. then we select changes with high return for our goals, to meet them. this involves assuming we can learn new things to meet the various goals, without actually learning them all. we then select which things to discover based on the expected return of doing so.

ok thanks for writing it. we actually use these processes while daydreaming, to handle the cognitive issues via other approaches. also while working, in notes, to keep going when handling issues. i really struggle to form logical inferences in some states of mind, especially when it overlaps algorithmic, mathematical, geometric, or memory-based inferences. we've been using an abstract concept of 'summary' that is nonverbal, but above, it looks like summaries can be verbalised. we didn't solve for the groups of thoughts that are not understanding of processes. we also didn't translate with the word 'summary' that is so commonly referenced internally. but there's strong value around utilising understanding of processes; it's really a skill we've assumed we have.

ok um. we review um the process and think of how it could have been made. when we try to understand why a part works, we look to the relation between what the part does, and the choices made in constructing it.

i made a slingshot! how does it work?
- rubber exerts a force when extended, to contract again
- by stretching rubber with an object, the force is exerted on the object

force-production -> engineering parts that make it function
tool/weapon-ergonomics -> making it useful for human hands
logistics-engineering-of-design -> putting parts together in a way where all goals are met

you could make something that has the same use as a slingshot by meeting the 3 goals above in some other way. if rubber goes away, another stretchy material could be used. if stretchiness goes away, another way of exerting a force could be used. the functions of parts can often be clearly described by short words, but there is strong suspicion some domains have no clear words. we believe we can change that by forming languages that seem intuitive for them. not everyone is sure of this.

how does the linux kernel work? it holds computer instructions for running the various services of the system. it was made by nonpaid software engineers, originally just one. it likely took him many months of unpaid hours of work to make, unknown though.

how does it work for ... fixing the thrashing problem of the system? we'll need to review the parts that make thrashing and store information. we'll want to review the code for moving memory between swap and ram. specifically, we'd be interested in the process of choosing when to do that. thrashing is caused when swapping happens more than user interaction. holding the goal of stopping thrashing, i.e. making user interaction responsive when ram is exhausted, we would then consider avenues for changing the swapping behavior, and consider avenues for detecting thrashing to do so, and avenues for relating that information together. kernels have 'userspace' and 'kernel space', and the two can be laborious to move information between. so rather than detecting which processes are user processes, it could be good instead to measure whether thrashing is happening.

- find a way to measure that thrashing is happening
- alter the swapping code so as to prevent it when it does

{ thrashing happens when memory needs to be swapped so much that the cpu cannot do work. basically, one or more processes have memory access patterns that don't provide for cpu time. these processes would need to be placed on a queue with properties that provide for other processes to do work while they wait. this may mean reducing the ram available to those processes. it could also mean engaging other parts of the linux kernel. }
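before going anywhere near kernel code, here is a user-space sketch of the 'measure that thrashing is happening' half. it assumes a linux kernel new enough to expose pressure stall information at /proc/pressure/memory, plus the swap counters in /proc/vmstat; the 10% threshold and 10-second poll are made up, not tuned:

    # user-space sketch: flag likely thrashing from memory pressure (PSI) and
    # swap activity. assumes linux with /proc/pressure/memory and /proc/vmstat.
    import time

    def read_psi_memory_full_avg10():
        # /proc/pressure/memory has lines like:
        #   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
        #   full avg10=0.00 avg60=0.00 avg300=0.00 total=0
        with open("/proc/pressure/memory") as f:
            for line in f:
                if line.startswith("full"):
                    fields = dict(kv.split("=") for kv in line.split()[1:])
                    return float(fields["avg10"])
        return 0.0

    def read_swap_counters():
        # pswpin / pswpout are cumulative counts of pages swapped in and out
        counters = {}
        with open("/proc/vmstat") as f:
            for line in f:
                name, value = line.split()
                if name in ("pswpin", "pswpout"):
                    counters[name] = int(value)
        return counters

    prev = read_swap_counters()
    while True:
        time.sleep(10)
        cur = read_swap_counters()
        swapped = sum(cur.values()) - sum(prev.values())
        pressure = read_psi_memory_full_avg10()
        if pressure > 10.0 and swapped > 0:
            # this is where the 'alter the swapping behavior' half would hook in:
            # e.g. lowering /proc/sys/vm/swappiness, or putting the worst
            # offenders into a memory-limited cgroup. left out of this sketch.
            print(f"likely thrashing: full avg10={pressure}%, pages swapped={swapped}")
        prev = cur

the kernel-side half, changing when and how swapping happens, would be the much bigger piece of work; this only covers detecting it from outside.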
those are good words, and could continue. when working on a project we're likely to not have much utility from reviewing the words, there being so many, although we are getting a little better and better at that, slowly. but while writing them we came up with useful parts of the project to engage.

this typing has been preparation for further cognitive decline around learning and acting-on-understanding-of-things. it is also an attempt to find ways to learn and act-on-understanding in areas we do not presently do that well. it is designing a coping strategy, or a set of them, or imagining doing so. 'cognitive decline' could be replaced with 'inhibition', where we just can't seem to form action and thought around topics, maybe due to them triggering wild spasms in our experiences.

the heartpulse goal involves analysis of an algorithm that uses research we haven't learned. for example, it uses {eigenmatrices?}, which habitually we would understand by exploration of their use and remembering via exposure. we could reach success without as much resistance, possibly, by instead considering the utility of eigenmatrices simply within the design of the algorithm. we would then put the algorithm together based on the utility of its parts, rather than the specific parts, and transform it to a more efficient approach that we come up with. this may mean inter-recoding the parts, such that their subparts move between each other. it also likely means redesigning some of them.

some parts are balking at this. it sounds like karl wants to be able to do that _easily_. it's such big rote puzzle work, as described. huge-seeming. but we understand karl seems to need to understand his own processes of understanding, in order to keep going on some of his task ideas. this challenge is actually similar to reason-review, and the reason for the task is not for a strongly valued result any more. it is simply to learn to do tasks like it. because we used to be able to. the reason to stick with the tasks could be because we have done them enough to form descriptions of them like the above partial one.
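coming back to the {eigenmatrices?} mention: only guessing that it means eigenvectors and eigenvalues, the utility many algorithms lean on is that a decomposition turns repeated matrix work into cheap work on a diagonal. that is the kind of 'utility of the part' that could be held without first studying all the research behind it. a small numpy sketch of that one utility, nothing from the actual heartpulse algorithm:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    A = A @ A.T  # symmetric, so the eigendecomposition is real and well-behaved

    # A == V @ np.diag(w) @ V.T (up to rounding); powers of A become powers of w
    w, V = np.linalg.eigh(A)
    A_cubed_fast = V @ np.diag(w ** 3) @ V.T
    A_cubed_slow = A @ A @ A

    print(np.allclose(A_cubed_fast, A_cubed_slow))  # True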