[ot][crazy][personal] Micro-moronism Challenge: Iterate & Transform
I'm pretty confident I can relearn to do this when I have produced a small puzzle myself. I imagine it getting harder when somebody else has produced the puzzle, or when it is something useful that doesn't yet exist in the world. I've made https://github.com/xloem/micromoronism . I'm running into psychological issues crafting puzzles to add to it. One thing I noticed:
- it gets much harder as I imagine adding more than 2 instances of a concept, especially if these involve different kinds of changes (a hypothetical example of such a puzzle is sketched below)
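To make that concrete, here is a toy micro-puzzle of the kind I mean. It is hypothetical, not taken from the repository: three near-copies of one concept, each varied in a different way, waiting to be consolidated without losing behavior.

# hypothetical puzzle: three near-copies of one concept, each changed differently.
# the exercise is to consolidate them while preserving every behavior.

def greet_alice():
    print("hello, alice")

def greet_bob(punctuation="!"):          # variation 1: configurable punctuation
    print("hello, bob" + punctuation)

def greet(name, greeting="hello"):       # variation 2: configurable greeting
    print(greeting + ", " + name)

greet_alice()
greet_bob()
greet("carol")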
Now I'm experiencing that splash from pressing an inhibition. Similar things get much harder. Lemme try something _very_ simple right in this thread.

def sum(a, b):
    return a + b

def add(a, b):
    import operator
    operator.add(a, b)

print(sum(1,2))
print(add(1,2))

=> MUTATE, REFACTOR, CONSOLIDATE, IMPROVE =>

note: when consolidating, we don't expect any loss of data. changes should be stored in a revision history, and disparate features integrated together.

def add(a, b):
    # some implementations previously called out to operator.add to do this
    # I saw no use of this behavior and have elided it in 2022-12-07
    return a + b

print(add(1,2))
print(add(1,2))
def add(a, b):
    return a + b

print(add(1,2))

=> MUTATE, TRANSLATE, DOCUMENT, RESOLVE =>

note: when translating from one language to another, we want to keep components and structures analogous, so that future maintainers can safely migrate changes and improvements either way without requiring in-depth understanding of details

template <typename T>
T add(T a, T b) {
    return a + b;
}

#include <iostream>

int main() {
    std::cout << add(1, 2) << std::endl;
}
Now it's very hard for me right now to do _both_ of those at once: both translating to another language and consolidating redundant parts. But, luckily and surprisingly, and it's almost like remembering a forgotten dream for me to see this here, I just did both examples based on the _same redundant function_. So I can work on that inhibition by just pasting my previous examples next to each other!
# this was pretty hard. i'm kind of confused looking at this.
# i mostly just did a blind paste rather than intending the content.

def sum(a, b):
    return a + b

def add(a, b):
    import operator
    return operator.add(a, b)  # added a bugfix here, missing return

print(sum(1,2))
print(add(1,2))

=> MUTATE, REFACTOR, CONSOLIDATE, IMPROVE =>

def add(a, b):
    # some implementations previously called out to operator.add to do this
    # I saw no use of this behavior and have elided it in 2022-12-07
    return a + b

print(add(1,2))
print(add(1,2))

=> MUTATE, TRANSLATE, DOCUMENT, RESOLVE =>

template <typename T>
T add(T a, T b) {
    // added missing comment found in python source:
    // some implementations previously called out to operator.add to do this
    // I saw no use of this behavior and have elided it in 2022-12-07
    return a + b;
}

#include <iostream>

int main() {
    std::cout << add(1, 2) << std::endl;
    std::cout << add(1, 2) << std::endl;  // added missing parallel output here
}
Most recently I've been looking at machine learning. I found that 'petals' project, and there's another ongoing open issue in the same community, which I follow in my insane manner. There's a lot of code duplication in machine learning. Often an implementation will have some arbitrary limit, and an entirely new project gets made that has different limits. This also happens within the same project, as can be seen in all the model implementations in huggingface's transformers library. I see the pattern elsewhere, too, but that is what I've recently been exposed to.
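A toy sketch of that pattern (hypothetical functions, not from any real library): two near-identical implementations that differ only in a hardcoded limit, and a consolidated version that makes the limit a parameter instead.

# hypothetical duplicated implementations differing only in an arbitrary limit

def tokenize_512(text):
    return text.split()[:512]      # project A caps sequences at 512

def tokenize_2048(text):
    return text.split()[:2048]     # project B reimplements it with a 2048 cap

# consolidated: the limit becomes a parameter with a default
def tokenize(text, max_tokens=512):
    tokens = text.split()
    return tokens[:max_tokens] if max_tokens is not None else tokens

print(len(tokenize("a " * 4000)))                    # 512
print(len(tokenize("a " * 4000, max_tokens=2048)))   # 2048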
Similarly, when I work with arweave, there isn't really a normative API. Most of the user-facing libraries generally offer only wallet functionality and access via centralized gateways, whereas the peer code and the non-wallet-associated APIs are mostly all in one erlang codebase in the backend. I've had some luck working with arweave; it's a rare area where I've done a lot of translating. I haven't quite polished things, but everything's a journey.
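For reference, that gateway access is usually just plain HTTP. A minimal sketch of reading a transaction through the arweave.net gateway, assuming the requests library and a placeholder transaction id (the endpoints shown are my recollection of the public gateway routes, so double-check them):

import requests

tx_id = "SOME_TRANSACTION_ID"  # placeholder; substitute a real transaction id

# gateways serve a transaction's data directly at /<tx_id>
data = requests.get(f"https://arweave.net/{tx_id}")
print(data.status_code, len(data.content))

# transaction metadata (tags, owner, signature) is served as JSON at /tx/<tx_id>
meta = requests.get(f"https://arweave.net/tx/{tx_id}")
print(meta.json().get("id"))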
Regarding machine learning, there's a guy on github called lucidrains who has a cool hobby: when popular papers come out, he makes an implementation of the paper in pytorch. Each implementation is relatively small.
One of my inhibition parts is associated with renaming classes, maybe combined with moving an important part away. In https://github.com/xloem/flat_tree , I have the concept of data for a tree index bound up with the actions for mutating the tree. This is making the design a little confusing for me as I struggle through my issues around it (one possible separation is sketched after this list).
- renaming a class that is in-use, to completion
the problem of course is a little bigger than that:
- factoring a small concept into or out of a class, possibly renaming both
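As an illustration only (these names are hypothetical, not the actual flat_tree classes), one way to separate the two concerns is to keep the index data in a passive container and move the mutating actions into a separate class that operates on it:

from dataclasses import dataclass, field

# hypothetical sketch: passive index data, separated from the actions that mutate it

@dataclass
class TreeIndex:
    # flat list of (offset, size) leaf records; purely data, no behavior
    leaves: list = field(default_factory=list)

class TreeAppender:
    # all mutation lives here, so the data class can be renamed, serialized,
    # or reimplemented without touching the actions
    def __init__(self, index: TreeIndex):
        self.index = index

    def append(self, offset, size):
        self.index.leaves.append((offset, size))
        return len(self.index.leaves) - 1

index = TreeIndex()
appender = TreeAppender(index)
appender.append(0, 1024)
print(index.leaves)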
https://github.com/xloem/flat_tree has 3 points where a given component is in use:
- in test.py, which tests an implementation
- in flat_tree/__init__.py , which wraps implementations with a normative interface
- in the implementation itself, such as flat_tree/append_tree.py

One of the ways I am actually still using this code I find inhibited is that it is a dependency of https://github.com/xloem/log , which uses flat_tree in capture.py, capture_stdin.py, and multicapture.py .
- capture_stdin.py is an almost exact newer copy of capture.py . combining these could be a good factoring challenge. testing capture.py likely involves using an android phone to record from.
- multicapture.py is a different approach that likely has a threading bug.
- meanwhile, download.py would use flat_tree if flat_tree had useful reading features

------

regarding machine learning, there is recent activity on the RWKV issue thread in the https://github.com/huggingface/transformers repository where people mention a few more implementations of the model. This model by an independent Chinese researcher demonstrates very similar factoring challenges to the ones I am engaging personally: people keep reimplementing things rather than reusing components, and work to integrate them keeps struggling. [the huggingface/transformers repository as a whole actually has a norm of code duplication; i believe their goal is that each model can be a standalone example for people to learn from and practice with; this can seem quite frustrating if one isn't told in advance] The attempt to integrate that model also surfaces unintegrated models that could be iteration practice others might appreciate. unintegrated models include s4, s4d, facebook mega, hrrformer (and whatever comes after hrrformer, maybe waveformer?). like rwkv, these cutting-edge architectures blast through current limits of machine learning, but aren't in use. implementing them mostly just means copying from a paper. there are also important old models that aren't implemented, like the linear transformer, which is similar to rwkv.
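as an example of how small "copying from a paper" can be, here is a minimal, unoptimized sketch of the linear transformer's causal attention step (the recurrent form from Katharopoulos et al. 2020, single head, numpy only; written from memory, so treat the details as approximate):

import numpy as np

def elu_plus_one(x):
    # elu(x) + 1, the positive feature map used by the linear transformer
    return np.where(x > 0, x + 1.0, np.exp(x))

def causal_linear_attention(Q, K, V):
    # Q, K: (T, d); V: (T, d_v). recurrent form: O(T) time, O(1) state per step.
    phi_q, phi_k = elu_plus_one(Q), elu_plus_one(K)
    T, d = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d, d_v))   # running sum of outer(phi(k_t), v_t)
    z = np.zeros(d)          # running sum of phi(k_t)
    out = np.zeros((T, d_v))
    for t in range(T):
        S += np.outer(phi_k[t], V[t])
        z += phi_k[t]
        out[t] = phi_q[t] @ S / (phi_q[t] @ z + 1e-6)
    return out

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
print(causal_linear_attention(Q, K, V).shape)  # (8, 4)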
a very basic challenge for iterate/transform is simply focusing on the same work for long enough that one could complete the task in the time it is focused on.

i have a habit of 'disrupting' the state of a project by terminating efforts right before a change is restabilized. my local work on flat_tree and log is currently in that state.

one of the ways to handle that harmful habit is to make changes that are fully backward compatible. this can be very verbose for e.g. renames (a sketch of one backward-compatible rename follows below). it can also seem helpful if changes are smaller and simpler. when there is only one change, it is easier to remember what to do when one begins needing to dissociate to stay with the work.

dissociation involves splitting one's concepts in two, some for each part, so that each intense emotion has concepts that satisfy it. when one repeatedly dissociates, it really severely shrinks one's working memory.
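a minimal sketch of what a fully backward-compatible class rename can look like in python, assuming a hypothetical OldTree being renamed to AppendTree (names invented for illustration):

import warnings

class AppendTree:
    # the new name carries the implementation
    def append(self, leaf):
        ...

class OldTree(AppendTree):
    # the old name is kept as a thin, deprecated alias so existing
    # callers and subclasses keep working until they migrate
    def __init__(self, *args, **kwargs):
        warnings.warn(
            "OldTree has been renamed to AppendTree",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)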
my flat_tree and log seem a mess to me atm, likely made by a psychological programming trigger, which i dissociate from to separate the subtle and severe terror from the programming, if i want to stay. to my smaller dissociated consciousness, it then becomes very confusing that the mess is made of multiple factoring changes to the content, and the confusion around this tends to quickly trigger the mess actions again. with extended practice dissociating, i can toughen against that to a degree, and keep staying, but my memory shrinks very small.

it might be helpful to me to simplify this mess by separating the factoring changes out. one of the changes i made was very simple: i migrated the storage norms from flat_tree/__init__.py back into flat_tree/append_tree.py . this change seems good to me. maybe i can preserve both the mess and the clean state, separate out that change in a third worktree, test it with log.py, and push it to the repository.
2022-12-17 0547-0500 i'm thinking of adding two worktrees to both flat_tree and log: "stable" and "mess". i'm somewhat confused that i have worktree changes in these repositories that are _not_ associated with this recent mess, but are older work. maybe i'll make a subfolder with today's date, and make 'stable', 'mess', and 'work' folders within it? handling experience :/
upshot: i discovered you can check out stashes as worktrees. it makes a two-commit history, one for the stash index, and one for merging it with the commit tip. cool stuff.

git worktree add dirname 'stash@{0}'
I've added the data representation changes to append_indices.py in the work/ folder. it became doable; i then encountered an issue when planning to move on to __init__.py to include the wrapper that uses the data representation.
i'm thinking it would make sense to prepare more when i am doing something in a dissociated way and then changing contexts. dissociation seems to need, for me, a lot of preparation for context change.