I was thinking maybe the task of building this data structure, which provides for randomly writing to an append-only medium, could be broken into 3 parts:

- slicing a chunk: easier if an interface for the store is specified, to retrieve written data
- optimizing a sequence that includes sliced chunks by reorganising their metadata. I think I included a number of optimizations in one of the attempts that might be copyable out.
- selecting which nodes to merge when writing a new chunk so as to make balanced trees. I really simplified code for this in the flat_tree repo.

I've done all three of these things in separate attempts, but not together in the same one. Enumerating them seems to make it much easier to think about, and gives an outline to memorize:

1. selecting which nodes to merge (xloem/flat_tree append structure, can be improved to work from the middle; a heuristic like this is sketched in the first block below)
2. slicing chunks (means the library must have access to old internodes to pull chunks out of them; a possible store interface is sketched in the second block below)
3. optimizing out redundant information when reusing old nodes

They kind of overlap, and some important choices are left out, but these are the things I've presently been struggling to conceptualise near each other, I think.

A good next thing to add might be just a comforting reminder of a way to make the append approach work from the middle. I think the hardest part was whether or not a "middle node" should be considered, and what it would be. I recall I had some interest in making leaves and internodes be the same class, but unresolved consideration of it meant there could be slowdowns if that was done hastily.

Maybe it could be interesting and useful to leave out #3, use a heuristic for #1, and consider what problems remain in the result.
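Here's a minimal sketch of the kind of heuristic I mean for #1. It isn't the flat_tree code; the names (`Leaf`, `Node`, `AppendTree`) are made up for illustration. It keeps a forest of complete subtrees and merges the two rightmost ones whenever they cover equal numbers of leaves, like a binary counter, so appends stay balanced while only ever writing new internodes.

```python
# Sketch of a merge-when-equal-size heuristic for appending to a
# balanced tree on an append-only medium. Names are illustrative,
# not taken from the flat_tree repo.

class Leaf:
    def __init__(self, data):
        self.data = data
        self.leaf_count = 1

class Node:
    def __init__(self, left, right):
        self.left, self.right = left, right
        self.leaf_count = left.leaf_count + right.leaf_count

class AppendTree:
    def __init__(self):
        # roots of complete subtrees, largest first (like a binary counter)
        self.roots = []

    def append(self, data):
        self.roots.append(Leaf(data))
        # merge the two rightmost roots while they hold equal leaf counts;
        # each merge writes one new internode and never rewrites old ones
        while len(self.roots) >= 2 and \
                self.roots[-1].leaf_count == self.roots[-2].leaf_count:
            right = self.roots.pop()
            left = self.roots.pop()
            self.roots.append(Node(left, right))

tree = AppendTree()
for i in range(6):
    tree.append(f"chunk {i}")
# after 6 appends the forest holds subtrees of 4 and 2 leaves
print([r.leaf_count for r in tree.roots])  # [4, 2]
```

Because existing internodes are never modified, every merge can simply be written out to the end of the medium; the list of roots is the only mutable state. Working "from the middle" would mean allowing this forest to be rebuilt partway through an old sequence, which this sketch doesn't attempt.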
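For #2, the point about specifying a store interface might look something like this: a reader that can pull back ranges of already-written data, so a sliced chunk can reference old bytes by offset and length rather than copying them. The shape of the interface here is an assumption for illustration, not anything from an existing attempt.

```python
# Hypothetical store interface for retrieving already-written data,
# so sliced chunks can reference old regions instead of copying them.
from abc import ABC, abstractmethod

class AppendOnlyStore(ABC):
    @abstractmethod
    def append(self, data: bytes) -> int:
        """Write data to the end of the medium, return its offset."""

    @abstractmethod
    def read(self, offset: int, length: int) -> bytes:
        """Retrieve previously written bytes."""

class MemoryStore(AppendOnlyStore):
    def __init__(self):
        self._buf = bytearray()

    def append(self, data: bytes) -> int:
        offset = len(self._buf)
        self._buf.extend(data)
        return offset

    def read(self, offset: int, length: int) -> bytes:
        return bytes(self._buf[offset:offset + length])

# a sliced chunk can then be just (offset, length) into the store
store = MemoryStore()
off = store.append(b"hello world")
left_slice = (off, 5)        # "hello"
right_slice = (off + 6, 5)   # "world"
print(store.read(*left_slice), store.read(*right_slice))
```

With something like this, pulling chunks out of old internodes is just reading their referenced ranges back, which is why slicing gets easier once the interface is pinned down.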