- I added expanding of deep branches into every add, in an accumulating way equivalent to recursion. This keeps the tree depth within the new bounds when the leaf count drops, and now some branches are merging, which exercises that code.
- I made the test a little more pathological by shortening the random writes so they sparsely cover old data more, which widens the tree and slows execution. Thinking on addressing this.
- I started copying the code to C, a tiny bit, just a start; not attached.
- Thinking on handling pathological writes just a little:
  - maybe ensuring there are no internodes with single children, so more leaves are clustered?
  - maybe copying leaves up rather than internodes, as an investment in future use when the flush is deeper?
- Considering what property relates to the slowdown: number of nodes traversed? What about in languages with different properties?
- Considering bounds on things: in a worst-case scenario with an ideal tree, how many new nodes should be attached to a new root, at most?

Might make sense for me to just make some code that uses it, given some of the ambiguity here.
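A rough way to pin down that last bound question: if an ideal tree with branching factor b holds n leaves, its depth is at most ceil(log_b(n)), and a copy-on-write add rewrites at most one node per level on the root-to-leaf path, so at most depth + 1 new nodes (including the new root) get attached. This is a minimal sketch of that arithmetic, not the project's actual code; the function names and the copy-on-write assumption are mine.

```python
def max_depth(leaf_count: int, branching: int) -> int:
    """Depth bound for an ideal tree: smallest d with branching**d >= leaf_count.

    Uses an integer loop rather than math.log to avoid float rounding
    at exact powers of the branching factor.
    """
    depth, capacity = 0, 1
    while capacity < leaf_count:
        capacity *= branching
        depth += 1
    return depth

def max_new_nodes_per_add(leaf_count: int, branching: int) -> int:
    # Assumption: a copy-on-write add replaces one node per level on the
    # path from root to leaf, so the worst case is one new node per level
    # plus one for the new root itself.
    return max_depth(leaf_count, branching) + 1

# e.g. 16 leaves, branching 4 -> depth 2, so at most 3 new nodes per add
print(max_new_nodes_per_add(16, 4))
```

If the bound holds, it also says something about the slowdown question: nodes traversed per add grows only logarithmically in the leaf count, so a widening (rather than deepening) tree points the suspicion at per-level fan-out work instead.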