Re: [spam][crazy][log] idea: relearning to write code
1410 k let's make multithreading work

1431
    return self.peer_stream(txid, range=range)
  File "/home/ubuntu/src/pyarweave/ar/peer.py", line 1062, in peer_stream
    return io.BufferedReader(PeerStream.from_txid(self, txid), 0x40000)
  File "/home/ubuntu/src/pyarweave/ar/stream.py", line 12, in from_txid
    tx_offset = ar.Transaction.frombytes(peer.unconfirmed_tx2(txid)).offset
AttributeError: 'Transaction' object has no attribute 'offset'

i'll just spend some time learning how to get the offset of a transaction from its binary object. looks like a bug in my pyarweave library. very inhibited, so the task change is actually helpful.

1431 boss 1: "why are u vomiting everywhere" boss 2: "i ate an ai i'm so sorry please excuse me" [vomits left] [vomits right] boss 1: "dude we have messed up software developers to do that for you" boss 2: [vomits]

1433

1436 i'll have fun checking out the function in arweave that calculates the offset

1439 https://github.com/ArweaveTeam/arweave/blob/master/apps/arweave/src/ar_data_... i wonder if i've reviewed this before

1629 just got the same checksum from a multithreaded upload as from the download. major threshold :) met my hard subgoal. met it !!

1631 i'm going to use my goal-coping state of mind to try to add more features, like gps and photo capturing, and system logs, while recording is running.

1640 thinking about how much it would help to analyse problems to have blockchained logs. thinking about it stimulates lots of inhibition ^_^

1641

1657 k i'm confused around this; i'll take it slowly. previously the generator produced a sequence of data chunks and the storer uploaded each one. now each one has a type. some of these make sense to upload, some make sense to include raw. it's no longer a list of things to just pop and upload: one now needs to consider the order in which they are uploaded. when they are indexed, the indexer has to wait for stuff, so it can put a few in concatenated. it makes sense to separate the types out for the concatenation: accumulate separately for each type. if the types are separated out, this would happen after storing, since storing operates in parallel. it would make sense for the concatenation to happen at the end of storing.

1706 ok the storers retain an index in order to keep things in the right order while operating in multiple threads. ummmmm atm in the implementation, _all_ the events are ordered. ok. hum.

1714 to handle multiple binary streams i'll need to add channels to flat_tree . that leaves two unstable things at once, so my plan is to have only 1 binary stream for now.

1715 actually i'm fudging it so it only tracks the length of one stream, but lets them all be there.
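a minimal generic sketch of that shape (illustrative only, not the project's storer code; the names here are made up): store chunks in parallel, let results finish in any order, then restore index order and accumulate per type, so each type is concatenated once at the end of storing.

import concurrent.futures

def store(index, type_, chunk):
    # stand-in for the real upload; returns the index so order can be restored
    return index, type_, b'stored:' + chunk

def store_all(tagged_chunks, workers=4):
    with concurrent.futures.ThreadPoolExecutor(workers) as pool:
        futures = [pool.submit(store, idx, type_, chunk)
                   for idx, (type_, chunk) in enumerate(tagged_chunks)]
        # results can complete in any order across the threads
        results = [fut.result()
                   for fut in concurrent.futures.as_completed(futures)]
    accumulated = {}  # type -> stored results, restored to submission order
    for idx, type_, stored in sorted(results):
        accumulated.setdefault(type_, []).append(stored)
    # per-type concatenation happens once, after storing finishes
    return {type_: b''.join(items) for type_, items in accumulated.items()}

print(store_all([('capture', b'aaa'), ('log', b'bbb'), ('capture', b'ccc')]))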
1720 doing my goal-focus thing as stuff pushes away from my task

1747 there's a bug, and it's associated with disorganization in my code. i've seen it before. i feel frustrated, exasperated, unnecessarily. the small weakness opens big space for inhibition. there are a few things going on at once. i'll focus on a different bug immediately. might not be the best choice.

1757 so what i seem to settle into is a state of mind where i have an extended sense of what i call 'inhibition'. it's a set of intense experiences that i usually respond to by leaving my current task. what's interesting is that i'm in a state of mind where i can keep working some. it's hard, confusing, but happens. i guess it built a little out of the 'dissociated' working, where at base i would just copy code back and forth. it's really unpleasant and i'd like to find something different or a way to improve it. it is of course great that i can continue to do some work.

1758 i'm thinking about it a little bit, and i can kind of feel in me that there are some ways i might improve it, similar to how i started this. thinking of just doing a little bit, and getting better at that. i could pair it with some distance. there are inner spaces that are hard to describe, but maybe it's really reasonable, unsure.

1759.

1805
- the extra 'None' output appeared to be just a wrapping line of debug output. helps to turn off debug output for the dependency.
- ... presently 3MB turned into 1.7MB, likely truncated.

2106 i've observed a repeating race condition in the multithreaded uploader that produces an unending hang. it's only showing up with the new code, but the underlying race exists in the old code too. it looks to me like the right solution is to change the layout, which is more work than i planned for. spending some time thinking about how things could be easier for me. not sure what to think on this, but it seems like what makes sense to consider.

getting up in the morning is so hard. physically and cognitively. it's like a puzzle, finding parts that will do it. i've been finding them a little more readily, thinking of the strategies i came up with from the second chapter of that book, but it's not the be-all and end-all -- i have dissociation and dyskinesia and amnesia and other similar stuff that can really make things confusing and complex. some of the behaviors i've built over the past two days work, but they don't feel right. the dissociation associated with them is very palpable. i have a state of mind where i'm working that i'd like to protect better. similar to where i'm getting up. but i guess getting my parts able to do these things is a step towards a solution. thinking of addressing this enough to get through much of it is such an incredibly different viewpoint from years past. there is a lot that goes undescribed. lots of important things.

2110 the goal is to reduce the inhibition. sleeping is not my favorite thing all the time. my inhibitions can sometimes seem to strengthen a lot. maybe there are things to think about around that. the goal is to reduce the inhibition. we can also skill-build around goals and behaviors, but it seems stronger sometimes to reduce the inhibition.

2113

2115 tomorrow is tuesday the 16th. i'm working on moving some code inside multicapture.py, so as to accommodate when a network exception is thrown after popping an item off the queue to process.
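the pattern in question, as a minimal sketch (illustrative only; multicapture.py's real queue and uploader are not shown here): if a network error is raised after an item has been popped, put the item back so it isn't lost.

import queue
import threading
import time

def storer_worker(work_queue, upload):
    # pop items and upload them; on a network error, requeue the popped item
    # instead of dropping it (real code would catch the uploader library's
    # specific exceptions and cap the number of retries)
    while True:
        item = work_queue.get()
        if item is None:              # sentinel: stop the worker
            work_queue.task_done()
            return
        try:
            upload(item)
        except (ConnectionError, TimeoutError):
            time.sleep(1)             # brief backoff before retrying
            work_queue.put(item)      # put it back so the item isn't lost
        finally:
            work_queue.task_done()

work_queue = queue.Queue()
threading.Thread(target=storer_worker, args=(work_queue, print), daemon=True).start()
for item in ('chunk-1', 'chunk-2', None):
    work_queue.put(item)
work_queue.join()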
0724 lost a bunch of these when my fingers accidentally clicked 'discard draft'.

with Data.lock:
    data.extend_needs_lk(self.path, out_chunks)
self.chunker = None

i'm in the middle of handling an inhibition.

0750 logged on_created for the new unincluded files

0735 struggling some to think about this code

not presently consistent:

def on_modified(self, event):
    if event.is_directory:
        return
    if self.cur_filename == event.src_path:
        self.cur_filename = event.dest_path
        event.src_path = event.dest_path
    self.continue_file(None)

def on_closed(self, event):
    if event.is_directory:
        return
    if self.cur_file is not None and self.cur_filename == event.src_path:
        self.continue_file(event)
        if len(self.queued_files):
            self.close_file()
            self.process_queue()

i'm trying to cobble things together that process both new files and appended files. so there's a queue of files to process when the first one stops appending. i think it'll be easier to implement if i note when the appending file is closed, as a way to move on to processing new files if they happen. ohhh maybe i can process the queue if it's closed and there is a queue ... i think i'm already doing that. i'm concerned around the instance where it is closed, no files are pending, and then a new file is created. in that instance, i want to [make sure new files are processed if this one has no new data].

0737 one thing i thought about was making some change when it is closed with nothing pending, indicating that it is reasonable to engage other files. maybe what i could do is try to continue it; and if there is no new data, then close it? uhhh sounds like i might be aware of an issue with that, uncertain. i'll just stick with the flag.

0740 this seems too complex. i know it has lots of bugs. i want to move it into another python file and work on it just until it works.

0742 ok maybe i can just bump into the bugs and fix them now. presently the code has an incomplete string at the end:

def on_moved(self, event):
    if event.is_directory:
        return
    if self.cur_file is not None and self.cur_filename == event.src_path:
        offset = self.cur_file.tell()
        self.cur_file = open('

0748 i'm testing it a little. it doesn't move to the new file.

0752 somehow new changes to this draft got misplaced. anyway, poking at it.

0754 it successfully moved from one file to the next. it seems to have an issue with appending to existing files.

0759 ok it still has bugs but i ran into a deeper issue: when i paste the data into zstdcat, it keeps processing after the end of the stream. want to check that the end of stream is detectable. it seems like it's at least possible. the data is chunked into frames, it's possible to decompress a frame at a time, and an error would be thrown if it's not a zstd frame. boop.

0803
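a quick sketch of that frame-at-a-time check (assuming the python `zstandard` package; the helper names are made up and this isn't the project's code): a zstd frame starts with the 4-byte magic 0xFD2FB528, so the end of the meaningful stream is detectable by checking whether the next bytes begin another frame, and decompressing one frame at a time raises zstandard.ZstdError on non-zstd data.

import io
import zstandard

ZSTD_FRAME_MAGIC = b'\x28\xb5\x2f\xfd'   # 0xFD2FB528, little-endian on disk

def looks_like_frame_start(data):
    # cheap end-of-stream check: does the next chunk begin another zstd frame?
    return data[:4] == ZSTD_FRAME_MAGIC

def decompress_one_frame(fileobj):
    # read_across_frames=False stops at the end of the current frame;
    # non-zstd input raises zstandard.ZstdError instead of being passed through
    reader = zstandard.ZstdDecompressor().stream_reader(
        fileobj, read_across_frames=False)
    return reader.read()

frame = zstandard.ZstdCompressor().compress(b'hello world')
blob = frame + b'trailing garbage'
print(looks_like_frame_start(blob))               # True
print(decompress_one_frame(io.BytesIO(frame)))    # b'hello world'
print(looks_like_frame_start(blob[len(frame):]))  # False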
0808 ok i reproduced some input to output using the path watcher. i'm thinking of trying to slap it together and run with real data. there will be many bugs.

0900

0905 i had rebooted my system and i'm having trouble logging in to my raspberry pi again

$ ssh user@raspi22
user@raspi22's password:
Permission denied, please try again.
user@raspi22's password:
Received disconnect from [gently censored] port 22:2: Too many authentication failures
Authentication failed.

it's also behaving a little unexpectedly for me. I wouldn't expect it to disconnect after 2 attempts rather than 3. But maybe things have changed in a system update.

0906 i pasted my password in from typing it and looking at it to verify it was right, and i'm still getting authentication failed, strangely.

0907 ohhhh i have the username wrong ! :D

0921 the system ran out of space. i set a package upgrade going in the background since it mentioned one was needed. i am now addressing the issue. waiting for the git-annex repo.

git-annex: git status will show garden-of-the-misunderstood/2022-02-21T18:10:46-05:00.cast.zst to be modified, since content availability has changed and git-annex was unable to update the index. This is only a cosmetic problem affecting git status; git add, git commit, etc won't be affected. To fix the git status display, you can run: git update-index -q --refresh garden-of-the-misunderstood/2022-02-21T18:10:46-05:00.cast.zst

0924
fatal: Unable to create '/home/ubuntu/src/intellect/.git/index.lock': File exists.
$ fuser /home/ubuntu/src/intellect/.git/index.lock
$ ps -Af f | grep git
ubuntu   181674  28764  0 09:25 pts/1   S+   0:00  \_ grep --color=auto git
$ rm /home/ubuntu/src/intellect/.git/index.lock

0926

0932 back in the code. better charge ahead.

0937 i am so tired of inhibitions. i do like copying the code back and forth.

(Pdb) p self.channels
{'/home/ubuntu/src/log', 'capture'}
(Pdb) p indices[-1][0][-1][1]

self.channels only shows two channels when there are many. what's strange is the output doesn't show any channel but capture.

for metadata, channel_name, header, stream, length in stream.iterate():
    sys.stderr.write('channel data: ' + channel_name + ': ' + str(length) + '\n')

0940 what i'm looking at seems to mean that it should not have functioned at all. strangely, it was.

(Pdb) p index.items()
dict_items([('ditem', ['dLjCqIi9h8RIBch1UXNtKWLTkmRtxYUp9iC3Lm_fRoI']), ('min_block', [996652, 'pG4gRSc03l2js77IfpfUvkTx2zRQFE5capCxY7rSjZ5UWT-5NqeV6U0bvlu_uxW0']), ('api_block', 997053)])
(Pdb) p channel_data
997052

channel_data is supposed to be a dict, but maybe we have passed the sequence of code where it is, in the debugger.

0944 0944 0944

0945 AssertionError yielding capture @ 565248. it looks like, around 565248, a node might contain more children than its listed length. the first root child is 499712 long, so it may be all good. the second one is 94208, so it goes to 593920. drill down.

0947 565248 - 499712 = 65536. down a couple nodes, i bump into an offset of 45056. 65536 - 45056 = 20480. ok, the leaf at 20480 actually starts at 8192 and is 32768 long. not what i expect. it doesn't look like the data i saw in the debugger, to me. i'll break into that offset and look around. i had been looking later.

0952 ok the debugger stopped me at a different index. guess i'd better compare. here's what i looked at by hand:

ClgwxWi1-IXPBxcPHfV82L3ahdrfk5x4H1yc2XDkf8M
lZ9z6x0_XFj9xASzqmCE8Dkm8F3p55t0CaNjzw2gQ3Y
qQHhI9d77zlYgz0e8s9mJ7HsHZmBPpgo-McsPZYC_yQ
k3PO5uvcyOWgPqwo3q-Xc8HbdmSIYYUcRQw_36Z5Ol0
by5Y5Pm1EMginmZS9mYiTyslnJEwZHXN-IBOGrG8vlE
z8NwryGTTWIjYMmfH0JsujjFVUO_o30E9zKD_AAmfGc

then, in the debugger ... i don't see any of those ditems. maybe i'm looking at a different stream. no. same stream. 0955.

ClgwxWi1-IXPBxcPHfV82L3ahdrfk5x4H1yc2XDkf8M <- this is the tail object. the debugger won't show it; it loads it when the stream is loaded. i see its content as the first index in the debugger.

lZ9z6x0_XFj9xASzqmCE8Dkm8F3p55t0CaNjzw2gQ3Y <- this is visible in that content. i was misreading the first character as an I or 1 when it is a lowercase L. there are only 2 indices in the debugger and those are they.
they just looked long cause their contents are.

0958 so if it's only 2 indices deep into the tree, how is it at this later offset? lZ9z6x0_XFj9xASzqmCE8Dkm8F3p55t0CaNjzw2gQ3Y starts at offset 499712 in the debugger. it looks like it's processing the embedded data at the end of lZ9z6x0_XFj9xASzqmCE8Dkm8F3p55t0CaNjzw2gQ3Y , at that offset. the preceding data has lengths of 561152+4096 total, so it ran into a length error earlier than this offset, and did not fail an assertion ... i think !

1002
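the by-hand drill-down above, as a tiny generic helper (illustrative only; the real tree nodes carry more fields than a plain length): walk the child lengths, subtracting until the target offset falls inside a child.

def locate(child_lengths, target_offset):
    # returns (child_index, offset_within_child); the same subtraction done
    # by hand above (565248 - 499712 = 65536, then down the next level, ...)
    remaining = target_offset
    for idx, length in enumerate(child_lengths):
        if remaining < length:
            return idx, remaining
        remaining -= length
    raise IndexError('offset %d is past the end of the node' % target_offset)

# the two root children from the log are 499712 and 94208 bytes wide
print(locate([499712, 94208], 565248))   # -> (1, 65536)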
1002 suddenly sent that. different kind of inhibition.

1004 pasting stuff during dyskinesia ;p

(Pdb) p index
{'capture': {'ditem': ['495FZqKXSr9cCPObKGVuNHShJA79enrwHDk-xcMOBVw', '-m-6k-usTx0RUbRI9EEDRaiA2vIapMObgKz3S1bB2Vs', 'aA2go7KTnc4ArkqcjDN-pg4A-c97_bypml5C01eS5ZU'], 'length': 20480}, 'min_block': [996652, 'pG4gRSc03l2js77IfpfUvkTx2zRQFE5capCxY7rSjZ5UWT-5NqeV6U0bvlu_uxW0'], 'api_block': 997052}

curl -L https://arweave.net/lZ9z6x0_XFj9xASzqmCE8Dkm8F3p55t0CaNjzw2gQ3Y | python3 -m json.tool

 72         import pdb; pdb.set_trace()
 73  ->     self.channels.add(channel_name)
 74         length_sum = 0

            561152,
            4096
        ],
        [
            -1,
            {
                "capture": {
                    "ditem": [
                        "495FZqKXSr9cCPObKGVuNHShJA79enrwHDk-xcMOBVw",
                        "-m-6k-usTx0RUbRI9EEDRaiA2vIapMObgKz3S1bB2Vs",
                        "aA2go7KTnc4ArkqcjDN-pg4A-c97_bypml5C01eS5ZU"
                    ],
                    "length": 20480

1007 1008

(Pdb) p stream_output_offset, expected_stream_output_offset
(0, 1110016)

1009 after fixing the assertion mistake, i'm not finding the offset error. i'm realising that the subindices are actually as wide as the whole stream. i think i was manually calculating it wrongly.

1011 1012

(Pdb)
/home/ubuntu/src/log/download.py(70)iterate()
-> if type(channel_data) is dict and 'ditem' in channel_data:
(Pdb)

{
    "capture": {
        "ditem": [
            "495FZqKXSr9cCPObKGVuNHShJA79enrwHDk-xcMOBVw",
            "-m-6k-usTx0RUbRI9EEDRaiA2vIapMObgKz3S1bB2Vs",
            "aA2go7KTnc4ArkqcjDN-pg4A-c97_bypml5C01eS5ZU"
        ],
        "length": 20480
    },
it appears to pass on from that breakpoint correctly. it then pops back up to the root node, and likely proceeds with the third child. 1013 .

when it pops, it is at an unexpected offset ... possibly because i made the same error in calculating it. this might actually be a bug in the tree, unsure.

(Pdb) n
AssertionError
/home/ubuntu/src/log/download.py(95)iterate()
-> assert stream_output_offset == expected_stream_output_offset
(Pdb) p stream_output_offset, expected_stream_output_offset
(585728, 593920)
1015 reasonable to diagnose. just 1 down from the root. 2nd child in. length possibly mismatching. width of child 1 = 499712, width of child 2 = 94208.

(Pdb) p stream_output_offset, expected_stream_output_offset
(585728, 593920)
(Pdb) 499712 + 94208
593920

it looks like a bug with the downloader. the bounds specify to extract exactly 94208 bytes.

1017 this is _hard_ but good practice! i'm planning to leave the system at 11:00 and try to do daily routine stuff.

1019 turns out it's a bug in the uploader. the data in the second child is only 585728 bytes long. 1019.

1021 this could be helped by an assertion in the uploader. not sure what yet.

lengths = sum((capture['length'] for capture in data.get('capture', [])))
datas = {
    type: dict(
        ditem = [item['id'] for item in items],
        length = sum((item['length'] for item in items))
    )
    for type, items in data.items()
}
indices.append(
    prev,
    lengths,
    dict(
        **datas,

i'm not sure how the tree is referencing a child with more data than the child contains. maybe i could add an assertion to the tree code.

1023
running_size = 0
running_leaf_count = 0

1023
def _insert(self, last_publish, *ordered_splices):
    # a reasonable next step is to provide for truncation appends, where a tail of the data is replaced with new data
    # currently only performs 1 append
    assert len(ordered_splices) == 1
    for spliced_out_start, spliced_out_stop, spliced_in_size, spliced_in_data in ordered_splices:

1024
#new_node_leaf_count = self.leaf_count # + 1
new_leaf_count = self.leaf_count
new_size = self.size
for idx, (branch_leaf_count, branch_offset, branch_size, branch_id) in enumerate(self):
    if branch_leaf_count * self.degree <= new_leaf_count: #proposed_leaf_count
        break
self[idx:] = (
    #(leaf_count_of_partial_index_at_end_tmp, running_size, spliced_out_start - running_size, last_publish),
    (new_leaf_count, running_size, new_size, last_publish),
    (-1, 0, spliced_in_size, spliced_in_data)
)

maybe here at self[idx:] is where an assert would go. how was the root updated, to include a partial index? new_size must have been wrong?

1025
assert self.size == sum((size for leaf_count, offset, size, value in self))

this happens at the end of every mutation. it addresses the root only, not its children.

self[idx:] = (
    #(leaf_count_of_partial_index_at_end_tmp, running_size, spliced_out_start - running_size, last_publish),
    (new_leaf_count, running_size, new_size, last_publish),
    (-1, 0, spliced_in_size, spliced_in_data)
)

adding this:

assert new_size == sum((size for leaf_count, offset, size, value in self[idx:]))

1028 I guess I'll try to make code to recreate the tree while downloading it, so as to test the creation of this tree from its data.
old:
(Pdb) p 585728-499712
86016
(Pdb) p 561152 + 4096 + 20480
585728

newer:
from flat_tree import flat_tree

1030
index.append(id, len(chunk), chunk)

1031
comparison.append(comparison.size, index_subsize, index)

1032
comparison.append(comparison.leaf_count, index_subsize, index)
(Pdb) p comparison.leaf_count
35

1034
(Pdb) p comparison.snap()
[
    (27, 27, 0, 499712),
    (3, 30, 499712, 28672),
    (3, 33, 528384, 32768),
    (1, 34, 561152, 4096),
    (-1, {'capture': {'ditem': ['495FZqKXSr9cCPObKGVuNHShJA79enrwHDk-xcMOBVw', '-m-6k-usTx0RUbRI9EEDRaiA2vIapMObgKz3S1bB2Vs', 'aA2go7KTnc4ArkqcjDN-pg4A-c97_bypml5C01eS5ZU'], 'length': 20480}, 'min_block': [996652, 'pG4gRSc03l2js77IfpfUvkTx2zRQFE5capCxY7rSjZ5UWT-5NqeV6U0bvlu_uxW0'], 'api_block': 997052}, 0, 20480)
]

the root is different because it hasn't added the later data yet :/ OK. what i can remember is that every state of the tree was already uploaded. it's retained and referenced. also, the flat_tree class is easy to make import old data. noted also it would be more interesting to compare if it used the whole trees as the references.

1036
(Pdb) p comparison.snap()
[(27, 27, 0, 499712), (3, 30, 499712, 28672), (3, 33, 528384, 32768), (1, 34, 561152, 4096), (-1, {'capture': {'ditem': ['495FZqKXSr9cCPObKGVuNHShJA79enrwHDk-xcMOBVw', '-m-6k-usTx0RUbRI9EEDRaiA2vIapMObgKz3S1bB2Vs', 'aA2go7KTnc4ArkqcjDN-pg4A-c97_bypml5C01eS5ZU'], 'length': 20480}, 'min_block': [996652, 'pG4gRSc03l2js77IfpfUvkTx2zRQFE5capCxY7rSjZ5UWT-5NqeV6U0bvlu_uxW0'], 'api_block': 997052}, 0, 20480)]

1037 so at what point did the length issue develop, if it is there?

1039 i went back as far as _AdSfr-AHdtWF20eR9ThV8NEOey7QydTsIbUpRX6GIc so far. it contains the 94208 length reference, and then 20480 tacked on the end embedded.

1041 the only index prior to that is the one that is only 565248 bytes long. so i guess i would want to reproduce that 565248 one, and tack the extra 20480 onto it, and see what kind of index it makes. it seems to me it is an error to make the one with the 94208 length. then i can make an assert for it and/or fix it or whatnot.

1042 lZ9z6x0_XFj9xASzqmCE8Dkm8F3p55t0CaNjzw2gQ3Y is 565248 bytes long. _AdSfr-AHdtWF20eR9ThV8NEOey7QydTsIbUpRX6GIc is on top of it, and references it as if it is 593920. i'm worried the most likely situation here is that some data happened between them and was dropped. but i could try this. maybe i'll go to the block explorer and see the sequence of transactions.

1045 the txs are ordered alphabetically by the block explorer. they are bundled into a larger transaction with id lUx1VzFzykYepB44NfrD_GqJLZj4fD9vb6rd0IxBWH4 . i'll use my code to see their order within it.
import ar
peer = ar.Peer()
stream = peer.stream('lUx1VzFzykYepB44NfrD_GqJLZj4fD9vb6rd0IxBWH4')
header = ar.ANS104BundleHeader.fromstream(stream)
1047
header.length_by_id.keys()
dict_keys(['GEZeoe9DMmxtVi4Jqx-q-g9yIMYOw7vWb2fF9GjkVkQ', '-yL3L6w9ysIWrcg8ZSXwV_DxdBOr4PjEJWjnxOYqIU0', 'KLjPJ3JGVxHhtSLzFK8-dlU_pTncyu-C6B3s0F5yBuc', 'zDRSNDKjL04CPFzhzxgmT3ODebBfTbI2RMH
these aren't alphabetical, so they might be ordered. they're big.

1050
$ sudo swapon ~/extraswap
just in case
bundle = Bundle.fromstream(stream)
i'm guessing it's paused loading it over the network?

https://viewblock.io/arweave/tx/lUx1VzFzykYepB44NfrD_GqJLZj4fD9vb6rd0IxBWH4
Size 47.78 MB

not sure what is taking so long.

$ sudo apt-get install jnettop

1051 1052 jnettop shows minimal transfer, with no reverse lookups that i identify as associated with arweave.

^CTraceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ubuntu/src/pyarweave/ar/bundle.py", line 540, in fromstream
    header = ANS104BundleHeader.fromstream(stream)
  File "/home/ubuntu/src/pyarweave/ar/bundle.py", line 87, in fromstream
    return cls({
  File "/home/ubuntu/src/pyarweave/ar/bundle.py", line 87, in <dictcomp>
    return cls({
  File "/home/ubuntu/src/pyarweave/ar/bundle.py", line 83, in <genexpr>
    (int.from_bytes(stream.read(32), 'little'), b64enc(stream.read(32)))
  File "/home/ubuntu/src/pyarweave/ar/utils/serialization.py", line 7, in b64enc
    return base64url_encode(data).decode()
  File "/home/ubuntu/.local/lib/python3.9/site-packages/jose/utils.py", line 88, in base64url_encode
    return base64.urlsafe_b64encode(input).replace(b"=", b"")
  File "/usr/lib/python3.9/base64.py", line 111, in urlsafe_b64encode
    def urlsafe_b64encode(s):
KeyboardInterrupt

it looks like it was actually processing them. maybe i can do it manually and put it in tqdm.

1053 looks like there's some bug in Bundle.fromstream, which I will ignore for the moment.
dataitems = [ar.DataItem.fromstream(stream, length=length) for length in tqdm.tqdm(header.length_by_id.values())]
100%|███████████████████████████████████████████████████████| 679/679 [00:14<00:00, 47.57it/s]
1058
idx_by_id = {dataitem.header.id: idx for idx, dataitem in enumerate(dataitems)}
idx_by_id['lZ9z6x0_XFj9xASzqmCE8Dkm8F3p55t0CaNjzw2gQ3Y']
413
idx_by_id['_AdSfr-AHdtWF20eR9ThV8NEOey7QydTsIbUpRX6GIc']
632
my_ditems = [dataitem for dataitem in dataitems if dataitem.header.owner == dataitems[413].header.owner]
len(my_ditems)
246

i have 1/3rd of the ditems in that tx ;p

1059
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: 'lZ9z6x0_XFj9xASzqmCE8Dkm8F3p55t0CaNjzw2gQ3Y' is not in list
my_ditems.index('_AdSfr-AHdtWF20eR9ThV8NEOey7QydTsIbUpRX6GIc')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: '_AdSfr-AHdtWF20eR9ThV8NEOey7QydTsIbUpRX6GIc' is not in list
i did something wrong. stepping away.

1101 I'm hunting down an incorrect length in my last published test. the second root child is referenced as longer than it is. i was taking some time to look to see if any intermediate roots were dropped.
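a guess at what went wrong there (assuming my_ditems holds DataItem objects rather than id strings, which would explain the ValueError): index against the decoded header ids instead.

# hypothetical fix: build a parallel list of ids and index into that
my_ids = [dataitem.header.id for dataitem in my_ditems]
my_ids.index('lZ9z6x0_XFj9xASzqmCE8Dkm8F3p55t0CaNjzw2gQ3Y')
my_ids.index('_AdSfr-AHdtWF20eR9ThV8NEOey7QydTsIbUpRX6GIc')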
i'm going to try to use this new coping strategy to keep a local schedule, to reduce random postings to this list.