1048 welcome to another episode of that crazy guy on the internet who keeps using his malfunctioning system without rebooting it or anything.
my ip assignment disappeared again, but the route was still there, so I manually reassigned a made-up ip in the right subnet and some of the transfers that were stalled somehow resumed their open connections. which is weird and implies my guess matched the ip it had before.
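for reference, the reassignment was just something along these lines (the address and interface below are stand-ins, not my real ones):

    ip addr add 192.168.1.42/24 dev eth0    # guess an unused address in the same subnet
    ip route show                           # the default route was still there, so nothing else needed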
I can now curl google. but github gives network is unreachable. a story that is getting slightly old.
it's 1050. I have a gdb shallow clone on their master branch. i'm changing the fetch refs because master is the only branch I can see. I might just build it.
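roughly the kind of ref change I mean, in case anyone cares (the branch name here is just an example, not necessarily the one I want):

    # widen what the shallow single-branch clone is allowed to see
    git remote set-branches --add origin gdb-13-branch
    git fetch --depth=1 origin gdb-13-branch
    # or open the refspec up entirely
    git config remote.origin.fetch '+refs/heads/*:refs/remotes/origin/*'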
my pipes and io are getting slower and slower. switching to text-only terminal.
the tmux pane with vim open isn't updating. other tmux panes are.
1052
I recall figuring out how to fix this issue sometime in the past few months, that it's some tmux thing, but I don't remember the fix :/
ok after some time the pane is displaying different output but still doesn't update. looks like I hit the wrong key somewhere. it has :'<,'> at the bottom.
maybe I can just get rid of the pane. i'll look up how to tell tmux to close a pane. I usually just type exit.
1053
1054
ctrl-b, x. closes a pane !
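(for future me, the command-line versions of the same thing; the target name is made up:)

    tmux kill-pane                  # kills the current pane
    tmux kill-pane -t mysess:1.2    # or a specific session:window.pane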
guess i'll try the ref updating again.
1055
I think the issue is that io to my external raid is very delayed. I thought this was because of the file recovery process, but when I run iotop I don't actually see it using io. just the git processes I spawned. maybe there are more of them?
1056 and I updated the refs! file save waiting for io.
I had a git fetch --tags going in the shallow gdb repo. it finally finished downloading and is 70% done resolving deltas.
it doesn't make sense to me that these tasks are taking so incredibly long.
1057
but I guess the file recovery process could have really backlogged some write queue.
1058 and i'm spamming a mailing list with minute by minute updates.
file saved ! yay
so the problem with building gdb is that the checkout folder is responding so slowly. I guess I'll check io usage.
1059
disk usage is not that high. the default iotop display is shuffling between 0 MB/s total write and 100 MB/s total write. I guess the disks (raid 1 pair) or their driver or such are responding slowly.
the writing is mostly from the recovery process.
1100
maybe it uses direct writes or something. I think those might be slower.
1101
oh maybe this is it. the array is 3.6T large and has only 78G left. there must be a lot of seeking happening. maybe I can delete something big.
I also checked /proc/mdstat and both disks are up and in sync, no rebuild happening.
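what I'm running to watch this, more or less (the -o just hides idle processes):

    iotop -o           # only show processes actually doing io
    cat /proc/mdstat   # both members show [UU], no resync/recovery line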
1102
I sit here, waiting for ls -l to complete.
oh there it is.
-> deleted 32G emergency swap file. I regret this, as it has memory contents in it.
actually it's still deleting but it looks like it's stuck inside the system call, I'll probably let it finish
ok it's only 32G anyway, not that big, i'll kill -9 it
oops there it responds
1104
so I have this cool script I made called livesort. it's in one of my github repos for cli tools. I like to be all du --max-depth=1 | livesort -n and it shows the big directories while du is still searching for them
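(if you don't have livesort, a crude stand-in is just re-sorting du's growing output with watch; paths here are made up:)

    du --max-depth=1 /mnt/raid > /tmp/du.out &
    watch -n 2 'sort -rn /tmp/du.out | head -20'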
1105
this will of course take forever. i'd better pause other processes writing to the disks.
1106
I even found the pane that launched the file recovery app and paused that thing. all with ctrl-z of course. you can resume with "fg" .
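for anything that isn't sitting in one of my shells, the same pause/resume is just signals (pid made up):

    kill -STOP 12345   # pause, like ctrl-z
    kill -CONT 12345   # resume, like fg minus the foregrounding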
du still not responding.
1107
but ls is !
1108
du is starting to respond
I kind of recall my biggest folder is likely a pypi mirror I was starting to make. hmm. maybe I can zstd it or something. i'll visit the du output as it updates.
hmm I have a mirror of stackexchange stuff here that's really big.
also models from the huggingface hub
oh and parts of the old exconfidential stuff. there's probably something there to delete.
uncompressed blueleaks tar. 289G . should zstd that thing.
I would ... have to delete the start of the file as I read it ... while piping to zstd ... uh ...
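(the trick would be something like punching holes behind the read position as it compresses, roughly the below, with made-up filenames and chunk sizes. one slip and both copies are gone, so probably not:)

    # compress the first 1G, then deallocate those blocks from the source
    dd if=blueleaks.tar bs=1M count=1024 | zstd -19 > blueleaks.part000.zst
    fallocate --punch-hole --offset 0 --length 1G blueleaks.tar
    # ...repeat with dd skip= and fallocate --offset advanced 1G each round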
so there's value in deleting smaller stuff.
currently 109G free. swapfile deletion must have gone through.
oh no i'll need space for the blockchain too
oh no
ummmmmmmm
if I am preserving things, i'd better at least keep a copy of them. some things seem useful for different stuff. a lot of this is just archival. do I need a bigger drive? oh! the pypi folder! oh. hmm.
it would be nice to have a mirror of pypi but i'm not sure I have the space. it seems important to have. how are things so big nowadays?
right now my biggest folder is exconfidential and the biggest file is blueleaks.
unfortunately I don't trust this file to survive and so it's hard for me to delete it. but I could compress it with a trick maybe.
maybe instead of doing all this, i'll just buy a big harddrive, dunno.
I have a disk enclosure I set up for 12v power indoors. I've been using the disks in it with a raspberry pi. i'm a little confused.
to upgrade a raid 1 i'd need two disks that are bigger, or I could get 1 disk that's twice as big, and put the other two into a raid 0.
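(for the two-bigger-disks route the dance would be roughly this, from memory, with made-up device names and assuming an ext filesystem:)

    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
    mdadm /dev/md0 --add /dev/sdd1        # new bigger disk, wait for resync
    # repeat for the other member, then grow the array and the filesystem
    mdadm --grow /dev/md0 --size=max
    resize2fs /dev/md0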
I think I have 1 disk that's twice as big. there's probably stuff on it too, though, hmm.
uhhhhhhhh I already have a blockchain on this disk. it doesn't need that much free space.
whew.
1118 .
I could tar up and zstd smaller folders until I have enough space to zstd larger stuff.
1119
yay i'm tar|zstd'ing some backup that was in directory format. rare ipfs data that got partly corrupt.
1120.
waiting for my tarball to compress. I used zstd --ultra -22 so it's pretty slow. and I forgot to enable multithreading. it doesn't appear to be processing the subfolders in sorted order, so it's hard to tell how long it will take.
I have an appointment in a few hours.
I think i'll restart it and use threading.
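(take two is roughly this, directory name made up:)

    tar -cf - ipfs-backup | zstd --ultra -22 -T8 > ipfs-backup.tar.zst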
1121
1122
I gave it 8 threads and I think I have 4 cores. cpu is 25% idle 25% waiting on i/o. could have used more threads.
my phone keyboard has developed a bug such that funny things are on my screen. I have a hardware keyboard and the button to close the software keyboard has disappeared.
android back button worked.
1125
I tried resuming one of my clone processes and it bailed because it was mid-network-connection when paused. this freed tens of Gs of space when it automatically deleted its clone folder.
1126
oh hey du found my blockchain folder. only 180G. probably something compressible in there.
and I have some backups of corrupt states of other backups.
but I want to make sure not to make important things harder to find, not sure how to do that.
1127
oh I am so crazy, la de da, la de da
1127
I think I can kind of resume some processes now, with space slowly freeing up as I work on this
1128
I guess it would have been silly to allocate more threads when it was already bound by io
1129
hey! I have a tmp folder that's 80G
ohhh that's where the recovery is happening :/
let's compress that stackexchange folder. oh maybe wait for the first compression to finish.
1130
gdb shallow could be buildable now eh
1131
1136
gdb is building, master branch since it's what I had. lots of parallelism since io is so slow.
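(the build itself is nothing fancy, roughly; the job count is just a guess at what an io-bound compile will tolerate:)

    ./configure && make -j16   # way more jobs than cores, they mostly sit in io wait anyway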
whooptydoo.
things will go better when more space is free or when the file archival finishes. but it's nice to be working on the same thing kinda continuously. nicer than other ways things can be for me.
gdb is kinda just a token: it shouldn't be crashing, and the fact that it does shows something is wrong. the real fix is probably debugging bitcoin, even if the symptom shows up elsewhere. maybe an old tag would work. I could also use a binary release, or at least try to.
note: chroots easier than vms for missing glibcs.
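(i.e. something like debootstrapping a release with the right glibc into a folder and chrooting in; suite and path are just examples:)

    sudo debootstrap stable ./stable-root http://deb.debian.org/debian
    sudo chroot ./stable-root /bin/bash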
1139
all the same stuff is happening. maybe i'll do some other things for a bit.