[ot][spam][personal] Behavioral/Rambling Brownian Motion towards Subgoals
K let's figure this filecoin thing out. 5:53 EST. Filecoin's reference client is called "lotus" and it's written in go.

lotus installation: https://docs.filecoin.io/get-started/lotus/installation/ -> it says you can run in the cloud with a $100 credit. I did not receive a credit when I tried this. The user experience involves handling fewer crashes with more ram and disk, which gets expensive quickly. There is a bug in the digitalocean image where it downloads the chain from a web source on every reboot. I submitted information on fixing the bug to their github and am not aware of having received a reply.

5:57a Data storage tutorial: https://docs.filecoin.io/get-started/store-and-retrieve/store-data/ This talks about cli usage but does not mention remote nodes.

Eureka: I think lotus (maybe it was another node app?) has a lightweight mode already built in. Subgoal: see if lotus has a lightweight, remote-use mode. Finding discussion of this is likely to turn up at least one remote-use node.
Lotus lite node: https://docs.filecoin.io/build/lotus/lotus-lite/
- api.chain.love was one of the example hosts
- glif.io and infura.io may provide nodes for pay

These posts now contain the information I need to store data on filecoin. Migrate subgoal-result to subgoal thread.
6:18 am goal: access/use a filecoin wallet on a lotus lite node.

6:19a Concept: I think I have a filecoin wallet on my raspberry pi, not sure. I may have even initiated an upload of something already, not sure. Goal: visit raspberry pi filesystem, look for indication of a filecoin/lotus wallet.

Concept: I may have had a daemon set up for running lotus, with a lite mode custom-configured. Active goal revision: visit raspberry pi filesystem, look for a custom lotus systemd file in the lotus worktree. Pending. 6:21a
6:29a I found my worktree and lotus source folders, but the lotus-lite.service file I made does not appear to be among them. Options:
- look harder
- review terminal logs
- run manually

Let's pursue #3. There is a link to how to run a lotus lite node in the first two threads. Run one using the .love example host. <- active subgoal. 6:31a
6:32:
- verify or build a working lotus binary

6:33 my binary is already built in my lotus worktree

note: api.chain.love

    FULLNODE_API_INFO=/ip4/YOUR_FULL_NODE_IP_ADDRESS/tcp/1234 lotus daemon --lite
    FULLNODE_API_INFO=/ip4/api.chain.love/tcp/1234 lotus daemon --lite

6:36a
The above command gives me:

    ERROR: unknown url scheme ''

Goal: successfully run lotus in lite mode. Review the command help or source to find documentation of the remote node address setting or the error message.
grepping the source for the environment variable shows it should have a url scheme at the end (http). A bug in either their documentation or me.

6:43a adding the url scheme does not prevent the same fatal error message
6:46a it also needs the domain name resolved. This launch string is working for me:

    FULLNODE_API_INFO="/ip4/104.21.75.210/tcp/1234/http" lotus daemon --lite

!
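Putting it together, the working launch amounts to something like this (a sketch; it assumes dig is installed and the host keeps a stable A record):

    # the /ip4/ form wants a literal address, so resolve the hostname first,
    # and keep the trailing /http scheme segment the parser requires
    ADDR=$(dig +short api.chain.love | head -n1)
    FULLNODE_API_INFO="/ip4/${ADDR}/tcp/1234/http" lotus daemon --lite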
Now the same launch gives me:

    ERROR: dial tcp 104.21.75.210:1234: i/o timeout

I'll plan to try another ip and change tasks to renting a full node if it fails too.
7:04a
- It looks like chain.love is down.
- I visited infura.io. Their mobile interface makes it look like they offer a custom REST api or somesuch, but it might be interoperable with lotus lite somehow.
- I visited glif.io. They need email discussion around providing a full node.
- I was previously working on setting up a full node on a cloud server.
- other options exist, like running a local full node

Let's look into infura.io, and also consider planning to spend time setting up local and cloud nodes. 7:06a, the list above was pasted
7:24a At https://github.com/filecoin-project/lotus/issues/6111 it's revealed that infura does not support all operations and is used via:

    export FULLNODE_API_INFO="https://PROJECT_ID:PROJECT_SECRET@filecoin.infura.io"

It also mentions:

    export FULLNODE_API_INFO=PROJECT_ID:PROJECT_SECRET:/ip4/filecoin.infura.io
7:40 My old lotus-lite daemon setup is in /etc/systemd/system ! I found it !
7:41 its api info is wss://api.chain.love
7:42 I tried this manually with FULLNODE_API_INFO and it worked. Guess api.chain.love isn't down.
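The unit is roughly this (reconstructed from memory, so the paths and names are my assumptions):

    # /etc/systemd/system/lotus-lite.service
    [Unit]
    Description=lotus lite node
    After=network-online.target

    [Service]
    Environment=FULLNODE_API_INFO=wss://api.chain.love
    ExecStart=/home/pi/lotus/lotus daemon --lite
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target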
7:46 I updated my lotus-lite daemon to use my infura.io credentials. My lotus binary is reliably crashing shortly after launch. Active goal: investigate lotus-lite failure 7:47
12:40 UTC This is the error message:

    opening backupds: opening log: reading backup part of the log: log entry potentially truncated, set LOTUS_ALLOW_TRUNCATED_LOG=1 to proceed

guess I'll try setting that 8:41a EST
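presumably meaning a launch like this (my guess at where the flag goes):

    LOTUS_ALLOW_TRUNCATED_LOG=1 lotus daemon --lite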
12:43 UTC

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x12624e4]

Null pointer dereference. I'll try making a new .lotus folder while building the latest client from source. 8:45a EST
8:49a My lotus lite daemon is running and I very suddenly need to pee more than anything else in the universe.
9:00a
- I couldn't connect to the lotus daemon connected to the infura node. Logs contained repeated lines like:

    restarting listenHeadChanges
    listen head changes error: listenHeadChanges ChainNotify call failed: RPC Error (-32004): method 'Filecoin.ChainNotify' not supported in this mode (no out channel support)

- I rebooted the lotus daemon to use the chain.love endpoint it was already configured for
- "lotus client list-deals" is giving me:

    ERROR: missing permission to invoke 'ClientListDeals' (need 'write')

I guess a good next step would be to discern whether that error is with my local node and a local wallet, or with the api endpoint. Maybe some reading on the wallet system, unsure. 9:04a
I don't remember what I'm doing or why and I'm inhibited from looking. It's 9:12a. I'd like to put the infura filecoin docs link into the subgoals spamthread.
10:04a The reason for the access error is that my local lotus client is failing to read the token file. Easy to fix. Goal: example download with wallet. 10:05a
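For reference, my understanding (worth double-checking) is that the cli finds the local daemon through two files in the repo directory:

    # read from ~/.lotus (or $LOTUS_PATH) by the client
    cat ~/.lotus/api     # multiaddr of the daemon's rpc endpoint
    cat ~/.lotus/token   # jwt whose payload carries the allowed permissions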
10:06a client output:

    ERROR: websocket: bad handshake

lite node output:

    JWT Verification failed: jwt: HMAC verification failed

10:07a
10:09a inner: "I have no idea how to address this. If I put enough energy in to understand this, I will be totally confused for my appointment. I protect from that happening whenever I can." inner: "we can hold it. We can be okay."
10:18a Error is likely emitted from node/impl/common/common.go:AuthVerify. It is a method of CommonAPI. It passes the token and the api secret to jwt.Verify to extract a payload with an Allow property. jwt.Verify throws the causal error. 10:19a
1:23p the error is produced in go-jsonrpc, which is also made by the filecoin people
1:31p I searched the web for the error message. It seems like it just means the token is wrong. I rebooted the daemon and the token isn't changing on boot. There's a good chance I'm trying to connect with a stale token.
2:31 The immediate error is resolved. I had copied over the server cryptographic material from a previous store, but not the client cryptographic material. New concern: I had previously made a filecoin deal, and it is not listed in the output.
5:46p EST bafk2bzacebi57tmw5oop5tk6uzqlgcgoenrl7r45eiagvgjr2u55ct3lxldek is a file containing lines of "hello world". Deals for storing it are pending with f0410001 and f0167254. It will take a day or two for them to complete. 5:48p EST. I may have written 5:46, maybe not, unsure. The miners I tried store files for a maximum of 539 days, which is a little sad. I think that content can be retrieved by anyone, not just the uploader.
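For the record, the store flow behind those pending deals looks roughly like this (the price and duration arguments here are placeholders, not what I actually typed; duration is counted in 30-second epochs, so 518400 is about 180 days):

    # import the file; lotus prints a data cid like the bafk2... above
    lotus client import hello-world.txt
    # propose a storage deal per miner: dataCid, miner, price (FIL/GiB/epoch), duration
    lotus client deal bafk2bzacebi57tmw5oop5tk6uzqlgcgoenrl7r45eiagvgjr2u55ct3lxldek f0410001 0.0000001 518400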
8:15p I submitted a fix for the segfaults at https://github.com/filecoin-project/lotus/pull/6787 . I don't understand the underlying cause well. I'm guessing it's because some of the dealids were lost; maybe they didn't go through while the node was offline, dunno. One of my deals remains and it has completed. Theoretically the data is online! 8:17p

8:21 when I search for my uploaded file, lotus hangs. It could be from my corrupt repository.
8:22 while I was typing the above, lotus found the uploaded file and said it could retrieve it
8:23 I deleted the data from lotus's local repository and will try to retrieve it from the miner storing it.

9:32 My hello-world.txt retrieved fine, although I am only now actually paying attention to where it goes and looking at it. I tried for some time to figure out how to request it from specific miners using the cli tools: this seemed hard to figure out without using https://filecoin.tools to look up the form of the payload cid involved =/ The find function and retrieve function appear to possibly use the same offer object, so it is easy to do programmatically by attending to whatever matters in the offer object. It takes about 2 minutes to retrieve my helloworld file. I think that is mostly because it is pinging nonresponsive nodes to see if they can send it. A quick workaround would be to stop as soon as a reasonable node is found. But we do want a way to probe nodes for whether they will still send a file. I'll check the actual download, and then see if I can get it with a fresh .lotus dir, I guess.
0548 EST My helloworld file can be downloaded with:

    lotus client retrieve --miner=f0410001 mAVWg5AIgUd/Nluuc/s1epmCzCM4jYr/HnSIAapkx1TvRT2u6xkU helloworld.txt

It cost me up to around 0.01 FIL to provide downloading for an unfamiliar address, not sure why, might have done the math wrong. I wasn't charged for repeated downloads, but I think that might be a quirk of the miner. It hung overnight, waiting for the retrieval deal with a new address to finish. I had a power failure during the night. In the morning, it retrieves rapidly to the new address.

The long sequence is a field marked "Label" in the output of "lotus client get-deal". It's also called the "payload cid" on filecoin.tools (a website), but the term "payload cid" appears to have some ambiguity here, not sure. I think it relates to recording the storage deal on a blockchain. 0555

Another way to get the deal is from the chain state via its deal id of 2210176. 0601 So all you need appears to be the deal id, but it's probably good to include something cryptographic, too. I can review other people's data by changing the deal id:

    lotus state get-deal 2210176

Then data can be retrieved using the Label and Provider fields, passed to lotus client retrieve. 0606 0604
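Gluing those two steps together looks something like this (the json field extraction is my assumption about get-deal's exact output format):

    # hypothetical fetch-by-dealid: pull Label and Provider out of the deal,
    # then hand them to the retrieval command
    deal=$(lotus state get-deal 2210176)
    label=$(printf '%s' "$deal" | grep -o '"Label": *"[^"]*"' | head -n1 | cut -d'"' -f4)
    miner=$(printf '%s' "$deal" | grep -o '"Provider": *"[^"]*"' | head -n1 | cut -d'"' -f4)
    lotus client retrieve --miner="$miner" "$label" out.bin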
0655 I don't know filecoin well, but using chain.love as a lotus-lite endpoint seems to make it hard to discern things. I can't search the chain for messages.

    lotus state miner-info f0410001        # gives an "actor not found" error
    lotus state lookup --reverse f0410001  # gives an "unknown actor code" error

0657

0736 Private keys are long random byte strings. I don't immediately see a way in the cli interface to renegotiate a deal. The approach appears to be to simply redeal the data, even with the same party. I'm holding a goal of pursuing programmatic access to data via e.g. git-annex, rclone, or minio. I think a simple implementation that calls the cli would not be hard. 0737
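Combining the sketches above gives the kind of trivial cli-wrapping tool I mean (fcstore is a hypothetical name; the output parsing and deal numbers are assumptions as before):

    #!/bin/sh
    # hypothetical minimal programmatic-access wrapper over the lotus cli
    set -e
    case "$1" in
    put) # fcstore put <file> <miner>
        cid=$(lotus client import "$2" | awk '{print $NF}')
        lotus client deal "$cid" "$3" 0.0000001 518400 ;;
    get) # fcstore get <dealid> <outfile>
        deal=$(lotus state get-deal "$2")
        label=$(printf '%s' "$deal" | grep -o '"Label": *"[^"]*"' | head -n1 | cut -d'"' -f4)
        miner=$(printf '%s' "$deal" | grep -o '"Provider": *"[^"]*"' | head -n1 | cut -d'"' -f4)
        lotus client retrieve --miner="$miner" "$label" "$3" ;;
    esac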
0740 There is documentation on using filecoin with infura at https://docs.filecoin.io/build/get-started/#infura-api
0744 There is a little current work toward general-purpose programmatic access to data at https://github.com/filswan/fs3-mc I'll probably pursue infura a little. 0746 EST
0749 The infura docs indicate it is not set up for use with the decentralised lotus daemon. But when I asked in the slack, they said that data does not go through the chain.love node when uploaded, so it's reasonable to start backups using it. 0750
0756 Regarding backing up, it will be slow to upload data from a home network to filecoin. I'm thinking of setting up some way of securing it with a hash before it uploads. I wonder if there are existing tools to make a merkle tree out of a binary blob. I think I will make my own, or use libgame's. I'm thinking of streaming data from a disk, hashing every 32GiB as it comes in. This could be done with a script. We'd have a format for the hash output. It could be either a single updating document, or a tree. This would be a hashed but unsigned document. But maybe I could sign it anyway so as to quickly verify it with familiar tools. Alternatively, the various sum tools have a verify mode. They use a detached document. The goal is to reduce the time window during which viruses can mutate data. I may simply be paranoid and confused. In fact, I am usually these two things. 0801

Thinking on a chunk of data coming in. Hash the data and give it a name. Datasource-offset-size-date. That seems reasonable. Then we can make verification files of those files, I suppose ... 0803 ... Datasource-index-size-date. Seems more clear and useful. This is similar to git-annex ... 0804 ...

0806 Here's a quick way to make a secure document hash:

    for sum in /usr/bin/*sum; do $sum --tag $document; done 2> /dev/null

Now, considering filecoin, it would be nice if we also had the deal information. The fields of interest include:
- numeric deal id
- payload cid
- piece cid?
- miner address?
- sender address?
- deal cid? hard to find or use this with the interface, seems most relevant
- block hash?

Deal information can be found with `lotus client get-deal`. 0810

0811 It seems reasonable to provide the DealID and Proposal Label from client get-deal. So. We'll get say a new block of data or something. If the data is in small blocks, we would want to add this information to some hash file. Say the DealID. I guess the deal CID would make more sense in a hash file =S maybe I can ease myself by using some xxxx-looking label. The ProposalCid seems to be the same as the deal CID, unsure. I could also put a deal id into the filename, but then the old hashes don't immediately verify. Thinking a little of a document containing nested documents. It could hash itself, and append further data, and append a hash of the above data ... then we could add deal ids farther down and still have the same canonical previous material. 0817 .

0818 I looked into using gpg --clearsign, but I don't like how the encoding of the pgp data makes it hard to debug corruption. The hash is not readily visible. I'm considering ... 0820, inhibition

Considering normal tools for breaking a file based on boundaries. Nested text files. I want it to be easy to break out the right parts of the files to verify things, when I am confused. I'd like to be able to verify only some of the files and hashes, when I am on a system with limited tools. This means extracting the document's own verification document out, to verify it with. Now thinking it is overcomplicated. Basic goal: easily check that the data I uploaded is the same as what I read from the drive, at the time I read it. So one hash, and then a way to check that hash. We download data from the network, maybe years from now, and check the hash. I guess it's an okay solution. Considering using a directory of files. Then as more documents are added, a hash document can extend in length, or a tarball can be used. Considering hashing only certain lines of files. Some way to generate subfiles from longer files.
Generate all the subfiles, and you can check everything. We could have a filename part that indicates it is lines from another file. Or greps, even. 0826

While I'm making this confused writing, my system is generating 64GiB of random data to check whether filecoin automatically breaks pieces larger than its maximum piece size. I should have used zeros, but it's mostly limited by write speed I expect. It's hit 38GiB so I'll test it. 0827

0828 I told lotus to import the data while it was being generated, hoping it will import truncated data rather than failing. It's silently spinning the CPU.

Regarding a quick hash document, let's see. Import data in chunks to a folder. Hash data into a new file. Hash all hashes into a new file. Maybe that works. Using new files, the system is more clear. How does verification work? We roughly verify files in reverse order of their creation, and we verify only verifiable files. So it's possible a system could be checked with something like

    check $(ls *.checkable | tac) | grep -v missing

where check runs through /usr/bin/*sum or such. 0832 muscle convulsions associated with plans and decisions 0833
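A minimal sketch of what that hypothetical check helper could be, assuming GNU coreutils sum tools (--ignore-missing needs coreutils 8.25 or newer):

    #!/bin/sh
    # hypothetical `check`: try every *sum tool's verify mode over each sum file;
    # a line counts as verified when some tool recognizes and accepts it
    for f in "$@"; do
        for sum in /usr/bin/*sum; do
            "$sum" --check --ignore-missing "$f" 2>/dev/null | grep ': OK$'
        done
    done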
0835 I think it makes sense to work with large images in chunks for now, since they are being shuffled over the network. It would be nice to select randomly or otherwise ordered chunks to upload, which is doable. So. Guess I'll want to test a hashing system. It will need debugging and will become overdeveloped. File comes in. Source-index-size-date.bin 0840 source_index_size_date.bin . Makes the date easier to extract.

0848 confused and spasms. Lotus import finished. Made a sum script, was working on a check script. Goal: see what happens when lotus tries to deal too much data 0850 it's spinning "calculating data size" to start dealing it 0851

0856 I've spent a little time on the check script. lotus is still "calculating data size". More spasms. Goal relates to cksum and sum not supporting bsd-style tagging: skip them somehow without making the script too complex. 0903

It would be nice if the data was summed before writing. A silly habit. 0904 I'm considering that I may need a more powerful system to store this data before I lose it. 0906

0933 I have some basic scripts on my side raid, which is mounted on /media/usb. The scripts are in /img . 0933 Inhibition, distractive behaviors
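One low-complexity way to skip the tools that lack bsd-style tagging, assuming probing /dev/null is harmless for every *sum tool:

    # run only the sum tools that understand --tag
    for sum in /usr/bin/*sum; do
        if "$sum" --tag /dev/null >/dev/null 2>&1; then
            "$sum" --tag "$document"
        fi
    done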
0936 How to write the sum files to make checking easy? So, data comes in. Sum file goes parallel to other data leaves. We could give it the same name if other sums had a name that always sorted after or separately. We could give it a different extension. Like .sum and .sums or such. My autocorrect turned sumsum into sums. Not often I accept what it does. Sometimes it has me capitalise words in the middle of sentences and such. 0940 using .check for single files

So, then we can sum all the .check files into a further file I suppose. 0942 Ok um So Um

    ./check *.checks *.check   # seems to work fine

When making new .checks files, increment a counter by 1 or such. Maybe we can use date. Date works. Ok when making new check files, in theory we only need the last created one, and the one just made. 0946

My scripts could be simplified if one read from stdin instead of a file. 0948 Oops! It seeks to 0 repeatedly. That wasn't the right idea. 0951 Ok the system is busy testing stuff. 1015

1019 Okay, I'm testing a little now. Hashing a lot of data takes ages on a raspberry pi. The current script is written to do all the hashes and move forward when they are done, but the user really would want some string to verify with before it is done. With the current file approach, that could mean a separate file for the first hash, or for each hash. It might be more organised to hash based on lines rather than filenames. Meanwhile, my lotus client finished a task. 1020

    ERROR: generating unsealed CID: Piece is larger than sector.

So, filecoin needs data in 32GiB chunks at the largest. I can delete my 64GiB test files. 1021
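So big images need chunking before import; a sketch with plain split (16GiB is a conservative guess on my part, since padding means a 32GiB sector holds somewhat less than 32GiB of payload):

    # break a large image into pieces small enough to deal individually
    split --bytes=16G --numeric-suffixes=1 disk.img disk.img.part.
    for part in disk.img.part.*; do
        lotus client import "$part"
    done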
1030 I have learned that two processes can concurrently write to the same named pipe. This would probably be good for getting the latest, most full digest of data. I am thinking of having breakfast and may need to write a story part or something on return. Kind of fearfully. 1031

1100 I'm in kitchen making lunchish. Thinking about multiple processes writing to a named pipe a little. Could set an import process going. It could pick sections that don't exist yet and keep importing them. Hash could output hashes on the named pipe. Events of note:
- chunk is completely written to disk and/or first hash of it is available
- chunk is fully hashed
- chunk is imported to lotus and has a lotus-associated hash
- chunk is stored remotely and has a deal id
- user wants a hash to verify state with later

Each of the chunk events adds new important state. If the user were making a stream of hashes, they might want an item for every event. Alternatively the user may want an item just right now. A process could be collecting hashes from the named pipe into a file. To get a state hash, the process could be terminated and its output document hashed. 1105

Okay that's not too complex, but it's hard to consider doing. Look at current work. Not looking atm, but basically the top portion of the script generated content hashes, and the bottom portion added them to ongoing document hashes. That bottom portion would kind of move to a separate process reading from a named pipe. Now, if it finishes on its own, I'd want it to store a hash on its own somewhere. So the events are valuable. But it's irritating to make too many files. Maybe they can be tar'd up or something and untar'd for verification, dunno. 1108

1120 Back at system. Continuing seems difficult.
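A toy sketch of the collector design from the 1100-1108 notes above, assuming event lines stay under PIPE_BUF so concurrent writes land whole:

    # collector: drain the fifo into a running event log
    mkfifo events.fifo
    cat events.fifo >> events.log &

    # hold a write end open so the collector doesn't see EOF between writers
    exec 3> events.fifo

    # any number of writers can share the pipe concurrently
    sha256sum chunk_0.bin > events.fifo & w1=$!
    sha256sum chunk_1.bin > events.fifo & w2=$!
    wait "$w1" "$w2"

    # release the pipe, let the collector flush and exit, then snapshot state
    exec 3>&-
    wait
    sha256sum events.log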
hello Karl, my replies in-line below, clear-signed, per usual :)

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Monday, July 19th, 2021 at 8:20 AM, Karl <gmkarl@gmail.com> wrote:
I'm in kitchen making lunchish. Thinking about multiple processes writing to a named pipe a little.
this is my favorite technique to break mental blocks:
- step away from keyboard
- take a break: food, drink, smoke, etc :P
- resume with new perspective!
Back at system. Continuing seems difficult.
this is understandable. know that even the most capable of humans reaches limits!

you may think you're talking into the ether, but there are listeners... sometimes i think of a problem you're having, and wonder, "did he think of this?" and yes, you did. other times i say, "he should try this!" and your next message, yes you try that. :) and if i wonder, i can always reference.

my current suggestions:

1. keep it simple: i admire the chain mechanisms, but i fear they are all too volatile, complex, fleeting. what about multiple redundant physical copies of small data? data that fits on USB and microSD and things you can geocache?

2. keep on doing what you're doing. as i say: your written record is informative and helpful and humanizing. my promise to you is that i'll use all my powers to persist the cypherpunks email from now forward. (i have some going back a bit, but we'll call this the persistent epoch :)

3. exercise your backups. try restoring your system to another. try check-sum'ing all data in your physical media archive, try locating that geocache and making a copy :) this will help build confidence in your protections.

best regards, and take care of yourself,