[ot][spam] Thank you for your post coderman
oramfs looks pretty cool and is similar to things I'm working on.

---

other yammer:

Regarding filesystems, I'm struggling to find a way to preserve data when corrupt systems damage it and hard drives break frequently. This is very hard for me to solve.

This list still makes no sense to me: no pgp keys, and people say things to me that make little sense (spam filtering out of nowhere?). We need to get onto better channels. I wonder what people are using for discourse with more cryptographic integrity nowadays.

I proposed to the lsl project (used for neuroscience research) that they encrypt and authenticate their biosignal streams. I wasn't sure what system to suggest and suggested hypercore because it offers some small proof of creation after the fact. They were expecting TLS of course, which worries me because it says nothing about archival integrity after decryption. Hypercore wasn't really a good suggestion because it is written in nodejs and lsl is in c++ :-/

Seems go and rust are the future. I looked up go.sum: dependencies, although retrieved from github over the network (a scary way to build an ecosystem), are hashed via sha256 in a way that can be upgraded (reliable, trustworthy). Inspiring. There are multiple facilities in the go dependency system for pulling from offline mirrors instead of github, but they aren't that easy to find. Haven't checked if the commit id of dependencies is used in the hash, or the worktree checkout, or what.

Haven't checked rust's cargo to see what their approach is. When picking a language, I want to make sure there are ways to quickly check where data corruption arises. I always use submodules and subtrees for my dependencies in older languages.

It's cool that go's approach pressures people to mirror github to dev offline. That could accomplish a lot in the world, although it is likely a little limited to things written in go.
I'm including the entire quote of my previous message below, because david was replying to everything I said while quoting only the leftmost email, and didn't reply when I asked about it; I don't know why that is. More text follows at the bottom. On Thu, Jul 1, 2021, 3:30 AM Karl <gmkarl@gmail.com> wrote:
oramfs looks pretty cool and is similar to things I'm working on.
---
other yammer:
Regarding filesystems, I'm struggling to find a way to preserve data when corrupt systems damage it and hard drives break frequently. This is very hard for me to solve.
This list still makes no sense to me: no pgp keys, and people say things to me that make little sense (spam filtering out of nowhere?). We need to get onto better channels. I wonder what people are using for discourse with more cryptographic integrity nowadays.
I proposed to the lsl project (used for neuroscience research) that they encrypt and authenticate their biosignal streams. I wasn't sure what system to suggest and suggested hypercore because it offers some small proof of creation after the fact. They were expecting TLS of course, which worries me because it says nothing about archival integrity after decryption. Hypercore wasn't really a good suggestion because it is written in nodejs and lsl is in c++ :-/
Seems go and rust are the future. I looked up go.sum: dependencies, although retrieved from github over the network (a scary way to build an ecosystem), are hashed via sha256 in a way that can be upgraded (reliable, trustworthy). Inspiring. There are multiple facilities in the go dependency system for pulling from offline mirrors instead of github, but they aren't that easy to find. Haven't checked if the commit id of dependencies is used in the hash, or the worktree checkout, or what.
Haven't checked rust's cargo to see what their approach is. When picking a language, I want to make sure there are ways to quickly check where data corruption arises. I always use submodules and subtrees for my dependencies in older languages.
It's cool that go's approach pressures people to mirror github to dev offline. That could accomplish a lot in the world, although it is likely a little limited to things written in go.
After writing the above I looked into rust a little. Rust stores its crates.io package index in a single git repository with history. Each package's source bundle is hashed with sha256, although it does not look like the format provides for easily upgrading that algorithm. It is very inspiring that the entire package index can be downloaded and used offline to checksum one's dependencies, as a single repository with history. The format is described a little in https://doc.rust-lang.org/cargo/reference/registries.html .

Additionally, executable projects made with rust include sha256sums of their dependencies in their Cargo.lock file, I believe with the intent of providing deterministic rebuilds, though I'm unsure. oramfs's dependency checksums can be seen at https://github.com/kudelskisecurity/oramfs/blob/main/Cargo.lock . Library projects that don't build executables do not include the Cargo.lock file, and hence are a little less trustworthy when shared. The format of the hash is the same as in the index and doesn't provide for smoothly adopting a new digest algorithm like go does.

I'm curious if go has something like rust's single git package index repository, because that's pretty nice. Of course, git isn't to be trusted for binary files until it adopts newhash. These are ascii hashes, not binary data, although technically that means scrubbing the repo to verify that holds, which nobody would remember to do. Git will adopt newhash eventually.
On Thu, Jul 1, 2021, 4:01 AM Karl <gmkarl@gmail.com> wrote:
...
For completion, rust's index repository is at https://github.com/rust-lang/crates.io-index and the current mitm-tip-commit for me is 2e65f91572b118a4552af6f2c83d2c0b73915f0e. Looking on github I didn't quickly see indication that somebody was signing the commits, which is strange.

go also uses a module mirror and checksum database: https://proxy.golang.org/ . An interesting technology is mentioned, called "certificate transparency" and "transparent log": the server is not trusted for integrity. It sounds really interesting. Automatic use of the checksum database, which appears spread under subfolders of https://sum.golang.org/, is only enabled starting with go 1.13.

The mitm-contents of https://sum.golang.org/latest for me right now are roughly this:

go.sum database tree
5846179
ynvWHhPdVJ+uzW3tYDxuPyccZN0KmsJKmy/x6aSglq4=

— sum.golang.org Az3grhYllN53hh2b10cHJvRkyLB/pGehUuEZj5QeNKNHlkqhFwt2zXNgZcK3XuUisNaWOG/GD992XmPCyfPR/4n7cQ0=

I don't immediately see a way to mirror the checksum log, which is saddening, but the go ecosystem is pretty big so it's highly likely somebody has written code to do that.
On Thu, Jul 1, 2021, 4:16 AM Karl <gmkarl@gmail.com> wrote: ...
Certificate Transparency is a great google project providing for a degree of public auditing of CA activity. It uses an append-only merkle tree. The tooling appears pretty complicated and mostly driven by google's go implementations. It's not a lightweight small tool like pgp or git. Go's sumdb uses Trillian, which is a generalisation of the technology behind certificate transparency. An important question is whether there are alternative implementations of trillian.

The mitm-commit-tip of https://github.com/google/trillian-examples for me is 267fb50f0b5571b879ac75fd52a113af1b31c6a0 . In the sumdbaudit/ folder is software in go for producing, auditing, and running a go sumdb mirror:

# Auditor / Cloner for SumDB

This directory contains tools for verifiably creating a local copy of the [Go SumDB](https://blog.golang.org/module-mirror-launch) into a local SQLite database.

* `cli/clone` is a one-shot tool to clone the Log at its current size
* `cli/mirror` is a service which continually clones the Log
* `cli/witness` is an HTTP service that uses a local clone of the Log to provide checkpoint validation for other clients. This is a very lightweight way of providing some Gossip solution to detect split views.

## Background

This is a quick summary of https://blog.golang.org/module-mirror-launch but is not intended to replace this introduction. If you have no context on Go SumDB, read that intro first :-)

Go SumDB is a Verifiable Log based on Trillian, which contains entries of the form:

```
github.com/google/trillian v1.3.11 h1:pPzJPkK06mvXId1LHEAJxIegGgHzzp/FUnycPYfoCMI=
github.com/google/trillian v1.3.11/go.mod h1:0tPraVHrSDkA3BO6vKX67zgLXs6SsOAbHEivX+9mPgw=
```

Every module & version used in the Go ecosystem will have such an entry in this log, and the values are hashes which commit to the state of the repository and its `go.mod` file at that particular version.
Clients can be assured that they have downloaded the same version of a module as everybody else provided all of the following are true:

* The hash of what they have downloaded matches an entry in the SumDB Log
* There is only one entry in the Log for the `module@version`
* Entries in the Log are immutable / the Log is append-only
* Everyone else sees the same Log

## Features

This auditor provides an example for how Log data can be verifiably cloned, and demonstrates how this can be used as a basis to verify its [Claims](https://github.com/google/trillian/blob/master/docs/claimantmodel/). The Claims checked by this clone & audit tool are:

* SumDB Checkpoints/STHs properly commit to all of the data in the Log
* Committed entries are never modified; the Log is append-only
* Each `module@version` appears at most once

In addition to verifying the above Claims, the tool populates a SQLite database with the following tables:

* `leaves`: raw entries from the Log
* `tiles`: tiled subtrees of the Merkle Tree
* `checkpoints`: a history of Log Checkpoints (aka STHs) that have been seen
* `leafMetadata`: parsed data from the `leaves` table

This tool does **not** check any of the following:

* Everyone else sees the same Log: this requires some kind of Gossip protocol for clients and verifiers to share Checkpoints
* That the hashes in the log represent the current state of the repository (the repository could have changed its git tags such that the hashes no longer match, but this is not verified)
* That any `module@version` is "safe" (i.e. no checking for CVEs, etc)

## Running `clone`

The following command will download all entries and store them in the database file provided:

```bash
go run github.com/google/trillian-examples/sumdbaudit/cli/clone -sqlite_file ~/sum.db -alsologtostderr -v=2
```

This will take some time to complete on the first run.
Latency and bandwidth between the auditor and SumDB will be a large factor, but for illustrative purposes this completes in around 4 minutes on a workstation with a good wired connection, and in around 10 minutes on a Raspberry Pi connected over WiFi. Your mileage may vary. At the time of this commit, SumDB contained a little over 1.5M entries which results in a SQLite file of around 650MB.

## Setting up a `mirror` service

These instructions show how to set up a mirror service to run on a Raspberry Pi running a recent version of Raspbian.
:frog: this would be more useful with a client/server database instead of sqlite!
Setup:

```bash
# Build the mirror and install it where it can be executed
go build ./sumdbaudit/cli/mirror
sudo mv mirror /usr/local/bin/sumdbmirror

# Create a user to run the service that has no login
sudo useradd -M sumdb
sudo usermod -L -s /bin/false sumdb

# Create a directory to store the sqlite database
sudo mkdir /var/cache/sumdb
sudo chown sumdb.sumdb /var/cache/sumdb
```

Define the service by creating the file `/etc/systemd/system/sumdbmirror.service` with contents:

```
[Unit]
Description=Go SumDB Mirror
After=network.target

[Service]
Type=simple
User=sumdb
ExecStart=/usr/local/bin/sumdbmirror -sqlite_file /var/cache/sumdb/mirror.db -alsologtostderr -v=1

[Install]
WantedBy=multi-user.target
```

Start the service and check its progress:

```bash
sudo systemctl daemon-reload
sudo systemctl start sumdbmirror

# Follow the latest log messages
journalctl -u sumdbmirror -f
```

When the mirror service is sleeping, you will be able to query the local database at `/var/cache/sumdb/mirror.db` using the example queries in the next section. At the time of writing this setup uses almost 600MB of storage for the database.

If you want to have the `leafMetadata` table populated then you can add an extra argument to the service definition. In the `ExecStart` line above, add `-unpack` and then restart the `sumdbmirror` service (`sudo systemctl daemon-reload && sudo systemctl restart sumdbmirror`). When it next updates tiles this table will be populated. This will use more CPU and around 60% more disk.

## Setting up a `witness` service

:warning: The witness is missing features (outlined below) in order to be used in an untrusted environment. This witness implementation is useful only in a trusted domain where the correct operation of the witness is implicit. This precludes being run as a general service on the Web, but is still useful within a household or organization.

This requires a local clone of the SumDB Log to be available. For this to be of any real value, it should be running against a database which is regularly being updated by the `mirror` service described above.
A client which successfully checks its checkpoints with a witness can ensure that if there is a "split view" of the SumDB Log, then it is on the same side of the split as the witness. If this witness is also verifying the claims of the log, then the client is safe in relying on the data within (providing it trusts the verifier!).

The service can be started with the command (assuming `~/sum.db` is the database):

```bash
go run ./sumdbaudit/cli/witness -listen :8080 -sqlite_file ~/sum.db -v=1 -alsologtostderr
```

This can be set up as a Linux service in much the same way as the `mirror` above. Once running, the server will be available for GET requests at the listen address given as a commandline parameter. Some example requests that can be made:

```bash
# Simply get the latest golden checkpoint
curl -i http://localhost:8080/golden

# Validate that the witness is consistent with the Checkpoint your go build tools are using
curl -i http://localhost:8080/checkConsistency/`base64 -i ~/go/pkg/sumdb/sum.golang.org/latest`

# Validate that the witness is consistent with the latest Checkpoint from the real Log
curl -i http://localhost:8080/checkConsistency/`curl https://sum.golang.org/latest | base64`
```

### Using Docker

The witness can be started along with the mirror using `docker-compose`. The following command will mirror the log and provide a witness on port `8080` when the initial sync completes:

```bash
docker-compose -f sumdbaudit/docker/docker-compose.yml up -d
```

If using a Raspberry Pi, the above command will fail because no suitable MariaDB image can be installed. Instead, use this command to install an image that works:

```bash
docker-compose -f sumdbaudit/docker/docker-compose.yml -f sumdbaudit/docker/docker-compose.rpi.yml up -d
```

### Using a client/server database

The instructions above are for setting this up using sqlite with its storage on the local filesystem.
To set this up using MariaDB, the database can be provisioned by logging into the instance as root user and running the following:

```sql
CREATE DATABASE sumdb;
CREATE USER 'sumdb'@localhost IDENTIFIED BY 'letmein';
GRANT ALL PRIVILEGES ON sumdb.* TO 'sumdb'@localhost;
FLUSH PRIVILEGES;
```

Once set up, change the `sqlite_file` flag above for `mysql_uri` with a connection string like `'sumdb:letmein@tcp(127.0.0.1:3306)/sumdb?parseTime=true'`.

## Querying the database

The number of leaves downloaded can be queried:

```bash
sqlite3 ~/sum.db 'SELECT COUNT(*) FROM leaves;'
```

And the tile hashes at different levels inspected:

```bash
sqlite3 ~/sum.db 'SELECT level, COUNT(*) FROM tiles GROUP BY level;'
```

The modules with the most versions:

```bash
sqlite3 ~/sum.db 'SELECT module, COUNT(*) cnt FROM leafMetadata GROUP BY module ORDER BY cnt DESC LIMIT 10;'
```

## Missing Features

* This only downloads complete tiles, which means that at any point there could be up to 2^height leaves missing from the database. These stragglers should be stored if the root hash checks out.
* Witness should return detailed responses:
  * In the event of an inconsistency, both Checkpoint notes should be serialized and returned
  * Consistency should return a proof that the tree is consistent with the witness's Golden Checkpoint
...
Athens is a tool that will run a local copy of a go development ecosystem, including the sumdb: https://docs.gomods.io/

Even though sumdb is heavyweight, golang projects do store their dependency checksums by default, like rust binary projects, which is more than can be said for most C/C++ projects unless they use subtrees, submodules, distribution libraries, cmake ExternalProject hashes, or some other external dependency system. Nodejs projects support hashing in the package-lock.json file, but it has sometimes become the norm to not include this file in shared code.

There's a lot of discussion around signing cargo packages for rust at https://github.com/rust-lang/crates.io/issues/75 . The conversations there also mention some existing in-use systems, but the issue is still open. The devs didn't want to rely on git's sha-1, refrained from signing the repo, and then many releases happened while there was no velocity on an alternative implementation.

I tried cloning the rust index in termux on my phone:

```
$ git clone --mirror https://github.com/rust-lang/crates.io-index
Cloning into bare repository 'crates.io-index.git'...
remote: Enumerating objects: 2048515, done.
remote: Counting objects: 100% (3869/3869), done.
remote: Compressing objects: 100% (1596/1596), done.
remote: Total 2048515 (delta 2642), reused 3437 (delta 2212), pack-reused 2044646
Receiving objects: 100% (2048515/2048515), 565.69 MiB | 4.85 MiB/s, done.
Resolving deltas: 100% (1414312/1414312), done.
Checking objects: 100% (4194304/4194304), done.
```

It's half a gigabyte ;p. I don't see evidence of signatures but don't really remember how to check.
It looks like the latest tool mentioned at the bottom of that thread is https://github.com/crev-dev/crev :

# Crev - Code REView system that we desperately need

## Implementations

* [cargo-crev: Crev for Rust/cargo](https://github.com/crev-dev/cargo-crev) - ready and working
* [npm-crev: Crev for Node/NPM](https://www.npmjs.com/package/crev) - baby steps
* [pip-crev: Crev for Python/PIP](https://github.com/crev-dev/pip-crev) - still early
* Crev for Julia/Pkg - in plans; ask around on [Crev Matrix channel](https://matrix.to/#/#crev:matrix.org)
* other languages/ecosystems - join [Crev Matrix channel](https://matrix.to/#/#crev:matrix.org), tell us about your interest and find help

## Introduction

You're ultimately responsible for vetting your dependencies. But in a world of NPM/PIP/Cargo/RubyGems - how do you do that? Can you keep up with an ever-changing ecosystem?

Crev is an actual *code review* system as opposed to the typically practiced *code-change review* system. Crev is scalable, distributed, and social. Users publish and circulate results of their reviews: potentially warning about problems, malicious code, or just encouraging high quality by peer review.

Crev allows building a personal web of trust in other people and the code they use and review. Crev [is a][f] [tool][e] [we][d] [desperately][c] [need][b] [yesterday][a]. It protects against compromised dev accounts, intentional malicious code, typosquatting, compromised package registries, or just plain poor quality.

[a]: https://www.csoonline.com/article/3214624/security/malicious-code-in-the-nod...
[b]: https://thenewstack.io/npm-attackers-sneak-a-backdoor-into-node-js-deploymen...
[c]: https://news.ycombinator.com/item?id=17513709
[c]: https://www.theregister.co.uk/2018/11/26/npm_repo_bitcoin_stealer/
[d]: https://www.zdnet.com/article/twelve-malicious-python-libraries-found-and-re...
[e]: https://www.itnews.com.au/news/rubygems-in-recovery-mode-after-site-hack-330...
[f]: https://users.rust-lang.org/t/security-advisory-for-crates-io-2017-09-19/129...

## Vision

We would like Crev to become a general, language, and ecosystem agnostic system for establishing trust in Open Source code. We would like to have frontends integrated with all the major Open Source package managers and ecosystems, and many independent and interoperable tools building on top of it.

## Overview

At its core Crev defines a simple, human-readable data format to communicate trust in code (results of code review) and people (reputation). Using tools implementing Crev, you can generate cryptographically signed artifacts (*Proofs*). Here is an example of a *Package Review Proof* that describes the results of reviewing a whole package (library, crate, etc.):

```
-----BEGIN CREV PACKAGE REVIEW-----
version: -1
date: "2018-12-16T00:09:27.905713993-08:00"
from:
  id-type: crev
  id: 8iUv_SPgsAQ4paabLfs1D9tIptMnuSRZ344_M-6m9RE
  url: "https://github.com/dpc/crev-proofs"
package:
  source: "https://crates.io"
  name: default
  version: 0.1.2
  digest: RtL75KvBdj_Zk42wp2vzNChkT1RDUdLxbWovRvEm1yA
review:
  thoroughness: high
  understanding: high
  rating: positive
comment: "I'm the author, and this crate is trivial"
-----BEGIN CREV PACKAGE REVIEW SIGNATURE-----
QpigffpvOnK7KNdDzQSNRt8bkOFYP_LOLE-vOZ2lu6Je5jvF3t4VZddZDDnPhxaY9zEQurozqTiYAHX8nXz5CQ
-----END CREV PACKAGE REVIEW-----
```

*Proofs* are published and exchanged in a similar way that Open Source code is, for other people to benefit from.

## Fundamental beliefs of Crev design

* Trust is about people and community, not automatic scans, arbitrary metrics, process, or bureaucracy. You can't replace a human judgment with an algorithm. Tools can only help make such a judgment.
* Code quality, risk management, and trust requirements are subjective, contextual, and personal. Islands of Trust must grow organically.
* Not many people can review all their dependencies, but if every user at least skimmed through a couple of them, and shared that information with others, we would be in a much better situation.
* Trust should be spread redundantly between many people, so one compromised or malicious actor can't abuse the system.
* Crev does not have to be perfect. Instead it should be robust, simple, and flexible, so it can evolve to be good enough.

## Further reading

For more concrete information, see [cargo-crev - the first and currently most advanced implementation of Crev](https://github.com/crev-dev/cargo-crev).
This comic is from crev. Transcript-spoiler: A software developer is visiting a fortune teller living in a covered wagon out in the desert away from town.

Fortune Teller, all spooky: "I see your Rust project ..."

The scene zooms in to the inside of the wagon, lit only by light emitted by a crystal ball on a table, and a candle next to the software developer. The fortune teller is peering into the crystal ball, guiding its energy with her hands. The dev's anxiety increases.

Fortune Teller, all spooky: "... I see you ... alone ... reviewing libraries you're using"

The scene moves in closer to the crystal ball. The fortune teller also leans in closer to gaze deeply. The dev looks more tense and worried.

Fortune Teller, looking hard to tease confusing meaning out of the crystal ball: "... which recursively pull in more dependencies ..."

The scene zooms in on the dev. They are sweating, very anxious. They know the fortune teller can discern the integrity of their projects.

Fortune Teller: "Jesus, that's a lot of dependencies."

# cargo-crev
A cryptographically verifiable **c**ode **rev**iew system for the cargo (Rust) package manager.
Figure out how to trust your processes before they figure out how to trust you!