Somebody asked me a question, but because I am far from being an expert, I couldn't answer. Suppose a person wanted to implement a TOR node, simply by buying some box, and plugging it into his modem, and power. And NOT needing to become an expert on TOR, or even on computers in general. And NOT having to follow pages and pages of instructions. I did a few minutes of searching, and even the 'simple' explanations seemed 'clear as mud'. Don't bother with long explanations challenging the usefulness, or trustworthiness of TOR. Yes, we've discussed them to death. That's a different subject. Jim Bell
On Fri, Oct 11, 2019 at 09:05:00PM +0000, jim bell wrote:
> Suppose a person wanted to implement a TOR node, simply by buying some box, and plugging it into his modem, and power... [rest of original question snipped]
- Simple firewalling, as in shut it all down except for Tor.
- Configure Tor as entry node, exit node, or bridge node (quite straightforward).
- If running as an exit node (recommended), the Tor node should have its own public IP address, so other home-network users don't experience the quirks of occasional node banning.
- Configure Tor according to the HW of the box (RAM and network bandwidth).

Be happy. Tor is text-based config. The most time-consuming part is reading the config manual.
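To make the "text based config" point concrete, a minimal non-exit relay torrc along the lines of the steps above might look like this (nickname, contact, and bandwidth values are placeholders, not recommendations; all directives are standard options documented in the tor man page):

```
ORPort 9001                     # accept relay connections from the network
ExitPolicy reject *:*           # entry/middle relay only; no exit traffic
Nickname MyFirstRelay           # placeholder
ContactInfo you@example.com     # placeholder
RelayBandwidthRate 1024 KBytes  # tune to the box's RAM and line speed
RelayBandwidthBurst 2048 KBytes
SocksPort 0                     # relay only; no local client proxy
```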
On Fri, Oct 11, 2019 at 09:05:00PM +0000, jim bell wrote:
> Suppose a person wanted to implement a TOR node, simply by buying some box, and plugging it into his modem, and power... [rest of original question snipped]
On FreeBSD, it's as simple as running the following commands as root:

# install tor
pkg install tor

# set appropriate variables, there aren't too many to get going and
# you can find them all well documented
vi /usr/local/etc/tor/torrc

# update your rc.conf so the service will start at boot, then start it
sysrc tor_enable=YES
service tor start

For an idea of what the torrc file should look like, here is mine with a few bits XXX'd out. My node is specifically configured not to allow exit traffic because it was generating a lot of complaints upstream about my host trying to hack peoples' shit, etc :)

# cat /usr/local/etc/tor/torrc | egrep -v "^$|^#"
SocksPort 9050
SocksPolicy accept 127.0.0.1
SocksPolicy reject *
Log notice file /var/log/tor/notices.log
RunAsDaemon 1
DataDirectory /var/db/tor
ControlPort 9051
HashedControlPassword XXXXXXXXXXXXXX
ORPort 9023
ExitPolicy reject *:*  # too many complaints :)
Nickname twentysevendollars
Address wintermute.synfin.org
OutboundBindAddress 198.154.106.54
RelayBandwidthRate 3265 KBytes  # playing with this
RelayBandwidthBurst 4355 KBytes # ditto
ContactInfo 0CA8B961 John Torman <tor @ synfin dot org>
DirPort 9030 # what port to advertise for directory connections
MyFamily XXXXXXXXXXXXX

If you were doing this on Linux, it would be much the same. Replace the "pkg install" with "apt-get install" or "yum install" or whatever; you might have to add a tor repo or something. The config file probably won't live under /usr/local/etc/tor, but just /etc/tor, and you'll use systemctl rather than just updating the rc.conf with sysrc.

I would not recommend you run an exit node from your home ;)

--
GPG fingerprint: 17FD 615A D20D AFE8 B3E4 C9D2 E324 20BE D47A 78C7
On Friday, October 11, 2019, 02:26:27 PM PDT, John Newman <jnn@synfin.org> wrote: On Fri, Oct 11, 2019 at 09:05:00PM +0000, jim bell wrote:
> [quoted original question and FreeBSD setup instructions snipped]
Yes, even years ago I was aware that a person shouldn't try to run an Exit node on a home setup. Although, I wonder if it has been tried? Sounds like a good beginning for a Wired article? After writing that, I found: https://blog.torproject.org/tips-running-exit-node No way!!!

But you didn't answer my question. I said a simple box, and that is precisely what I meant. Power, Ethernet. Plug into existing modem. Okay, I would understand it if the operator had to link it to the network by accessing a web page and informing them of the new IP address, but that's the level of complexity I was thinking about. (Except for a box that already "knows" how to link up and start running.)

Could one of the problems with the TOR network be that only "experts" are likely to participate?

Also note: I am referring to a situation where a person does not need, and does not want, the benefit of TOR for himself; he just wants to add his "brick in the wall" to the nodes. Has a spare $100 or so for the box, and has unlimited-usage gigabit/second Internet service. (I see that CenturyLink provides them for $65/month, probably subject to tax as well.)

Jim Bell
On Fri, Oct 11, 2019 at 09:53:10PM +0000, jim bell wrote:
On Friday, October 11, 2019, 02:26:27 PM PDT, John Newman <jnn@synfin.org> wrote:
On Fri, Oct 11, 2019 at 09:05:00PM +0000, jim bell wrote:
> [quoted original question and FreeBSD setup instructions snipped]
Yes, even years ago I was aware that a person shouldn't try to run an Exit node on a home setup. Although, I wonder if it has been tried?
That's the only way I run Tor, and here's why: one fundamental premise of Tor "as it stands today" is the principle of "plausible deniability". By not running an exit node, you reduce your plausible deniability.

Depending on what you use Tor for (perhaps researching a book you're writing), you might not particularly need to maximise your plausible deniability when using Tor. But then, you might. Your signal (for a GPA sniffing your activity) to noise (the chaff of exit-node traffic, among other things) ratio goes up when not running an exit node. This may or may not be relevant to your use case.
Sounds like a good beginning for a Wired article? After writing that, I found: https://blog.torproject.org/tips-running-exit-node No way!!!
But you didn't answer my question. I said a simple box, and that is precisely what I meant. Power, Ethernet. Plug into existing Modem. Okay, I would understand it if the operator had to link it to the network by accessing a web page and informing them of the new IP address, but that's the level of complexity I was thinking about. (Except for a box that already "knows" how to link up and start running.)
If you expect to buy a box where someone else installs Tor for you, and you have any need to actually run Tor, you are being either naive or foolish (or both).
Could one of the problems with the TOR network be that only "experts" are likely to participate?
Indeed. If you have no one able to help you install a Tor node (configuring the torrc file and the firewall), then the only possible, and possibly reasonable (for certain limited use cases), way to use Tor is to install and run Tor Browser.
Also note: I am referring to a situation where a person does not need, and does not want, the benefit of TOR for himself; Just wants to add his "brick in the wall" to the nodes. Has a spare $100 or so for the box, and has unlimited-usage gigabit/second Internet service. (I see that Centurylink provides them for $65/month, probably subject to tax, as well.)
Definitely a worthy brick in the wall. But do not fail to configure your Tor node yourself, or through someone you implicitly trust - anything else is unfair to your users and to other users of the network generally. It's not "difficult", as JN highlights above, but if editing a text file and reading the man page is "expert", then yes, it requires such expertise. Good luck,
On October 11, 2019 9:53:10 PM UTC, jim bell <jdb10987@yahoo.com> wrote:
On Friday, October 11, 2019, 02:26:27 PM PDT, John Newman <jnn@synfin.org> wrote:
> [quoted original question, FreeBSD setup instructions, and exit-node discussion snipped]
But you didn't answer my question. I said a simple box, and that is precisely what I meant. Power, Ethernet. Plug into existing Modem. Okay, I would understand it if the operator had to link it to the network by accessing a web page and informing them of the new IP address, but that's the level of complexity I was thinking about. (Except for a box that already "knows" how to link up and start running.) Could one of the problems with the TOR network be that only "experts" are likely to participate? Also note: I am referring to a situation where a person does not need, and does not want, the benefit of TOR for himself; Just wants to add his "brick in the wall" to the nodes. Has a spare $100 or so for the box, and has unlimited-usage gigabit/second Internet service. (I see that Centurylink provides them for $65/month, probably subject to tax, as well.) Jim Bell
What you are describing, if it doesn't already exist, would be trivial to code for Windows (assuming standard tor binaries will run; win10 has fucking WSL or whatever, anyway I'm sure it does) or MacOS or Linux... like the Tor Browser, but even simpler: just a little graphical applet that generates a torrc and starts up the tor daemon. It even makes sure whatever software firewall you are using has the right holes in it ;) I don't know of such an app, but I'm kinda surprised it doesn't exist.
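For what it's worth, the core of such an applet really is small. Here is a minimal sketch (hypothetical function names; assumes a `tor` binary on the PATH for the launch step), just the "generate a torrc and start the daemon" part:

```python
import shutil
import subprocess
import tempfile
from pathlib import Path
from typing import Optional

def make_torrc(nickname: str, or_port: int = 9001, data_dir: str = "tor-data") -> str:
    """Render a minimal non-exit relay torrc (illustrative, not a vetted config)."""
    lines = [
        f"ORPort {or_port}",
        "ExitPolicy reject *:*",   # never run an exit from a home connection
        f"Nickname {nickname}",
        f"DataDirectory {data_dir}",
        "SocksPort 0",             # relay only, no local SOCKS proxy
    ]
    return "\n".join(lines) + "\n"

def launch_relay(nickname: str) -> Optional[subprocess.Popen]:
    """Write the torrc to a scratch dir and start tor, if a tor binary exists."""
    if shutil.which("tor") is None:
        return None  # tor not installed; a real applet would prompt the user
    workdir = Path(tempfile.mkdtemp(prefix="tor-applet-"))
    torrc = workdir / "torrc"
    torrc.write_text(make_torrc(nickname, data_dir=str(workdir / "data")))
    return subprocess.Popen(["tor", "-f", str(torrc)])
```

A real version would also hash a control password, open the firewall hole, and show relay status via the control port, as described above.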
On October 12, 2019 2:11:59 AM UTC, John Newman <jnn@synfin.org> wrote:
On October 11, 2019 9:53:10 PM UTC, jim bell <jdb10987@yahoo.com> wrote:
On Friday, October 11, 2019, 02:26:27 PM PDT, John Newman <jnn@synfin.org> wrote:
> [previous messages quoted in full; snipped]
A more appropriate answer to your question would actually be a pi or some SoC board with a bare-bones Linux or BSD OS and a version of the little Tor wrapper app I described, with a really simple web interface running under e.g. nginx and php (or whatever). Put it in a nice case with an onion stamped on top. And if that's really the only feature you wanted, I guess that's all it would do ;) No one is selling such hardware mass produced.
On 10/11/19, jim bell <jdb10987@yahoo.com> wrote:
> Suppose a person wanted to implement a TOR node, simply by buying some box, and plugging it into his modem, and power... [rest of original question snipped]
Yes, even years ago I was aware that a person shouldn't try to run an Exit node on a home setup. Although, I wonder if it has been tried? Sounds like a good beginning for a Wired article?
Some tor operators do run both non-exit and exit relays, and onion services from their home. They are typically some mix of vanilla clean, with balls, and something to prove. Or reside in a jurisdiction that does not have killcrazy stormtroopers or fake laws.
But you didn't answer my question. I said a simple box, and that is precisely what I meant. Power, Ethernet. Plug into existing Modem. [...] Also note: I am referring to a situation where a person does not need, and does not want, the benefit of TOR for himself; just wants to add his "brick in the wall" to the nodes. [rest snipped]
The absolute minimum effort needed to run tor similar to what you describe is...

  tor --orport auto

That will give the world another non-exit relay to use, therein using whatever bandwidth and cpu it can consume. It assumes the box is directly connected to the internet and not firewalled, as well as other defaults, etc; ymmv, rtfm.

  --socksport 0

will turn off the socks5 proxy on 127.0.0.1:9050 that is otherwise present for your own use locally.
On Fri, 11 Oct 2019 21:05:00 +0000 (UTC) jim bell <jdb10987@yahoo.com> wrote:

> Suppose a person wanted to implement a TOR node, simply by buying some box, and plugging it into his modem, and power. And NOT needing to become an expert on TOR,

there is no such 'box'. IIRC some guy once showed up in the tor mailing list proposing to sell a home router or 'box' with tor preinstalled. Tor criminal cunts like dingledine shot the idea down.

it should be completely and painfully obvious by now that there are no magical 'boxes' that can provide any meaningful security. There is no 'free lunch'. Even if you devote a good deal of effort to learning 'computer security' you can be attacked in many ways. Because, get this, 'technology' is controlled by the enemy. Shocker.
Don't bother with long explanations challenging the usefulness, or trustworthiness of TOR. Yes, we've discussed them to death. That's a different subject. Jim Bell
tor is a honeypot created and run by the US military. The purpose of tor is to entrap people like ross ulbricht who is now rotting in jail thanks to tor. so unless your aim is to get more people to rot in jail, then DO NOT promote tor.
Suppose a person wanted to implement a TOR node, simply by buying some box, and plugging it into his modem, and power. And NOT needing to become an expert on TOR,
There were relay boxes on offer in the past; not sure if still today. There are boxes on offer today providing a local proxy into tor.

People are probably better off learning the basics and the simple setup needed for any network... Tor, I2P, IPFS, CJDNS, whatever... from a local tech meetup group, or old-fashioned time reading some docs and testing, than just blindly buying and plugging in some magic box.

Given the scammy way some of these boxes are advertised, and the lack of any review even for the "legit" ones, etc... reviews could be profitable and valued work for enterprising cpunks to do...
On Sat, Oct 12, 2019 at 04:02:32PM -0400, grarpamp wrote:
Suppose a person wanted to implement a TOR node, simply by buying some box, and plugging it into his modem, and power. And NOT needing to become an expert on TOR,
There were relay boxes on offer in the past, not sure if still today. There are boxes on offer today providing local proxy into tor.
People are probably better off learning the basics and simple setup needed for any network... Tor, I2P, IPFS, CJDNS, whatever... from a local tech meetup group or old fashioned time reading some docs and testing, than just blindly buying and plugging some magic box.
Absolutely. I know people for whom "install, set up and configure Tor" is a gulf of consciousness away from "even slightly probable in this lifetime". So a classic answer is "befriend a trustworthy geek" - and try to establish that your meat space geek shares at least one of your principles.
Given the scammy way some of these boxes are advertised, the lack some even "legit" ones have of any review, etc... reviews could be profitable and valued for enterprising cpunks to do...
First rule of computer security: wipe and install from scratch, crossing fingers in hope your hardware is not own3d - a chance which, as Juan rightly points out, unfortunately approaches "zero".
------- Original Message -------
On Saturday, October 12, 2019 8:02 PM, grarpamp <grarpamp@gmail.com> wrote:
... There were relay boxes on offer in the past, not sure if still today. There are boxes on offer today providing local proxy into tor.
most of these were horrible; some were outright broken (e.g. trivial proxy bypass vulns).

some years back i helped write a proposal for an easy to use Tor enforcing router; this would rely on a "Tor Director" application to make setup and administration easy and idiomatic for the platform users were accustomed to. the main drawback with this approach is bespoke manufacture; it would be interesting to revisit this approach with rpi4 or another plentiful commodity platform.

last but not least, the entire concept of a "transparent Tor proxy" is flawed; you must have application level protections against de-anonymization attacks! (die in a fire, anonabox :)

i can't seem to find a mirror of the old proposal. see attached instead...

best regards,
On Saturday, October 12, 2019, 06:51:52 PM PDT, coderman <coderman@protonmail.com> wrote:

------- Original Message -------
On Saturday, October 12, 2019 8:02 PM, grarpamp <grarpamp@gmail.com> wrote:
... There were relay boxes on offer in the past, not sure if still today. There are boxes on offer today providing local proxy into tor.
most of these were horrible; some were outright broken (e.g. trivial proxy bypass vulns) some years back i helped write a proposal for an easy to use Tor enforcing router; this would rely on a "Tor Director" application to make setup and administration easy and idiomatic for the platform users were accustomed to. main drawback with this approach is bespoke manufacture; it would be interesting to revisit this approach with rpi4 or other plentiful commodity platform.
I should clarify: I'm not advocating TOR itself. I'm advocating a networked anonymization system, at least vaguely like TOR, but with the additional features that have been talked about for many years. Say, with automatic chaff generation, arbitrarily-long hops (256 hops? 65,536 hops? An even larger power-of-2 hops?), etc. Actually IMPLEMENTED and running, not merely talked about. (This talks about improvements to TOR: https://www.theregister.co.uk/2017/11/03/tor_ravamp/ I'm more interested in "proposed improvements that have been ignored for years".)

Who do we blame for not having this? Well, we can start by blaming the designers and implementors of TOR, and the people who fund it and thus impede it from improving. That's a good start. But why don't we also blame all those people who claim they hate TOR, or at least how it's run? After all, they've had two decades to implement an improvement.

Now that we have general-purpose single-board computers, like the Raspberry Pi, I figure the main difficulty is designing the software, building 1000+ units, and finding 1000+ volunteers to host one. If such a thing existed, I would probably host one, too. Hey, it's work, but I think we agree it should be done.

I just found this, written 2 years ago: https://www.raspberrypi.org/magpi/tor-router/ Or, from last year: https://www.linux.com/news/turn-your-raspberry-pi-tor-relay-node/

Why not implement an entirely new anonymization network?

Jim Bell
On Sun, 13 Oct 2019 07:54:53 +0000 (UTC) jim bell <jdb10987@yahoo.com> wrote:
Why not implement an entirely new anonymization network?
You should talk to roger ver and convince him of funding/promoting such a thing. Have him put his money where his mouth is. I say "you" because you have serious cypherpunks credentials so he should at least listen to you. As to 'entirely new', it seems to me that a high latency mixing network (which is not a 'new' design) is desirable. Such a network should allow people to communicate using non-real-time messages, instead of allowing them to browse jewtube. Low latency/real time networks and communications seem a lot harder to secure.
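The batching behaviour of such a high-latency mix can be sketched in a few lines. This is a toy threshold mix for illustration only; real designs (Mixmaster/Mixminion style) add pool retention across rounds, padding, and dummy traffic:

```python
import secrets

class ThresholdMix:
    """Toy mix node: buffer messages, then flush them in random order once
    a threshold is reached, unlinking arrival order from departure order."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.pool: list[bytes] = []

    def submit(self, msg: bytes) -> list[bytes]:
        self.pool.append(msg)
        if len(self.pool) < self.threshold:
            return []  # hold everything until the pool fills (the latency)
        batch, self.pool = self.pool, []
        # Fisher-Yates shuffle with a CSPRNG so output order leaks nothing
        for i in range(len(batch) - 1, 0, -1):
            j = secrets.randbelow(i + 1)
            batch[i], batch[j] = batch[j], batch[i]
        return batch
```

Non-real-time messaging tolerates exactly this kind of hold-and-flush delay, which is why high latency is easier to secure than interactive browsing.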
My comments inline:

On Sunday, October 13, 2019, 02:02:06 PM PDT, Punk <punks@tfwno.gf> wrote:

On Sun, 13 Oct 2019 07:54:53 +0000 (UTC) jim bell <jdb10987@yahoo.com> wrote:
> > Why not implement an entirely new anonymization network?
>
> You should talk to roger ver and convince him of funding/promoting such a thing. Have him put his money where his mouth is. I say "you" because you have serious cypherpunks credentials so he should at least listen to you.
Okay, sounds like an excellent idea. I will do that. But let's flesh out some of the numbers and practices. Shouldn't take more than a few hours, or at most a couple days, to give everybody an input.

This https://www.amazon.com/CanaKit-Raspberry-4GB-Basic-Starter/dp/B07VYC6S56/ref=sr_1_5?keywords=raspberry+pi+4&qid=1571002803&sr=8-5 appears to be a representative sample of a Raspberry Pi 4 board, in kit form, 4 gigabytes of RAM (I guess they must mean SD card, right, and not ordinary SRAM or DRAM? SD wears out, right?), with cables and a clear plastic box. $85 in quantity one. What discounts there will be in quantity 1000, I do not know. (I'm not choosing this particular one, necessarily, just using it as what appears to be a representative sample of the concept.)

Can we agree that a 1,000-unit quantity will be a good initial "critical mass" for this project? TOR is currently larger, https://metrics.torproject.org/networksize.html but 1000 is still a good start.

While hypothetically node operators might receive some sort of subsidy (in full or in part) for their internet-service cost, it's also plausible that their Internet payment will be their "skin in the game", their contribution to the project. CenturyLink offers 1 gigabit/second service for $65 plus tax. The speed itself is only one part of the issue. I think there is no data limit for their 1 gigabit service; their slower services may have a 1 terabyte/month limit.
As to 'entirely new', it seems to me that a high latency mixing network (which is not a 'new' design) is desirable. Such a network should allow people to communicate using non-real-time messages, instead of allowing them to browse jewtube. Low latency/real time networks and communications seem a lot harder to secure."
What I'm thinking of is a programmable-latency network, say anything from 1 to 256 hops. Although, it would be hard to imagine needing more than 16, I suppose. This is a list of proposed 'improvements' to TOR. https://blog.torproject.org/tor-design-proposals-how-we-make-changes-our-pro... No doubt SOMEWHERE there is a list of 'proposed improvements that we know the TOR structure will never agree to because they will be considered 'too good' '. Shouldn't we use those, too? Especially those! Jim Bell
comments below, ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Sunday, October 13, 2019 10:15 PM, jim bell <jdb10987@yahoo.com> wrote: ...
This https://www.amazon.com/CanaKit-Raspberry-4GB-Basic-Starter/dp/B07VYC6S56/ref=sr_1_5?keywords=raspberry+pi+4&qid=1571002803&sr=8-5 appears to be a representative sample of a Raspberry Pi 4 board, in kit form, 4 gigabyte of RAM (I guess they must mean SDCard, right, and not ordinary SRAM or DRAM? SD wears out, right?), with cables, a clear plastic box. $85 in quantity one.
there is indeed 4G of LPDDR4 SDRAM on board. you will want to include a small fan to avoid throttling while under heavy use. (ah, the kit you link includes a fan - excellent!)
While hypothetically node operators might receive some sort of subsidy (in full or in part) for their internet-service cost, it's also plausible that their Internet payment will be their "skin in the game", their contribution to the project. Centurylink offers 1 gigabit/second service for $65 plus tax. The speed itself is only one part of the issue. I think there is no data limit for their 1 gigabit service; their slower services may have a 1 terabyte/month limit.
i want to suggest NOT running a Tor node on a residential line. be advised that your service limit is NOT your monthly bandwidth limit! (i have gigabit symmetric, but can only use 1TB/month before incurring serious overage charges...) consumer internet is also prone to "TCP RST" traffic management (e.g. to fight torrent looking traffic) which interrupts circuits, and some ISPs even mangle DNS, which can get your relay marked as "BAD". see also: https://trac.torproject.org/projects/tor/wiki/TorRelayGuide#Partone:deciding... "It is required that a Tor relay be allowed to use a minimum of 100 GByte of outbound traffic"
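coderman's cap warning can be sanity-checked with a little arithmetic. A minimal sketch (the cap and month length are illustrative assumptions; ISPs count decimal terabytes):

```python
# Back-of-the-envelope: what sustained rate exhausts a 1 TB/month cap?
SECONDS_PER_MONTH = 30 * 24 * 3600        # ~2.6 million seconds
CAP_BYTES = 1_000_000_000_000             # 1 TB (decimal), as ISPs count it

def sustained_rate_for_cap(cap_bytes=CAP_BYTES, seconds=SECONDS_PER_MONTH):
    """Average rate, in bits/second, that exactly exhausts the cap."""
    return cap_bytes / seconds * 8

print(f"{sustained_rate_for_cap() / 1e6:.2f} Mbit/s")   # ~3.09 Mbit/s
```

so a relay averaging just ~3 Mbit/s around the clock fills a 1 TB cap, no matter how fast the line's headline speed is.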
This is a list of proposed 'improvements' to TOR. https://blog.torproject.org/tor-design-proposals-how-we-make-changes-our-pro... No doubt SOMEWHERE there is a list of 'proposed improvements that we know the TOR structure will never agree to because they will be considered 'too good' '. Shouldn't we use those, too? Especially those!
Tor has a situation where they must keep compatibility with the existing network, or introduce partitioning attacks and compromise the anonymity of their users. this is actually a hard problem - i think the future is in running parallel overlays, and routing application level services over the best overlay for the given purpose at that time. for a slew of research beyond Tor, see: https://www.freehaven.net/anonbib/ discussing the promising avenues is a subject for another thread... :) best regards,
On October 13, 2019 10:32:16 PM UTC, coderman <coderman@protonmail.com> wrote:
comments below,
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Sunday, October 13, 2019 10:15 PM, jim bell <jdb10987@yahoo.com> wrote: ...
This https://www.amazon.com/CanaKit-Raspberry-4GB-Basic-Starter/dp/B07VYC6S56/ref=sr_1_5?keywords=raspberry+pi+4&qid=1571002803&sr=8-5 appears to be a representative sample of a Raspberry Pi 4 board, in kit form, 4 gigabyte of RAM (I guess they must mean SDCard, right, and not ordinary SRAM or DRAM? SD wears out, right?), with cables, a clear plastic box. $85 in quantity one.
there is indeed 4G of LPDDR4 SDRAM on board. you will want to include a small fan to avoid throttling while under heavy use. (ah, the kit you link includes a fan - excellent!)
It would of course need an sd card for the OS install. There are probably cheaper SoC offerings that are fast enough, but the rpi4 is a good choice, easy to work with.
While hypothetically node operators might receive some sort of subsidy (in full or in part) for their internet-service cost, it's also plausible that their Internet payment will be their "skin in the game", their contribution to the project. Centurylink offers 1 gigabit/second service for $65 plus tax. The speed itself is only one part of the issue. I think there is no data limit for their 1 gigabit service; their slower services may have a 1 terabyte/month limit.
i want to suggest NOT running a Tor node on a residential line. be advised that your service limit is NOT your monthly bandwidth limit! (i have gigabit symmetric, but can only use 1TB/month before incurring serious overage charges...) consumer internet is also prone to "TCP RST" traffic management (e.g. to fight torrent looking traffic) which interrupts circuits, and some ISPs even mangle DNS, which can get your relay marked as "BAD".
I tend to agree.. if run from residential connections it would make more sense to not try to use a huge amount of bandwidth, and running an exit node will almost certainly bring trouble. Then again, different providers offer and impose different levels of scrutiny. Whatever the end product is it should be fully configurable, but at a bare minimum the user should be able to control all the bandwidth settings and whether or not it's an exit node in a simple fashion. VPS' are dirt cheap, and can be spun up with Linux or FreeBSD and tor very quickly and easily. Not quite meeting Jim's requirements, but it's really not that high a burden of technical knowledge to do, for those truly interested...
see also: https://trac.torproject.org/projects/tor/wiki/TorRelayGuide#Partone:deciding... "It is required that a Tor relay be allowed to use a minimum of 100 GByte of outbound traffic"
This is a list of proposed 'improvements' to TOR. https://blog.torproject.org/tor-design-proposals-how-we-make-changes-our-pro... No doubt SOMEWHERE there is a list of 'proposed improvements that we know the TOR structure will never agree to because they will be considered 'too good' '. Shouldn't we use those, too? Especially those!
Tor has a situation where they must keep compatibility with the existing network, or introduce partitioning attacks and compromise the anonymity of their users.
this is actually a hard problem - i think the future is in running parallel overlays, and routing application level services over the best overlay for the given purpose at that time.
for a slew of research beyond Tor, see: https://www.freehaven.net/anonbib/
discussing the promising avenues is a subject for another thread... :)
best regards,
On Sunday, October 13, 2019, 03:32:24 PM PDT, coderman <coderman@protonmail.com> wrote: comments below, ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Sunday, October 13, 2019 10:15 PM, jim bell <jdb10987@yahoo.com> wrote: ...
This https://www.amazon.com/CanaKit-Raspberry-4GB-Basic-Starter/dp/B07VYC6S56/ref=sr_1_5?keywords=raspberry+pi+4&qid=1571002803&sr=8-5 appears to be a representative sample of a Raspberry Pi 4 board, in kit form, 4 gigabyte of RAM (I guess they must mean SDCard, right, and not ordinary SRAM or DRAM? SD wears out, right?), with cables, a clear plastic box. $85 in quantity one.
While hypothetically node operators might receive some sort of subsidy (in full or in part) for their internet-service cost, it's also plausible that their Internet payment will be their "skin in the game", their contribution to the project. Centurylink offers 1 gigabit/second service for $65 plus tax. The speed itself is only one part of the issue. I think there is no data limit for their 1 gigabit service; their slower services may have a 1 terabyte/month limit.
i want to suggest NOT running a Tor node on a residential line. be advised that your service limit is NOT your monthly bandwidth limit! (i have gigabit symmetric, but can only use 1TB/month before incurring serious overage charges...) consumer internet is also prone to "TCP RST" traffic management (e.g. to fight torrent looking traffic) which interrupts circuits, and some ISPs even mangle DNS, which can get your relay marked as "BAD".
see also: https://trac.torproject.org/projects/tor/wiki/TorRelayGuide#Partone:deciding... "It is required that a Tor relay be allowed to use a minimum of 100 GByte of outbound traffic"
I accessed the Centurylink website, and I found this material on data limit exceptions: https://www.centurylink.com/asset/aboutus/downloads/legal/internet-service-d...

"CenturyLink Excessive Use Policy Frequently Asked Questions

What is the CenturyLink Excessive Use Policy (EUP) and how does it apply to me? CenturyLink residential High-Speed Internet (HSI) customers are subject to the CenturyLink EUP that sets a 1.0 terabyte (TB) monthly limit on the amount of data a customer sends and receives over their HSI connection, subject to certain exemptions.

What customers are excluded from the CenturyLink EUP? CenturyLink's EUP does not impact the following customers: Business HSI Customers; Prism TV Customers; 1 Gigabit customers; Customers with subsidized HSI service for low-income households.

What is included in my usage? All of the data received by your modem/gateway (downloaded) and sent from your modem/gateway (uploaded) will be counted toward your data limit.

Why does CenturyLink have data usage limits and how much data usage is included in my CenturyLink HSI service? Data usage limits encourage reasonable use of your CenturyLink HSI service so that all customers can receive the optimal Internet experience they have purchased with their service plan. CenturyLink includes 1.0 TB of data usage each month with all residential HSI plans."

If I interpret the above correctly, 1-gigabit (actually, 940 Mbps) customers are not affected by the 1.0 terabyte/month limit. I highlighted by coloring in RED the reference to "1 Gigabit customers", above. But, it's possible that Centurylink's policies vary state by state, or region by region. Jim Bell
On Sun, 13 Oct 2019 22:15:58 +0000 (UTC) jim bell <jdb10987@yahoo.com> wrote:
...let's flesh out some of the numbers and practices. Shouldn't take more than a few hours or at most a couple days, to give everybody an input. This appears to be a representative sample of a Raspberry Pi 4 board, in kit form, 4 gigabyte of RAM (I guess they must mean SDCard, right, and not ordinary SRAM or DRAM?
as coderman said, that's the pi's main RAM memory. So yeah, those ARM 'systems on a chip' are quite capable. They have 4 cores running at ~1.2gcps and tons of ram.
SD wears out, right?), with cables, a clear plastic box. $85 in quantity one.
the previous model with 'only' 1gb of RAM, same processor is $35 or less. (you need to add a sd card, power supply and case) ...so the hardware is quite cheap. The question is, of course, to what degree is it safe? The rpi for instance is designed in the english shithole by people working for the amerikan mafia known as broadcom. The rpi's main processor is a broadcom processor (not the quadcore ARM), running closed source firmware written by the raspberry 'foundation'. there are other systems that are not as bad as the rpi - at least you won't be running GCHQ-NSA firmware directly. (some people were working on an open source firmware but I don't think they got it to work)
Can we agree that 1,000 quantity will be a good initial "critical mass" for this project?
A thousand independent node operators isn't a small number.
tor is currently larger, https://metrics.torproject.org/networksize.html but 1000 is still a good start.
yeah, you have to take into account for instance what % of those nodes is owned by the NSA, GCHQ, FSB, stasi, whatever the chinese agency is called, samsung, hitachi, etc etc etc etc etc. but wait, is your network partially client/server like tor, or is it a fully decentralized peer to peer network? (freenetproject.org)
While hypothetically node operators might receive some sort of subsidy (in full or in part) for their internet-service cost, it's also plausible that their Internet payment will be their "skin in the game", their contribution to the project. Centurylink offers 1 gigabit/second service for $65 plus tax. The speed itself is only one part of the issue. I think there is no data limit for their 1 gigabit service; their slower services may have a 1 terabyte/month limit.
I don't know about bandwidth costs, but they obv. depend on how your network works. So discussing those costs before having some idea about what kind of capacity/traffic/padding/architecture etc the system will have seems kinda backwards.
As to 'entirely new', it seems to me that a high latency mixing network (which is not a 'new' design) is desirable. Such a network should allow people to communicate using non-real-time messages, instead of allowing them to browse jewtube. Low latency/real time networks and communications seem a lot harder to secure."
What I'm thinking of is a programmable-latency network, say anything from 1 to 256 hops. Although, it would be hard to imagine needing more than 16, I suppose.
some variables : *) number of mixers/nodes a message goes through *) all clients and nodes are exchanging fixed size packets all the time (chaff) *) there are no clients - it's a peer to peer network
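The "fixed size packets" variable above can be sketched as a simple padding scheme. A toy illustration (cell size, header layout, and function names are all assumptions; real chaff must also be link-encrypted so padding cells are indistinguishable from real ones on the wire):

```python
import os

CELL_SIZE = 512          # fixed wire size for every cell (illustrative assumption)
HEADER = 2               # 2-byte big-endian length prefix inside each cell

def to_cells(payload: bytes) -> list:
    """Split a payload into cells; every cell is exactly CELL_SIZE bytes."""
    body = CELL_SIZE - HEADER
    cells = []
    for i in range(0, max(len(payload), 1), body):
        chunk = payload[i:i + body]
        cells.append(len(chunk).to_bytes(HEADER, "big") + chunk.ljust(body, b"\x00"))
    return cells

def chaff_cell() -> bytes:
    """An all-padding cell: length 0, random filler, same size as a real cell."""
    return (0).to_bytes(HEADER, "big") + os.urandom(CELL_SIZE - HEADER)

def from_cells(cells: list) -> bytes:
    """Receiver side: strip padding using each cell's length prefix."""
    return b"".join(c[HEADER:HEADER + int.from_bytes(c[:HEADER], "big")] for c in cells)
```

an observer counting bytes sees only a stream of identical 512-byte cells, whether the link is carrying a message or nothing at all.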
This is a list of proposed 'improvements' to TOR. https://blog.torproject.org/tor-design-proposals-how-we-make-changes-our-pro... No doubt SOMEWHERE there is a list of 'proposed improvements that we know the TOR structure will never agree to because they will be considered 'too good' '. Shouldn't we use those, too? Especially those!
so the best pentagon criminals like tor's syverson have been 'working' on this for a while and there are tons of 'literature' - some of their stuff is here https://www.freehaven.net/papers.html notice that cypherpunks(...) like adam back(now blockstream CEO, google funded) a guy called goldberg and others have been/are involved with tor to varying degrees. Furthermore, adam back was subscribed to this list. His last message https://lists.cpunks.org/pipermail/cypherpunks/2015-June/053438.html anyway, you Jim could try to get some ideas or/and help from back. Ver for marketing and funding and back for technical assistance may be a good combination. Here are some other datapoints : https://maidsafe.net/ those ppl have allegedly been working on teh problem...since forever. And they've gotten nowhere. They have even launched their own shitcoin/financial scam. https://coinmarketcap.com/currencies/maidsafecoin/ And they are not the only ones who want to add economic incentives to 'file sharing'. The idea seems like a good one to me, but it doesn't seem to work. https://storj.io/ "Decentralized Cloud Storage" https://tron.network/ "TRON is an ambitious project dedicated to building the infrastructure for a truly decentralized Internet." there prolly are a few more like that. bottom line : there's a fair number of variables to take into account...
Jim Bell
Jim Bell's comments inline: On Tuesday, October 15, 2019, 11:23:53 AM PDT, Punk <punks@tfwno.gf> wrote: On Sun, 13 Oct 2019 22:15:58 +0000 (UTC) jim bell <jdb10987@yahoo.com> wrote:
...let's flesh out some of the numbers and practices. Shouldn't take more than a few hours or at most a couple days, to give everybody an input. This appears to be a representative sample of a Raspberry Pi 4 board, in kit form, 4 gigabyte of RAM (I guess they must mean SDCard, right, and not ordinary SRAM or DRAM?
as coderman said, that's the pi's main RAM memory. So yeah, those ARM 'systems on a chip' are quite capable. They have 4 cores running at ~1.2gcps and tons of ram. _I_ remember when an Intel 8048 was called a "computer on a chip"!!!
SD wears out, right?), with cables, a clear plastic box. $85 in quantity one.
the previous model with 'only' 1gb or RAM, same processor is $35 or less. (you need to add a sd card, power supply and case)
How much main memory would be useful for a transfer node to use? > ...so the hardware is quite cheap. The question is, of course, to what degree is it safe? The rpi for instance is designed in the english shithole by people working for the amerikan mafia known as broadcom. The rpi's main processor is a broadcom processor (not the quadcore ARM), running closed source firmware written by the raspberry 'foundation'.
there are other systems that are not as bad as the rpi - at least you won't be running GCHQ-NSA firmware directly. (some people were working on an open source firmware but I don't think they got it to work)
I agree that this is a matter that needs to be discussed. But no doubt you've heard of the saying, 'the perfect being the enemy of the good'.
Can we agree that 1,000 quantity will be a good initial "critical mass" for this project?
A thousand independent node operators isn't a small number.
tor is currently larger, https://metrics.torproject.org/networksize.html but 1000 is still a good start.
> yeah, you have to take into account for instance what % of those nodes is owned by the NSA, GCHQ, FSB, stasi, whatever the chinese agency is called, samsung, hitachi, etc etc etc etc etc. > but wait, is your network partially client/server like tor, or is it a fully decentralized peer to peer network? (freenetproject.org) First, I'm not looking for it to be thought of as "my" network, although maybe I will be credited with some initiative for giving the project a kick. The person whose network it is publicly known as might end up being the person who initially funds it, and agrees to have his name attached to the project as sponsor. And no, I'm not qualified to answer your second comment. I don't consider myself a "software person", never have been. This is yet another issue 'we' will have to work out.
While hypothetically node operators might receive some sort of subsidy (in full or in part) for their internet-service cost, it's also plausible that their Internet payment will be their "skin in the game", their contribution to the project. Centurylink offers 1 gigabit/second service for $65 plus tax. The speed itself is only one part of the issue. I think there is no data limit for their 1 gigabit service; their slower services may have a 1 terabyte/month limit.
I don't know about bandwidth costs, but they obv. depend on how your network works. So discussing those costs before having some idea about what kind of capacity/traffic/padding/architecture etc the system will have seems kinda backwards.
The reason I initially referred to "1 gigabit" service for nodes is that I was, and still am, under the impression that current Centurylink policy exempts them from their "excessive use" policy. I suspect that computers of this level (Raspberry Pi 4) won't be able to throughput more than a few tens of megabits of (processed) data, if that, so Internet rate won't likely be a bottleneck. But a data cap could easily become a limiting factor, especially if the network implements heavy chaff.
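How quickly heavy chaff eats into a data cap is easy to estimate. A minimal sketch (the 100 kB/s chaff rate is an illustrative assumption, not a proposed figure):

```python
def monthly_gb(rate_bytes_per_s: float, days: int = 30) -> float:
    """Data consumed by an always-on constant-rate link, in decimal GB."""
    return rate_bytes_per_s * days * 24 * 3600 / 1e9

# A constant 100 kB/s chaff stream, running 24/7:
print(f"{monthly_gb(100_000):.0f} GB/month")   # ~259 GB/month
```

a quarter of a 1 TB cap gone before any real traffic is carried, which is why the cap, not the line speed, is the number that matters.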
As to 'entirely new', it seems to me that a high latency mixing network (which is not a 'new' design) is desirable. Such a network should allow people to communicate using non-real-time messages, instead of allowing them to browse jewtube. Low latency/real time networks and communications seem a lot harder to secure."
What I'm thinking of is a programmable-latency network, say anything from 1 to 256 hops. Although, it would be hard to imagine needing more than 16, I suppose.
> some variables :
*) number of mixers/nodes a message goes through
Yes, I'm thinking that a user should be able to decide, for any individual message, how many nodes it will go through. He will still have a latency issue to deal with, but at least that tradeoff question will be decided by HIM, not the entire network as a group. > *) all clients and nodes are exchanging fixed size packets all the time (chaff) I consider chaff essential to increase the difficulty of tracing messages, especially when traffic is low. > *) there are no clients - it's a peer to peer network
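A per-message hop choice amounts to the sender wrapping the message in one layer per chosen hop. A toy sketch of that layering (XOR stands in for real per-hop public-key encryption, and all names are hypothetical - this only shows the nesting, not a secure construction):

```python
import json, base64

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Placeholder only -- NOT cryptography; stands in for per-hop encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

toy_decrypt = toy_encrypt  # XOR is its own inverse

def wrap(message: bytes, route: list) -> bytes:
    """Sender picks len(route) hops per message; innermost layer is the message."""
    blob = message
    for name, key in reversed(route):
        layer = json.dumps({"hop": name,
                            "blob": base64.b64encode(blob).decode()}).encode()
        blob = toy_encrypt(key, layer)
    return blob

def unwrap_one(key: bytes, blob: bytes):
    """Each hop peels exactly one layer and forwards the inner blob."""
    layer = json.loads(toy_decrypt(key, blob))
    return layer["hop"], base64.b64decode(layer["blob"])
```

each relay learns only its own layer, and the sender alone decides how deep the nesting goes for any given message.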
This is a list of proposed 'improvements' to TOR. https://blog.torproject.org/tor-design-proposals-how-we-make-changes-our-pro... No doubt SOMEWHERE there is a list of 'proposed improvements that we know the TOR structure will never agree to because they will be considered 'too good' '. Shouldn't we use those, too? Especially those!
> so the best pentagon criminals like tor's syverson have been 'working' on this for a while and there are tons of 'literature' - some of their stuff is here > The Free Haven Project > notice that cypherpunks(...) like adam back(now blockstream CEO, google funded) a guy called goldberg and others have been/are involved with tor to varying degrees. Furthermore, adam back was subscribed to this list. His last message
[Bitcoin-development] questions about bitcoin-XT code fork & non-consensus hard-fork
> anyway, you Jim could try to get some ideas or/and help from back. Ver for marketing and funding and back for technical assistance may be a good combination. I hope that if the proposal is technically sound, financing won't be a problem. My idea of a target amount of initial subsidy for setting up one node (ignoring software-development costs) should be about $50. Myself, I'd like to charge about $30 for the hardware kit; the quantity-1 cost would be $90 or so, but I don't yet have an estimate of the cost if the materials are purchased in 1000+ quantity.
Here are some other datapoints :
> those ppl have allegedly been working on teh problem...since forever. And they've gotten nowhere. They have even launched their own shitcoin/financial scam. I see lots of fine words on their website. But they haven't accomplished much? > MaidSafeCoin (MAID) price, charts, market cap, and other metrics | CoinMarketCap
And they are not the only ones who want to add economic incentives to 'file sharing'. The idea seems like a good one to me, but it doesn't seem to work.
If it were truly easy to attach an 18-terabyte HD to each node, that would make it a really interesting proposition... This, for $140 more... > Decentralized Cloud Storage — Storj "Decentralized Cloud Storage"
TRON Foundation:Capture the future slipping away "TRON is an ambitious project dedicated to building the infrastructure for a truly decentralized Internet."
there prolly are a few more like that.
> bottom line : there's a fair number of variables to take into account... True, <sigh>, quite true. Jim Bell
I just received a quotation: https://www.amazon.com/CanaKit-Raspberry-4GB-Basic-Starter/dp/B07VYC6S56/ref=sr_1_5?keywords=raspberry+pi+4&qid=1571002803&sr=8-5 And the answer is: "I have checked on this pricing and this particular pricing would USD 71.99 per unit and with Free Shipping with MOQ of 500 PCS. The items are in stock and delivery would normally be in about 3-4 business days to WA." ------------------ end of quote---------------------------- Presumably, the USB memory will be a few dollars extra. So, who volunteers to write the software? Jim Bell
On Tue, Oct 15, 2019 at 09:06:15PM +0000, jim bell wrote:
tor is currently larger, https://metrics.torproject.org/networksize.html but 1000 is still a good start.
Of course 1000 is a good start. But a (fundamentally) deficient network stack, is downing a 4L cask before the guests arrive and without the catering having been ordered - there's no point partying before the party is ready, bus is booked, sleeping bags available, catering on the way. :: Each step is only potentially an Ace. :: Play your Ace too soon, and you blow that Ace for little/ no benefit.
On Tue, 15 Oct 2019 20:21:03 +0000 (UTC) jim bell <jdb10987@yahoo.com> wrote:
> some variables :
*) number of mixers/nodes a message goes through
Yes, I'm thinking that a user should be able to decide, for any individual message, how many nodes it will go through. He will still have a latency issue to deal with, but at least that tradeoff question will be decided by HIM, not the entire network as a group.
As a side note, even the pentagon's 'onion router' aka Tor allows you to choose the number of hops in your paths/circuits, but increasing it is pointless because traffic gets correlated when it enters and leaves the network.
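The per-message hop count idea above can be sketched in a few lines. Everything here is a toy: the node names are hypothetical, and the layered encryption each hop would peel off is stubbed out with plain dict nesting.

```python
import random

def build_route(nodes, hop_count):
    """Pick `hop_count` distinct relay nodes at random for one message.

    The hop count is the per-message, user-chosen knob discussed above
    (the latency vs. anonymity trade-off decided by HIM, not the network).
    The node list stands in for whatever directory mechanism is used.
    """
    if not 1 <= hop_count <= len(nodes):
        raise ValueError("hop count must be between 1 and the number of known nodes")
    return random.sample(nodes, hop_count)

def wrap_onion(payload, route):
    """Nest the payload once per hop, innermost layer last in the route.

    Dict nesting is a stand-in for encryption; a real design would wrap
    with each hop's public key so every relay strips exactly one layer.
    """
    for node in reversed(route):
        payload = {"next_hop": node, "inner": payload}
    return payload
```

Note this sketch says nothing about the correlation problem punk raises next: more hops alone don't help if traffic is correlated where it enters and leaves.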
> *) all clients and nodes are exchanging fixed size packets all the time (chaff)
I consider chaff essential to increase the difficulty of tracing messages, especially when >traffic is low.
ok, so that's actually one of the most fundamental requirements - maybe the most fundamental one. The connection between user and 'network' HAS to have a fixed rate. Let's check the archive...

From: "Wei Dai" <weidai@eskimo.com>
Date: Fri, 27 Jan 95 00:00:01 PST

"Imagine a server that allows you to open a low bandwidth (let's say around 100 cps, in order to reduce costs) link-encrypted telnet session with it, and provides you with a number of services, for example a link-encrypted talk session with another user. You'll need to maintain the link 24 hours a day to defend against statistical analysis, and of course you can chain a number of these servers together in a way similiar to chaining remailers. This scheme seems to provide untracibility while getting around the latency cost problem of remailers, thus allowing users to talk to each other in real time, anonymously."

So that's it Jim. Users have to be connected 24/7 using a constant rate link. Today it can be more than 100 bytes/s.

'bootstrapping' such a system is then not a matter of paying for some number of 'nodes', but of promoting the software.

Keywords to search for in the original cpunks archive: Pipe-Net, "Latency Costs of Anonymity", Wei Dai, link encryption.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Thursday, October 17, 2019 10:31 PM, Punk <punks@tfwno.gf> wrote:
... ok, so that's actually one of, or the most fundamental requirement. The connection between user and 'network' HAS to have a fixed rate. Let's check the archive... ...
So that's it Jim. Users have to be connected 24/7 using a constant rate link. Today it can be more than 100 bytes/s
one idea is to use something akin to reliable multicast groups, where you gradually increase your bandwidth according to some defined strata of bandwidth, and affirmative control notification is required to increase your bandwidth (number of concurrent strata). this is not TCP friendly, but it would support multiple levels of bandwidth in such a system. this doesn't eliminate traffic analysis (like true link padding) but it does muddy the waters into partitions which are much larger than (1). another benefit would be to use that padding traffic with application layer awareness of bulk transport. e.g. ability to say "send this, but no rush..." vs. interactive traffic. last but not least, you could apply the padding traffic to key pre-distribution or opportunistic protocol maintenance. e.g. distributing routing and node identity information. (the "directory") best regards,
On Thu, Oct 17, 2019 at 11:11:41PM +0000, coderman wrote:
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Thursday, October 17, 2019 10:31 PM, Punk <punks@tfwno.gf> wrote:
... ok, so that's actually one of, or the most fundamental requirement. The connection between user and 'network' HAS to have a fixed rate. Let's check the archive... ...
So that's it Jim. Users have to be connected 24/7 using a constant rate link. Today it can be more than 100 bytes/s
one idea is to use something akin to reliable multicast groups, where you gradually increase your bandwidth according to some defined strata of bandwidth, and affirmative control notification is required to increase your bandwidth (number of concurrent strata).
There are various improvements, but the most basic operational mode is simple in design and ought be straightforward to implement - never has been, yet.
this is not TCP friendly,
The protocol should not be TCP, but UDP. Then apps can use UDP, or TCP, or any protocol layered above UDP; TCP precludes - or rather, introduces latency and other problems for - many apps/protocols that rely on something lower level than TCP.
but it would support multiple levels of bandwidth in such a system. this doesn't eliminate traffic analysis (like true link padding) but it does muddy the waters into partitions which are much larger than (1).
another benefit would be to use that padding traffic with application layer awareness of bulk transport. e.g. ability to say "send this, but no rush..." vs. interactive traffic.
last but not least, you could apply the padding traffic to key pre-distribution or opportunistic protocol maintenance. e.g. distributing routing and node identity information. (the "directory")
Indeed. Lots of improvements possible.

Anecdote: Back about 3 years ago when I first ran a Tor exit node at home (on a ~1 MiB/s ADSL), I would sometimes SSH into the box from another location and forward VNC for a virtual desktop, really just to monitor the Tor node. Pretty consistently, within about 10 minutes, the SSH connection would die with some SSH error, so I'd reconnect and watch some more, then it would die again. It appeared evident to me that SSH had some bug that was being exploited to, at the very least, kill SSH connections, presumably with some packet injection or modification (presumably after monitoring the connection for a bit). That, of course, was entirely disconcerting.

Since then there's been at least one SSH bug finally disclosed/ fixed; though I can't find the one that stood out to me as commensurate with my experience, the following may be of anecdotal interest:

Fixing The New OpenSSH Roaming Bug
https://www.upguard.com/blog/fixing-the-new-openssh-roaming-bug
... The flaw involves the accidental inclusion of experimental client-side roaming support in the OpenSSH client, despite being disabled on the server-side years ago. This feature essentially enables users to resume broken SSH connections. Unfortunately, a maliciously configured server can exploit a bug in the client and capture its memory contents, including any private encryption keys used for SSH connections.

Cisco's warning: Patch now, critical SSH flaw affects Nexus 9000 fabric switches
https://www.zdnet.com/article/ciscos-warning-patch-now-critical-ssh-flaw-aff...
May 2, 2019 -- 11:12 GMT (21:12 AEST)
The company disclosed the bug on Tuesday and has given it a severity rating of 9.8 out of 10. ...

https://nakedsecurity.sophos.com/2018/08/23/vulnerability-in-openssh-for-two...

Serious SSH bug lets crooks log in just by asking nicely…
https://nakedsecurity.sophos.com/2018/10/17/serious-ssh-bug-lets-crooks-log-...
Big, bad, scary bug of the moment is CVE-2018-10933.
This is a serious flaw – in fact, it's a very serious flaw – in a free software library called libssh. The flaw is more than just serious – it's scary, because it theoretically allows anyone to log into a server protected with libssh without entering a password at all. It's scary because ssh, or SSH as it is often written, is probably the most widely deployed remote access protocol in the world. Almost all Unix and Linux servers use SSH for remote administration, and there are an awful lot of awfully large server farms out there, and so there's an awful lot of SSH about.

...

By far the most commonly used SSH version out there is an open source product called OpenSSH, created and maintained by the security-conscious folks at OpenBSD. OpenSSH is a completely separate implementation to libssh – they don't include or rely on each other's code. Other well-known open source implementations of SSH include Dropbear (a stripped down version commonly used on routers and other IoT devices), libssh2 (it's a different product to libssh, not merely a newer version) and PuTTY (widely used on Windows). None of these projects have this bug either, so most of us can stand down from red alert. ...
ok, so that's actually one of, or the most fundamental requirement. The connection between user and 'network' HAS to have a fixed rate.
Assuming "fixed rate" means "always filled to said rate", not "fillable up to said rate"... then that makes every user's node look nicely busy. And if the rate is the same for all users, then every user looks the same. However, all nodes in the net need to be always filled to some rate. Otherwise an adversary vampire can just watch the nodes the end user is connected to, or perturb the user's packet stream, or wait until the user unluckily routes across quiet middle nodes, etc.
last but not least, you could apply the padding traffic to key pre-distribution or opportunistic protocol maintenance. e.g. distributing routing and node identity information. (the "directory")
If pad fill can be used to carry something, better than to waste it.
On Wed, Oct 23, 2019 at 05:15:57AM -0400, grarpamp wrote:
ok, so that's actually one of, or the most fundamental requirement. The connection between user and 'network' HAS to have a fixed rate.
Assuming "fixed rate" means "always filled to said rate" not "fillable up to said rate"... then that makes every users node look nicely busy.
Ack. "Chaff fill" has become overloaded. Let's try Link Metrics Normalization or LMN (or something better if someone speaks up soon):

1. packets per time unit normalization
2. packet transmission latency/jitter normalization
3. packet size normalization (this one's easy)
And if the rate is the same for all users, then every user looks the same.
Ideal operating mode. Practical (as in acceptable to users) operation probably requires as coderman suggested earlier, to allow steady stepping upwards/ downwards over time (by config only of course), to provide for the impatient bittorrent and youtuber crowd. There is no bw cap that will be accepted by all, probably not even by a majority.
However all nodes in the net need to be always filled to some rates.
Ack. I imagine a network ping (to friend/ connected nodes) on say a 10 minute interval, which from memory was only about 2.1MiB per month, would be an acceptable base load for everyone, and that many will accept higher base load than this.
Otherwise adversary vampire can just watch the nodes end user is connected to, or perturb the users packet stream, or wait until user unluckily routes across quiet middle nodes, etc.
Ack. Gov stalkers gonna stalk.

One limit case to consider: all direct (first hop) p2p/f2f links are always, and only ever, 1KiB/s (say). You want more bw, you add more separate links, and the disappearing act is handled by stepping up, maintaining that rate for some period of time (presumably longer than actually needed), before eventually stepping down (removing links).

And the point, in relation to "unluckily route across quiet (stalking) middle node" - some application of multi-path:
- 10 trusted friends through whom I hop into the net
- 1 dark net server supporting multi path, from which I download the latest Adobe Photoshop cr24c/7
- 10 separate routes to 10 separate "darknet server access point nodes"
- if 1 link gets killed in the middle, my corresponding friend node keeps chaff filling to my node regardless, and I can attempt to create with him a new route;
- also, the other 9 links continue to hum along
last but not least, you could apply the padding traffic to key pre-distribution or opportunistic protocol maintenance. e.g. distributing routing and node identity information. (the "directory")
If pad fill can be used to carry something, better than to waste it.
https://en.wikipedia.org/wiki/Zero_Knowledge_Systems notice the ian goldberg is a current accomplice of the tor mafia.
2005 Low-Cost Traffic Analysis of Tor
https://www.freehaven.net/anonbib/cache/torta05.pdf

"By making these assumptions, the designers of Tor believe it is safe to employ only minimal mixing of the stream cells... ...This choice of threat model, with its limitation of the adversaries’ powers, has been a subject of controversy... ...Tor, on the other hand assumes a much weaker threat model.. ...we show that even relatively weak adversaries can perform traffic-analysis, and get vital information out of Tor. This means that even non-law-enforcement agencies can significantly degrade the quality of anonymity that Tor provides, to the level of protection provided by a collection of simple proxy servers, or even below."

-------

my comment : the attack is based on monitoring the latency of a node while sending an attacker-controlled stream through it.

"Tor exhibits the worst possible behaviour: not enough interference to destroy individual stream characteristics, yet enough to allow the remote measurement of the node’s load."

Maybe some tor fanboi knows if this has been somehow fixed? Anyway the article makes it clear that simple cover traffic is not enough to defend against timing attacks.
On Sat, Oct 26, 2019 at 04:53:02PM -0300, Punk - Stasi 2.0 wrote:
2005 Low-Cost Traffic Analysis of Tor https://www.freehaven.net/anonbib/cache/torta05.pdf
Thank you. Have to read this.
"By making these assumptions, the designers of Tor believe it is safe to employ only minimal mixing of the stream cells...
...This choice of threat model, with its limitation of the adversaries’ powers, has been a subject of controversy...
...Tor, on the other hand assumes a much weaker threat model..
...we show that even relatively weak adversaries can perform traffic-analysis, and get vital information out of Tor. This means that even non-law-enforcement agencies can significantly degrade the quality of anonymity that Tor provides, to the level of protection provided by a collection of simple proxy servers, or even below."
-------
my comment : the attack is based on monitoring the latency of a node while sending an attacker controlled stream through it
"Tor exhibits the worst possible behaviour: not enough interference to destroy individual stream characteristics, yet enough to allow the remote measurement of the node’s load."
Maybe some tor fanboi knows if this has been somehow fixed?
The real question is whether it's possible to fix.
Anyway the article makes it clear that simple cover traffic is not enough to defend against timing attacks.
Packet size, bandwidth used, as well as packet transmission latency, each need to be normalized.

And any time an attacker can suspend your network stream briefly, there's a blip that will propagate through the network - so, of course, if the attacker is sending a stream through your node, and your ISP/Gov suspended your connection to your ISP for say 200ms, then the attacker will see a subsequent gap in his stream being sent via your node, thus identifying you as their target. Splitting streams and having only micro (low b/w) streams doesn't help - the attacker is only going to send one stream through you, of course.

A dark alt net can handle outgoing temp suspends - just send streams through your 'dark' non-govnet hop, to some other node who can onforward the incoming streams or requests for outgoing (if I'm, say, a web server) - but this does not fix the attacker's incoming stream being suspended, whereby you don't have any of the attacker's packets to send to the attacker during the suspension window, and the attacker sees the latency spike and identifies you.

Mandating higher latency per node requires (significantly) larger packet queues, and quickly ramps up overall end to end latency. Let's say we buffer 500ms, since that forces attackers to suspend links for over 500ms to identify target nodes, making their network node bisections more noticeable to end users: 7 hops * 500ms latency per hop = 3.5s - and that's a basic minimal-length end to end route from end user node to dark net server node; 10 hops = 5 seconds. And 500ms may not be enough! Perhaps we should buffer up for a second or more?

Attackers such as government stalkers, who have widespread control over ISP and backbone routers, will bisect their target sets, reducing these sets (of end user nodes interesting to them) as much as possible, before doing say a binary bisection using the above latency injection analysis technique (and other techniques).
here's another article with some interesting info.

Freedom Systems 2.1 Security Issues and Analysis
https://www.freehaven.net/anonbib/cache/freedom21-security.pdf

'freedom' was the name of the network run by 'zero knowledge systems' - as noted, ian goldberg was part of zks and now works for tor. Adam back was also involved. It seems to me that when the company failed some (most?) ppl went from working in the 'private' sector to working for the govt.

"someone who is watching the network links can see that you are logging into the Freedom Network by watching the packets. They can’t tell what you’re doing, but can see that you are logged in, and by counting packets and seeing how long you’re online, may be able to make certain assumptions. (Counting and timing packets is possible today since traffic shaping and link padding do not offer strong security as implemented."

"In the current version of the protocol there is no link padding, cover traffic or traffic shaping. It might be argued that one at minimum needs some of these countermeasures to defend against traffic analysis, but our initial analysis suggests that these countermeasures are probably necessary, but certainly not sufficient. This is because even if one does implement a combination of these countermeasures there remain a number of attacks, not significantly harder than attacking a system without these countermeasures. The main example is the packet round-trip timing related attacks, where the attacker passively observes or actively (and plausibly deniably) induces latency variations to uniquely identify the source of a route. These remaining attacks are expensive in bandwidth utilization to defend against, and the counter measures greatly hinder performance. Consider that to defend against timing attacks, even as a first step one would need to start by padding round-trip times to get cover, reducing all round-trip times to worst case round-trip."
On Sat, Oct 26, 2019 at 10:30:36PM -0300, Punk - Stasi 2.0 wrote:
here's another article with some interesting info.
Freedom Systems 2.1 Security Issues and Analysis https://www.freehaven.net/anonbib/cache/freedom21-security.pdf
'freedom' was the name of the network run by 'zero knowledge systems' - as noted, ian goldberg was part of zks and now works for tor. Adam back was also involved. It seems to me that when the company failed some (most?) ppl went from working in the 'private' sector to working for the govt.
"someone who is watching the network links can see that you are logging into the Freedom Network by watching the packets. They can’t tell what you’re doing, but can see that you are logged in, and by counting packets and seeing how long you’re online, may be able to make certain assumptions. (Counting and timing packets is possible today since traffic shaping and link padding do not offer strong security as implemented."
"In the current version of the protocol there is no link padding, cover traffic or traffic shaping. It might be argued that one at minimum needs some of these countermeasures to defend against traffic analysis, but our initial analysis suggests that these countermeasures are probably necessary, but certainly not sufficient. This is because even if one does implement a combination of these countermeasures there remain a number of attacks, not significantly harder than attacking a system without these countermeasures. The main example is the packet round-trip timing related attacks, where the attacker passively observes or actively (and plausibly deniably) induces latency variations to uniquely identify the source of a route. These remaining attacks are expensive in bandwidth utilization to defend against, and the counter measures greatly hinder performance. Consider that to defend against timing attacks, even as a first step one would need to start by padding round-trip times to get cover, reducing all round-trip times to worst case round-trip."
Yes, that's the same conclusion. You install/ set up one or more dark links, or you are exposed to active latency injection attacks.

Given this fact, is it still worth pursuing the software side of any overlay net? For many use cases an overlay net appears to provide benefits - the usage stats of Tor certainly suggest there is not insignificant demand for as much, and all high latency low b/w apps appear to "obviously benefit", since active latency injection attacks must inject latency "in the order of" your particular local ping circle's latency config - if your ping is 2 hours, and your first hop is always to nodes who are actual friends and therefore maintain their own fixed rate links, your own node going down for an hour or a day says nothing about anyone else, or about who you connect to through your friend's node, other than that your own node went down. [absolutism warning, but this one feels sound]
" Our first modest suggestion is thus that existing mix networks for general internet use should simply be abandoned for other than research purposes. They should continue to be studied for there inherent interest. And, they should be used for applications where it is possible to manage and measure the sets of distinct users and anonymity providers and the probability distributions on their behaviors, voting being the clearest example. But for general internet use, they are overkill against almost every adversary except unrealistic ones like the GPA or incredibly strong ones like The Man. And, because of usability and incentive limitations, in practice they do not scale enough to protect against The Man anyway. "

So much bullshit in a single paragraph:

1) "continue to be studied for there inherent interest" THERE? Lawl, syverson doesn't know how to spell possessive pronouns like THEIR.

2) "adversaries...except unrealistic ones like the GPA" - how is an adversary that can see a lot of traffic 'unrealistic', when in actual FUCKING REALITY that kind of adversary does exist?

3) Ok, so scum-master syverson then half admits that mixnets should be used against 'the man'... except that... "because of usability and incentive limitations, in practice they do not scale enough to protect against The Man" - so what scum-master is saying is that since the pentagon and the rest of americunt govcorp (syverson's handlers) are not promoting mixnets, there are 'limitations' to them. Go figure! People dont use mixnets because the pentagon doesn't want people to use mixnets.
source for quoted paragraph in previous message is again https://www.freehaven.net/anonbib/cache/entropist.pdf
I hope people haven't forgotten about the idea for making an alternate anonymization system. The hardware requirements almost write themselves. Yes, there was some discussion about the software issues. Could/did somebody write a proposal of the functions and features of this system? Any volunteers on programming it? Jim Bell

On Tuesday, October 15, 2019, 01:21:31 PM PDT, jim bell <jdb10987@yahoo.com> wrote:
Jim Bell's comments inline:
On Tuesday, October 15, 2019, 11:23:53 AM PDT, Punk <punks@tfwno.gf> wrote:
On Sun, 13 Oct 2019 22:15:58 +0000 (UTC) jim bell <jdb10987@yahoo.com> wrote:
...let's flesh out some of the numbers and practices. Shouldn't take more than a few hours, or at most a couple of days, to give everybody an input. This appears to be a representative sample of a Raspberry Pi 4 board, in kit form, 4 gigabytes of RAM (I guess they must mean SDCard, right, and not ordinary SRAM or DRAM?
as coderman said, that's the pi's main RAM memory. So yeah, those ARM 'systems on a chip' are quite capable. They have 4 cores running at ~1.2gcps and tons of ram. _I_ remember when an Intel 8048 was called a "computer on a chip"!!!
SD wears out, right?), with cables, a clear plastic box. $85 in quantity one.
the previous model with 'only' 1GB of RAM, same processor, is $35 or less. (you need to add an SD card, power supply and case)
How much main memory would be useful for a transfer node to use?

> ...so the hardware is quite cheap.

The question is, of course, to what degree is it safe? The rpi for instance is designed in the english shithole by people working for the amerikan mafia known as broadcom. The rpi's main processor is a broadcom processor (not the quadcore ARM), running closed source firmware written by the raspberry 'foundation'.
there are other systems that are not as bad as the rpi - at least you won't be running GCHQ-NSA firmware directly. (some people were working on an open source firmware but I don't think they got it to work)
I agree that this is a matter that needs to be discussed. But no doubt you've heard of the saying, 'the perfect being the enemy of the good'.
Can we agree that 1,000 quantity will be a good initial "critical mass" for this project?
A thousand independent node operators isn't a small number.
tor is currently larger, https://metrics.torproject.org/networksize.html but 1000 is still a good start.
> yeah, you have to take into account for instance what % of those nodes is owned by the NSA, GCHQ, FSB, stasi, whatever the chinese agency is called, samsung, hitachi, etc etc etc etc etc.
> but wait, is your network partially client/server like tor, or is it a fully decentralized peer to peer network? (freenetproject.org)

First, I'm not looking for it to be thought of as "my" network, although maybe I will be credited with some initiative for giving the project a kick. The person whose network it is publicly known as might end up being the person who initially funds it, and agrees to have his name attached to the project as sponsor. And no, I'm not qualified to answer your second comment. I don't consider myself a "software person", never have been. This is yet another issue 'we' will have to work out.
While hypothetically node operators might receive some sort of subsidy (in full or in part) for their internet-service cost, it's also plausible that their Internet payment will be their "skin in the game", their contribution to the project. Centurylink offers 1 gigabit/second service for $65 plus tax. The speed itself is only one part of the issue. I think there is no data limit for their 1 gigabit service; their slower services may have a 1 terabyte/month limit.
I don't know about bandwidth costs, but they obv. depend on how your network works. So discussing those costs before having some idea about what kind of capacity/traffic/padding/architecture etc the system will have seems kinda backwards.
The reason I initially referred to "1 gigabit" service for nodes is that I was, and still am, under the impression that current Centurylink policy exempts that tier from their "excessive use" policy. I suspect that computers of this level (Raspberry Pi 4) won't be able to throughput more than a few tens of megabits of (processed) data, if that, so Internet rate won't likely be a bottleneck. But a data cap could easily become a limiting factor, especially if the network implements heavy chaff.
As to 'entirely new', it seems to me that a high latency mixing network (which is not a 'new' design) is desirable. Such a network should allow people to communicate using non-real-time messages, instead of allowing them to browse jewtube. Low latency/real time networks and communications seem a lot harder to secure.
What I'm thinking of is a programmable-latency network, say anything from 1 to 256 hops. Although, it would be hard to imagine needing more than 16, I suppose.
> some variables :
*) number of mixers/nodes a message goes through
Yes, I'm thinking that a user should be able to decide, for any individual message, how many nodes it will go through. He will still have a latency issue to deal with, but at least that tradeoff question will be decided by HIM, not the entire network as a group.

> *) all clients and nodes are exchanging fixed size packets all the time (chaff)

I consider chaff essential to increase the difficulty of tracing messages, especially when traffic is low.

> *) there are no clients - it's a peer to peer network
This is a list of proposed 'improvements' to TOR. https://blog.torproject.org/tor-design-proposals-how-we-make-changes-our-pro... No doubt SOMEWHERE there is a list of 'proposed improvements that we know the TOR structure will never agree to because they will be considered 'too good' '. Shouldn't we use those, too? Especially those!
> so the best pentagon criminals like tor's syverson have been 'working' on this for a while and there are tons of 'literature' - some of their stuff is here > The Free Haven Project > notice that cypherpunks(...) like adam back(now blockstream CEO, google funded) a guy called goldberg and others have been/are involved with tor to varying degrees. Furthermore, adam back was subscribed to this list. His last message
[Bitcoin-development] questions about bitcoin-XT code fork & non-consensus hard-fork
> anyway, you Jim could try to get some ideas and/or help from back. Ver for marketing and funding and back for technical assistance may be a good combination.

I hope that if the proposal is technically sound, financing won't be a problem. My idea of a target amount of initial subsidy for setting up one node (ignoring software-development costs) should be about $50: Myself, I'd like to charge about $30 for the hardware kit; the quantity-1 cost would be $90 or so, but I don't yet have an estimate for materials purchased in 1000+ quantity.
Here are some other datapoints :
...
It turns out that in two months, I will have the opportunity to announce this project at a convention. I will be happy to do so if it appears that there will be sufficient progress in the next two months. A fairly firm commitment by someone to write the software would be an excellent start. And, this announcement MAY lead to some financing of the project. The main question, other than the financing, is the programming of the software. Has there been any progress on this matter? Jim Bell

On Monday, December 9, 2019, 11:39:10 AM PST, jim bell <jdb10987@yahoo.com> wrote:
I hope people haven't forgotten about the idea for making an alternate anonymization system. The hardware requirements almost write themselves. Yes, there was some discussion about the software issues. Could/did somebody write a proposal of the functions and features of this system? Any volunteers on programming it? Jim Bell

On Tuesday, October 15, 2019, 01:21:31 PM PDT, jim bell <jdb10987@yahoo.com> wrote:
Jim Bell's comments inline:
On Tuesday, October 15, 2019, 11:23:53 AM PDT, Punk <punks@tfwno.gf> wrote:
On Sun, 13 Oct 2019 22:15:58 +0000 (UTC) jim bell <jdb10987@yahoo.com> wrote:
...
While hypothetically node operators might receive some sort of subsidy (in full or in part) for their internet-service cost, it's also plausible that their Internet payment will be their "skin in the game", their contribution to the project. Centurylink offers 1 gigabit/second service for $65 plus tax. The speed itself is only one part of the issue. I think there is no data limit for their 1 gigabit service; their slower services may have a 1 terabyte/month limit.
I don't know about bandwidth costs, but they obv. depend on how your network works. So discussing those costs before having some idea about what kind of capacity/traffic/padding/architecture etc the system will have seems kinda backwards.
The reason I initially referred to "1 gigabit" service for nodes is that I was, and still am, under the impression that current Centurylink policy exempts them from their "excessive use" policy. I suspect that computers of this level (Raspbery pi 4) won't be able to throughput more than a few tens of megabits of (processed) data, if that, so Internet rate won't likely be a bottleneck. But a data cap could easily become a limiting factor, especially if the network implements heavy chaff.
As to 'entirely new', it seems to me that a high latency mixing network (which is not a 'new' design) is desirable. Such a network should allow people to communicate using non-real-time messages, instead of allowing them to browse jewtube. Low latency/real time networks and communications seem a lot harder to secure."
What I'm thinking of is a programmable-latency network, say anything from 1 to 256 hops. Although, it would be hard to imagine needing more than 16, I suppose.
> some variables :
*) number of mixers/nodes a message goes through
Yes, I'm thinking that a user should be able to decide, for any individual message, how many nodes it will go through. He will still have a latency issue to deal with, but at least that tradeoff question will be decided by HIM, not the entire network as a group,. > *) all clients and nodes are exchanging fixed size packets all the time (chaff) I consider chaff essential to increase the difficulty of tracing messages, especially when traffic is low. > *) there are no clients - it's a peer to peer network
This is a list of proposed 'improvements' to TOR. https://blog.torproject.org/tor-design-proposals-how-we-make-changes-our-pro... No doubt SOMEWHERE there is a list of 'proposed improvements that we know the TOR structure will never agree to because they will be considered 'too good' '. Shouldn't we use those, too? Especially those!
> so the best pentagon criminals like tor's syverson have been 'working' on this for a while and there are tons of 'literature' - some of their stuff is here > The Free Haven Project > notice that cypherpunks(...) like adam back(now blockstream CEO, google funded) a guy called goldberg and others have been/are involved with tor to varying degrees. Furthermore, adam back was subscribed to this list. His last message
[Bitcoin-development] questions about bitcoin-XT code fork & non-consensus hard-fork
> anyway, you Jim could try to get some ideas or/and help from back. Ver for marketing and funding and back for technical assistance may be a good combination. I hope that if the proposal is technically sound, financing won't be a problem. My idea of a target amount of initial subsidy for setting up one node (ignoring software-development costs) should be about $50: Myself, I'd like to charge about $30 for the hardware kit, the quantity-1 cost would be $90 or so, but I don't yet have an estimate if the materials are purchased in 1000+ quantity.
Here are some other datapoints :
> those ppl have allegedly been working on teh problem...since forever. And they've gotten nowhere. They have even launched their own shitcoin/financial scam. I see lots of fine words on their website. But they haven't accomplished much? > MaidSafeCoin (MAID) price, charts, market cap, and other metrics | CoinMarketCap
And they are not the only ones who want to add economic incentives to 'file sharing'. The idea seems like a good one to me, but it doesn't seem to work.
If it were truly easy to attach a 18-terabyte HD to each node, that would make it a really interesting proposition... This, for $140 more... > Decentralized Cloud Storage — Storj "Decentralized Cloud Storage"
TRON Foundation:Capture the future slipping away "TRON is an ambitious project dedicated to building the infrastructure for a truly decentralized Internet."
there prolly are a few more like that.
> bottom line : there's a fair number of variables to take into account... True, <sigh>, quite true. Jim Bell
I've been running USPS on a network of Raspberry Pis. Your anonymization layer project is very aligned with my cryptoplatform project, and they both could be the same thing. With respect to wearing out the SD cards: I have Raspberry Pis older than 2 years running the blockchain protocol, and I haven't detected failures in any of the ~60 nodes.
best OA
Sent with [ProtonMail](https://protonmail.com) Secure Email.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Friday, May 8, 2020 8:35 AM, jim bell <jdb10987@yahoo.com> wrote:
It turns out that in two months, I will have the opportunity to announce this project at a convention. I will be happy to do so if it appears that there will be sufficient progress in the next two months. A fairly firm commitment by someone to write the software would be an excellent start. And, this announcement MAY lead to some financing of the project.
The main question, other than the financing, is the programming of the software. Has there been any progress on this matter?
Jim Bell
On Monday, December 9, 2019, 11:39:10 AM PST, jim bell <jdb10987@yahoo.com> wrote:
I hope people haven't forgotten about the idea for making an alternate anonymization system. The hardware requirements almost write themselves. Yes, there was some discussion about the software issues. Could/did somebody write a proposal of the functions and features of this system? Any volunteers on programming it?
Jim Bell
On Tuesday, October 15, 2019, 01:21:31 PM PDT, jim bell <jdb10987@yahoo.com> wrote:
Jim Bell's comments inline:
On Tuesday, October 15, 2019, 11:23:53 AM PDT, Punk <punks@tfwno.gf> wrote:
On Sun, 13 Oct 2019 22:15:58 +0000 (UTC) jim bell <jdb10987@yahoo.com> wrote:
...let's flesh out some of the numbers and practices. Shouldn't take more than a few hours, or at most a couple of days, to give everybody an input. This appears to be a representative sample of a Raspberry Pi 4 board, in kit form, with 4 gigabytes of RAM (I guess they must mean SD card, right, and not ordinary SRAM or DRAM?
as coderman said, that's the pi's main RAM memory. So yeah, those ARM 'systems on a chip' are quite capable. They have 4 cores running at ~1.2gcps and tons of ram.
_I_ remember when an Intel 8048 was called a "computer on a chip"!!!
SD wears out, right?), with cables, a clear plastic box. $85 in quantity one.
the previous model with 'only' 1GB of RAM and the same processor is $35 or less (you need to add an SD card, power supply, and case)
How much main memory would be useful for a transfer node to use?
...so the hardware is quite cheap. The question is, of course, to what degree is it safe? The rpi for instance is designed in the english shithole by people working for the amerikan mafia known as broadcom. The rpi's main processor is a broadcom processor (not the quadcore ARM), running closed source firmware written by the raspberry 'foundation'.
there are other systems that are not as bad as the rpi - at least you won't be running GCHQ-NSA firmware directly. (some people were working on an open source firmware but I don't think they got it to work)
I agree that this is a matter that needs to be discussed. But no doubt you've heard of the saying, 'the perfect being the enemy of the good'.
Can we agree that 1,000 quantity will be a good initial "critical mass" for this project?
A thousand independent node operators isn't a small number.
tor is currently larger, https://metrics.torproject.org/networksize.html but 1000 is still a good start.
yeah, you have to take into account for instance what % of those nodes is owned by the NSA, GCHQ, FSB, stasi, whatever the chinese agency is called, samsung, hitachi, etc etc etc etc etc.
but wait, is your network partially client/server like tor, or is it a fully decentralized peer to peer network? (freenetproject.org)
First, I'm not looking for it to be thought of as "my" network, although maybe I will be credited with some initiative for giving the project a kick. The person whose network it is publicly known as might end up being the person who initially funds it, and agrees to have his name attached to the project as sponsor. And no, I'm not qualified to answer your second comment. I don't consider myself a "software person", never have been. This is yet another issue 'we' will have to work out.
While hypothetically node operators might receive some sort of subsidy (in full or in part) for their internet-service cost, it's also plausible that their Internet payment will be their "skin in the game", their contribution to the project. Centurylink offers 1 gigabit/second service for $65 plus tax. The speed itself is only one part of the issue. I think there is no data limit for their 1 gigabit service; their slower services may have a 1 terabyte/month limit.
I don't know about bandwidth costs, but they obv. depend on how your network works. So discussing those costs before having some idea about what kind of capacity/traffic/padding/architecture etc the system will have seems kinda backwards.
The reason I initially referred to "1 gigabit" service for nodes is that I was, and still am, under the impression that current Centurylink policy exempts them from their "excessive use" policy. I suspect that computers of this level (Raspberry Pi 4) won't be able to throughput more than a few tens of megabits of (processed) data, if that, so Internet rate won't likely be a bottleneck. But a data cap could easily become a limiting factor, especially if the network implements heavy chaff.
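As a rough sanity check of the data-cap concern, here is a back-of-the-envelope calculation; the chaff rates are illustrative assumptions, not measured figures:

```python
# Rough estimate: monthly traffic volume of a node emitting constant-rate
# chaff, versus a 1 terabyte/month cap. Rates below are illustrative.

SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.59 million seconds

def monthly_terabytes(rate_mbps: float) -> float:
    """Data volume in TB/month for a constant bitrate given in megabits/second."""
    bytes_per_second = rate_mbps * 1_000_000 / 8
    return bytes_per_second * SECONDS_PER_MONTH / 1e12

for rate in (1, 5, 10, 50):
    print(f"{rate:>3} Mbit/s of constant chaff ≈ {monthly_terabytes(rate):.2f} TB/month")
```

On these assumptions, roughly 3 Mbit/s of continuous chaff already exceeds a 1 terabyte/month cap, so the cap, rather than the link speed or the Pi's throughput, looks like the binding constraint.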
As to 'entirely new', it seems to me that a high latency mixing network (which is not a 'new' design) is desirable. Such a network should allow people to communicate using non-real-time messages, instead of allowing them to browse jewtube. Low latency/real time networks and communications seem a lot harder to secure."
What I'm thinking of is a programmable-latency network, say anything from 1 to 256 hops. Although, it would be hard to imagine needing more than 16, I suppose.
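A minimal sketch of what user-selected hop counts could look like at the message level, assuming layered (onion-style) encryption; the XOR 'cipher' here is a toy stand-in for real per-hop encryption, and all function names are hypothetical:

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy placeholder for real per-hop encryption; do NOT use plain XOR in practice.
    return bytes(d ^ k for d, k in zip(data, key * (len(data) // len(key) + 1)))

def wrap(payload: bytes, hop_keys: list) -> bytes:
    """Wrap the payload in one encryption layer per hop, innermost layer first."""
    for key in reversed(hop_keys):
        payload = xor_bytes(payload, key)
    return payload

def unwrap_one(onion: bytes, key: bytes) -> bytes:
    """Each node strips exactly one layer as the message transits it."""
    return xor_bytes(onion, key)

# The sender chooses the hop count per message (1..255 fits in a single byte).
hops = 5
keys = [os.urandom(32) for _ in range(hops)]
onion = wrap(b"non-real-time message", keys)

# Simulate transit: each node in turn removes its own layer.
for key in keys:
    onion = unwrap_one(onion, key)
assert onion == b"non-real-time message"
```

Note that with a real cipher the layers would have to be removed in route order; XOR happens to commute, which just keeps the toy example short.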
some variables :
*) number of mixers/nodes a message goes through
Yes, I'm thinking that a user should be able to decide, for any individual message, how many nodes it will go through. He will still have a latency issue to deal with, but at least that tradeoff question will be decided by HIM, not the entire network as a group.
*) all clients and nodes are exchanging fixed size packets all the time (chaff)
I consider chaff essential to increase the difficulty of tracing messages, especially when traffic is low.
*) there are no clients - it's a peer to peer network
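The fixed-size-packet idea above can be sketched as follows; the 512-byte cell size and one-byte flag are arbitrary illustrative choices, and in a real protocol the flag and body would sit under link encryption so an outside observer sees only uniform ciphertext of constant size and rate:

```python
import os
from typing import Optional

CELL_SIZE = 512               # arbitrary fixed cell size, for illustration
REAL, DUMMY = b"\x01", b"\x00"

def make_cell(payload: Optional[bytes]) -> bytes:
    """Build one fixed-size cell: 1 flag byte + 2 length bytes + padded body."""
    if payload is None:                       # no real traffic: emit chaff
        return DUMMY + os.urandom(CELL_SIZE - 1)
    assert len(payload) <= CELL_SIZE - 3
    body = len(payload).to_bytes(2, "big") + payload
    return REAL + body + os.urandom(CELL_SIZE - 1 - len(body))

def read_cell(cell: bytes) -> Optional[bytes]:
    """Return the payload, or None if the cell was chaff."""
    if cell[0:1] == DUMMY:
        return None
    length = int.from_bytes(cell[1:3], "big")
    return cell[3:3 + length]

cell = make_cell(b"hello")
assert len(cell) == CELL_SIZE and read_cell(cell) == b"hello"
assert read_cell(make_cell(None)) is None    # chaff has identical size
```

Nodes would emit cells at a constant rate, substituting chaff cells whenever no real payload is queued, so traffic volume reveals nothing about actual usage.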
This is a list of proposed 'improvements' to TOR. https://blog.torproject.org/tor-design-proposals-how-we-make-changes-our-pro... No doubt SOMEWHERE there is a list of 'proposed improvements that we know the TOR structure will never agree to, because they will be considered 'too good''. Shouldn't we use those, too? Especially those!
so the best pentagon criminals like tor's syverson have been 'working' on this for a while and there are tons of 'literature' - some of their stuff is here
[The Free Haven Project](https://www.freehaven.net/papers.html)
notice that cypherpunks(...) like adam back (now blockstream CEO, google funded), a guy called goldberg, and others have been/are involved with tor to varying degrees. Furthermore, adam back was subscribed to this list. His last message:
[[Bitcoin-development] questions about bitcoin-XT code fork & non-consensus hard-fork](https://lists.cpunks.org/pipermail/cypherpunks/2015-June/053438.html)
anyway, you Jim could try to get some ideas or/and help from back. Ver for marketing and funding and back for technical assistance may be a good combination.
I hope that if the proposal is technically sound, financing won't be a problem. My idea of a target amount of initial subsidy for setting up one node (ignoring software-development costs) is about $50. Myself, I'd like to charge about $30 for the hardware kit; the quantity-1 cost would be $90 or so, but I don't yet have an estimate for the materials if they are purchased in 1000+ quantity.
Here are some other datapoints :
those ppl have allegedly been working on the problem... since forever. And they've gotten nowhere. They have even launched their own shitcoin/financial scam.
I see lots of fine words on their website. But they haven't accomplished much?
[MaidSafeCoin (MAID) price, charts, market cap, and other metrics | CoinMarketCap](https://coinmarketcap.com/currencies/maidsafecoin/)
And they are not the only ones who want to add economic incentives to 'file sharing'. The idea seems like a good one to me, but it doesn't seem to work.
If it were truly easy to attach an 18-terabyte HD to each node, that would make it a really interesting proposition... This, for $140 more...
[Decentralized Cloud Storage — Storj](https://storj.io/) "Decentralized Cloud Storage"
[TRON Foundation:Capture the future slipping away](https://tron.network/) "TRON is an ambitious project dedicated to building the infrastructure for a truly decentralized Internet."
there prolly are a few more like that.
bottom line : there's a fair number of variables to take into account...
True, <sigh>, quite true.
Jim Bell
Excellent. I should mention that I have focused on the Raspberry Pi 4 merely because it was new, and seemed to be quite capable of serving as an anonymization node. If anything, we might call it "over-capable", but in the computer world that's not necessarily a bad thing. Standardized devices, especially if they are manufactured in huge quantity, become more economical. If somebody has an alternative idea for the hardware, now would be an excellent time to speak up. They also tend to be studied more intensely than obscure, low-volume devices, I would imagine. What's the old saying, something like "Yes, we're paranoid, but I sometimes wonder if we are paranoid ENOUGH?" https://www.goodreads.com/quotes/876669-yes-i-m-paranoid-but-am-i-paranoid-enough

One big improvement that I think we've settled on should be done is to implement 'chaff' into the protocol. 'Chaff' might have been a problem if the people who host the nodes had some limited-data Internet service, but I am aware that Centurylink now offers 1 gigabit service for $65 monthly, and I think that service has no monthly data limit (their slower services have a 1 terabyte monthly limit). That should be plenty to allow for generous chaff.

I also thought of an idea to encrypt, or at least combine, the outputs of two output nodes to generate the final data. Why? It is frequently (and quite wisely!) recommended that a home user NOT act as an output node, for fear of being held liable (civilly or criminally) for plaintext that comes out of an output node. But I think there is a solution. Don't output plaintext; encrypt it somewhat, so 'nobody' can simply point to it and declare, "There goes that forbidden data, again!". One idea, mine, is to output TWO seemingly-random files, from two different output nodes, which when XOR'd with each other regenerate the (suspicious?) data. Another possibility is to encrypt the output with a symmetrical key, and perhaps deliver the key from another node. Not so much to make the data REALLY secure, but instead merely to turn it into seemingly-randomized data that cannot be labelled 'suspicious' merely by monitoring the node's output.

Why shouldn't ordinary people be able to run an anonymization node, and even an output node, if these precautions are taken?

My point about the lifetime of SD cards was simply that if they are used 'frequently', they might wear out. But if they are only used for program storage and settings, that won't be a problem.

Jim Bell

On Friday, May 8, 2020, 01:51:58 AM PDT, other.arkitech <other.arkitech@protonmail.com> wrote:

I've been running USPS on a network of Raspberry Pis. Your anonymization layer project is very aligned with my cryptoplatform project, and they both could be the same thing. With respect to wearing out the SD cards: I have Raspberry Pis older than 2 years running the blockchain protocol, and I haven't detected failures in any of the ~60 nodes.

best OA

Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Friday, May 8, 2020 8:35 AM, jim bell <jdb10987@yahoo.com> wrote:

It turns out that in two months, I will have the opportunity to announce this project at a convention. I will be happy to do so if it appears that there will be sufficient progress in the next two months. A fairly firm commitment by someone to write the software would be an excellent start. And, this announcement MAY lead to some financing of the project. The main question, other than the financing, is the programming of the software. Has there been any progress on this matter?

Jim Bell

On Monday, December 9, 2019, 11:39:10 AM PST, jim bell <jdb10987@yahoo.com> wrote:

I hope people haven't forgotten about the idea for making an alternate anonymization system. The hardware requirements almost write themselves. Yes, there was some discussion about the software issues. Could/did somebody write a proposal of the functions and features of this system? Any volunteers on programming it?
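The two-file XOR idea in the message above is essentially a one-time-pad split: one exit node publishes a random pad, another publishes the data XORed with that pad; either share alone is uniformly random, and only combining both regenerates the original. A minimal sketch (the function names are my own):

```python
import os

def split_two_ways(data: bytes) -> tuple:
    """Split data into two shares; each alone is indistinguishable from noise."""
    pad = os.urandom(len(data))                       # share for exit node A
    masked = bytes(a ^ b for a, b in zip(data, pad))  # share for exit node B
    return pad, masked

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the two published shares back together to recover the data."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

share_a, share_b = split_two_ways(b"the (suspicious?) data")
assert recombine(share_a, share_b) == b"the (suspicious?) data"
```

As the message itself concedes, this is obfuscation rather than strong secrecy: anyone who obtains both shares can recombine them, but neither exit node, on its own, is emitting recognizable plaintext.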
So long as it is much more profitable to prevent and damage cybersecurity, it is unlikely that any scheme for reliable and trustworthy public cybersecurity will be developed for any longer than it takes to monetize it, following a campaign to generate public trust with freeware and high recommendations of experts already deeply compromised (that's what experts means). This has been the pattern for as long as insecurity and fear have been promoted by authoritarians, revolutionaries and "freedom fighters." Challengers of authority inevitably betray believers for king's coin and/or other irresistible rewards. The only sec methods that publicly protect as expected are the ones never heard about, used briefly, disappear without a trace. "Never heard about," "used briefly," and "disappear without a trace" are obviously deception marketing tools. "Obviously deception marketing tools" too. Mea culpa. This is freeware. Don't click it. At 02:17 PM 5/8/2020, you wrote:
Excellent. I should mention that I have focussed on Raspberry Pi 4 merely because it was new, and seemed to be quite capable of serving as a anonymization node. If anything, we might call it "over-capable", but in the computer world that's not necessarily a bad thing. Standardized devices, especially if they are manufactured in huge quantity, become more economical. If somebody has an alternative idea for the hardware, now would be an excellent time to speak up.
They also tend to be studied more intensely than obscure, low-volume devices, I would imagine. What's the old saying, something like "Yes, we're paranoid, but I sometimes wonder if we are paranoid ENOUGH?" <https://www.goodreads.com/quotes/876669-yes-i-m-paranoid-but-am-i-paranoid-enough>https://www.goodreads.com/quotes/876669-yes-i-m-paranoid-but-am-i-paranoid-enough
One big improvement that I think we've settled on should be done is to implement 'chaff' into the protocol. 'chaff' might have been a problem if the people who host the nodes had some limited-data Internet service, but I am aware that Centurylink now offers 1 gigabit service for $65 monthly, and I think that service has no monthly data limit. (their slower services have a 1 terabyte montly limit). That should be plenty to allow for generous chaff.
I also thought of an idea to encrypt, or at least combine the outputs of two output nodes to generate the final data. Why? It is frequently (and quite wisely!) recommended that a home-user NOT act as an output node, for fear of being held liable (civilly or criminally) for plaintext that comes out of an output node. But I think there is a solution. Don't output plaintext, encrypt it somewhat so 'nobody' can simply point to it and declare, "There goes that forbidden data, again!".
One idea, mine, is to output TWO seemingly-random files, from two different output nodes, which when XOR'd with each other regenerates the (suspicious?) data. Another possibility is to encrypt the output with a symmetrical key, and perhaps deliver the key from another node. Not so much to make the data REALLY secure, but instead merely turn it into seemingly-randomized data that cannot be labelled 'suspicious' merely by monitoring the node's output.
Why shouldn't ordinary people be able to run an anonymization node, and even an output node, if these precautions are taken?
My point about the lifetime of SD cards was simply that if it used 'frequently', they might wear out. But, if they are only used for program storage and settings, that won't be a problem.
Jim Bell
On Friday, May 8, 2020, 01:51:58 AM PDT, other.arkitech <other.arkitech@protonmail.com> wrote:
I've been running USPS on a network of raspberry pis. You anonymization layer project is very aligned with my cryptoplatform project, and they both could be the same thing. with respect to wearing out the SD cards I have Raspberry pis older than 2 years runing the blockchain protocol and I haven detected failures in any of the _60 nodes
best OA
Sent with <https://protonmail.com>ProtonMail Secure Email.
âââââââ Original Message âââââââ On Friday, May 8, 2020 8:35 AM, jim bell <jdb10987@yahoo.com> wrote:
It turns out that in two months, I will have the opportunity to announce this project at a convention. I will be happy to do so if it appears that there will be sufficient progress in the next two months. A fairly firm commitment by someone to write the software would be an excellent start. And, this announcement MAY lead to some financing of the project.
The main question, other than the financing, is the programming of the software. Has there been any progress on this matter?
Jim Bell
On Monday, December 9, 2019, 11:39:10 AM PST, jim bell <jdb10987@yahoo.com> wrote:
I hope people haven't forgotten about the idea for making an alternate anonymization system. The hardware requirements almost write themselves. Yes, there was some discussion about the software issues. Could/did somebody write a proposal of the functions and features of this system? Any volunteers on programming it?
Jim Bell
On Tuesday, October 15, 2019, 01:21:31 PM PDT, jim bell <jdb10987@yahoo.com> wrote:
Jim Bell's comments inline:
On Tuesday, October 15, 2019, 11:23:53 AM PDT, Punk <punks@tfwno.gf> wrote:
On Sun, 13 Oct 2019 22:15:58 +0000 (UTC) jim bell <<mailto:jdb10987@yahoo.com>jdb10987@yahoo.com> wrote:
...let's flesh out some of the numbers and practices. Shouldn't take more than a few hours, or at most a couple days, to give everybody an input. This appears to be a representative sample of a Raspberry Pi 4 board, in kit form, 4 gigabytes of RAM (I guess they must mean SDCard, right, and not ordinary SRAM or DRAM?
as coderman said, that's the pi's main RAM memory. So yeah, those ARM 'systems on a chip' are quite capable. They have 4 cores running at ~1.2 GHz and tons of ram.
_I_ remember when an Intel 8048 was called a "computer on a chip"!!!
SD wears out, right?), with cables, a clear plastic box. $85 in quantity one.
the previous model with 'only' 1 GB of RAM, same processor, is $35 or less (you need to add an SD card, power supply and case)
How much main memory would be useful for a transfer node to use?
...so the hardware is quite cheap. The question is, of course, to what degree is it safe? The rpi for instance is designed in the english shithole by people working for the amerikan mafia known as broadcom. The rpi's main processor is a broadcom processor (not the quadcore ARM), running closed source firmware written by the raspberry 'foundation'.
there are other systems that are not as bad as the rpi - at least you won't be running GCHQ-NSA firmware directly. (some people were working on an open source firmware but I don't think they got it to work)
I agree that this is a matter that needs to be discussed. But no doubt you've heard of the saying, 'the perfect being the enemy of the good'.
Can we agree that 1,000 quantity will be a good initial "critical mass" for this project?
A thousand independent node operators isn't a small number.
tor is currently larger, <https://metrics.torproject.org/networksize.html> but 1000 is still a good start.
yeah, you have to take into account for instance what % of those nodes is owned by the NSA, GCHQ, FSB, stasi, whatever the chinese agency is called, samsung, hitachi, etc etc etc etc etc.
but wait, is your network partially client/server like tor, or is it a fully decentralized peer to peer network? (freenetproject.org)
First, I'm not looking for it to be thought of as "my" network, although maybe I will be credited with some initiative for giving the project a kick. The person whose network it is publicly known as might end up being the person who initially funds it, and agrees to have his name attached to the project as sponsor. And no, I'm not qualified to answer your second comment. I don't consider myself a "software person", never have been. This is yet another issue 'we' will have to work out.
While hypothetically node operators might receive some sort of subsidy (in full or in part) for their internet-service cost, it's also plausible that their Internet payment will be their "skin in the game", their contribution to the project. Centurylink offers 1 gigabit/second service for $65 plus tax. The speed itself is only one part of the issue. I think there is no data limit for their 1 gigabit service; their slower services may have a 1 terabyte/month limit.
I don't know about bandwidth costs, but they obv. depend on how your network works. So discussing those costs before having some idea about what kind of capacity/traffic/padding/architecture etc the system will have seems kinda backwards.
The reason I initially referred to "1 gigabit" service for nodes is that I was, and still am, under the impression that current Centurylink policy exempts it from their "excessive use" policy. I suspect that computers of this level (Raspberry Pi 4) won't be able to throughput more than a few tens of megabits of (processed) data, if that, so Internet rate won't likely be a bottleneck. But a data cap could easily become a limiting factor, especially if the network implements heavy chaff.
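A quick back-of-envelope check on why the cap, not the line rate, is the constraint. This is just arithmetic on a hypothetical constant relay rate, not a measured figure:

```python
# A node relaying a constant bit-rate (traffic plus chaff) all month.
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 s

def monthly_terabytes(megabits_per_second: float) -> float:
    """Total data moved in a 30-day month at a constant rate, in TB."""
    bytes_per_second = megabits_per_second * 1e6 / 8
    return bytes_per_second * SECONDS_PER_MONTH / 1e12

# Even a modest, constant 10 Mbit/s -- a hundredth of a gigabit line:
print(round(monthly_terabytes(10), 2))  # 3.24 TB, already past a 1 TB cap
```

So "a few tens of megabits" of sustained throughput would blow through a 1 TB/month cap several times over, while barely touching a gigabit line's capacity.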
As to 'entirely new', it seems to me that a high latency mixing network (which is not a 'new' design) is desirable. Such a network should allow people to communicate using non-real-time messages, instead of allowing them to browse jewtube. Low latency/real time networks and communications seem a lot harder to secure.
What I'm thinking of is a programmable-latency network, say anything from 1 to 256 hops. Although, it would be hard to imagine needing more than 16, I suppose.
some variables :
*) number of mixers/nodes a message goes through
Yes, I'm thinking that a user should be able to decide, for any individual message, how many nodes it will go through. He will still have a latency issue to deal with, but at least that tradeoff question will be decided by HIM, not the entire network as a group.
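The sender-chooses-hops idea can be sketched very simply. This assumes a hypothetical node directory; the names and the 1-to-256 bounds come from the discussion above, nothing else is specified yet:

```python
import random

MIN_HOPS, MAX_HOPS = 1, 256  # per-message bounds suggested in the discussion

def pick_route(directory: list[str], hops: int) -> list[str]:
    """Choose a random route of `hops` distinct nodes from the directory.
    The sender, not the network as a whole, decides the
    latency-vs-anonymity tradeoff for each individual message."""
    if not MIN_HOPS <= hops <= MAX_HOPS:
        raise ValueError(f"hops must be between {MIN_HOPS} and {MAX_HOPS}")
    if hops > len(directory):
        raise ValueError("not enough nodes in the directory")
    return random.sample(directory, hops)  # distinct nodes, random order

directory = [f"node{i:03d}" for i in range(1000)]  # hypothetical 1000-node network
route = pick_route(directory, hops=16)
assert len(route) == 16 and len(set(route)) == 16
```

In a real design the sender would then onion-encrypt the message once per hop, so each node learns only its predecessor and successor; that part is deliberately omitted here.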
*) all clients and nodes are exchanging fixed size packets all the time (chaff)
I consider chaff essential to increase the difficulty of tracing messages, especially when traffic is low.
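The fixed-size-packet scheme can be sketched as follows. The 512-byte cell size mirrors Tor's cells but is otherwise an arbitrary choice here, and in a real network every cell would be encrypted so chaff and traffic are indistinguishable on the wire:

```python
import os

CELL_SIZE = 512  # hypothetical fixed cell size (Tor also uses 512-byte cells)

def to_cells(payload: bytes) -> list[bytes]:
    """Frame a payload into fixed-size cells, padding the tail with random
    bytes. A 4-byte length header lets the receiver discard the padding."""
    framed = len(payload).to_bytes(4, "big") + payload
    cells = []
    for i in range(0, len(framed), CELL_SIZE):
        chunk = framed[i:i + CELL_SIZE]
        if len(chunk) < CELL_SIZE:
            chunk += os.urandom(CELL_SIZE - len(chunk))  # padding bytes
        cells.append(chunk)
    return cells

def from_cells(cells: list[bytes]) -> bytes:
    """Reassemble the payload, stripping the padding."""
    framed = b"".join(cells)
    length = int.from_bytes(framed[:4], "big")
    return framed[4:4 + length]

def chaff_cell() -> bytes:
    """A dummy cell, sent whenever there is no real traffic, so an
    observer sees a constant stream of identical-looking packets."""
    return os.urandom(CELL_SIZE)

cells = to_cells(b"hello world")
assert all(len(c) == CELL_SIZE for c in cells)
assert from_cells(cells) == b"hello world"
```

Since every node emits cells at a constant rate regardless of demand, an eavesdropper counting or sizing packets learns nothing about when real messages pass through.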
*) there are no clients - it's a peer to peer network
This is a list of proposed 'improvements' to TOR: <https://blog.torproject.org/tor-design-proposals-how-we-make-changes-our-pro...> No doubt SOMEWHERE there is a list of 'proposed improvements that we know the TOR structure will never agree to, because they will be considered 'too good''. Shouldn't we use those, too? Especially those!
so the best pentagon criminals like tor's syverson have been 'working' on this for a while and there are tons of 'literature' - some of their stuff is here
<https://www.freehaven.net/papers.html>The Free Haven Project
notice that cypherpunks(...) like adam back(now blockstream CEO, google funded) a guy called goldberg and others have been/are involved with tor to varying degrees. Furthermore, adam back was subscribed to this list. His last message
<https://lists.cpunks.org/pipermail/cypherpunks/2015-June/053438.html>[Bitcoin-development] questions about bitcoin-XT code fork & non-consensus hard-fork
anyway, you Jim could try to get some ideas and/or help from Back. Ver for marketing and funding, and Back for technical assistance, may be a good combination.
I hope that if the proposal is technically sound, financing won't be a problem. My idea of a target amount of initial subsidy for setting up one node (ignoring software-development costs) is about $50. Myself, I'd like to charge about $30 for the hardware kit; the quantity-1 cost would be $90 or so, but I don't yet have an estimate for materials purchased in 1000+ quantity.
Here are some other datapoints :
those ppl have allegedly been working on the problem...since forever. And they've gotten nowhere. They have even launched their own shitcoin/financial scam.
I see lots of fine words on their website. But they haven't accomplished much?
<https://coinmarketcap.com/currencies/maidsafecoin/>MaidSafeCoin (MAID) price, charts, market cap, and other metrics | CoinMarketCap
And they are not the only ones who want to add economic incentives to 'file sharing'. The idea seems like a good one to me, but it doesn't seem to work.
If it were truly easy to attach an 18-terabyte HD to each node, that would make it a really interesting proposition... This, for $140 more...
<https://storj.io/>Decentralized Cloud Storage Storj "Decentralized Cloud Storage"
<https://tron.network/>TRON Foundation: Capture the future slipping away "TRON is an ambitious project dedicated to building the infrastructure for a truly decentralized Internet."
there prolly are a few more like that.
bottom line : there's a fair number of variables to take into account...
True, <sigh>, quite true.
Jim Bell
Okay, so what should I actually do? I didn't suggest this project intending to make the decisions by myself, alone. I figured I might be one of dozens of deciders who combine ideas to plan this.

At this point, I see the main impediment is finding somebody with the motivations and qualifications to write the software. An additional complication is that whoever volunteers, he might not be trusted by others.

What is to be done?

The one situation that I consider intolerable is that TOR remains a monopoly in the "anonymization marketplace".

Jim Bell

On Friday, May 8, 2020, 01:09:16 PM PDT, John Young <jya@pipeline.com> wrote:

So long as it is much more profitable to prevent and damage cybersecurity, it is unlikely that any scheme for reliable and trustworthy public cybersecurity will be developed for any longer than it takes to monetize it, following a campaign to generate public trust with freeware and high recommendations of experts already deeply compromized (that's what "experts" means).

This has been the pattern for as long as insecurity and fear have been promoted by authoritarians, revolutionaries and "freedom fighters."

Challengers of authority inevitably betray believers for king's coin and/or other irresistible rewards. The only sec methods that publicly protect as expected are the ones never heard about, used briefly, disappearing without a trace. "Never heard about," "used briefly," and "disappear without a trace" are obviously deception marketing tools. "Obviously deception marketing tools" too.

Mea culpa. This is freeware. Don't click it.

At 02:17 PM 5/8/2020, you wrote:
Excellent. I should mention that I have focussed on the Raspberry Pi 4 merely because it was new, and seemed to be quite capable of serving as an anonymization node. If anything, we might call it "over-capable", but in the computer world that's not necessarily a bad thing. Standardized devices, especially if they are manufactured in huge quantity, become more economical. If somebody has an alternative idea for the hardware, now would be an excellent time to speak up.
They also tend to be studied more intensely than obscure, low-volume devices, I would imagine. What's the old saying, something like "Yes, we're paranoid, but I sometimes wonder if we are paranoid ENOUGH?" <https://www.goodreads.com/quotes/876669-yes-i-m-paranoid-but-am-i-paranoid-enough>
One big improvement that I think we've settled on should be done is to implement 'chaff' into the protocol. Chaff might have been a problem if the people who host the nodes had some limited-data Internet service, but I am aware that Centurylink now offers 1 gigabit service for $65 monthly, and I think that service has no monthly data limit (their slower services have a 1 terabyte monthly limit). That should be plenty to allow for generous chaff.
The method must be developed by a single person and used only by that person to provide public content. Not send, not receive. Just publicize, one at a time, irregularly, with a different method each time. No acknowledgement. Allowing others to participate opens the way to implantation of opponents, their subterfuges, their hiding among the defenses and offenses.

The solo developer/user may be corrupt and compromizable, willing or unwilling, and may use the tool to anonymously attack without retribution or traceability. That is quite common. However, it will be possible to eventually track the developer/user due to vanity, arrogance, self-deception, dumbness, ignorance of the near-limitless variety of tracking, seduction, baiting, keeping quiet about the trackability, as done over and over with crypto and other means of security, military grade or fool's gold.

Still, there is an enduring appeal to try to beat the system, outsmart the smarties, dare to disobey and revolt. Problem is that appeal is most often promoted by authorities to smoke out their opponents, the main culprits being the professors and peddlers of crypto, comsec, infosec, cybersec, spying, susceptible to the king's coin, jealousy, vanity, pride, entrapment, compromise, bribery, imprisonment.

Self-betrayal and -delusion remain the prime threats against cybersec victory. One's own content contains the seeds of failure, its DNA detectable. Don't give up; tilting at windmills is the generator of innovation, even if suicidal madness. Success is the sign of serving authority. Giving up the battle without fighting is the goal Lao Tzu claimed is the enemy's top strategy. His disinfo has endured among the successful.

At 02:56 AM 5/9/2020, you wrote:
Okay, so what should I actually do? I didn't suggest this project intending to make the decisions by myself, alone. I figured I might be one of dozens of deciders who combine ideas to plan this.
At this point, I see the main impediment is finding somebody with the motivations and qualifications to write the software. An additional complication is that whoever volunteers, he might not be trusted by others.
What is to be done?
The one situation that I consider intolerable is that TOR remains as a monopoly in the "anonymization marketplace".
Jim Bell
On Friday, May 8, 2020, 01:09:16 PM PDT, John Young <jya@pipeline.com> wrote:
So long as it is much more profitable to prevent and damage cybersecurity it is unlikely that any scheme for reliable and trustworthy public cybersecurity will be developed for any longer than it takes to monetize it, following a campaign to generate public trust with freeware and high recommendations of experts already deeply compromized (that's what experts means).
This has been the pattern for as long as insecurity and fear has been promoted by authoritarians, revolutionaries and "freedom fighters."
Challengers of authority inevitably betray believers for king's coin and/or other irresistable rewards. The only sec methods that publicly protect as expected are the ones never heard about, used briefly, disappear without a trace. "Never heard about," "used briefly," and "disappear without a trace" are obviously deception marketing tools. "Obviously deception marketing tools" too.
Mea culpa. This is freeware. Don't click it.
At 02:17 PM 5/8/2020, you wrote:
Excellent. I should mention that I have focussed on Raspberry Pi 4 merely because it was new, and seemed to be quite capable of serving as a anonymization node. If anything, we might call it "over-capable", but in the computer world that's not necessarily a bad thing. Standardized devices, especially if they are manufactured in huge quantity, become more economical. If somebody has an alternative idea for the hardware, now would be an excellent time to speak up.
They also tend to be studied more intensely than obscure, low-volume devices, I would imagine. What's the old saying, something like "Yes, we're paranoid, but I sometimes wonder if we are paranoid ENOUGH?"
One big improvement that I think we've settled on should be done is to implement 'chaff' into the protocol. 'chaff' might have been a problem if the people who host the nodes had some limited-data Internet service, but I am aware that Centurylink now offers 1 gigabit service for $65 monthly, and I think that service has no monthly data limit. (their slower services have a 1 terabyte montly limit). That should be plenty to allow for generous chaff.
I also thought of an idea to encrypt, or at least combine the outputs of two output nodes to generate the final data. Why? It is frequently (and quite wisely!) recommended that a home-user NOT act as an output node, for fear of being held liable (civilly or criminally) for plaintext that comes out of an output node. But I think there is a solution. Don't output plaintext, encrypt it somewhat so 'nobody' can simply point to it and declare, "There goes that forbidden data, again!".
One idea, mine, is to output TWO seemingly-random files, from two different output nodes, which when XOR'd with each other regenerates the (suspicious?) data. Another possibility is to encrypt the output with a symmetrical key, and perhaps deliver the key from another node. Not so much to make the data REALLY secure, but instead merely turn it into seemingly-randomized data that cannot be labelled 'suspicious' merely by monitoring the node's output.
Why shouldn't ordinary people be able to run an anonymization node, and even an output node, if these precautions are taken?
My point about the lifetime of SD cards was simply that if it used 'frequently', they might wear out. But, if they are only used for program storage and settings, that won't be a problem.
Jim Bell
On Friday, May 8, 2020, 01:51:58 AM PDT, other.arkitech
<<mailto:other.arkitech@protonmail.com>other.arkitech@protonmail.com> wrote:
I've been running USPS on a network of raspberry pis. You anonymization layer project is very aligned with my cryptoplatform project, and they both could be the same thing. with respect to wearing out the SD cards I have Raspberry pis older than 2 years runing the blockchain protocol and I haven detected failures in any of the _60 nodes
best OA
Sent with
<<https://protonmail.com>https://protonmail.com>ProtonMail Secure Email.
â�â�ââ¢â¬ï¿½Ã¢ï¿½Ã¢ï¿½Ã¢Ã¢â¬ï¿½Ã
¢ï¿½ Original Message â��â�â�â�ì�â�â�â�
On Friday, May 8, 2020 8:35 AM, jim bell <<mailto:jdb10987@yahoo.com>jdb10987@yahoo.com> wrote:
It turns out that in two months, I will have the opportunity to announce this project at a convention. I will be happy to do so if it appears that there will be sufficient progress in the next two months. A fairly firm commitment by someone to write the software would be an excellent start. And, this announcement MAY lead to some financing of the project.
The main question, other than the financing, is the programming of the software. Has there
been any progress on this matter?
Jim Bell
On Monday, December 9, 2019, 11:39:10 AM PST, jim bell <<mailto:jdb10987@yahoo.com>jdb10987@yahoo.com> wrote:
I hope people haven't fotten about the idea for making an alternate anonymization system. The hardware requirements almost write themselves. Yes, there was some discussion about the software issues. Could/did somebody write a proposal of the functions and features of this system? Any volunteers on programming it?
Jim Bell
On Tuesday, October 15, 2019, 01:21:31 PM PDT, jim bell <<mailto:jdb10987@yahoo.com>jdb10987@yahoo.com> wrote:
Jim Bell's comments inline:
On Tuesday, October 15, 2019, 11:23:53 AM
PDT, Punk <<mailto:punks@tfwno.gf>punks@tfwno.gf> wrote:
On Sun, 13 Oct 2019 22:15:58 +0000 (UTC) jim bell <<mailto:jdb10987@yahoo.com>jdb10987@yahoo.com> wrote:
...let's flesh out some of the numbers and practices. Shouldn't take more than a few hours or at most a couple days, to give everybody an input. This appears to be a representative sample of a Raspberry Pi 4 board, in kit form, 4 gigabyte of RAM (I guess they must mean SDCard, right, and not ordinary SRAM or DRAM?
as coderman said, that's the pi's main RAM memory. So yeah, those ARM 'systems on a chip' are quite capable. They have 4 cores running at ~1.2gcps and tons of ram.
_I_ remember when an Intel 8048 was called a "computer on a chip"!!!
SD wears out, right?), with cables, a clear plastic box. $85 in quantity one.
the previous model with 'only' 1gb or RAM, same processor is $35 or less. (you need to add a sd card, power supply and case)
How much main memory would be useful for a transfer node to use?
...so the hardware is quite cheap. The question is, of course, to what degree is it safe? The rpi for instance is designed in the english shithole by people working for the amerikan mafia known as broadcom. The rpi's main processor is a broadcom processor (not the quadcore ARM), running closed source firmware written by the raspberry 'foundation'.
there are other systems that are not as bad as the rpi - at least you won't be running GCHQ-NSA firmware directly. (some people were working on an open source firmware but I don't think they got it to work)
I agree that this is a matter that needs to be discussed. But no doubt you've heard of the saying, 'the perfect being the enemy of the good'.
Can we agree that 1,000 quantity will be a good initial "critical mass" for this project?
A thousand independent node operators isn't a small number.
tor is currently larger,
<<https://metrics.torproject.org/networksize.htm>https://metrics.torproject.org/networksize.htm
l
but><https://metrics.torproject.org/networksize.html>https://metrics.torproject.org/networksize.html
but 1000 is still a good start.
yeah, you have to take into account for instance what % of those nodes is owned by the NSA, GCHQ, FSB, stasi, whatever the chinese agency is called, samsung, hitachi, etc etc etc etc etc.
but wait, is your network partially client/server like tor, or is it a fully decentralized peer to peer network? (freenetproject.org)
First, I'm not looking for it to be thought of as "my" network, although maybe I will be credited with some initiative for giving the project a kick. The person whose network it is publicly known as might end up being the person who initially funds it, and agrees to have his name attached to the project as sponsor. And no, I'm not qualified to answer your second comment. I don't consider myself a "software person", never have been. This is yet another issue 'we' will have to work out.
While hypothetically node operators might receive some sort of subsidy (in full or in part) for their internet-service cost, it's also plausible that their Internet payment will be their "skin in the game", their contribution to the project. Centurylink offers 1 gigabit/second service for $65 plus tax. The speed itself is only one part of the issue. I think there is no data limit for their 1 gigabit service; their slower services may have a 1 terabyte/month limit.
I don't know about bandwidth costs, but they obv. depend on how your network works. So discussing those costs before having some idea about what kind of capacity/traffic/padding/architecture etc the system will have seems kinda backwards.
The reason I initially referred to "1 gigabit" service for nodes is that I was, and still am, under the impression that current Centurylink policy exempts it from their "excessive use" policy. I suspect that computers of this level (Raspberry Pi 4) won't be able to throughput more than a few tens of megabits of (processed) data, if that, so Internet rate won't likely be a bottleneck. But a data cap could easily become a limiting factor, especially if the network implements heavy chaff.
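Rough arithmetic backs up the data-cap worry. A short sketch (the 1 TB cap and 30-day month are assumptions for illustration, not quoted ISP terms):

```python
# Back-of-envelope: sustained rate that fits under a monthly data cap.
# Assumed: cap in terabytes, 30-day billing month, chaff running 24/7.
def sustained_mbps(cap_tb: float, days: int = 30) -> float:
    bits = cap_tb * 1e12 * 8       # cap expressed in bits
    seconds = days * 24 * 3600     # seconds in the billing period
    return bits / seconds / 1e6    # megabits per second

# A 1 TB cap allows only about 3 Mbps of continuous traffic (wheat plus
# chaff) -- far below a 1 Gbps link speed. With constant-rate chaff, the
# cap, not the link, is the bottleneck.
rate = sustained_mbps(1.0)
```

So a node on a capped plan can sustain only a few megabits of constant-rate traffic, which matches the throughput guess for the Pi-class hardware above.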
As to 'entirely new', it seems to me that a high latency mixing network (which is not a 'new' design) is desirable. Such a network should allow people to communicate using non-real-time messages, instead of allowing them to browse jewtube. Low latency/real time networks and communications seem a lot harder to secure.
What I'm thinking of is a programmable-latency network, say anything from 1 to 256 hops. Although it would be hard to imagine needing more than 16, I suppose.
some variables :
*) number of mixers/nodes a message goes through
Yes, I'm thinking that a user should be able to decide, for any individual message, how many nodes it will go through. He will still have a latency issue to deal with, but at least that tradeoff question will be decided by HIM, not the entire network as a group.
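A per-message hop count is easy to picture as nested encryption layers: the sender wraps once per chosen hop. A toy sketch (the SHA-256 counter-mode 'cipher' is illustration only, NOT a vetted cipher, and the key names are invented; a real node would use something like ChaCha20):

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode, XORed over the data.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def onion_wrap(message: bytes, hop_keys: list) -> bytes:
    """The sender picks the hop count per message simply by choosing how
    many keys to wrap with (1..256 in the proposal). Innermost layer is
    applied first, so the first relay peels the outermost key."""
    for key in reversed(hop_keys):
        message = keystream_xor(key, message)
    return message

def onion_peel(cell: bytes, key: bytes) -> bytes:
    # XOR stream: peeling a layer is re-applying the same keystream.
    return keystream_xor(key, cell)

keys = [b"hop1", b"hop2", b"hop3"]        # this message uses 3 hops
cell = onion_wrap(b"meet at noon", keys)
for k in keys:                            # each relay peels one layer
    cell = onion_peel(cell, k)
assert cell == b"meet at noon"
```

One caveat of the toy: XOR layers commute, so it does not capture the strict layer ordering a real onion cipher enforces; it only shows how latency scales with a user-chosen number of wraps.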
*) all clients and nodes are exchanging fixed size packets all the time (chaff)
I consider chaff essential to increase the difficulty of tracing messages, especially when traffic is low.
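For chaff to help, wheat and chaff have to be indistinguishable on the wire, which is usually done with fixed-size cells. A minimal sketch (the cell size and length-header layout are arbitrary choices for illustration; a real design would encrypt the entire cell so even the header is opaque):

```python
import os
import struct

CELL_SIZE = 512   # fixed on-the-wire cell size in bytes (arbitrary here)

def make_cell(payload: bytes = b"") -> bytes:
    """Wrap real payload (wheat) or nothing (chaff) into a fixed-size
    cell. Every cell is the same length, so an observer counting bytes
    cannot tell wheat from chaff; only the recipient reads the length
    header and discards the random padding."""
    body = struct.pack("!H", len(payload)) + payload
    assert len(body) <= CELL_SIZE, "payload too large for one cell"
    return body + os.urandom(CELL_SIZE - len(body))

def open_cell(cell: bytes) -> bytes:
    (n,) = struct.unpack("!H", cell[:2])
    return cell[2:2 + n]

wheat = make_cell(b"real message")
chaff = make_cell()                 # empty payload = pure chaff
assert len(wheat) == len(chaff) == CELL_SIZE
assert open_cell(wheat) == b"real message"
assert open_cell(chaff) == b""
```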
*) there are no clients - it's a peer to peer network
This is a list of proposed 'improvements' to TOR.
No doubt SOMEWHERE there is a list of 'proposed improvements that we know the TOR structure will never agree to because they will be considered 'too good' '. Shouldn't we use those, too? Especially those!
so the best pentagon criminals like tor's syverson have been 'working' on this for a while and there are tons of 'literature' - some of their stuff is here
https://www.freehaven.net/papers.html (The Free Haven Project)
notice that cypherpunks (...) like adam back (now blockstream CEO, google funded), a guy called goldberg and others have been/are involved with tor to varying degrees. Furthermore, adam back was subscribed to this list. His last message:
questions about bitcoin-XT code fork & non-consensus hard-fork
anyway, you Jim could try to get some ideas or/and help from back. Ver for marketing and funding and back for technical assistance may be a good combination.
I hope that if the proposal is technically sound, financing won't be a problem. My idea of a target amount of initial subsidy for setting up one node (ignoring software-development costs) is about $50. Myself, I'd like to charge about $30 for the hardware kit; the quantity-1 cost would be $90 or so, but I don't yet have an estimate for materials purchased in 1000+ quantity.
Here are some other datapoints :
https://maidsafe.net/
those ppl have allegedly been working on the problem... since forever. And they've gotten nowhere. They have even launched their own shitcoin/financial scam.
I see lots of fine words on their website. But they haven't accomplished much?
(MAID) price, charts, market cap, and other metrics | CoinMarketCap
And they are not the only ones who want to add economic incentives to 'file sharing'. The idea seems like a good one to me, but it doesn't seem to work.
If it were truly easy to attach an 18-terabyte HD to each node, that would make it a really interesting proposition... This, for $140 more...
https://storj.io/
"Decentralized Cloud Storage"
https://tron.network/
"TRON is an ambitious project dedicated to building the infrastructure for a truly decentralized Internet."
there prolly are a few more like that.
bottom line : there's a fair number of variables to take into account...
True, <sigh>, quite true.
Jim Bell
On 5/9/20, jim bell <jdb10987@yahoo.com> wrote:
At this point, I see the main impediment is finding somebody with the motivations and qualifications to write the software. An additional complication is that whoever volunteers, he might not be trusted by
While coders can provide design meta, coders can also be hired more or less to just meet some premade spec. It will be opensource where trust as to code exploit is somewhat reasonably determinable. Trust as to protocol spec design itself holding up to adversaries in operation is a different area of evaluation.
What is to be done?
People might start surveying relevant existing networks and papers, past and present, note and annotate all their design and features in some big comparison tables, cut out their bad parts, invent new parts, assemble all the then viable parts into some design specification. Parade it around to see how badly it gets attacked and broken. Then scrap or amend it, and code and deploy it. Or skip all those traditional formalities and just start hacking stuff together.
The one situation that I consider intolerable is that TOR remains as a monopoly in the "anonymization marketplace".
Yes, there should be some solid competition in the deployed overlay network space. A good generic overlay transport network might be one that will be able to carry, and thus cater to, many people's desires to otherwise go off and create single purpose networks that would generally have the same anonymous overlay feel but for different applications... such as one net for messaging, one net for storage, one for cryptocurrency, voice, grid compute, etc, etc. Doing ten different application nets seems a bit redundant in effort and tech, instead of ten different plugins into one net. Of course if you restrict yourself to only the same basic functions as Tor (onionland + exits) under an alternative new Tor design + say chaff, things become easier, at the expense of being able to plug more applications generically over it. Defining what you want to be, and how, is work. Coding is more trivial. Tor does have a monopoly over automagic exit capability. But networks like i2p and phantom do compete with it in offering pseudo TCP network stack compatible hidden services. There are probably ten or so reasonably well papered overlay networks that never got implemented and could be drawn from. The internet just transports messages around a packet switch; only the applications know whether they're storage, coin, voice, messages, etc.
On Fri, May 08, 2020 at 06:17:37PM +0000, jim bell wrote:
Excellent. I should mention that I have focused on the Raspberry Pi 4 merely because it was new, and seemed to be quite capable of serving as an anonymization node.
A warning Jim, you might consider calling any conceivable such nodes as "corporate surveillance reduction nodes" or "privacy hope enhancement nodes" (PHEPs has a good ring to it). Without qualifying "privacy" nodes, non technical users -will- be led astray; for example, they will be led to believe they are achieving private online communications. other.architekt fell into the same false assumption about Tor, not realising the very real and known problems directly about privacy on the Tor network. When some folks discover they have been deceived in their thinking in this way, there will be backlash against those whom they believe deceived them. Choose your words wisely.
If anything, we might call it "over-capable", but in the computer world that's not necessarily a bad thing. Standardized devices, especially if they are manufactured in huge quantity, become more economical. If somebody has an alternative idea for the hardware, now would be an excellent time to speak up. They also tend to be studied more intensely than obscure, low-volume devices, I would imagine. What's the old saying, something like "Yes, we're paranoid, but I sometimes wonder if we are paranoid ENOUGH?" https://www.goodreads.com/quotes/876669-yes-i-m-paranoid-but-am-i-paranoid-e...
Just as an additional minor example, the above sentences, juxtaposed as they were immediately after your first sentence ("Raspberry Pi 4 .. seemed to be quite capable of serving as an anonymization node") would most likely further feed the unthinking "reader with AP hope" into assuming, subconsciously inferring, or consciously believing, the following:
- So it's 'over-capable' as an anonymization node? Great, that's even better.
- So it's a 'standardized anonymization device', wow, how good is that?!
- And so the Raspberry Pis "also tend to be studied more intensely than obscure, low-volume devices" - but of course, that will make it not only private and secure, but hardened!
- And damn, he's also quoting "Yes, we're paranoid, but I sometimes wonder if we are paranoid ENOUGH?" - man, this Jim guy thinks just like I do, he must -really- be onto this security and privacy thing - wish I'd found this sooner, sign me up!
Again Jim, beware the backlash.
On Fri, 8 May 2020 18:17:37 +0000 (UTC) jim bell <jdb10987@yahoo.com> wrote:
One big improvement that I think we've settled on should be done is to implement 'chaff' into the protocol.
yes, chaff or constant rate links is a fundamental requirement. We should at least have a list of the basic properties of the system. Like:
1) peer to peer instead of a client/server setup (so there are no special nodes, scaling is complex)
2) peers negotiate links with different speeds
3) a peer has relatively long lived links to a few other peers - so it's a mesh network.
4) are there nodes that connect to web cesspool services and other arpanet services? How would that work? Notice that web cesspool servers send data in big chunks/high speed bursts, which is not compatible with constant rate links.
'chaff' might have been a problem if the people who host the nodes had some limited-data Internet service, but I am aware that Centurylink now offers 1 gigabit service for $65 monthly,
peers have to pay for their connections. So it's up to every user how much they pay and how much capacity their nodes have.
and I think that service has no monthly data limit.
that's probably bullshit and fraud, aka 'marketing', and not really related to the design of an anonymity network
I also thought of an idea to encrypt, or at least combine the outputs of two output nodes to generate the final data. Why? It is frequently (and quite wisely!) recommended that a home-user NOT act as an output node,
in principle there shouldn't be output nodes, but if there are, then don't expect much anonymity from them. As to hardware, that's something that each user should acquire himself, just like they choose an ISP. Things like the raspberries are relatively cheap and the specs look good, BUT they are garbage manufactured by broadcom-mosad-gchq-nsa. At any rate, an anonymity network is a software project and doesn't need to be 'bundled' with any particular hardware.
Sent with ProtonMail Secure Email. ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Sunday, May 10, 2020 8:12 PM, Punk-Stasi 2.0 <punks@tfwno.gf> wrote:
On Fri, 8 May 2020 18:17:37 +0000 (UTC) jim bell jdb10987@yahoo.com wrote:
One big improvement that I think we've settled on should be done is to implement 'chaff' into the protocol.
yes, chaff or constant rate links is a fundamental requirement. We should at least have a list of the basic properties of the system. Like :
1) peer to peer instead of a client/server setup (so there are no special nodes, scaling is complex)
2) peers negotiate links with different speeds
3) a peer has relatively long lived links to a few other peers - so it's a mesh network.
4) are there nodes that connect to web cesspool services and other arpanet services? How would that work? Notice that web cesspool servers send data in big chunks/high speed bursts, which is not compatible with constant rate links.
My net already meets points 1 and 3.
'chaff' might have been a problem if the people who host the nodes had some limited-data Internet service, but I am aware that Centurylink now offers 1 gigabit service for $65 monthly,
peers have to pay for their connections. So it's up to every user how much they pay and how much capacity their nodes have.
and I think that service has no monthly data limit.
that's probably bullshit and fraud, aka 'marketing', and not really related to the design of an anonymity network
I also thought of an idea to encrypt, or at least combine the outputs of two output nodes to generate the final data. Why? It is frequently (and quite wisely!) recommended that a home-user NOT act as an output node,
in principle there shouldn't be output nodes, but if there are, then don't expect much anonymity from them.
Agree. Output nodes should be a step in the roadmap after the network does it well by itself.
As to hardware, that's something that each user should acquire himself, just like they choose an ISP.
Things like the raspberries are relatively cheap and the specs look good, BUT they are garbage manufactured by broadcom-mosad-gchq-nsa. At any rate, an anonymity network is a software project and doesn't need to be 'bundled' with any particular hardware.
If nodes are continuously exchanging onioned packets (filled with real or dummy payload) it would be impossible to determine the source and the destination nodes
" If nodes are continuosly exchanging onioned packets (filled with real or dummy payload) it would be impossible to determine the source and the destination nodes" This was more like a test to see if it is true or false, rather than an assertion.
web cesspool servers send data in big chunks/high speed bursts, which is not compatible with constant rate links.
They are; your ISP rate or physical link speed already serves as the max rate, or go set a lower rate in your packet filter: things work fine, just slower. https://www.freebsd.org/cgi/man.cgi?query=dummynet People obviously can't shove a 320kbps audio file over a 256kbps link and expect to hear it in lossless realtime 1x speed direct off the wire; they have to save it first.
On Tue, 12 May 2020 04:59:32 -0400 grarpamp <grarpamp@gmail.com> wrote:
web cesspool servers send data in big chunks/high speed bursts, which is not compatible with constant rate links.
They are, your ISP rate or physical link speed already serves as max rate,
It's not clear what you mean, but if you think people could generally send dummy traffic to each other at the max rate advertised by their ISP-mosad-nsa-gchq local mafia, you're obviously wrong. Anyway, it's not a big problem. Since you can't just forward traffic from the overlay to the arpanet web cesspool and expect anonymity, my implicit point was that such a limitation needs to be taken into account. Services that don't require high speed/high volume traffic, like, say, mail, may be fine. 'Bursty' traffic won't work. One shouldn't build services for selling drugs or killing politicians using the centralized web client/server model.
web cesspool servers send data in big chunks/high speed bursts, which is not compatible with constant rate links.
Go play packet filter rate limits, works fine.
your ISP rate or physical link speed already serves as max rate
people could generally send dummy traffic to eacht other at the max rate advertised by their ISP
They already send all kind of traffic to each other today. Go plug in your 100Mbps NIC, go buy 10Mbps from your ISP, then go send 10Mbps worth of whatever you want between whoever you want. Works fine.
you can't just foward traffic from the overlay to the arpanet web cesspool and expect anonimity
That's further approachable with some network fill design than it is with tor or anything else today that do nothing. Possibly even a 10x odds reduction or more.
Services that don't require high speed/high volume traffic, like, say, mail, may be fine. 'Bursty' traffic won't work.
Define 'bursty'. Anything three packets or more might be considered as such. An email message is a lot of TCP packets; go plot their traffic curve. Go play with packet filter rate limits: introduce a wheat flow and give it priority over a pipe that is already maxed out with a chaff flow, and watch the flows trade off.
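The wheat-preempts-chaff idea above can be simulated in a few lines. A toy tick-based model (all names invented; a real implementation would live in the packet filter or link scheduler):

```python
from collections import deque

def schedule(link_rate: int, wheat_queue: deque, ticks: int):
    """Each tick the link emits exactly link_rate cells: pending wheat
    first, chaff filling the remainder. The aggregate rate never
    changes, so an outside observer sees a constant stream regardless
    of how much real traffic is flowing."""
    sent = []
    for _ in range(ticks):
        wheat = min(link_rate, len(wheat_queue))
        for _ in range(wheat):
            wheat_queue.popleft()        # wheat cells go out with priority
        sent.append((wheat, link_rate - wheat))   # (wheat, chaff) per tick
    return sent

q = deque(range(7))                      # 7 real cells pending
log = schedule(link_rate=4, wheat_queue=q, ticks=3)
assert log == [(4, 0), (3, 1), (0, 4)]  # wheat drains, chaff backfills
assert all(w + c == 4 for w, c in log)  # constant observed rate
```

The trade-off grarpamp describes shows up directly: as the wheat queue drains, chaff takes over the slack, and the wire rate stays flat throughout.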
On Tue, 12 May 2020 21:48:34 -0400 grarpamp <grarpamp@gmail.com> wrote:
web cesspool servers send data in big chunks/high speed bursts, which is not compatible with constant rate links.
Go play packet filter rate limits, works fine.
well yes you can rate limit any application. If you rate limit the web browser then the typical 5mb pages (95% malware) won't load in 0.1s, 'like they should'. That's the sense in which web browsing is not 'compatible'. Now, I'm not saying that's a real problem. It's only a problem for "normies" who "browse the web" but those users are not the target for an actual anonymity network. Anyway, that's a secondary issue. I suggest you post your ideas/views on the basic architecture for an overlay that works as a "tor replacement".
In my opinion every node of the network would decide what its limit on bandwidth is, at the TCP level. The global traffic will then travel through a heterogeneous graph of connections with different bandwidth at every edge. Sent with ProtonMail Secure Email. ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Wednesday, May 13, 2020 4:47 AM, Punk-Stasi 2.0 <punks@tfwno.gf> wrote:
well yes you can rate limit any application. If you rate limit the web browser then the typical 5mb pages (95% malware) won't load in 0.1s, 'like they should'.
Moot since this chaff fill does not rate limit or impede wheat traffic; some overhead, but the user should see roughly the same speed as tor, i2p, phantom, etc.
the basic architecture for an overlay that works as "tor replacement".
Would rather see a TA resistant general purpose overlay transport network that can serve many uses. A 'tor replacement' would be just one module in that.
On Thu, May 14, 2020, 5:02 AM grarpamp <grarpamp@gmail.com> wrote:
the basic architecture for an overlay that works as "tor replacement".
Would rather see a TA resistant general purpose overlay transport network that can serve many uses. A 'tor replacement' would be just one module in that.
Do you mean Template Attacks ( https://wiki.newae.com/Template_Attacks ) or something else? Template attacks look so incredibly cool. I want to learn to do them so I can put a battery-operated olimex in a few insulated layers of soldered foil and see if I can still read the secret key by using a parabolic reflector.
That sounds cool. Let's design and build it.
Algorithm-agnostic anonymization network.
Let's say we are agreed that a new anonymization network should be implemented. One problem is that advances in such networks generally require implementing entirely new networks to check out new algorithms and new features, so such improvements are strongly deterred. After all, that's one reason that TOR doesn't get as many improvements as we might like. (Another reason is that it is financed, at least in part, by people who are hostile to a "too-good" anonymization system.)
Sure, we could implement a new set of nodes, hopefully at least 1000 in number. I think that ordinary, residential users should be able to run nodes. Internet services are provided with as much as 1 terabyte/month capacity, and possibly unlimited as well. (CenturyLink 1 Gbps, for example) We could implement a new onion-routing system, akin to TOR but with some improvements, most prominently adding chaff. So far, so good. But there may be other ideas, other improvements that people might want to try out.
I've already proposed that it should be possible for just about every node to be an output node. Possibly every node should be an input node, as well. The big impediment to this is that people naturally want to avoid the potential legal harassment they might get if their IP node sent out gigabytes of 'in the clear' forbidden data. My ideas for a solution? Output data could be encrypted, enough to make it unreadable except by the end recipient. The operator of an output node that emits only seemingly-random data would be hard to hold legally responsible for that forbidden content, since nobody expects him to know how to convert it into plaintext. And/or, the data can be output into two streams, which would be XOR'd with each other only by the intended recipient to find the data.
And, this network could also run different anonymization algorithms, simultaneously. Onion-routing may have its own limitations. Somebody might have a good idea for an alternative system. Why shouldn't it be possible to serve two algorithms? Or dozens? How about Bittorrent as well? Imagine 1000 nodes, each equipped with a 10-terabyte hard drive?
Jim Bell
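The two-stream XOR idea can be sketched directly: split the plaintext into a one-time random pad and a masked stream, each of which alone looks like noise. A minimal sketch (function names are invented; in practice the two streams would exit via different nodes):

```python
import os

def split_streams(plaintext: bytes):
    """Split output into two streams: one is pure randomness, the other
    is plaintext XOR that randomness. Either stream alone is
    statistically indistinguishable from random noise, so neither
    output node's operator can be shown to have emitted readable
    content; only the intended recipient, holding both, recombines."""
    pad = os.urandom(len(plaintext))
    masked = bytes(a ^ b for a, b in zip(plaintext, pad))
    return pad, masked

def recombine(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

s1, s2 = split_streams(b"forbidden data")
assert s1 != b"forbidden data" and s2 != b"forbidden data"
assert recombine(s1, s2) == b"forbidden data"
```

The cost is doubling the output bandwidth, and the recipient must successfully receive both streams before reading anything.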
Hi, I am preparing a draft of a draft for a spec of what I think would be the ideal complementary anonymization overlay that fits on the already running distributed system I am working on, which is USPS and is very good. It would be great if many ideas arise in this list so we can start focusing a conversation. My personal interest is to achieve a system that can provide Sybil protection for voting systems. Which is the reason Tor cannot be used with USPS, since one could create millions of colluding evil nodes and ditch the system. I limit it using IPv4 because it is very easy to enforce a homogeneously distributed network by controlling the maximum number of nodes/votes per IP. This limit will grow as the IPs are filled with voting power. I already have the Sybil protection implemented and the network of nodes running, exchanging encrypted traffic about consensus. The only things I have left are: onion routing (or a faster alternative that doesn't exist but I am researching), and chaff traffic. ...and probably more considerations. I am not expert in anon overlays, but perhaps we can brainstorm so I can become one : ) Thanks for reading OA
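The per-IPv4 Sybil throttle OA describes can be sketched as a simple admission check. A toy version (the cap of 4 nodes per IP and all names are invented for illustration; the real limit would grow with voting power, per the message above):

```python
from collections import Counter
from ipaddress import ip_address

MAX_NODES_PER_IP = 4   # hypothetical cap on node identities per IPv4 address

class Registry:
    """Toy Sybil throttle: admit at most MAX_NODES_PER_IP node
    identities per source IPv4 address, making it expensive to spin up
    millions of colluding nodes from a small address pool."""
    def __init__(self):
        self.per_ip = Counter()

    def admit(self, ip: str) -> bool:
        key = ip_address(ip)           # normalizes / validates the address
        if self.per_ip[key] >= MAX_NODES_PER_IP:
            return False               # over the per-IP quota: reject
        self.per_ip[key] += 1
        return True

r = Registry()
assert all(r.admit("198.51.100.7") for _ in range(4))
assert not r.admit("198.51.100.7")     # fifth node from the same IP rejected
assert r.admit("203.0.113.9")          # a different IP is still admitted
```

The obvious limit of the scheme, worth noting, is that an adversary with a large address pool (a cloud provider, or a state) still scales linearly with the IPs it controls.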
On Wednesday, May 20, 2020, 07:27:40 PM PDT, other.arkitech <other.arkitech@protonmail.com> wrote:
Jim Bell's comments follow: I hope that what I've suggested, an anonymization constellation that can run multiple algorithms simultaneously, is practical and can be implemented successfully. I suppose what I'm describing amounts to multi-tasking, and my understanding is that's not trivial. What does everyone think about this? Can it be done?
A general purpose network sounds nice. Everything is doable. What do you think of forking the codebase of an existing network, like tor or gnunet or one of the newer examples from anonymity research? On Thu, May 21, 2020, 1:55 AM jim bell <jdb10987@yahoo.com> wrote:
On Wednesday, May 20, 2020, 07:27:40 PM PDT, other.arkitech < other.arkitech@protonmail.com> wrote:
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Tuesday, May 19, 2020 10:41 PM, jim bell <jdb10987@yahoo.com> wrote:
Algorithm-agnostic anonymization network.
Let's say we are agreed that a new anonymization network should be implemented. One problem is that advances in such networks generally require implementing entirely new networks to check out new algorithms and new features, such improvements are strongly deterred. After all, that's one reason that TOR doesn't get as many improvements as we might like. (Another reason is that it is financed, at least in part, by people who are hostile to a "too-good" anonymization system.)
Sure, we could implement a new set of nodes, hopefully at least 1000 in number. I think that ordinary, residential users should be able to run nodes. Internet services are provided with as much as 1 terabyte/month capacity, and possibly unlimited as well. (CenturyLink 1 Gbps, for example) We could implement a new onion-routing system, akin to TOR but with some improvements, most prominently adding chaff. So far, so good. But there may be other ideas, other improvements that people might want to try out.
I've already proposed that it should be possible for just about every node to be an output node. Possibly every node should be an input node, as well. The big impediment to this is that people naturally want to avoid the potential legal harassment they might get if their IP node sent out gigabytes of 'in the clear' forbidden data. My ideas for a solution? Output data could be encrypted, enough to make it unreadable except by the end recipient. The operator of an output node that emits only seemingly-random data would be hard to hold legally responsible for that forbidden content, since nobody expects him to know how to convert it into plaintext. And/or, the data can be output into two streams, which would be XOR'd with each other only by the intended recipient to find the data.
And, this network could also run different anonymization algorithms, simultaneously. Onion-routing may have its own limitations. Somebody might have a good idea for an alternative system. Why shouldn't it be possible to serve two algorithms? Or dozens? How about Bittorrent as well? Imagine 1000 nodes, each equipped with a 10-terabyte hard drive?
Jim Bell
Hi, I am preparing a draft of a draft for a spec of what I think would be the ideal complementary anonymization overlay that fits on the already-running distributed system I am working on, which is USPS, and is very good. It would be great if many ideas arise in this list so we can start focusing the conversation. My personal interest is to achieve a system that can provide Sybil protection for voting systems, which is the reason Tor cannot be used with USPS, since one could create millions of colluding evil nodes and ditch the system. I limit it using IPv4 because it is very easy to enforce a homogeneously distributed network by controlling the maximum number of nodes/votes per IP. This limit will grow as the IPs are filled with voting power. I already have the Sybil protection implemented and the network of nodes running, exchanging encrypted traffic about consensus. Only two things are left: onion routing (or a faster alternative that doesn't exist yet but I am researching), and chaff traffic.
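The per-IP Sybil limit described above might be sketched like this (a hypothetical illustration; `MAX_NODES_PER_IP`, `NodeRegistry`, and `admit` are invented names, not USPS's actual code): admission is capped per IPv4 address, so amassing voting power requires amassing routable addresses rather than just spawning processes.

```python
from collections import defaultdict

MAX_NODES_PER_IP = 2  # hypothetical cap; the real USPS limit may differ

class NodeRegistry:
    """Cap nodes (and hence votes) per IPv4 address, so a Sybil
    attacker's cost scales with routable addresses, not processes."""
    def __init__(self, cap: int = MAX_NODES_PER_IP):
        self.cap = cap
        self.nodes_by_ip = defaultdict(set)

    def admit(self, node_id: str, ipv4: str) -> bool:
        owned = self.nodes_by_ip[ipv4]
        if node_id in owned:
            return True          # already registered
        if len(owned) >= self.cap:
            return False         # address is full: Sybil rejected
        owned.add(node_id)
        return True

reg = NodeRegistry()
assert reg.admit("n1", "203.0.113.5")
assert reg.admit("n2", "203.0.113.5")
assert not reg.admit("n3", "203.0.113.5")  # third node on one IP refused
```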
Jim Bell's comments follow:
I hope that what I've suggested, an anonymization constellation that can run multiple algorithms simultaneously, is practical and can be implemented successfully. I suppose what I'm describing amounts to multi-tasking, and my understanding is that's not trivial. What does everyone think about this? Can it be done?
...and probably more considerations. I am not an expert in anon overlays, but perhaps we can brainstorm so I can become one : )
Thanks for reading OA
Components of software are supposed to be reusable, which is one of its efficiencies. Of course, if there is some sort of flaw already present, reusing it adopts the flaw. Nevertheless, I suspect that it is more valuable to get SOMETHING working, relatively rapidly, especially if the same group of hardware nodes can run multiple 'virtual' anonymity networks. I don't have the expertise to weigh in on the issue of using the code of a specific network. But if the new network we are building can readily run multiple examples of code, I don't see anything wrong with trying to implement multiple software concepts. Jim Bell
On Sunday, May 24, 2020, 02:59:42 PM PDT, Karl <gmkarl@gmail.com> wrote: A general purpose network sounds nice. Everything is doable. What do you think of forking the codebase of an existing network, like tor or gnunet or one of the newer examples from anonymity research?
On 5/24/20, Karl <gmkarl@gmail.com> wrote:
A general purpose network sounds nice. Everything is doable.
What do you think of forking the codebase of an existing network, like tor or gnunet or one of the newer examples from anonymity research?
What networks ultimately do, whether they are "for" voice, video, IRC data, "messages", email, files, nntp, etc., http-style interactivity, file storage and retrieval, cryptocurrency, etc., etc.... is move data from A to B... that's it, that's all they do. They move a "message", a blob of bits, a "packet", from A to B. Potentially but rarely in multicast-ish ways, sometimes in route-relay-ish ways, but always, ultimately, at the lowest layer, from A to B. [1]
There's probably quite little return in doing all the research just to build some application-specific network to be secure in just that application, because under the hood all it did was secure a more specific form of A to B for that app alone. Yet by extending the initial upfront research a bit more, you reach a general form of secure A to B for all applications, such that each new application needs to do almost no work to ride on top; applications become essentially just plugins over a data-moving network, not networks themselves.
Further, there are timely-need metrics... plowing resources into making the best "chat" net starves research from all other standalone app-specific nets; it builds incompatible towers of networks which cannot interoperate, and they compete for exclusive node count, funding, etc., instead of combining node count and bandwidth for the commons. If need be, nodes within the commons can offer more specific transport/plugin features.
Last, creating dozens of app-specific nets cannot take advantage of riding and hiding in each other's noise over a common transport overlay layer. And it makes more risk of a singled-out political attack against one app than against a general-purpose net. Perhaps tor is not best, as all it does is TCP.
Phantom offers raw IPv6 for all existing apps, and is currently light enough that it may be an ok candidate for whitepapering a dynamic-chaff anti-traffic-analysis bolt-on tech proof of concept, but IPv6 is not a generic data-message-handling network in the sense of application-level concepts. Building a more generic network may serve better long term, but will produce and require new apps compiled to its plugin API, just as the i2p-snark torrent app is specific to the i2p net and has to use its API instead of just using IPv6. More generic nets could offer IPv6 as a plugin on top of them. But the extra layering to do that will make them slower than a tor/phantom-style IP base alone. Not to say "forking" any one net is better than another, as 10 or more already extant or papered could be evaluated for that; just that a generic, or at least IP-API, design may be more likely to produce a mass payoff effect than, say, building the next singularly focused impenetrable "cypherpunk mixmaster email network", which is a useless waste to anyone wanting to do browser, IRC, file, coin, voice, etc. [1] Note the only place you'll find research on anything different from A to B is from people trying to design fundamental alternatives to IP networks... broadcast, radio, satcom, etc.
https://arstechnica.com/information-technology/2016/08/building-a-new-tor-th... "Tor hasn't changed, it's the world that's changed." -- Tor Project. Five years later, tor still looks the same as 20 years prior, while the world's adversaries advanced 20 years and counting.
A $ hardware offering / reference setup is nice. And not excluding any possible donation / subsidy models. Though by keeping an x86-compatible port, bootable via USB / CD, many people's old computers can be repurposed to the net at no co$t.
On 10/13/19, jim bell <jdb10987@yahoo.com> wrote:
arbitrarily-long hops (256 hops? 65,536 hops? An even larger power-of-2 hops?)
Hops, alone, don't add much protection beyond a good routing of 3 to 9 or so. They're more for fucking with traditional jurisdictional log-reconstruction trails than for dealing with GPA's, GT-1's and GAA's, including Sybils, that can just follow traffic patterns across the mesh, bisecting in real time, or more generally... sort and match traffic patterns between all sets of two edge hosts. If applied together with other tech, especially regarding nets where you want any kind of useable stream (even delivery of storage or msgs is in a way a stream), going beyond those hops is going to get really unperformant, with less security return than thought. You can demo this today by recompiling Tor and Phantom, and tweaking I2P, to set arbitrary hop levels beyond single digits... are you more secure from G* as a result... probably not.
On Mon, Oct 21, 2019 at 06:59:00AM -0400, grarpamp wrote:
On 10/13/19, jim bell <jdb10987@yahoo.com> wrote:
arbitrarily-long hops (256 hops? 65,536 hops? An even larger power-of-2 hops?)
Hops, alone, don't add much protection beyond a good routing of 3 to 9 or so. They're more for fucking with traditional jurisdictional log reconstruction trails,
That's a point.
than dealing with GPA's, GT-1's and GAA's including Sybil
GPA - Global Passive Adversary
GAA - Global Active Adversary
GT-1 - ??
that can just follow traffic patterns across the mesh bisecting in real time, or more generally... sort and match traffic patterns between all sets of two edge hosts.
"between two edge hosts (aka src and dst)" is the point why more than say 3 to 9 hops adds little to nought - and if you're onion routing, not only reducing bw by [header_size] per layer, but consuming overall network bandwidth according to hop count (again, to little or no advantage to privacy).
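The bandwidth cost of extra hops can be made concrete with a toy model (illustrative numbers only; real onion designs typically use fixed-size cells, so the layering overhead differs in detail): each additional hop both adds one layered header and forces one more full retransmission of the cell across the network.

```python
def onion_overhead(payload: int, header: int, hops: int) -> tuple[int, float]:
    """Total bytes moved across the whole network for one cell, and the
    sender's goodput fraction, when each hop adds one layered header
    and re-transmits the full cell."""
    cell = payload + header * hops   # one encryption-layer header per hop
    total = cell * hops              # every hop forwards the whole cell
    return total, payload / cell

for hops in (3, 9, 256):
    total, goodput = onion_overhead(512, 16, hops)
    print(f"{hops:4d} hops: {total:9d} bytes moved, goodput {goodput:.3f}")
```

Network-wide cost grows roughly quadratically with hop count while goodput shrinks, which is the "little or no advantage to privacy" trade-off above in numbers.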
If applied together with other tech, especially regarding nets where you want any kind of useable stream
(even delivery of storage or msgs is in a way a stream),
indeed
beyond those hops is going to get really unperformant, and less security return than thought.
No increase in security in relation to conceivable attacks. Jurisdictional hops - e.g. through Russia if you're avoiding USGov etc - sound conceptually useful.
You can demo today by recompile Tor and Phantom and tweak I2P, to set arbitrary hop levels beyond single digits... are you more secure from G* as result... probably not.
Link(s) to Phantom please?
GAA GPA GT-1 - ??
GT-1: Global Tier-1 Internet and Telecom Backbones, aka: rats, fiber-splitting, log- and data-giving, government-cocksucking yes men and apologists. Except maybe Joseph Nacchio of Qwest, so they jailed him too.
"between two edge hosts (aka src and dst)" is the point why more than say 3 to 9 hops adds little to nought - and if you're onion routing, not only reducing bw by [header_size] per layer, but consuming overall network bandwidth according to hop count
Which is why onioncat bittorrent users had a howto on setting BT usage rate limits to 1/7 under Tor limits, to give that bandwidth back. And partly why people should be able to understand that if they dedicate 1/Nth of their ISP pipe to a fulltime chaff-padding fill network, they can still get that entire rate as wheat on demand whenever needed, same as setting any overlay network today to 1/Nth. And see that a ping through an empty network still has roughly the same usable latency as a ping through a network just at saturation, or at any other node-to-node fixed transport contract, so long as CPU is available to perform the regulation.
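The 1/Nth arithmetic is simple enough to write down (a sketch with made-up numbers; `link_schedule` is an invented name): the node-to-node contract rate is fixed, wheat displaces chaff up to that rate, and the observable line rate never varies with demand.

```python
def link_schedule(pipe_bps: int, n: int, demand_bps: int) -> tuple[int, int, int]:
    """Fixed node-to-node transport contract of pipe/n: the wire always
    carries exactly the contract rate, wheat displacing chaff on demand,
    so the observable rate never reveals activity."""
    contract = pipe_bps // n
    wheat = min(demand_bps, contract)
    chaff = contract - wheat
    return contract, wheat, chaff

# 100 Mbit/s pipe with 1/10th dedicated to the fill network:
assert link_schedule(100_000_000, 10, 3_000_000) == (10_000_000, 3_000_000, 7_000_000)
# The full contracted rate is available as wheat whenever needed:
assert link_schedule(100_000_000, 10, 50_000_000) == (10_000_000, 10_000_000, 0)
```

Whether idle or saturated, the link carries the same contract rate, which is why latency through an empty fill network roughly matches latency through a busy one.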
(even delivery of storage or msgs is in a way a stream)
Even fixed-envelope-size messaging mixnets can end up pathing your message through a bunch of idle nodes to your recipient... no amount of store-and-forward random-delay mixing is going to save you from end-to-end traffic analysis there. And people are talking about trying to use actual applications... mail, IRC, voice, video, file transfer, web services, shells, etc... over TCP / UDP etc... over overlays... all ultimately, end to end, input to output, streams of Bytes^N and pulsations and waves that stick out like canaries... over today's overlay networks, whether mix or circuit, that have degenerate paths, no traffic fill, etc... Today's darknet overlays (ie: Tor onionspace, Pond, etc) survive perhaps not because they're particularly strong, but because their weakness is currently an open TOP SECRET, remanding all finds out to parallel construction. The encryption is probably pretty good. The who-is-talking-to-whom is quite likely not the best regarding G*. People think it's hard to sift, distill, analyze and line up the waveforms coming off 2^32 IP addresses... it's not. This is not the old game of manually picking up the phone, calling ISPs, and tracing back 1990s crackers anymore. It's f(n) 24x365 lights-out in Bluffdale and elsewhere... point, click, you're done. Next-generation overlay networks must not fail to put serious effort into characterizing and mitigating the various G* traffic analysis, and Sybil, risks. Many of today's nets write those off, and/or irresponsibly hush those topics under the rug (no doubt to appear better than they are). That's sad, and shameful.
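A toy illustration of why lining up those waveforms is cheap (invented traffic numbers; real traffic-confirmation attacks are considerably more sophisticated): per-window byte counts observed at a suspected source edge and destination edge correlate almost perfectly once shifted by the network's latency, with no decryption needed.

```python
def pearson(x, y):
    """Plain Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented per-window byte counts at two suspected edges; the exit-side
# trace is the entry-side trace shifted by one window of latency.
src = [0, 40, 37, 0, 0, 90, 88, 0, 12, 0]
dst = [0, 0, 41, 38, 0, 0, 91, 87, 0, 13]
assert pearson(src[:-1], dst[1:]) > 0.95  # near-perfect match at lag 1
assert pearson(src, dst) < 0.5            # poor match at lag 0
```

Without constant-rate fill, on/off bursts survive any number of hops, which is the point about degenerate paths above.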
Jurisdictional hops - e.g. through Russia if you're avoiding USGov etc - sound conceptually useful.
Intentional routing lets you select and diversify across different sets of fiber taps and Sybil deployment efforts, serves as fun random takedown splash page badge generator, increases spook workunits and their private backhaul lambdas required, etc...
Link(s) to Phantom please?
https://code.google.com/archive/p/phantom/ Other repos are not merged in.
On Wed, Oct 23, 2019 at 01:05:29AM -0400, grarpamp wrote:
GAA GPA GT-1 - ??
Global Tier-1 Internet and Telecom Backbones aka: rats, fiber splitting log and data giving, government cocksucking yes men and apologists
Except maybe Joseph Nacchio of Qwest, so they jailed him too.
"between two edge hosts (aka src and dst)" is the point why more than say 3 to 9 hops adds little to nought - and if you're onion routing, not only reducing bw by [header_size] per layer, but consuming overall network bandwidth according to hop count
Which is why onioncat bittorrent users had howto on setting BT usage rate limits 1/7 under Tor limits to provide that bandwidth back.
And partly why people should be able to understand that if they dedicate 1/Nth of their ISP pipe to a fulltime chaff padding fill network they can still get that entire rate as wheat on demand whenever needed, same as setting any overlay network today to 1/Nth.
And see that a ping through an empty network still has roughly the same usable latency as a ping through a network just at saturation, or at any other node-to-node fixed transport contract, so long as CPU is available to perform the regulation.
Sounds logically sane. And with actual friend-to-friend connections, if my friend jumps on, downloads a movie, jumps off, well, ok, he was keen for a movie; and if he does it again (jump on, big dl, jump off), I'll accept that too, but I'll speak with him and say "hey, you know you need to give back X3 or X7 times your DL, to save your sorry arse from the MAFIAA?", and if he wants to continue under my wing, he's going to give back, or get booted.
And to "give prime authority to every node" in their own routing decisions means sharing of node IDs and node metrics. "Sure - generate a new node ID, but $MY_FRIEND, as I said last time, if you ain't making up for past sins, you ain't connecting to me" - thus generating a new node ID is almost irrelevant; what is relevant is finding meat-space friends who will actually allow him to connect to their nodes :D
Node metrics, and end-user authority, FTW :) Did I mention "for the muffaluggerin win"?
(even delivery of storage or msgs is in a way a stream)
Even fixed envelope size messaging mixnets can end up pathing your message through a bunch of idle nodes to your recipient... no amount of store and forward random delay mixing is going to save you from end to end traffic analysis there.
Indeed. Ack.
And people are talking about trying to use actual applications... mail, IRC, voice, video, file transfer, web services, shells, etc... over TCP / UDP etc... over overlays... all ultimately, end to end, input to output, streams of Bytes^N and pulsations and waves that stick out like canaries... over today's overlay networks, whether mix or circuit, that have degenerate paths, no traffic fill etc...
Today's darknet overlays (ie: Tor onionspace, Pond, etc) survive perhaps not because they're particularly strong, but because their weakness is currently an open TOP SECRET, remanding all finds out to parallel construction.
Shhhh ... you're not meant to say such things publicly grarpamp, you should -know- that already :)
The encryption is probably pretty good. The who is talking to who is quite likely not the best regarding G*.
People think it's hard to sift distill analyze and line up the waveforms coming off 2^32 IP addresses... it's not. This is not the old game of manually picking up the phone calling ISPs and tracing back 1990s crackers anymore. It's f(n) 24x365 lights out in Bluffdale and elsewhere... point, click, you're done.
Next generation overlay networks must not fail to put serious effort into characterizing and mitigating the various G* traffic analysis, and Sybil, risks.
Ack. Anything less is not worth our effort. Keep those thoughts coming - the more specific the better.
Many of today's nets write those off, and/or irresponsibly hush those topics under the rug (no doubt to appear better than they are). That's sad, and shameful.
Ack.
Jurisdictional hops - e.g. through Russia if you're avoiding USGov etc - sound conceptually useful.
Intentional routing lets you select and diversify across different sets of fiber taps and Sybil deployment efforts, serves as fun random takedown splash page badge generator, increases spook workunits and their private backhaul lambdas required, etc...
Link(s) to Phantom please?
https://code.google.com/archive/p/phantom/
Other repos are not merged in.
I keep getting 404, even with JS enabled. Anyone got a publicly accessible Phantom repo mirror? Or willing to 7z into ~15MiB chunks and email to me? (Best is just upload somewhere public... so all can dl.)
Link(s) to Phantom please?
https://code.google.com/archive/p/phantom/
Other repos are not merged in.
I keep getting 404, even with JS enabled.
Anyone got a publicly accessible Phantom repo mirror?
Or willing to 7z into ~15MiB chunks and email to me?
(Best is just upload somewhere public... so all can dl.)
Found an old archive I had from 2011, ~1.7MiB:
phantom-r30-2011-09-12-181357.tar.gz
and a few papers, ~2MiB total:
phantom-pres.ppt
phantom-implementation-paper.pdf
phantom-design-paper.pdf
And a vid in two versions (the first might just be audio, dunno):
# ~11MiB: DEF CON 16 Hacking Conference Presentation By Magnus Brading - The Phantom Protocol - Audio.m4b
# ~117MiB: DEF CON 16 Hacking Conference Presentation By Magnus Brading - The Phantom Protocol - Slides.m4v
So if there's a newer update to the code since 2011, it looks like it would easily fit in an email - happy to receive such if someone has that...
I keep getting 404, even with JS enabled.
Link before shutdown was... https://code.google.com/p/phantom
Other repos are not merged in. So if there's a newer update to the code since 2011
There was a linux port (maybe it was even in a distro, debian?) that ran in hardcode meta mode. About that time there was some devel to crypto, the goodie, and toward disk auto meta... via tickets, etc. Might be here somewhere.
On Wed, 23 Oct 2019 05:42:47 -0400 grarpamp <grarpamp@gmail.com> wrote:
I keep getting 404, even with JS enabled.
Link before shutdown was... https://code.google.com/p/phantom
Other repos are not merged in. So if there's a newer update to the code since 2011
"3.1. Design Assumptions 1. The traffic of every node in the network is assumed to be eavesdropped on (individually, but not globally in a fully correlated fashion) by an external party." So as far as I can tell, phantom is not protected against traffic correlation attacks, so it's just more tor-like garbage, and can be ignored. If anybody read the whole paper and found something interesting, quote it.
"3.1. Design Assumptions
Phantom's explanation of features, models, threats, and reasons is poorly worded and often makes unrelated, moot, or irrelevant points. It's easier to just look at how the network works.
so as far as I can tell phantom is not protected against traffic correlation attacks
The point of looking at other networks is they may provide design bits that can be assembled into future networks. Some of these networks would be easy to add a layer of fill traffic to.
[Phantom is] more tor-like
Except... DHT instead of DA's. Random pathing instead of weighted. IPv6 instead of TCP-only onion addressing. Arbitrary hops for pedants. Potential exit VPN termination could be similar to an I2P outproxy. See the Phantom paper for more.
On Monday, October 21, 2019, 04:00:16 AM PDT, grarpamp <grarpamp@gmail.com> wrote: On 10/13/19, jim bell <jdb10987@yahoo.com> wrote:
arbitrarily-long hops (256 hops? 65,536 hops? An even larger power-of-2 hops?)
Hops, alone, don't add much protection beyond a good routing of 3 to 9 or so. They're more for fucking with traditional jurisdictional log reconstruction trails, than dealing with GPA's, GT-1's and GAA'a including Sybil that can just follow traffic patterns across the mesh bisecting in real time, or more generally... sort and match traffic patterns between all sets of two edge hosts.
Okay, I was just joshing about the "256 hops" part. While there may not be any hard limit built into the system, I believe I later said that 16 hops would be enough for anybody. (Somehow, don't I remember that about 35 years ago Bill Gates said something like, "640 kilobytes of main memory would be enough for anybody"? We see where THAT led!)
If applied together with other tech, especially regarding nets where you want any kind of useable stream (even delivery of storage or msgs is in a way a stream), beyond those hops is going to get really unperformant, and less security return than thought.
You can demo today by recompile Tor and Phantom and tweak I2P, to set arbitrary hop levels beyond single digits... are you more secure from G* as result... probably not.
However, one use of "many" hops would be the generation of chaff 'traffic'. The goal, presumably, of adding chaff is to disguise the real traffic. To do that, it would be desirable to make that chaff look as much as possible like real traffic. A packet sent through all, or a large number of, nodes will have a genuine path. Assuming the spy bugs one node, he will see traffic come in, and leave for another node. Just like an ordinary instance of traffic. An alternative would be a system where each node spontaneously generates chaff. Spying on a node would see such spontaneous 'traffic' generations. Maybe it would be clearer that that was chaff? But I'm just throwing out ideas. I assume that the 'chaff' issue has been professionally detailed in some academic papers. Jim Bell
On Mon, Oct 21, 2019 at 06:06:15PM +0000, jim bell wrote:
On Monday, October 21, 2019, 04:00:16 AM PDT, grarpamp <grarpamp@gmail.com> wrote:
On 10/13/19, jim bell <jdb10987@yahoo.com> wrote:
arbitrarily-long hops (256 hops? 65,536 hops? An even larger power-of-2 hops?)
Hops, alone, don't add much protection beyond a good routing of 3 to 9 or so. They're more for fucking with traditional jurisdictional log reconstruction trails, than dealing with GPA's, GT-1's and GAA'a including Sybil that can just follow traffic patterns across the mesh bisecting in real time, or more generally... sort and match traffic patterns between all sets of two edge hosts.
Okay, I was just joshing about the "256 hops" part. While there may not be any hard limit built into the system, I believe I later said that 16 hops would be enough for anybody. (Somehow, don't I remember that about 35 years ago Bill Gates said something like, "640 kilobytes of main memory would be enough for anybody"? We see where THAT led!)
If applied together with other tech, especially regarding nets where you want any kind of useable stream (even delivery of storage or msgs is in a way a stream), beyond those hops is going to get really unperformant, and less security return than thought.
You can demo today by recompile Tor and Phantom and tweak I2P, to set arbitrary hop levels beyond single digits... are you more secure from G* as result... probably not.
However, one use of "many" hops would be the generation of chaff 'traffic'. The goal, presumably, of adding chaff is to disguise the real traffic.
Sort of. The goal of chaff is to fill the blanks - so when I'm not sending wheat, in Tor land it's obvious that I've stopped sending. Chaff means when I stop sending, my node still sends chaff - just purely random-filled packets - so that an observer cannot tell whether I've begun or ended a connection, or whether I'm sending anything at all. (Same for the receive loop, of course, too...)
To do that, it would be desirable to make that chaff look as much as possible like real traffic.
Ahh, I see the thought. Yes, that thought makes sense on first blush, but the problem is, if our encryption is so poor that chaff packets are distinguishable from wheat, our chaff system is broken. And yes, as above, chaff is to fill the gaps, not to create flows or streams that are not otherwise needed - the goal is simply to disguise traffic, not to create completely arbitrary fill traffic (and if the encryption is not broken, all traffic should look completely arbitrary - this is a fundamental 'broken' with Tor's non chaff filled TCP flows).
A packet sent through all, or a large number of nodes will have a genuine path.
Yes, "chaff paths" is the concept here; now I understand. I believe that would be counterproductive to network utilisation and, as coderman points out, for too little gain. I can see how chaff paths could possibly make sense in the Tor network. Also, but more fundamentally, what we are aiming for with chaff fill, at least in a packet-switched network, is something better than "chaff paths":
- we want streams to not be distinguishable - this is a known (and fundamental) problem with Tor
- chaff packets seek a functional improvement on this fundamental problem with Tor
- the reason Tor is so bad is that entry and exit nodes are dominated by GPAs, and the "default set up of Tor Browser" for an end user is therefore fundamentally broken - this is why I stress the importance of running your own home node (if you're using Tor at all), and more so, running that as an exit node if you want any reasonable plausible deniability
Covfefe net hopes to overcome this fundamental Tor (as it stands) problem.
Assuming the spy bugs one node, he will see traffic come in, and leave for another. Just like an ordinary instance of traffic.
"Chaff fill" is a misnomer perhaps leading people's thoughts astray; we should say something like:
Chaff packets:
1) Are, to an onlooker or snooper, indistinguishable from wheat packets, both in their size and in their timing of delivery, and in all consequential timing for packets returning from, or outgoing from, the node that receives a chaff packet.
2) Are only ever used as padding to fill gaps, so that stream begin and stream end are not distinguishable (to the snoop), and also so that stream data and surrounding chaff packets are not distinguishable from one another.
(A stream is a packet flow such as a request, and the corresponding response, for the content of a web page.)
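Properties 1) and 2) suggest a per-tick transmit loop along these lines (a hedged sketch; `CELL`, the framing, and the stand-in `encrypt` are invented for illustration, not any deployed protocol): every tick exactly one fixed-size cell goes on the wire, carrying wheat if any is queued and chaff otherwise, with the wheat/chaff tag hidden inside the encrypted payload so the snoop sees only a constant cell rate.

```python
import os
import queue
import struct

CELL = 512            # fixed on-wire cell size (illustrative)
WHEAT, CHAFF = 1, 0   # tag travels inside the encrypted payload

def encrypt(plain: bytes) -> bytes:
    """Stand-in for a real per-link AEAD cipher: here we just prepend a
    random 16-byte 'nonce' so every cell looks uniformly random."""
    return os.urandom(16) + plain

def next_cell(outbox: "queue.Queue[bytes]") -> bytes:
    """Emit exactly one cell per clock tick: wheat if queued, otherwise
    chaff. Both leave the node as same-size, same-timing ciphertext."""
    try:
        data = outbox.get_nowait()
        plain = struct.pack("!BH", WHEAT, len(data)) + data
    except queue.Empty:
        plain = struct.pack("!BH", CHAFF, 0)
    plain = plain.ljust(CELL - 16, b"\x00")  # pad to the fixed cell size
    return encrypt(plain)

q = queue.Queue()
idle = next_cell(q)        # chaff: nothing queued
q.put(b"GET /index.html")
busy = next_cell(q)        # wheat
assert len(idle) == len(busy) == CELL
```

Because chaff and wheat differ only inside the ciphertext, distinguishing them reduces to breaking the link cipher, which is exactly the property argued for above.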
An alternative would be a system where each node spontaneously generates chaff. Spying on a node would see such spontaneous 'traffic' generations. Maybe it would be clearer that that was chaff?
Yes, this is the Covfefe model - chaff packets, to fill the gaps, so the snoop cannot tell whether any data or streams are being sent, or not, at all.
But I'm just throwing out ideas. I assume that the 'chaff' issue has been professionally detailed in some academic papers.
Possibly - if someone has a link, I'd be happy to read it; the principle seems to jump out and smack us in the face, but I can imagine that there could be some useful academic analysis of chaff and network theory - if such exists...
On Tue, Oct 22, 2019 at 10:20:35AM +1100, Zenaan Harkness wrote:
On Mon, Oct 21, 2019 at 06:06:15PM +0000, jim bell wrote:
On Monday, October 21, 2019, 04:00:16 AM PDT, grarpamp <grarpamp@gmail.com> wrote:
To do that, it would be desirable to make that chaff look as much as possible like real traffic.
Ahh, I see the thought. Yes, that thought makes sense on first blush, but the problem is, if our encryption is so poor that chaff packets are distinguishable from wheat, our chaff system is broken.
And yes, as above, chaff is to fill the gaps, not to create flows or streams that are not otherwise needed - the goal is simply to disguise traffic, not to create completely arbitrary fill traffic (and if the encryption is not broken, all traffic should look completely arbitrary - this is a fundamental 'broken' with Tor's non chaff filled TCP flows).
A packet sent through all, or a large number of nodes will have a genuine path.
Yes, "chaff paths" is the concept here; now I understand. I believe that would be counterproductive to network utilisation and, as coderman points out, for too little gain.
I can see how chaff paths could possibly make sense in the Tor network.
Also, but more fundamentally, what we are aiming for with chaff fill, at least in a packet switched network, is something better than "chaff paths":
- we want streams to not be distinguishable - this is a known (and fundamental) problem with Tor
- chaff packets seek a functional improvement on this fundamental problem with Tor
- the reason Tor is so bad, is that entry and exit nodes are dominated by GPAs (global passive adversaries), and the "default set up of Tor Browser" for an end user is therefore fundamentally broken - this is why I stress the importance of running your own home node (if you're using Tor at all), and more so, running that as an exit node if you want any reasonable plausible deniability
Covfefe net hopes to overcome this fundamental Tor (as it stands) problem.
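(Running your own node, as suggested above, is mostly a matter of a short torrc. A minimal sketch - the option names below are real Tor options, but the nickname, contact address, exit policy, and bandwidth figures are placeholders to adjust for your own box:)

```
# /etc/tor/torrc - minimal relay configuration (values are placeholders)
Nickname            myhomerelay
ContactInfo         operator@example.org
ORPort              9001
ExitRelay           1
ExitPolicy          accept *:80, accept *:443, reject *:*
RelayBandwidthRate  1 MBytes
RelayBandwidthBurst 2 MBytes
```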
On second blush, although I might trust an immediate friend (first hop), I might effectively set up a circuit through friend B, to C, where I control the chaff, inserting chaff when I'm not using this "mini circuit" - in this way B does not know whether the traffic from A to C is partly chaff, purely data, or purely chaff. Node C might have something to say about that if I don't utilise this mini route for too long (that would be a waste of B's generous bandwidth provision). We could consider or name this mini route ABC a chaff route, in the sense that A controls the route, inserting chaff as needed.
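(To sketch that ABC idea: A and C share a key which B never sees; each fixed-size cell carries an encrypted chaff/wheat flag, so only C can tell the two apart. A toy Python illustration - the SHA-256 keystream here is a stand-in for a real AEAD cipher, and all names and sizes are made up:)

```python
import hashlib
import os

KEY = os.urandom(32)  # shared only by A and C; B never learns it

def seal(cell_id, payload, is_chaff):
    """A's side: pad to a fixed size, prepend a chaff/wheat flag byte,
    and XOR with a per-cell keystream (toy cipher - a real design
    would use a proper AEAD such as ChaCha20-Poly1305)."""
    stream = hashlib.sha256(KEY + cell_id.to_bytes(8, "big")).digest()
    body = (b"\x01" if is_chaff else b"\x00") + payload.ljust(31, b"\x00")
    return bytes(x ^ y for x, y in zip(body, stream))

def open_at_c(cell_id, cell):
    """C's side: strip the keystream and read the flag; B, lacking KEY,
    sees only uniform 32-byte ciphertext either way."""
    stream = hashlib.sha256(KEY + cell_id.to_bytes(8, "big")).digest()
    body = bytes(x ^ y for x, y in zip(cell, stream))
    return body[0] == 1, body[1:]

wheat = seal(0, b"hello C", False)
chaff = seal(1, b"", True)
assert len(wheat) == len(chaff) == 32     # same size on the wire
is_chaff, data = open_at_c(0, wheat)
assert not is_chaff and data.rstrip(b"\x00") == b"hello C"
```

The design choice being illustrated: the chaff decision is made at A and verified at C, and the middle hop carries opaque, uniform cells it cannot classify.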
Assuming the spy bugs one node, he will see traffic come in, and leave for another. Just like an ordinary instance of traffic.
"chaff fill" is a misnomer perhaps leading people's' thoughts astray, we should say something like:
Chaff packets:
1) Are, to an onlooker or snooper, indistinguishable from wheat packets, both in their size, and in their timing of delivery, and in all consequential timing for packets returning, or outgoing, from the node that receives a chaff packet.
2) Are only ever used as padding to fill gaps, so that stream begin, and stream end are not distinguishable (to the snoop), and also so that stream data, and surrounding chaff packets, are also not distinguishable from one another.
(A stream is a packet flow such as a request, and the corresponding response for the content of a web page.)
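(Properties 1 and 2 can be put as a testable claim: the trace a wire-snoop records - one (tick, cell-size) pair per clock tick - is identical whether or not a stream is active. A small Python simulation of that claim, with made-up constants:)

```python
import os

CELL = 512    # fixed cell size (hypothetical)
TICKS = 8     # observation window in clock ticks

def snoop_trace(data_cells):
    """What a passive wiretap records over TICKS ticks: one
    (tick, size) pair per cell; chaff fills every idle tick."""
    observed = []
    for t in range(TICKS):
        cell = data_cells.get(t, os.urandom(CELL))  # gap -> chaff
        observed.append((t, len(cell)))
    return observed

idle = snoop_trace({})                                          # pure chaff
busy = snoop_trace({2: os.urandom(CELL), 3: os.urandom(CELL)})  # short stream
assert idle == busy   # the snoop's view is identical either way
```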
An alternative would be a system where each node spontaneously generates chaff. Spying on a node would see such spontaneous 'traffic' generations. Maybe it would be clearer that that was chaff?
Yes, this is the Covfefe model - chaff packets, to fill the gaps, so the snoop cannot tell whether any data or streams are being sent, or not, at all.
But I'm just throwing out ideas. I assume that the 'chaff' issue has been professionally detailed in some academic papers.
Possibly - if someone has a link, I'd be happy to read it, but the principle seems to jump out and smack us in the face, but I can imagine that there could be some useful academic analysis of chaff and network theory - if such exists...
On 11/10/2019 22:05, jim bell wrote:
Somebody asked me a question, but because I am far from being an expert, I couldn't answer. Suppose a person wanted to implement a TOR node, simply by buying some box, and plugging it into his modem, and power. And NOT needing to become an expert on TOR, or even on computers in general. And NOT having to follow pages and pages of instructions. I did a few minutes of searching, and even the 'simple' explanations seemed 'clear as mud'.
Don't bother with long explanations challenging the usefulness, or trustworthiness of TOR. Yes, we've discussed them to death. That's a different subject. Jim Bell
I'd sell you one for $350 plus postage etc from the UK. Or set one up for $50-100. But I'd guess you can find-a-nerd in the US for less. Maybe suggest it to the Tor guys. I'm a bit out-of-date but that's the sort of thing the original guys would have liked, and the new guys [*] probably still do. [*] new absent any recent sjw shit -- otherwise just tell them to piss up a stick. If they have no penis, to try their best. :) Peter Fair
participants (12)
- coderman
- grarpamp
- jim bell
- John Newman
- John Young
- Karl
- other.arkitech
- Peter Fairbrother
- Punk
- Punk - Stasi 2.0
- Punk-Stasi 2.0
- Zenaan Harkness