On 6/7/23, Peter Fairbrother <peter@tsto.co.uk> wrote:
> On 06/06/2023 07:41, Undescribed Horrific Abuse, One Victim & Survivor of Many wrote:
>>> As to constant bandwidth/covertraffic, that is expensive even today. For constant bandwidth to get a 5 second response time for a smallish say 3MB web page you need to have 3 MB of covertraffic every 5 seconds, or 50GB per day, per link. Ouch.
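As a sanity check on the per-link figure, here is the arithmetic spelled out (using the thread's own numbers: one 3 MB page's worth of padding every 5 seconds, around the clock):

```python
# Back-of-envelope check of the ~50 GB/day-per-link cover traffic figure:
# pad every 5-second window with a full 3 MB page's worth of traffic.
page_mb = 3                   # smallish web page, in MB
window_s = 5                  # desired response-time window, in seconds
seconds_per_day = 24 * 60 * 60

windows_per_day = seconds_per_day / window_s    # 17,280 windows per day
cover_mb_per_day = page_mb * windows_per_day    # 51,840 MB per day
cover_gb_per_day = cover_mb_per_day / 1000      # ~51.8 GB per day

print(f"{cover_gb_per_day:.1f} GB of cover traffic per day, per link")
```

So the quoted 50 GB/day is a slight round-down of ~51.8 GB/day.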
>> I thought about this a little bit, and the concern doesn't add up to me.
>> As a consumer and participant in small businesses, I've only ever seen bandwidth that is metered per availability, not per use. The price is the same whether I use it or not.
> Up to a point, yes. In most cases that point is 25GB/month, after which your traffic gets throttled. Unmetered lower bandwidth contracts also exist, but don't help enough.
> However, you miss my point - the requirement is 50GB per day, *per link*.
> Imagine you are a TOR entry node. If you are serving 1,000 people - which is not a whole lot - you need to serve 50 TB of dummy traffic per day.
This is only true if those 1,000 people are all using the network at precisely the same moment, each at 1 MB/s. The maximum total bandwidth the link needs at any given moment is the same with or without cover traffic: it is the peak of the users' summed simultaneous demand. At that peak moment, no cover traffic is needed at all - the users are each other's cover traffic. At quieter moments you need enough cover traffic to make them indistinguishable from the peak, but that still never exceeds the capacity the link must have anyway. I proposed making every moment look like that peak moment, since the network transceiver is powered and capable regardless. I do see there is a complexity here around who decides routes, who provides the cover traffic, and what an observer can probe, and I'm curious what the state of the art is.
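A toy model of that claim (the numbers here are mine, not from the thread): provision the link for its busiest moment, then pad every quieter moment up to that level; the required peak capacity is unchanged.

```python
# Toy model: padding a link to its busiest moment never raises the
# capacity the link must already have. Numbers are illustrative only.
import random

random.seed(0)
n_users, n_slots = 1000, 100
# Each user's demand per time slot, in MB/s (mostly idle, sometimes 1 MB/s).
demand = [[random.choice([0, 0, 0, 1]) for _ in range(n_slots)]
          for _ in range(n_users)]

# Real traffic per slot, and the peak the link must carry regardless.
totals = [sum(demand[u][t] for u in range(n_users)) for t in range(n_slots)]
peak = max(totals)

# Cover traffic pads every slot up to that same peak.
padded = [t + (peak - t) for t in totals]

assert max(padded) == peak   # padding never raises the required capacity
print(f"peak without cover: {peak} MB/s; with cover: {max(padded)} MB/s")
```

Every slot now looks identical to an observer, and the link's worst-case load is exactly what it was before padding.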
> For free.
> Big ouch.
> Plus you need links to intermediate nodes - to do this properly you need a link to every possible intermediate node, which should be in the tens of thousands or more.
> 550 TB per day.
Again, you only need enough cover traffic to match the moment and link of greatest usage, and that traffic is split across the various nodes rather than carried by each one. You state 550 TB with no math behind it; stated that way, it does sound made up.
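For what it's worth, one arithmetic reading that reproduces 550 TB (purely my reconstruction - the thread never shows this work): 50 TB/day of dummy traffic to 1,000 users, plus 50 GB/day to each of 10,000 intermediate links. Note this assumes every link is padded at full rate continuously, which is exactly the assumption in dispute above.

```python
# Hypothetical reconstruction of the unstated 550 TB/day arithmetic.
# Assumes full-rate padding on every link at all times (the disputed premise).
users_tb = 1000 * 50 / 1000       # 1,000 users   * 50 GB/day =  50 TB/day
links_tb = 10_000 * 50 / 1000     # 10,000 links  * 50 GB/day = 500 TB/day
total_tb = users_tb + links_tb    # 550 TB/day

print(f"{total_tb:.0f} TB/day")
```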
> So not happening.
> As for low latency highly anonymous traffic, well we had remailers, which worked up to a point, and which were getting better - until TOR came along and took up all the science and coolness and innocent cover traffic and the cypherpunks who actually wrote code; and remailer development basically stopped.
I think you mean high latency. It’s nice to think of remailers. I didn’t use them myself; they were regarded as relying on dangerous, outdated network protocols once newer options were available.
> Peter Fairbrother