On Thu, Oct 17, 2019 at 11:11:41PM +0000, coderman wrote:
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Thursday, October 17, 2019 10:31 PM, Punk <punks@tfwno.gf> wrote:
... ok, so that's actually one of the most fundamental requirements, if not the most fundamental one. The connection between user and 'network' HAS to have a fixed rate. Let's check the archive... ...
So that's it, Jim. Users have to be connected 24/7 over a constant-rate link. Today that rate can be more than 100 bytes/s.
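The constant-rate idea can be sketched in a few lines: clock out one fixed-size cell per interval, carrying real data when there is any and pure padding otherwise, so an observer sees the same traffic either way. A minimal illustration, assuming 100-byte cells once per second over UDP; all names and framing choices here are illustrative, not any existing protocol:

```python
import socket
import time
import queue

CELL_SIZE = 100   # fixed cell length in bytes (illustrative)
INTERVAL = 1.0    # one cell per second => a constant 100 bytes/s on the wire

def make_cell(payload: bytes) -> bytes:
    """Frame payload into a fixed-size cell: 1-byte length, data, zero padding.
    An empty payload yields a pure padding cell of identical size."""
    assert len(payload) <= CELL_SIZE - 1
    cell = bytes([len(payload)]) + payload
    return cell + b"\x00" * (CELL_SIZE - len(cell))

def parse_cell(cell: bytes) -> bytes:
    """Recover the real payload; padding cells come back empty."""
    n = cell[0]
    return cell[1:1 + n]

def run_link(sock: socket.socket, peer, outbox: "queue.Queue[bytes]"):
    """Emit exactly one cell per INTERVAL, real data or padding alike,
    so the observed rate never varies with user activity."""
    next_tick = time.monotonic()
    while True:
        try:
            payload = outbox.get_nowait()
        except queue.Empty:
            payload = b""          # idle: the slot carries padding
        sock.sendto(make_cell(payload), peer)
        next_tick += INTERVAL
        time.sleep(max(0.0, next_tick - time.monotonic()))
```

Every cell is the same size and cadence on the wire; only the receiver, after `parse_cell`, can tell padding from data.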
one idea is to use something akin to reliable multicast groups, where you gradually increase your bandwidth according to some defined strata of bandwidth, and affirmative control notification is required to increase your bandwidth (number of concurrent strata).
There are various improvements, but the most basic operational mode is simple in design and ought to be straightforward to implement - never has been, yet.
this is not TCP friendly,
The protocol should be UDP, not TCP. Apps can then run UDP, TCP, or any higher-level protocol on top; building on TCP introduces latency and other problems for the many apps/protocols that rely on something lower level than TCP.
but it would support multiple levels of bandwidth in such a system. this doesn't eliminate traffic analysis (like true link padding) but it does muddy the waters into partitions which are much larger than (1).
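The strata idea above - step up in bandwidth only on an affirmative control notification, otherwise stay at (or fall back to) the base rate - can be sketched as a small state machine. The stratum values and method names are purely illustrative assumptions:

```python
# Strata ladder, e.g. 100 B/s base, then 1 KiB/s, 10 KiB/s, ... (illustrative)
STRATA = [100, 1_000, 10_000, 100_000]   # bytes/second per stratum

class StrataController:
    """Track how many concurrent strata this node may transmit at.
    Bandwidth increases ONLY when the network affirmatively grants it;
    without a grant the node keeps its current stratum."""

    def __init__(self):
        self.level = 0                    # always entitled to the base stratum

    def current_rate(self) -> int:
        return STRATA[self.level]

    def request_increase(self, granted: bool) -> int:
        # Affirmative control: step up one stratum only on an explicit grant.
        if granted and self.level < len(STRATA) - 1:
            self.level += 1
        return self.current_rate()

    def back_off(self) -> int:
        # Drop back toward the base stratum, e.g. on a congestion notice.
        if self.level > 0:
            self.level -= 1
        return self.current_rate()
```

Note how this differs from TCP's probe-and-backoff: the sender never unilaterally increases its rate, which is exactly why it is not TCP friendly, and why an observer can only place a user within a stratum-sized partition rather than at an exact rate.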
another benefit would be to use that padding traffic with application layer awareness of bulk transport. e.g. ability to say "send this, but no rush..." vs. interactive traffic.
last but not least, you could apply the padding traffic to key pre-distribution or opportunistic protocol maintenance. e.g. distributing routing and node identity information. (the "directory")
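Those last two points amount to a priority scheme for filling each fixed-rate cell slot: interactive traffic first, then "no rush" bulk, then protocol maintenance (directory / key pre-distribution), and only then a dummy cell. A minimal sketch, with all class and queue names assumed for illustration:

```python
from collections import deque

class PaddingScheduler:
    """Fill each fixed-rate cell slot with the most useful traffic available:
    interactive first, then 'send this, but no rush...' bulk, then protocol
    maintenance (routing info, node identities, key pre-distribution), and
    only as a last resort a pure padding cell."""

    def __init__(self):
        self.interactive = deque()
        self.bulk = deque()         # bulk transport, latency-tolerant
        self.maintenance = deque()  # the "directory": routing / identity / keys

    def next_payload(self) -> bytes:
        for q in (self.interactive, self.bulk, self.maintenance):
            if q:
                return q.popleft()
        return b""                  # nothing pending: slot carries padding
```

On the wire nothing changes - every slot is still one fixed-size cell - but otherwise-wasted padding bandwidth gets spent on bulk data and directory maintenance.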
Indeed. Lots of improvements possible.

Anecdote: Back about 3 years ago when I first ran a Tor exit node at home (on a ~1 MiB/s ADSL), I would sometimes SSH into the box from another location and forward VNC for a virtual desktop, really just to monitor the Tor node. Pretty consistently, within about 10 minutes, the SSH connection would die with some SSH error, so I'd reconnect and watch some more, then it would die again. It seemed evident to me that SSH had some bug being exploited to, at the very least, kill SSH connections, presumably via packet injection or modification (and presumably after monitoring the connection for a bit). That, of course, was entirely disconcerting.

Since then there's been at least one SSH bug finally disclosed/fixed. Though I can't find the one that stood out to me as commensurate with my experience, the following may be of anecdotal interest:

Fixing The New OpenSSH Roaming Bug
https://www.upguard.com/blog/fixing-the-new-openssh-roaming-bug
... The flaw involves the accidental inclusion of experimental client-side roaming support in the OpenSSH client, despite being disabled on the server-side years ago. This feature essentially enables users to resume broken SSH connections. Unfortunately, a maliciously configured server can exploit a bug in the client and capture its memory contents, including any private encryption keys used for SSH connections.

Cisco's warning: Patch now, critical SSH flaw affects Nexus 9000 fabric switches
https://www.zdnet.com/article/ciscos-warning-patch-now-critical-ssh-flaw-aff...
May 2, 2019 -- 11:12 GMT (21:12 AEST)
The company disclosed the bug on Tuesday and has given it a severity rating of 9.8 out of 10.
...
https://nakedsecurity.sophos.com/2018/08/23/vulnerability-in-openssh-for-two...

Serious SSH bug lets crooks log in just by asking nicely…
https://nakedsecurity.sophos.com/2018/10/17/serious-ssh-bug-lets-crooks-log-...
Big, bad, scary bug of the moment is CVE-2018-10933.
This is a serious flaw – in fact, it's a very serious flaw – in a free software library called libssh. The flaw is more than just serious – it's scary, because it theoretically allows anyone to log into a server protected with libssh without entering a password at all.

It's scary because ssh, or SSH as it is often written, is probably the most widely deployed remote access protocol in the world. Almost all Unix and Linux servers use SSH for remote administration, and there are an awful lot of awfully large server farms out there, and so there's an awful lot of SSH about.
...
By far the most commonly used SSH version out there is an open source product called OpenSSH, created and maintained by the security-conscious folks at OpenBSD. OpenSSH is a completely separate implementation to libssh – they don't include or rely on each other's code.

Other well-known open source implementations of SSH include Dropbear (a stripped down version commonly used on routers and other IoT devices), libssh2 (it's a different product to libssh, not merely a newer version) and PuTTY (widely used on Windows). None of these projects have this bug either, so most of us can stand down from red alert. ...