Nextgen G* Traffic Analysis Resistant Overlay Networks (re Tor stinks)
On 10/26/19, Punk - Stasi 2.0 <punks@tfwno.gf> wrote:
2005 Low-Cost Traffic Analysis of Tor https://www.freehaven.net/anonbib/cache/torta05.pdf
"By making these assumptions, the designers of Tor believe it is safe to employ only minimal mixing of the stream cells...
...This choice of threat model, with its limitation of the adversaries’ powers, has been a subject of controversy...
...Tor, on the other hand assumes a much weaker threat model..
...we show that even relatively weak adversaries can perform traffic-analysis, and get vital information out of Tor. This means that even non-law-enforcement agencies can significantly degrade the quality of anonymity that Tor provides, to the level of protection provided by a collection of simple proxy servers, or even below."
-------
my comment : the attack is based on monitoring the latency of a node while sending an attacker controlled stream through it
"Tor exhibits the worst possible behaviour: not enough interference to destroy individ- ual stream characteristics, yet enough to allow the remote measurement of the node’s load."
Maybe some tor fanboi knows if this has been somehow fixed?
Tor, perhaps as a result of years of such papers and posts, is attempting to incorporate some traffic noise (search there for "netflow" and "padding"), however the breadth of its application may not be enough to close out many of the traffic analysis papers. People should look more closely at that work and analyse its results.
Anyway the article makes it clear that simple cover traffic is not enough to defend against timing attacks.
This may be one of the papers that outlines why, in a fill padding chaff wheat network, or even any network, every node also needs to reclock the input it receives into its output, else adversary can see the perturbations it applies to a node's input reflected in its output, and/or see the natural perturbations the internet itself imparts, and those of the user's own traffic patterns, carried through end to end without mitigation.

Also why, similar to some Bittorrent and other protocols, overlay nodes should probably drop contracts they have with other peer nodes that do not fulfill input performance expectations.

ie: Bittorrent nodes (clients) ignore nodes that send corrupted datas... overlays should drop peer nodes that exhibit non conformant traffic characteristics... such as unclocked, bursty waves, etc that was not agreed to, that would present risk to observability.

ie: First, don't accept sketchy peers trying to send you suspect waveforms to carry, and Second, do not forward anything on through you to another node without conforming and reclocking it, don't be so busy or partition attacked that your node can't uphold that thus becoming a risk to the net, sleep and reestablish your node into the net later.

Since an overlay network does not yet own the underlying internet hardware (and there is not yet full time crypted link rate fill HW RFC), this must be done in the node's CPU versus at the NIC PHY over the cable. You can get closer to Layer0 with #OpenFabs #OpenHW #MeshNets #GuerrillaNets.

Note also there is a difference in design thinking approaches between say...

- Cover traffic, meaning traffic laid over or filling in gaps in wheat. The wheat ends up being awarded a variety of mental biases and design assumptions in its favor which could prove controlling and exploitable.

- Base fill traffic, meaning a base of fill [at linerate, fulltime, regulated], such that when wheat is ready to pass, the wheat packet is substituted in for the fill packet. The fill security aspect is mentally biased for, such that it should continue to hold regardless whatever wheat or active adversary crap is attempted to be sent over or at it.

There's something to be said for "store and forward" overlays on the order of hours to days... file storage, email / nntp.

Unfortunately they're no good for anything interactive realtime... IRC, urgent messaging, website queries, transactions, voice, video.

There's also consideration to make on whether an interactive realtime network should also provide, if needed, a store and forward layer, or treat it as just another app riding on top, as all realtime nets do today.

Or whether to cripple either s+f or realtime to preclude use of the other over them. ie: Tor is somewhat crippled by TCP only. Whereas I2P/Tor+OnionCat, CJDNS, Phantom enable UDP etc by providing a full IPv6 stack. Other overlays have similar restrictions and capabilities, most not intentionally preclusive, instead being just "We want to create a net to do X (say send an email)" and the net just ended up being useless for anything else.

Keep in mind, once you have a cryptographically strong analysis resistant general purpose overlay transport network (ie: internet stack), you can run anything over it.

Don't be afraid to create an AF_OVERLAY if needed to accomplish that. If the overlay network is truly good, people will get today's applications ticketed, patched, and compiled with that option to interface with them.
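A minimal sketch of the reclock + fill substitution idea above (all names, rates and cell sizes are my own assumptions, not any existing overlay's API): the node emits exactly one cell per clock tick at its negotiated output rate, substituting wheat for chaff whenever wheat is queued, so input perturbations never reach the output wire.

import os
import queue
import time

CELL_SIZE = 512           # bytes per cell (assumed)
RATE_CELLS_PER_SEC = 100  # negotiated constant output rate (assumed)

wheat_q: "queue.Queue[bytes]" = queue.Queue()

def chaff_cell() -> bytes:
    # Stand-in filler; a real design would encrypt wheat and chaff under
    # the link key so the two are indistinguishable on the wire.
    return os.urandom(CELL_SIZE)

def send(cell: bytes) -> None:
    pass  # placeholder for the actual link write

def reclock_loop() -> None:
    tick = 1.0 / RATE_CELLS_PER_SEC
    next_deadline = time.monotonic()
    while True:
        next_deadline += tick
        try:
            cell = wheat_q.get_nowait()   # wheat substitutes for fill
        except queue.Empty:
            cell = chaff_cell()           # otherwise emit base fill
        send(cell)
        # Sleep until the next tick regardless of upstream arrivals, so
        # output timing carries no trace of input timing.
        time.sleep(max(0.0, next_deadline - time.monotonic()))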
On Sat, Oct 26, 2019 at 05:32:59PM -0400, grarpamp wrote:
On 10/26/19, Punk - Stasi 2.0 <punks@tfwno.gf> wrote:
2005 Low-Cost Traffic Analysis of Tor https://www.freehaven.net/anonbib/cache/torta05.pdf
"By making these assumptions, the designers of Tor believe it is safe to employ only minimal mixing of the stream cells...
...This choice of threat model, with its limitation of the adversaries’ powers, has been a subject of controversy...
...Tor, on the other hand assumes a much weaker threat model..
...we show that even relatively weak adversaries can perform traffic-analysis, and get vital information out of Tor. This means that even non-law-enforcement agencies can significantly degrade the quality of anonymity that Tor provides, to the level of protection provided by a collection of simple proxy servers, or even below."
-------
my comment : the attack is based on monitoring the latency of a node while sending an attacker controlled stream through it
"Tor exhibits the worst possible behaviour: not enough interference to destroy individ- ual stream characteristics, yet enough to allow the remote measurement of the node’s load."
Maybe some tor fanboi knows if this has been somehow fixed?
Tor, perhaps as a result of years of such papers and posts, is attempting to incorporate some traffic noise (search there for "netflow" and "padding"), however the breadth of its application may not be enough to close out many of the traffic analysis papers. People should look more closely at that work and analyse its results.
Anyway the article makes it clear that simple cover traffic is not enough to defend against timing attacks.
This may be one of the papers that outlines why, in a fill padding chaff wheat network, or even any network, every node also needs to reclock the input it receives into its output,
else adversary can see the perturbations it applies to a node's input reflected in its output, and/or see the natural perturbations the internet itself imparts, and those of the user's own traffic patterns, carried through end to end without mitigation.
Also why, similar to some Bittorrent and other protocols, overlay nodes should probably drop contracts they have with other peer nodes that do not fulfill input performance expectations.
With metrics sharing and collective node kicking, perhaps it may be possible to collectively identify stalker nodes, and share this information over a longer period of time - as with bittorrent, don't connect with badly performing nodes.
ie: Bittorrent nodes (clients) ignore nodes that send corrupted datas... overlays should drop peer nodes that exhibit non conformant traffic characteristics... such as unclocked, bursty waves, etc that was not agreed to, that would present risk to observability.
"Bursty" anything, without negotiation of graduated/ stepped up/down link management, must not be permitted in the first instance.
ie: First, don't accept sketchy peers trying to send you suspect waveforms to carry, and Second, do not forward anything on through you to another node without conforming and reclocking it, don't be so busy or partition attacked that your node can't uphold that thus becoming a risk to the net, sleep and reestablish your node into the net later.
For most folks today, their first physical hop or link is to their ISP. A GAA performing an active timing attack, by way of suspending your internet link for say 500 ms, is not possible to defend against when you have no other links for onboarding.

And if we assume more secure hardware (phones with separation of a non back doored CPU and the radio CPU), a bug free driver and software stack, and a "difficult to statistically analyze" garlic net, then it is an entirely reasonable assumption that our not so friendly GPA and GAA stalkers will use the remaining tools in their toolbox much more heavily - active link suspension across target sets of end users, bisecting as needed to map end user nodes to destination/ server streams of interest.

We are engaged in an arms race of sorts. And we are pushing our adversaries to have to use the more overt tools in their toolbox; this is a good thing, as in the less your enemy can hide, the better.
Since an overlay network does not yet own the underlying internet hardware (and there is not yet full time crypted link rate fill HW RFC), this must be done in the node's CPU versus at the NIC PHY over the cable. You can get closer to Layer0 with #OpenFabs #OpenHW #MeshNets #GuerrillaNets.
Note also there is a difference in design thinking approaches between say...
- Cover traffic, meaning traffic laid over or filling in gaps in wheat. The wheat ends up being awarded a variety of mental biases and design assumptions in its favor which could prove controlling and exploitable.
Hmm. To what extent can we classify types of wheat - e.g. wheat which is high value and therefore needs QoS maximized, vs wheat which can tolerate burstiness and can therefore be reduced in QoS? Which nodes would you give QoS info to? Phone calls require QoS.
- Base fill traffic, meaning a base of fill [at linerate, fulltime, regulated], such that when wheat is ready to pass, the wheat packet is substituted in for the fill packet. The fill security aspect is mentally biased for, such that it should continue to hold regardless whatever wheat or active adversary crap is attempted to be sent over or at it.
Active latency injection, say at your ISP, is the big nut to crack; as you say above, it needs guerrilla nets. Ultimately, if we don't own it, we don't control it, and to the extent we don't control it, it will be used against us. So we have no option but to eventually roll out our own phys links, mesh nets, back hauls etc - to some degree we have to, at least in early days, e.g. use govnet links for international (undersea cables are prohibitively expensive, even for most corporations).
There's something to be said for "store and forward" overlays on the order of hours to days... file storage, email / nntp.
Unfortunately they're no good for anything interactive realtime... IRC, urgent messaging, website queries, transactions, voice, video.
We must treat each use case individually, to optimize not only the network, but the apps (yet to be written). Voice and realtime video are a big ask - without your own dark links, and if you're a prime gov target, you're gonna be stalked and actively attacked.

With some apps, e.g. IRC, although relatively low latency is needed for most folks to use them, the b/w is very low (a text message here and there); web forums could actually work this way too, if we remove heavy content and think "text messages", "+1s" etc. - the heaviest parts of forums are dodgy centralised-SQL-DB type code which does not scale sufficiently, and spitting endless images out of the server's network card - images are media which should be content addressed (ala git), thus cached locally.

How many times have you been on a forum such as dailystormer and from one day to the next, EVERY profile image gets re-loaded? This is sheer insanity! Even a custom firefox dark config ("cache all HTTPS content, cache for ever" etc) was not sufficient to stop reloads of what should be cached content!

So, for some server loads, we certainly should be able to reduce that load to a trickle, and for others, we may be able to completely decentralise, no server needed - $CURRENT_YEAR git and DHTs hint in this direction.
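A toy illustration of the git-style content addressing mentioned above (purely local, names invented): media is keyed by the hash of its bytes, so a profile image fetched once need never be re-sent by any server.

import hashlib
from pathlib import Path
from typing import Optional

CACHE_DIR = Path("cas-cache")   # hypothetical local store

def put(data: bytes) -> str:
    CACHE_DIR.mkdir(exist_ok=True)
    digest = hashlib.sha256(data).hexdigest()
    path = CACHE_DIR / digest
    if not path.exists():        # identical content is stored exactly once
        path.write_bytes(data)
    return digest                # publish this id instead of a URL

def get(digest: str) -> Optional[bytes]:
    path = CACHE_DIR / digest
    return path.read_bytes() if path.exists() else None

# A forum page would then reference images by digest; any peer or local
# cache holding that digest can serve it, so nothing reloads needlessly.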
There's also consideration to make on whether an interactive realtime network should also provide, if needed, a store and forward layer, or treat it as just another app riding on top, as all realtime nets do today.
I don't understand the comparison you are drawing (or rather, the consideration you are putting forward) here. A sane network is not based on TCP. A sane network is based on UDP or IP or some lower version of "independent packets or frames". How successful can distributed reputation (namely "warnings") be at identifying and isolating "bad" e.g. stalking gov nodes? "Know your peer."
Or whether to cripple either s+f or realtime to preclude use of the other over them. ie: Tor is somewhat crippled by TCP only. Whereas I2P/Tor+OnionCat, CJDNS, Phantom enable UDP etc by providing a full IPv6 stack. Other overlays have similar restrictions and capabilities, most not intentionally preclusive, instead being just "We want to create a net to do X (say send an email)" and the net just ended up being useless for anything else.
Keep in mind, once you have a cryptographically strong analysis resistant general purpose overlay transport network (ie: internet stack), you can run anything over it.
Don't be afraid to create an AF_OVERLAY if needed to accomplish that. If the overlay network is truly good, people will get today's applications ticketed, patched, and compiled with that option to interface with them.
There's a bit of a resurgence of user space networking, TCP/IP stacks etc in recent years - the highest-performing I've come across so far is SNABB (might have posted a link ~1 year ago; see one of the iqnets urls files attached).
On Sun, Oct 27, 2019 at 11:48:58AM +1100, Zenaan Harkness wrote:
On Sat, Oct 26, 2019 at 05:32:59PM -0400, grarpamp wrote:
ie: Bittorrent nodes (clients) ignore nodes that send corrupted datas... overlays should drop peer nodes that exhibit non conformant traffic characteristics... such as unclocked, bursty waves, etc that was not agreed to, that would present risk to observability.
"Bursty" anything, without negotiation of graduated/ stepped up/down link management, must not be permitted in the first instance.
Except of course, where bursty is high bandwidth, low priority traffic which can be buffered and used effectively as a form of chaff fill when higher prio traffic is satisfied. By making such streams a first class citizen (design wise - e.g. low risk torrents), we may find that the chaff/wheat ratio ultimately gets really low (i.e., very efficient overall network b/w utilization).
On Sun, Oct 27, 2019 at 11:48:58AM +1100, Zenaan Harkness wrote:
On Sat, Oct 26, 2019 at 05:32:59PM -0400, grarpamp wrote:
- Base fill traffic, meaning a base of fill [at linerate, fulltime, regulated], such that when wheat is ready to pass, the wheat packet is substituted in for the fill packet. The fill security aspect is mentally biased for, such that it should continue to hold regardless whatever wheat or active adversary crap is attempted to be sent over or at it.
In an overlay net, we think of a link as peer to peer. But physically that link is usually as follows:

NodeA -> ISP1 router -> GT-1 router ... ... -> ISP2 router -> NodeB

So when we talk base fill/ linerate/ fulltime chaff link, we should perhaps be clear about which physical links/routes we are referring to - we must consider the physical links as much as the virtual/ overlay links, in order to properly assess security implications.
For most folks today, their first physical hop or link is to their ISP.
A GAA performing an active timing attack, by way of suspending your internet link for say 500 ms, is not possible to defend against when you have no other links for onboarding.
Access censorship is separate from what the first overlay net node you connect to decides to do with the adversary-modulated garbage they received from your node. That first node, or any other node, should drop you until you behave, assigning the bandwidth and timing contract they negotiated with you to a better participant in the meantime.
active link suspension across target sets of end users, bisecting as needed to map end user nodes to destination/ server streams of interest.
So what, a secure overlay should drop its apparently contract-breaking nodes (as so affected by adversary, whether by cutout or other modulation), up to and including the remaining overlay progressively cutting out, thus effectively downing itself as protection in reaction to increasing adversary scopes of aggression. A net can't call itself secure if it is stupid enough to stay up under known successful attack methods; operational yes, secure no.
the less your enemy can hide, the better.
An estimate is required to determine if a G* adversary can actually sustain modulation for traffic analysis against millions of nodes at once, and for what duration of time... if the adversary cannot hold a self-defensive network down and out as such, the overlay wins, and the adversary is relegated to a mere annoyance randomly sinking nodes as a sore loser for lols.
QoS, lo / hi priority
People first have to solve old problems with those...
- Users declaring all their traffic as hi, because.
- The overlay must see inside all traffic to inspect and classify, no go.
- The overlay must become the State, offering only proprietary apps that it can control; boring, limited.
- Users pay for play to the overlay, complex.

Users are paying their ISP for whatever rate they choose to pass over their NIC. Most all overlays have always been able to handle user traffic because there are more than enough wheat-idle nodes to carry for example low quality video over 7 hops, or mid quality youtube over 3.

Unlike Tor, if as in Phantom every user is a relay, there should be plenty of excess wheat-idle capacity because users are mostly idle.
Phone calls require QoS.
Both the Internet and Tor have no QoS, yet users have been able to hold voice and IRC conversations between Tor onions since day one, with some even being able to stuff low quality video calls over it as well.

In a fill network, so long as fill yields to wheat demand, the only real constraint seems to be how the overlay's transport, such as TCP / UDP and/or some proprietary bucket transport, handles congestion when two or more users' traffic shares the same physical path between nodes.
I don't understand the consideration
Overall point was, are people building some overlay to handle only one app (messages, storage, IRC, whatever), or a general purpose transport overlay like the internet that can carry whatever. Presuming both can be done equally securely and performantly, there is no point to do the former.

Lots of research and nets out there "We're building an overlay for this specific app". That being, much more research needs to be done in the area of application agnostic, general purpose transport, traffic analysis resistant networks.

If you can figure out how to do the latter, the former is entirely moot. Study the latter first.
In an overlay net, we think of a link as peer to peer.
But physically that link is usually as follows:
NodeA -> ISP1 router -> GT-1 router ... ... -> ISP2 router -> NodeB
So when we talk base fill/ linerate/ fulltime chaff link, we should perhaps be clear about which physical links/routes we are referring to - we must consider the physical links as much as the virtual/ overlay links, in order to properly assess security implications.
In a fill-as-defense model, overlay links don't care about the physical between, only that whatever the two overlay nodes agreed about bandwidth and timing expectations they have for each other is upheld between them. If it isn't, they or their internet path between is under attack either by nature or adversary, the contract A and B negotiated between themselves will fault, and they should sleep / drop / renegotiate, before passing data for the overlay again.
Let's say we buffer 500ms since that forces attackers to suspend links for over 500ms to identify target nodes, making their network node bisections more noticeable to end users: 3.5s
And 500ms may not be enough! Perhaps we should buffer up for a second or more?
10 milliseconds, 10 seconds, 10 minutes, 10 hours or 10 days... speculating on which any adversary will use... removes use cases for the network as a result. Set speculations to 0 ms, and just depeer from a node when it appears to not be upholding the traffic parameters it said it would be sending you. If you agree to x, possibly supported by an iperf test between you, and your peer starts sending you a chopped up sine wave outside allowable deviation, they or their path to you are obviously fucked, just drop them. If it's allowable then buffer and reclock it when sending it back out your NIC so that whatever natural identifiable remains is not replicated beyond you.
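A hedged sketch of that "police the contract, drop on deviation" rule (the thresholds and names are illustrative, not from any spec): compare each inter-arrival gap against the agreed cell interval and depeer once the deviation budget is exhausted.

import time

AGREED_INTERVAL = 0.010   # peer agreed to one cell every 10 ms (assumed)
MAX_DEVIATION   = 0.002   # +/- 2 ms tolerated per cell (assumed)
MAX_VIOLATIONS  = 50      # strikes before depeering (assumed)

class PeerLink:
    def __init__(self) -> None:
        self.last_arrival = None
        self.violations = 0
        self.alive = True

    def on_cell(self) -> None:
        now = time.monotonic()
        if self.last_arrival is not None:
            gap = now - self.last_arrival
            if abs(gap - AGREED_INTERVAL) > MAX_DEVIATION:
                self.violations += 1
                if self.violations > MAX_VIOLATIONS:
                    self.depeer()
        self.last_arrival = now

    def depeer(self) -> None:
        # Contract faulted: stop relaying for this peer until it
        # renegotiates and shows conformant timing again.
        self.alive = False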
On Sun, Oct 27, 2019 at 06:22:53AM -0400, grarpamp wrote:
Let's say we buffer 500ms since that forces attackers to suspend links for over 500ms to identify target nodes, making their network node bisections more noticeable to end users: 3.5s
And 500ms may not be enough! Perhaps we should buffer up for a second or more?
10 milliseconds, 10 seconds, 10 minutes, 10 hours or 10 days... speculating on which any adversary will use... removes use cases for the network as a result.
Set speculations to 0 ms, and just depeer from a node when it appears to not be upholding the traffic parameters it said it would be sending you. If you agree to x, possibly supported by an iperf test between you, and your peer starts sending you a chopped up sine wave outside allowable deviation, they or their path to you are obviously fucked, just drop them. If it's allowable then buffer and reclock it when sending it back out your NIC so that whatever natural identifiable remains is not replicated beyond you.
Ack. Yes each link, BA, AC, must be maintained according to its own agreements, and is bound by the consequences of not meeting those agreements.

So let's analyse this: Nodes A, B and C. There is a link BA, and a link AC. B is sending packets to A, and A is routing them (per pre arranged agreement) to C.

- A may have had some unused capacity in a currently maintained link AC, which A allocates to serve the request by B (of A) to route packets to C (at least for a time),
- or, A may have had to make a request of C, to increase the capacity of the link AC (which request, node C may ack or nack at its discretion),
- or, A may have had to establish a new link as a result of B's request, link AC.

In any case, we assume:

- C has granted A's request for a link,
- and that C is not told why A is requesting the new link (or an increase in b/w on this link),
- and in all cases, A must maintain its contract with C.

So, B begins to use link BA, and indirectly link AC. So now, B suddenly comes under attack, and is unable to maintain its previously requested packet sending rate (or latency/ jitter), to node A:

- This should not affect A's link with C (AC) - if, in any particular packet sending time period T, A has not received sufficient packets from B, to send on to C, A simply chaff-fills the link AC, so that a passive onlooker cannot see any difference in the AC link behaviour.
- node C is satisfied with this, since A is keeping up its end of the bargain, albeit there is apparent under-utilization of the link AC
- Per config, A could ramp down, or kick, the BA incoming link, and consequently ramp down its AC outgoing link.
- From A's point of view, B's metrics just took a negative hit, and A records this fact; in the case B is a meat space friend, the operator of node A may either noisily (on the phone or in email) or quietly (in person) enquire of his friend operating node B "wtf?"

And the issue is, for A to maintain AC properly, A has options in relation to a "pause + catchup burst" from B (on link BA):

1. duplicate the "trough+spike", i.e. "catch up" with B - if this is SOP (std op procedure), the "trough+spike" will propagate throughout the entire link, so we're not doing this!
2. of course A chaff fills the trough, in its link to C, but what about the "catchup spike" from B? A could cache that spike ongoingly, until the stream is over; this may be desirable from B's perspective, but a lot of spiking could cause both unpredictable, out of control and therefore problematic "spike caching" - although this could be readily handled by requiring B to pause a second time, just to allow A to drain its "spike cache for B".
3. At some point, if the behaviour is bad enough, A simply kicks B (your network behaviour is sufficiently bad, bye bye).
4. do something else?

Nodes, in this case B, are expected to display "decent" behaviour in relation to such spikes, even if say B is unable to operate without such spikes (perhaps there are network local conditions making spikes on B's outgoing link, unavoidable ??), and in this case "decent" behaviour could mean "if you've had troughs and spikes, you must add troughs, since I (node A) will only cache at most N packets for you, after that I will simply drop packets and/or kick your link."
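To make the above concrete, a rough sketch (constants and names invented) of node A relaying B's stream toward C: one cell goes out on AC every tick no matter what B does, troughs from B are chaff-filled, catch-up spikes are cached up to a bound, and B is kicked beyond that (option 3).

import collections
import os

CELL_SIZE = 512          # assumed fixed cell size
SPIKE_CACHE_MAX = 1000   # "at most N packets cached for you" (assumed N)

class RelayA:
    def __init__(self) -> None:
        self.cache_from_b = collections.deque()
        self.b_kicked = False

    def on_cell_from_b(self, cell: bytes) -> None:
        if self.b_kicked:
            return                      # B already dropped/relegated
        if len(self.cache_from_b) >= SPIKE_CACHE_MAX:
            self.b_kicked = True        # option 3: bad enough, kick B
        else:
            self.cache_from_b.append(cell)

    def tick_toward_c(self) -> bytes:
        # Called once per agreed AC time unit; link AC never reflects
        # B's troughs or spikes.
        if self.cache_from_b:
            return self.cache_from_b.popleft()   # drain the spike cache
        return os.urandom(CELL_SIZE)             # chaff-fill the trough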
By maintaining node behaviour metrics, and by propagating such metrics (need to think about this from a privacy vs network sanity perspective), we may be able to identify "latency creators" (therefore problematic nodes and/ or hardware) if not immediately, at least after attacks happen, thereby increasing the cost to stalkers - also, we will have a digital war on our hands, a massive global us vs the GAA stalkers situation, where certain physical links, end points, routers, buildings etc, will be identified as stalkers/ attackers, and outed (network, and IRL - make 'em pay the price for stalking and attacking us).

So from the perspective of the syntax / scenario above, "reclocking" (by A) of "dodgy input" (from B) simply means "chaff filling the output, so that my output link to C appears perfectly normal", and ensuring B does an additional pause to allow A to drain its temp "B spike cache" - at least, this is my current understanding.

If there is an additional concept to "reclocking" that may also be useful, we should consider that too...
Although we talk about "kick"ing a node for "bad behaviour", in my mind being kicked actually means being "relegated to fill traffic", in both directions. The network behaviour achieved in such a mode could be no worse than Tor as it is today; "fill class", "best effort class" or "idle class".

One of the keys though, is to ensure the end user is aware of what has happened, so if relegation to "fill traffic status" has happened, the end user should be informed of this, so there's no misunderstanding - or rather, so that traffic for use cases which -require- some class other than "fill traffic class", and need to not continue xmit in such circumstances, is specified and handled correctly (may be "drop immediately and notify source").

When B sends packets to A, one of the fields in the "please route this packet for me" could be a class field:

- if the link within which that packet exists has been relegated to "fill traffic" class, "important" packets might get dropped, rather than cached and forwarded
- such protocols are important to specify clearly, so that fail mode behaviour is known ahead of time, and / or specifyable ahead of time
- it may be that most clear net "regular" web surfing via overlay may be classifiable as "fill class" - for marketing purposes we rename this to something like "high speed bursty maxi surfing, delivering the best effort class to your browser daily" - no point scaring the fauna away; regular web surfing may be the ideal fill traffic
- a lower class than "regular web surfing" might be "bulk best effort" (e.g. for torrents, see utcp posted previously)
- by commandeering ALL IP traffic outgoing from a node, we might be able to perform better for many app traffic classes than utcp/ bittorrent etc today, which seem to rely on back pressure heuristics to detect buffer bloat, and back off correspondingly etc
- with eth device "effective total ownership" and traffic class concepts such as TCP_OVERLAY, we just might kick some arse - happy users, happy ISPs
- next in line, bringing QoS to the OS desktop - various QoS is your realtime class(es)

There is evident overlap between networking and process scheduling concepts.
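A small sketch of that class field and its fail mode (class names and the policy string are invented for illustration): packets that require a better class than the link currently has are dropped and the source notified, rather than silently cached.

from dataclasses import dataclass
from enum import IntEnum

class TrafficClass(IntEnum):
    FILL = 0          # chaff / "best effort bursty maxi surfing"
    BULK = 1          # torrents, bulk best effort
    INTERACTIVE = 2   # IRC, messaging
    REALTIME = 3      # voice / video

@dataclass
class RoutedPacket:
    payload: bytes
    wanted_class: TrafficClass

def handle(pkt: RoutedPacket, link_class: TrafficClass) -> str:
    # Fail-mode behaviour is specified ahead of time, not improvised.
    if pkt.wanted_class <= link_class:
        return "forward"
    return "drop immediately and notify source"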
On Sun, Oct 27, 2019 at 05:41:49AM -0400, grarpamp wrote:
For most folks today, their first physical hop or link is to their ISP.
A GAA performing an active timing attack, by way of suspending your internet link for say 500 ms, is not possible to defend against when you have no other links for onboarding.
Access censorship is separate from what the first overlay net node you connect to decides to do with the adversary-modulated garbage they received from your node. That first node, or any other node, should drop you until you behave, assigning the bandwidth and timing contract they negotiated with you to a better participant in the meantime.
The problem is the node that was attacked with a latency injection attack - he just got attacked, his friends have now dropped him, and the Feds just identified whatever it was he was up/downloading - this is a problem we're trying to solve, and it seems impossible to solve with a single govnet onramp link (at least, with traditional fat and bursty up- and down-loads).
active link suspension across target sets of end users, bisecting as needed to map end user nodes to destination/ server streams of interest.
So what, a secure overlay should drop its apparently contract-breaking nodes (as so affected by adversary, whether by cutout or other modulation), up to and including the remaining overlay progressively cutting out, thus effectively downing itself as protection in reaction to increasing adversary scopes of aggression. A net can't call itself secure if it is stupid enough to stay up under known successful attack methods; operational yes, secure no.
Is it possible to have a GUI such that the end user can specify "important" streams which must go down/stop when under any identified successful attack, vs "unimportant" regular clear net surfing? What we want, if possible, is an appealing end user experience, but one which does not deceive them as to what is happening.
the less your enemy can hide, the better.
An estimate is required to determine if a G* adversary can actually sustain modulation for traffic analysis against millions of nodes at once, and for what duration of time...
It's not the millions we need to protect usually, just the few.
if the adversary cannot hold a self-defensive network down and out as such, the overlay wins, and the adversary is relegated to a mere annoyance randomly sinking nodes as a sore loser for lols.
Fat bursty traffic, if it is of high importance or criticality in relation to G*As, needs to be modified to be a different type of traffic - analogous to free to air terrestrial broadcast, vs spread spectrum "disappears in the white noise" comms which merely raises the noise floor.
QoS, lo / hi priority
People first have to solve old problems with those...
- Users declaring all their traffic as hi, because.
Solution: Peers of course won't accept that. The model is, make a request of a peer, for X bps @ Y QoS; the peer responds with ack or nack; repeat until a route is created, or is not possible at this time.

- End user/ individual node, is the final authority for all requests made of him.
- So find an anon peer who acks your request, or wait, or make (more) friends in meat space.
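A rough shape of that per-hop request/ack/nack model (all names and the capacity policy are assumptions): each node is its own final authority and acks or nacks against its own spare capacity, nothing central involved.

from dataclasses import dataclass

@dataclass
class LinkRequest:
    bps: int          # requested bandwidth
    qos: str          # e.g. "realtime", "bulk", "fill"
    duration_s: int   # how long the contract should hold

class Node:
    def __init__(self, spare_bps: int) -> None:
        self.spare_bps = spare_bps

    def consider(self, req: LinkRequest) -> bool:
        # Purely local policy; the node may nack for any reason at all.
        if req.bps <= self.spare_bps:
            self.spare_bps -= req.bps
            return True    # ack
        return False       # nack: requester waits or tries another peer

# Route building then just repeats this hop by hop until a full path acks,
# or gives up for now.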
- The overlay must see inside all traffic to inspect and classify, no go.
Indeed, no go.
- The overlay must become the State, offering only proprietary apps that it can control; boring, limited.
No go. A node is its own final authority. Everything else will be by agreement/ contract - inducement by incentivization, and also squeezing some things into base "minimum suggested operating mode" to establish tacit consent to such minimum suggested operating modes (e.g. default 10cps headroom chaff filled "first hop" node links).
- Users pay for play to the overlay, complex.
The first motivation for most these days is pay to play - like that South Korean "zero sim" or whatever they're called which you recently posted. Before going there, let's maximise each of:

- natural incentivization
- tacit consent, and
- voluntary node to node "contracts" - even with 'unknown' nodes we can pop up requests to the end user e.g.: Node you are connected to ('SHA713...') requests "Please stay online for 10 minutes, wants 20Kbps": [YES] [NO]

Many UI elements and config etc can be created around this concept of course, to maximize end user sense of control, comfort, social credit significance feelz, preference for first hop nodes that are my actual friends (if under 15 minutes, always accept but let me know, unless I have clicked the "I'm going to sleep" button), maximum requests accepted per 10 minutes, request batching, unattended behaviour, etc etc.
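A toy rendering of that consent prompt (the field names, the 15 minute auto-accept and the yes/no handling are all invented for illustration):

from dataclasses import dataclass

@dataclass
class PeerRequest:
    peer_id: str          # e.g. a fingerprint like 'SHA713...'
    minutes_online: int
    kbps: int

def prompt_user(req: PeerRequest, auto_accept_minutes: int = 15) -> bool:
    if req.minutes_online <= auto_accept_minutes:
        print(f"Auto-accepted {req.peer_id}: {req.kbps} Kbps "
              f"for {req.minutes_online} min (notified, not asked).")
        return True
    answer = input(f"Peer {req.peer_id} requests: stay online "
                   f"{req.minutes_online} min at {req.kbps} Kbps. [y/N] ")
    return answer.strip().lower() == "y"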
Users are paying their ISP for whatever rate they choose to pass over their NIC. Most all overlays have always been able to handle user traffic because there are more than enough wheat-idle nodes to carry for example low quality video over 7 hops, or mid quality youtube over 3.
And we're dealing with fat bursty "single TCP stream" links too - much room for improvement - e.g. yt-dl can resume a previous partial dl, which means the yt server will serve from any point in the file/stream, which means we can do multi-path, to increase dl speed.

If the net is compelling, and has safe fallback modes for non-important clear net comms (UI behaviour must be absolutely unambiguous to the end user), and perhaps a killer app comes along, we have a moon shot chance at replacing the internet as we know it. Which is, of course, the goal.
Unlike Tor, if as in Phantom every user is a relay, there should be plenty of excess wheat-idle capacity because users are mostly idle.
Ack.
Phone calls require QoS.
Both the Internet and Tor have no QoS, yet users have been able to hold voice and IRC conversations between Tor onions since day one, with some even being able to stuff low quality video calls over it as well.
In a fill network, so long as fill yields to wheat demand, the only real constraint seems to be how the overlay's transport, such as TCP / UDP and/or some proprietary bucket transport, handles congestion when two or more users' traffic shares the same physical path between nodes.
For QoS, and in general, this conversation seems to be converging on a consensus of "be conservative, keep a little headroom, rather than absolutely max out the links". And, each node is its own final authority to ack or nack - a node which acks a QoS, yet does not deliver, has its "ability to deliver phone call class QoS" metric reduced.
I don't understand the consideration
Overall point was, are people building some overlay to handle only one app (messages, storage, IRC, whatever), or a general purpose transport overlay like the internet that can carry whatever. Presuming both can be done equally securely and performantly, there is no point to do the former.
Thanks - yes, ack.
Lots of research and nets out there "We're building an overlay for this specific app".
That being, much more research needs to be done in the area of application agnostic, general purpose transport, traffic analysis resistant networks.
Research, or empirical. I don't need to see research to have a gut feel that a) "each node is its own primary authority" and b) "each link is negotiated, and acked/nacked between nodes (not by any central authority)", means we may well be able to make the generic packet switched overlay work in an application agnostic way, notwithstanding that certain app transports may be effectively built into the lowest overlay layer (e.g. low b/w, very high latency, high value short text messages and related ping circle userspace).
If you can figure out how to do the latter, the former is entirely moot. Study the latter first.
Ack.
In an overlay net, we think of a link as peer to peer.
But physically that link is usually as follows:
NodeA -> ISP1 router -> GT-1 router ... ... -> ISP2 router -> NodeB
So when we talk base fill/ linerate/ fulltime chaff link, we should perhaps be clear about which physical links/routes we are referring to - we must consider the physical links as much as the virtual/ overlay links, in order to properly assess security implications.
In a fill-as-defense model, overlay links don't care about the physical between, only that whatever the two overlay nodes agreed about bandwidth and timing expectations they have for each other is upheld between them.
This is the first step to such defense, yes.
If it isn't, they or their internet path between is under attack either by nature or adversary, the contract A and B negotiated between themselves will fault, and they should sleep / drop / renegotiate, before passing data for the overlay again.
Ack.
The problem is the node that was attacked with a latency injection attack - he just got attacked, his friends have now dropped him, and the Feds just identified whatever it was he was up/downloading
No, ident requires the timing attack to propagate, thereby exposing the end-to-end speakers. Node X, or its path to some other nodes, was attacked; X's relevant peer nodes connected to X detected that disturbance in X's transmissions, and refused to forward on anything X sends (meanwhile the entire overlay is filling and reclock normalizing everything anyway).

You could cut X's stream off from the left of Y (that Y normally forwards out its right), Y's CPU either creates fill to replace X's bw contract and sends that out its right, or ultimately renegotiates a lower sum of rates with some of its right peers that accounts for loss of X on its left, Y is now free to accept new contract proposals on its left summing up to the rate that X formerly consumed.

Yes, X got depeered, sucks for X, at least until X reconnects and starts upholding policed timing traffic fill contracts expected, but the attack did not succeed in disclosing anyone who was talking to who end-to-end.

It's entirely plausible and reasonable that in decades post-911 post-Snowden, G* may now have laughably trivial end-to-end who-to-who traffic analysis attacks that none of today's overlays are strongly resistant against. Most of today's overlay networks design-think predates one or both of those revelations and confirmations, and applies little of the new crypto and network research that has evolved since either of them.

You need to come up with projects and overlays whose whitepapers clearly indicate solid resistance measures to G* TA (instead of disclaiming / dodging / burying / ignoring the topic as is the norm today), and whose analysis whitepapers by external reviewers cannot find fault with their approach (certainly at least not to any materially use case significant odds of success, unlike with today's overlays).

There are probably a variety of design and tech that can be applied towards that. Both for general purpose overlays, and app specific overlays.

Have fun creating and deploying them :)
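A minimal sketch of Y's bookkeeping in the paragraph above (rates and names are invented): after depeering X on its left, Y either keeps its right-side contracts intact and substitutes locally generated fill, or frees the lost rate for new left-side proposals.

class RelayY:
    def __init__(self, left_rates: dict, right_rates: dict) -> None:
        self.left_rates = dict(left_rates)     # peer -> cells/sec inbound
        self.right_rates = dict(right_rates)   # peer -> cells/sec outbound

    def depeer_left(self, peer: str) -> int:
        lost = self.left_rates.pop(peer, 0)
        # Right-side contracts stay unchanged; the missing wheat is simply
        # replaced by chaff generated in Y's CPU. Alternatively Y could
        # renegotiate its right-side sum downward here.
        return lost   # rate now free for new left-side contract proposals

y = RelayY(left_rates={"X": 100, "W": 50}, right_rates={"Z": 150})
print(y.depeer_left("X"))   # 100 cells/sec freed after X is depeered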
On Sun, Oct 27, 2019 at 08:50:03PM -0400, grarpamp wrote:
The problem is the node that was attacked with a latency injection attack - he just got attacked, his friends have now dropped him, and the Feds just identified whatever it was he was up/downloading
No, ident requires the timing attack to propagate, thereby exposing the end-to-end speakers. Node X, or its path to some other nodes, was attacked; X's relevant peer nodes connected to X detected that disturbance in X's transmissions, and refused to forward on anything X sends (meanwhile the entire overlay is filling and reclock normalizing everything anyway).
And in this case, let's say X was uploading the next helo gunship collateral murder video, was half way through, is attacked, does not get to finish uploading. GPA was monitoring the upload, thus why GPA attacked X (and others in their target set), and due to the [temp|full] link dropout (latency trough), GPA IDs node X as the uploader. But, same problem even if link not dropped, and just a latency trough - target X is now IDed by GPA. In both cases a) X's uplink dropped, or b) not dropped, the destination of X's upload sees the [temp|permanent] dropout.
You could cut X's stream off from the left of Y (that Y normally forwards out its right), Y's CPU either creates fill to replace X's bw contract and sends that out its right, or ultimately renegotiates a lower sum of rates with some of its right peers that accounts for loss of X on its left, Y is now free to accept new contract proposals on its left summing up to the rate that X formerly consumed.
Yes. That is how we must operate - remaining (non attacked) nodes, must continue per chaff fill and renegotiation protocols.
Yes, X got depeered, sucks for X, at least until X reconnects and starts upholding policed timing traffic fill contracts expected, but the attack did not succeed in disclosing anyone who was talking to who end-to-end.
Yes - if X was uploading to a target passively monitored by GPA, GPA should not be able to detect any traffic troughs or dropouts, since the upload target's peers maintain chaff fill in the face of the wheat dropout. Only X is IDed, not also X's upload target. This is certainly an improvement on the status quo. So our problem set is now thankfully reduced to "what can X do to improve his own situation", and the answer appears to be "dark links" of some form.
It's entirely plausible and reasonable that in decades post-911 post-Snowden, G* may now have laughably trivial end-to-end who-to-who traffic analysis attacks that none of today's overlays are strongly resistant against. Most of today's overlay networks design-think predates one or both of those revelations and confirmations, and applies little of the new crypto and network research that has evolved since either of them.
If you (or anyone following along) are aware of specific cutting edge research that you believe applies to this problem domain, please of course post a link, and ideally a description. Now is the time to consider the cutting edge, and what we can incorporate.
You need
s/you/we/
to come up with projects and overlays whose whitepapers clearly indicate solid resistance measures to G* TA (instead of disclaiming / dodging / burying / ignoring the topic as is the norm today), and whose analysis whitepapers by external reviewers cannot find fault with their approach (certainly at least not to any materially use case significant odds of success, unlike with today's overlays).
Logic is logical for a reason. Those who write whitepapers have the time and motivation to do so. We are writing here a whitepaper without polishing it for publishing. Anyone motivated to polish that which we discuss, into a whitepaper, is encouraged to do so. Short of that, we're coming to conclusions as best we can. Anyone who identifies errors in our logic or unspoken assumptions etc, is encouraged to name them at the moment you cognize them.
There are probably a variety of design and tech that can be applied towards that. Both for general purpose overlays, and app specific overlays.
Have fun creating and deploying them :)
I personally intend to focus primarily, though not exclusively, on a packet switched network layer. To this end we have a few more concepts to lay out and tear apart yet.

Also, we should continue to put forth and discuss certain concepts which app layers can build on, in particular distributed cache incentivization. A big-I Internet replacement must target both the comms, plus the content storage, not just one or the other - on occasion the dynamic between two core concepts gives rise to a breakthrough.

E.g., if we can sufficiently incentivize decentral opportunistic caching, we are very close to replacing cloudflare, akamai and youtube. And with some DHT, P2P, possibly git style content distribution mech on top of opportunistic intrinsically incentivized caching, we may effectively and readily eliminate the need for all webservers - where every end user node can at any time publish anything, as long as he has an audience interested in his publishing, and that audience naturally caches that content, at the minimum for the time required to view that content and/ or decide whether to store it longer term in a local library.

How close we can get to replacing the Internet as we know it is yet to be determined, and there is apparently no inherent reason we cannot achieve this goal. The reason the centralizers exist is due to the nature of profit maximizing "content producers" holding the tide back against what is now possible.

We begin from the present moment. We are not bound by the present manifestations of past intentions. We carve out our collective future with our intentions, will and actions today.

Create our world,
GPA was monitoring the upload, thus why GPA attacked X (and others in their target set), and due to the [temp|full] link dropout (latency trough), GPA IDs node X as the uploader. But, same problem even if link not dropped, and just a latency trough - target X is now IDed by GPA. In both cases a) X's uplink dropped, or b) not dropped, the destination of X's upload sees the [temp|permanent] dropout.
No. First principles... under a properly enforced and regulated background of fill traffic network, there are no such observables for GPA to monitor, so nothing to attack (which a GPA doesn't do), no ID to be made.

Now if a GAA is an agent running the central server receiving the upload, and they're fast enough to recognize the content on their filesystem as such before it completes, they can try to DoS back their own overlay hops toward the uploader in realtime.

At first that would seem hard to defend against.

Then you realize that again, a proper fill contract aware network will auto detect and depeer any link that gets DoS, thus the upload stops well before it can ever be tracked back. And GAA has to literally sweep through the entire overlay spiking at tens of thousands to millions of nodes (because every node is a relay, linear search odds) trying to find the point source that causes the server upload to stall. And if the user has selected a higher hopcount for their side, odds are that much higher that GAA will spike one of those first again depeering the path and ending the upload before discovery.

The upload might continue a bit slower in a multipath or scatter mixnet, or if either the depeered node renegotiates back in, or the source repaths another circuit around. Whichever way the GAA has discovered nothing and is back to step one at that point... the depeering problem.

Nodes might publish a number of depeering tolerance parameters, or network metrics seen of peers, such that clients could use them when constructing paths on a continuum from more reliable to more secure as desired. The particulars of what observables should drive depeering thresholds, and whether an overlay can perform whatever self management tasks well... all need to be tested out.

Regardless, sensitive users should not upload to any plausibly owned / suspect / scene central servers, and instead should insert into any distributed encrypted file storage overlays, IPFS, generic surface websites or hosts, upload encrypted blobs and release the keys elsewhere, use wifi, vpn, etc.

Any candidate network for any nextgen award in the subject line should raise the bar significantly such that only the most sensitive and smallest number of users would have reason to argue.

That is simply not the case with today's networks.
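A back-of-envelope estimate of those linear search odds (an assumption-laden sketch, not a proof): the GAA spikes nodes in some order, hitting ANY node on the path depeers it and stalls the upload, so discovery per attempt requires the source to be the first path node hit, roughly 1/hops.

import random

def p_source_found_first(hops: int, trials: int = 100_000) -> float:
    found = 0
    for _ in range(trials):
        order = list(range(hops))   # index 0 is the source, rest are relays
        random.shuffle(order)       # order in which the sweep reaches the path
        if order[0] == 0:           # source spiked before any relay on the path
            found += 1
    return found / trials

for h in (3, 5, 7):
    print(f"{h} hops: ~{p_source_found_first(h):.2f} chance per sweep attempt")
# Analytically this is just 1/h: a higher hopcount lowers the per-sweep odds,
# and every failed attempt ends the upload before anything is discovered.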
incentivize
Incentive is that secure network platforms provide generic transport for cool apps and services that people want to use. Most users are not going to hack their nodes to disable giveback, and/or the network will detect that, or have natural consumption limits. Again this ventures into generic transport network built for mass utility vs network built for some specific app or class of app... the former has probably not yet received its fair share of development consideration over the last decades. As before, have fun creating and deploying new stuff.
Awesome! Thank you for this clarity. I had not grokked the below.

On Mon, Oct 28, 2019 at 01:33:29AM -0400, grarpamp wrote:
GPA was monitoring the upload, thus why GPA attacked X (and others in their target set), and due to the [temp|full] link dropout (latency trough), GPA IDs node X as the uploader. But, same problem even if link not dropped, and just a latency trough - target X is now IDed by GPA. In both cases a) X's uplink dropped, or b) not dropped, the destination of X's upload sees the [temp|permanent] dropout.
No. First principles... under a properly enforced and regulated background of fill traffic network, there are no such observables for GPA to monitor, so nothing to attack (which a GPA doesn't do), no ID to be made.
Now if a GAA is an agent running the central server receiving the upload, and they're fast enough to recognize the content on their filesystem as such before it completes, they can try to DoS back their own overlay hops toward the uploader in realtime.
At first that would seem hard to defend against.
Then you realize that again, a proper fill contract aware network will auto detect and depeer any link that gets DoS, thus the upload stops well before it can ever be tracked back. And GAA has to literally sweep through the entire overlay spiking at tens of thousands to millions of nodes (because every node is a relay, linear search odds) trying to find the point source that causes the server upload to stall. And if the user has selected a higher hopcount for their side, odds are that much higher that GAA will spike one of those first again depeering the path and ending the upload before discovery.
The upload might continue a bit slower in a multipath or scatter mixnet, or if either the depeered node renegotiates back in, or the source repaths another circuit around. Whichever way the GAA has discovered nothing and is back to step one at that point... the depeering problem.
Nodes might publish a number of depeering tolerance parameters, or network metrics seen of peers, such that clients could use them when constructing paths on a continuum from more reliable to more secure as desired. The particulars of what observables should drive depeering thresholds, and whether an overlay can perform whatever self management tasks well... all need to be tested out.
Regardless, sensitive users should not upload to any plausibly owned / suspect / scene central servers, and instead should insert into any distributed encrypted file storage overlays, IPFS, generic surface websites or hosts, upload encrypted blobs and release the keys elsewhere, use wifi, vpn, etc.
Any candidate network for any nextgen award in the subject line should raise the bar significantly such that only the most sensitive and smallest number of users would have reason to argue.
That is simply not the case with today's networks.
incentivize
Incentive is that secure network platforms provide generic transport for cool apps and services that people want to use. Most users are not going to hack their nodes to disable giveback, and/or the network will detect that, or have natural consumption limits. Again this ventures into generic transport network built for mass utility vs network built for some specific app or class of app... the former has probably not yet received its fair share of development consideration over the last decades.
As before, have fun creating and deploying new stuff.
On Mon, 28 Oct 2019 01:33:29 -0400 grarpamp <grarpamp@gmail.com> wrote:
Then you realize that again, a proper fill contract aware network will auto detect and depeer any link that gets DoS,
The 'literature' I've seen so far (up to 2007) mentions Pipe-Net 1.1 a few times and they say that when a Pipe-Net link is attacked, the whole network shuts down in self-defense. Which isn't too practical or robust...

I haven't looked into how the thing actually works yet...

http://www.weidai.com/pipenet.txt

...but I'm puzzled by the fact that the people commenting on pipe-net (like adam back) don't suggest the apparently obvious improvement of cutting links selectively instead of shutting down the whole thing.
On Mon, Oct 28, 2019 at 05:19:08PM -0300, Punk - Stasi 2.0 wrote:
On Mon, 28 Oct 2019 01:33:29 -0400 grarpamp <grarpamp@gmail.com> wrote:
Then you realize that again, a proper fill contract aware network will auto detect and depeer any link that gets DoS,
The 'literature' I've seen so far (up to 2007) mentions Pipe-Net 1.1 a few times and they say that when a Pipe-Net link is attacked, the whole network shuts down in self-defense. Which isn't too practical or robust...
Indeed. Grarpamp's presentment makes much sense on this - nodes don't drop all links (and domino this out) just because one went bad. For 1995, pipe-net was the cutting edge. Versus the newer Tor: besides more onion/ less packet switch, Tor just seems to have introduced TCP as the base layer to f@#$ things up - although arguably TCP makes life easier for the prototyper (no having to handle the things TCP handles, like re-sending, re-sequencing etc). If Tor weren't so funded, we could argue it's just a prototype, but since it is so well funded, the more plausible explanation is that its problems are intended.
I haven't looked into how the thing actually works yet...
http://www.weidai.com/pipenet.txt
...but I'm puzzled by the fact that the people commenting on pipe-net (like adam back) don't suggest the apparently obvious improvement of cutting links selectively instead of shutting down the whole thing.
On Mon, Oct 28, 2019 at 05:19:08PM -0300, Punk - Stasi 2.0 wrote:
(I won't focus on errors in this paper, e.g.:
- the description of the return path packet encryption (dest to origin) appears in error - but that's not interesting afaics.
- "Anonymity in this scheme is asymmetric - the caller is anonymous, but not the receiver" seems an incorrect assertion, since N0 is known to N1 at least, albeit N0's content may well not be known to N1, and N0's destination point may not be known to N1.
)

This design doc is most useful conceptually for pondering possible elements of our network design, since it's an origin document, usefully laying out some concepts at issue:

It introduces the onion concept (if not by name), where node N0 requests N1 to link to N2 on behalf of N0, and key establishment between N0 and N2 is (presumably) hidden from N1:
"4. Establish a key (K2) with N2 through N1."

It introduces link negotiation:
"3. Request that N1 establish a link id (S2) with N2."

It also introduces the packet switching concept, where in at least one version of such switching, N1 (or N2 etc) could randomize routing on behalf of N0:
"The second node shuffles the packets it receives during a time unit and forwards them in random order to others."

Exactly how this is achieved is not yet clear. Possibles:

A. N0 establishes with N1 (by usual request/ contract proto) multiple links from N1 to nodes N2, N3, N4 etc., and N0 also or thereafter requests of N1, randomized outgoing packet shuffling for N0's packets (sent from N0 to N1).
- this leaves ultimate logical routing control in the hands of N0
- latency escalation (over a multi hop route) should be estimable by N0
- ultimate (effective) network topology may be simpler to reason about, control and analyze

B. N0 links to N1, and simply hands off all routing decisions for all packets to N1.
- This might be viable if N1 is a known friend node.
- In this routing protocol, N0 still needs to nego QoS requests with N1, to establish what total volume (in and/ or out) and what b/w rates, N1 is willing to make available to N0, and for what durations.
- We must always keep in mind that meat space 'known friends', may well be using hardware/ software which is compromised (unknown to the friend).
- These protocols don't have to operate mutually exclusively to one another - they can be used in parallel, along with other routing protocols, such as strict N0 controlled end to end routes.
- We must not mistake the feeling of control ("ACKed requests"), with actual control. When we say N0 ultimately makes and therefore "controls" all routing decisions/ routing types used, what we really mean is, N0 "specifies" all routing types it is willing to use, within each of its respective "link establishment requests".
- We of course must also always keep in mind that we are talking virtual links, not physical links, and also quite possibly adversarial peer nodes. In the virtual (let alone phys) networking space, a node N1 (at least for suitable QoS link requests if non adversarial) may of its own accord make "randomized" routing decisions or aka "routing decisions, for N0's packets, outside of any specific requests by N0", and such "N1 primary authority" decisions may be adversarial to N0, supportive of N0, or have some other basis.

Of course, iqnets core does the right/ assumed best thing by default - we simply consider all possibilities which may be ultimately faced in any actual network.
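An abstracted sketch of the two PipeNet steps quoted above, with the crypto stubbed out (everything here is illustrative only): N0 asks N1 to create link id S2 to N2, then establishes key K2 with N2 through N1, so N1 relays opaque handshake blobs without learning K2.

import os
import secrets

class Node:
    def __init__(self, name: str) -> None:
        self.name = name
        self.links = {}                    # link_id -> next node

    def establish_link(self, next_node: "Node") -> str:
        link_id = secrets.token_hex(4)     # "S2" in the paper's notation
        self.links[link_id] = next_node
        return link_id

    def relay_handshake(self, link_id: str, blob: bytes) -> bytes:
        # N1 forwards opaque key-establishment blobs; it cannot read them.
        return self.links[link_id].answer_handshake(blob)

    def answer_handshake(self, blob: bytes) -> bytes:
        return os.urandom(32)              # stand-in for N2's key material

n0, n1, n2 = Node("N0"), Node("N1"), Node("N2")
s2 = n1.establish_link(n2)                              # step 3: link id S2
k2_share = n1.relay_handshake(s2, b"opaque-dh-from-N0") # step 4: K2 via N1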
Re randomized fan outs, here is a bit of a conundrum/potential opportunity - in the balance between various options available to us:
 - Does it make sense for N0 to leave certain routing decisions to another node in its route?
 - Is the "fan out + randomize" concept identifiably useful for certain use cases?
 - For say N2 to do a randomized fan out on incoming packets from N0 (say via N1), N2 will have to buffer the incoming packets over time period units of T, so that it has > 1 packet to on-send in a randomized fashion; this naturally introduces latency - which of course is acceptable, even desirable, depending on use case - we're now conceptually heading into random latency/high latency mix net design territory.

Latency - an important consideration which the above paper effectively raises is:
 - the latency effect on route establishment, and
 - the latency effect on packet traversal through established routes,
for different switching/routing models. This consideration needs more thought, especially in relation to various networking (i.e. end user app) use cases.

--------------------------

Alert: incoming thought, must get it down before it flees my lonely neurone.

Headroom, or rather resource, reservation requests (sketch below):
 - N0 could make "headroom" reservation requests of another node.
 - Is this the same as simply a chaff filled link? No. A resource reservation request is an "in advance of being used" request for a node to reserve or keep aside some resource on my behalf until I need to use it, according to params, e.g. "reserve for time period T", resource magnitude X, etc. E.g.:
   - bandwidth reservation (I want to d/l a 4GiB movie, I just don't have time right now - please reserve that for me, to use within the next 5 days)
   - low latency link reservation (I want you to always reserve at least 1 telephone call's worth of low latency link on my behalf, for when I want to make phone calls - and of course hand out the rest as you choose)
   - cache reservation, although without further thought, I prefer the undertaking/promise model
 - such reservation requests perhaps make most sense between meat space friends, but there's of course no reason to limit them to any particular node type
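A minimal sketch of what such a reservation request might carry, purely illustrative (field names are invented; a real proto would also need the ack|nak / undertaking semantics discussed elsewhere):

from dataclasses import dataclass

@dataclass
class ReservationRequest:
    requester: str     # e.g. "N0"
    resource: str      # "bandwidth" | "low_latency_link" | "cache"
    magnitude: int     # bytes for bandwidth/cache, concurrent links for latency
    valid_for_s: int   # reservation window ("use within T")

# "I want to download a 4 GiB movie some time in the next 5 days."
movie = ReservationRequest("N0", "bandwidth", 4 * 2**30, 5 * 86_400)

# "Always keep one phone call's worth of low latency link aside for me."
call = ReservationRequest("N0", "low_latency_link", 1, 365 * 86_400)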
On Tue, Oct 29, 2019 at 11:27:43AM +1100, Zenaan Harkness wrote:
Re randomized fan outs, here is a bit of a conundrum/ potential opportunity - in the balance between various options available to us:
- Does it make sense for N0 to leave certain routing decisions to another node in its route?
- Is the "fan out + randomize" concept identifiably useful for certain use cases?
- For say N2 to do a randomized fan out on incoming packets from N0 (say via N1), N2 will have to buffer the incoming packets over time period units of T, so that it has > 1 packet to on-send in a randomized fashion;
The above is incorrect: N2 could "round robin" - or rather, randomized round robin - the packets incoming on the route from N0 (via node N1) to node N2.

Of course, this would introduce visibility if not chaff filled, if we are working with one packet at a time. Maintaining link rate means sending one packet per time period, and sending chaff if we don't have wheat.

Therefore, for fan out to be network efficient, fan out links need to be proportionally smaller b/w than the incoming link, which is another "obvious visibility" issue in relation to G*A.
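For concreteness, a rough Python sketch of randomized round robin fan out that holds each outgoing link at constant rate by substituting chaff when no wheat is queued. Class and method names are made up, the 512 byte cell size is borrowed from the ping example further down, and real chaff/padding would of course need to be indistinguishable from encrypted wheat:

import os
import random
from collections import deque

CELL = 512  # assumed fixed cell size

class FanOutForwarder:
    def __init__(self, out_links):
        self.out_links = list(out_links)              # e.g. ["N2", "N3", "N4"]
        self.queues = {l: deque() for l in self.out_links}
        self._cycle = []

    def _next_link(self):
        # Randomized round robin: walk a shuffled copy of the link list,
        # reshuffling once the walk completes.
        if not self._cycle:
            self._cycle = self.out_links[:]
            random.shuffle(self._cycle)
        return self._cycle.pop()

    def enqueue_wheat(self, cell: bytes):
        # Assign an incoming wheat cell to the next link in the randomized cycle.
        self.queues[self._next_link()].append(cell.ljust(CELL, b"\x00"))

    def tick(self):
        # One send slot: every outgoing link emits exactly one cell - wheat if
        # queued, chaff otherwise - so each link runs at constant rate.
        return {l: (self.queues[l].popleft() if self.queues[l]
                    else os.urandom(CELL))             # chaff
                for l in self.out_links}

Note the efficiency point above falls straight out of this sketch: every tick emits one cell per outgoing link, so total outgoing b/w is (number of fan out links) x (incoming link rate) unless the fan out links run proportionally slower.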
this naturally introduces latency - which of course is acceptable, even desirable, depending on use case - we're now
and of course, undesirable in other use cases
conceptually heading into random latency/ high latency mix net design territory.
On Tue, Oct 29, 2019 at 01:08:50PM +1100, Zenaan Harkness wrote:
On Tue, Oct 29, 2019 at 11:27:43AM +1100, Zenaan Harkness wrote:
Re randomized fan outs, here is a bit of a conundrum/ potential opportunity - in the balance between various options available to us:
- Does it make sense for N0 to leave certain routing decisions to another node in its route?
- Is the "fan out + randomize" concept identifiably useful for certain use cases?
- For say N2 to do a randomized fan out on incoming packets from N0 (say via N1), N2 will have to buffer the incoming packets over time period units of T, so that it has > 1 packet to on-send in a randomized fashion;
The above is incorrect: N2 could "round robin" - or rather, randomized round robin - the packets incoming on the route from N0 (via node N1) to node N2.
Of course, this would introduce visibility if not chaff filled, if we are working with one packet at a time.
Maintaining link rate means sending one packet per time period, and sending chaff if we don't have wheat.
... and time clocking - if we are sending only one packet per minute, and that's our only link, that packet (either wheat or chaff) should presumably be sent at right around the same time (same second) every minute. This is an assumption at the moment, but let's consider the following example:

 - nodes A, B, C and D
 - links AB, BC and BD
 - ping circle between A and D
 - ping rate is 1 ping per minute, which may be a wheat ping, or a chaff ping; so our ping cycle in this example is 60 seconds
 - let's say ping data content size is 32 bytes, which may at any time be a short text message rather than a ping
 - let's say packet size for this network is always fixed at 512 bytes, so a ping packet is always padded out to 512 bytes
 - most of the time, a ping is sent in the 59th second of each minute (send outside of that 1 second, and a node is out of specification, and suffers a "performance/trust metric reduction"); one second is a long time, so pings may appear to cascade between nodes, within that second, in the "direction" of the ping; but in any case, these "once per minute" pings must complete before the 60th second to be in spec
 - a consequence of this proto is that if you just sent your ping (during the 59th second), and before the 59th second concludes you receive a wheat ping from an incoming node which needs to be forwarded, you must cache that wheat for another 60 seconds - you cannot send a second ping until the next ping cycle; importantly, even if a new node (say N) connects to node B around say the 30 second mark, and immediately sends a wheat ping (arguably out of spec, but to be debated), node B must still just cache that wheat ping until the 59th second arrives; this proto gives rise to the compound (maximal) ping (and therefore message send) latency along any particular route
 - back to our example - if:
   - the AB ping cycle is occurring at the 30 second point (it's an independent link you see), and
   - the BC and BD ping cycles are happening at say the 59 second point, and
   - assuming the AB ping is usually chaff,
   then, when A attempts to send a wheat ping to D, it first of course sends this to B, at the 30 second point in the cycle, and to state the obvious, if node B were to immediately forward that wheat (encrypted, with chaff padding) on to node D, then that particular packet would absolutely stand out of the crowd

Rule: wheat pings must always be queued and only forwarded according to the outgoing ping send cycles (see the sketch after this section).

Multicast ping/message:
 - although we can readily conceive of multi-casting a message within this "ping cycle" protocol, we do need to design our protocols against escalation attacks; so, e.g., what might look like a multi-cast to the end user may, at the packet layer, simply be an array of "target nodes" or "target routes", so that a node is bound to scale (at least to some minimal degree) its outgoing b/w requirement in order to "ping many", and is thus locked in to the request + ack|nak "good behaviour" relationship/link establishment between nodes
 - notwithstanding, a twitter replacement may provide for multicast, perhaps at least where a tweet is public and therefore not encrypted - our network may well be the ultimate decentral solution to the central+censored Twatter problem.
 - we can now imagine a git style content addressed tweet:
   - multi-casting is simply real-world relationships manifested in the network, where a twatter twats to her followers, who correspondingly twat to their followers, etc.
   - the initial multi-cast/broadcast/twat would presumably send the actual text as well as the content address (SHA256 or etc), and subsequent re-twats would either forward (twat), or embed (include in a new twat commenting on the original twat), the SHA256 of the original twat (not the full text); actual protocols yet to be thunked out...
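Putting the queue-and-forward-only-on-cycle rule into a rough sketch (constants taken from the example above: 60 second cycle, 32 byte payload, 512 byte packets; class and method names are invented, and the zero-byte padding is a placeholder where a real net needs indistinguishable ciphertext):

import os
from collections import deque

CYCLE_S, PAYLOAD, PACKET = 60, 32, 512

class PingLink:
    def __init__(self, peer, phase_s):
        self.peer = peer
        self.phase_s = phase_s        # e.g. 59 for BC/BD, 30 for AB
        self.wheat = deque()          # queued 32-byte messages to forward

    def queue_wheat(self, msg: bytes):
        # Cache the wheat; it must NOT be forwarded before our send second.
        self.wheat.append(msg[:PAYLOAD])

    def maybe_send(self, now_s: int):
        # Called every second; emit exactly one packet per cycle, in this
        # link's phase second - wheat if queued, else chaff - padded to 512 B.
        if now_s % CYCLE_S != self.phase_s:
            return None
        body = self.wheat.popleft() if self.wheat else os.urandom(PAYLOAD)
        return body.ljust(PACKET, b"\x00")

# Node B in the example: wheat arriving from A (second 30 of the AB cycle)
# sits in the BD queue until BD's own send second comes around.
bd = PingLink("D", phase_s=59)
bd.queue_wheat(b"hello D")            # arrived at second ~30 via AB
for t in range(120):
    pkt = bd.maybe_send(t)            # None except at t == 59 (wheat) and t == 119 (chaff)

This is only the per-link clocking; the compound latency along a route is then the sum of per-link waits until each outgoing cycle comes around.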
Therefore, for fan out to be network efficient, fan out links need to be proportionally smaller b/w than the incoming link, which is another "obvious visibility" issue in relation to G*A.
This might be mitigated, or the problem even completely eliminated (graph theory math analysis pending, of course), if every node in the net, on average, utilizes the same fan out protocol: since every origin node begins from its own location, and chooses its own random "target nodes to request fan out links from", then on average each node should have the same incoming and outgoing b/w requirements - "it'll all average out".

That said, the problem cases are always the edge cases (insufficient nodes presently in the network, insufficient friends, insufficient "yes, will do you a fan out" nodes, etc), and many/most/all of these "degenerate edge cases" the net core code can and will detect, alerting the user before doing something, rather than deceiving the user without any alert.
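A toy check of the "it'll all average out" intuition - emphatically not the pending graph theory analysis - where every node picks k random fan out targets; expected in-degree equals out-degree, but a small or sparse net shows exactly the edge-case spread the core code would need to detect and alert on:

import random

def fanout_indegrees(n_nodes=1000, k=3, seed=0):
    # Every node requests fan out links to k targets chosen uniformly at random.
    rng = random.Random(seed)
    indeg = {m: 0 for m in range(n_nodes)}
    for node in range(n_nodes):
        others = [m for m in range(n_nodes) if m != node]
        for target in rng.sample(others, k):
            indeg[target] += 1
    return indeg

deg = fanout_indegrees()
print(sum(deg.values()) / len(deg))          # mean in-degree == k == 3
print(min(deg.values()), max(deg.values()))  # but the spread is non-trivial

With 1000 nodes and k = 3 the mean is exactly 3, yet some nodes can still end up with 0 incoming fan out links while others carry several times the average - the degenerate cases live in the tails, which is where detection and user alerts matter.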
this naturally introduces latency - which of course is acceptable, even desirable, depending on use case - we're now
and of course, undesirable in other use cases
conceptually heading into random latency/ high latency mix net design territory.