On Sun, Oct 27, 2019 at 06:22:53AM -0400, grarpamp wrote:
Let's say we buffer 500ms, since that forces attackers to suspend links for over 500ms to identify target nodes, and makes their network node bisections more noticeable to end users: 3.5s
And 500ms may not be enough! Perhaps we should buffer up for a second or more?
10 milliseconds, 10 seconds, 10 minutes, 10 hours or 10 days... speculating on which delay any adversary will use, and buffering to match, removes use cases for the network as a result.
Set speculations to 0 ms, and just depeer from a node when it appears not to be upholding the traffic parameters it said it would be sending you. If you agree to x (possibly supported by an iperf test between you) and your peer starts sending you a chopped-up sine wave outside the allowable deviation, they or their path to you are obviously fucked, just drop them. If it's within the allowable deviation, then buffer and reclock it when sending it back out your NIC, so that whatever naturally identifiable timing remains is not replicated beyond you.
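To pin down what "buffer and reclock" could look like in practice, here's a rough Python sketch (the 10 ms slot, the fixed cell size and all names are assumptions for illustration, not anything agreed or implemented): the outbound side sends exactly one fixed-size cell per agreed slot, a real one if buffered, chaff otherwise, so none of the input timing is replicated onto the output wire.

import queue
import time

SLOT_SECONDS = 0.010          # assumed per-link packet interval (10 ms)
CHAFF = b"\x00" * 512         # placeholder chaff cell, same size as real cells

def reclock_output(inbound: "queue.Queue[bytes]", send) -> None:
    """Send exactly one fixed-size cell per slot: real if available, else chaff."""
    next_slot = time.monotonic()
    while True:
        try:
            cell = inbound.get_nowait()   # real traffic, already padded to cell size
        except queue.Empty:
            cell = CHAFF                  # chaff-fill: keep the wire rate constant
        send(cell)
        next_slot += SLOT_SECONDS
        time.sleep(max(0.0, next_slot - time.monotonic()))

In practice the inbound queue would be fed by the link's receive path, and send() would write one padded cell to the NIC.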
Ack. Yes, each link, BA and AC, must be maintained according to its own agreements, and is bound by the consequences of not meeting those agreements.

So let's analyse this: nodes A, B and C. There is a link BA, and a link AC. B is sending packets to A, and A is routing them (per pre-arranged agreement) to C.

 - A may have had some unused capacity in a currently maintained link AC, which A allocates to serve the request by B (of A) to route packets to C (at least for a time),
 - or, A may have had to request of C an increase in the capacity of the link AC (which request node C may ack or nack at its discretion),
 - or, A may have had to establish a new link AC as a result of B's request.

In any case, we assume:

 - C has granted A's request for a link,
 - C is not told why A is requesting the new link (or an increase in b/w on this link),
 - and in all cases, A must maintain its contract with C.

So, B begins to use link BA, and indirectly link AC. Now, B suddenly comes under attack, and is unable to maintain its previously requested packet sending rate (or latency/jitter) to node A:

 - This should not affect A's link with C (AC).
 - If, in any particular packet sending time period T, A has not received sufficient packets from B to send on to C, A simply chaff-fills the link AC, so that a passive onlooker cannot see any difference in the AC link behaviour.
 - Node C is satisfied with this, since A is keeping up its end of the bargain, albeit with apparent under-utilization of the link AC.
 - Per config, A could ramp down, or kick, the incoming link BA, and consequently ramp down its outgoing link AC.
 - From A's point of view, B's metrics just took a negative hit, and A records this fact; in the case B is a meat space friend, the operator of node A may either noisily (on the phone or in email) or quietly (in person) enquire of his friend operating node B, "wtf?"

The issue is that, for A to maintain AC properly, A has options in relation to a "pause + catch-up burst" from B (on link BA):

 1. Duplicate the "trough + spike", i.e. "catch up" with B - if this is SOP (standard operating procedure), the "trough + spike" will propagate along the entire path, so we're not doing this!
 2. Of course A chaff-fills the trough in its link to C, but what about the "catch-up spike" from B? A could cache that spike ongoingly, until the stream is over; this may be desirable from B's perspective, but a lot of spiking could cause unpredictable, out-of-control and therefore problematic "spike caching" - although this could be readily handled by requiring B to pause a second time, just to allow A to drain its "spike cache for B" (a sketch of this follows below).
 3. At some point, if the behaviour is bad enough, A simply kicks B ("your network behaviour is sufficiently bad, bye bye").
 4. Do something else?

Nodes, in this case B, are expected to display "decent" behaviour in relation to such spikes, even if say B is unable to operate without such spikes (perhaps there are network-local conditions making spikes on B's outgoing link unavoidable?), and in this case "decent" behaviour could mean: "if you've had troughs and spikes, you must add troughs, since I (node A) will only cache at most N packets for you; after that I will simply drop packets and/or kick your link."
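Something like the following rough Python sketch of options 2 and 3 (the cap N=256, the overflow tolerance and all names are assumptions, not agreed parameters): A keeps a bounded per-peer spike cache, drops anything beyond the cap, and kicks the link after repeated overflows; the output side drains one cached cell per slot and chaff-fills AC when the cache is empty.

from collections import deque
from typing import Deque, Optional

MAX_SPIKE_CACHE = 256   # assumed cap N on cached catch-up cells per peer
MAX_OVERFLOWS = 3       # assumed tolerance before the link is kicked

class IncomingLink:
    """State A keeps for one incoming link such as BA."""

    def __init__(self, peer_id: str) -> None:
        self.peer_id = peer_id
        self.spike_cache: Deque[bytes] = deque()
        self.overflows = 0
        self.kicked = False

    def on_receive(self, cell: bytes) -> None:
        """Absorb one cell from the peer, never caching beyond the agreed cap."""
        if self.kicked:
            return
        if len(self.spike_cache) >= MAX_SPIKE_CACHE:
            self.overflows += 1            # feeds the peer's behaviour metrics
            if self.overflows > MAX_OVERFLOWS:
                self.kicked = True         # option 3: "bye bye"
            return                         # option 2's limit: drop the excess cell
        self.spike_cache.append(cell)

    def next_cell(self) -> Optional[bytes]:
        """One cached cell per output slot; caller chaff-fills AC when this is None."""
        return self.spike_cache.popleft() if self.spike_cache else None

Whether the cap and the tolerance are fixed per link or negotiated as part of the BA agreement is an open policy question.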
By maintaining node behaviour metrics, and by propagating such metrics (we need to think about this from a privacy vs network sanity perspective), we may be able to identify "latency creators" (and therefore problematic nodes and/or hardware), if not immediately then at least after attacks happen, thereby increasing the cost to stalkers. Also, we will have a digital war on our hands, a massive global us vs the GAA stalkers situation, where certain physical links, end points, routers, buildings etc will be identified as stalkers/attackers, and outed (on the network, and IRL - make 'em pay the price for stalking and attacking us).

So, from the perspective of the scenario above, "reclocking" (by A) of "dodgy input" (from B) simply means chaff-filling the output so that A's output link to C appears perfectly normal, and ensuring B does an additional pause to allow A to drain its temporary "B spike cache" - at least, this is my current understanding. If there is an additional concept to "reclocking" that may also be useful, we should consider that too...
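On the metrics side, a rough Python sketch (field names and scoring weights are pure assumptions; whether and what to propagate to other nodes is exactly the open privacy question above): each node records troughs, spikes and dropped cells per peer, so persistent "latency creators" can at least be identified locally after the fact.

import time
from dataclasses import dataclass, field

@dataclass
class PeerMetrics:
    peer_id: str
    troughs: int = 0          # output slots with no real traffic received from this peer
    spikes: int = 0           # catch-up bursts above the agreed rate
    cells_dropped: int = 0    # cells dropped once the spike cache was full
    last_event: float = field(default_factory=time.monotonic)

    def record(self, event: str, count: int = 1) -> None:
        """Tally one observed behaviour event for this peer."""
        if event == "trough":
            self.troughs += count
        elif event == "spike":
            self.spikes += count
        elif event == "drop":
            self.cells_dropped += count
        self.last_event = time.monotonic()

    def score(self) -> float:
        """Crude badness score (higher is worse); propagation policy is out of scope here."""
        return self.troughs + 2 * self.spikes + 5 * self.cells_dropped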