On Tue, Oct 29, 2019 at 01:08:50PM +1100, Zenaan Harkness wrote:
On Tue, Oct 29, 2019 at 11:27:43AM +1100, Zenaan Harkness wrote:
Re randomized fan outs, here is a bit of a conundrum / potential opportunity in the balance between the various options available to us:
- Does it make sense for N0 to leave certain routing decisions to another node in its route?
- Is the "fan out + randomize" concept identifiably useful for certain use cases?
- For say N2 to do a randomized fan out on incoming packets from N0 (say via N1), N2 will have to buffer the incoming packets over time period units of T, so that it has > 1 packet to on-send in a randomized fashion;
The above is incorrect: N2 could "round robin" - or rather, randomized round robin - the packets incoming from route N0 (via node N1) to node N2.
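To make the randomized round robin concrete, here is a minimal sketch (class name and link interface are hypothetical): each "round" visits every outgoing fan-out link exactly once, but in a freshly shuffled order, so per-round load stays even while the packet-to-link mapping stays unpredictable.

```python
import random

class RandomizedRoundRobin:
    """Dispatch incoming packets across fan-out links in rounds.

    Each round covers every link exactly once, in a freshly shuffled
    order - even load per round, unpredictable packet->link mapping.
    (Sketch only; the link objects and their send() are assumed.)
    """
    def __init__(self, links, rng=None):
        self.links = list(links)
        self.rng = rng or random.Random()
        self._order = []  # remaining links in the current round

    def next_link(self):
        if not self._order:
            # Start a new round: reshuffle the full link set.
            self._order = self.links[:]
            self.rng.shuffle(self._order)
        return self._order.pop()

    def dispatch(self, packet):
        link = self.next_link()
        link.send(packet)
        return link
```

Plain uniform-random choice per packet would also work, but lets one link briefly starve; the round-robin variant bounds how long any link goes without a packet.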
Of course, if we are working with one packet at a time, this would introduce visibility unless the link is chaff filled.
Maintaining link rate means sending one packet per time period, and sending chaff if we don't have wheat.
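The "constant link rate" rule above can be sketched in a few lines (queue and packet framing are assumptions for illustration): once per time period the sender emits exactly one packet - wheat if any is queued, chaff otherwise - so an observer sees identical traffic either way.

```python
import queue

WHEAT = "wheat"
CHAFF = "chaff"

def next_packet(wheat_queue):
    """Called once per link time period T.

    Returns a wheat packet if one is queued, else a chaff packet -
    exactly one packet per period, so the observable send rate is
    constant whether or not a real message is moving.
    (Sketch; padding/encryption of the payload is elided.)
    """
    try:
        payload = wheat_queue.get_nowait()
        return (WHEAT, payload)
    except queue.Empty:
        return (CHAFF, None)
```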
... and time clocking - if we are sending only one packet per minute, and that's our only link, that packet (either wheat or chaff) should presumably be sent right around the same time (same second) every minute. This is an assumption at the moment, but let's consider the following example:
- nodes A, B, C and D
- links AB, BC and BD
- ping circle between A and D
- ping rate is 1 ping per minute, which may be a wheat ping, or a chaff ping; so our ping cycle in this example is 60 seconds
- let's say ping data content size is 32 bytes, which may at any time be a short text message, rather than a ping
- let's say packet size for this network is always fixed at 512 bytes, so a ping packet is always padded out to 512 bytes
- most of the time, a ping is sent in the 59th second of each minute (send outside of that one second, and a node is out of specification, and suffers a "performance/trust metric reduction"); one second is a long time, so this may appear to cascade between nodes, within that second, in the "direction" of the ping; but in any case, these "once per minute" pings must be completed before the 60th second, to be in spec;
- a consequence of this proto is that if you have just sent your ping (during the 59th second), and before the 59th second concludes you receive a wheat ping from an incoming node which needs to be forwarded, you must cache that wheat for another 60 seconds - you cannot send a second ping until the next ping cycle; importantly, even if a new node (say N) connects to node B at around the 30 second mark, and immediately sends a wheat ping (arguably out of spec, but to be debated), node B must also still just cache that wheat ping until the 59th second arrives; this proto gives rise to the compound (maximal) ping (and therefore message send) latency along any particular route.
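The ping cycle above can be sketched as follows. The 512-byte packet size, 32-byte payload, and 59th-second send window follow the example; the class and its queueing interface are an illustration, not a finished protocol.

```python
class PingLink:
    """Sketch of the once-per-minute ping cycle described above."""

    PACKET_SIZE = 512   # fixed network packet size (always padded out)
    DATA_SIZE = 32      # ping / short-text-message payload size
    SEND_SECOND = 59    # the in-spec send window within each 60s cycle

    def __init__(self):
        self.wheat_queue = []  # wheat received mid-cycle waits here

    def receive_wheat(self, payload):
        # Incoming wheat is always cached, never forwarded immediately -
        # even if it arrives "out of spec" at, say, the 30 second mark.
        self.wheat_queue.append(payload)

    def tick(self, second_of_minute):
        """Called each second; emits a packet only in the send window."""
        if second_of_minute != self.SEND_SECOND:
            return None  # outside the window: send nothing
        payload = self.wheat_queue.pop(0) if self.wheat_queue else b""
        assert len(payload) <= self.DATA_SIZE
        # Wheat and chaff alike are padded to the fixed packet size,
        # so the two are indistinguishable on the wire.
        return payload.ljust(self.PACKET_SIZE, b"\x00")
```

Note how the compound latency arises: wheat arriving just after second 59 waits a full extra cycle, and that wait can repeat at every hop along a route.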
- back to our example - if:
- the AB ping cycle is occurring at the 30 second point (it's an independent link, you see), and
- the BC and BD ping cycles are happening at say the 59 second point, and
- assuming the AB ping is usually chaff,
then, when A attempts to send a wheat ping to D, it first of course sends this to B, at the 30 second point in the cycle, and, to state the obvious, if node B were to immediately forward that wheat (encrypted with chaff padding) on to node D, then that particular packet would absolutely stand out from the crowd.
Rule: wheat pings must always be queued and only forwarded according to the outgoing ping send cycles.
Multicast ping/message:
- although we can readily conceive of multi-casting a message within this "ping cycle" protocol, we do need to design our protocols against escalation attacks; so, e.g., what might look like a multi-cast to the end user may at the packet layer simply be an array of "target nodes" or "target routes", so that a node is bound to scale (at least to some minimal degree) its outgoing b/w requirement in order to "ping many", and is thus locked into the request + ack|nak "good behaviour" relationship/link establishment between nodes;
- notwithstanding, a twitter replacement may provide for multicast, perhaps at least where a tweet is public and therefore not encrypted - our network may well be the ultimate decentral solution to the central+censored Twatter problem.
- we can now imagine a git style content addressed tweet:
- multi-casting is simply real-world relationships manifested in the network, where a twatter twats to her followers, who correspondingly twat to their followers, etc;
- the initial multi-cast/ broadcast/ twat would presumably send the actual text as well as the content address (SHA256 or etc), and subsequent re-twats would either forward (twat), or embed (include in a new twat commenting on the original twat) the SHA256 of the original twat (not the full text); actual protocols yet to be thunked out...
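A minimal sketch of the content-addressed twat (the message layout is hypothetical; only the SHA-256 addressing follows the text): the original carries text plus address, a re-twat carries only the hash, optionally embedded in a new commenting twat.

```python
import hashlib

def content_address(text):
    """Git-style content address: SHA-256 of the twat body."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def original_twat(text):
    # The initial multi-cast/broadcast sends the actual text
    # as well as its content address.
    return {"text": text, "addr": content_address(text)}

def re_twat(addr, comment=None):
    # A re-twat forwards only the SHA-256 of the original - not the
    # full text - optionally embedded in a new commenting twat.
    msg = {"ref": addr}
    if comment is not None:
        msg["text"] = comment
        msg["addr"] = content_address(comment)
    return msg
```

Followers who lack the referenced text would fetch it by address, exactly as git fetches objects by hash; that fetch protocol is among the things yet to be thunked out.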
Therefore, for fan out to be network efficient, fan out links need to be of proportionally smaller b/w than the incoming link, which is another "obvious visibility" issue in relation to G*A.
This might be mitigated, or the problem even completely eliminated (graph theory math analysis pending of course), if every node in the net, on average, utilizes the same fan out protocol; since every origin node begins from its own location, and chooses its own random "target nodes to request fan out links" from, then on average each node should have the same incoming and outgoing b/w requirements - "it'll all average out". That said, the problem cases are always the edge cases (insufficient nodes presently in the network, insufficient friends, insufficient "yes, will do you a fan out" nodes, etc), and many/most/all of these "degenerate edge cases" the net core code can and will detect, alerting the user before doing something, rather than silently proceeding and deceiving the user without any alert.
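A toy check of the "it'll all average out" claim (the simulation setup is an illustration, not the pending graph-theory analysis): if every node requests the same number of random fan-out links, the average incoming link count per node exactly equals the outgoing count, so average in-b/w matches out-b/w.

```python
import random

def average_incoming(n_nodes, fan_out, rng=None):
    """Each node picks `fan_out` distinct random targets for its
    fan-out links; returns the average incoming-link count per node.

    Since total incoming links == n_nodes * fan_out, the average is
    exactly `fan_out` - incoming and outgoing b/w match on average.
    The interesting failures are the edge cases (too few nodes, too
    few willing "fan out" nodes), which this toy does not model.
    """
    rng = rng or random.Random()
    incoming = [0] * n_nodes
    for node in range(n_nodes):
        others = [i for i in range(n_nodes) if i != node]
        for target in rng.sample(others, fan_out):
            incoming[target] += 1
    return sum(incoming) / n_nodes
```

The average is an identity, not a guarantee: individual nodes can still be over- or under-subscribed, which is exactly what the core code's edge-case detection would need to alert on.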
This naturally introduces latency - which is acceptable, even desirable, in some use cases, and of course undesirable in others - we're now conceptually heading into random latency/ high latency mix net design territory.