iqnets: opportunistic XYZ, e.g. "begin xmit"

Zenaan Harkness zen at freedbms.net
Wed Oct 30 19:17:54 PDT 2019


Every round trip between 2 peer nodes costs twice the average one-way
link latency.
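
For illustration (invented numbers): at 80 ms one-way latency between
peers, each round trip costs roughly 160 ms, so a link setup needing
three round trips before the first useful byte arrives already costs
roughly 480 ms before any data flows.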

Google did substantial design and testing work on various TCP and
HTTP(S) speedups (e.g. TCP Fast Open and QUIC).

One of the concepts ISTR being bandied about was opportunistic data
sending - as long as the server is happy with the initial packet(s)
sent by the client, and the client has included its initial request(s)
in those initial packets, the server begins to transmit data to the
client without waiting for ACKs of ACKs and further requests.

The concept, at least as I understand it, is to include initial data/
link set up/ etc requests as early as feasible in the p2p link
establishment packets, wherever possible, to minimize or eliminate
round trips and thereby drastically reduce at least the initial
latency, and possibly ongoing latency as well.
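
To make that concrete, here is a minimal Python sketch of a client
that piggy-backs its first request onto the handshake datagram, so a
cooperating server can answer with data in its very first reply (one
round trip in total). The packet layout, magic value and field sizes
are invented for illustration only - this is not an iqnets wire
format:

import socket
import struct

MAGIC = b"IQN0"   # hypothetical protocol identifier, illustration only

def build_hello(client_nonce: bytes, first_request: bytes) -> bytes:
    """HELLO = magic | nonce | request length | request, in one datagram."""
    return (MAGIC + client_nonce
            + struct.pack("!H", len(first_request)) + first_request)

def opportunistic_fetch(server: tuple, request: bytes) -> bytes:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    try:
        # Round trip 1: send the handshake *and* the request together...
        sock.sendto(build_hello(b"\x00" * 8, request), server)
        # ...and, if the peer supports opportunistic send, the first
        # datagram coming back already carries response data.
        data, _addr = sock.recvfrom(65535)
        return data
    finally:
        sock.close()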

For example, think of a web page which contains hrefs to content that
the client only knows exists once it receives the base web page, and
so must otherwise make a new request for each piece of content
identified - if the server knows what the client will need to ask
for, it can automatically/ opportunistically send that data, without
waiting for the obvious request.
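
A minimal sketch of that server-side decision, in the spirit of HTTP/2
server push. The page table and its link lists are invented in-memory
placeholders; a real node would derive the referenced content from its
own store:

# Map path -> (body, paths the body is known to reference).
PAGES = {
    "/index": (b"<a href=/style>...</a><img src=/logo>", ["/style", "/logo"]),
    "/style": (b"body { ... }", []),
    "/logo":  (b"\x89PNG...", []),
}

def serve_with_push(path: str) -> list:
    """Return the requested resource plus everything it references,
    so the client never has to issue the follow-up requests."""
    body, links = PAGES[path]
    responses = [(path, body)]
    for link in links:
        linked_body, _links = PAGES[link]
        responses.append((link, linked_body))   # sent opportunistically
    return responses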

There may be peer node negotiation for different types of
opportunism (a minimal negotiation sketch follows this list), e.g.:

  - on first ever contact with a new node, nodes may choose to auto
    (opportunistically) hand out X cache undertaking/ promise

  - nodes may request: "may I opportunistically assume I may begin a
    'bulk fill' link with you, if I need (or am requested) to do so?"

  - certain types of opportunism may be enabled by default,
    unless overridden by conf

  - other types of opportunism may only (by default) occur by
    negotiation

  - we ought to be diligent in attempting to identify all possible
    types of opportunism that overlay nets may utilise/ provide for
    - so that we may opportunistically optimize for as much
      opportunism as possible

  - do everything possible to minimize the number of round trips

  - algos which scale linearly (or better) with time, rather than
    with b/w, and never exponentially in any unit
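
A minimal sketch of such a negotiation. The capability names, defaults
and conf overrides below are invented placeholders, not a proposed
iqnets format; the point is simply "defaults unless overridden by
conf, and a form of opportunism is usable only if both ends allow it":

# Opportunism types a node understands, with its default stance.
DEFAULT_CAPS = {
    "begin_xmit_on_hello": True,    # enabled by default
    "auto_cache_promise":  True,    # enabled by default
    "bulk_fill_assumed":   False,   # only by explicit negotiation
}

def local_caps(conf_overrides: dict) -> dict:
    """Defaults, unless overridden by conf."""
    caps = dict(DEFAULT_CAPS)
    caps.update(conf_overrides)
    return caps

def negotiate(ours: dict, theirs: dict) -> set:
    """A form of opportunism is usable only if both ends allow it."""
    return {name for name, ok in ours.items() if ok and theirs.get(name, False)}

# Example: the peer declines cache promises and bulk fill, so only
# "begin_xmit_on_hello" survives the intersection.
agreed = negotiate(local_caps({}),
                   {"begin_xmit_on_hello": True,
                    "auto_cache_promise": False,
                    "bulk_fill_assumed": False})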


A thought giving rise to the above: notwithstanding our best efforts
to stay "optimally" below link b/w and other limits, these attempts
will, for many reasons and at various times, fail.

A link with a nominal b/w of say 2MiB/s may oscillate or otherwise
vary over time, with 2MiB/s as a ceiling rather than a guarantee.

When we are xmitting at say 1.95MiB/s, and phys nodes out of our
control (such as an ISP's router) shape the link downwards, even
temporarily, UDP packets get dropped.
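
One common mitigation is to pace our own sends some margin below the
estimated link bandwidth and back off when loss is observed. A minimal
token-bucket pacer sketch; the 0.9 margin and 0.8 backoff factor are
illustrative values only, not tuned iqnets parameters:

import time

class Pacer:
    def __init__(self, est_bandwidth_Bps: float, margin: float = 0.9):
        self.rate = est_bandwidth_Bps * margin   # stay below the estimate
        self.tokens = 0.0
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def wait_to_send(self, packet_len: int):
        """Block until the bucket has enough tokens for this packet."""
        self._refill()
        while self.tokens < packet_len:
            time.sleep((packet_len - self.tokens) / self.rate)
            self._refill()
        self.tokens -= packet_len

    def on_loss_detected(self):
        """Upstream shaping kicked in - back off rather than keep dropping."""
        self.rate *= 0.8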

In many, most or (at least to start with) all cases, we will not have
QoS at the (from our overlay net's point of view) physical network
layer.

  - This means we may have minimal to no control over which UDP
    packets get dropped.

  - This QoS issue is a fundamental area we need to expand our
    understanding of.

  - Longer term, iqnets shall be a motivator for end users, ISPs and
    GT-* folks to push QoS down to the link layer.

  - When we do achieve internet wide QoS contracts at the network
    layer, a privacy issue (depending on your threat model) will be
    which QoS modes to utilize - e.g. you may be better off using
    "bulk fill", rather than "telephone audio" class QoS, in order to
    better hide your important phone call (see the socket-level
    sketch after this list).

    - notwithstanding particular use cases and threat models, we
      should not fail to maximise provision for QoS just because some
      use cases won't use it
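
For reference, a minimal sketch of requesting a QoS class at the
socket level by setting DSCP bits via the standard IP_TOS option
(works on Linux; whether the network honours the marking is exactly
the open question above). The class choice mirrors the point in the
list: marking traffic as bulk (CS1) rather than expedited (EF) may
better blend an important call into background traffic:

import socket

DSCP_CS1 = 8     # "lower effort" / bulk class
DSCP_EF  = 46    # expedited forwarding, e.g. telephone audio

def udp_socket_with_dscp(dscp: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The DSCP value occupies the top 6 bits of the (former) TOS byte.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

bulk_sock = udp_socket_with_dscp(DSCP_CS1)   # hide-in-the-noise choice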


