UDP-based data xfer protocols
We must consider UDP/IP based protocols.
From here: https://en.wikipedia.org/wiki/Micro_Transport_Protocol
We find links to these:
https://en.wikipedia.org/wiki/Internet2
https://en.wikipedia.org/wiki/LEDBAT
https://web.archive.org/web/20110725080523/http://forum.bittorrent.org/viewt...
https://en.wikipedia.org/wiki/Multipurpose_Transaction_Protocol
https://en.wikipedia.org/wiki/QUIC
https://en.wikipedia.org/wiki/Real-Time_Media_Flow_Protocol
https://en.wikipedia.org/wiki/Stream_Control_Transmission_Protocol
https://en.wikipedia.org/wiki/UDP-based_Data_Transfer_Protocol
UDP-based Data Transfer Protocol (UDT)
https://en.wikipedia.org/wiki/UDP-based_Data_Transfer_Protocol
UDT began in 2001 as a successful attempt to overcome TCP protocol overhead, especially over high-speed networks. UDT is built on top of the User Datagram Protocol (UDP), adding congestion control and reliability control mechanisms. It is an application-level, connection-oriented, duplex protocol that supports both reliable data streaming and partially reliable messaging. In October 2003, the NCDM achieved a 6.8 gigabits per second transfer from Chicago, United States to Amsterdam, Netherlands; during the 30-minute test they transmitted approximately 1.4 terabytes of data.

Initially, UDT was a UDP data stream plus a TCP control stream; later the TCP stream was removed, with control messages sent over the UDP data stream. In UDT3 (2006), congestion control was improved to support "normal internet" UDT connections (better low-bandwidth support), with a significant reduction in CPU and RAM needs. UDT4 is designed to support concurrent UDT streams through the same port, and firewall traversal (rendezvous/hole punching). UDT uses periodic ACKs and NACKs, reducing control traffic (ACKs are proportional to time, not to transfer volume).

AIMD with decreasing increase

UDT uses an AIMD (additive increase, multiplicative decrease) style congestion control algorithm. The increase parameter is a function of the available bandwidth (estimated using the packet-pair technique), so UDT can probe high bandwidth rapidly and slow down for better stability as it approaches maximum bandwidth. The decrease factor is a random number between 1/8 and 1/2, which helps reduce the negative impact of loss synchronisation. In UDT, packet transmission is limited by both rate control and window control: the sending rate is updated by the AIMD algorithm described above, while the congestion window, as a secondary control mechanism, is set according to the data arrival rate on the receiver side.
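The rate-control loop described above can be sketched roughly as follows. This is a minimal illustration of the behaviour, not UDT's actual algorithm: the class name, scaling constants, and initial rates are assumptions; the additive step is modelled as shrinking toward zero as the rate approaches the estimated link capacity, matching the "probe rapidly far from the limit, slow down near it" description.

```python
import random

class UdtStyleAimd:
    """Sketch of UDT-style AIMD rate control (illustrative constants)."""

    def __init__(self, rate_pps: float = 100.0,
                 link_capacity_pps: float = 10000.0):
        self.rate = rate_pps               # current sending rate, packets/s
        self.capacity = link_capacity_pps  # estimated via packet-pair probing

    def on_ack_interval(self) -> None:
        # Additive increase: the step is large while plenty of headroom
        # remains, and shrinks as the rate approaches estimated capacity.
        headroom = max(self.capacity - self.rate, 0.0)
        self.rate += max(headroom / 100.0, 0.01)  # assumed scaling

    def on_loss(self) -> None:
        # Multiplicative decrease by a random factor in [1/8, 1/2],
        # de-synchronising flows that observe the same loss event.
        factor = random.uniform(1.0 / 8.0, 1.0 / 2.0)
        self.rate *= 1.0 - factor
```

The random decrease factor is the interesting part: flows sharing a bottleneck all see the same loss burst, and if they all backed off by the same fraction they would oscillate in lockstep.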
Secure Reliable Transport
https://en.wikipedia.org/wiki/Secure_Reliable_Transport

Secure Reliable Transport (SRT) is an open-source video transport protocol originally developed by Haivision. According to the SRT Alliance, an organisation that promotes the technology, it optimises streaming performance across unpredictable networks, such as the Internet, by dynamically adapting to the real-time network conditions between transport endpoints. This helps minimise the effects of jitter and bandwidth changes, while error-correction mechanisms help minimise packet loss. SRT supports end-to-end encryption with AES.[1] When performing retransmissions, SRT only attempts to retransmit packets for a limited amount of time, based on the latency as configured by the application.[2] The reference implementation of the protocol was originally published under the Lesser General Public License version 2.1,[3] but was relicensed under the Mozilla Public License on 22 March 2018.[4] SRT is supported in the free software multimedia frameworks GStreamer and FFmpeg, and in the VLC free software media player.[2][5]

Notes:
- low-latency connections want no (or minimised) buffering
- bulk fill traffic does not care about buffering, it just wants everything to be (eventually) delivered
- net/switch control packets want no buffering / maximum priority

So buffering desirability is related to the traffic class - we could have a QoS field in each packet.

An open question is MTU and packet size - do we have say two or three packet sizes corresponding to traffic classes, or a single packet size, and if so, what size should that be? In the case where we are sending net/switch control messages at high priority, the average (encrypted) packet size containing such control messages might be very small, say 20 bytes. Even if a "net/switch" control message packet needs to be say 150 bytes to contain the largest such messages, do we want to burden such packets with having to be a set size of ~MTU, say 1500 bytes?
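The traffic-class idea above could be sketched as a per-packet QoS field plus per-class maximum payload sizes. Everything here is hypothetical design material for the open question, not an existing format: the class names, the 1-byte field layout, and the size limits are all assumptions.

```python
import struct
from enum import IntEnum

class TrafficClass(IntEnum):
    CONTROL = 0      # net/switch control: max priority, no buffering
    LOW_LATENCY = 1  # interactive: minimal buffering
    BULK = 2         # bulk fill: buffering fine, eventual delivery

# Hypothetical per-class maximum payload sizes (bytes): small frames
# for control traffic, something near MTU for bulk.
MAX_PAYLOAD = {
    TrafficClass.CONTROL: 150,
    TrafficClass.LOW_LATENCY: 576,
    TrafficClass.BULK: 1400,
}

def build_packet(tclass: TrafficClass, payload: bytes) -> bytes:
    """Prefix the payload with a 1-byte QoS field (assumed layout)."""
    if len(payload) > MAX_PAYLOAD[tclass]:
        raise ValueError("payload exceeds class maximum")
    return struct.pack("!B", tclass) + payload
```

Whether intermediate equipment would actually honour such a field is exactly the ISP question raised below.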
We need an ISP/backbone guru to talk to on some of these issues - to suggest what would work today, and what could work (if the IQNets userbase is sufficiently large, which it will be), from their perspective. I met some Telstra backbone guys some years back - the wholesale side of our old Australian Telecom network that was ripped from under the nation for a few silver coins - but no longer have their contact details.

Example questions: Is ISP (and GT-1 etc) physical equipment possibly able to honour QoS for rare but important net/switch control packets? Would such packets need to be a maximum size, say 150 bytes (or even less), rather than MTU bytes? Is such GT*, or perhaps rather end-user-facing ISP equipment, able to handle users who might try to game such QoS offerings, by say rate limiting such packets on a per-end-user-node basis (to say 2 kbps or whatever)?

If we get a handle on these types of questions, IQNets might actually be very appealing to the GT* guys, to optimise everything with proactive, rather than reactive, traffic management. uTP goes a long way to handling BitTorrent, but there seems to be a lot of opportunity for full stack (right up to apps) improvement here... and pushing this up the stack makes sense if the lowest-level guys can provide. If they can't provide today, then tomorrow's equipment will provide if the picture is compelling and widespread.
Tsunami UDP Protocol
https://en.wikipedia.org/wiki/Tsunami_UDP_Protocol

The Tsunami UDP Protocol is a UDP-based protocol that was developed for high-speed file transfer over network paths that have a high bandwidth-delay product. Such protocols are needed because standard TCP does not perform well over paths with high bandwidth-delay products.[1] Tsunami effects a file transfer by chunking the file into numbered blocks of 32 KB. Communication between the client and server applications flows over a low-bandwidth TCP connection, and the bulk data is transferred over UDP.

Bandwidth-delay product
https://en.wikipedia.org/wiki/Bandwidth-delay_product

In data communications, the bandwidth-delay product is the product of a data link's capacity (in bits per second) and its round-trip delay time (in seconds).[1] The result, an amount of data measured in bits (or bytes), is equivalent to the maximum amount of data on the network circuit at any given time, i.e., data that has been transmitted but not yet acknowledged. The bandwidth-delay product was originally proposed[2] as a rule of thumb for sizing router buffers in conjunction with the congestion avoidance algorithm Random Early Detection (RED).

A network with a large bandwidth-delay product is commonly known as a long fat network (shortened to LFN). As defined in RFC 1072, a network is considered an LFN if its bandwidth-delay product is significantly larger than 10^5 bits (12,500 bytes). Ultra-high-speed local area networks (LANs) may fall into this category, where protocol tuning is critical for achieving peak throughput on account of their extremely high bandwidth, even though their delay is not great. While a connection at 1 Gbit/s with a round-trip time below 100 μs is not an LFN, a connection at 100 Gbit/s would need to stay below 1 μs RTT to not be considered an LFN.
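The definition and the two examples above reduce to one multiplication, worth writing down since the numbers come up repeatedly in this space:

```python
def bandwidth_delay_product(bits_per_second: float,
                            rtt_seconds: float) -> float:
    """Bits 'in flight' on the path: capacity x round-trip time."""
    return bits_per_second * rtt_seconds

# RFC 1072's LFN threshold: "significantly larger than 10^5 bits".
LFN_THRESHOLD_BITS = 1e5

# The two examples above, checked numerically:
#   1 Gbit/s at 100 us RTT -> 1e9   * 100e-6 = 1e5 bits (right at the line)
#   100 Gbit/s at 1 us RTT -> 100e9 * 1e-6   = 1e5 bits
# A TCP sender needs roughly one BDP of unacknowledged data in flight;
# at 1 Gbit/s and 100 ms RTT that is ~12.5 MB, far beyond the 65,535-byte
# window available without the window scale option.
```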
An important example of a system where the bandwidth-delay product is large is that of geostationary satellite connections, where end-to-end delivery time is very high and link throughput may also be high. The high end-to-end delivery time makes life difficult for stop-and-wait protocols and applications that assume rapid end-to-end response.

A high bandwidth-delay product is an important problem case in the design of protocols such as Transmission Control Protocol (TCP) in respect of TCP tuning, because the protocol can only achieve optimum throughput if a sender sends a sufficiently large quantity of data before being required to stop and wait for a confirming message from the receiver, acknowledging successful receipt of that data. If the quantity of data sent is insufficient compared with the bandwidth-delay product, then the link is not being kept busy and the protocol is operating below peak efficiency for the link. Protocols that hope to succeed in this respect need carefully designed self-monitoring, self-tuning algorithms.[3] The TCP window scale option may be used to solve this problem caused by insufficient window size, which is limited to 65,535 bytes without scaling.

[Examples omitted]

Tsunami - how it works, from https://sourceforge.net/projects/tsunami-udp/files/tsunami-udp%20docs/tsunam...

How It Works

Tsunami performs a file transfer by sectioning the file into numbered blocks of usually 32 kB size. Communication between the client and server applications flows over a low-bandwidth TCP connection. The bulk data is transferred over UDP. Most of the protocol intelligence is worked into the client code - the server simply sends out all blocks, and resends blocks that the client requests. The client specifies nearly all parameters of the transfer, such as the requested file name, target data rate, block size, target port, congestion behaviour, etc., and controls which blocks are requested from the server and when these requests are sent.
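One simple way a server can hold the client-specified target data rate mentioned above is a fixed pause after each block; Tsunami's details are described further on, and the function name and example figures here are ours:

```python
def inter_block_delay(block_size_bytes: int,
                      target_rate_bps: float) -> float:
    """Seconds to pause after each block so the average send rate
    matches the client-specified target rate."""
    return block_size_bytes * 8 / target_rate_bps

# 32 kB (32 * 1024 byte) blocks at a 100 Mbit/s target rate:
#   32 * 1024 * 8 / 100e6 ~= 2.6 ms pause between block sends
```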
The client starts file transfers with a get-file request. At the first stage of a transfer the client passes all its transfer parameters to the server inside the request. The server reports back the length of the requested file in bytes, so that the client can calculate how many blocks it needs to receive. Immediately after a get-file request the server begins to send out file blocks on its own, starting from the first block. It flags these blocks as "original blocks". The client can request blocks to be sent again; these blocks are flagged as "retransmitted blocks" by the server.

When sending out blocks, to throttle the transmission rate to the rate specified by the client, the server pauses for the correct amount of time after each block before sending the next. The client regularly sends error rate information to the server. The server uses this information to adjust the transmission rate; the server can gradually slow down the transmission when the client reports it is too slow in receiving and processing the UDP packets. This, too, is controlled by the client: in the settings passed from client to server at the start of a transfer, the client configures the server's speed of slowdown and recovery ("speed-up"), and specifies an acceptable packet loss percentage (for example 7%).

The client keeps track of which of the numbered blocks it has already received and which blocks are still pending. This is done by noting down the received blocks in a simple bitfield: when a block has been received, the bit corresponding to that block is set to '1'. If the block number of a received block is larger than the correct and expected consecutive block, the missing intervening blocks are queued up for a pending retransmission. The retransmission "queue" is a simple sorted list of the missing block numbers. The list size is allowed to grow dynamically, to a limit.
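The client-side bookkeeping just described - a bitfield of received blocks plus a list of missing block numbers queued when a gap appears - can be sketched as follows (class and method names are our own, not Tsunami's):

```python
class BlockTracker:
    """Sketch of Tsunami-style client bookkeeping."""

    def __init__(self, total_blocks: int):
        self.bitfield = bytearray((total_blocks + 7) // 8)
        self.next_expected = 0
        self.retransmit = []  # sorted list of missing block numbers

    def on_block(self, n: int) -> None:
        # Mark block n received in the bitfield.
        self.bitfield[n // 8] |= 1 << (n % 8)
        if n > self.next_expected:
            # A gap: blocks next_expected..n-1 are missing, queue them.
            self.retransmit.extend(range(self.next_expected, n))
        if n >= self.next_expected:
            self.next_expected = n + 1

    def received(self, n: int) -> bool:
        return bool(self.bitfield[n // 8] & (1 << (n % 8)))
```

A bitfield costs one bit per block, so even a very large file stays cheap to track: a 1 TB file in 32 kB blocks is about 31 million blocks, under 4 MB of bitfield.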
At regular intervals, the retransmission list is processed: blocks that have been received in the meantime are removed from the list, after which the list of genuinely missing blocks is sent to the server as a normal block transmission request. When adding a new pending retransmission to the client's list would make the list exceed a hard-coded limit, the entire transfer is reinitiated, starting at the first block in the list, i.e. the earliest block in the entire file that has not yet been successfully transferred. This is done by sending a special restart-transmission request to the server. When all blocks of the file have been successfully received, the client sends a terminate-transmission request to the server.
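The periodic pass over the retransmission list reduces to: prune, then either request the survivors or give up and restart. A standalone sketch (the function name, the tuple-shaped return values, and the limit are illustrative stand-ins for the actual protocol messages):

```python
def process_retransmit_list(missing, is_received, limit=1000):
    """One periodic pass over the retransmission list.

    missing     -- block numbers previously queued as missing
    is_received -- predicate: has this block arrived in the meantime?
    limit       -- hard-coded list-size limit triggering a restart
    """
    # Prune blocks that arrived (as retransmissions) since being queued.
    still_missing = sorted(b for b in missing if not is_received(b))
    if len(still_missing) > limit:
        # Too far behind: restart from the earliest missing block.
        return ("restart-transmission", still_missing[0])
    return ("retransmit-request", still_missing)
```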
Zenaan Harkness