Server-Sent Events: the alternative to WebSockets you should be using

zeynepaydogan zeynepaydogan at protonmail.com
Sat Feb 26 10:08:27 PST 2022


germano.dev

When developing real-time web applications, WebSockets might be the first thing that comes to your mind. However, Server-Sent Events (SSE) are a simpler alternative that is often superior.

Contents

- [Prologue](https://germano.dev/sse-websockets/#prologue)
- [WebSockets?](https://germano.dev/sse-websockets/#websockets)
- [What is wrong with WebSockets](https://germano.dev/sse-websockets/#what-is-wrong-with-websockets)
  - [Compression](https://germano.dev/sse-websockets/#compression)
  - [Multiplexing](https://germano.dev/sse-websockets/#multiplexing)
  - [Issues with proxies](https://germano.dev/sse-websockets/#proxies)
  - [Cross-Site WebSocket Hijacking](https://germano.dev/sse-websockets/#hijacking)
- [Server-Sent Events](https://germano.dev/sse-websockets/#sse)
- [Let’s write some code](https://germano.dev/sse-websockets/#code)
  - [The Reverse-Proxy](https://germano.dev/sse-websockets/#reverse-proxy)
  - [The Frontend](https://germano.dev/sse-websockets/#frontend)
  - [The Backend](https://germano.dev/sse-websockets/#backend)
- [Bonus: Cool SSE features](https://germano.dev/sse-websockets/#bonus)
- [Conclusion](https://germano.dev/sse-websockets/#conclusion)

Prologue

Recently I have been curious about the best way to implement a real-time web application. That is, an application containing one or more components which automatically update, in real-time, reacting to some external event. The most common example of such an application would be a messaging service, where we want every message to be immediately broadcast to everyone connected, without requiring any user interaction.

After some research I stumbled upon an [amazing talk by Martin Chaov](https://www.youtube.com/watch?v=n9mRjkQg3VE), which compares Server-Sent Events, WebSockets and Long Polling. The talk, which is also [available as a blog post](https://www.smashingmagazine.com/2018/02/sse-websockets-data-flow-http2/#comments-sse-websockets-data-flow-http2), is entertaining and very informative. I really recommend it. However, it is from 2018 and some small things have changed, so I decided to write this article.

WebSockets?

[WebSockets](https://tools.ietf.org/html/rfc6455) enable the creation of two-way, low-latency communication channels between the browser and a server.

This makes them ideal in certain scenarios, like multiplayer games, where the communication is two-way, in the sense that both the browser and server send messages on the channel all the time, and it is required that these messages be delivered with low latency.

In a First-Person Shooter, the browser could be continuously streaming the player’s position, while simultaneously receiving updates on the location of all the other players from the server. Moreover, we definitely want these messages to be delivered with as little overhead as possible, to avoid the game feeling sluggish.

This is the opposite of the traditional [request-response model](https://en.wikipedia.org/wiki/Request%E2%80%93response) of [HTTP](https://developer.mozilla.org/en-US/docs/Web/HTTP), where the browser is always the one initiating the communication, and each message has a significant overhead, due to establishing [TCP connections](https://en.wikipedia.org/wiki/Transmission_Control_Protocol) and [HTTP headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers).

However, many applications do not have requirements this strict. Even among real-time applications, the data flow is usually asymmetric: the server sends the majority of the messages, while the client mostly just listens and only once in a while sends some updates. For example, in a chat application a user may be connected to many rooms, each with tens or hundreds of participants. Thus, the volume of messages received far exceeds that of messages sent.

What is wrong with WebSockets

Two-way channels and low latency are extremely good features. Why bother looking further?

WebSockets have one major drawback: they do not work on top of HTTP, at least not fully. They require their own TCP connection. They use HTTP only to establish the connection, but then upgrade it to a standalone TCP connection on top of which the WebSocket protocol can be used.
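For illustration, this is roughly what the upgrade looks like on the wire (an abridged handshake; the key and accept values are the sample ones from RFC 6455):

GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=

After the 101 response, the connection stops speaking HTTP altogether and carries WebSocket frames instead.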

This may not seem like a big deal; however, it means that WebSockets cannot benefit from any HTTP feature. That is:

- No support for compression
- No support for HTTP/2 multiplexing
- Potential issues with proxies
- No protection from Cross-Site Hijacking

At least, this was the situation when the WebSocket protocol was first released. Nowadays, there are some complementary standards that try to improve things. Let’s take a closer look at the current situation.

Note: If you do not care about the details, feel free to skip the rest of this section and jump directly to [Server-Sent Events](https://germano.dev/sse-websockets/#sse) or the [demo](https://germano.dev/sse-websockets/#code).

Compression

On standard connections, [HTTP compression](https://en.wikipedia.org/wiki/HTTP_compression) is supported by every browser, and is super easy to enable server-side. Just flip a switch in your reverse-proxy of choice. With WebSockets the question is more complex, because there are no requests and responses to compress; instead, one needs to compress the individual WebSocket frames.
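To give a concrete idea of what that switch looks like (shown only for illustration; the demo below uses Caddy rather than nginx), turning on gzip compression for regular HTTP responses in nginx is a single directive:

# nginx.conf, inside the http (or server) block
gzip on;

With WebSocket frames, as we are about to see, there is no equivalent one-liner.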

[RFC 7692](https://tools.ietf.org/html/rfc7692), released in December 2015, tries to improve the situation by defining “Compression Extensions for WebSocket”. However, to the best of my knowledge, no popular reverse-proxy (e.g. nginx, caddy) implements this, making it impossible to have compression enabled transparently.

This means that if you want compression, it has to be implemented directly in your backend. Luckily, I was able to find some libraries supporting RFC 7692. For example, the [websockets](https://websockets.readthedocs.io/en/stable/extensions.html) and [wsproto](https://github.com/python-hyper/wsproto/) Python libraries, and the [ws](https://github.com/websockets/ws) library for Node.js.

However, the latter suggests not to use the feature:

> The extension is disabled by default on the server and enabled by default on the client. It adds a significant overhead in terms of performance and memory consumption so we suggest to enable it only if it is really needed.
>
> Note that Node.js has a variety of issues with high-performance compression, where increased concurrency, especially on Linux, can lead to catastrophic memory fragmentation and slow performance.

On the browser side, [Firefox supports WebSocket compression since version 37](https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Releases/37#networking). [Chrome supports it as well](https://chromestatus.com/feature/6555138000945152). However, apparently Safari and Edge do not.

I did not take the time to verify what the situation is in the mobile landscape.

Multiplexing

[HTTP/2](https://tools.ietf.org/html/rfc7540)introduced support for multiplexing, meaning that multiple request/response pairs to the same host no longer require separate TCP connections. Instead, they all share the same TCP connection, each operating on its own independent[HTTP/2 stream](https://tools.ietf.org/html/rfc7540#section-5).

This is, again, [supported by every browser](https://caniuse.com/http2) and is very easy to transparently enable on most reverse-proxies.

On the contrary, the WebSocket protocol has no support for multiplexing by default. Multiple WebSockets to the same host will each open their own separate TCP connection. If you want two separate WebSocket endpoints to share their underlying connection, you must implement the multiplexing yourself in your application’s code.

[RFC 8441](https://tools.ietf.org/html/rfc8441), released in September 2018, tries to fix this limitation by adding support for “Bootstrapping WebSockets with HTTP/2”. It has been [implemented in Firefox](https://bugzilla.mozilla.org/show_bug.cgi?id=1434137) [and Chrome](https://chromestatus.com/feature/6251293127475200). However, as far as I know, no major reverse-proxy implements it. Unfortunately, I could not find any implementation in Python or JavaScript either.

Issues with proxies

HTTP proxies without explicit support for WebSockets can prevent unencrypted WebSocket connections from working. This is because the proxy will not be able to parse the WebSocket frames and will close the connection.

However, WebSocket connections happening over HTTPS should be unaffected by this problem, since the frames will be encrypted and the proxy should just forward everything without closing the connection.

To learn more, see [“How HTML5 Web Sockets Interact With Proxy Servers”](https://www.infoq.com/articles/Web-Sockets-Proxy-Servers/) by Peter Lubbers.

Cross-Site WebSocket Hijacking

WebSocket connections are not protected by the same-origin policy. This makes them vulnerable to Cross-Site WebSocket Hijacking.

Therefore, WebSocket backends must check the correctness of the Origin header if they use any kind of client-cached authentication, such as [cookies](https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies) or [HTTP authentication](https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication).

I will not go into the details here, but consider this short example. Assume a Bitcoin Exchange uses WebSockets to provide its trading service. When you log in, the Exchange might set a cookie to keep your session active for a given period of time. Now, all an attacker has to do to steal your precious Bitcoins is make you visit a site under her control and simply open a WebSocket connection to the Exchange. The malicious connection is going to be automatically authenticated. That is, unless the Exchange checks the Origin header and blocks connections coming from unauthorized domains.
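To make the defence concrete, here is a minimal sketch of such an Origin check (my own illustration, not from the exchange example above), written with Starlette, the same framework used for the demo later in this post; the route, endpoint name and allowed origin are placeholders:

from starlette.applications import Starlette

app = Starlette()

ALLOWED_ORIGINS = {"https://exchange.example"}

@app.websocket_route("/trade")
async def trade_endpoint(ws):
    await ws.accept()
    # Drop connections whose Origin header is missing or not whitelisted.
    # (A production server would typically reject before accepting the handshake.)
    if ws.headers.get("origin") not in ALLOWED_ORIGINS:
        await ws.close(code=1008)  # 1008 = policy violation
        return
    await ws.send_text("authenticated")

Browsers always attach the Origin header to WebSocket handshakes, so a check like this is enough to stop the attack described above.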

I encourage you to check out the great article about [Cross-Site WebSocket Hijacking](https://christian-schneider.net/CrossSiteWebSocketHijacking.html#main) by Christian Schneider, to learn more.

Server-Sent Events

Now that we know a bit more about WebSockets, including their advantages and shortcomings, let us learn about Server-Sent Events and find out if they are a valid alternative.

[Server-Sent Events](https://html.spec.whatwg.org/#server-sent-events) enable the server to send low-latency push events to the client, at any time. They use a very simple protocol that is [part of the HTML Standard](https://html.spec.whatwg.org/#server-sent-events) and [supported by every browser](https://caniuse.com/eventsource).

Unlike WebSockets, Server-Sent Events flow only one way: from the server to the client. This makes them unsuitable for a very specific set of applications, that is, those that require a communication channel that is both two-way and low-latency, like real-time games. However, this trade-off is also their major advantage over WebSockets, because being one-way, Server-Sent Events work seamlessly on top of HTTP, without requiring a custom protocol. This gives them automatic access to all of HTTP’s features, such as compression or HTTP/2 multiplexing, making them a very convenient choice for the majority of real-time applications, where the bulk of the data is sent from the server, and where a little overhead in requests, due to HTTP headers, is acceptable.

The protocol is very simple. It uses the text/event-stream Content-Type and messages of the form:

data: First message

event: join
data: Second message. It has two
data: lines, a custom event type and an id.
id: 5

: comment. Can be used as keep-alive

data: Third message. I do not have more data.
data: Please retry later.
retry: 10

Each event is terminated by two consecutive newlines (\n\n), i.e. a blank line, and consists of several optional fields.

The data field, which can be repeated to denote multiple lines in the message, is unsurprisingly used for the content of the event.

The event field allows specifying custom event types which, as we will show in the next section, can be used to fire different event handlers on the client.

The other two fields, id and retry, are used to configure the behaviour of the automatic reconnection mechanism. This is one of the most interesting features of Server-Sent Events. It ensures that when the connection is dropped or closed by the server, the client will automatically try to reconnect, without any user intervention.

The retry field specifies the amount of time, in milliseconds, to wait before trying to reconnect. It can also be sent by the server, immediately before closing a client’s connection, to reduce its load when too many clients are connected.

The id field associates an identifier with the current event. When reconnecting, the client will transmit the last ID it has seen to the server, using the Last-Event-ID HTTP header. This allows the stream to be resumed from the correct point.

Finally, the server can stop the automatic reconnection mechanism altogether by returning an [HTTP 204 No Content](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204) response.
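As a rough illustration of these last two mechanisms (the values and wording are made up), an overloaded server could emit something like this right before closing the connection, asking clients to wait 30 seconds before reconnecting:

retry: 30000
data: busy, please come back later

and it could answer a later reconnection attempt that it wants to refuse for good with a bare:

HTTP/1.1 204 No Content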

Let’s write some code!

Let us now put into practice what we learned. In this section we will implement a simple service both with Server-Sent Events and WebSockets. This should enable us to compare the two technologies. We will find out how easy it is to get started with each one, and verify by hand the features discussed in the previous sections.

We are going to use Python for the backend, Caddy as a reverse-proxy and of course a couple of lines of JavaScript for the frontend.

To make our example as simple as possible, our backend is just going to consist of two endpoints, each streaming a unique sequence of random numbers. They are going to be reachable from /sse1 and /sse2 for Server-Sent Events, and from /ws1 and /ws2 for WebSockets. Our frontend, meanwhile, is going to consist of a single index.html file, with some JavaScript which will let us start and stop WebSocket and Server-Sent Events connections.

[The code of this example is available on GitHub](https://github.com/tyrion/sse-websockets-demo).

The Reverse-Proxy

Using a reverse-proxy, such as Caddy or nginx, is very useful, even in a small example such as this one. It gives us very easy access to many features that our backend of choice may lack.

More specifically, it allows us to easily serve static files and automatically compress HTTP responses; to provide support for HTTP/2, letting us benefit from multiplexing, even if our backend only supports HTTP/1; and finally to do load balancing.

I chose Caddy because it automatically manages HTTPS certificates for us, letting us skip a very boring task, especially for a quick experiment.

The basic configuration, which resides in a Caddyfile at the root of our project, looks something like this:

localhost

bind 127.0.0.1 ::1

root ./static
file_server browse

encode zstd gzip

This instructs Caddy to listen on the local interface on ports 80 and 443, enabling support for HTTPS and generating a self-signed certificate. It also enables compression and serving static files from the static directory.

As the last step we need to ask Caddy to proxy our backend services. Server-Sent Events is just regular HTTP, so nothing special here:

reverse_proxy /sse1 127.0.1.1:6001
reverse_proxy /sse2 127.0.1.1:6002

To proxy WebSockets our reverse-proxy needs to have explicit support for it. Luckily, Caddy can handle this without problems, even though the configuration is slightly more verbose:

@websockets {
    header Connection *Upgrade*
    header Upgrade    websocket
}

handle /ws1 {
    reverse_proxy @websockets 127.0.1.1:6001
}

handle /ws2 {
    reverse_proxy @websockets 127.0.1.1:6002
}

Finally, you should start Caddy with:

$ sudo caddy start

The Frontend

Let us start with the frontend, by comparing the JavaScript APIs of WebSockets and Server-Sent Events.

The [WebSocket JavaScript API](https://developer.mozilla.org/en-US/docs/Web/API/Websockets_API) is very simple to use. First, we need to create a new WebSocket object, passing the URL of the server. Here wss indicates that the connection is to happen over HTTPS. As mentioned above, it is really recommended to use HTTPS to avoid issues with proxies.

Then, we should listen to some of the possible events (i.e. open, message, close, error), by either setting the on$event property or by using addEventListener().

const ws = new WebSocket("wss://localhost/ws");
ws.onopen = e => console.log("WebSocket open");
ws.addEventListener("message", e => console.log(e.data));

The JavaScript API for Server-Sent Events is very similar. It requires us to create a new EventSource object, passing the URL of the server, and then allows us to subscribe to the events in the same way as before.

The main difference is that we can also subscribe to custom events.

const es = new EventSource("https://localhost/sse");
es.onopen = e => console.log("EventSource open");
es.addEventListener("message", e => console.log(e.data));

// Event listener for custom event
es.addEventListener("join", e => console.log(`${e.data} joined`));

We can now use all this freshly acquired knowledge about JS APIs to build our actual frontend.

To keep things as simple as possible, it is going to consist of only one index.html file, with a bunch of buttons that will let us start and stop our WebSockets and EventSources. Like so:

<button onclick="startWS(1)">Start WS1</button>
<button onclick="closeWS(1)">Close WS1</button>
<br>
<button onclick="startWS(2)">Start WS2</button>
<button onclick="closeWS(2)">Close WS2</button>

We want more than one WebSocket/EventSource so we can test if HTTP/2 multiplexing works and how many connections are open.

Now let us implement the two functions needed for those buttons to work:

const wss = [];

function startWS(i) {
    if (wss[i] !== undefined) return;
    const ws = wss[i] = new WebSocket("wss://localhost/ws" + i);
    ws.onopen = e => console.log("WS open");
    ws.onmessage = e => console.log(e.data);
    ws.onclose = e => closeWS(i);
}

function closeWS(i) {
    if (wss[i] !== undefined) {
        console.log("Closing websocket");
        wss[i].close();
        delete wss[i];
    }
}

The frontend code for Server-Sent Events is almost identical. The only difference is the onerror event handler: in case of error a message is logged, and the browser will automatically attempt to reconnect.

const ess = [];

function startES(i) {
    if (ess[i] !== undefined) return;
    const es = ess[i] = new EventSource("https://localhost/sse" + i);
    es.onopen = e => console.log("ES open");
    es.onerror = e => console.log("ES error", e);
    es.onmessage = e => console.log(e.data);
}

function closeES(i) {
    if (ess[i] !== undefined) {
        console.log("Closing EventSource");
        ess[i].close();
        delete ess[i];
    }
}

The Backend

To write our backend, we are going to use [Starlette](https://www.starlette.io/), a simple async web framework for Python, and [Uvicorn](https://www.uvicorn.org/) as the server. Moreover, to make things modular, we are going to separate the data-generating process from the implementation of the endpoints.

We want each of the two endpoints to generate a unique random sequence of numbers. To accomplish this we will use the stream id (i.e. 1 or 2) as part of the [random seed](https://en.wikipedia.org/wiki/Random_seed).

Ideally, we would also like our streams to be resumable. That is, a client should be able to resume the stream from the last message it received, in case the connection is dropped, instead of re-reading the whole sequence. To make this possible we will assign an ID to each message/event, and use it, together with the stream id, to initialize the random seed before each message is generated. In our case, the ID is just going to be a counter starting from 0.

With all that said, we are ready to write the get_data function, which is responsible for generating our random numbers:

import random

def get_data(stream_id: int, event_id: int) -> int:
    rnd = random.Random()
    rnd.seed(stream_id * event_id)
    return rnd.randrange(1000)

Let’s now write the actual endpoints.

Getting started with Starlette is very simple. We just need to initialize an app and then register some routes:

from starlette.applications import Starlette

app = Starlette()

To write a WebSocket service, both our web server and framework of choice must have explicit support for it. Luckily, Uvicorn and Starlette are up to the task, and writing a WebSocket endpoint is as convenient as writing a normal route.

This is all the code that we need:

import asyncio
import itertools

from websockets.exceptions import WebSocketException

@app.websocket_route("/ws{id:int}")
async def websocket_endpoint(ws):
    id = ws.path_params["id"]
    try:
        await ws.accept()
        for i in itertools.count():
            data = {"id": i, "msg": get_data(id, i)}
            await ws.send_json(data)
            await asyncio.sleep(1)
    except WebSocketException:
        print("client disconnected")

The code above will make sure our websocket_endpoint function is called every time a browser requests a path starting with /ws and followed by a number (e.g. /ws1, /ws2).

Then, for every matching request, it will wait for a WebSocket connection to be established and subsequently start an infinite loop sending random numbers, encoded as a JSON payload, every second.

For Server-Sent Events the code is very similar, except that no special framework support is needed. In this case, we register a route matching URLs starting with /sse and ending with a number (e.g. /sse1, /sse2). However, this time our endpoint just sets the appropriate headers and returns a StreamingResponse:

from starlette.responses import StreamingResponse

@app.route("/sse{id:int}")
async def sse_endpoint(req):
    return StreamingResponse(
        sse_generator(req),
        headers={
            "Content-type": "text/event-stream",
            "Cache-Control": "no-cache",
            "Connection": "keep-alive",
        },
    )

StreamingResponse is a utility class, provided by Starlette, which takes a generator and streams its output to the client, keeping the connection open.

The code of sse_generator is shown below, and is almost identical to the WebSocket endpoint, except that messages are encoded according to the Server-Sent Events protocol:

async def sse_generator(req):
    id = req.path_params["id"]
    for i in itertools.count():
        data = get_data(id, i)
        data = b"id: %d\ndata: %d\n\n" % (i, data)
        yield data
        await asyncio.sleep(1)

We are done!

Finally, assuming we put all our code in a file named server.py, we can start our backend endpoints using Uvicorn, like so:

$ uvicorn --host 127.0.1.1 --port 6001 server:app &
$ uvicorn --host 127.0.1.1 --port 6002 server:app &

Bonus: Cool SSE features

Ok, let us now conclude by showing how easy it is to implement all those nice features we bragged about earlier.

Compression can be enabled by changing just a few lines in our endpoint:

@@ -32,10 +33,12 @@ async def websocket_endpoint(ws):
 
 async def sse_generator(req):
     id = req.path_params["id"]
+    stream = zlib.compressobj()
     for i in itertools.count():
         data = get_data(id, i)
         data = b"id: %d\ndata: %d\n\n" % (i, data)
-        yield data
+        yield stream.compress(data)
+        yield stream.flush(zlib.Z_SYNC_FLUSH)
         await asyncio.sleep(1)
 
@@ -47,5 +50,6 @@ async def sse_endpoint(req):
             "Content-type": "text/event-stream",
             "Cache-Control": "no-cache",
             "Connection": "keep-alive",
+            "Content-Encoding": "deflate",
         },
     )

We can then verify that everything is working as expected by checking the DevTools:

[SSE Compression]

Multiplexing is enabled by default, since Caddy supports HTTP/2. We can confirm that the same connection is being used for all our SSE requests using the DevTools again:

[SSE Multiplexing]

Automatic reconnection on unexpected connection errors is as simple as reading the [Last-Event-ID](https://html.spec.whatwg.org/multipage/server-sent-events.html#last-event-id) header in our backend code:

<     for i in itertools.count():
---
>     start = int(req.headers.get("last-event-id", 0))
>     for i in itertools.count(start):

Nothing has to be changed in the front-end code.

We can test that it is working by starting the connection to one of the SSE endpoints and then killing uvicorn. The connection will drop, but the browser will automatically try to reconnect. Thus, if we re-start the server, we will see the stream resume from where it left off!

Notice how the stream resumes from message 243. Feels like magic 🔥

[Prova]
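You can reproduce the same behaviour from the command line with curl (a hypothetical invocation, assuming the demo backend is running behind Caddy as described above; -k accepts Caddy’s self-signed certificate, -N disables output buffering, and -H sets the Last-Event-ID header a reconnecting browser would send):

$ curl -k -N -H "Last-Event-ID: 242" https://localhost/sse1

The stream should pick up again around the id passed in the header, exactly like the browser does after reconnecting.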

Conclusionhttps://germano.dev/sse-websockets/#conclusion

WebSockets are a big machinery built on top of HTTP and TCP to provide a set of extremely specific features, that is, two-way and low-latency communication.

In order to do that they introduce a number of complications, which end up making both client and server implementations more complicated than solutions based entirely on HTTP.

These complications and limitations have been addressed by newer specs ([RFC 7692](https://tools.ietf.org/html/rfc7692), [RFC 8441](https://tools.ietf.org/html/rfc8441)), and will slowly end up being implemented in client and server libraries.

However, even in a world where WebSockets have no technical downsides, they will still be a fairly complex technology, involving a large amount of additional code both on clients and servers. Therefore, you should carefully consider whether the added complexity is worth it, or whether you can solve your problem with a much simpler solution, such as Server-Sent Events.

---------------------------------------------------------------

That’s all, folks! I hope you found this post interesting and maybe learned something new.

[Feel free to check out the code of the demo on GitHub](https://github.com/tyrion/sse-websockets-demo), if you want to experiment a bit with Server-Sent Events and WebSockets.

[I also encourage you to read the spec](https://html.spec.whatwg.org/#server-sent-events), because it is surprisingly clear and contains many examples.