Storm, Nugache lead dangerous new botnet barrage

Len Sassaman <rabbi at abditum.com>
Sun Jan 13 03:37:15 PST 2008


On Sun, 13 Jan 2008, Peter Gutmann wrote:

[Snip discussion about user cluelessness -- sure, I agree, though I
maintain hope that there is at least *some* attrition on that front.]

> >Adding in additional computational overhead to the operation of the botnet
> >diminishes its overall capacity, either in the number of nodes, or in the
> >amount of work you can steal from the nodes without losing hosts, or both.
>
> So you reduce it from 1M nodes to 900,000 nodes -- that's not much of a loss.
> The benefit you get from making it hard(er) to intercept and disrupt more than
> covers it.

Ah! There's a reason stronger than "because they can." Yes, I recognize
that the overhead is minimal, etc., etc. But without some compelling
reason (making it partitionable for resale, making it harder to disrupt,
etc.) I would not expect the botnet controllers to introduce another
possible point of failure -- one that increases not only CPU time but
also bandwidth consumption, and makes maintaining and upgrading the
code, and keeping instances compatible with one another, more difficult.

I'm not sure that this *does* make the botnet harder to disrupt,
though -- does it? Does anyone have example traffic dumps of these
encrypted payloads? It should be possible to fingerprint and block this
traffic, since it is bound to follow some distinctive pattern.
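
To sketch the sort of heuristic I have in mind (nothing here is derived
from an actual Storm or Nugache capture; the function names, the
1024-byte floor, and the 7.5-bit threshold are purely illustrative):
encrypted payloads have a near-uniform byte distribution, so a simple
Shannon-entropy check over captured payloads could flag candidate C&C
traffic on ports where you would normally expect plaintext:

    import math
    import os
    from collections import Counter

    def shannon_entropy(payload: bytes) -> float:
        """Empirical entropy in bits per byte; uniform random data
        approaches 8.0, plaintext protocols sit well below that."""
        if not payload:
            return 0.0
        total = len(payload)
        return -sum((n / total) * math.log2(n / total)
                    for n in Counter(payload).values())

    def looks_encrypted(payload: bytes, threshold: float = 7.5) -> bool:
        # Illustrative heuristic only: short payloads cannot reach a
        # high byte-entropy reading, so require a reasonable sample
        # first. The floor and threshold are guesses, not values tuned
        # against real botnet traffic.
        return len(payload) >= 1024 and shannon_entropy(payload) >= threshold

    # A repetitive plaintext protocol exchange reads as low entropy...
    plaintext = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 40
    assert not looks_encrypted(plaintext)

    # ...while a uniformly random blob (a stand-in for an encrypted
    # payload) trips the check.
    assert looks_encrypted(os.urandom(4096))

A naive check like this is trivial to evade by re-encoding or padding
the payload, of course -- hence the request for actual dumps.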



--Len.




