On 03.02.2015 20:38, rysiek wrote:
yet the source of randomness and crypto implementation are not explained properly. The wiki talks about public keys and PFS without explaining the relation between the two.
https://github.com/irungentoo/toxcore/blob/master/docs/updates/Crypto.md
ACK. So, the PDF I linked to goes a *bit* further (just a wee bit). Go have a look at the "Crypto" section:
https://jenkins.libtoxcore.so/job/Technical_Report/lastSuccessfulBuild/artifact/tox.pdf/
So, at least it's not a "we hold your keys -- FOR SAFETY!!1!" kind of snake oil.
Half of an "AOK" from me here. Just because it could be worse doesn't mean it couldn't be better. Thanks for the whitepaper, I'll have a look when I have the time.
*5. There is no threat model*
"/With the rise of government monitoring programs/" implies it's designed to be secure against state surveillance.
"Tox does not cloak IP addresses when communicating with other users"
The disclaimer also just states that "/Tox prevents message contents from being read or altered by third parties, or anyone else other than the intended recipient/", yet it doesn't even bother to evaluate the system against HSAs or MSAs.
True. One has to consider their own threat model and assess whether Tox is the answer. Tox does *not* provide anonymity; it at least *tries* to provide OTR-like features (encryption, integrity, etc.).
IIRC the DH signing keys are bound to the account ID. Appelbaum recommended in his 31c3 talk 'Reconstructing Narratives' that users rotate their OTR keys often and verify the key hash over an out-of-band channel.
I'm not sure it's convenient that users have to re-add their contacts every time the DH signing key needs to be refreshed. It's sort of a good thing that users are immediately using the public signing key (Tox ID), but the issue is that while the Tox ID doesn't have to be secret, it must be authentic: users unaware of this can be subjected to a MITM attack.
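To make the authenticity point concrete, here is a minimal Python sketch of the kind of out-of-band fingerprint check being described. The `fingerprint` helper and its display format are my own illustration, not anything Tox actually defines -- the idea is just that both parties hash the key material they *received* and compare the result over a channel the attacker can't rewrite (a phone call, in person):

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    """Short, human-comparable digest of a public key.

    Hypothetical helper for illustration only; Tox does not
    define this exact format.
    """
    digest = hashlib.sha256(public_key).hexdigest()
    # Take the first 32 hex chars and group them into 4-char
    # blocks so the fingerprint is easy to read aloud.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

# Alice computes the fingerprint of the key she *received* for Bob,
# Bob computes the fingerprint of his *actual* key, and they compare
# the two strings out-of-band. A MITM who substituted a key would
# produce a mismatch.
alice_view_of_bob = bytes.fromhex("aa" * 32)   # placeholder key bytes
bob_actual_key    = bytes.fromhex("aa" * 32)   # placeholder key bytes

assert fingerprint(alice_view_of_bob) == fingerprint(bob_actual_key)
```

A user who skips this comparison is exactly the user who "can be subjected to a MITM attack" -- the ID being public doesn't help if it isn't authentic.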
*7. Neglects general sad state of host security*
Well, yes, and my beef with Tox is also that the private keys do not require a passphrase to unlock. So that's a no-no in my book.
This only changes the type of attack: a keylogger has to be used alongside the private-key exfiltration tool.
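For context, here is a toy sketch of what passphrase protection of a stored private key looks like: derive a wrapping key from the passphrase with a slow KDF, so an attacker who only exfiltrates the key file still has to guess the passphrase (or, as noted above, add a keylogger). This is stdlib-only illustration -- the XOR step stands in for a real AEAD cipher such as NaCl's secretbox, and none of this is Tox's actual design:

```python
import hashlib
import os

def wrap_key(private_key: bytes, passphrase: str) -> tuple:
    """Derive a wrapping key from a passphrase and mask the private key.

    Illustrative only: a real design would encrypt with an AEAD
    cipher; the XOR here just demonstrates the KDF-gated access.
    """
    salt = os.urandom(16)
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                              200_000, dklen=len(private_key))
    wrapped = bytes(a ^ b for a, b in zip(private_key, kek))
    return salt, wrapped

def unwrap_key(salt: bytes, wrapped: bytes, passphrase: str) -> bytes:
    """Re-derive the wrapping key; wrong passphrase yields garbage."""
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                              200_000, dklen=len(wrapped))
    return bytes(a ^ b for a, b in zip(wrapped, kek))
```

The 200,000 PBKDF2 iterations are the point: exfiltrating `salt` and `wrapped` alone buys the attacker an offline guessing job rather than the key itself.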
Still, this doesn't look like snakeoil; rather like a good idea with not-so-stellar execution, which *might* get better.
Am I missing anything?
I would argue the current OTR/PGP/ZRTP implementations have a limited lifespan regardless of execution, given that the intelligence community is expanding end-point exploitation to mass-surveillance levels: the methodology is changing, not the scale:
https://www.youtube.com/watch?v=FScSpFZjFf0&t=37m35s
There's a lot of misconception about 0-days being expensive 'one-time hacks' that must be used only when necessary. How many anti-virus programs detect and report these? What percentage of users run some sort of IDS? How many users assume a sudden system crash is due to a malfunctioning exploit/payload? A 0-day is more like a master key for a given OS, with an average lifespan of 300 days (http://users.ece.cmu.edu/~tdumitra/public_documents/bilge12_zero_day.pdf)
Could we have a *separate* thread for it? I'm really interested in having a more in-depth discussion of Tox, and this could potentially hijack this thread. Much obliged.
I agree it should be separate. I tried to keep that section short; the intention was to provide contrast and show that each of these can be addressed simultaneously.