Well,

I didn't see THAT coming: https://whispersystems.org/blog/whatsapp/

-- Pozdr rysiek
This was honestly just about as exciting as the new EFF/Mozilla/Akamai/etc CA. Strong encryption with no UX degradation, for *so* many people, and the post certainly indicates it'll be going into the rest of WhatsApp's native applications.

I'm sure this fed improvements back into the TextSecure protocol, and that the PR will help WhisperSystems obtain more partnerships like this. A great day for the TS project.

On Tue, Nov 18, 2014 at 6:35 PM, rysiek <rysiek@hackerspace.pl> wrote:
Well,
I didn't see THAT coming: https://whispersystems.org/blog/whatsapp/
-- Pozdr rysiek
-- konklone.com | @konklone <https://twitter.com/konklone>
WhisperSystems designed good protocols, but I am afraid that Moxie was too anxious to release this news and hit the ENTER key too early :-)

I am quite skeptical about the actual security value of this press release. WhisperSystems reports end-to-end encryption, which means I encrypt my message with an encryption key that only you, or only the two of us, know.

1. How do we negotiate that key? Users are not involved; everything happens automatically, under the hood, between two WhatsApp clients. How? They negotiate the encryption keys through WhatsApp servers: is it my own key or the NSA's? Are they leaking the key to Facebook?

2. We need to authenticate the identity, e.g. via QR code, fingerprint, spelling it out loud on the phone, etc., which reduces usability, especially for the mass market.

3. Last but not least: even if we authenticated identities and keys, how can we be sure that the WhatsApp client is really using the authenticated keys and not NSA keys, maybe only on a whitelist of suspected mobile phone numbers? Above all, they provide a proprietary, closed source app.

The security model is flawed at the root level:

- If I subscribe to a security service - such as messaging - the service provider is untrusted by default. I need total transparency: every single component in the architecture should be auditable and open source.
- If the mobile app is closed source, I can trust only infrastructure that is under my full control, to be sure that no information can ever leak outside it.

My 2 cents

Marco

2014-11-19 7:25 GMT+01:00 Eric Mill <eric@konklone.com>:
This was honestly just about as exciting as the new EFF/Mozilla/Akamai/etc CA. Strong encryption with no UX degradation, for *so* many people, and the post certainly indicates it'll be going into the rest of WhatsApp's native applications.
I'm sure this fed into improvements into the TextSecure protocol, and that the PR will help WhisperSystems obtain more partnerships like this. A great day for the TS project.
On Tue, Nov 18, 2014 at 6:35 PM, rysiek <rysiek@hackerspace.pl> wrote:
Well,
I didn't see THAT coming: https://whispersystems.org/blog/whatsapp/
-- Pozdr rysiek
-- konklone.com | @konklone <https://twitter.com/konklone>
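[Marco's second point, out-of-band authentication of keys, can be made concrete. The following is a minimal illustrative sketch, not the actual TextSecure/WhatsApp fingerprint derivation: both parties hash the same public key locally and compare a short rendering over an independent channel (in person, a phone call, a QR code). The `key_fingerprint` helper and the example key bytes are hypothetical.]

```python
import hashlib

def key_fingerprint(public_key_bytes, group_size=4):
    """Derive a short, human-comparable fingerprint from a public key.

    Illustrative only: hash the raw key bytes and render the first
    20 bytes as hex, grouped so it can be read aloud or QR-encoded.
    """
    digest = hashlib.sha256(public_key_bytes).hexdigest()[:40]
    return " ".join(digest[i:i + group_size]
                    for i in range(0, len(digest), group_size))

# Both parties compute the fingerprint of Bob's key locally and
# compare over a channel the messaging server cannot control.
bob_key = b"\x04" + b"\x11" * 64  # placeholder key bytes
alice_view_of_bob = key_fingerprint(bob_key)
bob_view_of_bob = key_fingerprint(bob_key)
assert alice_view_of_bob == bob_view_of_bob  # match => no MITM on this key
```

[If the fingerprints differ, someone - the server, or an attacker in its position - substituted a key; which is exactly Marco's point 1.]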
Eh, easier than that. Keys generated end-to-end by the book, then code in the closed source spyware app just lifts them and posts them to FB.

Open protocols in closed apps are meaningless.

On 19 November 2014 08:46:50 GMT+00:00, Marco Pozzato <mpodroid@gmail.com> wrote:
WhisperSystems designed good protocols, but I am afraid that Moxie was too anxious to release this news and hit the ENTER key too early :-)
I am quite skeptical about the actual security value of this press release.
WhisperSystems reports end-to-end encryption, which means I encrypt my message with an encryption key that only you, or only the two of us, know.
1. How do we negotiate that key? Users are not involved; everything happens automatically, under the hood, between two WhatsApp clients. How? They negotiate the encryption keys through WhatsApp servers: is it my own key or the NSA's? Are they leaking the key to Facebook?

2. We need to authenticate the identity, e.g. via QR code, fingerprint, spelling it out loud on the phone, etc., which reduces usability, especially for the mass market.

3. Last but not least: even if we authenticated identities and keys, how can we be sure that the WhatsApp client is really using the authenticated keys and not NSA keys, maybe only on a whitelist of suspected mobile phone numbers? Above all, they provide a proprietary, closed source app.
The security model is flawed at the root level:

- If I subscribe to a security service - such as messaging - the service provider is untrusted by default. I need total transparency: every single component in the architecture should be auditable and open source.
- If the mobile app is closed source, I can trust only infrastructure that is under my full control, to be sure that no information can ever leak outside it.
My 2 cents
Marco
2014-11-19 7:25 GMT+01:00 Eric Mill <eric@konklone.com>:
This was honestly just about as exciting as the new EFF/Mozilla/Akamai/etc CA. Strong encryption with no UX degradation, for *so* many people, and the post certainly indicates it'll be going into the rest of WhatsApp's native applications.
I'm sure this fed into improvements into the TextSecure protocol, and that the PR will help WhisperSystems obtain more partnerships like this. A great day for the TS project.
On Tue, Nov 18, 2014 at 6:35 PM, rysiek <rysiek@hackerspace.pl> wrote:
Well,
I didn't see THAT coming: https://whispersystems.org/blog/whatsapp/
-- Pozdr rysiek
-- konklone.com | @konklone <https://twitter.com/konklone>
-- Sent from my Android device with K-9 Mail. Please excuse my brevity.
On Wed, Nov 19, 2014 at 09:18:10AM +0000, Cathal (Phone) wrote:
Eh, easier than that. Keys generated end-to-end by the book, then code in the closed source spyware app just lifts them and posts them to FB.
Open protocols in closed apps are meaningless.
Not meaningless, although of course open source would be preferable from a trustability standpoint. I've got the executable code for the proprietary WhatsApp apk installed on my phone, and can reverse engineer it if I so choose. (I'm running CM11 so extracting the APKs is fairly straightforward.) I also have automatic app updates turned off, so I know when the code is supposed to change.

Of course it would be Best (TM) if everyone could use a completely free operating system and had complete freedom to inspect all the code we depend on. But given the world we live in, 600M users with access to E2E encrypted messaging is better than 600M users without such access.

-andy
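[Andy's "I know when the code is supposed to change" can be enforced mechanically. A sketch, assuming an APK already pulled off the device (e.g. `adb shell pm path com.whatsapp` to find it, then `adb pull`); the pinned digest would be recorded when the app was last audited. `apk_digest` and `check_unchanged` are hypothetical helpers, not WhatsApp or Android tooling.]

```python
import hashlib

def apk_digest(path):
    """SHA-256 of an APK file, read in chunks to handle large apps."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def check_unchanged(path, pinned_digest):
    """Compare the installed APK against a digest pinned at audit time.

    With auto-updates off, a mismatch means the code changed without
    your knowledge - the scenario Andy wants to detect.
    """
    return apk_digest(path) == pinned_digest
```

[This only detects change; it says nothing about what the audited version does, which is Cathal's objection.]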
Not if that E2E protocol is entirely undermined. Which is the case here: trust is security. If 600M people think they have privacy and don't, that's a problem.

On 19 November 2014 21:35:33 GMT+00:00, Andy Isaacson <adi@hexapodia.org> wrote:
On Wed, Nov 19, 2014 at 09:18:10AM +0000, Cathal (Phone) wrote:
Eh, easier than that. Keys generated end-to-end by the book, then code in the closed source spyware app just lifts them and posts them to FB.
Open protocols in closed apps are meaningless.
Not meaningless, although of course open source would be preferable from a trustability standpoint. I've got the executable code for the proprietary WhatsApp apk installed on my phone, and can reverse engineer it if I so choose. (I'm running CM11 so extracting the APKs is fairly straightforward.) I also have automatic app updates turned off, so I know when the code is supposed to change.
Of course it would be Best (TM) if everyone could use a completely free operating system and had complete freedom to inspect all the code we depend on. But given the world we live in, 600M users with access to E2E encrypted messaging is better than 600M users without such access.
-andy
-- Sent from my Android device with K-9 Mail. Please excuse my brevity.
On Wed, Nov 19, 2014 at 10:58:56PM +0000, Cathal (Phone) wrote:
Not if that E2E protocol is entirely undermined. Which is the case here: trust is security. If 600M people think they have privacy and don't, that's a problem.
Have you heard of the phrase "harm reduction"? You can't solve a social/technical problem by insisting that only perfect solutions are acceptable. You must provide incremental solutions that can be part of a broad based move from the horrible place where we are now, towards a safer future.

I mean, *you* can do whatever you want, but users are going to ignore solutions that don't connect to where they are today. "Incremental steps with continuous improvement" is a model for advice that actually works in improving outcomes for real populations. "Burn everything to the ground and start over" is a model for advice that lets activists maintain ideological purity without dirtying their hands with actual people's actual problems.

-andy
On 11/20/2014 01:40 AM, Andy Isaacson wrote:
On Wed, Nov 19, 2014 at 10:58:56PM +0000, Cathal (Phone) wrote:
Not if that E2E protocol is entirely undermined. Which is the case here: trust is security. If 600M people think they have privacy and don't, that's a problem.
Have you heard of the phrase "harm reduction"? You can't solve a social/technical problem by insisting that only perfect solutions are acceptable. You must provide incremental solutions that can be part of a broad based move from the horrible place where we are now, towards a more safe future.
Unless it's not an improvement. False privacy promises are worse than no promises.
I mean, *you* can do whatever you want, but users are going to ignore solutions that don't connect to where they are today. "Incremental steps with continuous improvement" is a model for advice that actually works in improving outcomes for real populations. "Burn everything to the ground and start over" is a model for advice that lets activists maintain ideological purity without dirtying their hands with actual people's actual problems.
Both WhatsApp and TextSecure are centralized systems (which also happen to want full access to your contacts list). So it's actually a radical deterioration compared to the decentralized protocols we built all these years.

It's not about ideological purity. It's usually the "realists" who choose the easy path of building closed silos instead of getting their hands dirty and improving existing, working technologies.

--
Nikos Roussos
http://www.roussos.cc
On 11/19/14, Andy Isaacson <adi@hexapodia.org> wrote:
... Have you heard of the phrase "harm reduction"? You can't solve a social/technical problem by insisting that only perfect solutions are acceptable. You must provide incremental solutions that can be part of a broad based move from the horrible place where we are now, towards a more safe future.
i used to agree with this, and then i realized this is bad advice if incremental improvements are resulting in less security over time.

said another way, if by standing still you are falling behind quickly, then moving ahead at a walk just means you fail less soon than others.

everyone ends up in fail, however.
I mean, *you* can do whatever you want, but users are going to ignore solutions that don't connect to where they are today. "Incremental steps with continuous improvement" is a model for advice that actually works in improving outcomes for real populations. "Burn everything to the ground and start over" is a model for advice that lets activists maintain ideological purity without dirtying their hands with actual people's actual problems.
i think this is only true if the magnitude of broken and incompetent crushes you into inaction. if instead it spurs you to build, for years, on something of a solid base, then criticism must be deferred until that base is put to the test.

of course, my time spent writing rebuttal subtracted from the time best applied proving or denying in practice, arm chair theory inviting as it is...

best regards,
On Friday, 28 November 2014 at 01:07:36, coderman wrote:
On 11/19/14, Andy Isaacson <adi@hexapodia.org> wrote:
... Have you heard of the phrase "harm reduction"? You can't solve a social/technical problem by insisting that only perfect solutions are acceptable. You must provide incremental solutions that can be part of a broad based move from the horrible place where we are now, towards a more safe future.
i used to agree with this, and then i realized this is bad advice if incremental improvements are resulting in less security over time.
said another way, if by standing still you are falling behind quickly, then moving ahead at a walk just means you fail less soon than others.
everyone ends up in fail, however.
Still, I prefer to land in fail less soon; maybe in the meantime somebody *does* find a perfect solution I can switch to? For the time being it still makes sense to make sure I fail "the least soon" as I can.
I mean, *you* can do whatever you want, but users are going to ignore solutions that don't connect to where they are today. "Incremental steps with continuous improvement" is a model for advice that actually works in improving outcomes for real populations. "Burn everything to the ground and start over" is a model for advice that lets activists maintain ideological purity without dirtying their hands with actual people's actual problems.
i think this is only true if the magnitude of broken and incompetent crushes you into inaction.
if instead it spurs you to build, for years, on something of a solid base, then criticism must be deferred until that base is put to the test.
Well, "criticism" maybe, but then again, should you be busy building your perfect solution from the ground up, instead of criticising other people's temporary solutions today? ;)
of course, my time spent writing rebuttal subtracted from the time best applied proving or denying in practice, arm chair theory inviting as it is...
Ah, yes. There we are. :)

There will always be different approaches to such things. Sometimes it *does* make sense to wait for the perfect solution; sometimes it *does* make sense to use harm reduction techniques. The demarcation line is *not* clear and depends heavily on circumstances.

Hence, throwing any incomplete solution out just because it's incomplete, without looking at what a particular threat model is and whether maybe, just maybe, it can lower the threat level for people who would otherwise be completely exposed, is disingenuous.

-- Pozdr rysiek
On 11/28/14, rysiek <rysiek@hackerspace.pl> wrote:
There will always be different approaches to such things... ... The demarcation line is *not* clear and depends heavily on circumstances.
for my second act as devil's advocate, i declare that it is unreasonable to demand users recognize or understand a threat model. thus every system must be engineered to withstand the most difficult and well resourced threats, such that a solution covers all threat models sufficiently.

how can making it even harder, make it simpler? well, that's the trick, isn't it? :)

best regards,
On Fri, Nov 28, 2014 at 06:09:20PM -0800, coderman wrote:
for my second act as devil's advocate, i declare that it is unreasonable to demand users recognize or understand a threat model.
thus every system must be engineered to withstand the most difficult and well resourced threats, such that a solution covers all threat models sufficiently.
Agreed on the first point, disagree on the second. Any system that claims to be secure will attract uses that are inappropriate to its assumptions. Documentation is not enough to dissuade this.

A colleague and I, both interested in modern cryptographic systems, started to collaborate on a new project, using Pond. Months later, we realized that we had communicated useful information early on, over Pond exclusively, and the "social norm that communications are deleted after a few days" resulted in us losing important notes about the early days of our project.

Even though it was clearly documented, and I had simultaneously advocated Pond to other experimental users for exactly this feature, I didn't think through the consequences of this design feature for my use case. I didn't even realize that I *had* a use case, until much later.

For this scenario, it turns out we wanted a modern secure communication system more like Prate, https://github.com/kragen/prate . Except perhaps with email-sized-message semantics rather than chat semantics (or email in addition to chat?).

Generalizing from this specific example, you can find many other examples of a security system being used outside of its designed envelope. ssh is widely used for login to ephemeral hosts, reducing TOFU to single session duration. ssh is used with github as merely a bidirectionally-key-authenticated transport layer ("git clone git@github.com:kragen/prate") rather than for its original remote shell purpose. HTTPS x509 DV certificates retain the mostly vestigial X.500 (iirc?) Location/Organization/etc naming support, the CN/SAN fields being nearly the only operative ones. HTTPS virtually never uses the many varied client authentication mechanisms supported in TLS (client certificates, SRP, etc); instead Rails and the many other web-app frameworks implement user authentication over the top using passwords and cookies etc.

-andy
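[The retention norm that bit Andy can be shown in miniature. Pond's actual expiry mechanics differ; this toy `ExpiringStore` is a hypothetical sketch of the *policy* only: messages older than a TTL silently disappear, which is a feature for a deniable chat and a data-loss bug for project notes.]

```python
import time

class ExpiringStore:
    """Sketch of a Pond-style retention norm: messages older than
    ttl_seconds are silently dropped on read. Illustration only."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._messages = []  # list of (timestamp, text)

    def add(self, text, now=None):
        self._messages.append((time.time() if now is None else now, text))

    def read(self, now=None):
        now = time.time() if now is None else now
        # Expiry is enforced here: anything past the TTL disappears.
        self._messages = [(t, m) for t, m in self._messages
                          if now - t < self.ttl]
        return [m for _, m in self._messages]

WEEK = 7 * 24 * 3600
store = ExpiringStore(ttl_seconds=WEEK)
store.add("early design notes", now=0)
assert store.read(now=3600) == ["early design notes"]  # still there an hour in
assert store.read(now=WEEK + 1) == []                  # gone after the window, by design
```

[The failure Andy describes is exactly the second assertion firing in real life, months after anyone remembered the TTL existed.]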
On 11/28/14, Andy Isaacson <adi@hexapodia.org> wrote:
... A colleague and I, both interested in modern cryptographic systems, started to collaborate on a new project, using Pond. Months later, we realized that we had communicated useful information early on, over Pond exclusively, and the "social norm that communications are deleted after a few days" resulted in us losing important notes about the early days of our project.
Even though it was clearly documented and I had simultaneously advocated Pond to other experimental users for exactly this feature, I didn't think through the consequences of this design feature for my use case. I didn't even realize that I *had* a use case, until much later.
an interesting anecdote. friends and i had previously moved to configurations with explicitly no logging (a change from the defaults, since OTR in most clients would log to disk by default); the change to pond was no different, as prior expectations already assumed no persistence...
For this scenario, it turns out we wanted a modern secure communication system more like Prate, https://github.com/kragen/prate .
we ended up on random etherpads on a trusted host. (e.g. one of our own).
Generalizing from this specific example, you can find many other examples of a security system being used outside of its designed envelope.
very true; evokes Gibson: "The street finds its own uses for things."

(and in the example above, the URI itself is the authenticator for the random pad...)

best regards,
participants (7)

- Andy Isaacson
- Cathal (Phone)
- coderman
- Eric Mill
- Marco Pozzato
- Nikos Roussos
- rysiek