cypherpunks-legacy
PET 2005 Submission deadline approaching (7 Feb) and PET Award (21 Feb)
by George Danezis 06 Jul '18
Dear Colleagues,
The submission deadline for the Privacy Enhancing Technologies workshop (PET
2005) is 7 February 2005. The latest CfP is appended.
We also solicit nominations for the "Award for Outstanding Research in Privacy
Enhancing Technologies" by February 21. For more information about suggesting
a paper for the award:
http://petworkshop.org/award/
Yours,
George Danezis
5th Workshop on Privacy Enhancing Technologies
Dubrovnik, Croatia May 30 - June 1, 2005
C A L L F O R P A P E R S
http://petworkshop.org/2005/
Important Dates:
Paper submission: February 7, 2005
Notification of acceptance: April 4, 2005
Camera-ready copy for preproceedings: May 6, 2005
Camera-ready copy for proceedings: July 1, 2005
Award for Outstanding Research in Privacy Enhancing Technologies
Nomination period: March 4, 2004 through March 7, 2005
Nomination instructions: http://petworkshop.org/award/
-----------------------------------------------------------------------
Privacy and anonymity are increasingly important in the online world.
Corporations, governments, and other organizations are realizing and
exploiting their power to track users and their behavior, and restrict
the ability to publish or retrieve documents. Approaches to
protecting individuals and groups, as well as companies and governments,
from such profiling and censorship include decentralization,
encryption, distributed trust, and automated policy disclosure.
This 5th workshop addresses the design and realization of such privacy
and anti-censorship services for the Internet and other communication
networks by bringing together anonymity and privacy experts from
around the world to discuss recent advances and new perspectives.
The workshop seeks submissions from academia and industry presenting
novel research on all theoretical and practical aspects of privacy
technologies, as well as experimental studies of fielded systems. We
encourage submissions from other communities such as law and business
that present their perspectives on technological issues. As in past
years, we will publish proceedings after the workshop in the Springer
Lecture Notes in Computer Science series.
Suggested topics include but are not restricted to:
* Anonymous communications and publishing systems
* Censorship resistance
* Pseudonyms, identity management, linkability, and reputation
* Data protection technologies
* Location privacy
* Policy, law, and human rights relating to privacy
* Privacy and anonymity in peer-to-peer architectures
* Economics of privacy
* Fielded systems and techniques for enhancing privacy in existing systems
* Protocols that preserve anonymity/privacy
* Privacy-enhanced access control or authentication/certification
* Privacy threat models
* Models for anonymity and unobservability
* Attacks on anonymity systems
* Traffic analysis
* Profiling and data mining
* Privacy vulnerabilities and their impact on phishing and identity theft
* Deployment models for privacy infrastructures
* Novel relations of payment mechanisms and anonymity
* Usability issues and user interfaces for PETs
* Reliability, robustness and abuse prevention in privacy systems
Stipends to attend the workshop will be made available, on the basis
of need, to cover travel expenses, hotel, or conference fees. You do
not need to submit a technical paper and you do not need to be a
student to apply for a stipend. For more information, see
http://petworkshop.org/2005/stipends.html
General Chair:
Damir Gojmerac (damir.gojmerac(a)fina.hr) Fina Corporation, Croatia
Program Chairs:
George Danezis (George.Danezis(a)cl.cam.ac.uk) University of Cambridge, UK
David Martin (dm(a)cs.uml.edu) University of Massachusetts at Lowell, USA
Program Committee:
Martin Abadi, University of California at Santa Cruz, USA
Alessandro Acquisti, Heinz School, Carnegie Mellon University, USA
Caspar Bowden, Microsoft EMEA, UK
Jean Camp, Indiana University at Bloomington, USA
Richard Clayton, University of Cambridge, UK
Lorrie Cranor, School of Computer Science, Carnegie Mellon University, USA
Roger Dingledine, The Free Haven Project, USA
Hannes Federrath, University of Regensburg, Germany
Ian Goldberg, Zero Knowledge Systems, Canada
Philippe Golle, Palo Alto Research Center, USA
Marit Hansen, Independent Centre for Privacy Protection Schleswig-Holstein,
Germany
Markus Jakobsson, Indiana University at Bloomington, USA
Dogan Kesdogan, Rheinisch-Westfaelische Technische Hochschule Aachen, Germany
Brian Levine, University of Massachusetts at Amherst, USA
Andreas Pfitzmann, Dresden University of Technology, Germany
Matthias Schunter, IBM Zurich Research Lab, Switzerland
Andrei Serjantov, The Free Haven Project, UK
Paul Syverson, Naval Research Lab, USA
Latanya Sweeney, Carnegie Mellon University, USA
Matthew Wright, University of Texas at Arlington, USA
Papers should be at most 15 pages excluding the bibliography and
well-marked appendices (using an 11-point font), and at most 20 pages
total. Submission of shorter papers (from around 4 pages) is strongly
encouraged whenever appropriate. Papers must conform to the Springer
LNCS style. Follow the "Information for Authors" link at
http://www.springer.de/comp/lncs/authors.html.
Reviewers of submitted papers are not required to read the appendices
and the paper should be intelligible without them. The paper should
start with the title, names of authors and an abstract. The
introduction should give some background and summarize the
contributions of the paper at a level appropriate for a non-specialist
reader. A preliminary version of the proceedings will be made
available to workshop participants. Final versions are not due until
after the workshop, giving the authors the opportunity to revise their
papers based on discussions during the meeting.
Submit your papers in Postscript or PDF format. To submit a paper,
compose a plain text email to pet2005-submissions(a)petworkshop.org
containing the title and abstract of the paper, the authors' names,
email and postal addresses, phone and fax numbers, and identification
of the contact author (to whom we will address all subsequent
correspondence). Attach your submission to this email and send it.
By submitting a paper, you agree that if it is accepted, you will sign
a paper distribution agreement allowing for publication, and also that
an author of the paper will register for the workshop and present the
paper there. Our current working agreement with Springer is that
authors will retain copyright on their own works while assigning an
exclusive 3-year distribution license to Springer. Authors may still
post their papers on their own Web sites. See
http://petworkshop.org/2004/paper-dist-agreement-5-04.html for the 2004
version of this agreement.
Submitted papers must not substantially overlap with papers that have
been published or that are simultaneously submitted to a journal or a
conference with proceedings.
Paper submissions must be received by February 7. We acknowledge all
submissions manually by email. If you do not receive an
acknowledgment within a few days (or one day, if you are submitting
right at the deadline), then contact the program committee chairs
directly to resolve the problem. Notification of acceptance or
rejection will be sent to authors no later than April 4 and authors
will have the opportunity to revise for the preproceedings version by
May 6.
We also invite proposals of up to 2 pages for panel discussions or
other relevant presentations. In your proposal, (1) describe the
nature of the presentation and why it is appropriate to the workshop,
(2) suggest a duration for the presentation (ideally between 45 and 90
minutes), (3) give brief descriptions of the presenters, and (4)
indicate which presenters have confirmed their availability for the
presentation if it is scheduled. Otherwise, submit your proposal by
email as described above, including the designation of a contact
author. The program committee will consider presentation proposals
along with other workshop events, and will respond by the paper
decision date with an indication of its interest in scheduling the
event. The proceedings will contain 1-page abstracts of the
presentations that take place at the workshop. Each contact author
for an accepted panel proposal must prepare and submit this abstract
in the Springer LNCS style by the "Camera-ready copy for
preproceedings" deadline date.
_______________________________________________
NymIP-res-group mailing list
NymIP-res-group(a)nymip.org
http://www.nymip.org/mailman/listinfo/nymip-res-group
--- end forwarded text
--
-----------------
R. A. Hettinga <mailto: rah(a)ibuc.com>
The Internet Bearer Underwriting Corporation <http://www.ibuc.com/>
44 Farquhar Street, Boston, MA 02131 USA
"... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience." -- Edward Gibbon, 'Decline and Fall of the Roman Empire'
CRYPTO-GRAM
July 15, 2009
by Bruce Schneier
Chief Security Technology Officer, BT
schneier(a)schneier.com
http://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit
<http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at
<http://www.schneier.com/crypto-gram-0907.html>. These same essays
appear in the "Schneier on Security" blog:
<http://www.schneier.com/blog>. An RSS feed is available.
** *** ***** ******* *********** *************
In this issue:
Imagining Threats
Security, Group Size, and the Human Brain
North Korean Cyberattacks
Why People Don't Understand Risks
Fraud on eBay
News
Authenticating Paperwork
The Pros and Cons of Password Masking
The "Hidden Cost" of Privacy
Fixing Airport Security
Schneier News
Homomorphic Encryption Breakthrough
New Attack on AES
MD6 Withdrawn from SHA-3 Competition
Ever Better Cryptanalytic Results Against SHA-1
Comments from Readers
** *** ***** ******* *********** *************
Imagining Threats
A couple of years ago, the Department of Homeland Security hired a bunch
of science fiction writers to come in for a day and think of ways
terrorists could attack America. If our inability to prevent 9/11 marked
a failure of imagination, as some said at the time, then who better than
science fiction writers to inject a little imagination into
counterterrorism planning?
I discounted the exercise at the time, calling it "embarrassing." I
never thought that 9/11 was a failure of imagination. I thought, and
still think, that 9/11 was primarily a confluence of three things: the
dual failure of centralized coordination and local control within the
FBI, and some lucky breaks on the part of the attackers. More
imagination leads to more movie-plot threats -- which contributes to
overall fear and overestimation of the risks. And that doesn't help keep
us safe at all.
Recently, I read a paper by Magne Jorgensen that provides some insight
into why this is so. Titled More Risk Analysis Can Lead to Increased
Over-Optimism and Over-Confidence, the paper isn't about terrorism at
all. It's about software projects.
Most software development project plans are overly optimistic, and most
planners are overconfident about their overoptimistic plans. Jorgensen
studied how risk analysis affected this. He conducted four separate
experiments on software engineers, and concluded (though there are lots
of caveats in the paper, and more research needs to be done) that
performing more risk analysis can make engineers more overoptimistic
instead of more realistic.
Potential explanations all come from behavioral economics: cognitive
biases that affect how we think and make decisions. (I've written about
some of these biases and how they affect security decisions, and there's
a great book on the topic as well.)
First, there's a control bias. We tend to underestimate risks in
situations where we are in control, and overestimate risks in situations
where we are not in control. Driving versus flying is a common example.
This bias becomes stronger with familiarity, involvement and a desire to
experience control, all of which increase with increased risk analysis.
So the more risk analysis, the greater the control bias, and the greater
the underestimation of risk.
The second explanation is the availability heuristic. Basically, we
judge the importance or likelihood of something happening by the ease of
bringing instances of that thing to mind. So we tend to overestimate the
probability of a rare risk that is seen in a news headline, because it
is so easy to imagine. Likewise, we underestimate the probability of
things occurring that don't happen to be in the news.
A corollary of this phenomenon is that, if we're asked to think about a
series of things, we overestimate the probability of the last thing
thought about because it's more easily remembered.
According to Jorgensen's reasoning, people tend to do software risk
analysis by thinking of the severe risks first, and then the more
manageable risks. So the more risk analysis that's done, the less severe
the last risk imagined, and thus the greater the underestimation of the
total risk.
The third explanation is similar: the peak-end rule. When thinking about
a total experience, people tend to place too much weight on the last
part of the experience. In one experiment, people had to hold their
hands under cold water for one minute. Then, they had to hold their
hands under cold water for one minute again, then keep their hands in
the water for an additional 30 seconds while the temperature was
gradually raised. When asked about it afterwards, most people preferred
the second option to the first, even though the second had more total
discomfort. (An intrusive medical device was redesigned along these
lines, resulting in a longer period of discomfort but a relatively
comfortable final few seconds. People liked it a lot better.) This
means, like the second explanation, that the least severe last risk
imagined gets greater weight than it deserves.
Fascinating stuff. But the biases produce the reverse effect when it
comes to movie-plot threats. The more you think about far-fetched
terrorism possibilities, the more outlandish and scary they become, and
the less control you think you have. This causes us to overestimate the
risks.
Think about this in the context of terrorism. If you're asked to come up
with threats, you'll think of the significant ones first. If you're
pushed to find more, if you hire science-fiction writers to dream them
up, you'll quickly get into the low-probability movie plot threats. But
since they're the last ones generated, they're more available. (They're
also more vivid -- science fiction writers are good at that -- which
also leads us to overestimate their probability.) They also suggest
we're even less in control of the situation than we believed. Spending
too much time imagining disaster scenarios leads people to overestimate
the risks of disaster.
I'm sure there's also an anchoring effect in operation. This is another
cognitive bias, where people's numerical estimates of things are
affected by numbers they've most recently thought about, even random
ones. People who are given a list of three risks will think the total
number of risks is lower than people who are given a list of 12 risks.
So if the science fiction writers come up with 137 risks, people will
believe that the number of risks is higher than they otherwise would --
even if they recognize the 137 number is absurd.
Jorgensen does not believe risk analysis is useless in software
projects, and I don't believe scenario brainstorming is useless in
counterterrorism. Both can lead to new insights and, as a result, a more
intelligent analysis of both specific risks and general risk. But an
over-reliance on either can be detrimental.
Last month, at the 2009 Homeland Security Science & Technology
Stakeholders Conference in Washington D.C., science fiction writers
helped the attendees think differently about security. This seems like a
far better use of their talents than imagining some of the zillions of
ways terrorists can attack America.
This essay originally appeared on Wired.com.
http://www.wired.com/politics/security/commentary/securitymatters/2009/06/s…
or http://tinyurl.com/nm6tj7
A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/06/imagining_threa.html
** *** ***** ******* *********** *************
Security, Group Size, and the Human Brain
If the size of your company grows past 150 people, it's time to get name
badges. It's not that larger groups are somehow less secure; it's just
that 150 is the cognitive limit to the number of people a human brain
can maintain a coherent social relationship with.
Primatologist Robin Dunbar derived this number by comparing neocortex --
the "thinking" part of the mammalian brain -- volume with the size of
primate social groups. By analyzing data from 38 primate genera and
extrapolating to the human neocortex size, he predicted a human "mean
group size" of roughly 150.
This number appears regularly in human society; it's the estimated size
of a Neolithic farming village, the size at which Hittite settlements
split, and the basic unit in professional armies from Roman times to the
present day. Larger group sizes aren't as stable because their members
don't know each other well enough. Instead of thinking of the members as
people, we think of them as groups of people. For such groups to
function well, they need externally imposed structure, such as name badges.
Of course, badges aren't the only way to determine in-group/out-group
status. Other markers include insignia, uniforms, and secret handshakes.
They have different security properties and some make more sense than
others at different levels of technology, but once a group reaches 150
people, it has to do something.
More generally, there are several layers of natural human group size
that increase with a ratio of approximately three: 5, 15, 50, 150, 500,
and 1500 -- although, really, the numbers aren't as precise as all that,
and groups that are less focused on survival tend to be smaller. The
layers relate to both the intensity and intimacy of relationship and the
frequency of contact.
The smallest, three to five, is a "clique": the number of people from
whom you would seek help in times of severe emotional distress. The
12-to-20-person group is the "sympathy group": people with whom you have
special ties. After that, 30 to 50 is the typical size of
hunter-gatherer overnight camps, generally drawn from the same pool of
150 people. No matter what size company you work for, there are only
about 150 people you consider to be "co-workers." (In small companies,
Alice and Bob handle accounting. In larger companies, it's the
accounting department -- and maybe you know someone there personally.)
The 500-person group is the "megaband," and the 1,500-person group is
the "tribe." Fifteen hundred is roughly the number of faces we can put
names to, and the typical size of a hunter-gatherer society.
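As a quick sanity check on the "ratio of approximately three" claim, the cited layer sizes can be compared directly; this is just an illustration of the arithmetic in the essay, not part of Dunbar's method:

```python
# Group-size layers cited above: 5, 15, 50, 150, 500, 1500.
layers = [5, 15, 50, 150, 500, 1500]

# Ratio between each layer and the one before it.
ratios = [later / earlier for earlier, later in zip(layers, layers[1:])]

# Each step is a factor of 3.0 or ~3.33 -- "approximately three".
print([round(r, 2) for r in ratios])  # [3.0, 3.33, 3.0, 3.33, 3.0]
```

The steps alternate between exactly 3 and about 3.33, which is why the essay hedges the ratio as "approximately three" rather than an exact constant.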
These numbers are reflected in military organization throughout history:
squads of 10 to 15 organized into platoons of three to four squads,
organized into companies of three to four platoons, organized into
battalions of three to four companies, organized into regiments of three
to four battalions, organized into divisions of two to three regiments,
and organized into corps of two to three divisions.
Coherence can become a real problem once organizations get above about
150 in size. So as group sizes grow across these boundaries, they have
more externally imposed infrastructure -- and more formalized security
systems. In intimate groups, pretty much all security is ad hoc.
Companies smaller than 150 don't bother with name badges; companies
greater than 500 hire a guard to sit in the lobby and check badges. The
military have had centuries of experience with this under rather trying
circumstances, but even there the real commitment and bonding invariably
occurs at the company level. Above that you need to have rank imposed by
discipline.
The whole brain-size comparison might be bunk, and a lot of evolutionary
psychologists disagree with it. But certainly security systems become
more formalized as groups grow larger and their members less known to
each other. When do more formal dispute resolution systems arise: town
elders, magistrates, judges? At what size boundary are formal
authentication schemes required? Small companies can get by without the
internal forms, memos, and procedures that large companies require; when
do those tend to appear? How does punishment formalize as group size
increases? And how do all these things affect group coherence? People act
differently on social networking sites like Facebook when their list of
"friends" grows larger and less intimate. Local merchants sometimes let
known regulars run up tabs. I lend books to friends with much less
formality than a public library. What examples have you seen?
An edited version of this essay, without links, appeared in the
July/August 2009 issue of IEEE Security & Privacy.
A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/07/security_group.html
** *** ***** ******* *********** *************
North Korean Cyberattacks
To hear the media tell it, the United States suffered a major
cyberattack last week. Stories were everywhere. "Cyber Blitz hits U.S.,
Korea" was the headline in Thursday's Wall Street Journal. North Korea
was blamed.
Where were you when North Korea attacked America? Did you feel the fury
of North Korea's armies? Were you fearful for your country? Or did
your resolve strengthen, knowing that we would defend our homeland
bravely and valiantly?
My guess is that you didn't even notice, that -- if you didn't open a
newspaper or read a news website -- you had no idea anything was
happening. Sure, a few government websites were knocked out, but that's
not alarming or even uncommon. Other government websites were attacked
but defended themselves, the sort of thing that happens all the time. If
this is what an international cyberattack looks like, it hardly seems
worth worrying about at all.
Politically motivated cyber attacks are nothing new. We've seen UK vs.
Ireland. Israel vs. the Arab states. Russia vs. several former Soviet
Republics. India vs. Pakistan, especially after the nuclear bomb tests
in 1998. China vs. the United States, especially in 2001 when a U.S. spy
plane collided with a Chinese fighter jet. And so on and so on.
The big one happened in 2007, when the government of Estonia was
attacked in cyberspace following a diplomatic incident with Russia about
the relocation of a Soviet World War II memorial. The networks of many
Estonian organizations, including the Estonian parliament, banks,
ministries, newspapers and broadcasters, were attacked and -- in many
cases -- shut down. Estonia was quick to blame Russia, which was
equally quick to deny any involvement.
It was hyped as the first cyberwar, but after two years there is still
no evidence that the Russian government was involved. Though Russian
hackers were indisputably the major instigators of the attack, the only
individuals positively identified have been young ethnic Russians living
inside Estonia, who were angry over the statue incident.
Poke at any of these international incidents, and what you find are kids
playing politics. Last Wednesday, South Korea's National Intelligence
Service admitted that it didn't actually know that North Korea was
behind the attacks: "North Korea or North Korean sympathizers in the
South" was what it said. Once again, it'll be kids playing politics.
This isn't to say that cyberattacks by governments aren't an issue, or
that cyberwar is something to be ignored. The constant attacks by
Chinese nationals against U.S. networks may not be government-sponsored,
but it's pretty clear that they're tacitly government-approved.
Criminals, from lone hackers to organized crime syndicates, attack
networks all the time. And war expands to fill every possible theater:
land, sea, air, space, and now cyberspace. But cyberterrorism is nothing
more than a media invention designed to scare people. And for there to
be a cyberwar, there first needs to be a war.
Israel is currently considering attacking Iran in cyberspace, for
example. If it tries, it'll discover that attacking computer networks
is an inconvenience to the nuclear facilities it's targeting, but
doesn't begin to substitute for bombing them.
In May, President Obama gave a major speech on cybersecurity. He was
right when he said that cybersecurity is a national security issue, and
that the government needs to step up and do more to prevent
cyberattacks. But he couldn't resist hyping the threat with scare
stories: "In one of the most serious cyber incidents to date against our
military networks, several thousand computers were infected last year by
malicious software -- malware," he said. What he didn't add was that
those infections occurred because the Air Force couldn't be bothered to
keep its patches up to date.
This is the face of cyberwar: easily preventable attacks that, even when
they succeed, only a few people notice. Even this current incident is
turning out to be a sloppily modified five-year-old worm that no modern
network should still be vulnerable to.
Securing our networks doesn't require some secret advanced NSA
technology. It's the boring network security administration stuff we
already know how to do: keep your patches up to date, install good
anti-malware software, correctly configure your firewalls and
intrusion-detection systems, monitor your networks. And while some
government and corporate networks do a pretty good job at this, others
fail again and again.
Enough of the hype and the bluster. The news isn't the attacks, but that
some networks had security lousy enough to be vulnerable to them.
This essay originally appeared on the Minnesota Public Radio website.
http://minnesota.publicradio.org/display/web/2009/07/10/schneier/
A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/07/north_korean_cy.html
** *** ***** ******* *********** *************
Why People Don't Understand Risks
Last week's Minneapolis Star Tribune had the front-page headline:
"Co-sleeping kills about 20 infants each year." The only problem is
that there's no additional information with which to make sense of the
statistic.
How many infants don't die each year? How many infants die each year in
separate beds? Is the death rate for co-sleepers greater or less than
the death rate for separate-bed sleepers? Without this information,
it's impossible to know whether this statistic is good or bad.
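A toy calculation makes the point. Every number below except the 20 deaths is invented purely for illustration; the real denominators are exactly what the headline failed to report:

```python
# All non-headline figures are hypothetical, chosen only to show why a raw
# death count needs a denominator before it says anything about risk.
co_sleep_deaths, co_sleep_infants = 20, 40_000   # assumed population
crib_deaths, crib_infants = 15, 30_000           # assumed population

co_sleep_rate = co_sleep_deaths / co_sleep_infants   # 0.0005
crib_rate = crib_deaths / crib_infants               # 0.0005

# Identical per-infant rates: under these made-up numbers, "20 deaths"
# would imply no extra risk from co-sleeping at all.
print(co_sleep_rate, crib_rate)
```

With a different assumed denominator the same 20 deaths could imply a rate ten times higher or lower, which is exactly why the bare count is uninterpretable.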
But the media rarely provides context for the data. The story is in the
aftermath of an incident where a baby was accidentally smothered in his
sleep.
Oh, and that 20-infants-per-year number is for Minnesota only. No word
as to whether the situation is better or worse in other states.
The headline in the web article is different.
http://www.startribune.com/local/49985722.html?elr=KArksUUUoDEy3LGDiO7aiU
or http://tinyurl.com/nfzgcl
** *** ***** ******* *********** *************
Fraud on eBay
I expected selling my computer on eBay to be easy.
Attempt 1: I listed it. Within hours, someone bought it -- from a
hacked account, as eBay notified me, canceling the sale.
Attempt 2: I listed it again. Within hours, someone bought it, and
asked me to send it to her via FedEx overnight. The buyer sent payment
via PayPal immediately, and then -- near as I could tell -- immediately
opened a dispute with PayPal so that the funds were put on hold. And
then she sent me an e-mail saying "I paid you, now send me the
computer." But PayPal was faster than she expected, I think. At the
same time, I received an e-mail from PayPal saying that I might have
received a payment that the account holder did not authorize, and that I
shouldn't ship the item until the investigation is complete.
I was willing to make Attempt 3, but someone on my blog bought it first.
It looks like eBay is completely broken for items like this.
It's not just me.
http://consumerist.com/5007790/its-now-completely-impossible-to-sell-a-lapt…
or http://tinyurl.com/55hprp
A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/06/fraud_on_ebay.html
** *** ***** ******* *********** *************
News
Did a public Twitter post lead to a burglary?
http://www.usatoday.com/travel/news/2009-06-08-twitter-vacation_N.htm
Prairie dogs hack Baltimore Zoo; an amusing story that echoes a lot of
our own security problems.
http://www.baltimoresun.com/news/maryland/baltimore-city/bal-md.ci.zoo12jun…
or http://tinyurl.com/mcuzam
The U.S. Department of Homeland Security has a blog. I don't know if it
will be as interesting or entertaining as the TSA's blog.
http://www.dhs.gov/journal/theblog
Carrot-bomb art project bombs in Sweden:
http://news.bbc.co.uk/2/hi/europe/8099561.stm
Fascinating research on the psychology of con games. "The psychology of
scams: Provoking and committing errors of judgement" was prepared for
the UK Office of Fair Trading by the University of Exeter School of
Psychology.
http://www.schneier.com/blog/archives/2009/06/the_psychology_3.html
New computer snooping tool:
http://investors.guidancesoftware.com/releasedetail.cfm?ReleaseID=384544
or http://tinyurl.com/lwhuod
This week's movie-plot threat -- fungus:
http://www.schneier.com/blog/archives/2009/06/this_weeks_movi.html
Engineers are more likely to become Muslim terrorists. At least, that's
what the facts indicate. Is it time to start profiling?
http://www.newscientist.com/article/mg20227127.200-can-university-subjects-…
or http://tinyurl.com/m5r56h
http://www.nuff.ox.ac.uk/users/gambetta/Engineers%20of%20Jihad.pdf
John Mueller on nuclear disarmament: "The notion that the world should
rid itself of nuclear weapons has been around for over six decades --
during which time they have been just about the only instrument of
destruction that hasn't killed anybody."
http://www.schneier.com/blog/archives/2009/06/john_mueller_on.html
Eavesdropping on dot-matrix printers by listening to them.
http://www.schneier.com/blog/archives/2009/06/eavesdropping_o_3.html
Research on the security of online games:
http://www.schneier.com/blog/archives/2009/06/research_on_the.html
Ross Anderson liveblogged the 8th Workshop on Economics of Information
Security (WEIS) at University College London.
http://www.lightbluetouchpaper.org/2009/06/24/weis-2009-liveblog/
I wrote about WEIS 2006 back in 2006.
http://www.schneier.com/blog/archives/2006/06/economics_and_i_1.html
Clear, the company that sped people through airport security, has ceased
operations. It is unclear what will happen to all that personal data
they have collected.
http://www.schneier.com/blog/archives/2009/06/clear_shuts_dow.html
This no-stabbing knife seems not to be a joke.
http://www.timesonline.co.uk/tol/news/uk/crime/article6501720.ece
I've already written about the risks of pointy knives.
http://www.schneier.com/blog/archives/2005/06/risks_of_pointy.html
The Communications Security Establishment (CSE, basically Canada's NSA)
is growing so fast they're running out of room and building new office
buildings.
http://www.defenseindustrydaily.com/Canadas-CSE-ELINT-Agency-Building-New-F…
or http://tinyurl.com/leu79h
Cryptography spam:
http://www.schneier.com/blog/archives/2009/06/cryptography_sp.html
More security countermeasures from the natural world:
1. The plant Caladium steudneriifolium pretends to be ill so that
mining moths won't eat it.
http://news.bbc.co.uk/earth/hi/earth_news/newsid_8108000/8108940.stm
2. Cabbage aphids arm themselves with chemical bombs.
http://scienceblogs.com/notrocketscience/2009/06/aphids_defend_themselves_w…
or http://tinyurl.com/ksegwk
3. The dark-footed ant spider mimics an ant so that it's not eaten by
other spiders, and so it can eat spiders itself.
http://scienceblogs.com/notrocketscience/2009/06/spiders_gather_in_groups_t…
or http://tinyurl.com/p9u8r9
http://scienceblogs.com/notrocketscience/2009/07/spider_mimics_ant_to_eat_s…
or http://tinyurl.com/mhjxh3
Information leakage from keypads. (You need to click on the link to see
the pictures.)
http://www.schneier.com/blog/archives/2009/07/information_lea_1.html
Good essay -- "The Staggering Cost of Playing it 'Safe'" -- about the
political motivations for terrorist security policy.
http://www.dailykos.com/storyonly/2009/6/16/743102/-The-Staggering-Cost-of-…
or http://tinyurl.com/m8dlvr
My commentary on an article hyping the terrorist risk of cloud computing:
http://www.schneier.com/blog/archives/2009/07/terrorist_risk.html
Pocketless trousers to protect against bribery in Nepal:
http://www.google.com/hostednews/afp/article/ALeqM5gmKIu2qKjavgL6B0s7161VCy…
or http://tinyurl.com/mexcdy
Anti-theft lunch bags:
http://design-milk.com/anti-theft-lunch-bags/
U.S. court institutes limits on TSA searches. This is good news.
http://www.schneier.com/blog/archives/2009/07/court_limits_on.html
Spanish police foil remote-controlled zeppelin jailbreak. Sometimes
movie plots actually happen.
http://gizmodo.com/5307943/spanish-police-foil-remote+controlled-zeppelin-j…
or http://tinyurl.com/qcns4y
http://www.thestar.com/news/world/article/660875
Almost two years ago, I wrote about my strategy for encrypting my
laptop. One of the things I said was: "There are still two scenarios
you aren't secure against, though. You're not secure against someone
snatching your laptop out of your hands as you're typing away at the
local coffee shop. And you're not secure against the authorities telling
you to decrypt your data for them." Here's a free program that defends
against that first threat: it locks the computer unless a key is pressed
every n seconds. Honestly, this would be too annoying for me to use,
but you're welcome to try it.
http://www.donationcoder.com/Forums/bb/index.php?topic=18656.0
http://www.schneier.com/blog/archives/2009/06/protecting_agai.html
http://www.schneier.com/essay-199.html
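The mechanism described -- lock unless a key arrives every n seconds -- is a dead-man's switch, and can be sketched in a few lines. This is an illustrative reconstruction, not the linked program's actual code; the class name and the Windows LockWorkStation() call mentioned in the comment are my own choices.

```python
import threading

class DeadMansSwitch:
    """Call `on_timeout` unless poke() arrives at least every `timeout` seconds.

    Illustrative sketch of a lock-unless-a-key-is-pressed tool. A real
    implementation would hook keyboard events and have on_timeout lock
    the session (e.g., ctypes.windll.user32.LockWorkStation() on Windows).
    """

    def __init__(self, timeout, on_timeout):
        self.timeout = timeout
        self.on_timeout = on_timeout
        self._timer = None

    def poke(self):
        """Reset the countdown; call this on every keypress."""
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.on_timeout)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer is not None:
            self._timer.cancel()
```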
You won't hear about this ATM vulnerability, because the presentation
has been pulled from the BlackHat conference:
http://www.schneier.com/blog/archives/2009/07/the_atm_vulnera.html
The NSA is building a massive data center in Utah.
http://www.sltrib.com/ci_12735293
http://www.deseretnews.com/article/705314456/Psst-Big-spy-center-is-coming-…
or http://tinyurl.com/nrn64r
I was quoted as calling Google's Chrome operating system "idiotic."
Here's additional explanation and context.
http://www.schneier.com/blog/archives/2009/07/making_an_opera.html
How to cause chaos in an airport: leave a suitcase in a restroom.
http://www.schneier.com/blog/archives/2009/07/lost_suitcases.html
Interesting paper from HotSec '07: "Do Strong Web Passwords Accomplish
Anything?" by Dinei Florencio, Cormac Herley, and Baris Coskun.
http://www.usenix.org/event/hotsec07/tech/full_papers/florencio/florencio.p…
or http://tinyurl.com/ca9mp9
Interesting use of gaze tracking software to protect privacy:
http://www.schneier.com/blog/archives/2009/07/gaze_tracking_s.html
Poor man's steganography -- hiding documents in corrupt PDF documents:
http://blog.didierstevens.com/2009/07/01/embedding-and-hiding-files-in-pdf-…
or http://tinyurl.com/m6onbo
** *** ***** ******* *********** *************
Authenticating Paperwork
It's a sad, horrific story. Homeowner returns to find his house
demolished. The demolition company was hired legitimately but there was
a mistake and it demolished the wrong house. The demolition company
relied on GPS co-ordinates, but requiring street addresses isn't a
solution. A typo in the address is just as likely, and it would have
demolished the house just as quickly.
The problem is less how the demolishers knew which house to knock down,
and more how they confirmed that knowledge. They trusted the paperwork,
and the paperwork was wrong. Informality works when everybody knows
everybody else. When merchants and customers know each other, government
officials and citizens know each other, and people know their neighbors,
people know what's going on. In that sort of milieu, if something goes
wrong, people notice.
In our modern anonymous world, paperwork is how things get done.
Traditionally, signatures, forms, and watermarks all made paperwork
official. Forgeries were possible but difficult. Today, there's still
paperwork, but for the most part it only exists until the information
makes its way into a computer database. Meanwhile, modern technology --
computers, fax machines and desktop publishing software -- has made it
easy to forge paperwork. Every case of identity theft has, at its core,
a paperwork failure. Fake work orders, purchase orders, and other
documents are used to steal computers, equipment, and stock.
Occasionally, fake faxes result in people being sprung from prison. Fake
boarding passes can get you through airport security. This month hackers
officially changed the name of a Swedish man.
A reporter even changed the ownership of the Empire State Building.
Sure, it was a stunt, but this is a growing form of crime. Someone
pretends to be you -- preferably when you're away on holiday -- and
sells your home to someone else, forging your name on the paperwork. You
return to find someone else living in your house, someone who thinks he
legitimately bought it. In some senses, this isn't new. Paperwork
mistakes and fraud have happened ever since there was paperwork. And the
problem hasn't been fixed yet for several reasons.
One, our sloppy systems generally work fine, and it's how we get things
done with minimum hassle. Most people's houses don't get demolished and
most people's names don't get maliciously changed. As common as identity
theft is, it doesn't happen to most of us. These stories are news
because they are so rare. And in many cases, it's cheaper to pay for the
occasional blunder than ensure it never happens.
Two, sometimes the incentives aren't in place for paperwork to be
properly authenticated. The people who demolished that family home were
just trying to get a job done. The same is true for government officials
processing title and name changes. Banks get paid when money is
transferred from one account to another, not when they find a paperwork
problem. We're all irritated by forms stamped 17 times, and other
mysterious bureaucratic processes, but these are actually designed to
detect problems.
And three, there's a psychological mismatch: it is easy to fake
paperwork, yet for the most part we act as if it has magical properties
of authenticity.
What's changed is scale. Fraud can be perpetrated against hundreds of
thousands, automatically. Mistakes can affect that many people, too.
What we need are laws that penalize people or companies -- criminally or
civilly -- who make paperwork errors. This raises the cost of mistakes,
making authenticating paperwork more attractive, which changes the
incentives of those on the receiving end of the paperwork. And that will
cause the market to devise technologies to verify the provenance,
accuracy, and integrity of information: telephone verification,
addresses and GPS co-ordinates, cryptographic authentication, systems
that double- and triple-check, and so on.
We can't reduce society's reliance on paperwork, and we can't eliminate
errors based on it. But we can put economic incentives in place for
people and companies to authenticate paperwork more.
This essay originally appeared in The Guardian.
http://www.guardian.co.uk/technology/2009/jun/24/read-me-first-identity-fra…
or http://tinyurl.com/ls3cdp
A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/06/authenticating_1.html
** *** ***** ******* *********** *************
The Pros and Cons of Password Masking
Usability guru Jakob Nielsen opened up a can of worms when he made the
case against password masking -- the practice of hiding computer
password characters behind asterisks -- in his blog. I chimed in that I
agreed. Almost 165 comments on my blog (and several articles, essays,
and many other blog posts) later, the consensus is that we were wrong.
I was certainly too glib. Like any security countermeasure, password
masking has value. But like any countermeasure, password masking is not
a panacea. And the costs of password masking need to be balanced with
the benefits.
The cost is accuracy. When users don't get visual feedback from what
they're typing, they're more prone to make mistakes. This is especially
true with character strings that have non-standard characters and
capitalization. This has several ancillary costs:
* Users get pissed off.
* Users are more likely to choose easy-to-type passwords, reducing both
mistakes and security. Removing password masking will make people more
comfortable with complicated passwords: they'll become easier to
memorize and easier to use.
The benefits of password masking are more obvious:
* Security from shoulder surfing. If people can't look over your shoulder
and see what you're typing, they're much less likely to be able to steal
your password. Yes, they can look at your fingers instead, but that's
much harder than looking at the screen. Surveillance cameras are also an
issue: it's easier to watch someone's fingers on recorded video than in
person, and reading a cleartext password off a recorded screen would be
trivial.
* In some situations, there is a trust dynamic involved. Do you type
your password while your boss is standing over your shoulder watching?
How about your spouse or partner? Your parent or child? Your teacher or
students? At ATMs, there's a social convention of standing away from
someone using the machine, but that convention doesn't apply to
computers. You might not trust the person standing next to you enough to
let him see your password, but you may not feel comfortable telling him
to look away. Password masking solves that social awkwardness.
* Security from screen scraping malware. This is less of an issue;
keyboard loggers are more common and unaffected by password masking. And
if you have that kind of malware on your computer, you've got all sorts
of problems.
* A security "signal." Password masking alerts users, and I'm thinking
users who aren't particularly security savvy, that passwords are a secret.
I believe that shoulder surfing isn't nearly the problem it's made out
to be. One, lots of people use their computers in private, with no one
looking over their shoulders. Two, personal handheld devices are used
very close to the body, making shoulder surfing all that much harder.
Three, it's hard to quickly and accurately memorize a random
non-alphanumeric string that flashes on the screen for a second or so.
This is not to say that shoulder surfing isn't a threat. It is. And, as
many readers pointed out, password masking is one of the reasons it
isn't more of a threat. And the threat is greater for those who are not
fluent computer users: slow typists and people who are likely to choose
bad passwords. But I believe that the risks are overstated.
Password masking is definitely important on public terminals with short
PINs. (I'm thinking of ATMs.) The value of the PIN is large, shoulder
surfing is more common, and a four-digit PIN is easy to remember in any
case.
And lastly, this problem largely disappears on the Internet on your
personal computer. Most browsers include the ability to save and then
automatically populate password fields, making the usability problem go
away at the expense of another security problem (the security of the
password becomes the security of the computer). There's a Firefox
plug-in that gets rid of password masking. And programs like my own
Password Safe allow passwords to be cut and pasted into applications,
also eliminating the usability problem.
One approach is to make it a configurable option. High-risk banking
applications could turn password masking on by default; other
applications could turn it off by default. Browsers in public locations
could turn it on by default. I like this, but it complicates the user
interface.
A reader mentioned BlackBerry's solution, which is to display each
character briefly before masking it; that seems like an excellent
compromise.
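That compromise is easy to sketch. The function below (my own illustration, not BlackBerry's code) shows the display state right after a keystroke; a real widget would also re-mask the last character after a short delay, which is omitted here.

```python
def mask_display(password: str, show_last: bool = True) -> str:
    """Return the masked field contents, BlackBerry-style: every
    character becomes an asterisk except, optionally, the one just typed."""
    if not password:
        return ""
    if show_last:
        return "*" * (len(password) - 1) + password[-1]
    return "*" * len(password)

# As "secret" is typed, the field shows: s, *e, **c, ***r, ****e, *****t
```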
I, for one, would like the option. I cannot type complicated WEP keys
into Windows -- twice! What's the deal with that? -- without making
mistakes. I cannot type my rarely used and very complicated PGP keys
without making a mistake unless I turn off password masking. That's what
I was reacting to when I said "I agree."
So was I wrong? Maybe. Okay, probably. Password masking definitely
improves security; many readers pointed out that they regularly use
their computer in crowded environments, and rely on password masking to
protect their passwords. On the other hand, password masking reduces
accuracy and makes it less likely that users will choose secure and
hard-to-remember passwords. I will concede that the password masking
trade-off is more beneficial than I thought in my snap reaction, but
also that the answer is not nearly as obvious as we have historically
assumed.
A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/07/the_pros_and_co.html
** *** ***** ******* *********** *************
The "Hidden Cost" of Privacy
Forbes ran an article talking about the "hidden" cost of privacy.
Basically, the point was that privacy regulations are expensive to
comply with, and a lot of that expense gets eaten up by the mechanisms
of compliance and doesn't go toward improving anyone's actual privacy.
This is a valid point, and one that I make in talks about privacy all
the time. It's particularly bad in the United States, because we have a
patchwork of different privacy laws covering different types of
information and different situations and not a single comprehensive
privacy law.
The meta-problem is simple to describe: those entrusted with our privacy
often don't have much incentive to respect it. Examples include: credit
bureaus such as TransUnion and Experian, who don't have any business
relationship at all with the people whose data they collect and sell;
companies such as Google who give away services -- and collect personal
data as a part of that -- as an incentive to view ads, and make money by
selling those ads to other companies; medical insurance companies, who
are chosen by a person's employer; and computer software vendors, who
can have monopoly powers over the market. Even worse, it can be
impossible to connect an effect of a privacy violation with the
violation itself -- if someone opens a bank account in your name, how do
you know who was to blame for the privacy violation? -- so even when
there is a business relationship, there's no clear cause-and-effect
relationship.
What this all means is that protecting individual privacy remains an
externality for many companies, and that basic market dynamics won't
work to solve the problem. Because the efficient market solution won't
work, we're left with inefficient regulatory solutions. So now the
question becomes: how do we make regulation as efficient as possible? I
have some suggestions:
* Broad privacy regulations are better than narrow ones.
* Simple and clear regulations are better than complex and confusing ones.
* It's far better to regulate results than methodology.
* Penalties for bad behavior need to be expensive enough to make good
behavior the rational choice.
We'll never get rid of the inefficiencies of regulation -- that's the
nature of the beast, and why regulation only makes sense when the market
fails -- but we can reduce them.
Forbes article:
http://www.forbes.com/forbes/2009/0608/034-privacy-research-hidden-cost-of-…
or http://tinyurl.com/obpf6j
** *** ***** ******* *********** *************
Fixing Airport Security
It's been months since the Transportation Security Administration has
had a permanent director. If, during the job interview (no, I didn't get
one), President Obama asked me how I'd fix airport security in one
sentence, I would reply: "Get rid of the photo ID check, and return
passenger screening to pre-9/11 levels."
Okay, that's a joke. While showing ID, taking your shoes off, and
throwing away your water bottles aren't making us much safer, I don't
expect the Obama administration to roll back those security measures
anytime soon. Airport security is more about CYA than anything else:
defending against what the terrorists did last time.
But the administration can't risk appearing as if it facilitated a
terrorist attack, no matter how remote the possibility, so those
annoyances are probably here to stay.
This would be my real answer: "Establish accountability and transparency
for airport screening." And if I had another sentence: "Airports are one
of the places where Americans, and visitors to America, are most likely
to interact with a law enforcement officer -- and yet no one knows what
rights travelers have or how to exercise those rights."
Obama has repeatedly talked about increasing openness and transparency
in government, and it's time to bring transparency to the Transportation
Security Administration (TSA).
Let's start with the no-fly and watch lists. Right now, everything about
them is secret: You can't find out if you're on one, or who put you
there and why, and you can't clear your name if you're innocent. This
Kafkaesque scenario is so un-American it's embarrassing. Obama should
make the no-fly list subject to judicial review.
Then, move on to the checkpoints themselves. What are our rights? What
powers do the TSA officers have? If we're asked "friendly" questions by
behavioral detection officers, are we allowed not to answer? If we
object to the rough handling of ourselves or our belongings, can the TSA
official retaliate against us by putting us on a watch list? Obama
should make the rules clear and explicit, and allow people to bring
legal action against the TSA for violating those rules; otherwise,
airport checkpoints will remain a Constitution-free zone in our country.
Next, Obama should refuse to use unfunded mandates to sneak expensive
security measures past Congress. The Secure Flight program is the worst
offender. Airlines are being forced to spend billions of dollars
redesigning their reservations systems to accommodate the TSA's demands
to preapprove every passenger before he or she is allowed to board an
airplane. These costs are borne by us, in the form of higher ticket
prices, even though we never see them explicitly listed.
Maybe Secure Flight is a good use of our money; maybe it isn't. But
let's have debates like that in the open, as part of the budget process,
where it belongs.
And finally, Obama should mandate that airport security be solely about
terrorism, and not a general-purpose security checkpoint to catch
everyone from pot smokers to deadbeat dads.
The Constitution provides us, both Americans and visitors to America,
with strong protections against invasive police searches. Two exceptions
come into play at airport security checkpoints. The first is "implied
consent," which means that you cannot refuse to be searched; your
consent was implied when you purchased your ticket. And the second is
"plain view," which means that if the TSA officer happens to see
something unrelated to airport security while screening you, he is
allowed to act on that.
Both of these principles are well established and make sense, but it's
their combination that turns airport security checkpoints into
police-state-like checkpoints.
The TSA should limit its searches to bombs and weapons and leave general
policing to the police -- where we know courts and the Constitution still
apply.
None of these changes will make airports any less safe, but they will go
a long way to de-ratcheting the culture of fear, restoring the
presumption of innocence and reassuring Americans, and the rest of the
world, that -- as Obama said in his inauguration speech -- "we reject as
false the choice between our safety and our ideals."
This essay originally appeared, without hyperlinks, in the New York
Daily News.
http://www.nydailynews.com/opinions/2009/06/24/2009-06-24_clear_common_sens…
or http://tinyurl.com/kwa2pd
http://www.schneier.com/blog/archives/2009/06/fixing_airport.html
** *** ***** ******* *********** *************
Schneier News
I am speaking at Black Hat and DefCon, in Las Vegas, on 30 and 31 July 2009.
https://www.blackhat.com/html/bh-usa-09/bh-us-09-main.html
http://defcon.org/html/defcon-17/dc-17-index.html
** *** ***** ******* *********** *************
Homomorphic Encryption Breakthrough
Last month, IBM made some pretty brash claims about homomorphic
encryption and the future of security. I hate to be the one to throw
cold water on the whole thing -- as cool as the new discovery is -- but
it's important to separate the theoretical from the practical.
Homomorphic cryptosystems are ones where mathematical operations on the
ciphertext have regular effects on the plaintext. A normal symmetric
cipher -- DES, AES, or whatever -- is not homomorphic. Assume you have a
plaintext P, and you encrypt it with AES to get a corresponding
ciphertext C. If you multiply that ciphertext by 2, and then decrypt 2C,
you get random gibberish instead of P. If you got something else, like
2P, that would imply some pretty strong nonrandomness properties of AES
and no one would trust its security.
The RSA algorithm is different. Encrypt P to get C, encrypt 2 and
multiply the two ciphertexts, and then decrypt the product -- and you
get 2P. That's a homomorphism: perform a mathematical operation on the
ciphertext, and that operation is reflected in the plaintext. The RSA
algorithm is homomorphic with respect to multiplication, something that
has to be taken into account when evaluating the security of a security
system that uses RSA.
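The multiplicative homomorphism is easy to demonstrate with textbook RSA and deliberately tiny parameters (a toy sketch; real RSA uses padding, which destroys this property in practice):

```python
# Toy textbook RSA: n = 61 * 53 = 3233, e*d = 1 mod phi(n) = 3120.
n, e, d = 3233, 17, 2753

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

# Multiplying ciphertexts multiplies plaintexts: Enc(a)*Enc(b) = Enc(a*b).
a, b = 42, 2
product_ct = (enc(a) * enc(b)) % n
assert dec(product_ct) == (a * b) % n   # decrypts to 84
```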
This isn't anything new. RSA's homomorphism was known in the 1970s, and
other algorithms that are homomorphic with respect to addition have been
known since the 1980s. But what has eluded cryptographers is a fully
homomorphic cryptosystem: one that is homomorphic under both addition
and multiplication and yet still secure. And that's what IBM researcher
Craig Gentry has discovered.
This is a bigger deal than might appear at first glance. Any computation
can be expressed as a Boolean circuit: a series of additions and
multiplications. Your computer consists of a zillion Boolean circuits,
and you can run programs to do anything on your computer. This algorithm
means you can perform arbitrary computations on homomorphically
encrypted data. More concretely: if you encrypt data in a fully
homomorphic cryptosystem, you can ship that encrypted data to an
untrusted person and that person can perform arbitrary computations on
that data without being able to decrypt the data itself. Imagine what
that would mean for cloud computing, or any outsourcing infrastructure:
you no longer have to trust the outsourcer with the data.
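The "additions and multiplications" claim is easiest to see over single bits, where addition mod 2 is XOR and multiplication is AND -- and those two gates suffice to build any Boolean circuit. A half adder, built from only those operations:

```python
def xor(a: int, b: int) -> int:
    return (a + b) % 2          # homomorphic addition

def and_(a: int, b: int) -> int:
    return (a * b) % 2          # homomorphic multiplication

def half_adder(a: int, b: int):
    """One bit of binary addition, using only + and * mod 2."""
    return xor(a, b), and_(a, b)   # (sum bit, carry bit)

# Chaining such gates gives full adders, then arbitrary arithmetic --
# all of it evaluable on ciphertexts under a fully homomorphic scheme.
```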
Unfortunately -- you knew that was coming, right? -- Gentry's scheme is
completely impractical. It uses something called an ideal lattice as the
basis for the encryption scheme, and both the size of the ciphertext and
the complexity of the encryption and decryption operations grow
enormously with the number of operations you need to perform on the
ciphertext -- and that number needs to be fixed in advance. And
converting a computer program, even a simple one, into a Boolean circuit
requires an enormous number of operations. These aren't impracticalities
that can be solved with some clever optimization techniques and a few
turns of Moore's Law; this is an inherent limitation in the algorithm.
In one article, Gentry estimates that performing a Google search with
encrypted keywords -- a perfectly reasonable simple application of this
algorithm -- would increase the amount of computing time by a factor of
about a trillion. By Moore's Law, it would be about 40 years before that
homomorphic search would be as efficient as a search today, and I think
he's being optimistic with even this most simple of examples.
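The arithmetic behind the 40-year figure: a trillion-fold slowdown is about 2^40, so it takes roughly 40 performance doublings to close the gap. Note the optimistic assumption of one doubling per year; at the more commonly quoted 18-24 months per doubling, it would be 60-80 years.

```python
import math

slowdown = 1e12                  # Gentry's "about a trillion"
doublings = math.log2(slowdown)  # ~39.9 doublings needed
print(round(doublings))          # -> 40, i.e. ~40 years at one doubling/year
```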
Despite this, IBM's PR machine has been in overdrive about the
discovery. Its press release makes it sound like this new homomorphic
scheme is going to rewrite the business of computing: not just cloud
computing, but "enabling filters to identify spam, even in encrypted
email, or protecting information contained in electronic medical
records." Maybe someday, but not in my lifetime.
This is not to take anything away from Gentry or his discovery.
Visions of a fully homomorphic cryptosystem have been dancing in
cryptographers' heads for thirty years. I never expected to see one. It
will be years before a sufficient number of cryptographers examine the
algorithm that we can have any confidence that the scheme is secure, but
-- practicality be damned -- this is an amazing piece of work.
A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/07/homomorphic_enc.html
** *** ***** ******* *********** *************
New Attack on AES
There's a new cryptanalytic attack on AES that is better than brute force:
"Abstract. In this paper we present two related-key attacks on the full
AES. For AES-256 we show the first key recovery attack that works for
all the keys and has complexity 2^119, while the recent attack by
Biryukov-Khovratovich-Nikolic works for a weak key class and has higher
complexity. The second attack is the first cryptanalysis of the full
AES-192. Both our attacks are boomerang attacks, which are based on the
recent idea of finding local collisions in block ciphers and enhanced
with the boomerang switching techniques to gain free rounds in the middle."
In an e-mail, the authors wrote: "We also expect that a careful
analysis may reduce the complexities. As a preliminary result, we think
that the complexity of the attack on AES-256 can be lowered from 2^119
to about 2^110.5 data and time. We believe that these results may shed
a new light on the design of the key-schedules of block ciphers, but
they pose no immediate threat for the real world applications that use AES."
Agreed. While this attack is better than brute force -- and some
cryptographers will describe the algorithm as "broken" because of it --
it is still far, far beyond our capabilities of computation. The attack
is, and probably forever will be, theoretical. But remember: attacks
always get better, they never get worse. Others will continue to
improve on these numbers. While there's no reason to panic, no reason
to stop using AES, no reason to insist that NIST choose another
encryption standard, this will certainly be a problem for some of the
AES-based SHA-3 candidate hash functions.
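Some perspective on why 2^119 is "far, far beyond our capabilities": even granting an assumed, very generous 10^18 AES operations per second, the attack runs for about as long as the universe has existed.

```python
# Back-of-envelope only; the 1e18 ops/sec figure is an assumption,
# roughly the total throughput of a top supercomputer.
ops = 2 ** 119
seconds = ops / 1e18
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.1e} years")   # ~2.1e10 -- larger than the age of the universe
```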
https://cryptolux.uni.lu/mediawiki/uploads/1/1a/Aes-192-256.pdf
https://cryptolux.org/FAQ_on_the_attacks
** *** ***** ******* *********** *************
MD6 Withdrawn from SHA-3 Competition
In other SHA-3 news, Ron Rivest has suggested that his MD6 algorithm be
withdrawn from the SHA-3 competition. From an e-mail to a NIST mailing
list: "We suggest that MD6 is not yet ready for the next SHA-3 round,
and we also provide some suggestions for NIST as the contest moves forward."
Basically, the issue is that in order for MD6 to be fast enough to be
competitive, the designers have to reduce the number of rounds down to
30-40, and at those rounds, the algorithm loses its proofs of resistance
to differential attacks: "Thus, while MD6 appears to be a robust and
secure cryptographic hash algorithm, and has much merit for multi-core
processors, our inability to provide a proof of security for a
reduced-round (and possibly tweaked) version of MD6 against differential
attacks suggests that MD6 is not ready for consideration for the next
SHA-3 round."
This is a very classy withdrawal, as we expect from Ron Rivest --
especially given the fact that there are no attacks on it, while other
algorithms have been seriously broken and their submitters keep trying
to pretend that no one has noticed.
A copy of this blog post, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/07/md6.html
** *** ***** ******* *********** *************
Ever Better Cryptanalytic Results Against SHA-1
The SHA family (which, I suppose, should really be called the MD4
family) of cryptographic hash functions has been under attack for a long
time. In 2005, we saw the first cryptanalysis of SHA-1 that was faster
than brute force: collisions in 2^69 hash operations, later improved to
2^63 operations. A great result, but not devastating. But remember the
great truism of cryptanalysis: attacks always get better, they never get
worse. Last week, devastating got a whole lot closer. A new attack can,
at least in theory, find collisions in 2^52 hash operations -- well
within the realm of computational possibility. Assuming the
cryptanalysis is correct, we should expect to see an actual SHA-1
collision within the year.
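To make "well within the realm of computational possibility" concrete, a rough sketch; the 10^9 hashes-per-second rate is an illustrative assumption about a single fast machine, not a figure from the attack paper:

```python
# How long 2^52 versus 2^69 SHA-1 evaluations would take at an assumed
# rate of 10^9 hash operations per second (roughly one fast machine).
rate = 1e9                               # ops/second (assumption)

days_new = 2**52 / rate / 86400          # the new 2^52 collision attack
years_old = 2**69 / rate / 86400 / 365   # the 2005-era 2^69 attack

print(f"2^52: about {days_new:.0f} days")    # weeks, not centuries
print(f"2^69: about {years_old:.0f} years")  # utterly impractical
```

The gap between the two attacks is a factor of 2^17, which is what moves the attack from "theoretical" to "expect a real collision soon."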
Note that this is a collision attack, not a pre-image attack. Most uses
of hash functions don't care about collision attacks. But if yours does,
switch to SHA-2 immediately.
This is why NIST is administering a SHA-3 competition for a new hash
standard. And whatever algorithm is chosen, it will look nothing like
anything in the SHA family (which is why I think it should be called the
Advanced Hash Standard, or AHS).
A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/06/ever_better_cry.html
** *** ***** ******* *********** *************
Comments from Readers
There are thousands of comments -- many of them interesting -- on these
topics on my blog. Search for the story you want to comment on, and join in.
http://www.schneier.com/blog
** *** ***** ******* *********** *************
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing
summaries, analyses, insights, and commentaries on security: computer
and otherwise. You can subscribe, unsubscribe, or change your address
on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues
are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to
colleagues and friends who will find it valuable. Permission is also
granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the
best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies,"
and "Applied Cryptography," and an inventor of the Blowfish, Twofish,
Phelix, and Skein algorithms. He is the Chief Security Technology
Officer of BT BCSG, and is on the Board of Directors of the Electronic
Privacy Information Center (EPIC). He is a frequent writer and lecturer
on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not
necessarily those of BT.
Copyright (c) 2009 by Bruce Schneier.
----- End forwarded message -----
--
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
Re: [p2p-hackers] TCP library - Was: p2p-hackers Digest, Vol 51, Issue 8
by Sebastien Martini 06 Jul '18
Hi guys,
Seems to be a new development in this area: CryptoCP by Daniel
Bernstein; see the relevant portions at 41:30 and 59:45 of his
recent talk, http://27c3.iphoneblog.de/recordings/4295.html (I don't
know how to watch the video from there, though; I downloaded the talk
this morning from another site, but that link is down right now).
While he didn't discuss CryptoCP much per se, my _guess_ from what he
said is that it is a mix of TCP in userspace with authenticated
encryption provided by his NaCl library,
http://nacl.cace-project.eu/box.html . NaCl implements cryptographic
primitives mainly based on his own designs, such as the Salsa family
and Poly1305, but he has also hinted at some sort of equivalent built
from "standard" primitives (see the previous link, bottom of the
page). As a common use case, client-side applications would proxy
their network requests through CryptoCP, which would encrypt them,
simulate TCP, and send UDP packets to the destination server. Notice
that in this case it would be quite similar to what libjingle
currently implements: OpenSSL can be used to SSL_write and SSL_read
to/from a userspace TCP stream.
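As a rough, stdlib-only illustration of the "authenticated encryption" idea sketched above: a SHA-256 counter-mode keystream stands in for Salsa20, and HMAC-SHA256 stands in for Poly1305. This is a pedagogical encrypt-then-MAC sketch, not NaCl's actual crypto_box construction:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Counter-mode keystream from SHA-256 (toy stand-in for Salsa20).
    blocks, ctr = [], 0
    while sum(len(b) for b in blocks) < n:
        blocks.append(
            hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest())
        ctr += 1
    return b"".join(blocks)[:n]

def seal(enc_key: bytes, mac_key: bytes, nonce: bytes, msg: bytes) -> bytes:
    # Encrypt, then MAC the nonce and ciphertext (encrypt-then-MAC).
    ct = bytes(m ^ k for m, k in zip(msg, _keystream(enc_key, nonce, len(msg))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct + tag

def unseal(enc_key: bytes, mac_key: bytes, nonce: bytes, boxed: bytes) -> bytes:
    # Verify the MAC *before* decrypting; reject forged/corrupted packets.
    ct, tag = boxed[:-32], boxed[-32:]
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))

enc_key, mac_key, nonce = os.urandom(32), os.urandom(32), os.urandom(24)
packet = seal(enc_key, mac_key, nonce, b"payload over UDP")
assert unseal(enc_key, mac_key, nonce, packet) == b"payload over UDP"
```

Checking the tag before decrypting is the property that matters for a UDP transport: a forged or corrupted datagram is dropped without ever being interpreted.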
On Thu, Dec 23, 2010 at 23:28, coderman <coderman(a)gmail.com> wrote:
> On Sun, Nov 28, 2010 at 2:26 PM, Alex Pankratov <ap(a)poneyhot.org> wrote:
>> Just need a drop in replacement for TCP sockets, i.e. if I have
>> two nodes talking over UDP, I want a simple way for them to do
>> TCP as well.
>
>
> current favorite:
>
> NetBSD RUMP stacks in userspace:
> http://www.netbsd.org/docs/rump/index.html
>
> the paper on TCP/IP in userspace:
> http://2009.asiabsdcon.org/papers/abc2009-P5A-paper.pdf
> presentation:
> http://www.youtube.com/watch?v=RxFctq8A0WI
> _______________________________________________
> p2p-hackers mailing list
> p2p-hackers(a)lists.zooko.com
> http://lists.zooko.com/mailman/listinfo/p2p-hackers
>
_______________________________________________
p2p-hackers mailing list
p2p-hackers(a)lists.zooko.com
http://lists.zooko.com/mailman/listinfo/p2p-hackers
----- End forwarded message -----
2011/2/19 nettime's avid reader <nettime(a)kein.org>:
> In response to Mr. Moglen's call for help, a group of developers working on
> a free operating system called Debian have started to organize Freedom Box
> software.
> Mr. Moglen said that if he could raise "slightly north of $500,000,"
> Freedom Box 1.0 would be ready in one year.
They raised more than 60,000 US dollars in 5 days on kickstarter
https://www.kickstarter.com/projects/721744279/push-the-freedombox-foundati…
"The donation that pushed us over the edge came from Blaine Cook, the
former lead architect for twitter, which is a nice vote of confidence,
but we would like to take a moment and send our thanks out to all of
the people who have helped contribute to this great drive, and
encourage everyone else to take a look." They announced on their
website yesterday:
http://freedomboxfoundation.org/news/Thank_you_Kickstarters/
Praveen
--
You have to keep reminding your government that you don't get your
rights from them; you give them permission to rule, only so long as
they follow the rules: laws and constitution.
# distributed via <nettime>: no commercial use without permission
# <nettime> is a moderated mailing list for net criticism,
# collaborative text filtering and cultural politics of the nets
# more info: http://mail.kein.org/mailman/listinfo/nettime-l
# archive: http://www.nettime.org contact: nettime(a)kein.org
----- End forwarded message -----
Text of USA Act, which President Bush will sign today:
http://thomas.loc.gov/cgi-bin/bdquery/z?d107:h.r.03162:
Background:
http://www.wartimeliberty.com/search.pl?topic=legislation
---
http://www.wired.com/news/conflict/0,2100,47901,00.html
Terror Bill Has Lasting Effects
By Declan McCullagh (declan(a)wired.com)
2:00 a.m. Oct. 26, 2001 PDT
WASHINGTON -- Legislators who sent a sweeping anti-terrorism bill to
President Bush this week proudly say that the most controversial
surveillance sections will expire in 2005.
Senate Judiciary chairman Patrick Leahy (D-Vermont) said that a
four-year expiration date "will be crucial in making sure that these
new law enforcement powers are not abused." In the House, Bob Barr
(R-Georgia) stressed that "we take very seriously the sunset
provisions in this bill."
But the Dec. 2005 expiration date embedded in the USA Act -- which the
Senate approved 98 to 1 on Thursday -- applies only to a tiny part of
the mammoth bill.
After the president signs the measure on Friday, police will have the
permanent ability to conduct Internet surveillance without a court
order in some circumstances, secretly search homes and offices without
notifying the owner, and share confidential grand jury information
with the CIA.
Also exempt from the expiration date are investigations underway by
Dec. 2005, and any future investigations of crimes that took place
before that date.
[...]
Other sections of the USA Act, which the House approved by a 357 to 66
vote on Wednesday, that do not expire include the following:
* Police can sneak into someone's house or office, search the
contents, and leave without ever telling the owner. This would be
supervised by a court, and the notification of the surreptitious
search "may be delayed" indefinitely. (Section 213)
* Any U.S. attorney or state attorney general can order the
installation of the FBI's Carnivore surveillance system and record
addresses of Web pages visited and e-mail correspondents --
without going to a judge. Previously, there were stiffer legal
restrictions on Carnivore and other Internet surveillance
techniques. (Section 216)
* Any American "with intent to defraud" who scans in an image of a
foreign currency note or e-mails or transmits such an image will
go to jail for up to 20 years. (Section 375)
* An accused terrorist who is a foreign citizen and who cannot be
deported can be held for an unspecified series of "periods of up
to six months" with the attorney general's approval. (Section 412)
* Biometric technology, such as fingerprint readers or iris
scanners, will become part of an "integrated entry and exit data
system" with the identities of visa holders who hope to enter the
U.S. (Section 414)
* Any Internet provider or telephone company must turn over customer
information, including phone numbers called -- no court order
required -- if the FBI claims the "records sought are relevant to
an authorized investigation to protect against international
terrorism." The company contacted may not "disclose to any person"
that the FBI is doing an investigation. (Section 505)
* Credit reporting firms like Equifax must disclose to the FBI any
information that agents request in connection with a terrorist
investigation -- without police needing to seek a court order
first. Current law permits this only in espionage cases. (Section
505)
* The current definition of terrorism is radically expanded to
include biochemical attacks and computer hacking. Some current
computer crimes -- such as hacking a U.S. government system or
breaking into and damaging any Internet-connected computer -- are
covered. (Section 808)
* A new crime of "cyberterrorism" is added, which covers hacking
attempts causing damage "aggregating at least $5,000 in value" in
one year, any damage to medical equipment or "physical injury to
any person." Prison terms range between five and 20 years.
(Section 814)
* New computer forensics labs will be created to inspect "seized or
intercepted computer evidence relating to criminal activity
(including cyberterrorism)" and to train federal agents. (Section
816)
-------------------------------------------------------------------------
POLITECH -- Declan McCullagh's politics and technology mailing list
You may redistribute this message freely if you include this notice.
Declan McCullagh's photographs are at http://www.mccullagh.org/
To subscribe to Politech: http://www.politechbot.com/info/subscribe.html
This message is archived at http://www.politechbot.com/
-------------------------------------------------------------------------
----- End forwarded message -----
Dear R.A.,
I am a technical recruiter, looking for a Senior Security Engineer /
Cryptography expert for an outstanding client of mine in Colorado. Your
name has come up as someone strong in cryptography so I was wondering if
you might know anyone who is looking for a full-time position in
cryptography and who may be interested in a new opportunity?
I am looking for a security engineer with strong experience in
developing systems using a variety of standards (RSA, DES, AES, and/or
PKI). Additional experience with DRM would be a plus, but is not
required. This is a visionary position, so the most important piece is
in-depth and broad-based practical knowledge of the standards and their
uses in high-tech systems. This is an outstanding opportunity for the
right person - hands-on and leadership opportunity in a great company
with super benefits and work on the cutting edge of very exciting
technology developments.
Please feel free to pass on my information or to contact me direct if
you know of anyone who might be interested.
This message is being sent directly to you and is not intended as
SPAM in any way. You have not been added to any mailing lists and your
information has not been shared. If you do not wish to receive any
further emails from Vita Group, Inc., please let me know and I will make
sure that you do not.
Thank you very much!
Lori A. Lister, President
Vita Group, Inc. - bringing life to business!
Recruiting Services for IT, Engineering & Biotech
Ph: 303.465.4944
email: llister(a)vitagroup.com
URL: http://www.vitagroup.com/
Our mission is to always provide professional, ethical, and honest
services.
--- end forwarded text
--
-----------------
R. A. Hettinga <mailto: rah(a)ibuc.com>
The Internet Bearer Underwriting Corporation <http://www.ibuc.com/>
44 Farquhar Street, Boston, MA 02131 USA
"... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience." -- Edward Gibbon, 'Decline and Fall of the Roman Empire'
PET 2005 Submission deadline approaching (7 Feb) and PET Award (21 Feb)
by George Danezis 06 Jul '18
Dear Colleagues,
The submission deadline for the Privacy Enhancing Technologies workshop (PET
2005) is on the 7th February 2005. The latest CfP is appended.
We also solicit nominations for the "Award for Outstanding Research in Privacy
Enhancing Technologies" by February 21. For more information about suggesting
a paper for the award:
http://petworkshop.org/award/
Yours,
George Danezis
5th Workshop on Privacy Enhancing Technologies
Dubrovnik, Croatia May 30 - June 1, 2005
C A L L F O R P A P E R S
http://petworkshop.org/2005/
Important Dates:
Paper submission: February 7, 2005
Notification of acceptance: April 4, 2005
Camera-ready copy for preproceedings: May 6, 2005
Camera-ready copy for proceedings: July 1, 2005
Award for Outstanding Research in Privacy Enhancing Technologies
Nomination period: March 4, 2004 through March 7, 2005
Nomination instructions: http://petworkshop.org/award/
-----------------------------------------------------------------------
Privacy and anonymity are increasingly important in the online world.
Corporations, governments, and other organizations are realizing and
exploiting their power to track users and their behavior, and restrict
the ability to publish or retrieve documents. Approaches to
protecting individuals, groups, but also companies and governments
from such profiling and censorship include decentralization,
encryption, distributed trust, and automated policy disclosure.
This 5th workshop addresses the design and realization of such privacy
and anti-censorship services for the Internet and other communication
networks by bringing together anonymity and privacy experts from
around the world to discuss recent advances and new perspectives.
The workshop seeks submissions from academia and industry presenting
novel research on all theoretical and practical aspects of privacy
technologies, as well as experimental studies of fielded systems. We
encourage submissions from other communities such as law and business
that present their perspectives on technological issues. As in past
years, we will publish proceedings after the workshop in the Springer
Lecture Notes in Computer Science series.
Suggested topics include but are not restricted to:
* Anonymous communications and publishing systems
* Censorship resistance
* Pseudonyms, identity management, linkability, and reputation
* Data protection technologies
* Location privacy
* Policy, law, and human rights relating to privacy
* Privacy and anonymity in peer-to-peer architectures
* Economics of privacy
* Fielded systems and techniques for enhancing privacy in existing systems
* Protocols that preserve anonymity/privacy
* Privacy-enhanced access control or authentication/certification
* Privacy threat models
* Models for anonymity and unobservability
* Attacks on anonymity systems
* Traffic analysis
* Profiling and data mining
* Privacy vulnerabilities and their impact on phishing and identity theft
* Deployment models for privacy infrastructures
* Novel relations of payment mechanisms and anonymity
* Usability issues and user interfaces for PETs
* Reliability, robustness and abuse prevention in privacy systems
Stipends to attend the workshop will be made available, on the basis
of need, to cover travel expenses, hotel, or conference fees. You do
not need to submit a technical paper and you do not need to be a
student to apply for a stipend. For more information, see
http://petworkshop.org/2005/stipends.html
General Chair:
Damir Gojmerac (damir.gojmerac(a)fina.hr) Fina Corporation, Croatia
Program Chairs:
George Danezis (George.Danezis(a)cl.cam.ac.uk) University of Cambridge, UK
David Martin (dm(a)cs.uml.edu) University of Massachusetts at Lowell, USA
Program Committee:
Martin Abadi, University of California at Santa Cruz, USA
Alessandro Acquisti, Heinz School, Carnegie Mellon University, USA
Caspar Bowden, Microsoft EMEA, UK
Jean Camp, Indiana University at Bloomington, USA
Richard Clayton, University of Cambridge, UK
Lorrie Cranor, School of Computer Science, Carnegie Mellon University, USA
Roger Dingledine, The Free Haven Project, USA
Hannes Federrath, University of Regensburg, Germany
Ian Goldberg, Zero Knowledge Systems, Canada
Philippe Golle, Palo Alto Research Center, USA
Marit Hansen, Independent Centre for Privacy Protection Schleswig-Holstein,
Germany
Markus Jakobsson, Indiana University at Bloomington, USA
Dogan Kesdogan, Rheinisch-Westfaelische Technische Hochschule Aachen, Germany
Brian Levine, University of Massachusetts at Amherst, USA
Andreas Pfitzmann, Dresden University of Technology, Germany
Matthias Schunter, IBM Zurich Research Lab, Switzerland
Andrei Serjantov, The Free Haven Project, UK
Paul Syverson, Naval Research Lab, USA
Latanya Sweeney, Carnegie Mellon University, USA
Matthew Wright, University of Texas at Arlington, USA
Papers should be at most 15 pages excluding the bibliography and
well-marked appendices (using an 11-point font), and at most 20 pages
total. Submission of shorter papers (from around 4 pages) is strongly
encouraged whenever appropriate. Papers must conform to the Springer
LNCS style. Follow the "Information for Authors" link at
http://www.springer.de/comp/lncs/authors.html.
Reviewers of submitted papers are not required to read the appendices
and the paper should be intelligible without them. The paper should
start with the title, names of authors and an abstract. The
introduction should give some background and summarize the
contributions of the paper at a level appropriate for a non-specialist
reader. A preliminary version of the proceedings will be made
available to workshop participants. Final versions are not due until
after the workshop, giving the authors the opportunity to revise their
papers based on discussions during the meeting.
Submit your papers in Postscript or PDF format. To submit a paper,
compose a plain text email to pet2005-submissions(a)petworkshop.org
containing the title and abstract of the paper, the authors' names,
email and postal addresses, phone and fax numbers, and identification
of the contact author (to whom we will address all subsequent
correspondence). Attach your submission to this email and send it.
By submitting a paper, you agree that if it is accepted, you will sign
a paper distribution agreement allowing for publication, and also that
an author of the paper will register for the workshop and present the
paper there. Our current working agreement with Springer is that
authors will retain copyright on their own works while assigning an
exclusive 3-year distribution license to Springer. Authors may still
post their papers on their own Web sites. See
http://petworkshop.org/2004/paper-dist-agreement-5-04.html for the 2004
version of this agreement.
Submitted papers must not substantially overlap with papers that have
been published or that are simultaneously submitted to a journal or a
conference with proceedings.
Paper submissions must be received by February 7. We acknowledge all
submissions manually by email. If you do not receive an
acknowledgment within a few days (or one day, if you are submitting
right at the deadline), then contact the program committee chairs
directly to resolve the problem. Notification of acceptance or
rejection will be sent to authors no later than April 4 and authors
will have the opportunity to revise for the preproceedings version by
May 6.
We also invite proposals of up to 2 pages for panel discussions or
other relevant presentations. In your proposal, (1) describe the
nature of the presentation and why it is appropriate to the workshop,
(2) suggest a duration for the presentation (ideally between 45 and 90
minutes), (3) give brief descriptions of the presenters, and (4)
indicate which presenters have confirmed their availability for the
presentation if it is scheduled. Otherwise, submit your proposal by
email as described above, including the designation of a contact
author. The program committee will consider presentation proposals
along with other workshop events, and will respond by the paper
decision date with an indication of its interest in scheduling the
event. The proceedings will contain 1-page abstracts of the
presentations that take place at the workshop. Each contact author
for an accepted panel proposal must prepare and submit this abstract
in the Springer LNCS style by the "Camera-ready copy for
preproceedings" deadline date.
_______________________________________________
NymIP-res-group mailing list
NymIP-res-group(a)nymip.org
http://www.nymip.org/mailman/listinfo/nymip-res-group
--- end forwarded text
Re: [tahoe-dev] Proposed short description of tahoe-LAFS for personal backup
by Zooko Wilcox-O'Hearn 06 Jul '18
On Tue, Jun 12, 2012 at 7:44 PM, Saint Germain <saintger(a)gmail.com> wrote:
>
> tahoe-LAFS is not really a backup software but rather a storage solution.
How about:
"""
Tahoe-LAFS is not just a backup tool, but rather a distributed file
system. It also comes with an integrated backup tool.
"""
> The primary objective is to secure your data, either for
> privacy or for safety (against damage). To do so, it stores your
> encrypted data (encrypted at the source) on several machines organized
> in a network with a configurable policy (specifying K=2 and N=5 for
> instance, will spread your data on 5 machines, 2 of which need at least
> to be available to access your data).
Nicely written.
> A few remarkable points:
> - I like its "paranoid" approach. The idea is to trust no one (and especially not your online storage provider)
This is fine and I don't think you need to change it, but for your
information whenever I see the word "trust" a little warning flag goes
up in my brain, and I go back and mentally rewrite the sentence
without the word "trust". This is because that word combines two
things: 1. Whether you think a person or tool is going to fail or
betray you, and 2. Whether your system relies on that person or tool
operating correctly and loyally.
A lot of the architecture of Tahoe-LAFS is focused on question 2 and
the interference of question 1 just confuses everyone. If you think
about question 1 then you'll sometimes end up choosing the wrong
answer for question 2.
For example: think of Least Authority Enterprises. As a company, we
don't want our users to think that we are weak or malicious -- that we
are likely to fail or to betray our customers to someone else. But, we
very much want our customers to be *invulnerable* to us, so that *if*
we were to fail or to betray our customers to someone else, there
would be very little damage that we would be able to do.
If you use the word "trust", then it is hard to explain why you don't
want to give LAE your encryption keys. Does that mean you think we are
dishonest? Don't you trust us?
If you force yourself not to use the word trust, then you can usually
rewrite the sentences to be in terms of "reliance" or "vulnerability"
instead of trust, and suddenly the confusion about question 1 vs.
question 2 disappears. Why do you withhold your capabilities from LAE?
Because you don't wish to be vulnerable to a failure or betrayal at
LAE. Because you don't want the safety of your backups to *rely on*
the continued security of LAE's servers and the continued loyalty of
its employees.
Doing that transformation on your sentence above would give something like:
"""
I like its "paranoid" approach. The idea is that no one (not even your
online storage provider) should have read or write access to your
backups.
"""
> - Don't need any redundant PAR2 checksum, given that the data are
By the way, zfec is an alternative to PAR2.
https://tahoe-lafs.org/trac/zfec/browser/zfec/README.rst
zfec is much more efficient than PAR2 for some settings. (See the
benchmarks in the README.rst.)
I don't know of anyone who is actively using zfec's command-line tool
in the way that one uses PAR2's command-line tool, though. There are
lots of people using zfec as a library inside other tools.
Looking forward to reading your article! Maybe I'll painfully struggle
through the first 10 words in French and then get my Francophone wife
to read it to me. :-)
Regards,
Zooko
_______________________________________________
tahoe-dev mailing list
tahoe-dev(a)tahoe-lafs.org
https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev
----- End forwarded message -----