CRYPTO-GRAM

October 15, 2008

by Bruce Schneier
Chief Security Technology Officer, BT
schneier@schneier.com
http://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-0810.html>. These same essays appear in the "Schneier on Security" blog: <http://www.schneier.com/blog>. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:
     The Seven Habits of Highly Ineffective Terrorists
     The Two Classes of Airport Contraband
     News
     The More Things Change, the More They Stay the Same
     NSA's Warrantless Eavesdropping Targets Innocent Americans
     Schneier/BT News
     Taleb on the Limitations of Risk Management
     "New Attack" Against Encrypted Images
     Nonviolent Activists Are Now Terrorists
     Does Risk Management Make Sense?
     Comments from Readers

** *** ***** ******* *********** *************

The Seven Habits of Highly Ineffective Terrorists

Most counterterrorism policies fail, not because of tactical problems, but because of a fundamental misunderstanding of what motivates terrorists in the first place. If we're ever going to defeat terrorism, we need to understand what drives people to become terrorists.

Conventional wisdom holds that terrorism is inherently political, and that people become terrorists for political reasons. This is the "strategic" model of terrorism, and it's basically an economic model. It posits that people resort to terrorism when they believe -- rightly or wrongly -- that terrorism is worth it; that is, when they believe the political gains of terrorism minus the political costs are greater than if they engaged in some other, more peaceful form of protest. It's assumed, for example, that people join Hamas to achieve a Palestinian state; that people join the PKK to attain a Kurdish national homeland; and that people join al-Qaida to, among other things, get the United States out of the Persian Gulf.

If you believe this model, the way to fight terrorism is to change that equation, and that's what most experts advocate. Governments tend to minimize the political gains of terrorism through a no-concessions policy; the international community tends to recommend reducing the political grievances of terrorists via appeasement, in hopes of getting them to renounce violence. Both advocate policies to provide effective nonviolent alternatives, like free elections.

Historically, none of these solutions has worked with any regularity. Max Abrahms, a predoctoral fellow at Stanford University's Center for International Security and Cooperation, has studied dozens of terrorist groups from all over the world. He argues that the model is wrong. In a paper published this year in International Security that -- sadly -- doesn't have the title "Seven Habits of Highly Ineffective Terrorists," he discusses, well, seven habits of highly ineffective terrorists.
These seven tendencies are seen in terrorist organizations all over the world, and they directly contradict the theory that terrorists are political maximizers: Terrorists, he writes, (1) attack civilians, a policy that has a lousy track record of convincing those civilians to give the terrorists what they want; (2) treat terrorism as a first resort, not a last resort, failing to embrace nonviolent alternatives like elections; (3) don't compromise with their target country, even when those compromises are in their best interest politically; (4) have protean political platforms, which regularly, and sometimes radically, change; (5) often engage in anonymous attacks, which precludes the target countries from making political concessions to them; (6) regularly attack other terrorist groups with the same political platform; and (7) resist disbanding, even when they consistently fail to achieve their political objectives or when their stated political objectives have been achieved.

Abrahms has an alternative model to explain all this: People turn to terrorism for social solidarity. He theorizes that people join terrorist organizations worldwide in order to be part of a community, much like the reason inner-city youths join gangs in the United States.

The evidence supports this. Individual terrorists often have no prior involvement with a group's political agenda, and often join multiple terrorist groups with incompatible platforms. Individuals who join terrorist groups are frequently not oppressed in any way, and often can't describe the political goals of their organizations. People who join terrorist groups most often have friends or relatives who are members of the group, and the great majority of terrorists are socially isolated: unmarried young men or widowed women who weren't working prior to joining. These things are true for members of terrorist groups as diverse as the IRA and al-Qaida.

For example, several of the 9/11 hijackers planned to fight in Chechnya, but they didn't have the right paperwork, so they attacked America instead. The mujahedeen had no idea whom they would attack after the Soviets withdrew from Afghanistan, so they sat around until they came up with a new enemy: America. Pakistani terrorists regularly defect to another terrorist group with a totally different political platform. Many new al-Qaida members say, unconvincingly, that they decided to become a jihadist after reading an extreme, anti-American blog, or after converting to Islam, sometimes just a few weeks before. These people know little about politics or Islam, and they frankly don't even seem to care much about learning more. The blogs they turn to don't have a lot of substance in these areas, even though more informative blogs do exist.

All of this explains the seven habits. It's not that the terrorists are ineffective; it's that they have a different goal. They might not be effective politically, but they are effective socially: all of these habits help preserve the group's existence and cohesion.

This kind of analysis isn't just theoretical; it has practical implications for counterterrorism. Not only can we now better understand who is likely to become a terrorist, we can engage in strategies specifically designed to weaken the social bonds within terrorist organizations. Driving a wedge between group members -- commuting prison sentences in exchange for actionable intelligence, planting more double agents within terrorist groups -- will go a long way toward weakening those bonds.
We also need to pay more attention to the socially marginalized than to the politically downtrodden, like unassimilated communities in Western countries. We need to support vibrant, benign communities and organizations as alternative ways for potential terrorists to get the social cohesion they need. And finally, we need to minimize collateral damage in our counterterrorism operations, as well as clamp down on bigotry and hate crimes, which just create more dislocation and social isolation, and the inevitable calls for revenge.

http://maxabrahms.com/pdfs/DC_250-1846.pdf

This essay previously appeared on Wired.com.
http://www.wired.com/print/politics/security/commentary/securitymatters/2008... or http://tinyurl.com/3vf3x5

Interesting rebuttal:
http://www.cambridgeblog.org/2008/10/can-terror-be-understood/

** *** ***** ******* *********** *************

The Two Classes of Airport Contraband

Airport security found a jar of pasta sauce in my luggage last month. It was a 6-ounce jar, above the limit; the official confiscated it, because allowing it on the airplane with me would have been too dangerous. And to demonstrate how dangerous he really thought that jar was, he blithely tossed it in a nearby bin of similar liquid bottles and sent me on my way.

There are two classes of contraband at airport security checkpoints: the class that will get you in trouble if you try to bring it on an airplane, and the class that will cheerily be taken away from you if you try to bring it on an airplane. This difference is important: Making security screeners confiscate anything from that second class is a waste of time. All it does is harm innocents; it doesn't stop terrorists at all.

Let me explain. If you're caught at airport security with a bomb or a gun, the screeners aren't just going to take it away from you. They're going to call the police, and you're going to be stuck for a few hours answering a lot of awkward questions. You may be arrested, and you'll almost certainly miss your flight. At best, you're going to have a very unpleasant day.

This is why articles about how screeners don't catch every gun and bomb that goes through the checkpoints -- or even a majority of them -- don't bother me. The screeners don't have to be perfect; they just have to be good enough. No terrorist is going to base his plot on getting a gun through airport security if there's a decent chance of getting caught, because the consequences of getting caught are too great.

Contrast that with a terrorist plot that requires a 12-ounce bottle of liquid. There's no evidence that the London liquid bombers actually had a workable plot, but assume for the moment they did. If some copycat terrorists try to bring their liquid bomb through airport security and the screeners catch them -- like they caught me with my bottle of pasta sauce -- the terrorists can simply try again. They can try again and again. They can keep trying until they succeed. Because there are no consequences to trying and failing, the screeners have to be 100 percent effective. Even if they slip up only one time in a hundred, the plot can succeed.

The same is true for knitting needles, pocketknives, scissors, corkscrews, cigarette lighters, and whatever else the airport screeners are confiscating this week. If there's no consequence to getting caught with it, then confiscating it only hurts innocent people. At best, it mildly annoys the terrorists.
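The asymmetry is easy to see with a little arithmetic. Here's a minimal back-of-the-envelope sketch of my own; the detection rates in it are invented for illustration and aren't real screening statistics:

    # A rough sketch of the retry asymmetry described above.
    # Both detection rates are invented, illustrative numbers.

    gun_detection_rate = 0.60      # assumed chance a gun is caught at the checkpoint
    liquid_detection_rate = 0.99   # assumed chance an oversized liquid is caught

    # Getting caught with a gun means arrest, so the attacker gets one attempt:
    print("gun plot succeeds:", 1 - gun_detection_rate)   # 0.40 -- but only one roll of the dice

    # Getting caught with a liquid just means losing the bottle, so the attacker retries:
    for attempts in (10, 100, 300):
        p = 1 - liquid_detection_rate ** attempts
        print(f"liquid plot succeeds within {attempts} tries: {p:.2f}")
    # Even a 99%-effective screener is eventually beaten by an attacker
    # who pays no price for failing.

A one-shot attacker who faces arrest gets exactly one try; a consequence-free attacker just keeps trying until he gets through.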
To fix this, airport security has to make a choice. If something is dangerous, treat it as dangerous and treat anyone who tries to bring it on as potentially dangerous. If it's not dangerous, then stop trying to keep it off airplanes. Trying to have it both ways just distracts the screeners from actually making us safer.

http://www.cnn.com/2008/US/01/28/tsa.bombtest/index.html
http://www.homelandstupidity.us/2007/10/25/tsa-screeners-fail-most-bomb-test... or http://tinyurl.com/4npg9o
http://www.homelandstupidity.us/2006/10/31/tsa-screeners-still-fail-to-find-... or http://tinyurl.com/3ephgq
http://www.boston.com/news/local/articles/2003/10/16/logan_screeners_fail_we... or http://tinyurl.com/r5gu

This essay originally appeared on Wired.com.
http://www.wired.com/politics/security/commentary/securitymatters/2008/09/se... or http://tinyurl.com/4m6vvj

** *** ***** ******* *********** *************

News

According to U.S. government documents, fear of terrorism could cause a psychosomatic epidemic:
http://blog.wired.com/27bstroke6/2008/09/terrorism-fear.html

GPS spoofing:
http://philosecurity.org/2008/09/07/gps-spoofing
http://www.ne.anl.gov/capabilities/vat/spoof.html

NSA -- and others -- snooping on cell phone calls with off-the-shelf technology:
http://news.cnet.com/8301-13739_3-10030134-46.html

The NSA teams up with the Chinese government to limit Internet anonymity:
http://www.schneier.com/blog/archives/2008/09/the_nsa_teams_u.html

The Pentagon's World of Warcraft movie-plot threat:
http://www.schneier.com/blog/archives/2008/09/the_pentagons_w.html

TSA employees are bypassing airport screening.
http://www.9news.com/news/article.aspx?storyid=99941&catid=339
This isn't a big deal. Screeners have to go in and out of security all the time as they work. Yes, they can smuggle things in and out of the airport. But you have to remember that airport screeners are trusted insiders for the system: there are a zillion ways they could break airport security. On the other hand, it's probably a smart idea to screen screeners when they walk through a checkpoint where they aren't working at the time. The reason is the same reason you should screen everyone, including pilots who can crash their planes: you're not screening screeners (or pilots), you're screening people wearing screener (or pilot) uniforms and carrying screener (or pilot) IDs. You can either train your screeners to recognize authentic uniforms and IDs, or you can just screen everybody. The latter is just easier. But again, this isn't a big deal.

I can think of specific instances where the ability to unlock your door over the Internet can be useful, but in most cases it's not a good idea.
http://www.theinquirer.net/gb/inquirer/news/2008/09/04/unlock-house-via-inte... or http://tinyurl.com/4rsyve
http://treocentral.com/content/Stories/1999-1.htm

India is using brain scans to prove guilt in court.
http://www.nytimes.com/2008/09/15/world/asia/15brainscan.html
The pseudo-science here is even worse than for lie detectors.
http://www.thehindu.com/2008/09/08/stories/2008090854420400.htm

People have been asking me to comment about Sarah Palin's Yahoo e-mail account being hacked. I've already written about the security problems with "secret questions" back in 2005:
http://www.schneier.com/blog/archives/2005/02/the_curse_of_th.html
More commentary:
http://www.freedom-to-tinker.com/blog/felten/how-yahoo-could-have-protected-... or http://tinyurl.com/4689km

The $20M camera system at New York's Freedom Tower is pretty sophisticated.
http://cityroom.blogs.nytimes.com/2008/09/24/unblinking-eyes-for-20-million-... or http://tinyurl.com/53e52c

We're developing a pre-crime detector that detects hostile thoughts.
http://www.newscientist.com/blogs/shortsharpscience/2008/09/precrime-detecto... or http://tinyurl.com/53ftps
http://www.foxnews.com/printer_friendly_story/0,3566,426485,00.html

Spykee is your own personal robot spy. It takes pictures and movies that you can watch on the Internet in real time or save for later. You can even talk with whoever you're spying on via Skype. Only $300.
http://www.spykeeworld.com/
http://www.robotsrule.com/html/spykee.php
http://www.amazon.com/gp/offer-listing/B000N6470A?tag=counterpane

Security maxims from Roger Johnston. Funny, and all too true.
http://www.ne.anl.gov/capabilities/vat/seals/maxims.html

Send your personalized message to TSA X-ray screeners using metal plates you can put in your carry-on luggage.
http://blog.makezine.com/archive/2008/09/metal_plates_send_message.html or http://tinyurl.com/4ro8es
http://www.nytimes.com/idg/IDG_852573C400693880002574D70000A2FB.html

Another bomb scare. Hot dogs this time.
http://www.philly.com/philly/blogs/phillies_zone/Just_Hot_Dogs_Folks.html or http://tinyurl.com/5xpzsp
http://www.nytimes.com/aponline/us/AP-ODD-Hot-Dog-Scare.html

The Hackers Choice has released a tool allowing people to clone and modify electronic passports. The problem is self-signed certificates. A CA is not a great solution, and the link gives a good explanation as to why. "So what's the solution? We know that humans are good at Border Control. In the end they protected us well for the last 120 years. We also know that humans are good at pattern matching and image recognition. Humans also do an excellent job 'assessing' the person and not just the passport. Take the human part away and passport security falls apart."
http://blog.thc.org/index.php?/archives/4-The-Risk-of-ePassports-and-RFID.ht... or http://tinyurl.com/4l49v4
http://www.theregister.co.uk/2008/09/30/epassport_hack_description/

Hand grenades are now weapons of mass destruction:
http://www.schneier.com/blog/archives/2008/10/hand_grenades_a.html

MI6 camera -- including secrets -- sold on eBay. The buyer turned the camera in to the police.
http://www.techcrunch.com/2008/09/30/top-secret-mi6-camera-sold-to-the-highe... or http://tinyurl.com/4n5ov2
http://gizmodo.com/5056749/mi6-camera-with-secret-images-bought-on-ebay-for-... or http://tinyurl.com/4pj5jh

"Scareware" vendors sued -- it's about time.
http://voices.washingtonpost.com/securityfix/2008/09/microsoft_washington_st... or http://tinyurl.com/3pxho4

This is clever: bank robber hires accomplices on Craigslist.
http://www.king5.com/topstories/stories/NW_100108WAB_monroe_robber_floating_... or http://tinyurl.com/3h8wfe

New cross-site request forgery attacks.
http://www.freedom-to-tinker.com/blog/wzeller/popular-websites-vulnerable-cr... or http://tinyurl.com/4ubb2f
http://www.freedom-to-tinker.com/sites/default/files/csrf.pdf

"Clickjacking" is a stunningly sexy name, but the vulnerability is really just a variant of cross-site scripting. We don't know how bad it really is, because the details are still being withheld. But the name alone is causing dread.
Here's a good Q&A on the vulnerability:
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9115818&source=NLT_SEC&nlid=38 or http://tinyurl.com/3rmfac
http://www.cgisecurity.org/2008/10/interview-jerem.html
http://hackademix.net/2008/09/27/clickjacking-and-noscript/

Turns out you can add anyone's number to -- or remove anyone's number from -- the Canadian do-not-call list. You can also add (but not remove) numbers to the U.S. do-not-call list, though only up to three at a time, and you have to provide a valid e-mail address to confirm the addition. Here's my idea. If you're a company, add every one of your customers to the list. That way, none of your competitors will be able to cold call them.
https://www.lnnte-dncl.gc.ca/
https://www.donotcall.gov/register/reg.aspx

Chinese monitoring Skype messages:
http://arstechnica.com/news.ars/post/20081002-skype-security-flub-leads-to-d... or http://tinyurl.com/4pgn2j

According to a massive report from the National Research Council, data mining for terrorists doesn't work.
http://news.cnet.com/8301-13578_3-10059987-38.html?part=rss&subj=news&tag=2547-1_3-0-20 or http://tinyurl.com/4klgqe
http://arstechnica.com/news.ars/post/20081009-analysis-data-mining-doesnt-wo... or http://tinyurl.com/4azsds
http://www.nap.edu/catalog.php?record_id=12452

Interesting paper by Adam Shostack on threat modeling at Microsoft:
http://blogs.msdn.com/sdl/attachment/8991806.ashx

Elcomsoft is claiming that the WPA protocol is dead, just because they can speed up brute-force cracking by 100 times using a hardware accelerator. Why exactly is this news? Yes, weak passwords are weak -- we already know that. And strong WPA passwords are still strong. This seems like yet another blatant attempt to grab some press attention with a half-baked cryptanalytic result.
http://www.elcomsoft.com/edpr.html?r1=pr&r2=wpa
http://mobile.slashdot.org/mobile/08/10/12/1724230.shtml
http://www.theregister.co.uk/2008/10/10/graphics_card_wireless_hacking/
http://www.schneier.com/essay-148.html
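A little arithmetic shows why a constant-factor speedup doesn't change that conclusion. The sketch below is mine, not Elcomsoft's; the guess rates are assumed, illustrative numbers, not measured figures:

    # Rough worst-case brute-force times for random passwords drawn from a
    # 62-character alphabet.  Guess rates below are assumptions for illustration.

    SECONDS_PER_YEAR = 365 * 24 * 3600

    def years_to_exhaust(alphabet_size, length, guesses_per_second):
        """Worst-case years to search the entire keyspace."""
        keyspace = alphabet_size ** length
        return keyspace / guesses_per_second / SECONDS_PER_YEAR

    base_rate = 1e4              # assumed guesses/sec without acceleration
    accelerated = base_rate * 100  # the claimed 100x hardware speedup

    for length in (8, 12, 16):
        print(f"{length} chars: {years_to_exhaust(62, length, base_rate):.1e} yrs "
              f"-> {years_to_exhaust(62, length, accelerated):.1e} yrs with 100x")
    # The speedup shaves two orders of magnitude off numbers that, for long
    # random passphrases, remain astronomically large; only weak passwords
    # were ever in reach.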
Clever counterterrorism attack against the IRA: set up a laundromat, and watch who has bomb residue on their clothes.
http://www.schneier.com/blog/archives/2008/10/clever_countert.html

There's a new chip-and-pin scam in the UK. The card readers were hacked when they were built, "either during the manufacturing process at a factory in China, or shortly after they came off the production line." It's being called a "supply chain hack." Sophisticated stuff, and yet another demonstration that these all-computer security systems are full of risks.
http://online.wsj.com/article/SB122366999999723871.html
http://www.telegraph.co.uk/news/newstopics/politics/lawandorder/3173346/Chip...
http://www.telegraph.co.uk/news/worldnews/asia/pakistan/3173161/Credit-card-...
BTW, what's it worth to rig an election?
http://www.schneier.com/essay-046.html

BART, the San Francisco subway authority, has been debating allowing passengers to bring drinks on trains. There are all sorts of good reasons why or why not -- convenience, problems with spills, and so on -- but one reason that makes no sense is that terrorists may bring flammable liquids on board. Yet that is exactly what BART managers said. No big news -- we've seen stupid things like this regularly since 9/11 -- but this time people responded: "Added Director Tom Radulovich, 'If somebody wants to break the law and bring flammable liquids on, they can. It's not like al Qaeda is waiting in their caves for us to have a sippy-cup rule.' Directing his comments to BART administrators, he said, 'You know, it's just fearmongering and you should be ashamed.'" Terrorist fearmongering seems to be working less well.
http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2008/10/10/BAB813EELU.DTL

** *** ***** ******* *********** *************

The More Things Change, the More They Stay the Same

Guess the year: "Murderous organizations have increased in size and scope; they are more daring, they are served by the most terrible weapons offered by modern science, and the world is nowadays threatened by new forces which, if recklessly unchained, may some day wreck universal destruction. The Orsini bombs were mere children's toys compared with the later developments of infernal machines. Between 1858 and 1898 the dastardly science of destruction had made rapid and alarming strides..."

No, that wasn't a typo. "Between 1858 and 1898...."

This quote is from Major Arthur Griffith, "Mysteries of Police and Crime," London, 1898, II, p. 469. It's quoted in: Walter Laqueur, "A History of Terrorism," New Brunswick/London, Transaction Publishers, 2002.

http://query.nytimes.com/mem/archive-free/pdf?res=9907E7D8153DE633A25757C0A9... or http://tinyurl.com/3wn2ct
http://www.amazon.com/History-Terrorism-Walter-Laqueur/dp/0765807998/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1223482236&sr=8-1 or http://tinyurl.com/46s7ny

** *** ***** ******* *********** *************

NSA's Warrantless Eavesdropping Targets Innocent Americans

Remember when the U.S. government said it was only spying on terrorists? Anyone with any common sense knew it was lying -- power without oversight is always abused -- but even I didn't think it was this bad:

"Faulk says he and others in his section of the NSA facility at Fort Gordon routinely shared salacious or tantalizing phone calls that had been intercepted, alerting office mates to certain time codes of 'cuts' that were available on each operator's computer.

"'Hey, check this out,' Faulk says he would be told, 'there's good phone sex or there's some pillow talk, pull up this call, it's really funny, go check it out. It would be some colonel making pillow talk and we would say, "Wow, this was crazy",' Faulk told ABC News."

Warrants are a security device. They protect us against government abuse of power.

http://www.nytimes.com/2008/10/10/washington/10nsa.html
http://abcnews.go.com/Blotter/story?id=5987804&page=1
http://www.upi.com/Top_News/2008/10/10/Spy_agency_accused_of_improper_listen...
http://www.reuters.com/article/domesticNews/idUSTRE4990CD20081010

** *** ***** ******* *********** *************

Schneier/BT News

Schneier is speaking at the 30th International Conference of Data Protection and Privacy Commissioners on 15 October in Strasbourg, France.
http://www.privacyconference2008.org/

Schneier is speaking at the European Security and Information System Congress on 17 October in Monaco.
http://cms.event-catalyst.com/assises/home.aspx

Schneier is speaking at RSA Europe on 28 October in London.
http://www.rsaconference.com/2008/Europe/Home.aspx

Schneier is speaking at the 22nd Large Installation System Administration Conference on 13 November in San Diego, CA.
http://usenix.org/events/lisa08/

Schneier was interviewed by Telecom Asia:
http://www.telecomasia.net/article.php?id_article=10230

Schneier was interviewed by the Irish Times:
http://www.irishtimes.com/newspaper/finance/2008/1003/1222959300589.html or http://tinyurl.com/4ccjmw

Schneier was interviewed by Dr. Dobb's Journal:
http://www.ddj.com/security/210605067

My essay on chemical plants and security for the Guardian. Nothing I haven't said before.
http://www.schneier.com/essay-243.html

** *** ***** ******* *********** *************

Taleb on the Limitations of Risk Management

Nice paragraph on the limitations of risk management in this occasionally interesting interview with Nicholas Taleb:

"Because then you get a Maginot Line problem. [After World War I, the French erected concrete fortifications to prevent Germany from invading again -- a response to the previous war, which proved ineffective for the next one.] You know, they make sure they solve that particular problem, the Germans will not invade from here. The thing you have to be aware of most obviously is scenario planning, because typically if you talk about scenarios, you'll overestimate the probability of these scenarios. If you examine them at the expense of those you don't examine, sometimes it has left a lot of people worse off, so scenario planning can be bad. I'll just take my track record. Those who did scenario planning have not fared better than those who did not do scenario planning. A lot of people have done some kind of 'make-sense' type measures, and that has made them more vulnerable because they give the illusion of having done your job. This is the problem with risk management. I always come back to a classical question. Don't give a fool the illusion of risk management. Don't ask someone to guess the number of dentists in Manhattan after asking him the last four digits of his Social Security number. The numbers will always be correlated. I actually did some work on risk management, to show how stupid we are when it comes to risk."

http://www.portfolio.com/views/columns/the-world-according-to/2008/08/14/Int... or http://tinyurl.com/5eazpu

** *** ***** ******* *********** *************

"New Attack" Against Encrypted Images

In a blatant attempt to get some PR, a researcher at PMC Ciphers has figured out that encrypting data with ECB mode results in ciphertext patterns. Yeah, we already knew that.

And -1 point for a security company requiring the use of JavaScript, and not failing gracefully for a browser that doesn't have it enabled. And -- ahem -- what is it with that photograph in the paper? Couldn't the researchers have found something a little less adolescent?

For the record, I doghoused PMC Ciphers back in 2003: "PMC Ciphers. The theory description is so filled with pseudo-cryptography that it's funny to read. Hypotheses are presented as conclusions. Current research is misstated or ignored. The first link is a technical paper with four references, three of them written before 1975. Who needs thirty years of cryptographic research when you have polymorphic cipher theory?"

I didn't realize it at the time, but PMC Ciphers responded to my doghousing them. Funny stuff.

http://www.techworld.com/security/news/index.cfm?newsid=105263
http://www.turbocrypt.com/vpics/9a8f098c615a425eab6d17c804dd67ae/whitepapers... or http://tinyurl.com/3fe64r

Doghouse and response:
http://www.schneier.com/crypto-gram-0303.html#4
http://www.ciphers.de/eng/content/Backround-Info/Bruce-Schneiers-comments.ht... or http://tinyurl.com/52ymfo

When I posted this on my blog, three new commenters using dialups at the same German ISP showed up to defend the paper. What are the odds?
http://www.schneier.com/blog/archives/2008/10/new_attack_agai.html
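For readers who haven't seen the ECB problem before, here's a minimal sketch of it -- my illustration, not the paper's, and it assumes the pycryptodome package is installed. The cipher itself is fine; the problem is that ECB encrypts identical plaintext blocks to identical ciphertext blocks, which is why an "encrypted" bitmap still shows the outline of the original image:

    # Identical plaintext blocks leak through ECB; a chained mode hides them.
    from Crypto.Cipher import AES
    from Crypto.Random import get_random_bytes

    key = get_random_bytes(16)
    plaintext = b"SIXTEEN BYTE BLK" * 4   # four identical 16-byte blocks

    ecb = AES.new(key, AES.MODE_ECB).encrypt(plaintext)
    cbc = AES.new(key, AES.MODE_CBC, iv=get_random_bytes(16)).encrypt(plaintext)

    ecb_blocks = {ecb[i:i + 16] for i in range(0, len(ecb), 16)}
    cbc_blocks = {cbc[i:i + 16] for i in range(0, len(cbc), 16)}

    print("distinct ECB ciphertext blocks:", len(ecb_blocks))  # 1 -- the repetition shows
    print("distinct CBC ciphertext blocks:", len(cbc_blocks))  # 4 -- no visible pattern

Using any chaining mode with a random IV hides the repetition, which is why "don't use ECB for structured data" has been standard advice for decades.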
** *** ***** ******* *********** *************

Nonviolent Activists Are Now Terrorists

This is an abomination: "The Maryland State Police classified 53 nonviolent activists as terrorists and entered their names and personal information into state and federal databases that track terrorism suspects, the state police chief acknowledged yesterday."

Why did they do that? "Both Hutchins and Sheridan said the activists' names were entered into the state police database as terrorists partly because the software offered limited options for classifying entries."

I know we have this "either you're with us or with the terrorists" mentality, but don't you think that -- just maybe -- the software should allow for a little bit more nuance?

http://www.washingtonpost.com/wp-dyn/content/article/2008/10/07/AR2008100703... or http://tinyurl.com/3znjv7

** *** ***** ******* *********** *************

Does Risk Management Make Sense?

We engage in risk management all the time, but it only makes sense if we do it right.

"Risk management" is just a fancy term for the cost-benefit tradeoff associated with any security decision. It's what we do when we react to fear, or try to make ourselves feel secure. It's the fight-or-flight reflex that evolved in primitive fish and remains in all vertebrates. It's instinctual, intuitive and fundamental to life, and one of the brain's primary functions.

Some have hypothesized that humans have a "risk thermostat" that tries to maintain some optimal risk level. It explains why we drive our motorcycles faster when we wear a helmet, or are more likely to take up smoking during wartime. It's our natural risk management in action.

The problem is that our brains are intuitively suited to the sorts of risk management decisions endemic to living in small family groups in the East African highlands in 100,000 BC, and not to living in the New York City of 2008. We make systematic risk management mistakes -- miscalculating the probability of rare events, reacting more to stories than data, responding to the feeling of security rather than reality, and making decisions based on irrelevant context. And that risk thermostat of ours? It's not nearly as finely tuned as we might like it to be. Like a rabbit that responds to an oncoming car with its default predator-avoidance behavior -- dart left, dart right, dart left, and at the last moment jump -- instead of just getting out of the way, our Stone Age intuition doesn't serve us well in a modern technological society.

So when we in the security industry use the term "risk management," we don't want you to do it by trusting your gut. We want you to do risk management consciously and intelligently, to analyze the tradeoff and make the best decision. This means balancing the costs and benefits of any security decision -- buying and installing a new technology, implementing a new procedure or forgoing a common precaution. It means allocating a security budget to mitigate different risks by different amounts. It means buying insurance to transfer some risks to others. It's what businesses do, all the time, about everything. IT security has its own risk management decisions, based on the threats and the technologies.
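As a deliberately oversimplified illustration of the kind of tradeoff I'm describing -- every number below is invented for the example -- the basic calculation looks like this:

    # A toy cost-benefit comparison for a single security control.
    # All figures are invented, illustrative numbers.

    breach_probability = 0.05        # assumed chance of a breach per year, without the control
    breach_cost        = 2_000_000   # assumed cost if a breach happens
    control_cost       = 60_000      # annual cost of the proposed countermeasure
    residual_prob      = 0.01        # assumed breach probability with the control in place

    expected_loss_without = breach_probability * breach_cost   # $100,000/yr
    expected_loss_with    = residual_prob * breach_cost         # $20,000/yr

    benefit = expected_loss_without - expected_loss_with        # $80,000/yr
    print("buy the control" if benefit > control_cost else "skip it")
    # The hard part in practice isn't this arithmetic -- it's that the
    # probabilities and costs are exactly the data we usually don't have.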
There's never just one risk, of course, and bad risk management decisions often carry an underlying tradeoff. Terrorism policy in the U.S. is based more on politics than actual security risk, but the politicians who make these decisions are concerned about the risks of not being re-elected. Many corporate security decisions are made to mitigate the risk of lawsuits rather than address the risk of any actual security breach. And individuals make risk management decisions that consider not only the risks to the corporation, but the risks to their departments' budgets, and to their careers.

You can't completely remove emotion from risk management decisions, but the best way to keep risk management focused on the data is to formalize the methodology. That's what companies that manage risk for a living -- insurance companies, financial trading firms and arbitrageurs -- try to do. They try to replace intuition with models, and hunches with mathematics.

The problem in the security world is that we often lack the data to do risk management well. Technological risks are complicated and subtle. We don't know how well our network security will keep the bad guys out, and we don't know the cost to the company if we don't keep them out. And the risks change all the time, making the calculations even harder. But this doesn't mean we shouldn't try. You can't avoid risk management; it's fundamental to business, just as it is to life. The question is whether you're going to try to use data or whether you're going to just react based on emotions, hunches and anecdotes.

This essay appeared as the first half of a point-counterpoint with Marcus Ranum in Information Security magazine.
http://searchsecurity.techtarget.com/loginMembersOnly/1,289498,sid14_gci1332...

** *** ***** ******* *********** *************

Comments from Readers

There are hundreds of comments -- many of them interesting -- on these topics on my blog. Search for the story you want to comment on, and join in.

http://www.schneier.com/blog

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish and Twofish algorithms. He is the Chief Security Technology Officer of BT (BT acquired Counterpane in 2006), and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.

Copyright (c) 2008 by Bruce Schneier.