On 7/20/15 4:56 PM, Zenaan Harkness
wrote:
On 7/20/15, Stephen D. Williams <sdw@lig.net> wrote:
On the other hand, life is a balance.
True. I'm thinking of individuals here.
I probably shouldn't have tried to
make the point here, but it is something a security
professional should understand well: the right amount of security
should be moderated by the trade-off among costs, overhead,
maximizing benefit, and minimizing loss.
Corporations are bound by their economic imperative to make such
trade-offs. This is the heart of their sociopathic nature. This is the
part of corporations/companies which needs, somehow, to change in
order to get this world on a better track.
...
It is terrible that some companies have been too eager to share information.
They may or may not have believed whatever safeguards
were in place, or not cared, etc. I'm sure a high pressure meeting with an
FBI crew who are strongly playing the terrorism angle is
persuasive, as it should be, up to a point.
Here's the kind of talk that looks like a hole freshly dug.
Perhaps if there is an actual existential threat to someone's life or
some building (let's please stop using the T word), then "high
pressure persuasion" would be adequate for a court order anyway. As it
should be - up to the point of a subpoena, summons, and/or order to
perform or act - to handle the actual problem.
You seem though to be normalising behaviours and approaches and "high
pressure persuasion" tactics by government departments, in a
generalised way. You might not be intending the things you imply/say,
You're making an unqualified assumption about my unqualified
qualifier "up to a point"...
but don't be surprised when such positions are mocked or ridiculed.
Don't take such blow back as personal at all though - it's the
"normalisation of bad" and "plainly wrong/evil" which is being
attacked for the bullshit it is.
Feel free. I totally mock and rail about it too. I can see several
sides to this, and I've been on enough "sides" of these problems, at
least in some weak sense, to have some model of decision making by
people in those roles. Poor decisions are understandable until
there are enough cases, noticed and confronted, to make the right
path clear. We're getting a lot of those lately. EFF, SPLC, ACLU,
and others, sometimes including commercial entities, are providing
an invaluable service of evolving both the law and internal
commercial and government policy.
Hacking the system cleverly and deliberately is one of the cooler
forms of hacking.
And companies holding your data
can actually look at that data for business purposes,
Perhaps try something like this instead: "And for-profit therefore
sociopathic-by-nature companies do massively collect your metadata AND
your personal information, with or without your consent, and are well
documented, via leaks and reporting, to use and abuse all your data
both within and beyond the law, beyond your expectations, and beyond
what many people consider ethical."
A few quibbles: for-profit is sociopathic-by-default perhaps,
although even there you are assuming some socioeconomic system.
You're also glossing over whether and when consent is an issue.
People in public places sometimes believe that others need consent
to take their picture; generally not true. Is it rude to take your
picture and does rudeness matter? That depends. "Beyond your
expectations" is also problematic: How could any possible
expectation ever be said to be adhered to? Perhaps "generally
accepted fair use as defined by EFF" or something (if there is such
a thing) might be reasonable. What is the definition of "many
people"?
If you use language that can never be satisfied in any reliable way,
you can't really complain that an entity isn't satisfying it.
See what we did there? We made it personal, giving a slight hope to
the uninitiated to realise something they did not realise before. We
Education is always good. Don't infect others with pathological
paranoia, but a healthy understanding of risks and exposures is
always good.
highlighted some foundations (for profit being inherently
Not inherently. Social, economic, legal, contractual, and other
cultural systems allow, disallow, guide, and control people in their
interactions. The US, for instance, has always been a place where
there were many unwritten rules of operating in business. Some have
run roughshod over those, sometimes reaping unjust rewards and/or
changing what is acceptable, but there are always things that could
be done that just aren't. Further, a particular entity could impose
upon itself, by charter, culture, or customer agreement, a more
stringent stance than others. There could be mechanisms that audit
or otherwise control this.
You get what you optimize for. If you have a default corporation
controlled by weak, shallow leaders and driven by shallow, blind
Wall Street numbers, then the result is likely to be sociopathic.
On the other hand, however imperfectly or incompletely, certain
companies have a founder-driven culture of a far more empathic
nature than this default, whether they be different or have a stated
desire to not be evil. Both of those companies largely care about
users in some strong sense, much unlike certain other highly and
chronically annoying entities.
sociopathic). We reminded the reader that their consent is often not
obtained (yes, we can argue about implied consent; the point is we're
edumacating). We made the assertion that companies actually abuse all
that data (whatever "abuse" might mean), just in case someone missed
the memo.
One person's use is another person's abuse. People should be aware.
With all this, we are also implying that this abuse is wrong.
Abuse is wrong, use may not be. Sometimes depends on where you
stand. Some types don't have agreement. Plenty of people hate the
idea of automated ad filtering based on the content of email or chat
or other activity. There are things that could go wrong with that
if it gets to a human or is gamed, but properly done anonymously, it
can be fine: I'd rather get timely ads I may care about than the
much larger set of uninteresting dreck. I actually suggested doing
exactly this with AOL chatrooms in about 1996. This is a good
example of good education vs. bad education: If you say "This could
be misused or leaked in a way that could be a problem if a company
isn't careful, and here is a scenario..., and here is how that could
be handled better..." that's fine, especially if a company can
indicate the level of care & security they're currently
employing. If you say: "Google is reading your email, sending it to
every company that wants to buy it for a few cents!" that's
disingenuous at best and dangerous to certain people's mental state
at worst.
Your version sounds like you are -trying- to normalise the wrong,
justify the bad, and 'accept the new messed up world order as best we
can'. We hear enough of that from others. And I say NO to that abuse!
Give me justification for abuse, at your peril!
I was mainly talking about making realistic decisions without a
value statement for current practices, which we are all going to
have different opinions on since they aren't public.
We should have some taxonomy of the nature of those abuses, with
consensus lines drawn as to what we all find acceptable or not
acceptable, why, and what mechanisms best resolve the issue.
although how they use it is somewhat bounded by privacy laws (however
incomplete), norms against making private things public, rules against
unfair business practices, etc. My point was that the existence of large, valuable services
that depend on a lot of trust is, or should be to a
"should be" trustworthy?
Some are not at certain points, or all are not at some points, or
only mine is as far as I know. Take your pick.
They're companies. You've missed the bloody memo. And a very bloody
memo the corporate record is, for decades and across industries!
Have you noticed the difference in nature of various companies over
time?
sane entity, an even stronger incentive to behave than the patchwork
of laws.
You're not grokking the incentive. It's profit. And it's more than an
incentive, profit is the foundational company-constitutional
imperative for companies (funny that).
This is why companies can NOT be trusted. You seem to be missing this
basic point. Do you own a company?
Of course; it may not be worth anything, but I do actual work. You
don't? You're not doing your taxes properly if not... ;-)
Who CAN be trusted? At some level, no one, but we've already
established that in the real world, you generally have to trust
people all the time.
Are you sure you are applying your distrust criteria in a
comprehensive and rational way?
Past oversharing, then embarrassment and public
abuse, coupled with product impacts as they lose sensitive customers, have
almost certainly caused a cleanup of those attitudes. I'd
be interested in the actual policy right now, although I doubt they are
going to be too explicit. I suspect that it also varies
heavily by corporate culture.
Some companies start with good policy, and good public stance, most
significantly in this conversation, Google itself - "don't be evil". They
don't say that any more. They can't. Did you ever wonder why they
stopped saying that?
They pretty much still do. And it is silly to say they can't. They
are a relatively giant company. Mistakes happen. What mistakes are
they making now?
https://www.google.com/about/company/philosophy/
You can make money without doing evil.
Every day, you are somewhat at the mercy of dozens and perhaps thousands
of people who could cause you pain, suffering, or death if
they were so inclined. There are many in the government, schools, employer
personnel departments, medical and insurance companies,
etc. The people driving around you, stopped at a light while you cross the
street, making your food, they all have access and the
ability to inflict misery on you. You have to trust someone to some extent.
Trust is a relevant foundation of community/society, sure.
But now you've segued into personal. Which is a good place at times,
an effective place. It's more tangible for people.
But here we were talking about companies. I would ordinarily presume
your trust formula is different for companies than it is for actual,
you know, humans.
I suggest not overloading corporate rights, corporate trust, with
human rights, human trust. Not particularly useful in our context.
All companies that I know about are filled with people. They may be
sheeple a little too often (I have permanently fired ATT Mobile
(formerly Cingular) for refusing to issue a refund to my son when
they screwed up "because the policy prevents us".), but it is
personal at some level. You are trusting that the Comcast installer
is not a murderer, that the banker isn't stealing from you, and that
the well-paid Google engineer has better things to do than to
eavesdrop on you.
The question is who you trust, how incentivized they
and the people/organization around them are to protect you, and whether wrongs
will be limited, corrected, and righted or not.
A rational approach is warranted for sure.
Companies, and in most cases humans working for them, are
predominantly incentivized by money. Yesterday I read an article on
Whether all are, or even a predominant amount are, is questionable.
Many people care about customers, their career, mission, etc. Money
is only an issue occasionally.
the Great Wall of China. Incredible vision, so many centuries of
building. But when it came down to the time it was 'needed', with
only so many sentries, spread so far apart, and paid so little, when
the marauding Mongols wanted in to do some marauding, they just
bribed a sentry or two. Apparently the same with the Europeans in
more recent times. So: incentivized, the people were; secure, the
wall was not. The biggest security theater.
I think the great wall may have been useful psychologically though...
to encourage a mindset of unity in the people within.
For a long time, as a contractor at the peak of their heyday, I had access
to AOL's entire user database, complete with name,
address, full credit card info, phone numbers, etc. I could have also
snooped on their Buddylists, their person-to-person video
(Instant Images), and a lot more. There was zero chance that I would abuse
any of that.
Your ethics are admirable. I share your personal intentions. I don't
trust companies though, except to plunder markets to the maximum
profit possible.
There are some who have acted that way, for sure. I have my black
list. Others try. They deserve a little credit, and help when
possible.
Zenaan
sdw
On 7/20/15 2:07 PM, Juan wrote:
cypherpunk :
https://www.wikileaks.org/Op-ed-Google-and-the-NSA-Who-s.html
"Google and the NSA: Who’s holding the ‘shit-bag’ now?"
Not-cypherpunk-at-all :
2015-07-19 2:22 GMT+09:00 Stephen D. Williams <sdw@lig.net>:
I feel perfectly confident that Google is going to protect their
billions in income and valuation by being very careful with
avoiding abusing their data or users in any strong sense.
--
Stephen D. Williams sdw@lig.net stephendwilliams@gmail.com LinkedIn: http://sdw.st/in
V:650-450-UNIX (8649) V:866.SDW.UNIX V:703.371.9362 F:703.995.0407
AIM:sdw Skype:StephenDWilliams Yahoo:sdwlignet Resume: http://sdw.st/gres
Personal: http://sdw.st facebook.com/sdwlig twitter.com/scienteer