cypherpunks
June 2015: 66 participants, 110 discussions
Re: [Bitcoin-development] questions about bitcoin-XT code fork & non-consensus hard-fork
by Eugen Leitl 07 Jul '15
----- Forwarded message from Adam Back <adam(a)cypherspace.org> -----
Date: Mon, 15 Jun 2015 20:03:25 +0200
From: Adam Back <adam(a)cypherspace.org>
To: Mike Hearn <mike(a)plan99.net>
Cc: Bitcoin Dev <bitcoin-development(a)lists.sourceforge.net>
Subject: Re: [Bitcoin-development] questions about bitcoin-XT code fork & non-consensus hard-fork
Message-ID: <CALqxMTFC7zBN9GvHAZLQj4SbXjzkCAM9meSErd3qn7uCoON98Q(a)mail.gmail.com>
Hi Mike
Well thank you for replying openly on this topic, it's helpful.
I apologise in advance if this gets quite to the point and at times
blunt, but transparency is important. We owe it to the users who see
Bitcoin as the start of a new future, and to the $3B of invested funds
and $600M of VC money invested in companies, to be open and
transparent here.
I would really prefer, on both a personal and a professional basis,
not to be having this conversation at all, never mind in public, but
Mike - your and Gavin's decision to promote a unilateral hard-fork and
code fork is extremely high risk for Bitcoin, and so there remains
little choice. So I apologise again that we have to have this kind of
conversation on a technical discussion list. This whole thing is
hugely stressful and worrying for developers, companies and investors.
I strongly urge that we return to the existing collaborative
constructive review process that has been used for the last 4 years
which is consensus by design, to prevent one rogue person from
inserting a backdoor, lobbying for a favoured change on behalf of a
special interest group, or working for a bad actor (without accusing
you of any of those - I understand you personally just want to scale
Bitcoin, but are inclined to knock heads and try to force the issue as
you see it, rather than work collaboratively).
For you (and everyone)
- Should there be a summit of some kind, that is open attendance, and
video recorded so that people who are unable to attend can participate
too, so that people can present the technical proposals and risks in
an unbiased way?
(This is not a theoretical question; I may have a sponsor and host -
not Blockstream, but an independent party. It is a question for
everyone: developers, users, CTOs, CEOs.)
So here I come back to more frank questions:
Governance
The rest of the developers are wise to realise that they do not want
exclusive control, to avoid governance centralising into the hands of
one person, and this is why they have shared it with a consensus
process over the last 4 years. No offence, but I don't think you
personally are thinking far enough ahead to want personal control of
this industry. Maybe some factions don't trust your motives, or maybe
they don't mind but feel more assured if a dozen other people are
closely reviewing and have collective review authority.
- Do you understand that attempting to break this process by
unilateral hard-fork severely weakens Bitcoin's change governance
model?
- Do you understand that change governance is important, and that it
is important that there be multiple reviewers and sign-off to avoid
someone being blackmailed or influenced by an external party - which
could potentially result in massive theft of funds if something were
missed?
- Secondarily, do you understand that even if you succeed in a
unilateral fork (and the level of lost coins, market cap and damage to
confidence is recoverable), it sets a precedent that others may try to
follow in the future to introduce coercive features that break the
assurances of Bitcoin, such as fungibility-reducing features
(topically, I hear you once proposed the concept of red-lists on a
private forum; other such proposals have been made and quickly
abandoned)? Ultimately, if there is a political process to obtain
unpopular changes by unilateral threat, the sky is the limit: the
social contract can be rewritten at that point without consensus, on
the calculation that people will value Bitcoin enough to follow a lead
to avoid risk to the system.
Security
As you probably know, some extremely subtle bugs in Bitcoin have at
times slipped past even the most rigorous testing, often with
innocuous but unexpected behaviours, and some have been security
issues. Extremely intricate and time-sensitive security defect and
incident response happens from time to time, and is not necessarily
publicly disclosed until after the fix has been rolled out, which can
take some time due to the nature of protocol upgrades, work-arounds,
software upgrades via contacting key miners, etc. The OpenSSL bug is
one example.
- How do you plan to deal with security & incident response for the
period you describe, while you are deploying the unilateral hard-fork
and are in sole maintainership control?
- Are you a member of the bitcoin security reporting list?
On 15 June 2015 at 11:56, Mike Hearn <mike(a)plan99.net> wrote:
> I will review both and mostly delegate to Gavin's good taste around the
> details, unless there is some very strong disagreement. But that seems
> unlikely.
> ...
> Feedback will be read. There are no NACKS in Bitcoin XT. Patch requests
> aren't scored in any way. The final decision rests with the maintainer as in
> ~all open source projects.
As you know, the people who have written 95% of the code (and
reviewed, tested and formally proved segments of it) are strenuously
advising against pushing any consensus code into public use without
listening to and addressing review questions. Those questions span not
only rigorous code review, automated guided fuzz testing, simulation
and sometimes formal proofs, but also economics, game theory and,
critically, very subtle determinism/consensus safety, in which they
each have 4-5 years of experience.
- Will you pause your release plans if all of the other developers
insist that the code or algorithm is defective?
- Please don't take this the wrong way - I know your bitcoinj work
was a significant engineering project which required porting Bitcoin
logic. But if the answer to the above question is no, as you seemed
to indicate in your response, then given that you have not written
much Bitcoin Core code yourself (I think 3 PRs in total), do you find
yourself more qualified than the combined peer review of the group of
people who have written 95% of it, and maintained and refactored most
of it over the last 4-5 years?
I presume from your security background you are quite familiar with
the need for review of crypto protocol changes & rigorous code review.
That is even more the case with Bitcoin given the consensus
criticality.
>> - On the idea of a non-consensus hard-fork at all, I think we can
>> assume you will get a row of NACKs. Can you explain your rationale
>> for going ahead anyway? The risks are well understood and enormous.
>
> If Bitcoin runs out of capacity it will break and many of our users will
> leave. That is not an acceptable outcome for myself or the many other
> wallet, service and merchant developers who have worked for years to build
> an ecosystem around this protocol.
That you are frustrated is not a sufficient answer as to why you are
proposing to go ahead with a unilateral hard-fork that is universally
acknowledged to carry extreme network divergence danger, lacking
wide-spread consensus. People are quite concerned about this.
Patience, caution and prudence are necessary in a software system with
such high assurance requirements.
So I ask again:
- On the idea of a non-consensus hard-fork at all, I think we can
assume you will get a row of NACKs. Can you explain your rationale
for going ahead anyway? The risks are well understood and enormous.
Note the key point is that you are working on a unilateral hard-fork,
where there is a clear, 4-year established process for proposing
improvements and an extremely well thought out and important change
management governance process. While there has been much discussion,
neither you nor Gavin has actually posted a BIP for review. Nor was
much of the discussion even conducted in the open: it was only when
Matt felt the need to clear the air and steer this conversation into
the open that discussion arose here. During that period of private
discussion you and Gavin were, largely unknown to most of us, lobbying
companies with your representation of a method that concerns every
Bitcoin user. Now that the technical community is aware, they are
strenuously discouraging you on the basis of the risks.
Openness
- Do you agree that bitcoin technical discussions should happen in the open?
- As this is a FOSS project, do you agree that companies should also
be open, about their requirements and trade-offs they would prefer?
- Can you disclose the list of companies you have lobbied in private
whether they have spoken publicly or not, and whether they have
indicated approval or not?
- Did you share a specific plan, like a BIP or white paper with these
companies, and if so can we see it?
- If you didn't submit a plan, could you summarise what you asked
them and what you proposed, and whether you also discussed the risks?
(If you asked them whether they would like Bitcoin to scale, I expect
almost everyone would say yes, including every member of the technical
community, so that for example would not fairly indicate approval of a
unilateral hard-fork.)
I and others will be happy to talk with the CTOs and CEOs of the
companies you have lobbied in private, for balance, to assure
ourselves and the rest of the community that their support was given
with full understanding of the risks of doing it unilaterally, without
peer review or the benefit of maintenance and security incident
management, and to learn what exactly they are being quoted as having
signed up for.
(This may be more efficiently and openly achieved through an open
process, on a mailing list - perhaps even a special-purpose one for
this topic - with the additional option of the open public meeting I
proposed at the top.)
- Do you agree that it would be appropriate for companies to be aware
of both the scaling opportunities (of course everyone wants
scalability) and the technical limits and risks of the various
approaches? And that these be presented by parties with a range of
views to ensure balance?
- Do you consider your expression of the issues to hold true to the
ideal of representing a balanced, nuanced view of all sides of a
technical debate, even when under pressure or feeling impatient about
the process?
You may want to review the opening few minutes of your Epicenter
Bitcoin episode 82, for example, where you claimed, and I quote, "[the
rest of the technical community] dont want capacity to ever increase
and want it to stay where it is and when it fills up people move to
other systems".
- Do you think that is an accurate depiction of the complex trade-offs
we have been discussing on this list?
(For the record I am not aware of a single person who has said they
do not agree with scaling Bitcoin. Changing a constant is not the hard
part. The hard part is validating a plan and the other factors that go
into it. It's not a free choice; it is a security/scalability
trade-off. No one will thank us if we "scale" Bitcoin but break it in
hard-to-recover ways at the same time.)
- Were you similarly balanced in your explanations when talking to
companies in private discussions?
- Do you understand that if we do not work from balanced technical
discussion, that we may end up with some biased criteria?
Authority
Neither you nor Gavin has any particular authority here to speak on
behalf of Bitcoin (eg you acknowledge in your podcast that Wladimir is
dev lead, and you and Gavin are both well aware of the 4 year
established change management consensus decision making model where
all of the technical reviewers have to come to agreement before
changes go in for security reasons explained above). I know Gavin has
a "Chief Scientist" title from the Bitcoin Foundation, but sadly that
organisation is not held in as much regard as it once was, due to
various irregularities and controversies, and as I understand it no
longer employs any developers, due to lack of funds. Gavin is now
employed by MIT's DCI project as a researcher in some capacity. As
you know, Wladimir is in the development lead role now, and it seems
part of the personal frustration you described stems from him not
agreeing with your views. Neither you nor Gavin has been particularly
involved in Bitcoin lately - in Gavin's case for 1.5 years or so.
- Do you agree that if you presume to speak where you do not have
authority you may confuse companies?
> If Bitcoin runs out of capacity it will break and many of our users will
> leave. That is not an acceptable outcome for myself or the many other
> wallet, service and merchant developers who have worked for years to build
> an ecosystem around this protocol.
But I think this is a false dichotomy. As I said in a previous mail,
I understand people are frustrated that it has taken so long, but it
is not the case that no progress has been made on scalability.
I itemised a long list of scalability work (CPU, memory, network
bandwidth/latency), plus RBF, CPFP, fee work, fee-estimation and so
on, which you acknowledged as impressive and are aware of.
There are multiple proposals and BIPs under consideration on the list right now.
- What is the reason that you (or Gavin) would not post your BIP
alongside the others, to see if it would win based on technical merit?
- Why would you feel uniquely qualified to override the expert opinion
of the rest of the technical community if your proposal were not
considered to have the most technical merit? (Given that this is not a
simple market competition where multiple hard-forks can be considered
- it is a one-time decision, and if it is done in a divisive,
unilateral way there are extreme risks of the ledger diverging.)
Network Divergence Risk
>> - How do you propose to deal with the extra risks that come from
>> non-consensus hard-forks? Hard-forks themselves are quite risky, but
>> non-consensus ones are extremely dangerous for consensus.
>
> The approach is the same for other forks. Voting via block versions and then
> when there's been >X% for Y time units the 1mb limit is lifted/replaced.
But this is not a soft-fork, it is a hard-fork, and miner voting is
only peripherally related. Even if, in extremis, 75% of miners tried a
unilateral hard-fork but 100% of the users stayed on the maintained
original code, no change would occur other than those miners losing
reward (mining fork-coins with no resale value), and the difficulty
would adjust. The miners who made an error in choice would lose money
and go out of business or rejoin the chain.
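(For reference, the block-version voting described above amounts to a
rolling supermajority count over recent blocks. A minimal sketch in Go
- with placeholder window and threshold values, not the parameters of
any particular proposal - looks roughly like this:

// supermajority reports whether at least `threshold` of the last
// `window` blocks signal a version >= newVersion. The numbers any
// caller plugs in here are illustrative placeholders only.
func supermajority(versions []int32, newVersion int32, window, threshold int) bool {
    if len(versions) < window {
        return false
    }
    count := 0
    for _, v := range versions[len(versions)-window:] {
        if v >= newVersion {
            count++
        }
    }
    return count >= threshold // e.g. 750 of the last 1000 blocks
}

Counting block versions this way says nothing about what non-mining
users and companies actually run, which is exactly the point.)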
However if something in that direction happens with actual users and
companies on both sides of it users will lose money, the ledger will
diverge as soon as a single double-spend happens, and never share a
block again, companies will go instantly insolvent, and chaos will
break out. This is the dangerous scenario we are concerned about.
So the same question again:
- How do you propose to deal with the extra risks that come from
non-consensus hard-forks? Hard-forks themselves are quite risky, but
non-consensus ones are extremely dangerous for consensus.
Being sensitive to alarming the market
It is something akin to Greece or Portugal or Italy exiting the euro
currency in a disorderly way. Economists and central bank policy
makers are extremely worried about such an eventuality and talk about
related factors in careful, measured terms; watch Mario Draghi when he
speaks.
Imagine that Bitcoin is 10x or 100x bigger. Bitcoin can't have people
taking unilateral actions such as you have been proposing. It does not
follow the consensus governance process, it is not good policy, and it
is probably affecting Bitcoin confidence and price at this moment.
>> - Do you have contingency plans for what to do if the non-consensus
>> hard-fork goes wrong and $3B is lost as a result?
>
> Where did you get the $3B figure from? The fork either doesn't happen, or it
> happens after quite a long period of people knowing it's going to happen -
> for example because their full node is printing "You need to upgrade"
> messages due to seeing the larger block version, or because they read the
> news, or because they heard about it via some other mechanisms.
This is not a soft-fork, and the community will not want to take the
risks once they understand them - and they have months in which to
understand them. At this point you have motivated (and wasted)
hundreds of developer hours, such that we will feel compelled to make
sure that no one opts into a unilateral hard-fork without
understanding the risks. It would be negligent to allow people to do
that. Before this gets very far I would imagine FAQs will be on
bitcoin.org etc. explaining this risk. It is just starting, not
finished.
What makes you think the rest of the community may not instead prefer
Jeff Garzik's BIP after revisions that he is making now with review
comments from others?
Or another proposal, taken together with a deployment plan that ties
in work on decentralisation.
- If you persisted anyway, what makes you think Bitcoin could not
make defensive code changes in response to your unilateral fork?
(I am sure creative minds can find ways to harden Bitcoin against a
unilateral fork; a soft-fork or non-consensus update can be deployed
much faster than a hard-fork.)
I tried to warn Gavin privately that I thought he was
under-estimating the risk that his fork proposal fails precisely
because it is unilateral. That is, as you both seem sincere in your
wish to have your proposal succeed, the best way to do that is
obviously to release a BIP through the open collaborative process and
submit it to review like everyone else. Doing it unilaterally only
increases its chance of failure.
The only sensible thing to do here is submit a BIP and withdraw the
unilateral fork threat.
Scalability Plans
> Let me flip the question around. Do you have a contingency plan if Bitcoin
> runs out of capacity and significant user disruption occurs that results in
> exodus, followed by fall in BTC price? The only one I've seen is "we can
> perform an emergency hard fork in a few weeks"!
Yes, people have proposed other plans; Bryan Bishop posted a list of
them.
Jeff Garzik has a proposal, BIP 100, which already seems better than
Gavin's, having the benefit of peer review that he has been
incorporating.
I proposed several soft-fork models which can be deployed safely and
immediately and which do not carry ledger risk.
I have another proposal relating to simplified soft-fork one-way pegs
which I'll write up in a bit.
I think there are still issues in Jeff's proposal, but he is very open
and collaborative, and there may be related but different proposals
presently.
>> As you can probably tell I think a unilateral fork without wide-scale
>> consensus from the technical and business communities is a deeply
>> inadvisable.
>
> Gavin and I have been polling many key players in the ecosystem. The
> consensus you seek does exist. All wallet developers (except Lawrence), all
> the major exchanges, all the major payment processors and many of the major
> mining pools want to see the limit lifted (I haven't been talking to pools,
> Gavin has).
It does not seem to me that you understand the issue. Of course they
want to increase the scalability of bitcoin. So does everyone else on
this mailing list.
That they would support that is obvious - particularly if you
presented your unilateral action plan without also explaining the
risks.
I think I covered this further above. If you would like to share the
company list, or we can invite them to the proposed public physical
meeting, I think it would be useful for them to have a balanced view
of the ledger divergence risks and the alternative in-consensus
proposals underway, as well as the governance, maintenance and
security incident risks.
Note that other people talk to companies too, as part of their
day-to-day jobs, or through contacts from being in the industry. You
have no special authority or unique ability to talk with business
people. It's just that the technical community did not know you were
busy doing that.
I cannot believe that any company that listened to its CTO or CSO, or
failing that its board, would be OK with the risks implied by what you
are proposing, on full examination.
> This notion that the change has no consensus is based on you polling the
> people directly around you and people who like to spend all day on this
> mailing list. It's not an accurate reflection of the wider Bitcoin community
> and that is one of the leading reasons there is going to be a fork. A small
> number of people have been flatly ignoring LOTS of highly technical and
> passionate developers who have written vast amounts of code, built up the
> Bitcoin user base, designed hardware and software, and yes built companies.
I know you want to scale Bitcoin; as I said, everyone here does. I
think what you're experiencing is that you've had more luck explaining
your pragmatic unilateral plan to non-technical people, without peer
review, and so have not experienced the kind of huge pushback you are
getting from the technical community. The whole of Bitcoin is
immensely complicated, such that it takes an uber-geek CS genius years
to catch up. This is not a slight against any of the business people
who are working hard to deploy Bitcoin into the world; it is just
complicated, and therefore not easy to understand the game theory,
security, governance and distributed systems thinking. I have a
comp-sci PhD in distributed systems, have implemented p2p network
systems, and have two decades of applied crypto experience with a
major interest in electronic cash protocols, and it still took me
several years to catch up, even though I addictively read everything I
could find; even now I have a few hazy spots on low-level details.
Realistically all of us are still learning, as Bitcoin combines so
many fields that it opens new possibilities.
What I expect you and Gavin are thinking is that you'll knock heads,
force the issue and get to consensus.
However I think you have seriously misjudged the risks and have not
adequately explained them to the companies you are talking with.
Indeed you do not seem to fully acknowledge the risks, nor to have a
well thought out plan for how you would actually manage them, nor for
the moral hazard of having a lone developer, in hugely divisive
circumstances, in sole control of Bitcoin's running code. Those are
exactly the reasons for the code change governance process!
Even though you are trying to help, the net result is that you are
not achieving anything by changing a constant and starting a
unilateral hard-fork (not to trivialise the work of making a patch to
do that).
The work to even make the constant change feasible was the result of
thousands of hours of work by others in the development community,
who are emphatically and unanimously telling you that hard-forks are
hugely inadvisable.
You are trying to break the code change governance and security
procedures that were put in place for good reason, for the security of
$3B of other people's money. Even if you have a pragmatic intent to
help, this is flat-out unacceptable.
There are also security implications to what you are proposing, which
I have heard you attempting to trivialise, that are core to Bitcoin's
security and core functionality.
> the overwhelming impression I get from a few
> others here is that no, they don't want to scale Bitcoin. They already
> decided it's a technological dead end.
I think this is a significant mischaracterisation, and I think almost
everybody is on board with a combination plan:
1. work to improve decentralisation (specific technical work already
underway, and education)
2. create a plan to increase block-size in a slow fashion so as not
to cause system shocks (e.g. as Jeff is proposing, or some better
variant)
3. work on actual algorithmic scaling
In this way we can have the throughput needed for scalability while
security work continues.
As I said, you cannot scale an O(n^2) broadcast network by changing
constants; you need algorithmic improvements.
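(Rough arithmetic behind that: if n users each create transactions
and every one of n full nodes must relay and validate every
transaction, aggregate work grows roughly with n^2 - so 10x the users
means on the order of 100x the system-wide validation and relay load,
which no constant-tweak can absorb.)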
People are working on them already. All of those 3 things are being
actively worked on RIGHT NOW, and in the case of algorithmic scaling
and improved decentralisation have been worked on for months.
You may have done one useful thing, which is to remind people that
blocks are only 3x-4x below capacity, such that we should look at it.
But we cannot work under duress of haste, nor under unilateral
ultimatums; this is the realm of human action that leads to moral
hazard, and ironically it reminds us of why Satoshi put the quote in
the genesis block.
Bitcoin is too complex a system, with too much at stake, to be making
hasty political decisions; it would be negligent to act in such a way.
Again, please consider that you did your job and caused people to pay
attention - now return to the process, submit a BIP, retract the
unilateral hard-fork which is so dangerous, and let's keep things
calm, civil and collaborative in the technical zone of Bitcoin, and
not further alarm companies and investors.
Adam
------------------------------------------------------------------------------
_______________________________________________
Bitcoin-development mailing list
Bitcoin-development(a)lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development
----- End forwarded message -----
I am wondering:
Will Greece go boom relatively soon?
The reason is that Greece owes significant amounts of money
(like the u$a) and can't pay.
This might be an opportunity to see how a country goes
boom in modern times (now that we know what happens to
cities like Detroit).
I seem to vaguely remember this profile (or one like it) but can't turn it up myself. Would be very grateful for the link if you do repost it.
- JS7
Sent from [ProtonMail](https://protonmail.ch), encrypted email based in Switzerland.
-------- Original Message --------
Subject: Re: [cryptome] NYT on Nick Szabo and Bitcoin
Time (GMT): May 18 2015 13:13:12
From: wilfred(a)vt.edu
To: cryptome(a)freelists.org
CC: cypherpunks(a)cpunks.org, cryptography(a)metzdowd.com, cryptography(a)randombit.net
We did a context profile on Satoshi with analysis of intel-based
datacenter profiles and certain known patterns from the USSS (Treasury
Fincen) crew you like so much at Yale and another specific MI5-ish
unit whom you remember from Anguilla. The analysis clustered age and
language patterns and identified a very tight range of character and
background with institutional intent, then modeled the propaganda
influence that gave rise to the BTC trend. The analysis was posted on
the forums and as a PDF, but is missing from search. We will repost
the original and add some current profile analysis. In short, the
character is a 20s 2-year AS pre-law 1811 (police) with a distinct
interest in using US/UK and new international law *pyramid scheme
policies to take over global legacy datacenters* in criminal
forfeiture cases. Another party did a review of law & policy
influencers of the same market and similarly isolated the core group.
(No need to mention DEA+FINCEN.)
On Sunday, May 17, 2015, John Young <jya(a)pipeline.com> wrote:
[ nytimes.com/2015/05/17/business/decoding-the-enigma-of-satoshi-nakamoto-and…
Those around cypherpunks 1993-1998 will recall Szabo's emails on
bitcoin early precursors along with Adam Back, Hal Finney, Tim May,
Wei Dai, Lucky Green, Hettinga, many more burgeoning F-Cs. NYT
piece credits cpunks as subversive birther, now being hyper-monetized
by arch-cryptoanarchist Goldman Sachs and many more centralists.
Szabo denies being Satoshi, but ... others rush to fill the gap
Fixing the broken GPG and HTTPS (X509) trust models with Simple Public Key Infrastructure (SPKI)
by Seth 30 Jun '15
Reposted from
http://vinay.howtolivewiki.com/blog/other/secure-private-reliable-social-ne…
** secure private reliable social networks: sprsn **
by Vinay Gupta • December 29, 2014
sprsn is an idea for a small piece of software which I think would do the
world some good by existing, and which currently does not exist.
It’s a deeply technical project that I probably don’t have time to build
(unless somebody wants to pay my rent for a few months while I take a shot
at it with a helper or two! – I am not serious about this) but I can
describe what’s needed and maybe it will inspire somebody, in whole or in
part.
Synopsis: combine the new (telehash) with the old (SPKI) and get a
Facebook-killer in the form of a command line utility that provides a
decentralized social network. However, will Ethereum do this, and a ton
more?
The dream
sprsn bob "hey when are you coming over?"
sprsn bob list friends
> leslie
> carol
> jake
sprsn bob add carol
> added bob's friend carol with key [a23fd61b7]
> you have no other routes to carol
sprsn jake
> use bob's key for jake?
Now imagine that sprsn also has a web interface mode: sprsn -d 9999
http://localhost:9999 mounts a web interface to your local sprsn instance.
The sprsn instance connects to your (online) friends running sprsn using
telehash (a persistent DHT tool) for web chat and for key management:
click on your friend’s friends to acquire their keys, and multipath to
people (“you have 9 friends in common”) to get more certainty about the
keys.
Obviously this would be great: the best of SSH and Facebook in a single
utility. It is now relatively easy to build.
Let me show you why it hasn’t happened already, and why we need it!
The Problem
GPG and HTTPS (X509) are broken in usability terms because the conceptual
model of trust embedded in each network does not correspond to how people
actually experience the world. As a result, there is a constant grind
between people and these systems, mainly showing up as a series of user
interface disasters. The GPG web of trust results in absurd social
constructs like signing parties because it does not work and creating
social constructs that weird to support it is a sign of that: stand in a
line and show 50 strangers your government ID to prove you exist? Really?
Likewise, anybody who’s tried to buy an X509 certificate (HTTPS cert)
knows the process is absurd: anybody who’s really determined can probably
figure out how to fake your details if they happen to be doing this before
you do it for yourself, and of the 1500 or so Certificate Authorities
issuing trust credentials at least one is weak or compromised by a State,
and all your browser will tell you is “yes, I trust this credential
absolutely.” You just don’t get any say in the matter at all.
The entirely manual, Byzantine process is broken, and so is the entirely
invisible, automated one. It just doesn’t work. The process of mapping
keys to people is just broken and nearly all the rest of the trouble
emanates from this fact. A GPG key maps a person to an email address to a
key, and leaves you to pick who you trust enough to prove the map is
right. An HTTPS cert maps an organization to an IP address to a key, and
asks you to trust one of 1500 organizations your browser vendor chose to
trust. It’s not just the trust model that’s broken, it’s the binding of
these various pieces of data together using cryptography. Gluing the wrong
stuff to the wrong stuff produces constant security and reliability
problems.
What’s the wrong stuff? Legacy delivery mechanisms like email and DNS.
Mapping a person to an email address, and an email address to a key is two
mappings. Same for HTTPS where we map an organization to an IP address to
a key. Two mappings, one of which is essentially arbitrary: I care about
identity and key. I should not have to worry about IP address or email
address – that’s a minor technical detail. But these outmoded trust
systems foreground it, much to our discomfort.
Telehash
Enter Telehash, an encrypted network stack in which you route messages
directly to a public key. The code is pretty simple
expect(mesh).to.be.an('object');
mesh.receive(new Buffer("208cb2d0532f74acae82","hex"), pipe);
The cryptographic key is the routing address. So now we only have to
accomplish one level of indirection: person to key.
Something old, something new, something borrowed, something blue. Enter
SPKI and our old friend, the Granovetter diagram.
SPKI and trust in networks, not webs
Simple Public Key Infrastructure is what we should have deployed instead
of X509/HTTPS and the GPG web of trust. There are two critical differences
between SPKI and X509/GPG. They are:
1) SPKI gives users the ability to certify facts about other users, for
example “bob is allowed to use my computer” can be expressed in a
machine-readable fashion (s-expressions.) This lets users build their own
trust architectures on an ad-hoc basis.
2) SPKI allows anybody to chain certificates of this type (“fred says that
bob says that vinay says that bob is allowed to use his computer.”) This
ability removes the centrality of the CA: anybody that I trust can give me
a certificate stating “this is the key for amazon.com” and because of
certificate chaining, I can see the line of authority down which that key
passed.
These might sound like minor features, but they are not: these two
features express the difference between trust-hierarchy (X509) and
trust-soup (GPG), neither of which are productive, and the
consumer-producer based trust-anarchy which SPKI permits and, indeed,
requires.
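To make those two features concrete, a certificate of the first kind
and a delegated one of the second kind might look schematically like
this (illustrative s-expressions only, loosely following the RFC 2693
shape; the names and hash placeholders are invented):

(cert
  (issuer  (hash sha1 |vinay-key-hash|))
  (subject (hash sha1 |bob-key-hash|))
  (propagate)
  (tag (may-use-computer)))

(cert
  (issuer  (hash sha1 |bob-key-hash|))
  (subject (hash sha1 |fred-key-hash|))
  (tag (may-use-computer)))

Read together, these say “vinay lets bob use (and delegate use of) his
computer, and bob passes that permission to fred” - and a verifier can
walk that chain without any CA being involved.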
The best explanation of this in more detail is the Ode to the Granovetter
Diagram which shows how this different trust model maps cleanly to the
networks of human communication found by Mark Granovetter in his
sociological research. We’re talking about building trust systems which
correspond to actual trust systems as they are found in the real world,
not the broken military abstractions of X509 or the flawed cryptoanarchy
of GPG.
Usable security is possible
Once you fix the trust model so that it works for humans, and use Telehash
to reduce the number of mappings from three (person -> delivery mechanism
-> key) to two (person -> key) it’s possible to imagine a secure system in
which people actually understand what is happening well enough to feel
comfortable with what is going on.
So let’s break this down into the desirable properties for the system we’d
build using these primitives.
For ease, let’s consider realtime chat in the first instance – just
pushing messages down telehash sockets. The only question we have to
answer is which telehash socket corresponds to which person.
1) person = key
there’s no way to break the binding between a person and a key, because a
person is a key, or multiple keys.
2) delivery = key
this is what we get from telehash – I don’t need to worry about how I’m
sending you the message, it’s right there.
So I obtain a key for a friend of mine by, say, email. Once I’ve connected
to them, I can then ask them to send me keys for our mutual friends.
3) keys carry the chain of referrers
“alice says this is her key”
“bob says that alice says that this is her key”
“fred says that bob says that alice says that this is her key”
What that looks like in practice is a social graph, like the one embedded
in facebook. I click on you, my friend, and I click on alice, your friend,
and the connection that forms is an SPKI key being transferred to my
keyring. The key is its history – the path by which the key came to me is
the trust chain. If I want to be more sure the key I have is Alice’s key
(and not your sockpuppet) I need to find an independent route or two to
Alice.
If Google and Dunn & Bradstreet both agree that this key is the key for
the IRS, that’s good enough for me.
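As a sketch of what that referral bookkeeping might look like in code
(hypothetical Go, just to show the shape of “a key carries its chain”
and “independent routes”; none of these names are real APIs):

package main

import "fmt"

// A chain records the friends an assertion about a key travelled
// through; the last entry is whoever directly vouched "this is
// alice's key".
type chain []string

// independentVouchers counts distinct direct vouchers across all
// chains: "fred says that bob says..." still rests on bob, so it
// does not add an independent route.
func independentVouchers(chains []chain) int {
    direct := map[string]bool{}
    for _, c := range chains {
        if len(c) > 0 {
            direct[c[len(c)-1]] = true
        }
    }
    return len(direct)
}

func main() {
    routesToAlice := []chain{
        {"bob"},         // bob says this is alice's key
        {"fred", "bob"}, // fred says that bob says...
        {"google"},      // an unrelated route
    }
    // prints 2: bob (reached via two paths) and google
    fmt.Println("independent routes:", independentVouchers(routesToAlice))
}

The sockpuppet question above then reduces to: how many genuinely
independent vouchers does my keyring hold for this key?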
4) tools and affordances
So how would we actually build this? I would recommend a golang
implementation for cross platform compatibility and ease of distribution.
NaCl and Telehash both exist for golang, and the self-contained binaries
which result are easy to spread around. A command line client would be
easy to augment with a web interface in which the golang program running
on localhost provides an HTTP interface for users that want graphics etc.
Basically you get a decentralized social network with secure chat pretty
much out of the box, where “friending” somebody acquires their key, and
the referral network through which keys propagate is a key social dynamic.
This can work.
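For a rough sense of how small the crypto core could be, here is a
sketch using golang.org/x/crypto/nacl/box (illustrative only - the
Telehash wiring, key storage and SPKI layer are all omitted, and none
of this is a finished sprsn design):

package main

import (
    "crypto/rand"
    "fmt"

    "golang.org/x/crypto/nacl/box"
)

func main() {
    // each party's identity *is* a keypair; the public half doubles
    // as their address
    myPub, myPriv, _ := box.GenerateKey(rand.Reader)
    bobPub, bobPriv, _ := box.GenerateKey(rand.Reader)

    var nonce [24]byte
    rand.Read(nonce[:])

    // sprsn bob "hey when are you coming over?"
    msg := []byte("hey when are you coming over?")
    sealed := box.Seal(nil, msg, &nonce, bobPub, myPriv)

    // bob opens it with my public key, so secrecy and authenticity
    // come from the keys themselves, not from any delivery mechanism
    plain, ok := box.Open(nil, sealed, &nonce, myPub, bobPriv)
    fmt.Println(ok, string(plain))
}

Once person = key, everything above this layer is key management and
routing - which is exactly what Telehash plus SPKI-style certificates
would supply.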
5) advanced topics
How do we message friends who aren’t online? Store and forward seems to
be the obvious approach. Suppose I create a certificate (“sign”) which
lists a set of telehash keys that are my “store and forward” servers – if
you try and chat to me and I’m not there, you can ping one of them and
dump an (encrypted) message for me there and I’ll pick it up when I’m
online again.
Same holds for large block transfer (i.e. dropbox) – I specify my choice
of servers by issuing a digital certificate. Do we need a central store of
those certificates? Maybe, or maybe it’s simply an addressing mechanism:
every time we chat, I push over my updated delivery info, and you can ask
your friends for my updated delivery info if you need to reach me.
In all probability, a decision has to be made about whether to keep the
old SPKI s-expression format for certificates, or move to JSON. Good luck
with that decision, it’s a hard one.
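For a feel of that trade-off, the store-and-forward delegation above
could be written either way (both forms are illustrative only):

(cert
  (issuer (hash sha1 |my-key|))
  (tag (store-and-forward (hash sha1 |relay-1-key|)
                          (hash sha1 |relay-2-key|))))

{"issuer": "my-key",
 "tag": {"store-and-forward": ["relay-1-key", "relay-2-key"]}}

The s-expression form keeps continuity with the existing SPKI
literature; the JSON form is friendlier to the web tooling most
implementers already use.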
Conclusion
There’s no way to fix a broken conceptual model with a better user
interface.
GPG does not work for ordinary users, and GPG cannot work: we’ve been
trying to fix this for 20 years and it has not happened. The process by
which humans communicate is not tractable using those trust primitives.
We are stuck with a mess, and X509 is not an answer either – it worked
when only big orgs wanted to secure their email and web sites, but now
everybody wants to do it and the certificate issuing mechanisms are
becoming far too sloppy to trust.
We have to go back 20 years to the brilliant analysis of the people who
did not ship a sloppy hack to quickly get to market but sat there and
figured out the right thing to do, if we want to fix this mess in a
durable way.
High roads and low roads
The high road on these issues is Ethereum.
Telehash takes the DHT and uses it for routing. Bitcoin takes the DHT and
adds proof of work to generate a history.
Ethereum takes bitcoin and puts executable contracts into the history,
plus protocols for chat and block transfer.
It’s entirely possible that SPKI-style user-generated certificates will
make their way into Ethereum, either as part of the core spec or as a
common class of DAAPs. “I have bought stuff from bob and would do so
again” can be issued as a certificate, in a standardized format, and these
certificates can be spidered out of the blockchain to generate trust
metrics.
Likewise, if all your messaging is happening on the Ethereum protocol, you
do not necessarily need telehash.
Here’s the question: is Ethereum’s “one ring to rule them all” approach
feasible, or should we work closer to the Unix Philosophy and build
smaller pieces, loosely joined? I can imagine a command line Telehash/SPKI
client which is as commonly used as SSH is today, for slinging around chat
and small data.
I can also imagine an operating-system like sea of executable contracts
and helper functions in a densely knit global decentralized computational
ecosystem providing all the same services and more.
I, personally, am in favor of a mixed strategy. I think the sheer naked
moonshot ambition of Ethereum is extremely attractive, and part of the
reason I joined the team (F.I.S.T.) was that I wanted to be part of such
an ambitious vision.
But it’s an awful lot of bleeding edge tech, and with a project that large
and complex, you can never be quite sure what will come out the other end.
In particular, I have no idea whether the nuances of SPKI etc. which will
enable a revolution in the way that ordinary users experience cryptography
will show up in Ethereum in a usable way – the core smart contract etc.
functions can work perfectly well without getting right the nuances that
user-issued certificates require.
So I’m writing this post for two reasons: to encourage the Telehash
community to examine SPKI and look at it as a way of managing keys inside
of their DHT routing paradigm, and to encourage the Ethereum community to
look at SPKI and ask whether it might empower users within the larger
Ethereum landscape. Either way, I would dearly love to see an SPKI revival
so that, finally, at long last, we can pull the sword from the stone:
Johnny can encrypt.
>
> https://www.tribler.org/
> https://github.com/Tribler/tribler/releases
>
> "Tribler - Privacy using our Tor-inspired onion routing.
> Tribler offers anonymous downloading. Bittorrent is fast, but has no
> privacy. We do NOT use the normal Tor network, but created a dedicated
> Tor-like onion routing network exclusively for torrent downloading. Tribler
> follows the Tor wire protocol specification and hidden services spec quite
> closely, but is enhanced to need no central (directory) server"
>
This looks very nice. And they are constantly improving.
Peculiar that there are not thousands of mirrors of offerings by
Libgen, Sci-hub and the like, as well as new initiatives by the thousands.
These collections are a lot more valuable than puny, by comparison,
offerings by WikiLeaks and Snowden's media apparatus -- heavily
publicized, politicized, monetized, glorified but minimally technically
and scientifically useful due to sparse and drippy releases.
For example, 3,415 volumes liberated by Aaron Swartz remain on torrent
(some of which we have mirrored with only a half-dozen DMCA notices):
http://cryptome.org/aaron-swartz-series.htm
At 08:33 PM 6/27/2015, you wrote:
>On 6/27/15, grarpamp <grarpamp(a)gmail.com> wrote:
> >
> http://torrentfreak.com/sci-hub-tears-down-academias-illegal-copyright-payw…
> > http://www.sci-hub.club/
>
>"""
>"Thanks to Elsevier's lawsuit, I got past the point of no return. At
>this time I either have to prove we have the full right to do this or
>risk being executed like other 'pirates'," she says, naming Aaron
>Swartz as an example.
>
>"If Elsevier manages to shut down our projects or force them into the
>darknet, that will demonstrate an important idea: that the public does
>not have the right to knowledge. We have to win over Elsevier and
>other publishers and show that what these commercial companies are
>doing is fundamentally wrong."
>"""
>
>- i expect all onions, all the time, eventually :)
Excuse delay, thought this went to the cpunks list.
Date: Thu, 11 Jun 2015 10:39:49 -0400
Message-ID:
<CAG+6jObQAytv2+mCvszWx_OgnpGANTaKggsu4jfVn-D1Tb0v5A(a)mail.gmail.com>
Subject: Re: Helmholtz Tubes, CRT Signals (Was: Sigint Dumps)
From: Wilfred Guerin <wilfred(a)vt.edu>
To: John Young <jya(a)pipeline.com>
Content-Type: text/plain; charset=UTF-8
To John-
there are over 200 groups discussing the same issues, at least 2 got
nationalised aggressively, and there have been some tangential posts
on blogs or commercial forums, mainly concerns about their datacenter
security and not knowing who might attack them, but overall the same
concerns are expressed:
Can the data be spoofed? Not at this resolution without having a model
of everything at the same resolution...
Are the blocks secure? No. [...] But services are stable.
Exports to public? yes, p2p clusters loading, datacenters doing
preparation processing, packed table files are set up for distributed
search
How do *you* know? UHD/4k VNC video stream to one of the master
control servers handling the database import and text chat with 200
others, no direct access to data here, but certainty that the data is
distributing.
Involved how? Rendering code to make GEO-TIFF map tiles and
aggregated spline/curves to simplify snapshot data, plus
level-of-detail pyramid (multi-variate parametric search) index data.
Now we ask... If this was NATO-ish or any variety of US/UK system
built in the 1960s(?) can we solve for locations or viable downlink
targets to intercept? This hint at geo-magnetic shift is a huge
opportunity!
The coin data is ... glorious... but the other signals are no less
amusing, one set appears to be wired sigint in mhz carrier bands,
assumably urban analogue phone signals as recorded from the switch
routing system's ground or related wiretaps to rf via cable.
genome.gov had links to the various data formats that others
identified in the organics table.
X-Ray physics and detector materials need research; 1890s through
1970s saw a huge amount of X-Ray publicity, but NEVER EVER DID ANYONE
USE IT FOR SIGNALING??? BULLSHIT!
ALSO!!!
"is there any distinction at military bases or secure areas?" YES.
there are some access control doors, which others suggest are a
standard card reader with number pad and automatic door common on the
secure layer of military and COMMERCIAL CONTRACTOR facilities, that
have a proximity detector signal built into the door frame. THE COIN DATA
WARPS AROUND THIS CYLINDRICAL SIGNAL ON THE DOOR PORTAL AND VECTOR
PASSES THROUGH THE WALL. Obviously the vector is impossible without
fields projected from that security device! There is also a ghosting
signature that reduces sample rate (in the digitiser) around similar
facilities, looks like it was isolated in the newer signal index, so
it should be really obvious who is using these jamming systems and
where! (and where else!)
more questions:
coins on boats? yes.
coins on submarines? YES. (with ghosting)
organic signals? wtf?
Yes, but I'm told the prior emailed reference is erroneous: the primary
index is a SIGNALS CHARACTERISTIC tree, the supporting block (large
bit scope number) is VERY SIMILAR to a genetic expression profile tree
such as genome.gov/ and many of the gene profiling standards.
Hopefully the news will start posting the technical reports the
primary groups have been preparing over the last few days... we need
to get the physics and advanced crypto groups' attention. If the
signals are encoded and that party broke through dense analogue
crypto, it will take a huge effort to solve for that (it may also have
been an analogue EM field that performed the encoding or reference
signals) to make the same capability possible for others.
Staying alive!
On Thu, Jun 11, 2015 at 10:08 AM, John Young <jya(a)pipeline.com> wrote:
> We're tweeting these posts. Blowback: is any evidence available to
> support the narrative? Sample of the data, say, for close examination,
> with credible provenance, not the GG secret pact bloviation.
>
> Mild critique: is this sci-fi or legit or both, advancing the literary-video
> prize winning breaking news big screen Hollywood Neal Stephenson
> spirit of the Snowden "NSA disclosures."
>
>
> At 09:54 AM 6/11/2015, you wrote:
>>
>> More specifics on the sigint system:
>>
>> This looks like a "Growth Industry" ...
>>
>> Access to the beam is not restricted, anyone can pull signals out of
>> the reconnaissance loop from any of its exposed vectors.***
>>
>> Viable areas:
>> Terrestrial:
>> a: Spurious emissions from tubes or conduit, beam deflection from
>> interior particles
>> b: Stray beams passing through field coils but not redirected
>> c: Direct access to tubes or conduit (any variety of methods)
>> Orbital:
>> d: Geo-Magnetic Shift (downlinks)
>> e: Refractive / deflection (downlinks)
>>
>> As the rate of geo-magnetic shift continues to deform the containment
>> of the projected fields used to shape and steer the beams (which may
>> also have something to do with the sensor itself?), wider areas will
>> be accessible which are hit with the rogue spot beam from orbital (and
>> projected field electro-magnetic) guides.
>>
>> This means almost anyone with a sensor can gather data from the downlinks.
>>
>> Additionally, spurious radiation from the terrestrial system is
>> available around endpoints and field coils, especially from damaged
>> conduit or particles in the tubes.
>>
>> Time to raid the libraries for antique books about 1800s-1980s X-Ray
>> EM physics and electromagnetic wave guides!
>>
>> It would not be rational to encode the carrier signal unless it was
>> certain that the encoding would not disrupt signals quality, however
>> raw X-Rated signals might have been too risky?
>>
>> [There are Thz ring oscillators, detectors, and various photonic
>> rings, but properly implemented field-effect lenses, EM field vector
>> control circuitry and coils(/phased array) (abstract field
>> projection), and optimal tube design are all that should
>> theoretically be needed once a rogue beam is identified. X-Ray
>> Materials and interference fields must be researched and made common
>> knowledge.]
>>
>> Hopefully the data source is not too easily found and the dumps get
>> out, this is extremely relevant for "civil liberties", human rights,
>> and reconstructing your own personal history and records where your
>> data is otherwise missing.
>>
>>
>>
>> On Thu, Jun 11, 2015, Wilfred Guerin <wilfred(a)vt.edu> wrote:
>> > Helmholtz Tube, Beam Steering, EM field interaction, simple field
>> > dynamics, (and your oscilloscope) are all you need to create complex
>> > EM signals processors.
>> >
>> > No different than your antique crypto cracker, which uses an abstract
>> > field to solve complex pre-defined systems. "56-bit" https cracker was
>> > mass implemented as a 300mhz backplane EM field solver about the size
>> > of your desktop computer.
>> >
>> > Using the same technology, resolution, and methods, BTC Bitcoins are
>> > around 8m^3 of field to solve.
>> >
>> > No doubt the access and decoding to these sigint signals requires
>> > similar processing before being steered to the digitiser.
>> >
>> > (Maxwell Tube, Helmholtz Tube, typical of high school physics
>> > classrooms)
>
>
>
črypto is finished... and it's about time × (also: 'Balrog' malnet, firsthand view)
by Seth 28 Jun '15
Reposted from https://cryptostorm.org/viewtopic.php?f=67&t=8702
črypto is finished... and it's about time × (also: 'Balrog' malnet,
firsthand view)
Postby Pattern_Juggled » Tue May 12, 2015 11:27 am
{direct link: cryptostorm.org/balrog}
This essay forms one section of a broader paper describing a global
surveillance technology we have dubbed Corruptor-Injector Networks (CINs,
or "sins") here at cryptostorm. As we have worked on the drafting and
editing of the larger paper, we saw as a team the need for a first-hand
perspective to help provide a tangible sense of how CINs work and why
understanding them is so vitally important to the future of network
security.
I was nominated to write the first-person account, in large part because I
have spent the better part of two months entangled with a particular CIN
("painted" by it - i.e. targeted). That experience, it was decided, may
prove helpful for readers as it represents what is likely to be a
nearly-unique frontline report from someone who is both engaged in
research in this field as a professional vocation, and who was personally
painted by the preeminent CIN in the world today. Despite misgivings about
revisiting some of this experience, I see the wisdom in this decision and
here I am pecking away at this essay. It's late, as I've found it a
challenge to comport my experience with a cohesive, easily-digested
narrative arc. What follows is the best I'm able to do, when it comes to
sharing that experience in a way that is intended to help others.
Specifically, I hope to accomplish two things. One, and most importantly,
I am sharing what amounts to loosely-defined diagnostic criteria for those
concerned they have been painted by a CIN... or who are in a later-stage
state of deeply-burrowed infection by the CIN's implants. In the last month
or so, I have been deluged by people concerned they may be targeted or
infected. While I have done my best to reply with useful advice or
counsel, more often than not I've been unable to provide much of either.
This essay is my attempt to fill that gap.
Apart from the designers and operators of this CIN, I am likely more
familiar with the operational details of it as it exists today than anyone
else in the world - by a long stretch. I have invested many hundreds of
deep-focus hours in this work, with only a small minority of that being
solely directed at disinfecting my - and our - machines locally, at
cryptostorm. The majority has involved, to be blunt, using myself as an
experimental subject... allowing my local machines to reinfect via the
painting profile, and then trying to limit the spread of, and eventually
reverse the footprint of, the infection modules/payloads themselves. I have
iteratively followed that painting-injection-infection-corruption
trajectory through dozens of iterations, countless kernels rotted from the
inside-out and simply erased as they were beyond salvation. This knowledge
base all but obligates me to share what I have learned, such as it is, so
others can leverage the hard-won bits of insight I've been able to collate
from all this dirty tech.
The second goal of this paper is to communicate the scale, scope, and
pressing urgency of CINs as a research and mitigation subject of highest
priority to anyone working in the information security field today. That's
a big task. I will do my best to share the broad outline of what we, at
cryptostorm, have watched accelerate into the biggest, most dangerous,
most complex threat we see to internet security and privacy for the next
five years.
Let's get to work.
& crypto really is finished.
...once we finish this amble,
...that conclusion is inescapable,
...its consequences both subtle & profound.
Ց forest, trees, & the sum of parts
It wouldn't be too far-fetched to say that info security is a solved
problem, or was before the CINs implanted themselves in the middle of
things. That sounds bizarre to say, since by all accounts the State of
InfoTec is... abysmal. Stuff is broken, everywhere; everything gets
hacked by everyone, all the time. Nobody follows good security procedure,
and the net result veers between chaos and satire. That's all true, no
question - but in theoretical terms, I stand by the assertion that infosec
was essentially solved. How to implement those solution components...
well, that's a different question entirely.
When it comes to understanding how to mitigate, manage, and monitor
security issues in technology, we know how: every attack vector has its
defensive tools that, if applied correctly, pretty much work. This state
of affairs is so ingrained in our thinking, from within infosec, that it's
tough to step back and really see how pervasive it is. As much as we all
know there's horrible implementation failure out there, nobody is (or was)
home alone late at night, wringing hands and sighing dejectedly... utterly
stumped by a question of how to defend against a particular attack.
Rather, a few minutes perusing InfoSec Taylor Swift's twitter feed... err
I mean "searching the web," is enough to turn up some pretty solid
knowledge on any imaginable infosec topic, from post-quantum cryptographic
systems to gritty OpSec-spy advice, and off to baked-in processor hardware
attack models. Winnow down the advice to the stuff that seems legit,
figure out the cost and complexity of putting it in production, and off we
go. This we all assume is simply the lay of the land in our corner of the
world.
Corruptor-Injector networks turn that somewhat comfortable state of
affairs on its head in a rude, unsettling, and comprehensive way.
This is a qualitatively different sort of security threat than is, for
example, "malware" or "the fast-approaching arrival of engineered AES128
collisions" - CINs are as different from such componentry as is a castle
from a jumble of uncut boulders sitting in a field. All the expertise out
there, developed to thwart countless sub-sub categories of security
threats to computers and the networks we use to connect them, finds itself
marooned in the dry terrain of "necessary, but not sufficient." That is to
say, we will need all those skills to avoid an otherwise-eventual
"CINtastrophe" in which the sticky extremeties of fast-mutating, competing
CINs drown the internet in a morass of corrupted data, broken routes,
unstable connections, and infected packets. But we'll also need more.
Which is the first important point in all of this, and one it took me more
than a month of more-than-fulltime study of this subject to finally
realise in one of those "oh, wow... now I get it" moments. I'm going to
boldface this, as it's a core fact: no individual functional component of
CINs is - or need be - new, or unknown, or freshly-discovered, or
surprisingly clever and far ahead of the curve in its specialised exploit
category. It's all already been seen, observed, documented, and in almost all
cases, reasonably well understood in the civilian world. Cryptostorm has
not, nor do we claim to have, "discovered a new exploit" or attack vector
that nobody has previously noted or published. The sense of urgency and...
dread (not the right word, but it'll do for now) we feel and are
communicating recently isn't based on a novel discovery.
Even more so, the entire concept of CINs - if not the name itself - and
the example of one created by the NSA, were thrown into stark, inescapably
real status by the whistleblowing of Edward Snowden in 2013. There's a
hefty pile of NSA slide decks, and civilian commentary, freely available to
confirm that's the case (we're collecting it all in the closing segment of
this full essay, as well as in our newly-birthed community research
library). It's all there, in black and white... nearly two years ago, with
additional follow-on disclosures continuing along the way.
So if that's the case, why are we all hot & bothered at cryptostorm about
CINs? After all, they're neither made of new pieces nor even a
newly-discovered category themselves - nothing to see, move right along.
I'll admit that I was, unconsciously, in that mindset about this segment of
the Snowden archives. I read them - skimmed, more like - and essentially
filed them under the "interesting, but not core" tag in my internal filing
model. Yes, malware... you get it, bad things happen. Don't click on dodgy
links, or download "free" porn. There are pages about injectors and
FoxAcid, and QuantumInsert, and so on... but it all seemed mostly
Tor-specific and anyway not terribly front & centre. I say this not
because I misunderstood the mechanisms - MiTM is not a new concept for any
of us on the team, here - but rather because I missed the implications
entirely.
We all did, or nearly all. That's despite Snowden himself taking some
effort to return focus to this category, even as we all hared off into
various sub-branches of our own particular desire: crypto brute-forcing,
mass interception, hardware interdiction and modification, and so on. Not
surprisingly, Mikko (Hypponen) stands out as something of a lone voice, in
his early-published quotes on these attack tools, in really clearly
pointing out that there's something fundamentally different about this
stuff. Here he is, from March of 2014, in The Intercept:
"“When they deploy malware on systems,” Hypponen says, “they potentially
create new vulnerabilities in these systems, making them more vulnerable
for attacks by third parties.” Hypponen believes that governments could
arguably justify using malware in a small number of targeted cases against
adversaries. But millions of malware implants being deployed by the NSA as
part of an automated process, he says, would be “out of control.” “That
would definitely not be proportionate,” Hypponen says. “It couldn’t
possibly be targeted and named. It sounds like wholesale infection and
wholesale surveillance.”
[b"]Wholesale infection."[/b] That's the visible symptom, and it's the
sharp stick in the eye that I needed to break my complacency. Mikko calls
this category "disturbing" and warns that it risks "undermining the
security of the Internet." That's no hyperbole. In fact, the observable
evidence of that critical tipping-point having already been crossed is
building up all around us.
All this doom-and-gloom from something that doesn't really have any new
parts, and has been outed to public visibility for years... how can that
be? CINs are powerful because of their systems-level characteristics, not
(merely) because of their fancy building blocks. Just like the castle,
vastly more useful as a defensive tool than a big pile of boulders, CINs
take a bunch of building blocks and create an aggregated system out of
them that's of a different order entirely.
The forest is greater than the sum of the trees, in other words. Much
greater.
ՑՑ "...proceed with the pwnage”
“Just pull those selectors, queue them up for QUANTUM, and proceed with
the pwnage,” the author of the posts writes. (“Pwnage,” short for “pure
ownage,” is gamer-speak for defeating opponents.) The author adds,
triumphantly, “Yay! /throws confetti in the air.”
One of the things we know - or knew, really - about infosec is what it
means to be "infected" with "malware" or "badware" or whatever term is
enjoying its 15 PFS re-keyings of fame. You do something dumb, like stick
a big wiggly floppy disk into your TRS-80 that you got from some shady
dude at the local BBS meet-up, and now you "have it." The virus. It's in
your computer...
inthecomputer.jpg
If you do silly-dumb things and bad stuff gets into your computer, then
you have to... get it out of your computer, of course. An entire industry
(dubious as it is) exists to keep bad things from getting in - "antivirus"
- and a parallel sub-industry specialises (not terribly successfully) in
getting it out when it gets in. This same model scales up to corporate
entities, except it all costs a lot more money for the same
not-really-effective results. Firewalls keep bad stuff out, and scanners
find it when it gets in so it can get removed.
Simple - even if tough to do in practice. CINs are different.
It took me most of a month to figure this out, too. At first, in early
March, I noticed odd browser activity in several machines I'd been using
to do research and fine-tuning for our torstorm gateway. I whipped out my
analyzers and packet-grabbers and browser-session sniffers, and got to
work figuring out what had infected the machines. Because that's how this
works: if you are unlucky or unwise, you disinfect. It's tedious and not
always totally successful, but it isn't complex or intellectually
challenging. Indeed, I was quite sure I knew with some precision what
vector had infected me - and I had (still have) the forensics to
demonstrate it. Feeling a bit smug, I took the weekend to collate data,
write up some findings, clean the local network, and prepare to pat myself
on the back for being such an InfoSec Professional.
Then the weird stuff started happening again, on the computer I'd somewhat
meticulously "cleaned" of any odd tidbits. Hmm, ok. I suck at hardware, as
everyone knows, so clearly I just didn't do a good job of disinfecting -
this is not unusual. Back to the salt mines, to disinfect again. This time
I roped in most all of the rest of the cryptostorm staff computers, to
disinfect those... a security precaution in case I gave what I had to
others on the team, somehow. I still didn't really know what it was doing
("it") in the browser, specifically... but who cares? Wipe the browser to
the bare earth, or if needed reinstall the entire OS image ground-up.
Problem solved. Done.
I took the opportunity of this extravagant downtime - nearly a whole week
without being on the computer for academic or cryptostorm work, amazing! -
to pick up a new laptop. Actually new, in the box - something odd for me,
as I tend towards ragged conglomerates of old machines. Once again feeling
smug, I laid out some elegant UEFI partitions - tri-boot, look at me being
all tech! Packages updated, repositories lovingly pruned and preened with
bonsai attention. I left the drives from the old infected machines, in my
local network, off in a pile for later analysis and file removal. Safety
first, right? No way this nasty stuff will jump onto the new, "clean"
boxes I've spent days setting up.
Then the new box went weird, all at once. Not just one partition, either:
I'd boot into Win and sure enough the browser would get baulky and jagged
and cache-bloated if I hopped around to a few sites... not even the same
sites I'd visited when I was in the lenny partition. That matters,
because we assume - unconsciously - that we get infected from a specific
site. It's got bad files on the server, you visit the server, and you have
those files come down to your machine via your browser. Maybe it's a
creepy flash file making use of the endless deluge of flash 0days, or
whatever. The file comes from a server.
But I didn't visit any of the same sites, on these different operating
systems I'd just used on my new laptop... not an intentional choice, but
looking back I knew it was a clean split between the two groups of sites.
But now I certainly seemed to have the same problem on a brand-new,
well-tightened (as much as one can, because Windows) OS instance - with no
overlap in sites visited. That's sort of weird, isn't it?
Well, ok... thinking... hmmm. And as I'm thinking, the Windows partition
locks up tight. No surprise there, it happens... though with only a couple
plain-jane websites loaded in Firefox? On a brand-new laptop? Odd, but
whatever: Windows. Reboot, and it'll be happy once again.
I push the power button to reboot the laptop. It powers off, by all
appearances... and then simply sits like a turd in the hot sun. It's a
new-fangled laptop, no way to do anything to it but push the power button.
Heck even the battery is locked inside tight. I push, and push, and
push... nothing. And my mind is repeating two words: fucking hardware.
Hardware is the bane of my existence. Two days old, and a new laptop won't
even power up. Hardware and I have a fraught relationship. I go through
the grief stages, sort of... first is denial - it can't be broken, no way!
- and then the next one is anger - damned piece of garbage, amazing how
shoddy things are!
...I think there's three more stages, but I don't remember them because I
was so pissed off.
Also the laptop got a bit dented-up along the way. I was frustrated: a
week's worth of fiddling with hardware and kernels, and I was one step
backwards from where I'd begun. No stable partition. No stable local
machines, known-clean. No real idea of the infection vector, as my assumed
model wasn't doing well as new data arrived. Plus now I'd just had an angry
shouting match with a laptop that wouldn't boot (not much shouting from that
side of things)... this is really, really not me at all. But I'm feeling,
at that point, a powerlessness... a sense of non-confidence in my own
ability to run a computer. This might be like a truck driver who suddenly
forgets how to operate the transmission in her daily driver: really
humiliating, and self-eroding, isn't it?
In the dozen or two cases of people I've talked to who also have been
painted by this CIN, that powerlessness feeling is a universal marker.
Many are high-level tech notables, and the concept of not being able to
make a computer run cleanly is... utterly foreign. As a group, we're the
kids who built computers from blurry blueprints published in Byte
magazine, metaphorically speaking. We not only fix computers for friends
and family when they won't work, we're the ones the people who first
tried to fix them come to when they can't. It's been like that
all our lives. It's sort of who we are, at some level.
And then there's these computers sitting in front of us that don't work.
Or, they work for a while - a few days, maybe - and then they start
sliding downhill. Browser slows, then gets GPU/CPU intensive. Lots of
activity from it, even when no page loads are happening visibly - or maybe
only a tab or two are open. Bidirectional traffic, noted by most of us who
ifconfig'd or nload'd or iptraf'd the boxes when things took a strange
turn.
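(For anyone who wants to turn that gut feeling into a number: below is a minimal python sketch - stdlib only, Linux only, and "eth0" is just an example interface name - that samples the kernel's per-NIC byte counters and prints the deltas, so "the box is chattering while nothing is open" becomes something you can log rather than just sense. It's the same data nload and iptraf show, just in a form you can diff across days.)
[code]
# traffic_delta.py - sample /proc/net/dev byte counters and print deltas.
# A rough sketch: Linux-only, stdlib-only; "eth0" is an example interface name.
import time

def read_counters(iface="eth0"):
    with open("/proc/net/dev") as f:
        for line in f:
            if ":" not in line:
                continue                      # skip the two header lines
            name, rest = line.split(":", 1)
            if name.strip() == iface:
                fields = rest.split()
                # field 0 = rx bytes, field 8 = tx bytes
                return int(fields[0]), int(fields[8])
    raise ValueError("interface not found: " + iface)

if __name__ == "__main__":
    rx0, tx0 = read_counters()
    while True:
        time.sleep(10)
        rx1, tx1 = read_counters()
        print("rx %d B / tx %d B in last 10s" % (rx1 - rx0, tx1 - tx0))
        rx0, tx0 = rx1, tx1
[/code]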
Next, graphical irregularities that go beyond the browser. Fonts aren't
rendering quite right... or if they do, they render well but have these
"slips" where they get a bit pixellated... but only for a minute or ten,
and they come back. Those of us attuned to such things note that strange
tls/ssl errors spin up: mismatched certs - subtle, but if one's browser is a
bit snooty about credentials, they appear. Maybe a certificate for a site
that doesn't match the site's URL... well ok not uncommon, except in these
cases it's for sites that we know have matching certs, to the character.
But they're transient.
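(Those transient mismatches are catchable, if you care to sit and watch for them. Here's a rough python sketch - the hostname and polling interval are examples, nothing more - that repeatedly does a verifying TLS handshake and logs any verification failure along with the leaf cert's SHA-256 fingerprint, so the "it was wrong for a minute and then fine" episodes leave a trail.)
[code]
# cert_watch.py - poll a site's TLS handshake and log verification failures
# plus the leaf certificate's SHA-256 fingerprint. A sketch only; the
# hostname and interval below are examples, not recommendations.
import hashlib, socket, ssl, time

HOST = "example.com"   # example target

def fingerprint(host):
    pem = ssl.get_server_certificate((host, 443))   # unverified fetch, for hashing only
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def verify(host):
    ctx = ssl.create_default_context()              # verifies chain + hostname
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host):
            pass

if __name__ == "__main__":
    while True:
        stamp = time.strftime("%H:%M:%S")
        try:
            verify(HOST)
            print(stamp, "ok", fingerprint(HOST))
        except (ssl.SSLError, ssl.CertificateError) as err:
            print(stamp, "TLS verification failed:", err)
        time.sleep(300)
[/code]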
Wireshark it. But... wireshark crashes. Update wireshark... and suddenly
you find yourself downloading a really big package relative to what you
are pretty sure a basic wireshark binary should be. You google that, to
confirm... and as you do, you notice that there's a bunch of other
packages hitching a ride on that wireshark update... how'd that happen?
More googling, but as you do, your machine is doing stuff. Htop and...wtf?
Lots of new processes, not stuff you are used to seeing. Bluetooth? You
disabled it ages ago. Avahi... what the hell is that? Cups? I don't even
own a printer.
You google each one, and they're legit packages... but packages you've
never intentionally installed or configured. And no big version upgrades
lately, to the kernel, either... hmmm. Look at the config files for these
unexpected arrivals - eeek! Ports open, remote debugging activated...
those aren't default settings, and you sure as heck didn't set those, did
you? Meanwhile the CPU is hot, the hard disk platters are spinning
continuously, and the blinkenlight on the NIC is a solid LED.
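(When the process list goes strange, the open sockets are the quickest thing to enumerate and compare. A python sketch - Linux-only, IPv4/TCP-only, parsing /proc/net/tcp directly; the allowlist is purely an example, and you'd fill it with whatever you actually intend to run.)
[code]
# listen_check.py - list listening TCP ports from /proc/net/tcp and flag
# anything outside a hand-written allowlist. Sketch: Linux-only, IPv4 only,
# and the EXPECTED set below is purely an example.
EXPECTED = {22}   # e.g. sshd, if you actually run it

def listening_ports(path="/proc/net/tcp"):
    ports = set()
    with open(path) as f:
        next(f)                          # skip header line
        for line in f:
            fields = line.split()
            local, state = fields[1], fields[3]
            if state == "0A":            # 0A == TCP LISTEN
                ports.add(int(local.split(":")[1], 16))
    return ports

if __name__ == "__main__":
    for port in sorted(listening_ports()):
        tag = "" if port in EXPECTED else "   <-- not in allowlist"
        print("listening on tcp/%d%s" % (port, tag))
[/code]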
Those who are reading this and have experienced some or all of that, you
know what I'm describing. You can feel your OS eroding out from underneath
you... but how to stop it? And how did it get in, since that's a new
machine with no hardware in common with the old (infected) ones? Perhaps
you go on a config jihad, like I did (many times): manually reviewing
every config file of every bloody package on the bloody machine, and
manually resetting to values you think sound legit... because who can
google them all? Packages crash, you didn't set values right. Reading,
googling, page 7 of the search results and still nobody will just post the
syntax that made the damned whatever-it-is do its thing without barfing!
...what did you see??!?
wisdom_of_the_ancients.jpg
Ah, yes, now you're feeling the burn. If you look in cache (or Cache, or
Media Cache - wtf? - or .cache, or...) you see gigabytes of weirdly
symmetrical, hard-symmetric-encrypted blobs overflowing, in all
directions. Purge cache, and it builds back up. Plug the NIC in, and
traffic screams out... you didn't even up the adapter yet! And is that
your wifi adapter chattering away? That was disabled, too...
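(Measuring that cache regrowth is trivial, and worth doing so you're arguing from numbers rather than vibes. A quick python sketch; the path and the ten-minute window are examples only.)
[code]
# cache_growth.py - measure how fast a browser cache directory grows while
# the machine is supposedly idle. A sketch; the path and interval are examples.
import os, time

CACHE_DIR = os.path.expanduser("~/.cache")   # example path

def tree_size(root):
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass                          # file vanished mid-walk
    return total

if __name__ == "__main__":
    before = tree_size(CACHE_DIR)
    time.sleep(600)
    after = tree_size(CACHE_DIR)
    print("cache grew by %d bytes in 10 minutes" % (after - before))
[/code]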
Eventually you reboot yet one more time, and the grub menu is... not the
same. You run grub2/pc, and this is old-skool grub, or whatever. Is your
kernel image listed differently? No way... that's not possible. You
mention these odd things to colleagues or friends, and they rib you about
it: "stop clicking on porn, and you won't get infected again!" But you
actually didn't... which is troubling in all sorts of ways.
Read boot logs closely, and you might see paravirtualisation come up.
And/or KVM. If you run windows, the equivalent there. But you didn't
install a virtualised kernel. Maybe you are like me, and you get downright
obsessive about this: iterate through possible infection mechanisms,
between boxes. Calculate RF ranges for NFC devices you know are disabled,
but who knows..? Consider that air-gapped subsonic infection magic that at
first seemed legit, then got pissed all over, but is almost certainly
legit and was all along... do you need to actually find a Faraday cage to
put your computer in?
Unplug from the network entirely, hard-down adapters at the BIOS. Machine
is stable. OK. But... useless, right? Disable IPv6, wreck bluetooth
physically with a screwdriver, read up on WiMax and all that weird
packet-radio stuff (there goes a weekend of your life you'll never get
back). Start manually setting kernel flags, pre-compile... only to see the
"new" initrd image hash-match to the infected one. Learn about
config-overrides, and config-backups, and dpkg-reconfigure, and apt-cache,
and... there's a few more weeks.
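(The one genuinely useful habit I picked up in all of that: baseline-hash the boot artifacts and keep the baseline somewhere the machine can't touch. A python sketch follows - the paths are typical Debian-style locations and just examples, adjust to your layout; it writes a JSON baseline on first run and flags changes after that.)
[code]
# boot_baseline.py - record SHA-256 hashes of boot artifacts to a baseline
# file, then compare on later runs. Sketch only; the paths are typical
# Debian-style locations, and the baseline should really live on offline media.
import glob, hashlib, json, os, sys

PATTERNS = ["/boot/vmlinuz-*", "/boot/initrd.img-*", "/boot/grub/grub.cfg"]
BASELINE = "boot_baseline.json"

def sha256_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot():
    return {p: sha256_file(p) for pat in PATTERNS for p in glob.glob(pat)}

if __name__ == "__main__":
    current = snapshot()
    if not os.path.exists(BASELINE):
        with open(BASELINE, "w") as f:
            json.dump(current, f, indent=2)
        sys.exit("baseline written - copy it somewhere this machine can't reach")
    with open(BASELINE) as f:
        saved = json.load(f)
    for path, digest in sorted(current.items()):
        if saved.get(path) != digest:
            print("CHANGED or NEW:", path)
[/code]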
Plug back into the internet after all that - static IP on a baseline wired
ip4 NIC, no DHCP packages even installed, ffs! - first packet goes to
cstorm to initiate a secure session. Rkhunter at the ready, unhide(s)
spooled up... iptraf running, tcpdump dumpin'... an hour later, having
logged in to a couple sites to check a week's worth of backlogged
correspondence, and the browser starts slowing. Task manager shows big
caches of javascript and CSS and images and... oh, no. Check your browser
config files, manually - the ones you manually edited for hours last
night, and set chattr +i. They're reverted somehow. There's a proxy
enabled, and silent extensions with no names and no information when you
look for matches by their thumbprints.
Kill your browser with pkill -9... but the browser in your window is still
there. htop... is that legit, or is that a remote xterm session? Why is
sshd running? Who enabled Atari filesystem, ffs!
So it goes...
ՑՑՑ “Owning the Net”
In the first week or two after I got painted, I stuck the name of
"SVGbola" on the malware I had captured... because .svg-format font files
are one of the mechanisms used for the initial inject of targeted network
sessions, and because ebola ofc. But quickly I saw that there were other
vectors, and they seemed to evolve over time. I'd block or disable or find a
way to mitigate one clever ingress tactic, and a few hours later I'd see
the telltale cache-and-traffic stats begin climbing... not again. Two or
three days of frantic battle later, and I'd learned about a couple more
attack/inject tactics, but still had no damned idea what tied them together.
I'd intentionally been avoiding reading those old NSA slide decks, as I
didn't want to taint my perceptions with a "one holds a hammer, and the
world becomes a nail" dynamic. But it was time to dig into the literature
(using a borrowed touchpad... I'd borrowed a few laptops along the way,
from friends and colleagues, to use for some simple email and web tasks...
and managed to brick the hard drives on every single one), and refresh my
memory on this whole "weird NSA MiTM malware" cul-de-sac.
It didn't take long at all...
The NSA began rapidly escalating its hacking efforts a decade ago. In
2004, according to secret internal records, the agency was managing a
small network of only 100 to 150 implants. But over the next six to eight
years, as an elite unit called Tailored Access Operations (TAO) recruited
new hackers and developed new malware tools, the number of implants soared
to tens of thousands. {article date: March 2014}
I had been assuming Stuxnet, in terms of initial infection vector... you
know, a USB stick with sharpie writing on the side that says: PR0N, DO NOT
OPEN!!! <-- that is how you get malware, right? (speaking metaphorically,
sort of)
But this isn't what the NSA is doing with these programs, not at all.
They're selecting targets for injection of malware into live network
sessions - apparently http/https overwhelmingly - on the fly, at "choke
points" where they know the targets' sessions will go by the hundreds of
machines that comprise these NSA 'malnets.' Custom-sculpted network
injections (we call them 'session prions') are forced in, seething with
0days. An analyst in some post-Snowden NSA office tomb clicks a few GUI
elements on her display and the selector logic she was fed by her bosses
primes the Quantum and Foxacid malnets worldwide, waiting for that
signature'd session to show up on their targeting radar.
You've been CIN-painted.
Now, whenever your sessions match that profile, you will get more Foxacid
Alien-implant session payloads coming back from your routine internet
activities. The selectors can be anything that identifies you as a general
profile... the slide decks mention things like Facebook tracking
fingerprints, DoubleClick leech-cookies, twitter oauth header snippets,
and so forth. Physical IP is entirely unnecessary, as is your name or any
other identifier.
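(The injection itself has a wire-level signature that others have documented publicly: the shot response races the legitimate one, so you can sometimes see two TCP segments claiming the same sequence number but carrying different payloads. Here's a crude python/scapy sketch of that idea - assuming scapy is installed and you can sniff the interface as root; real traffic has benign retransmission corner cases this doesn't filter, so treat any hit as a lead, not proof.)
[code]
# qi_watch.py - flag TCP segments that reuse a sequence number with a different
# payload, the tell-tale of an injected response racing the real one. A rough
# sketch assuming scapy is installed and the interface can be sniffed as root;
# benign retransmissions will need filtering before this is useful in anger.
import hashlib
from scapy.all import sniff, IP, TCP, Raw

seen = {}   # (src, sport, dst, dport, seq) -> payload hash

def check(pkt):
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw)):
        return
    key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport, pkt[TCP].seq)
    digest = hashlib.sha256(bytes(pkt[Raw].load)).hexdigest()
    if key in seen and seen[key] != digest:
        print("possible injection: same seq, different payload ->", key)
    seen[key] = digest

if __name__ == "__main__":
    sniff(filter="tcp port 80", prn=check, store=False)
[/code]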
Perhaps the NSA (or its clients in the civilian law enforcement world, in
dozens of countries) wants to find out who runs a particular website...
say, a .onion website like agorahooawayyfoe.onion...
l_ff525d308ba173b66cd3d533cc092237.jpg
This isn't a small-scale effort any more, either. That's what I think I
had unconsciously assumed, that it was a couple hundred people on the
Amerikan drone-list, or whatever. Not making light of such things, but
rather, for me as a technologist, if an attack is bespoke and requires
expertise, it limits it to a tiny, tiny percent of defensive threat
modelling scenarios. And for those on the drone-lists? Well, good luck is
what I'd generally say.
However, these CIN malnets are scaling/scaled to millions of concurrent
painted-chumps. And growing.
The implants being deployed were once reserved for a few hundred
hard-to-reach targets, whose communications could not be monitored through
traditional wiretaps. But the documents analyzed by The Intercept show how
the NSA has aggressively accelerated its hacking initiatives in the past
decade by computerizing some processes previously handled by humans. The
automated system – codenamed TURBINE – is designed to “allow the current
implant network to scale to large size (millions of implants) by creating
a system that does automated control implants by groups instead of
individually.”
In a top-secret presentation, dated August 2009, the NSA describes a
pre-programmed part of the covert infrastructure called the “Expert
System,” which is designed to operate “like the brain.” The system manages
the applications and functions of the implants and “decides” what tools
they need to best extract data from infected machines. {ibid.}
Or for another way of saying it in the NSA's own words, dating from 2009...
intelligent-command-and-control.jpg
ՑՑՑՑ name your poison
Once I realised this was about quite a bit more than simply borked svg's
(which is still a pretty interesting vector, imho), I pulled out the name
#SauronsEye for what I was experiencing: a totalising, all-seeing,
ever-present, burning glare from a height. I was being surveilled, by some
entity somewhere, for some reason. The pressure of the eye was almost
physical, for those middle weeks.
But the name doesn't seem to fit, now that we've been able to fit the
scrambled, jagged mess of data-pieces together into a more or less
fully-coherent understanding of what the system is. Because this stuff
isn't passive; it doesn't simply sit there and watch. Rather, it's 'all up
in your shit,' as they say... every time you get online, however innocuous
and carefully-constrained your activities are, you run the risk of this
happening to your browser once those prions spread through your network
session and shoot right into your local kernel:
12.jpg
A colleague, overhearing us discussing this amongst the team, blurted out
"Balrog." And that's the fit, just so. Yes, it's LoTR and that's drifted
twee of late - but at core Tolkien isn't twee, and he knew his evil as
only an Oxford professor of decrepit languages can know evil.
Balrogs, for the less painfully geeky amongst the readership, are
described by JRR as "they can change their shape at will, and move unclad
in the raiment of the world, meaning invisible and without form" (cite),
which gets it spot-on for our CIN-naming task here. He goes on, waxing a
bit more poetical...
His enemy halted again, facing him, and the shadow about it reached out
like two vast wings… suddenly it drew itself up to a great height, and its
wings were spread from wall to wall…
Shadowy? Check. Great height, and wide (metaphorical) wingspan? Check. But
it's the imagery of the Balrog that seared the name into the very souls of
Tolkien-reading boys such as I. Imagery that quite hits the nail on the
head:
1826732-balrog.jpg
Balrog500ppx.png
That's something of what it feels like to face down this stuff as it
repeatedly pierces one's local perimeter and turns one's root-level kernel
sanctuary into a mutating, unreliable, dishonest, corrupted mess... right
in front of one's eyes. (and yes, I know that computers behaving badly are
very much First World Problems of the most Platonic sort, and hyperbole
aside I remain aware that starvation trumps Cronenberg-transgressed
computational resources when it comes to real problems to have in one's
life)
The final point, for this spot of writing, is this: there is no
"disinfecting" once you are painted as a target by Balrog (or any CIN).
The infection exists ephemerally in the fabric of the internet itself; it's
not something you can simply remove from your computer with antivirus
software (or manually). Trust me on this: even if you are successful in
disinfecting (and that'll require expertise in grub, Xen, containers,
obscure filesystem formats, font encoding, archaic network protocols down
the OSI stack, and on and on and on), dare to actually use the computer to
communicate with others online, and you'll be right back to the
alien-bursting-from-stomach place in short order.
Neither cryptostorm, nor cryptography, can protect you from Balrog, or
from CINs. The session prions come in via legitimate (-ish) web or network
activity. You can't blacklist the websites serving dirty files... because
they aren't coming from websites, these prions. They're phantom-present
everywhere in the internet that's a couple hops from a Foxacid shooter...
which means everywhere, more or less. You can blacklist the internet, I
suppose - offline yourself to stay pure... but that in and of itself
reflects a successful DoS attack by the NSA: they downed you, forever...
I can hear the grumbling from the stalwarts already: "BUT WHAT ABOUT
HTTPS??!?! IT'S SUPER-SECURE AND INVINCIBLE AND SO NSA CAN SUCK EGGS I'M
SAFE BECAUSE HTTPS EVERYWHERE WHOOOOOOO!!"
...
Https - as deployed, in the real world, based on tls & thus x509 &
Certification Authorities & Digicert & ASN.1 & parsing errors & engineered
'print-collisions & DigiNotar & #superfish & all the rest - is so badly,
widely, deeply, permanently, irrecoverably broken on every relevant level
that it merely acts as a tool to filter out dumb or lazy attackers. Those
aren't the attackers we worry about much, are they?
I mean, if we put a lock on our door that would be totally effective in
keeping out newborn babies, caterpillars, and midsized aggregations of
Spanish Moss - but was useless against some dude who just hits the door
with his shoulder to pop it open - then it'd be less than wise to go
cavorting about the neighbourhood, crowing to all who can hear that you
left 500 pounds sterling on the kitchen table and too bad suckers, no
mewling infant will ever find her way in to steal that currency...
wouldn't it?
That's https.
Indeed, I have a... something between a theory, and a strangely intense
fantasy... concept that PEM-encoded certs themselves are being used as an
implant vector by Balrog :-P Or, as my colleague graze prefers to (more
reasonably) suspect, strangely-formatted packets for use in transporting
data between Balrog-sickened victims and the MalCloud of Balrog's control
architecture, globally. Or maybe they're used as meta-fingerprints...
beyond-unicode control characters embedded in obscure fields nobody even
decodes client-side but which can be sniffed cross-site to identify
sessions over time...
Anyway, https. Were we to discover (or read the work of others who
discovered, more likely) super-exotic cert-vectored exploit pathways, we
would not be surprised in the least; it's not that it's 'only' marginally
useful in securing actual data (and network sessions) against CIN-level
active attackers, but rather it's a question of how destructive it is, on
balance. A lot, a little, or somewhere in the middle? That's an open question, but
it's the only one when it comes to https and security.
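(For completeness: the one cheap thing certs do still give you is continuity. A trust-on-first-use pin check - sketched below in python, hostnames and pin-file path being examples - will catch the clumsier cert swaps over time. It does exactly nothing against an adversary who owns the session end to end, which is rather the point of this whole section.)
[code]
# pin_check.py - trust-on-first-use pinning of a server certificate's SHA-256
# fingerprint. Catches clumsy cert swaps only; it is no defence against an
# adversary who owns the session outright. Hostnames and pin file are examples.
import hashlib, json, os, ssl

PIN_FILE = "cert_pins.json"
HOSTS = ["example.com", "example.org"]   # example hosts

def fingerprint(host):
    pem = ssl.get_server_certificate((host, 443))
    return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

if __name__ == "__main__":
    pins = json.load(open(PIN_FILE)) if os.path.exists(PIN_FILE) else {}
    for host in HOSTS:
        fp = fingerprint(host)
        if host not in pins:
            pins[host] = fp
            print(host, "pinned on first use")
        elif pins[host] != fp:
            print(host, "FINGERPRINT CHANGED - rotation, or something worse")
        else:
            print(host, "matches pin")
    with open(PIN_FILE, "w") as f:
        json.dump(pins, f, indent=2)
[/code]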
But remember, many keystrokes ago, we discussed "necessary but not
sufficient?" This is where it folds back in, like an origami crane tucked
in one's pocket...
The defensive techniques that can - and will - protect us from Balrog and
other CINs (there will be others, likely already are... that's a given),
systems-level infected-cloud virulence, must also act as integrated,
coherent, cohesive, outcomes-defined systems. Cryptography
(symmetric & asymmetric primitives alike) is a piece of that, a crucial
piece without which overall systems success would likely be impossible.
But crypto alone is no more protection from Balrog than a single thick
mitten would be against a month in the Arctic during the coldest of
wintertimes. There's more, and more importantly it all needs to fit
together as a sum far greater than its parts: a big pile of right-handed
mittens won't substitute for a proper Inuit snow suit.
Funny thing is, we know how to do that - the systems stuff, the integrated
functionality. It's been where we've headed since last fall, perhaps
reflecting a team-wide intuition that our membership's needs were pulling
us that way. Too, we've been seeing the weirdness out there - fractal
weirdness on the network - for many months: borked routes, fishy certs,
dodgy packets, shifty CDNs, https being https, etc. Little fragments of
mysterious code piggybacking on "VPN service" installers (pretty sure we
know where some of that comes from now, eh?), microsoftupdate.com
hostnames used as C&C for... something? Repository pulls showing up
weird-shaped, with signed hashes to back their dubious claims to
legitimacy.... it goes on and on.
“La semplicità è la massima raffinatezza” - “Simplicity is the ultimate sophistication” (Leonardo da Vinci)
CINs work by corrupting network integrity, at the most fundamental levels:
routing, packet integrity, DNS resolution, asymmetric session identity
validation. They use the trust we all have in those various systems more
or less working as they were designed to work, and as their maintainers
strive to enable them to work... they use that trust as a weapon against
everyone who uses the internet to communicate, from a father in Ghana
texting the family to find out what they'd like for dinner from town, to
the Chilean wind-farmer planning future blade geometries with
meteorological data available online, to the post-quantum information
theory doctoral student in Taiwan who runs her latest research results up
the flagpole with colleagues around the world, to see who salutes... all
get leeched, individually, so CINs can frolic about & implant malware as
their whims dictate.
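(DNS is the easiest of those layers to sanity-check from userland: ask several unrelated resolvers the same question and see whether they agree. A python sketch assuming dnspython is installed; the resolver IPs and the hostname are examples, and CDN geo-answers mean disagreement is a signal to dig further, not proof of tampering.)
[code]
# dns_crosscheck.py - ask several independent resolvers for the same A record
# and flag disagreement. Sketch assuming dnspython (>= 2.x) is installed;
# resolver IPs are examples, and CDN geo-answers make disagreement a signal,
# not proof, of tampering.
import dns.resolver

RESOLVERS = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]   # example public resolvers

def lookup(name, server):
    res = dns.resolver.Resolver(configure=False)
    res.nameservers = [server]
    return sorted(r.to_text() for r in res.resolve(name, "A"))

if __name__ == "__main__":
    name = "example.com"   # example name
    answers = {server: lookup(name, server) for server in RESOLVERS}
    if len({tuple(a) for a in answers.values()}) > 1:
        print("resolvers disagree for", name)
    for server, addrs in answers.items():
        print(server, "->", ", ".join(addrs))
[/code]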
Balrog, and CINs generally, will prove to be our era's smallpox-infested
blankets dropped on trusting First Nation welcoming parties by white guys
behaving badly. We trust the internet to more or less inter-network, and
CINs use that trust as an ideal attack channel because who would really
think?
Well, Balrog - this Balrog, not Tolkien's - is real. Funding is on the
order of $100 million USD a year and growing. It's been up and running a
decade or so, long since out of beta. There's other CINs in the works,
surely... if not deployed already regionally or at limited scale. When
more than one is shooting filth into whatever network sessions catch its
fancy, attribution will be hopeless. It's not like one checks ARIN for
Foxacid records, eh? As to C&C, all evidence suggests Balrog piggybacks on
the incomprehensible route-hostname complexity of the mega-CDNs -
cloudflare, akamai, others so shady and insubstantial it's likely they'll
be gone before this post comes out of final-round edits: you can't
blacklist those, and their hostnames cycle so frequently you can't even do
subhost nullroutes.
So if you are painted, and Balrog is whipping at your NICs, you'll likely
never 'prove' to anyone whose whip made those scars. But the scars are
real, eh? They burn. And it'd be a heck of a lot better to avoid the whip,
rather than burn endless spans of time in Quixotic attempts to prove
whodunit when whodunit dun moved to the cloud, address uncertain and
changing by day.
So that's our job now, at cryptostorm: post-crypto network security.
Crypto, Reloaded. Crypto... but wait, there's more! Protection from an
ugly blanket of festering sickness already grown into the fabric of the
internet itself, and sinking its violation deeper every day. Assurance
that sessions go where intended, get there without fuckery, and come back
timely, valid, & clean.
One cannot simply 'clean' Balrog off, as the infection is entwined with
the internet itself.
Within that spreading rot, there exists the latent possibility of clean
secret pathways, reliable protected networks delivering assured transit
and deep-hardened privacy for every session, every packet, every bit... an
underground railroad of peaceful packets. Identifying and alerting to
network level threats is all well and good, but useless compared to threat
transcendence.
Done right, that kind of service delivery creates a
network-within-the-network, a sanctuary for people to talk and share and
live their lives with meaning, confidence, and peace.
º¯º
º¯¯º
...cryptostorm's sanctuary comes now ±
~ pj