[occi-wg] thought on interoperability vs integration

Alexis Richardson alexis.richardson at gmail.com
Sat May 9 13:47:56 CDT 2009


Hi all,

Thanks for a thought-provoking week of emails on the OCCI-WG list.
Especially thanks to Sam, Richard, Ben and Tim for laying out a lot of
the issues.

One link that I found useful was this:
http://www.tbray.org/ongoing/When/200x/2009/03/16/Sun-Cloud where we
find the following statement:

---
if Cloud technology is going to take off, there’ll have to be a
competitive ecosystem; so that when you bet on a service provider, if
the relationship doesn’t work out there’s a way to take your business
to another provider with relatively little operational pain. Put
another way: no lock-in.  ... I got all excited about this back in
January at that Cloud Interop session. Anant Jhingran, an IBM VIP,
spoke up and said “Customers don’t want interoperability, they want
integration.” ...  “Bzzzzzzzzzt! Wrong!” I thought. But then I
realized he was only half wrong; anyone going down this road needs
integration and interoperability.
---

What we have been discussing is "for customers".  So it is about both
of these things:

* Interoperability ("interop")
* Integration

This made me realise that our OCCI discussions have correctly been
about both issues.  But, incorrectly, we have been commingling the
two.  For example, a person will say "you need X for interop" and the
reply will be "but you need Y", when in fact Y is for integration.  And
vice versa.

This is a problem for us because it leads to confusion.  But it's a
symptom of a larger problem which is that interop and integration have
opposite requirements.

* Interop is about reducing the entropy in a general purpose system
down to some level of behavioural invariance.  This minimises the cost
of meeting the defined behaviour correctly, and minimises the risk of
different behaviours breaking interop.  This is in turn about having a
defined behaviour (in some format) and minimising its scope.

* Integration is about minimising the frictions between (a) the
general purpose system and (b) any specific purpose external system
that may be coupled to it.  It is also, often, about maximising the
number of specific systems (or "features") that may be connected.
Since we don't know ex ante what those are, this tends to maximise
scope (eg "feature creep").  Too much specificity is the same as
complexity.

Because interop requires minimal scope, and integration pushes for
maximal scope, they are in tension.

They are BOTH about simplicity.  Simplicity cannot be invoked on its
own, as a reason for preferring interop over integration:

* Interop is about simplicity of definition

* Integration is about simplicity of end use

BUT

* Interop simplicity is ex ante

* Integration simplicity is ex post

We cannot predict all ex post issues, but we can try and make simple
definitions ex ante.  I argue below this means we have to have ONE
definition.

So let's look at interop first:

It's really important that interop be based on defined behaviour, or
it cannot be verified.  Lack of verifiability is almost always a
symptom that something is opaque, complex and ambiguous, which will
break interop in subtle ways that cannot be seen in advance (due to
the opacity).  This later leads to brittleness when interop and
integration are attempted in practice; and that leads to expensive
patching at all levels, ... which is one thing that WS-* exhibits.

NOTE 1 on WS-* ---- IMO this was not accidental: it was due to
focussing on solving integration before solving interop.  IIRC, the
first WS-* effort was an Interop committee set up to fix minor vendor
mismatches and versioning issues in prior WS protocols such as WSDL
and SOAP.  The formalisms of WSDL and SOAP were not precise enough to
spot these issues upfront, so they were left until it was too late.

Now let's look at the implications of having a definition of behaviour:

You cannot define a system using multiple definitions.  You have to
have one definition, preferably in a formalism that admits of
automatic conformance testing.  In the case of data formats, this
leads us to three possible choices:

1) We remove data formats from the interop profile.  They are not part
of interop.  Data interop is excluded.  Data payloads are opaque
blobs.

OR
2) We have one data format that can be defined unambiguously.

OR
3) We have multiple formats and derive them from a single unambiguous
definition using canonical, verifiable, automatable mappings (sketched
just below).  This definition can either be in a new format which we
invent (option 3a) or it can be in an existing format (option 3b).
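
To make option (3b) concrete, here is a minimal Python sketch (the
resource fields are invented for illustration, not proposed OCCI
attributes): one core JSON definition, plus one canonical, mechanically
checkable mapping per extra format.  Option (2) is the same sketch with
the mapping deleted, which is exactly the complexity Occam's razor cuts
below.

    # A sketch of option (3b): one core definition plus one canonical mapping
    # per extra format.  Field names here are made up for illustration only.
    import json
    import xml.etree.ElementTree as ET

    CORE = '{"id": "vm-42", "state": "running", "cores": 2}'   # the single definition

    def to_xml(core_json):
        """Canonical, automatable mapping from the core JSON form to an XML rendering."""
        resource = json.loads(core_json)
        root = ET.Element("resource")
        for key, value in resource.items():
            ET.SubElement(root, key).text = str(value)
        return ET.tostring(root, encoding="unicode")

    def mapping_round_trips(core_json):
        """Verifiable: map out to XML, map back, and compare with the original."""
        root = ET.fromstring(to_xml(core_json))
        recovered = {child.tag: child.text for child in root}
        original = {k: str(v) for k, v in json.loads(core_json).items()}
        return recovered == original

    print(to_xml(CORE))
    print(mapping_round_trips(CORE))   # True: the extra format costs exactly one mapping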

I am going to rule out option (3a) on grounds of time and due to the
prevalence of existing candidates.

Also, I think choice (1) is a complete cop-out -- I don't see how we
can claim 'useful' interop without data interop.  This leaves options
(2) and (3b).  BOTH of these options require one format.

By Occam's razor, option (2) is preferable on grounds of simplicity -
option (3b) does not add anything except one mapping for every extra
format that is based on the core definition.  Such complexity MUST be
deferred as long as possible, and MAY be pushed into integration use
cases.  The latter is preferable.  Complexity MAY be added later if
necessary, and then only reluctantly.

As a general observation, if you have complexity (entropy) in a system,
it is very hard to take out.  It may permeate the system at all
levels.  Removal of the complexity is known as 'complete refactoring'.
This is a bad outcome.  Even worse is when refactoring cannot be done
for commercial reasons, eg because the systems are in production.
Then the complexity must be encapsulated and hidden.  Attempts to wrap
complex behaviours in simpler packaging usually fail.

NOTE 2 on WS-* ---- This tried to do too much and ended up trying to
wrap the complexity in simple packaging.  This usually fails to work
as we have seen.

Another general observation:

* Integration requires interop, i.e. the use of a common general
interop model; otherwise it is piecemeal and pointless.  Example - the
old 'integration brokers' that had N*N connections to maintain (with
ten systems, that is on the order of a hundred point-to-point links),
and which got replaced by N connections to a common system.

* But you can have interop without integration - it just means 'you
have a smaller audience'.  This is fine because you can always grow
your audience by integrating more cases with the interoperating core.
It is easier to do that when the interoperating core is
programmatically simple (as in low entropy, small code blocks, easy to
test conformance to the definition).

I would like to add some observations from the world of AMQP...

AMQP-1) It is a protocol with one simple common data format - it gets
that right.  We leave it to integration products and services to
support data formats at the edge (eg "ESBs").  OCCI should not be like
an ESB - that is for products and services in the integration
business.

AMQP-2) That AMQP data format is not XML - see below for more thoughts
on that.  XML can be carried as an attached payload (just as it can be
in the JSON case btw).

AMQP-3) The 0-8 and 0-9 specs took 18 months of production use to show
up the many very tiny interop bugs.  We used those to create 0-9-1
which does have interop (we have tested this) and is only 40 pages
long.  This would not have been possible with a complex spec.  It
would not have been possible with multiple data formats.

AMQP-4) The 0-10 spec was focussed on supporting a lot of use cases, eg
"integrate with multiple transports including SCTP, TCP and
UDP", and adding lots of features at all levels (eg JMS, support for
hardware, ..).  That spec is really complicated and nearly 300 pages
long.  Some great new ideas are in it, but it's long and in my own
opinion not supportive of two interoperating implementations.

AMQP-5) All these painful lessons have taken the AMQP working group to
a much happier place with AMQP 1.0, which tries to simplify everything
by taking stuff out that is not needed for interop, plus refactoring
(see my comments above on how removing entropy is hard) and clean-up.

All of the above has taken time because we did not learn from WS-*.
We did too much too fast and confused interop with integration.  We
are back on track now.

Now to the issue of data formats.  I have already argued that FOR
INTEROP, there must be one definition.  I argued that the best way to
do this is via a suitable single format.  We can support as many
recommended ways as we like FOR INTEGRATION ... and they can be
evolved over time.

Here is my 2c on XML.

XML-1) XML lets you do too much - because of namespaces and XSD it is
in effect multiple formats (a tiny illustration follows after XML-2).
This is bad - we want a single, testable, constrained definition for
data interop.

XML-2) To enforce compliance with a simple XML definition, you need to
have an extra definition of a well-formed 'small' document.  But
creating a new definition of a data format in XML is equivalent to
defining a new data format, the same as 'option 3a' above -- and that
option was ruled out above on grounds of time constraints, provided
that a suitable alternative exists already (see JSON claims below).
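
As a rough illustration of (XML-1) and (XML-2), here is a hedged Python
sketch (the element and attribute names are made up): two well-formed
XML documents carrying the same information in different shapes.
Nothing in XML itself says which shape is "the" format, which is why an
extra 'small document' profile would be needed on top.

    # Both documents below are well-formed XML carrying the same information,
    # yet they differ in structure, so a consumer needs an extra profile to
    # know which shape to expect.  Names are invented for illustration only.
    import xml.etree.ElementTree as ET

    doc_a = '<compute xmlns="http://example.org/a"><cores>2</cores></compute>'
    doc_b = '<compute cores="2"/>'

    for doc in (doc_a, doc_b):
        root = ET.fromstring(doc)
        print(root.tag, root.attrib, [(child.tag, child.text) for child in root])
    # Both parse cleanly; XML alone does not pick one rendering as canonical.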

NOTE 3 on WS-* ---- IMHO a third reason why WS-* failed to be simple
enough to get happy, quick and wide adoption is that the (XML-1) issue
left too much data integration in the hands of customers, because
vendors could not produce useful products when XML could be all things
to all people.  By not delivering on data integration, it became hard
to deliver on the promise of integration at all.  And recall that
lowering integration costs was the selling point...

So let's get INTEROP right and then do INTEGRATION.

Interop requires:

* Model - yes
* Data format - one definition
* Metadata - ? tbd

As an aside - I think that GData is the nuts but it is also really an
*integration technology*.

Now, here are some claims about JSON:

JSON-1) Sun have demonstrated that it is plausible as a data model for
a cloud API.  That makes it plausible for us to ask: can it be used as
the core definition for interop?

JSON-2) It is lower entropy than XML.  This makes it easy to test
conformance (see the sketch after these points).

JSON-3) This means we do things the right way round -- simpler 'ex
ante' choices make it EASIER for us to extend and enhance the protocol
on an as-needed basis.  For example, carrying OVF payloads, or other
integration points.  Many will be done by users.
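
To show what 'easy to test conformance' (JSON-2) could look like in
practice, here is a minimal Python sketch.  The required keys and types
are hypothetical examples, not a proposed OCCI schema.

    # Minimal sketch of automatable conformance testing against a single JSON
    # definition.  The required keys and types are hypothetical, not OCCI.
    import json

    REQUIRED = {"id": str, "state": str, "cores": int}

    def conforms(document):
        """True if the document has exactly the required keys, each of the required type."""
        try:
            resource = json.loads(document)
        except ValueError:
            return False
        if not isinstance(resource, dict) or set(resource) != set(REQUIRED):
            return False
        return all(isinstance(resource[k], t) for k, t in REQUIRED.items())

    print(conforms('{"id": "vm-42", "state": "running", "cores": 2}'))   # True
    print(conforms('{"id": "vm-42", "state": "running"}'))               # False: missing key

Because there is exactly one definition, every implementation can share
the same small check and run it automatically against the spec.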

So my recommendation is that the best way to avoid WS-* outcomes is

A) Use one format for interop.  Do interop first.  Integration later.
B) *At least for now* and *during the working draft stage*, use JSON
C) Other formats, for now, are "integration".  But we do interop first.

OK.... I have to run off.  I wrote this down in one go and don't have
time to review it.  Apologies for any mistakes in the presentation.

What do you all think?

alexis


