[occi-wg] thought on interoperability vs. integration

Alexis Richardson alexis.richardson at gmail.com
Sun May 10 11:01:14 CDT 2009


Gary,

Thanks for this note.

My claim is that interoperability, if it is to cover data (and IMO it
has to), must use a formalism to describe that data, which means
choosing or making a format.  I prefer choosing over making.  While
there can be only one format/formalism for interop, there can be many
for integration.

You say "A JSON only solution may alienate most of the world."  The
world will use OCCI directly or through an integration point.  OCCI
will not be "X only" at the integration points because it cannot be.
The question of "how OCCI is adopted" is one of integration.  The
whole point is to not impose one cloud implementation or format on
people at the cloud integration endpoint.

I am against interop formats being user-extensible because this WILL
break interop.  I am in favour of integration ("rendering") formats
being user-extensible.  (Sidenote: extending interop behaviour is
foreseeable, but it should be done by agreement and only after OCCI
1.0 is defined.)
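
To make the distinction concrete, here is a minimal, purely
hypothetical sketch in Python (the field names are illustrative, not
an agreed OCCI schema): the interop layer is strict about the agreed
definition, while the integration ("rendering") layer may tolerate
extensions.

    # Hypothetical sketch -- field names are NOT an agreed OCCI schema.
    INTEROP_FIELDS = {"id", "kind", "state"}   # closed set, fixed by the spec

    def check_interop(resource: dict) -> None:
        """Reject anything outside the agreed interop definition."""
        unknown = set(resource) - INTEROP_FIELDS
        missing = INTEROP_FIELDS - set(resource)
        if unknown or missing:
            raise ValueError(f"not interoperable: unknown={unknown}, missing={missing}")

    def render_for_integration(resource: dict) -> dict:
        """Integration side: tolerate vendor extensions, pass them through."""
        return dict(resource)

    check_interop({"id": "vm-1", "kind": "compute", "state": "active"})   # ok
    # Adding "x-vendor-colour": "blue" would make check_interop raise:
    # extensions belong in the rendering layer, not the interop format.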

More later ... got to run.

alexis




On Sun, May 10, 2009 at 1:07 AM, Gary Mazz <garymazzaferro at gmail.com> wrote:
> BTW, Happy Birthday....
>
> Some general observations on this thread....
>
> I don't believe anyone is asking to disregard your work, at least I hope
> not.  But what is happening here is a pitfall of starting with a light
> model and an API without a clear definition of requirements based on use
> models.
>
> Let's touch on some of the issues, at least from my perspective. Sorry
> for the terseness and directness; I'm very busy...
>
> 1) There are several public strategic use models emerging for clouds,
> arising outside the control of this group. Many are still evolving.
>
> 2) Success of this effort, in part, is dependent on the ability to adopt
> and integrate other well-known and deployed standards and technologies.
>
> 3) An API is a REPRESENTATION of actions and data with implicit or
> explicit organization into groups of interfaces. Actions, data and
> organization will be defined by the use models, or the use models will
> at least provide enough that an API can be extrapolated from them.
>
> 4) Any particular API can be RENDERED based on the native needs of the
> API consumer. We call them mappings. For example, you would not directly
> use a PHP or JavaScript API in a C/C++ program without translating it
> into a form (a mapping) that the C/C++ compilers can accept and the
> executing code can reach. C/C++ doesn't care about CDATA.
>
> 5) Many standards, specifications and technologies, deployed for
> decades, are defined and implemented in XML.
>
> 6) Entropy can occur through a lack of precision, i.e. of clearly defined
> bounds. Ambiguity equals entropy.
>
> Summarizing: the API represents a model; the rendered format of the API
> will depend on the use model (including programming languages,
> development toolchains and execution environments). We, meaning the
> industry and the world, don't have well-defined or detailed use models
> today.
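>
> As a rough sketch of what I mean by rendering (the attribute names here
> are purely illustrative, not a proposed OCCI model), the same in-memory
> model can be mapped to more than one wire format, and only the model
> itself needs standardising:
>
>     import json
>     import xml.etree.ElementTree as ET
>
>     model = {"id": "vm-42", "cores": 2, "memory_gb": 4}   # the model
>
>     def render_json(m: dict) -> str:
>         # one rendering, chosen by a JSON-native consumer
>         return json.dumps({"compute": m})
>
>     def render_xml(m: dict) -> str:
>         # another rendering of the SAME model, for an XML-native consumer
>         root = ET.Element("compute")
>         for key, value in m.items():
>             ET.SubElement(root, key).text = str(value)
>         return ET.tostring(root, encoding="unicode")
>
>     print(render_json(model))   # {"compute": {"id": "vm-42", ...}}
>     print(render_xml(model))    # <compute><id>vm-42</id>...</compute>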
>
> My Opinions
> Simple is better, but who cares if you can only interoperate with yourself?
>
> A JSON-only solution may alienate most of the world, although all the
> consultants will have a field day with mappings, at least until the
> mappings have to interoperate.
>
> The OCCI effort may, and probably will, support more than one API and
> rendering.
>
> OCCI NEEDS to develop profile-based use cases. The profiles can be
> used to set requirements and priorities for mappings.
>
> Trading off definition at the front of a process for expedience
> always results in exponential expenditures as you continue down the path
> of deployment. JSON lacks definition and increases ambiguity. I'm not
> sure what interop validation will look like, but the bounds of JSON data
> may require significantly more work and definition before validation
> can be achieved.
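>
> As a sketch of what that extra definition work might look like (the
> schema below is hypothetical and uses the third-party jsonschema
> package; it is not an OCCI artefact), validation of JSON is achievable,
> but only once somebody writes the bounds down and everyone agrees on
> them:
>
>     from jsonschema import validate, ValidationError
>
>     compute_schema = {
>         "type": "object",
>         "properties": {
>             "id":    {"type": "string"},
>             "cores": {"type": "integer", "minimum": 1},
>         },
>         "required": ["id", "cores"],
>         "additionalProperties": False,   # this is what pins down the bounds
>     }
>
>     try:
>         validate(instance={"id": "vm-1", "cores": 2}, schema=compute_schema)
>         validate(instance={"id": "vm-1", "cores": "two"}, schema=compute_schema)
>     except ValidationError as err:
>         print("not valid:", err.message)   # the second instance fails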
>
> BTW, I have a requirement for binary defined data...
>
> -gary
>
> Sam Johnston wrote:
>> OK, so since we're talking about trashing countless hours of my work
>> and missing our deadlines for what appears to be (assuming perhaps a
>> little too much good faith) nothing more than personal preference, and
>> bearing in mind that I've been working on this since 4am this morning,
>> on my birthday, which I've since had to cancel due to a nasty plane
>> flu (thus the surly meter is already well and truly pegged), I'm of
>> course going to have something to say - even if I lack the energy and
>> inclination to give this the comprehensive retort that it deserves at
>> this time.
>>
>> For a start, I know nothing about AMQP, so I have nothing to say about
>> the AMQP assertions, but the XML and JSON assertions are patently
>> false and bordering on FUD (XML having extensibility and validation
>> features when JSON doesn't is a drawback? JSON has less "entropy" so
>> it's easier to test conformance even in the absence of schemas? XML is
>> to blame for WS-*? Sun Cloud API, currently a work in progress, has
>> proven something? Seriously, WTF?).
>>
>> There is absolutely no question from anyone that XML is up to the task
>> (Google have already proven it on a large scale) but there are
>> significant concerns about whether JSON is - concerns that remain
>> unanswered despite my repeated requests for answers and concerns that
>> this proposal seeks to postpone or avoid entirely, thus making them
>> someone else's problem. While most APIs currently support multiple
>> formats (even GData offer a JSON option for convenience if not interop)
>> I have yet to find a single successful JSON-only API, so this all
>> sounds a bit too experimental for my liking.
>>
>> Furthermore, given that things that are trivial with XML (transforms,
>> embedding, signatures, encryption, etc.) are currently impossible with
>> JSON, this is the mother of all externalisations (that is, leaving our
>> users up the creek without a paddle when we are in the best position
>> to solve the problem by making practical choices). It's also a
>> disappointing attempt to foist personal preferences on others when we
>> can trivially cater for the entire spectrum of needs using XML & XSLT
>> (work which I have already done and demonstrated, I might add - the
>> absence of any volunteers to reinvent the wheel, given I'm not about to,
>> is also disconcerting). Other difficult questions, like the
>> serialisation and serious performance problems of using hrefs (1
>> request for XML vs 1+n for JSON), have also been conveniently ignored
>> thus far, and the best answer I've got about the absence of embedding
>> functionality in JSON is that, as XML isn't perfect, we're better off
>> with nothing?!?
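>>
>> As a back-of-envelope illustration of that 1 vs 1+n point (hypothetical
>> payloads, not anyone's actual API), an embedded collection arrives in
>> one round trip while an href-only collection forces a follow-up fetch
>> per entry:
>>
>>     embedded_response = {   # one round trip, n entries inline
>>         "compute": [{"id": "vm-1"}, {"id": "vm-2"}, {"id": "vm-3"}],
>>     }
>>
>>     href_response = {       # one round trip, links only
>>         "compute": ["/compute/vm-1", "/compute/vm-2", "/compute/vm-3"],
>>     }
>>
>>     def requests_needed(collection: dict, embedded: bool) -> int:
>>         n = len(collection["compute"])
>>         return 1 if embedded else 1 + n   # 1 list call (+ n detail calls)
>>
>>     print(requests_needed(embedded_response, embedded=True))    # 1
>>     print(requests_needed(href_response, embedded=False))       # 4 = 1 + n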
>>
>> Most critically though, by telling SNIA, Google and pretty much
>> everyone else in the cloud standards space that they can go fornicate
>> themselves, and in doing so missing the [only] opportunity to
>> standardise a well-proven, risk-free, extensible protocol on which
>> others can safely build, I think we may well be witnessing the
>> inception of CC-* (that is, WS-*, JSON edition). More specifically, if
>> we slam the door on the possibility of others building on our efforts
>> then they will have little choice but to blaze their own trail,
>> forcing clients to support a myriad of standards (OCCI for trivial
>> management tasks, OVF for compute resources, SNIA's stuff for storage
>> and possibly networking, etc.).
>>
>> I for one have negative interest in supporting something that I
>> believe could do more damage than good, and knowing what happened
>> previously, learning from our mistakes and practicing
>> the precautionary principle
>> <http://en.wikipedia.org/wiki/Precautionary_principle> would be well
>> advised. I'm currently OCCI's number one fan, but if this attempt to
>> pwn the process is successful then those who witnessed the CCIF goat
>> rodeo well know that I have no problems saying it how I see it.
>>
>> What I would suggest is that those who want to see JSON adopted get to
>> work on cutting code to prove the concept, as I have done already for
>> XML - loose consensus and running code, remember.
>>
>> Sam (who can't help but observe that the key JSON junkies all have
>> significant business investment in same)
>>
>> On Sat, May 9, 2009 at 10:25 PM, Krishna Sankar (ksankar)
>> <ksankar at cisco.com> wrote:
>>
>>     Alexis et al,
>>     a)      Wiki
>>            Yep, time to have a Wiki or some form of threaded discussion,
>>     other than e-mail. Having said that, let me add to the entropy ;o)
>>
>>     b)      Interop vs Integration
>>            Agreed. Just as one more POV, interop tests the functionality
>>     while integration tests the ability to work with various systems. So
>>     interop first, with one format, is good.
>>            As you said, our first goal is to be functionally complete and
>>     have that canonical model, which, when done, means collectively we
>>     have understood the essential domain complexity (at least the
>>     current state of the art, as Tim pointed out a while ago).
>>
>>     c)      "Transport"
>>            Interesting pointer to AMQP. What about XMPP as the substrate?
>>     (Of course, it could mean XML ;o))
>>
>>     d)      CDATA
>>            The CDATA issue pointed out by Sam is an interesting one. I was
>>     also thinking of the implications and wondered whether we plan to
>>     chuck OVF through our APIs. I think we can take a position like
>>     HTML's - i.e. use the href mechanism as far as possible and base64
>>     for the rest.
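>>
>>     As a quick sketch of that position (field names purely illustrative,
>>     not a proposed rendering), big payloads would be referenced by href
>>     where possible and base64-encoded inline only when they must be:
>>
>>         import base64
>>
>>         ovf_bytes = b"<Envelope>...an OVF descriptor...</Envelope>"
>>
>>         # Preferred: pass the payload by reference, html-style.
>>         by_reference = {"ovf": {"href": "http://example.com/images/app.ovf"}}
>>
>>         # Fallback: embed the opaque blob, base64-encoded.
>>         inline = {"ovf": {"content-type": "application/ovf",
>>                           "base64": base64.b64encode(ovf_bytes).decode("ascii")}}
>>
>>         # Round-trip check for the inline case:
>>         assert base64.b64decode(inline["ovf"]["base64"]) == ovf_bytes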
>>
>>     e)      OGF-25 Presentation
>>            Most probably we will not reach agreement on all the important
>>     points by OGF-25. We don't have to. At OGF-25, we present the state
>>     of our work - demarcate what we have agreed upon, and the
>>     alternatives we are working on. And until 1.0, things will be
>>     unstable anyway. Plus we can honestly represent the different camps
>>     and seek insights from the crowd. So I do not think OGF-25 is on
>>     the critical path in any way.
>>
>>     f)      Disagreeing with Sam
>>            I also agree that we should not disagree with Sam en masse. But
>>     as Alexis points out, minimal scope vs. maximum coverage is the
>>     key. We can restrict or curtail the APIs to a minimum set, without
>>     any reduction in functionality; for example, use hrefs instead of
>>     embedding. Constraints are wonderful things!
>>
>>     Cheers & have a nice day
>>     <k/>
>>
>>
>>     |-----Original Message-----
>>     |From: occi-wg-bounces at ogf.org [mailto:occi-wg-bounces at ogf.org]
>>     |On Behalf Of Alexis Richardson
>>     |Sent: Saturday, May 09, 2009 11:48 AM
>>     |To: occi-wg at ogf.org
>>     |Subject: [occi-wg] thought on interoperability vs. integration
>>     |
>>     |Hi all,
>>     |
>>     |Thanks for a thought-provoking week of emails on the OCCI-WG list.
>>     |Especially thanks to Sam, Richard, Ben and Tim for laying out a lot
>>     |of the issues.
>>     |
>>     |One link that I found useful was this:
>>     |http://www.tbray.org/ongoing/When/200x/2009/03/16/Sun-Cloud where we
>>     |find the following statement:
>>     |
>>     |---
>>     |if Cloud technology is going to take off, there'll have to be a
>>     |competitive ecosystem; so that when you bet on a service provider, if
>>     |the relationship doesn't work out there's a way to take your business
>>     |to another provider with relatively little operational pain. Put
>>     |another way: no lock-in.  ... I got all excited about this back in
>>     |January at that Cloud Interop session. Anant Jhingran, an IBM VIP,
>>     |spoke up and said "Customers don't want interoperability, they want
>>     |integration." ...  "Bzzzzzzzzzt! Wrong!" I thought. But then I
>>     |realized he was only half wrong; anyone going down this road needs
>>     |integration and interoperability.
>>     |---
>>     |
>>     |What we have been discussing is "for customers".  So it is about both
>>     |of these things:
>>     |
>>     |* Interoperability ("interop")
>>     |* Integration
>>     |
>>     |This made me realise that our OCCI discussions have correctly been
>>     |about both issues.  But, incorrectly, we have been commingling the
>>     |two.  For example a person will say "you need X for interop" and the
>>     |reply will be "but you need Y" when in fact Y is for integration.
>>     |And vice versa.
>>     |
>>     |This is a problem for us because it leads to confusion.  But it's a
>>     |symptom of a larger problem which is that interop and integration
>>     |have opposite requirements.
>>     |
>>     |* Interop is about reducing the entropy in a general purpose system
>>     |down to some level of behavioural invariance.  This minimises the
>>     |cost of meeting the defined behaviour correctly, and minimises the
>>     |risk of different behaviours breaking interop.  This is in turn
>>     |about having a defined behaviour (in some format) and minimising
>>     |its scope.
>>     |
>>     |* Integration is about minimising the frictions between (a) the
>>     |general purpose system and (b) any specific purpose external system
>>     |that may be coupled to it.  It is also, often, about maximising the
>>     |number of specific systems (or "features") that may be connected.
>>     |Since we don't know ex ante what those are, this tends to maximise
>>     |scope (eg "feature creep").  Too much specificity is the same as
>>     |complexity.
>>     |
>>     |Because interop requires minimal scope, and integration pushes for
>>     |maximal scope, they are in tension.
>>     |
>>     |They are BOTH about simplicity.  Simplicity cannot be invoked on its
>>     |own, as a reason for preferring interop over integration:
>>     |
>>     |* Interop is about simplicity of definition
>>     |
>>     |* Integration is about simplicity of end use
>>     |
>>     |BUT
>>     |
>>     |* Interop simplicity is ex ante
>>     |
>>     |* Integration simplicity is ex post
>>     |
>>     |We cannot predict all ex post issues, but we can try and make simple
>>     |definitions ex ante.  I argue below this means we have to have ONE
>>     |definition.
>>     |
>>     |So let's look at interop first:
>>     |
>>     |It's really important that interop be based on defined behaviour.  Or
>>     |it cannot be verified.  Lack of verifiability is almost always a
>>     |symptom that something is opaque, complex and ambiguous, which will
>>     |break interop in subtle ways that cannot be seen in advance (due to
>>     |the opacity).  This later leads to brittleness when interop and
>>     |integration are attempted in practice; and that leads to expensive
>>     |patching at all levels, ... which is one thing that WS-* exhibits.
>>     |
>>     |NOTE 1 on WS-* ---- IMO this was not accidental, and due to
>>     |focussing on solving integration before solving interop.  IIRC, the
>>     |first WS-* was an Interop committee to fix minor vendor mismatches
>>     |and versioning issues in prior WS protocols such as WSDL and SOAP.
>>     |The formalisms of WSDL and SOAP were not precise enough to spot
>>     |these issues upfront, so they were left until it was too late.
>>     |
>>     |Now let's look at the implications of having a definition of
>>     |behaviour:
>>     |
>>     |You cannot define a system using multiple definitions.  You have to
>>     |have one definition, preferably in a formalism that admits of
>>     |automatic conformance testing.  In the case of data formats, this
>>     |leads us to three possible choices:
>>     |
>>     |1) We remove data formats from the interop profile.  They are not
>>     |part of interop.  Data interop is excluded.  Data payloads are
>>     |opaque blobs.
>>     |
>>     |OR
>>     |2) We have one data format that can be defined unambiguously.
>>     |
>>     |OR
>>     |3) We have multiple formats and derive them from a single unambiguous
>>     |definition using canonical, verifiable, automatable, mappings.  This
>>     |definition can either be in a new format which we invent (option 3a)
>>     |or it can be in an existing format (option 3b).
>>     |
>>     |I am going to rule out option (3a) on grounds of time and due to the
>>     |prevalence of existing candidates.
>>     |
>>     |Also, I think choice (1) is a complete cop-out -- I don't see how we
>>     |can claim 'useful' interop without data interop.  This leaves options
>>     |(2) and (3b).  BOTH of these options require one format.
>>     |
>>     |By Occam's razor, option (2) is preferable on grounds of simplicity -
>>     |option (3b) does not add anything except one mapping for every extra
>>     |format that is based on the core definition.  Such complexity MUST be
>>     |deferred as long as possible, and MAY be pushed into integration use
>>     |cases.  The latter is preferable.  Complexity MAY be added later if
>>     |necessary and then, reluctantly.
>>     |
>>     |As a general observation, if you have complexity (entropy) in a
>>     |system, it is very hard to take out.  It may permeate the system at
>>     |all levels.  Removal of the complexity is known as 'complete
>>     |refactoring'.  This is a bad outcome.  Even worse is when
>>     |refactoring cannot be done for commercial reasons, eg because the
>>     |systems are in production.  Then the complexity must be
>>     |encapsulated and hidden.  Attempts to wrap complex behaviours in
>>     |simpler packaging usually fail.
>>     |
>>     |NOTE 2 on WS-* ---- This tried to do too much and ended up trying to
>>     |wrap the complexity in simple packaging.  This usually fails to work
>>     |as we have seen.
>>     |
>>     |Another general observation:
>>     |
>>     |* Integration requires interop, ie. the use of a common general
>>     |interop model; otherwise it is piecemeal and pointless.  Example -
>>     |the old 'integration brokers' that had N*N connections to maintain,
>>     |and which got replaced by N connections to a common system.
>>     |
>>     |* But you can have interop without integration - it just means 'you
>>     |have a smaller audience'.  This is fine because you can always grow
>>     |your audience by integrating more cases with the interoperating
>>     |core.  It is easier to do that when the interoperating core is
>>     |programmatically simple (as in low entropy, small code blocks, easy
>>     |to test conformance to the definition).
>>     |
>>     |I would like to add some observations from the world of AMQP...
>>     |
>>     |AMQP-1) It is a protocol with one simple common data format - it
>>     |gets that right.  We leave it to integration products and services
>>     |to support data formats at the edge (eg "ESBs").  OCCI should not
>>     |be like an ESB - that is for products and services in the
>>     |integration business.
>>     |
>>     |AMQP-2) That AMQP data format is not XML - see below for more
>>     |thoughts on that.  XML can be carried as an attached payload (just
>>     |as it can be in the JSON case btw).
>>     |
>>     |AMQP-3) The 0-8 and 0-9 specs took 18 months of production use to
>>     |show up the many very tiny interop bugs.  We used those to create
>>     |0-9-1 which does have interop (we have tested this) and is only 40
>>     |pages long.  This would not have been possible with a complex spec.
>>     |It would not have been possible with multiple data formats.
>>     |
>>     |AMQP-4) The 0-10 spec was focussed on supporting a lot of use cases
>>     |eg "integrate with multiple transport formats including SCTP and
>>     |TCP and UDP" and adding lots of features at all levels (eg JMS,
>>     |support for hardware, ..).  That spec is really complicated and
>>     |nearly 300 pages long.  Some great new ideas are in it, but it's
>>     |long and in my own opinion not supportive of two interoperating
>>     |implementations.
>>     |
>>     |AMQP-5) All these painful lessons have taken the AMQP working group
>>     |to a much happier place with AMQP 1.0 which tries to simplify
>>     |everything by taking stuff out that is not needed for interop, plus
>>     |refactoring (see above my comments on how removing entropy is hard)
>>     |and clean-up.
>>     |
>>     |All of the above has taken time because we did not learn from WS-*.
>>     |We did too much too fast and confused interop with integration.  We
>>     |are back on track now.
>>     |
>>     |Now to the issue of data formats.  I have already argued that FOR
>>     |INTEROP, there must be one definition.  I argued that the best way to
>>     |do this is via a suitable single format.  We can support as many
>>     |recommended ways as we like FOR INTEGRATION ... and they can be
>>     |evolved over time.
>>     |
>>     |Here is my 2c on XML.
>>     |
>>     |XML-1) XML lets you do too much - because of namespaces and xsd it
>>     |is in effect multiple formats.  This is bad - we want a single,
>>     |testable, constrained definition for data interop.
>>     |
>>     |XML-2) To enforce compliance with a simple XML definition, you need
>>     |to have an extra definition of a well-formed 'small' document.  But
>>     |creating a new definition of a data format in XML is equivalent to
>>     |defining a new data format, the same as 'option 3a' above.  But
>>     |that option was ruled out above, on grounds of time constraints,
>>     |provided that a suitable alternative exists already (see JSON
>>     |claims below).
>>     |
>>     |NOTE 3 on WS-* ---- IMHO a third reason why WS-* failed to be simple
>>     |enough to get happy, quick and wide adoption, is that the (XML-1)
>>     |issue left too much data integration in the hands of customers,
>>     |because vendors could not produce useful products when XML could be
>>     |all things to all people.  By not delivering on data integration,
>>     |it became hard to deliver on the promise of integration at all.
>>     |And recall that lowering integration costs was the selling point...
>>     |
>>     |So let's get INTEROP right and then do INTEGRATION.
>>     |
>>     |Interop requires:
>>     |
>>     |* Model - yes
>>     |* Data format - one definition
>>     |* Metadata - ? tbd
>>     |
>>     |As an aside - I think that GData is the nuts, but it is also really
>>     |an *integration technology*.
>>     |
>>     |Now, here are some claims about JSON:
>>     |
>>     |JSON-1) Sun have demonstrated that it is plausible as a data model
>>     |for a cloud API.  That makes it plausible for us to ask: can it be
>>     |used as the core definition for interop?
>>     |
>>     |JSON-2) It is lower entropy than XML.  This makes it easy to test
>>     |conformance.
>>     |
>>     |JSON-3) This means we do things the right way round -- simpler 'ex
>>     |ante' choices make it EASIER for us to extend and enhance the
>>     |protocol on an as-needed basis, for example carrying OVF payloads
>>     |or other integration points.  Many will be done by users.
>>     |
>>     |So my recommendation is that the best way to avoid WS-* outcomes is:
>>     |
>>     |A) Use one format for interop.  Do interop first.  Integration later.
>>     |B) *At least for now* and *during the working draft* stage, use JSON.
>>     |C) Other formats, for now, are "integration".  But we do interop first.
>>     |
>>     |OK.... I have to run off.  I wrote this down in one go and don't have
>>     |time to review it.  Apologies for any mistakes in the presentation.
>>     |
>>     |What do you all think?
>>     |
>>     |alexis
>>     |_______________________________________________
>>     |occi-wg mailing list
>>     |occi-wg at ogf.org
>>     |http://www.ogf.org/mailman/listinfo/occi-wg
>>     _______________________________________________
>>     occi-wg mailing list
>>     occi-wg at ogf.org
>>     http://www.ogf.org/mailman/listinfo/occi-wg
>>
>>
>>
>> _______________________________________________
>> occi-wg mailing list
>> occi-wg at ogf.org
>> http://www.ogf.org/mailman/listinfo/occi-wg
>>
>
> _______________________________________________
> occi-wg mailing list
> occi-wg at ogf.org
> http://www.ogf.org/mailman/listinfo/occi-wg
>


