[occi-wg] Events ?

Gary Mazz garymazzaferro at gmail.com
Tue May 19 22:38:11 CDT 2009


Yes, thanks. I'm looking at it and using it as a guide to the other specs.

I'll explain why I'm confused; it looks like I'm pursuing more than 
the intended scope.

In the OCCI system model, IaaS fits between the PaaS and the fabric.  
The OCCI UML shows the compute resource as an aggregated dependency of 
cluster<ag>>domain<ag>>cloud. The example defines "groups" as racks, 
pools, data centers, etc., i.e. real physical assets.  
Based on that example, I think of clusters as an organization of physical 
compute resources. If the intent was to keep domain and cloud as logical 
elements, it may be better to drop cluster as a class and 
convert it into a property defining a quality of service for the user. 
Some may disagree, but I don't believe the user cares whether a 
cluster is configured for round-robin, random distribution, active-all, or 
primary/spare failover. I'm assuming they'll care about workload 
capacity, service availability, and cost.
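
To make the idea concrete, here is a minimal sketch in Python; the
class and attribute names (Compute, Domain, availability,
placement_policy) are hypothetical, not taken from the spec:

from dataclasses import dataclass, field

@dataclass
class Compute:
    # A compute resource. The failover/distribution behaviour becomes a
    # quality-of-service attribute instead of a Cluster class sitting in
    # the aggregation chain.
    hostname: str
    cores: int
    memory_gb: int
    availability: str = "99.9"       # what the user actually asks for
    placement_policy: str = "any"    # e.g. "any", "round-robin", "failover"

@dataclass
class Domain:
    # Logical grouping: compute resources aggregate directly into the
    # domain, with no intermediate Cluster class.
    name: str
    resources: list = field(default_factory=list)

domain = Domain("example-domain")
domain.resources.append(Compute("vm-01", cores=4, memory_gb=8,
                                availability="99.99",
                                placement_policy="failover"))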

As it stands, cluster<ag>>domain<ag>>cloud looks more like a fabric 
than logical components. Fabrics need a different set of capabilities, 
like events.
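
As a rough sketch of the kind of capability I mean (persistent event
delivery, so a consumer can replay whatever it missed), here is a toy
publish/replay pair in Python; the log location and function names are
made up for the example:

import json
import time

EVENT_LOG = "occi-events.log"   # assumed location, purely illustrative

def publish(event_type, resource_id, **data):
    # Persist the event before any delivery attempt so it survives a
    # consumer outage; this is the piece SNMP/CIMOM setups usually lack.
    event = {"type": event_type, "resource": resource_id,
             "timestamp": time.time(), "data": data}
    with open(EVENT_LOG, "a") as log:
        log.write(json.dumps(event) + "\n")
    return event

def replay(since=0.0):
    # A reconnecting consumer reads back anything newer than the last
    # timestamp it saw.
    with open(EVENT_LOG) as log:
        for line in log:
            event = json.loads(line)
            if event["timestamp"] > since:
                yield event

publish("compute.start", "vm-01", state="active")
for event in replay():
    print(event["type"], event["resource"])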

-gary

Alexis Richardson wrote:
> Gary
>
> Have you seen the interface comparison spreadsheet?
>
> http://spreadsheets.google.com/ccc?key=pGccO5mv6yH8Y4wV1ZAJrbQ
>
> This is our core focus for interop: to achieve commonality right here,
> right now. No invention, just interop.
>
> a
>
>
> On Tue, May 19, 2009 at 9:51 PM, Gary Mazz <garymazzaferro at gmail.com> wrote:
>> Well, since this is an interoperability interface, I'm assuming there will
>> be gateways to other technologies like fabrics. Events, event delivery, and
>> event management are important patterns and are supported by others. I
>> don't believe we'll be able to get away without supporting them for very
>> long. One of the big drawbacks of SNMP and CIMOMs is the lack of event
>> support and of an infrastructure for event message persistence.
>>
>> I'm also not sure where we are drawing the line in terms of
>> interoperability. There was a general consensus that OCCI should focus on
>> integration points in the cloud, but I didn't see a clear definition of an
>> integration point. (I was out of the loop for a while.) In the OCCI model,
>> the platform can be considered a container (loosely, a VM) with
>> infrastructure resources provisioned. The container life cycle and resource
>> provisioning are "management" integration points, although no verbs have
>> been published yet.
>> Will portions of the OCCI interface be permitted to permeate the container
>> boundary? The level of interaction, if any, between OCCI and the container
>> contents is still unclear. Maybe I missed the definition.
>>
>> -gary
>>
>> Alexis Richardson wrote:
>>> Indeed, and XMPP and HTTP should not be overlooked either.
>>>
>>>
>>> On Tue, May 19, 2009 at 7:49 PM, Sam Johnston <samj at samj.net> wrote:
>>>
>>>> On Tue, May 19, 2009 at 7:13 PM, Alexis Richardson
>>>> <alexis.richardson at gmail.com> wrote:
>>>>
>>>>> Interesting point.
>>>>>
>>>>> Speaking as someone who is professionally involved in messaging and
>>>>> events, my STRONG advice would be to leave them out completely for now.
>>>>> Implementation of the planned draft will naturally bring up use cases
>>>>> suited to the various eventing technologies and protocols, none of
>>>>> which are fully baked, by the way. This will be good fodder for future
>>>>> work, but it is currently **** not in scope ****.
>>>>>
>>>> Agreed, and I don't know AMQP well enough to say how it could fit here.
>>>>
>>>> The use case we need to take away from it is that OCCI messages aren't
>>>> necessarily going to be ephemeral - they may well be long lived, queued,
>>>> serialised, saved to file, etc.
>>>>
>>>> Sam
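
A tiny illustration of Sam's point: an OCCI-style message captured as
plain data so it can be queued, saved to file, and replayed later
unchanged. The field names below are made up for the example:

import json

message = {
    "method": "POST",
    "path": "/compute/",
    "headers": {"Content-Type": "application/json"},
    "body": {"cores": 2, "memory_gb": 4},
}

serialized = json.dumps(message)    # survives a queue, a file, a restart
restored = json.loads(serialized)   # delivered hours later, unchanged
assert restored == message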



