[Capi-bof] Charter - last call for changes

Sam Johnston samj at samj.net
Wed Mar 25 04:36:42 CDT 2009


On Wed, Mar 25, 2009 at 10:09 AM, Ignacio Martin Llorente <
llorente at dacya.ucm.es> wrote:

> Hi Sam,
>
>> You're still focusing on VMs - I'd be interested to hear your explanation
>> as to how catering for everyone's needs (as proposed by a number of us) will
>> be any more complex than restricting the spec to your own requirements.
>>
>
> Yes, my position is that if we want the group to succeed we should only
> focus on virtual machines (virtual execution environments,...).  Not in
> "virtualized resources", because resource could mean not only "machine" but
> also network, storage...


We're mixing issues here. To be clear, I suggest we *exclude* anything but
the most trivial management of fabric resources (including storage and
networking), but avoid at all costs unnecessarily placing restrictions on
the container itself. To the consumer it doesn't matter whether they are
provisioned a physical machine, VM or a "slice" thereof, provided their
performance metrics are met - those details will be handled by the provider
(and this is where most of the innovation and competition will happen).

All three container categories (physical, virtual, slice) have identical
attributes (memory, storage capacity, CPU performance, etc.) as well as the
same set of primitives (start, stop, restart). We already need to support
multiple VM image formats (even if just for the significant "legacy" install
base) so catering for additional formats for physical machines (e.g. 'dd'
disk images) and "slices" (e.g. gzipped tar files) comes at no cost and
great benefit.
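To illustrate the point (a sketch only - the names and fields here are hypothetical, not part of any proposed spec), a single container description could cover all three categories, with the same primitives applying regardless of how the provider realises the container:

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    PHYSICAL = "physical"
    VIRTUAL = "virtual"
    SLICE = "slice"

@dataclass
class Container:
    """A compute container; the attributes are identical across categories."""
    category: Category
    memory_mb: int
    storage_gb: int
    cpu_ghz: float
    image_format: str   # e.g. "vmdk", "dd", "tar.gz"
    state: str = "stopped"

    # The same primitives apply whether the container is a physical
    # machine, a VM or a slice; only the provider-side implementation differs.
    def start(self):
        self.state = "running"

    def stop(self):
        self.state = "stopped"

    def restart(self):
        self.stop()
        self.start()

# The consumer's view is the same for all three:
vm = Container(Category.VIRTUAL, 2048, 160, 2.4, "vmdk")
box = Container(Category.PHYSICAL, 8192, 1000, 3.0, "dd")
vm.start()
box.start()
```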

Looking a little further into the future, PaaS is essentially IaaS with
smooth rather than "chunky" scalability. A PaaS workload is just a single
instance which can scale up (and down, which is just as important) as and
when necessary. With fabrics like Cisco's Unified Computing System and
massively parallel computers hitting the market it's conceivable that
before long we'll be able to provision a VM and scale CPU, memory and
storage up and down seamlessly and without bound.

> In fact I think that the first step (after gathering requirements from use
> cases) should be to create a document clearly defining entities to be
> managed, their life-cycle and the associated processes to manage the
> life-cycle.


I have provided three use cases on the provider side already:
dedibox.fr (physical), Amazon EC2 (virtual) and Mosso (slices). On the
user side it's even simpler - a user has a workload and a set of metrics
under which it should be run (price, performance, security, availability,
etc.).


> I think that the management of physical resources or light-weight VMs
> should be out of the scope of the group, because that has different
> implications in the life-cycle and processes, and, as far as I know, there
> are other groups working on this (at least for physical resources).
>

Please be more specific, as I have yet to identify any such implications.
"Instantiating" or "reserving" a physical machine may take more time than a
VM so a call that requires creation of a new physical "container" may want
to be asynchronous but most such providers maintain online stock (e.g.
Kimsufi <http://www.kimsufi.co.uk/> who publish 1hr and 72hr availability).
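For the slower case, an asynchronous create would be enough - the call returns a handle immediately and the consumer polls (or is notified) until the container is ready. A minimal sketch of that pattern, with all names hypothetical:

```python
import time

# Hypothetical asynchronous provisioning: create() returns at once with a
# handle in the "pending" state; the caller polls until it becomes "active".
# A VM might be ready in seconds, a physical machine in hours - the API
# shape is the same either way.
class Provider:
    def __init__(self, provision_delay=2.0):
        self._delay = provision_delay
        self._ready_at = {}

    def create(self, container_spec):
        handle = len(self._ready_at) + 1
        self._ready_at[handle] = time.monotonic() + self._delay
        return handle  # returns immediately, even for slow physical stock

    def status(self, handle):
        return "active" if time.monotonic() >= self._ready_at[handle] else "pending"

provider = Provider(provision_delay=0.1)
h = provider.create({"memory_mb": 4096})
while provider.status(h) == "pending":
    time.sleep(0.05)  # in practice: back off, or subscribe to a notification
```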

There are many organizations and projects interested in VMs, not only Sun or
> OpenNebula:
>
> - IaaS providers: Amazon EC2, ElasticHosts, GoGrid, Flexiscale, Sun
> Cloud....
> - Research projects: Reservoir, Eucalyptus, Nimbus, SLA at SOI...
> - Users (service management offerings): CohesiveFT, RightScale...
>

Yes, but with products already on the market that offer essentially the same
thing for an order of magnitude less cost (e.g. Cloud Sites vs EC2) it's
pretty obvious where the market will shift even within the (hopefully short)
lifetime of this workgroup.


>
>  Aside from that the "network tag" suggestion I made before would allow for
>> the creation of private networks (two interfaces with the same tag would
>> obviously be wired together) but assigning meaning to those tags (subnet
>> details, etc.) would be left to the fabric. The vast majority of workloads
>> don't care, so long as they can talk to each other and their clients
>> (assuming they have any which is not always the case).
>>
>
> Fully agree with you on this,
>

I'm starting to work through the various APIs already and there are some
pretty clever things you can do even with this extremely simple methodology.
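For the avoidance of doubt, here is the tag idea reduced to code (a sketch, with hypothetical names): interfaces sharing a tag end up on the same private network, and everything else - subnets, addressing - is left to the fabric.

```python
from collections import defaultdict

# Sketch of the "network tag" methodology: two interfaces with the same
# tag are wired together; assigning meaning to the tag (subnet details,
# etc.) is left entirely to the fabric.
def wire(interfaces):
    """Group (instance, interface, tag) triples into networks, keyed by tag."""
    networks = defaultdict(set)
    for instance, iface, tag in interfaces:
        networks[tag].add((instance, iface))
    return dict(networks)

topology = wire([
    ("web1", "eth0", "frontend"),
    ("web2", "eth0", "frontend"),
    ("web1", "eth1", "backend"),
    ("db1",  "eth0", "backend"),
])
# web1/eth1 and db1/eth0 land on the same private "backend" network,
# without the consumer ever specifying subnet details.
```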

Sam