[ghpn-wg] Fwd: resource-agnostic vs. resource aware - Re: Admela Jukan's presentation

Franco Travostino franco.travostino at gmail.com
Thu Apr 7 07:57:36 CDT 2005


This thread continues to fall off the GHPN reflector ;-)
Below is Bill's elaboration on Admela's point.

---------- Forwarded message ----------
From: Bill St.Arnaud <bill.st.arnaud at canarie.ca>
Date: Apr 7, 2005 8:52 AM
Subject: RE: resource-agnostic vs. resource aware - Re: Admela Jukan's 
presentation
To: Admela Jukan <jukan at uiuc.edu>, travos at ieee.org
Cc: "Masum Z. Hasan" <masum at cisco.com>, Gigi Karmous-Edwards <
gkarmous at mcnc.org>, Cees de Laat <delaat at science.uva.nl>, imonga at nortel.com, 
Leon Gommans <lgommans at science.uva.nl>, chyoun at icu.ac.kr, ebslee at ntu.edu.sg

Admela:

I think the fundamental issue is the concept of "resources". There are two
types of markets. One is "fungible", where everything is a shared resource:
buses, public transport, bandwidth, etc. A lot of the focus on grids has
been on this market, where everything, including storage, networking, and
computation, is seen as a fungible resource.

But there is another market, "asset based", where everything is seen as an
asset and not treated as a shared "resource". Your PC, your home, your car,
and manufacturing plants are examples of assets in this type of market.

Our focus at CANARIE is on developing grid/web service solutions for the
latter market, in the belief that storage, computation, and networking in
particular are becoming so cheap that the opex cost of managing them as
shared fungible resources will be much greater than their inherent capex
value. As a result, what used to be a fungible resource will be treated as
an asset.
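
To make the distinction concrete, here is a minimal Python sketch (all
names are hypothetical, not any real grid API): a fungible resource is
borrowed from a shared pool through a scheduler, whereas an asset is
simply owned and used.

class FungiblePool:
    """Shared-resource model: capacity is borrowed and returned."""
    def __init__(self, capacity_gbps):
        self.free = capacity_gbps

    def allocate(self, gbps):
        # A scheduler mediates every use; this is where the opex accrues
        # (reservation, billing, contention handling).
        if gbps > self.free:
            raise RuntimeError("contention: must queue or schedule")
        self.free -= gbps
        return gbps

    def release(self, gbps):
        self.free += gbps


class OwnedLightpath:
    """Asset model: the wavelength is nailed up and dedicated."""
    def __init__(self, gbps):
        self.gbps = gbps  # idle or busy, it is always available

    def send(self, data):
        # No allocation step, no scheduling, no per-use billing.
        print(f"sending {len(data)} bytes over a dedicated "
              f"{self.gbps} Gbps path")


# Usage: the asset is simply used; the pool must be negotiated with.
path = OwnedLightpath(gbps=10)
path.send(b"large dataset")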

Bill

> -----Original Message-----
> From: Admela Jukan [mailto:jukan at uiuc.edu]
> Sent: Wednesday, April 06, 2005 3:42 PM
> To: travos at ieee.org
> Cc: Masum Z. Hasan; <bill.st.arnaud at canarie.ca>; Gigi Karmous-Edwards;
> Cees de Laat; imonga at nortel.com; Leon Gommans; chyoun at icu.ac.kr;
> ebslee at ntu.edu.sg
> Subject: resource-agnostic vs. resource aware - Re: Admela Jukan's
> presentation
>
> Follow up on Franco's comment. Franco, if I understand correctly, and
> please correct me if I am wrong, since "storage is equal storage" it
> implies that "bandwidth should be equal bandwidth". In other words,
> the separation of, for example, optical bandwidth pipes from any other
> pipes (with the same performance) may not be future-proof (even if
> bandwidth is cheap), and schedulers should be designed with
> resource-agnostic considerations in mind. Architecturally, this is a
> "clean" approach, as it separates applications from networks (and
> types of resources).
> Bill, on the other hand, suggests linking (piping) networks into the
> applications' software processes - an attractive proposition. The way
> I understand it, that concept is resource-aware (although it could be
> resource-agnostic). If, for example, the linked network resource has
> to be a wavelength ("because it is cheap"), the application is free to
> "link the wavelength" (as opposed to linking "bandwidth of another
> kind"). This is the case where the application is resource-aware. At
> the same time, however, there may be other applications that "link"
> networks in a resource-agnostic fashion (Franco's proposition).
> Architecturally, this is a more flexible approach, but it does
> "couple" applications with network (resources) in its basic form. Is
> that correct, Bill?
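>
> To illustrate the two coupling styles, a minimal Python sketch (the
> scheduler and its reserve() call are hypothetical, not an existing
> Grid API):
>
> class Scheduler:
>     def reserve(self, bandwidth_gbps, resource_type=None):
>         # resource_type=None is the resource-agnostic case: the
>         # scheduler may map the request onto a wavelength, a routed
>         # path, or anything else with the stated performance.
>         chosen = resource_type or "any pipe meeting the performance"
>         return f"{bandwidth_gbps} Gbps via {chosen}"
>
> s = Scheduler()
> print(s.reserve(10))                              # resource-agnostic
> print(s.reserve(10, resource_type="wavelength"))  # resource-aware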
>
> On Mar 31, 2005, at 9:42 PM, Franco Travostino wrote:
>
> > With regard to the 20k x 10Gbps argument eroding the usefulness of
> > scheduling:
> >
> > the cost may be that low until you hit a discontinuity and run out of
> > something --- PHYs, cards, NEs' backplanes, lambdas, software licenses,
> > OpEx contracts --- and need to issue a new procurement.
> >
> > also, there appears to be a "virtuous cycle" at play, with more
> > bandwidth enabling new services that enable new applications that
> > demand new bandwidth.
> >
> > we have seen this phenomenon occurring in computing many times over.
> >
> > we're seeing it with wireless (who would have thought, when it all
> > started, that we would be watching mobisodes).
> >
> > if we want Grids to be a somewhat future-proof and versatile
> > infrastructure (beyond circuits), we need to take this virtuous cycle
> > of expansion into consideration and toy with (resource-agnostic!)
> > schedulers.
> >
> > -franco
> >
> >
> >
> >
> > On Mar 31, 2005 7:12 PM, Admela Jukan <jukan at uiuc.edu> wrote:
> >> Thanks a lot, Bill, for your feedback. My specific responses are given
> >> below in detail. I look forward to further discussions - Admela
> >>
> >> On Mar 31, 2005, at 2:13 PM, Bill St.Arnaud wrote:
> >>
> >>> I enjoyed Admela's presentation on control plane issues. I think it
> >>> is a good summary of most of the issues. However, I would suggest
> >>> there are some areas that may be worth exploring further:
> >>>
> >>>
> >>>
> >>> (a) in addition to applications needing to interact with the
> >>> network physical layer for large data flows, there are some
> >>> situations where it would be advantageous to bring the network into
> >>> the application. This is very different from the network being
> >>> "aware" of the application. There is a lot of work going on in the
> >>> HPC community to "decompose" large data applications into smaller
> >>> modules which can then be relocated anywhere on the network.
> >>> However, in some cases the application modules may still be on the
> >>> same physical machine, interconnected by a "virtual" network or
> >>> pipeline. Extending HPC pipeline architectures into network pipes
> >>> would clearly be advantageous.
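> >>>
> >>> As a toy illustration of that extension (hypothetical Python, not
> >>> an actual HPC framework): the same pipeline composition works
> >>> whether a pipe is in-process or crosses the network.
> >>>
> >>> class LocalPipe:
> >>>     """In-process pipe between two application modules."""
> >>>     def __init__(self):
> >>>         self.buf = []
> >>>     def send(self, item):
> >>>         self.buf.append(item)
> >>>     def recv(self):
> >>>         return self.buf.pop(0)
> >>>
> >>> class NetworkPipe(LocalPipe):
> >>>     """Stand-in for a real network pipe (socket, lightpath): same
> >>>     interface, so a module cannot tell whether its peer is local
> >>>     or remote."""
> >>>
> >>> def stage(transform, inp, out):
> >>>     # One pipeline module: read, transform, write.
> >>>     out.send(transform(inp.recv()))
> >>>
> >>> # The composition is identical whether modules share a machine:
> >>> link = NetworkPipe()   # swap in LocalPipe() with no other change
> >>> link.send("raw data")
> >>> result = LocalPipe()
> >>> stage(str.upper, link, result)
> >>> print(result.recv())   # -> "RAW DATA"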
> >>>
> >>
> >> I agree with you. I am not sure, however, what would be the best way
> >> to incorporate that view into the three-dimensional diagram I gave in
> >> the presentation without explaining virtualization, soft switches,
> >> etc. (I wish you could be there next time and talk about the
> >> extension of networks into applications, or point us to some good
> >> docs.)
> >> In the three-dimensional space between apps, Grid resources, and
> >> networks, the applications can nevertheless pick up networks, or
> >> parts of networks, and fully incorporate (link or pipe) them, making
> >> them an integral part of the whole distributed application. I think
> >> this is in line with your thinking. It is, in my opinion, a great
> >> research area. Hence, I do agree with you that networks, as much as
> >> they will extend into the physical layer, will also extend into the
> >> applications.
> >>
> >>> (b) I remain skeptical about reservation and scheduling of
> >>> bandwidth or lightpaths. The cost of wavelengths continues to
> >>> plummet - it is now cheaper to nail up the bandwidth and leave it
> >>> sitting there idle than to pay the high OPEX costs for scheduling,
> >>> reservation, billing, etc. For example, I have been informed by
> >>> reliable sources that the annual cost of a 10 Gbps wavelength on the
> >>> new Geant network will be on the order of 20K Euros. You couldn't
> >>> hire a graduate student for that price to do the scheduling and
> >>> reservation. The counter-argument is that there will be applications
> >>> where data transfers are infrequent, and buying nailed-up
> >>> wavelengths, even at 20K Euros, can't be justified - in that case I
> >>> say use a general-purpose routed network. Given that the data
> >>> transfers are so infrequent, I suspect the slightly longer delays of
> >>> using the routed network can be tolerated. But I suspect most large
> >>> data flow applications will be between well-known and often-used
> >>> sources and sinks - so the need for scheduling and reservation will
> >>> be very limited.
> >>>
> >>>
> >>
> >> I share your concerns given the recent trends, the price of
> >> bandwidth, etc. However, there may be at least two good reasons why
> >> scheduling is an issue we do not want to neglect completely, at least
> >> not for a while.
> >> First: the network's local grid scheduler, just like the schedulers
> >> of the other Grid resources, is represented in the corresponding
> >> hierarchical architecture within the scheduling working group in GGF
> >> (https://forge.gridforum.org/projects/gsa-rg). They look forward to
> >> our input on that issue. If we, as "networkers", can provide Grid
> >> network services with "no waiting time" (i.e., no scheduling), that
> >> is a wonderful message to the scheduling group, which no longer has
> >> to worry about it.
> >>
> >> *** Soapbox: Given that network resource management is far from "on
> >> the fly" provisioning, at least at a grander scale and for complex
> >> application composition, I am not sure the community is ready to make
> >> that statement. For example, one part of a complex dynamic
> >> application has to wait until a related part of the same application
> >> finishes. So, do you schedule the "second part" after the first part
> >> is done, and how (1)? Or do you just set up continuous communication
> >> patterns for the whole duration of the application (2)? Or do you
> >> integrate dynamic control plane processes into the application
> >> ("extend") and dynamically allocate network resources as the
> >> application evolves (3)? I understand that if bandwidth is cheap we
> >> can waste it and do (2). On the other hand, you suggest "extending"
> >> into the application (which corresponds to (3), since the SW
> >> processes pipe into each other) - this is more elegant but does not
> >> per se exclude the need for (advance) scheduling. Etc. Your input is
> >> a very valuable starting point for this line of discussion. ***
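> >>
> >> To pin down the three options, a minimal Python sketch (all names
> >> are made up; assume a contended network behind reserve_now/lease):
> >>
> >> import contextlib
> >>
> >> class Net:
> >>     """Toy stand-in for a network control plane."""
> >>     def reserve_now(self, gbps):
> >>         print(f"reserve {gbps} Gbps")
> >>     def release(self):
> >>         print("release")
> >>     @contextlib.contextmanager
> >>     def lease(self, gbps):
> >>         self.reserve_now(gbps)
> >>         yield
> >>         self.release()
> >>
> >> class Part:
> >>     """One part of a complex, multi-part application."""
> >>     def __init__(self, gbps):
> >>         self.bandwidth = gbps
> >>     def run(self):
> >>         print(f"run part needing {self.bandwidth} Gbps")
> >>
> >> def option_1(net, parts):
> >>     # (1) Schedule each part only after the previous one finishes.
> >>     for p in parts:
> >>         net.reserve_now(p.bandwidth)
> >>         p.run()
> >>         net.release()
> >>
> >> def option_2(net, parts):
> >>     # (2) Nail up capacity for the whole duration; simple, but idle
> >>     # bandwidth is wasted.
> >>     net.reserve_now(max(p.bandwidth for p in parts))
> >>     for p in parts:
> >>         p.run()
> >>     net.release()
> >>
> >> def option_3(net, parts):
> >>     # (3) Control plane embedded in the application: allocate per
> >>     # part as the application evolves; elegant, yet each lease may
> >>     # still need (advance) scheduling if capacity is contended.
> >>     for p in parts:
> >>         with net.lease(p.bandwidth):
> >>             p.run()
> >>
> >> option_3(Net(), [Part(10), Part(2)])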
> >>
> >> Second - less intriguing, I admit - the network is still a shared
> >> resource. While I agree with you that network resources have to be
> >> provided to applications with no waiting time, bandwidth may not be
> >> the only requirement - it may be "guaranteed performance". So, your
> >> comment is also a good start here for the discussion of whether we
> >> yet have "cheap guaranteed performance" that can be provided "with no
> >> waiting time". And, if so, how do we deal with scale (the number of
> >> bandwidth pipes, the scale of the network control plane software
> >> implementation, etc.)? As a researcher, I could also imagine that
> >> scheduling can be interesting in some specific situations (recovery,
> >> failure, etc.) or when the network topologies make it difficult to
> >> design reliable connections (connectivity and min-cut problems).
> >> However, I do agree that the prices can diminish the practical
> >> importance of all these possible scenarios.
> >>
> >>
> >>>
> >>> Bill
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> -----Original Message-----
> >>> From: owner-ghpn-wg at ggf.org [mailto:owner-ghpn-wg at ggf.org] On Behalf
> >>> Of Franco Travostino
> >>> Sent: Thursday, March 31, 2005 2:44 PM
> >>> To: ghpn-wg at gridforum.org
> >>> Cc: chyoun at icu.ac.kr; ebslee at ntu.edu.sg; Masum Z. Hasan; Leon
> >>> Gommans;
> >>> imonga at nortel.com; Admela Jukan; Gigi Karmous-Edwards; Cees de Laat
> >>> Subject: [ghpn-wg] Fwd: Seoul material is on-line
> >>>
> >>>
> >>>
> >>>
> >>> I've been informed that Admela's presentation could not be opened
> >>> with PowerPoint. It turns out that the handoff between Admela and me
> >>> somehow altered the file's content. I have now replaced the file
> >>> on forge.gridforum.org <http://forge.gridforum.org>.
> >>>
> >>> For further reference:
> >>>
> >>> /cygdrive/D/GGF13 (19) sum Admela*
> >>> 59184 2731 Admela Correct File.ppt
> >>> 11383 2731 Admela Damaged File.ppt
> >>>
> >>> -franco
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> Date: Wed, 30 Mar 2005 13:08:06 -0500
> >>> To: ghpn-wg at gridforum.org
> >>> From: Franco Travostino <travos at ieee.org>
> >>> Subject: Seoul material is on-line
> >>> Cc: chyoun at icu.ac.kr, ebslee at ntu.edu.sg, "Masum Z. Hasan"
> >>> <masum at cisco.com>, Leon Gommans <lgommans at science.uva.nl>, "inder
> >>> [BL60:418:EXCH] Monga" <imonga at AMERICASM06.nt.com>, Admela Jukan
> >>> <jukan at uiuc.edu>, Gigi Karmous-Edwards <gkarmous at mcnc.org>, Cees de
> >>> Laat <delaat at science.uva.nl>
> >>>
> >>>
> >>> The whole GHPN production for GGF13 is available at:
> >>> https://forge.gridforum.org/docman2/ViewCategory.php?group_id=53&category_id=941
> >>>
> >>> We had a lively meeting (we actually went 10' past the end of our
> >>> slot). I hope you will take the time to peruse the minutes and the
> >>> material.
> >>>
> >>> The State of the Drafts that I prepared is thought to be up to date
> >>> (alert me if not); it also covers a couple of drafts that have been
> >>> announced even though they didn't make the GGF13 cutoff date. See
> >>> https://forge.gridforum.org/docman2/ViewProperties.php?group_id=53&category_id=941&document_content_id=3603
> >>>
> >>> The GGF13 program featured a couple of interesting BOFs with strong
> >>> network connotations. Kindly enough, both referenced GHPN material.
> >>>
> >>> One was the Firewall and NAT BOF. The room consensus was that it
> >>> should be chartered as an RG.
> >>>
> >>> The other one was the VPN BOF.
> >>>
> >>> On behalf of the GHPN, I invite these groups to use the GHPN
> >>> community as a sounding board for their work. If they don't get the
> >>> nod from the GFSG, they can also consider using the GHPN as a
> >>> temporary home in which to incubate their work further.
> >>>
> >>> -franco
> >>>
> >> -- Admela
> >>
> >>
> >
> >
> > --
> > http://www.francotravostino.name
> >
> >
> -- Admela



-- 
http://www.francotravostino.name