[gin-ops] FW: Status

Cindy Zheng zhengc at sdsc.edu
Fri Apr 28 18:25:18 CDT 2006


Dear all,

There have been many working discussions between the
EGEE system supporters and the TDDFT application drivers.
I have bundled most of the email content together below
for your information. There will be a summary report
once Yoshio and Yusuke get TDDFT running on EGEE's
cluster.

Thank you for everyone's effort!

Cindy

> From: Mona Aggarwal [mailto:m.aggarwal at imperial.ac.uk] 
> Sent: Friday, April 28, 2006 10:12 AM
> To: 'Yoshio Tanaka'
> Cc: o.van-der-aa at imperial.ac.uk; zhengc at sdsc.edu; 
> Erwin.Laure at cern.ch; yusuke.tanimura at aist.go.jp
> Subject: RE: Status
> 
> 
> > -----Original Message-----
> > From: Yoshio Tanaka [mailto:yoshio.tanaka at aist.go.jp] 
> > Sent: 28 April 2006 17:38
> > To: m.aggarwal at imperial.ac.uk
> > Cc: o.van-der-aa at imperial.ac.uk; zhengc at sdsc.edu; 
> > Erwin.Laure at cern.ch; yusuke.tanimura at aist.go.jp; 
> > yoshio.tanaka at aist.go.jp
> > Subject: Re: Status
> > 
> > 
> > 
> > Hi Aggarwal,
> > 
> > I have two questions:
> > 
> > 1. What's the difference between the Globus gatekeeper and the EDG
> > gatekeeper?
> It is essentially the same thing, with some modifications.
> 
> The authorization decision is based on the user's proxy
> certificate and the job specification in RSL (JDL) format.
> The certificate and RSL are passed to (plug-in) authorization
> modules, which grant or deny access after checking the user
> against the grid-mapfile on the CE.
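> 
> For illustration, a minimal sketch of that check, assuming the
> default grid-mapfile location; the DN below is made up:
> 
>     # a grid-mapfile entry maps a certificate DN to a local (pool) account
>     $ grep gin /etc/grid-security/grid-mapfile
>     "/C=XX/O=Example/OU=GRID/CN=Some User" .gin
>     # the gatekeeper accepts a job only if the proxy's DN matches such an entry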
> 
> 
> 2. How is the proxy cert delegated to the user's job?
> 
> Once a member of the VO has been granted a certificate to be used
> for job submission, tools on a user-interface (UI) machine allow
> the user to generate, from that certificate, a delegated proxy.
> This is sent along with the job, enabling the job to perform
> Grid operations on behalf of the user.
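> 
> For illustration, a minimal sketch of that flow from a UI machine,
> assuming the LCG UI commands voms-proxy-init and edg-job-submit are
> installed; the JDL file name is hypothetical:
> 
>     # create a delegated VOMS proxy for the gin.ggf.org VO
>     $ voms-proxy-init -voms gin.ggf.org
>     # submit a job described in JDL; the proxy travels with the job
>     $ edg-job-submit test.jdl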
> 
> For more information:
> https://edms.cern.ch/file/498081/1.0/UserScenario2.pdf
> 
> Regards,
> 
> Mona


-----Original Message-----
From: Mona Aggarwal [mailto:m.aggarwal at imperial.ac.uk] 
Sent: Friday, April 28, 2006 7:51 AM
To: 'Yoshio Tanaka'; o.van-der-aa at imperial.ac.uk
Cc: zhengc at sdsc.edu; Erwin.Laure at cern.ch; yusuke.tanimura at aist.go.jp
Subject: RE: Status


> -----Original Message-----
> From: Yoshio Tanaka [mailto:yoshio.tanaka at aist.go.jp] 
> Sent: 28 April 2006 06:49
> To: o.van-der-aa at imperial.ac.uk
> Cc: zhengc at sdsc.edu; Erwin.Laure at cern.ch; 
> m.aggarwal at imperial.ac.uk; yusuke.tanimura at aist.go.jp; 
> yoshio.tanaka at aist.go.jp
> Subject: Re: Status
> 
> 
> 
> Hi Olivier,
> 
> I think Globus GRAM doesn't work correctly if the file system is
> not shared between the gatekeeper node and the compute nodes.
> For example, Globus GRAM checks for the existence of the
> executable on the gatekeeper node even if the executable will
> be launched on compute nodes via the batch scheduler. In
> addition, the delegated proxy cert is generated in the ~/.globus
> directory on the gatekeeper node.  The proxy cert is required
> if the executable connects to other processes via Globus IO,
> i.e. the proxy cert must be readable on the compute nodes as
> well.
> 
> To confirm my understanding, I ran a quick test in which I made
> the ~/.globus directory a symbolic link to /tmp (on the server
> machine).  The test failed when I specified jobmanager-pbs
> (it succeeded when I specified jobmanager-fork).
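> 
> For reference, a minimal sketch of this kind of check, assuming
> globus-job-run is available on a machine holding a valid proxy
> (the host and jobmanager names are the ones from this thread):
> 
>     # fork jobmanager: the job runs on the gatekeeper node itself
>     $ globus-job-run gw39.hep.ph.ic.ac.uk/jobmanager-fork /bin/hostname
>     # pbs jobmanager: the job runs on a worker node via the batch system
>     $ globus-job-run gw39.hep.ph.ic.ac.uk/jobmanager-pbs /bin/hostname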
> 
> Would you confirm again that Globus with jobmanager-pbs works well
> in this configuration? Please accept my apologies if I'm missing something.
> 
Yes, it works with the LCG middleware.
Please find attached the file Job_submission_chain_diagram, which
shows the job submission chain within LCG.

Regards,
Mona 


> From: Olivier van der Aa [mailto:o.van-der-aa at imperial.ac.uk] 
> Sent: Thursday, April 27, 2006 2:42 AM
> To: Yoshio Tanaka
> Cc: zhengc at sdsc.edu; Erwin.Laure at cern.ch; 
> m.aggarwal at imperial.ac.uk; yusuke.tanimura at aist.go.jp
> Subject: Re: Status
> 
> 
> > Do Globus GRAM and jobmanager-pbs work well in this
> > configuration (the home directory is not shared between gw39
> > and the other WNs)?
> 
> Yes it should be ok.
> 
> I also need the certificate of the gin voms 
> (vomss://kuiken.nikhef.nl:8443/voms/gin.ggf.org).
> 
> Thanks, Olivier.


> From: Yoshio Tanaka [mailto:yoshio.tanaka at aist.go.jp] 
> Sent: Thursday, April 27, 2006 1:38 AM
> To: o.van-der-aa at imperial.ac.uk
> Cc: zhengc at sdsc.edu; Erwin.Laure at cern.ch; 
> m.aggarwal at imperial.ac.uk; yusuke.tanimura at aist.go.jp; 
> yoshio.tanaka at aist.go.jp
> Subject: Re: Status
> 
> 
> 
> Hi Olivier and Mona,
> 
> Let me ask one quick question.
> 
> o.van-der-aa> - The worker node (WN) names range from gw01.hep.ph.ic.ac.uk
> o.van-der-aa> to gw35.hep.ph.ic.ac.uk.
> o.van-der-aa> - gcc 3.2.3 is installed on all WNs.
> o.van-der-aa> - All WNs have a variable $VO_GIN_SW_DIR that points to a
> o.van-der-aa> shared directory (/opt/exp_soft/gin) where Ninf-G and
> o.van-der-aa> Intel 9.0 are installed.
> o.van-der-aa> - For the gin VO, PATH and LD_LIBRARY_PATH have entries for
> o.van-der-aa> Ninf-G and Intel Fortran 9.0.
> o.van-der-aa> - Home directories are not shared; only /opt/exp_soft/gin is
> o.van-der-aa> shared (read-only).
> o.van-der-aa> - Each machine is a dual-CPU Pentium III 1 GHz with 1 GB of
> o.van-der-aa> memory.
> o.van-der-aa> 
> o.van-der-aa> - gw39, our gatekeeper, does not share directories with the WNs.
> 
> Do Globus GRAM and jobmanager-pbs work well in this
> configuration (the home directory is not shared between gw39
> and the other WNs)?
> 
> Thanks,


> From: Yoshio Tanaka [mailto:yoshio.tanaka at aist.go.jp] 
> Sent: Wednesday, April 26, 2006 7:17 AM
> To: o.van-der-aa at imperial.ac.uk
> Cc: zhengc at sdsc.edu; Erwin.Laure at cern.ch; 
> m.aggarwal at imperial.ac.uk; yusuke.tanimura at aist.go.jp; 
> yoshio.tanaka at aist.go.jp
> Subject: Re: Status
> 
> 
> 
> Hi Olivier and Kostas,
> 
> Thanks for the detailed information about your cluster and the license
> issues for the Intel compiler.  I'll summarize the answers shortly.
> 
> Thanks,


> -----Original Message-----
> From: Kostas Georgiou [mailto:k.georgiou at imperial.ac.uk] 
> Sent: Wednesday, April 26, 2006 6:47 AM
> To: Olivier van der Aa
> Cc: Yoshio Tanaka; zhengc at sdsc.edu; Erwin.Laure at cern.ch; 
> m.aggarwal at imperial.ac.uk; yusuke.tanimura at aist.go.jp
> Subject: Re: Status
> 
> 
> On Wed, Apr 26, 2006 at 02:43:12PM +0100, Olivier van der Aa wrote:
> 
> > Hi Yoshio,
> > 
> > After looking at the Intel licence, it turns out that I cannot
> > install the free version even for an academic project, so I will
> > have to remove the Fortran 9.0 installation.
> 
> For details, here is the FAQ for the license:
> http://www.intel.com/cd/software/products/asmo-na/eng/compilers/219692.htm


> From: Olivier van der Aa [mailto:o.van-der-aa at imperial.ac.uk] 
> Sent: Wednesday, April 26, 2006 6:43 AM
> To: Yoshio Tanaka
> Cc: zhengc at sdsc.edu; Erwin.Laure at cern.ch; 
> m.aggarwal at imperial.ac.uk; yusuke.tanimura at aist.go.jp; Kostas Georgiou
> Subject: Re: Status
> 
> 
> Hi Yoshio,
> 
> After looking at the Intel licence, it turns out that I cannot install
> the free version even for an academic project, so I will have to remove
> the Fortran 9.0 installation.
> 
> Could you explain why you really need the Fortran compiler? Could you
> compile TDDFT on your machine to avoid the licence problem?
> 
> Cheers, Olivier.


> From: Olivier van der Aa [mailto:o.van-der-aa at imperial.ac.uk] 
> Sent: Wednesday, April 26, 2006 6:24 AM
> To: Yoshio Tanaka
> Cc: zhengc at sdsc.edu; Erwin.Laure at cern.ch; 
> m.aggarwal at imperial.ac.uk; yusuke.tanimura at aist.go.jp
> Subject: Re: Status
> 
>  
> Hi Yoshio,
> 
> - The worker node (WN) names range from gw01.hep.ph.ic.ac.uk to
> gw35.hep.ph.ic.ac.uk.
> - gcc 3.2.3 is installed on all WNs.
> - All WNs have a variable $VO_GIN_SW_DIR that points to a shared
> directory (/opt/exp_soft/gin) where Ninf-G and Intel 9.0 are installed.
> - For the gin VO, PATH and LD_LIBRARY_PATH have entries for Ninf-G and
> Intel Fortran 9.0 (see the sketch after this list).
> - Home directories are not shared; only /opt/exp_soft/gin is shared
> (read-only).
> - Each machine is a dual-CPU Pentium III 1 GHz with 1 GB of memory.
> 
> - gw39, our gatekeeper, does not share directories with the WNs.
> 
> - On all the worker nodes a set of pool accounts has been created:
> gin001-gin010 and ginsgm. ginsgm is the user that can install software
> in /opt/exp_soft/gin.
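> 
> For illustration, a minimal sketch of how a job running under one of
> the gin pool accounts might pick up the shared installation; the
> subdirectory names under $VO_GIN_SW_DIR are hypothetical:
> 
>     # hypothetical layout under the shared, read-only software area
>     export NG_DIR=$VO_GIN_SW_DIR/ng          # Ninf-G installation
>     export PATH=$NG_DIR/bin:$PATH
>     export LD_LIBRARY_PATH=$NG_DIR/lib:$VO_GIN_SW_DIR/intel_fc_90/lib:$LD_LIBRARY_PATH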
> 
> How will you access the nodes for compilation of the TDDFT software?
> Will you submit a job?
> 
> Hope it helps, Olivier
> PS: I will be on holiday from 28/04-07/04, but Mona Aggarwal will be
> able to help if there are problems with the current install.


> From: Yoshio Tanaka [mailto:yoshio.tanaka at aist.go.jp] 
> Sent: Wednesday, April 26, 2006 6:04 AM
> To: o.van-der-aa at imperial.ac.uk
> Cc: zhengc at sdsc.edu; Erwin.Laure at cern.ch; 
> m.aggarwal at imperial.ac.uk; yusuke.tanimura at aist.go.jp; 
> yoshio.tanaka at aist.go.jp
> Subject: Re: Status
> 
> 
> Hi Olivier,
> 
> Thanks for your quick work.
> 
> o.van-der-aa> Hi Yoshio,
> o.van-der-aa> 
> o.van-der-aa> Thanks, that worked, and I could compile Ninf-G 2.4.1 on
> o.van-der-aa> gw05.hep.ph.ic.ac.uk. I used
> o.van-der-aa> ./configure --prefix=/home/ginsgm/gin/SW
> o.van-der-aa> 
> o.van-der-aa> Is that ok?
> 
> Yes, I think this should be ok.
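> 
> For completeness, a minimal sketch of the assumed build sequence (a
> standard configure/make flow; only the configure line above and
> GLOBUS_LOCATION=/opt/globus come from this thread, the rest is an
> assumption):
> 
>     $ export GLOBUS_LOCATION=/opt/globus
>     $ ./configure --prefix=/home/ginsgm/gin/SW
>     $ make
>     $ make install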
> 
> o.van-der-aa> For the Intel Fortran compiler, is version 9 ok?
> 
> Yes, version 9 is ok.
> 
> o.van-der-aa> I would like to understand in more detail how we are
> o.van-der-aa> going to proceed. As soon as I have installed the
> o.van-der-aa> software on the compute node and the worker node, how
> o.van-der-aa> are you going to access the compile node to build your
> o.van-der-aa> executable?
> o.van-der-aa> 
> o.van-der-aa> We have created a queue for the gin VO; the endpoint is
> o.van-der-aa> gw39.hep.ph.ic.ac.uk:2119/jobmanager-lcgpbs-gin
> 
> Would you let me know the configuration of your system?
> What kinds of nodes do you have (e.g. compute, worker, compile, etc.)?
> What are the differences between the nodes (sharing of file
> systems, accounts, etc.)?
> 
> I could see that the Globus gatekeeper is running on gw39 and that
> Ninf-G 2.4.1 is installed on gw05.  Is the home directory shared
> between these nodes?  Is the home directory shared between the
> gatekeeper node and the backend compute nodes?
> 
> If this kind of information is available on the web or in another
> document, please let me know.
> 
> P.S.  We in Japan will have about 10 days of holidays from this
> Saturday until May 7.  Our application driver, Yusuke, will start his
> vacation tomorrow, so our responses may be delayed.
> 
> Thanks,


> From: Yoshio Tanaka [mailto:yoshio.tanaka at aist.go.jp] 
> Sent: Tuesday, April 25, 2006 6:03 PM
> To: o.van-der-aa at imperial.ac.uk
> Cc: zhengc at sdsc.edu; Erwin.Laure at cern.ch; 
> m.aggarwal at imperial.ac.uk; yusuke.tanimura at aist.go.jp; 
> bacon at mcs.anl.gov; childers at mcs.anl.gov; ninf-developers at apgrid.org
> Subject: Re: Status
> 
> 
> 
> Hi Olivier,
> 
> # I'm Cc-ing to Charles and Lisa from Globus and Ninf-G developers.
> 
> We believe that at least the following three RPMs are necessary for
> installing Ninf-G:
> 
>     gpt-VDT1.2.2rh9-1.i386.rpm
>     vdt_globus_essentials-VDT1.2.2rh9-1.i386.rpm
>     vdt_globus_sdk-VDT1.2.2rh9-1.i386.rpm
> 
> In addition to installing these RPMs, you need to run the following
> command: 
> 
>     $ gpt-build <flavor> -nosrc
> 
> globus-build-env-gcc32dbg.sh is not included in these RPMs, at least,
> and I suspect it does not appear in any RPM, according to the following
> section of the GT4 installation manual:
> 
> ----------------------------------------------------------------------
> B.4. Using globus-makefile-header with a binary distribution
> 
> We use a package called "globus_core" to detect the compiler and
> platform settings of the computer that the Toolkit is installed
> on. This package is excluded from binary distributions, because the
> compiler settings on your machine are likely to be different from
> those used on the machine that built the binaries.
> 
> If you need to install a source update into a binary installation,
> globus_core will automatically be built for you. If you're building
> something using "globus-makefile-header", though, you will need to
> install globus_core yourself. Install it with the following command:
> 
> $ $GLOBUS_LOCATION/sbin/gpt-build -nosrc <flavor>
> 
> Where flavor is the flavor you're passing to globus-makefile-header.
> ----------------------------------------------------------------------
> 
> Ninf-G uses various Globus APIs, so it does require files and tools
> such as globus-makefile-header and globus-build-env-<flavor>.sh for
> compiling GT-based middleware.
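> 
> Concretely, a minimal sketch of the assumed installation steps, using
> the RPM names above, GLOBUS_LOCATION=/opt/globus as elsewhere in this
> thread, and the gcc32dbg flavor:
> 
>     $ rpm -ivh gpt-VDT1.2.2rh9-1.i386.rpm \
>                vdt_globus_essentials-VDT1.2.2rh9-1.i386.rpm \
>                vdt_globus_sdk-VDT1.2.2rh9-1.i386.rpm
>     $ export GLOBUS_LOCATION=/opt/globus
>     # build globus_core for the gcc32dbg flavor; this is what generates
>     # the globus-build-env-gcc32dbg.sh used by globus-makefile-header
>     $ $GLOBUS_LOCATION/sbin/gpt-build -nosrc gcc32dbg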
> 
> Thanks,
> 
> P.S. These three RPMs are the minimum requirements for installing Ninf-G;
>      of course, you also need to install the other RPMs, such as
>      rm_server, for runtime.


> From: Olivier van der Aa [mailto:o.van-der-aa at imperial.ac.uk] 
> Sent: Tuesday, April 25, 2006 1:31 AM
> To: Yoshio Tanaka
> Cc: zhengc at sdsc.edu; Erwin.Laure at cern.ch; 
> m.aggarwal at imperial.ac.uk; yusuke.tanimura at aist.go.jp
> Subject: Re: Status
> 
> 
> > Hi Olivier,
> > 
> > Let me ask two more questions:
> > 
> > 1. Does /opt/globus/libexec/globus-build-env-gcc32dbg.sh exist?
> 
> I only have the following .sh files in the /opt/globus/libexec/ directory:
> 
> /opt/globus/libexec/globus-gram-protocol-constants.sh
> /opt/globus/libexec/globus-personal-gatekeeper-version.sh
> /opt/globus/libexec/globus-sh-tools.sh
> /opt/globus/libexec/globus-sh-tools-vars.sh
> 
> > 
> > 2. What is the output of the following command?
> >     % globus-makefile-header --flavor=gcc32dbg globus_io
> 
> ERROR: Cannot open /opt/globus/libexec/globus-build-env-gcc32dbg.sh!
> (Hint: This error is most commonly caused when no packages of the
> specified flavor type are installed.)
> 
> Our Globus was installed from RPMs that can be found in the VDT 1.2.2
> toolkit: http://vdt.cs.wisc.edu/native_packages/1.2.2/rpm/stable/rh9/
> 
> Could you check which RPM provides
> /opt/globus/libexec/globus-build-env-gcc32dbg.sh?
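> 
> For what it's worth, a minimal sketch of how one could check that,
> assuming the downloaded RPM files are at hand (rpm -qf only works on
> a host where the file is actually installed):
> 
>     # which installed package owns the file (on a host that has it)
>     $ rpm -qf /opt/globus/libexec/globus-build-env-gcc32dbg.sh
>     # or search the downloaded RPMs for the file
>     $ for r in *.rpm; do rpm -qlp $r | grep -q globus-build-env && echo $r; done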
> 
> Cheers, Olivier.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Job_submission_chain_diagram_action-UI-RB-CE-WN.pdf
Type: application/pdf
Size: 7114 bytes
Desc: not available
Url : http://www.ogf.org/pipermail/gin-ops/attachments/20060428/73c19fac/attachment.pdf 

