[cddlm] Interoperability testing problems

Ayla Debora Dantas de Souza - Projeto Ourgrid ayla at dsc.ufcg.edu.br
Thu May 18 12:03:03 CDT 2006


Hi Steve,

that's a good idea, but I think the main problem remains. Even if we 
have a generic CDL that all implementations use, we still need the 
implementation-specific one in order to deploy a component there, 
don't you think? That happens because a component that we deploy here 
may not be deployable in another implementation. For instance, our 
implementation deploys .war files specified through the code base.
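
To make that concrete, here is a sketch of the kind of 
implementation-specific CDL we use; the component name, the URL and 
the exact cmp element name are only illustrative:

<cdl:cdl>
    <cdl:configuration>
        <WebApplication>
            <!-- our implementation fetches the .war named in the
                 code base and deploys it into its container -->
            <cmp:codeBase>http://example.org/components/hello.war</cmp:codeBase>
        </WebApplication>
    </cdl:configuration>
</cdl:cdl>

An implementation that launches components through a command path 
would not know what to do with a descriptor like this, which is why 
we cannot simply point the same tests at another endpoint.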

Thanks,
  Ayla

Steve Loughran wrote:

> Ayla Debora Dantas de Souza - Projeto Ourgrid wrote:
>
>> Hi all,
>>
>> while performing interoperability tests, we have noticed that it is 
>> not possible to test other endpoints simply by changing the Portal 
>> EPR. We must also use CDLs that work on the given implementation, 
>> and configure our tests with those CDLs. This happens because 
>> CodeBase and CommandPath are used in different ways across the 
>> implementations, as the Component Model allows. Therefore, in order 
>> to perform some of the tests, we need CDLs that work on each other's 
>> implementations, or the Test Plans need to define a CDL that works 
>> in all of them. For now, we have exchanged some CDLs with Satish to 
>> test the NEC implementation. Is this the correct way to proceed? We 
>> are not sure it is a good way to test interoperability. We would 
>> also like to know what definition of interoperability we will use 
>> to show that two different implementations interoperate.
>>
>> Thanks,
>>  Ayla
>>
>
> I think we are going to need some well-known components, at well-known 
> URLs, that every implementation can import.
>
> We could have some .cdl file you can import from some URI that is 
> clearly *not* a simple URL:
>
> urn:isbn:0-201-6198-0    (the ISBN of Distributed Systems, 3rd edition, 
> Coulouris & Dollimore; it was within arm's reach)
>
> in there we could define some standard operations:
>
> -create files and dirs, delete them
> -assert facts
>
> then you can deploy stuff that imports and extends the base components. 
> There's no need to worry about the differences in command path, or the 
> differences in platform. That gets handled in the base .cdl files, 
> which we can write together and then extend. That is, every team has a 
> custom urn:isbn:0-201-6198-0 that could be something like
>
>
> <cdl:cdl>
>     <cdl:import location="urn:isbn:0-201-18059-6" />
>     <cdl:configuration
>         xmlns:base="urn:isbn:0-201-18059-6"
>         xmlns:ext="urn:isbn:0-201-6198-0">
>         <ext:touch cdl:extends="base:touch">
>             <cmp:commandPath>...</cmp:commandPath>
>         </ext:touch>
>         <ext:fileExists cdl:extends="base:fileExists">
>             <cmp:commandPath>...</cmp:commandPath>
>         </ext:fileExists>
>     </cdl:configuration>
> </cdl:cdl>
>
> Behind that there is some base cdl file that we can keep in CVS, one 
> that defines the abstract components
>
> <cdl:cdl>
>     <cdl:configuration xmlns:base="urn:isbn:0-201-18059-6"> <!-- C&D, 1st edition -->
>         <base:touch>
>         </base:touch>
>         <base:fileExists>
>             <minimumSize>0</minimumSize>
>         </base:fileExists>
>     </cdl:configuration>
> </cdl:cdl>
>
> I'm thinking of file operations as they are generally useful, and 
> nicely side-effecting.
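>
> A test CDL in the plans could then just import the well-known URN and 
> reference the shared components. Roughly, with placeholder component 
> and property names (and assuming the instances sit in a cdl:system 
> section so they actually get deployed):
>
> <cdl:cdl>
>     <cdl:import location="urn:isbn:0-201-6198-0" />
>     <cdl:system xmlns:ext="urn:isbn:0-201-6198-0">
>         <!-- create a marker file, then assert that it exists -->
>         <createMarker cdl:extends="ext:touch">
>             <filename>interop-marker.txt</filename>
>         </createMarker>
>         <checkMarker cdl:extends="ext:fileExists">
>             <filename>interop-marker.txt</filename>
>         </checkMarker>
>     </cdl:system>
> </cdl:cdl>
>
> Because the command path lives in each team's own extension file, the 
> same test CDL should run unchanged against every implementation.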
>
>




