[cddlm] Minutes from CDDLM Interop session at GridWorld 2006.

Steve Loughran steve_loughran at hpl.hp.com
Wed Sep 13 16:10:34 CDT 2006


Notes taken by Spencer Dawkins (spencer at mcsr-labs.org). I've post-edited 
them slightly to be less critical of WS-A; now that the spec is final, it's 
just my implementation that doesn't work. On the subject of interop, even 
though we're testing across the Alpine and Axis SOAP stacks, I'm actually 
the author of the fault handling logic and servlets of both systems, so 
there could be a common error in both. Alternatively, my misunderstanding 
of the specs may have set the de facto standard for Java SOAP stacks.

We covered both the CDL and deployment API tests. There was a very nice 
demo by Satish, and I got to show my localhost deployment tests running 
against the public endpoints and collecting the results.



-----------
CDDLM is "Configuration Description, Deployment, and Lifecycle
Management".
This is the first specification on system administration issues in a
Grid.

The chairs for this session weren't available, so it was chaired by
Steve Loughran of HP Labs in Bristol, UK.

"Any applications without adequate system tests doesn't exist" - nice
slide.
All CDDLM applications "exist", by this definition. "Interoperation"
means that we work together for a set of agreed-upon test cases.

HP's "high performance datacenter" is an old laptop no one wants, hosted
at home because HP's security types became violent when hearing the
description of the application. It's nice that they are "behind a
firewall".

HP has a set of test components which are themselves deployable. This means
we can deploy them to other machines, test our own machines against them, etc.

The test cases cover all mandatory (non-optional) parts of the spec.
Different groups had different ideas about what was optional, so we had to
split up the test cases.

Session included 30 tests run against NEC and HP test boxes.

WS-Addressing is still troublesome. A lot of interoperability
problems happen at this layer (four schema versions have been
developed, for example).

NEC and UFCG both interoperate successfully. The problems we have are at the
top layers. HP broke its own SOAP stack: some bits work, some bits don't.

Publishing all SOAP logs on the net as a public service is recommended,
because it lets the caller see what the far end saw. It would be nice if
the client could pull these logs back in.

Tests can put excessive load on some systems (we should have thought more
about this when designing our tests).

We don't have consistent fault subcodes (it would be nice to know whether a
test that was defined to fail actually failed for the right reason). "That
doesn't work" isn't quite enough.
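
As an illustration only (the subcode QName and namespace below are
hypothetical, not anything defined by the CDDLM specs), a SOAP 1.2 fault
can carry a machine-readable subcode alongside the human-readable reason,
which is the kind of detail that would let a negative test check it failed
for the right reason:

    <!-- Sketch of a SOAP 1.2 fault with a distinguishing subcode.
         The "cddlm:invalid-lifecycle-transition" value is made up for
         illustration; real subcodes would come from the specification. -->
    <env:Fault xmlns:env="http://www.w3.org/2003/05/soap-envelope"
               xmlns:cddlm="http://example.org/cddlm/faults">
      <env:Code>
        <env:Value>env:Sender</env:Value>
        <env:Subcode>
          <env:Value>cddlm:invalid-lifecycle-transition</env:Value>
        </env:Subcode>
      </env:Code>
      <env:Reason>
        <env:Text xml:lang="en">Cannot move from "terminated" to "running"</env:Text>
      </env:Reason>
    </env:Fault>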

Patchy interoperation isn't just the result of incomplete implementations;
it also reflects differences in spec interpretation. This will be true for a
while.

We need a better X[HT]ML test result format.

We really like the CDL language test framework that's being used ("best ever").

We have lots of valid and invalid documents defined, and manifest files
that drive test execution.
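
As a purely hypothetical sketch (the element names, attributes, and file
paths below are invented for illustration; the real manifest format lives
in the test suite on Sourceforge), such a manifest might look like:

    <!-- Hypothetical manifest driving a batch of CDL parser tests. -->
    <testmanifest name="cdl-language-basics">
      <!-- documents the parser must accept -->
      <valid   file="valid/simple-system.cdl"/>
      <valid   file="valid/nested-references.cdl"/>
      <!-- documents the parser must reject, each with an expected outcome -->
      <invalid file="invalid/circular-reference.cdl"  expect="fault"/>
      <invalid file="invalid/unresolved-lazy-ref.cdl" expect="fault"/>
    </testmanifest>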

The tests are available on Sourceforge. Our experience is that good test
cases help specification designers talk about ambiguity, because everyone's
tests are failing.

CDL tests are passing, other tests are failing (at this time).

Have "near-100-percent" test case coverage. If everyone gets the same
results, we should have interoperability.

How do you know you're covering everything that needs to be covered? You
really can't. There are a lot more invalid things than valid things -
probably infinitely more. This comes down to the (in)competence of the
developers. When I write CDL and check it in, everyone else has to pass
my test too, which is nice, but we still don't know we have 100 percent
coverage.

When the WS-Addressing work started, there was no test coverage at all.
Things are better now.

The former GGF was not test-centric, and OASIS is even less so. Standards
bodies need this for agility. Now we're ahead of OASIS and W3C.

We're not sure we could ever define a complete set of fault codes, or that
we should try. We would need all possible fault codes in the specs, and
that's problematic.

WSRF does say that everything should throw a specific fault, but the
SOAP stack itself throws faults, so we can't always do what the spec says.
If I can deploy anything, why not deploy an unreliable proxy server so we
can have network failures? But we can't say this in our own specs. This is
the difference between specification and implementation.

What we've done is probably as close to the state of the art as
specification testing has gone.

Test results would include real-time and post-run display, stack traces,
logs from different machines, etc. One decision is machine-readable versus
human-readable output.

We're using a Swing GUI that shows more than success/failure - partial
success, test in progress, etc. We would like a results file format that
everyone can read, but we don't have one yet.
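
To make the idea concrete, here is a minimal sketch of what such a shared
result format might look like (all element and attribute names, hosts, and
paths are invented; no such format was agreed at the session):

    <!-- Hypothetical per-test result record; names are illustrative only. -->
    <testresult suite="deployment-api" test="deploy-invalid-cdl"
                host="deployapi.example.org" started="2006-09-12T10:14:00Z">
      <!-- e.g. passed, failed, partial-success, in-progress -->
      <status>partial-success</status>
      <expected>fault</expected>
      <actual>fault</actual>
      <!-- link back to the published SOAP log for this exchange -->
      <log href="logs/deploy-invalid-cdl-soap.xml"/>
    </testresult>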

With virtualization, some machines that you ran your tests on may not exist
the next morning, which makes it hard to duplicate tests because the
virtual host may be hosted somewhere different on a re-run.

We also looked at the NEC implementation, which uses a script to verify the
results of the tests. It is a Java component implementation, shown with an
API front end to the implementation.

The remaining interoperability work is only the common test deployments.

It's not particularly hard to resynchronize on different versions of the
same specification - file formats, etc.

Now that we have CDL and components working, we can focus on component
logs.

As far as the CDL language goes, we are complete barring new test cases.
For the deployment API, we think we're complete, but we may identify more
work as we gain experience. As we define more complex components, this
gets blurry because it's tied to the component model.

We went through most of the NEC tests as well as the HP tests.



