Single test per file or not?

Dave Fugate dfugate at microsoft.com
Wed Jul 27 15:42:37 PDT 2011


Allen's thoughts on this are an accurate reflection of my own as well. On one end of the spectrum there are test files like this<http://hg.ecmascript.org/tests/test262/file/1f621c6a8726/external/contributions/Google/sputniktests/tests/Conformance/15_Native_ECMA_Script_Objects/15.8_The_Math_Object/15.8.2_Function_Properties_of_the_Math_Object/15.8.2.17_sqrt/S15.8.2.17_A6.js> which have ~60 individual test cases packed into them. Long term I'd love to see these split up; it's just kind of low priority at this point. Maybe we could break up the "worst offenders" in the not-too-distant future, though...

My best,

Dave

From: test262-discuss-bounces at mozilla.org [mailto:test262-discuss-bounces at mozilla.org] On Behalf Of Allen Wirfs-Brock
Sent: Wednesday, July 27, 2011 8:02 AM
To: Rick Waldron
Cc: test262-discuss at mozilla.org
Subject: Re: Single test per file or not?

My original intent in putting together the first version of test262 and its predecessor esconform @ codeplex was that each test should test only a single requirement of the specification. Most of the original Microsoft tests were written that way. However, the Sputnik tests were not written in that manner. When we initially integrated Sputnik we tried to mechanically break up the multiple-test files. It didn't work so well, and at the time the total number of tests overstressed the test driver, so we backed off from doing the conversion.

As a matter of policy, I think we should expect new tests to be done in the single-test-per-file manner for the reasons that David and Rick articulate. It would be nice for someone to work on cleaning up the Sputnik tests, but I guess I would prioritize that below creating new tests that fill in current coverage gaps.

Allen



On Jul 27, 2011, at 7:03 AM, Rick Waldron wrote:


David,

Thanks for including me in this discussion. Dave Fugate and I recently had an exchange regarding granularity that resulted in my suggesting that tests should be broken down to one aspect per test.

To illustrate:
15.1.1.3 undefined
The value of undefined is undefined (see 8.1). This property has the attributes { [[Writable]]: false, [[Enumerable]]: false, [[Configurable]]: false }.


The test I had referred to is here:
http://samples.msdn.microsoft.com/ietestcenter/Javascript/ES15.1.html

function testcase() {
  // 'global' is assumed to refer to the global object
  var desc = Object.getOwnPropertyDescriptor(global, 'undefined');
  // all three attribute checks are bundled into a single pass/fail result
  if (desc.writable === false &&
      desc.enumerable === false &&
      desc.configurable === false) {
    return true;
  }
}


Each of the property descriptor conditions should be a single, stand-alone test; in total there would be 4 tests covering the single unit (the unit being the whole of the implementation for the Global Object value property "undefined").
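To make the split concrete, here is a rough sketch of what two of those stand-alone tests might look like (the file names are hypothetical, and 'global' again stands in for a reference to the global object):

// 15.1.1.3-1.js (hypothetical file name): checks only [[Writable]]
function testcase() {
  var desc = Object.getOwnPropertyDescriptor(global, 'undefined');
  return desc.writable === false;
}

// 15.1.1.3-2.js (hypothetical file name): checks only [[Enumerable]]
function testcase() {
  var desc = Object.getOwnPropertyDescriptor(global, 'undefined');
  return desc.enumerable === false;
}

That way, a failure report points directly at the attribute that is wrong rather than at the descriptor check as a whole.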


Rick


On Wed, Jul 27, 2011 at 8:02 AM, David Bruant <david.bruant at labri.fr<mailto:david.bruant at labri.fr>> wrote:
[+Rick Waldron, because he had a discussion on this topic with Dave Fugate on Twitter iirc]

Le 27/07/2011 13:28, Geoffrey Sneddon a écrit :

While the current test262 runner makes the assumption that there is only one test per file (see the implementation of ES5Harness.registerTest), the WebWorker-based demo MS showed off a while back allowed multiple tests per file. Seeing as both are, as I understand it, by the same group of people, this is an interesting change.

Is it intended to allow multiple tests per file, or should it be limited to one test per file (and hence only one call to ES5Harness.registerTest)?
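(For reference, a single registration with the current runner looks roughly like the sketch below; the metadata fields shown are an assumption based on the ES5 harness tests and may differ in detail, and the id/path values are hypothetical:

ES5Harness.registerTest({
  id: "15.1.1.3-0-1",                  // hypothetical test id
  path: "TestCases/15.1.1.3-0-1.js",   // hypothetical path
  description: "undefined is non-writable",
  test: function testcase() {
    var desc = Object.getOwnPropertyDescriptor(global, 'undefined');
    return desc.writable === false;
  }
});

The one-test-per-file assumption falls out of each file containing exactly one such call.)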
This is an interesting topic. Granularity.
I have myself been a bit annoyed once or twice by this issue. Typically, I would run the tests, see one failing, and try to see what was wrong. I can't remember which, but sometimes the test was testing several things at once, and by doing so it was harder to track down where the non-conformance came from. If I recall correctly, a good share of the imported Sputnik tests tend to do this.

I have not seen a rationale, guidelines, or rules discussing test granularity, and I think there should be some. For the purpose of a conformance test suite, the ultimate goal of a test should be to make it possible to spot instantly where the non-conformance issue comes from.
There are two things that can help out:
1) test description
I have noticed that descriptions aren't always perfectly accurate. I will report bugs on that as I find time.
2) test granularity

You may disagree, and I'd be happy to have a discussion on how tests should be designed and maybe provide a set of rules/good practices/guidelines.

Off the top of my head, I see one problem, which is that some tests have dependencies and rely on other parts of the spec being conformant. So a failure in a test can be caused by what the test is testing or by one of its "conformance dependencies". I have no idea how to help out with this issue, but I wanted to point it out in case others had ideas.

David

_______________________________________________
test262-discuss mailing list
test262-discuss at mozilla.org<mailto:test262-discuss at mozilla.org>
https://mail.mozilla.org/listinfo/test262-discuss
