Single test per file or not?

Dave Fugate dfugate at
Wed Jul 27 15:42:37 PDT 2011

Allen's thoughts on this are an accurate reflection of my own as well.  On one end of the spectrum there are test files like this<> which have ~60 individual test cases packed into them.  Long term I'd love to see these split up; it's just fairly low priority at this point.  Maybe we could break up the "worst offenders" in the not-too-distant future, though...

My best,


From: test262-discuss-bounces at [mailto:test262-discuss-bounces at] On Behalf Of Allen Wirfs-Brock
Sent: Wednesday, July 27, 2011 8:02 AM
To: Rick Waldron
Cc: test262-discuss at
Subject: Re: Single test per file or not?

My original intent in putting together the first version of test262 and its predecessor, esconform at CodePlex, was that each test should test only a single requirement of the specification.  Most of the original Microsoft tests were written that way.  However, the Sputnik tests were not written in that manner.  When we initially integrated Sputnik we tried to mechanically break up the multiple-test files.  It didn't work very well, and at the time the total number of tests overstressed the test driver, so we backed off from doing the conversion.

As a matter of policy, I think we should expect new tests to be written in the single-test-per-file manner for the reasons that David and Rick articulate.  It would be nice for someone to work on cleaning up the Sputnik tests, but I would prioritize that below creating new tests that fill in current coverage gaps.


On Jul 27, 2011, at 7:03 AM, Rick Waldron wrote:


Thanks for including me in this discussion. Dave Fugate and I recently had an exchange regarding granularity that resulted in my suggesting that tests should be broken down to one aspect per test.

To illustrate: undefined
The value of undefined is undefined (see 8.1). This property has the attributes { [[Writable]]: false, [[Enumerable]]: false, [[Configurable]]: false }.

The test I had referred to is here:

function testcase() {
  var desc = Object.getOwnPropertyDescriptor(global, 'undefined');
  if (desc.writable === false &&
      desc.enumerable === false &&
      desc.configurable === false) {
    return true;
  }
  return false;
}
Each of the property descriptor conditions should be a single stand-alone test; in total there would be four tests covering the single unit (the unit being the whole of the implementation for the global object's value property "undefined").
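A rough sketch of that split, reusing the same `global` binding and descriptor check as the test above (the function names are illustrative, not actual test262 test ids):

```javascript
// One stand-alone test per aspect of the global `undefined` property.
// Names below are hypothetical examples, not real test262 identifiers.
var desc = Object.getOwnPropertyDescriptor(global, 'undefined');

function test_undefined_not_writable() {
  return desc.writable === false;
}
function test_undefined_not_enumerable() {
  return desc.enumerable === false;
}
function test_undefined_not_configurable() {
  return desc.configurable === false;
}
function test_undefined_value() {
  // The value itself is the fourth aspect covered by the spec text.
  return global.undefined === undefined;
}
```

With this split, a single failing test immediately names the one attribute an implementation gets wrong.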


On Wed, Jul 27, 2011 at 8:02 AM, David Bruant <david.bruant at> wrote:
[+Rick Waldron, because he had a discussion on this topic with Dave Fugate on Twitter iirc]

On 27/07/2011 13:28, Geoffrey Sneddon wrote:

While the current test262 runner assumes that there is only one test per file (see the implementation of ES5Harness.registerTest), the WebWorker-based demo MS showed off a while back allowed multiple tests per file. Seeing as both are, as I understand it, by the same group of people, this is an interesting change.

Is it intended to allow multiple tests per file, or should there be limits to one test per file (and hence only one call to ES5Harness.registerTest)?
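For illustration, the one-registration-per-file convention Geoffrey mentions can be sketched with a stub harness (this `ES5Harness` stub and the test id below are assumptions for the sake of the example; the real harness does more, but each file makes one registerTest call with a test object):

```javascript
// Minimal stand-in for the harness, just enough to show the shape.
var ES5Harness = {
  tests: [],
  registerTest: function (test) { this.tests.push(test); }
};

// Contents of a hypothetical single-test file (id and description
// are illustrative, not real test262 identifiers):
ES5Harness.registerTest({
  id: 'example-undefined-not-writable',
  description: 'The global undefined property is not writable',
  test: function testcase() {
    var desc = Object.getOwnPropertyDescriptor(global, 'undefined');
    return desc.writable === false;
  }
});

// A driver that assumes one test per file can simply run it:
var result = ES5Harness.tests[0].test();
```

A driver built on this assumption breaks (or silently drops tests) the moment a file registers twice, which is the crux of the question.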
This is an interesting topic: granularity.
I have myself been a bit annoyed once or twice by this issue. Typically, I would run the tests, see one failing, and try to see what was wrong. I can't remember which, but sometimes the test was testing several things at once, and by doing so it was harder to track down where the non-conformance came from. If I recall, a good share of the imported Sputnik tests tend to do this.

I have not seen a rationale, guidelines, or rules discussing test granularity, and I think there should be one. I think that for the purposes of a conformance test suite, the ultimate goal of a test should be to make it possible to spot instantaneously where the non-conformance issue comes from.
There are two things that can help out:
1) test description
I have noticed that it isn't always perfectly accurate. I will report bugs on that as I find time.
2) test granularity

You may disagree, and I'd be happy to have a discussion on how tests should be designed, and maybe to provide a set of rules/good practices/guidelines.

Off the top of my head, I see one problem: some tests have dependencies and rely on other parts of the spec being conformant. So a failure in a test can be caused by what the test is testing, or by one of its "conformance dependencies". I have no idea how to help with this issue, but I wanted to point it out in case others had ideas.
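One way to at least make such a dependency visible, sketched here with a hypothetical guard helper (not part of any real harness), is to check the prerequisite explicitly before the assertion under test:

```javascript
// Hypothetical guard: the check below is only meaningful if
// Object.getOwnPropertyDescriptor itself is conformant, so the
// test's "conformance dependency" is made explicit up front.
function dependenciesAvailable() {
  return typeof Object.getOwnPropertyDescriptor === 'function';
}

function testUndefinedNotConfigurable() {
  if (!dependenciesAvailable()) {
    // A failure reaching this branch signals a dependency problem,
    // not necessarily non-conformance in the feature under test.
    return false;
  }
  var desc = Object.getOwnPropertyDescriptor(global, 'undefined');
  return desc.configurable === false;
}
```

This does not remove the dependency, but it lets a reader of a failing test distinguish "the prerequisite is missing" from "the tested requirement is violated".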


test262-discuss mailing list
test262-discuss at


More information about the test262-discuss mailing list