Single test per file or not?

Rick Waldron waldron.rick at gmail.com
Wed Jul 27 07:03:10 PDT 2011


David,

Thanks for including me in this discussion. Dave Fugate and I recently had
an exchange regarding granularity that resulted in my suggesting that tests
should be broken down to one aspect per test.

To illustrate:
15.1.1.3 undefined
The value of undefined is undefined (see 8.1). This property has the
attributes { [[Writable]]: false, [[Enumerable]]: false, [[Configurable]]:
false }.


The test I had referred to is here:
http://samples.msdn.microsoft.com/ietestcenter/Javascript/ES15.1.html

function testcase() {
  // 'global' must resolve to the global object; the harness provides it,
  // e.g. via: var global = Function('return this')();
  var desc = Object.getOwnPropertyDescriptor(global, 'undefined');
  return desc.writable === false &&
         desc.enumerable === false &&
         desc.configurable === false;
}


Each of the property descriptor conditions should be a single standalone
test; in total there would be 4 tests covering the single unit (the unit
being the whole of the implementation of the Global Object value property
"undefined").


Rick


On Wed, Jul 27, 2011 at 8:02 AM, David Bruant <david.bruant at labri.fr> wrote:

> [+Rick Waldron, because he had a discussion on this topic with Dave Fugate
> on Twitter iirc]
>
> On 27/07/2011 13:28, Geoffrey Sneddon wrote:
>
>> While the current test262 runner makes the assumption that there is only
>> one test per file (see the implementation of ES5Harness.registerTest), the
>> WebWorker-based demo MS showed off a while back allowed multiple tests per
>> file. Seeing as both are, as I understand it, by the same group of people,
>> this is an interesting change.
>>
>> Is it intended to allow multiple tests per file, or should there be limits
>> to one test per file (and hence only one call to ES5Harness.registerTest)?
>>
> This is an interesting topic: granularity.
> I have myself been a bit annoyed once or twice by this issue. Typically, I
> would run the tests, see one failing, and try to see what was wrong. I
> can't remember which, but sometimes the test was testing several things at
> once, and by doing so, it made it harder to track down where the
> non-conformance came from. If I recall correctly, a good share of the
> imported Sputnik tests tend to do this.
>
> I have not seen a rationale, guidelines, or rules discussing test
> granularity, and I think there should be some. I think that for the purposes
> of a conformance test suite, the ultimate goal of a test should be to make it
> possible to spot instantaneously where a non-conformance issue comes from.
> There are two things that can help out:
> 1) test description
> I have noticed that these aren't always perfectly accurate. I will report
> bugs on that as I find time.
> 2) test granularity
>
> You may disagree, and I'd be happy to have a discussion on how tests should
> be designed, maybe producing a set of rules/good practices/guidelines.
>
> Off the top of my head, I see one problem, which is that some tests have
> dependencies and rely on other parts of the spec being conformant. So a
> failure in a test can be caused by what the test is testing or by one of its
> "conformance dependencies". I have no idea how to help out with this issue,
> but I wanted to point it out in case others had ideas.
>
> David
>
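To illustrate the "conformance dependency" point above: even a minimal,
single-aspect test can silently depend on other parts of the spec being
conformant. A hypothetical sketch (the function name is mine):

```javascript
// This test targets Object.defineProperty, but its verification step
// relies on Object.getOwnPropertyDescriptor also being conformant.
// If the latter is broken, this test fails even when defineProperty
// itself behaves correctly -- a "conformance dependency".
function testDefinePropertyNonWritable() {
  var obj = {};
  Object.defineProperty(obj, 'x', { value: 1, writable: false });
  // Trustworthy only if getOwnPropertyDescriptor is itself conformant.
  var desc = Object.getOwnPropertyDescriptor(obj, 'x');
  return desc.writable === false;
}
```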


More information about the test262-discuss mailing list