Single test per file or not?

Rick Waldron waldron.rick at gmail.com
Fri Jul 29 14:07:13 PDT 2011


To clarify, I don't think individual test _files_ are entirely necessary; I'm
more concerned with the tests themselves. The idea of a test generator and
test-generation data is appealing.

On Thu, Jul 28, 2011 at 3:38 AM, Mark S. Miller <erights at google.com> wrote:

> On Wed, Jul 27, 2011 at 3:42 PM, Dave Fugate <dfugate at microsoft.com> wrote:
>
>>  Allen’s thoughts on this are an accurate reflection of my own as well.
>> On one end of the spectrum there are test files like this one
>> <http://hg.ecmascript.org/tests/test262/file/1f621c6a8726/external/contributions/Google/sputniktests/tests/Conformance/15_Native_ECMA_Script_Objects/15.8_The_Math_Object/15.8.2_Function_Properties_of_the_Math_Object/15.8.2.17_sqrt/S15.8.2.17_A6.js>
>> which have ~60 individual test cases packed into them.  Long term I’d love
>> to see these split up; it’s just kind of low priority at this point.  Maybe
>> we could break up the “worst offenders” in the not-too-distant future,
>> though…
>>
>
> The file you link to is an excellent example for a point I was about to
> bring up anyway and just discussed with Dave. If this file were broken up
> into 64 separate source files, it would be a maintenance nightmare compared
> to the current one. Nevertheless, the need to break up the tests it performs
> into separate cases, for purposes of exclusion, running, reporting, etc, is
> compelling. I propose that such tests be converted into test generators. It
> would continue to be maintained as compact sources; but it would generate
> tests, with predictably mangled names, each of which can be individually
> excluded, run, reported, etc.
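A minimal sketch of the generator idea described above (the case table, `generateTests`, and the `S15.8.2.17_*` mangling scheme are all illustrative assumptions, not actual test262 machinery): the compact source holds only the case data, and the generator emits one test per case with a stable, predictable name.

```javascript
// Illustrative sketch only: a compact "source" of test-case data for
// Math.sqrt (15.8.2.17), plus a generator that emits one test per case.
// Each generated test gets a predictably mangled name, so it can be
// individually excluded, run, and reported.
const cases = [
  { id: "A6_T1", arg: 0.25, expected: 0.5 },
  { id: "A6_T2", arg: 4, expected: 2 },
  { id: "A6_T3", arg: 144, expected: 12 },
];

function generateTests(sectionId, caseList) {
  return caseList.map((c) => ({
    // Stable mangled name: exclusion lists can refer to it directly.
    name: `S${sectionId}_${c.id}`,
    // The generated test body; generated files would not be "sources".
    source:
      `if (Math.sqrt(${c.arg}) !== ${c.expected}) ` +
      `throw new Error("Math.sqrt(${c.arg}) !== ${c.expected}");`,
  }));
}

const tests = generateTests("15.8.2.17", cases);
tests.forEach((t) => console.log(t.name));
```

Whether the generated files are checked in or produced at build time is a packaging choice; the key property is that the mangled names stay stable across regenerations.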
>
> Another example is
> http://codereview.appspot.com/4182070/diff/10001/tests/Generate/stdheap.json
> which is a JSON serialization of the specified initial ES5 heap state.
> This is currently used by http://www.erights.org/tests/testjs/
> to test over 4K individual attributes. At present it is written as one
> monolithic test, which is why it isn't checked in anywhere. I propose that
> it be added to test262 as something to generate 4K separate attribute tests,
> each with a separate stable mangled name. These generated tests would not be
> considered sources.
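That idea could be sketched as follows; the tiny `heapSpec` object here merely stands in for the real stdheap.json (which covers over 4K attributes), and every identifier in this sketch is invented for illustration:

```javascript
// Illustrative stand-in for stdheap.json: each entry pins down the
// expected attributes of one property of the initial heap.
const heapSpec = {
  "Object.keys": { writable: true, enumerable: false, configurable: true },
  "Math.PI": { writable: false, enumerable: false, configurable: false },
};

// Generate one separately named test per (property, attribute) pair.
function* attributeTests(spec) {
  for (const [path, attrs] of Object.entries(spec)) {
    for (const [attr, expected] of Object.entries(attrs)) {
      yield {
        // Stable mangled name derived from the property path.
        name: `stdheap_${path.replace(/\./g, "_")}_${attr}`,
        run() {
          const [objName, propName] = path.split(".");
          const desc = Object.getOwnPropertyDescriptor(
            globalThis[objName],
            propName
          );
          return desc !== undefined && desc[attr] === expected;
        },
      };
    }
  }
}

for (const t of attributeTests(heapSpec)) {
  console.log(t.name, t.run() ? "PASS" : "FAIL");
}
```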
>
>
>> My best,
>>
>> Dave
>>
>> *From:* test262-discuss-bounces at mozilla.org [mailto:test262-discuss-bounces at mozilla.org] *On Behalf Of* Allen Wirfs-Brock
>> *Sent:* Wednesday, July 27, 2011 8:02 AM
>> *To:* Rick Waldron
>> *Cc:* test262-discuss at mozilla.org
>> *Subject:* Re: Single test per file or not?
>>
>> My original intent in putting together the first version of test262 and
>> its predecessor, esconform @ codeplex, was that each test should test only a
>> single requirement of the specification.  Most of the original Microsoft
>> tests were written that way.  However, the Sputnik tests were not written
>> in that manner.  When we initially integrated Sputnik we tried to
>> mechanically break up the multi-test files.  It didn't work so well, and at
>> the time the total number of tests overstressed the test driver, so we
>> backed off from doing the conversion.
>>
>> As a matter of policy, I think we should expect new tests to be written in
>> the single-test-per-file manner, for the reasons that David and Rick
>> articulate.  It would be nice for someone to work on cleaning up the Sputnik
>> tests, but I would prioritize that below creating new tests that
>> fill in the current coverage gaps.
>>
>> Allen
>>
>> On Jul 27, 2011, at 7:03 AM, Rick Waldron wrote:
>>
>> David,
>>
>> Thanks for including me in this discussion. Dave Fugate and I recently had
>> an exchange regarding granularity, which resulted in my suggesting that tests
>> should be broken down to one aspect per test.
>>
>> To illustrate:
>>
>> 15.1.1.3 undefined
>>
>> The value of undefined is undefined (see 8.1). This property has the
>> attributes { [[Writable]]: false, [[Enumerable]]: false, [[Configurable]]:
>> false }.
>>
>> The test I had referred to is here:
>> http://samples.msdn.microsoft.com/ietestcenter/Javascript/ES15.1.html
>>
>> function testcase() {
>>   var desc = Object.getOwnPropertyDescriptor(global, 'undefined');
>>   if (desc.writable === false &&
>>       desc.enumerable === false &&
>>       desc.configurable === false) {
>>     return true;
>>   }
>> }
>>
>> Each of the property descriptor conditions should be a single stand-alone
>> test; in total there would be 4 tests covering the single unit (the unit
>> being the whole of the implementation for the Global Object value property
>> "undefined").
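That split might look like the following sketch; the harness shape and test names here are invented for illustration and are not the actual ES5Harness API:

```javascript
// Illustrative only: one stand-alone test per requirement of 15.1.1.3,
// so a failure points directly at the non-conformant attribute.
var desc = Object.getOwnPropertyDescriptor(globalThis, "undefined");

var tests = {
  "15.1.1.3_value": function () { return globalThis.undefined === void 0; },
  "15.1.1.3_writable": function () { return desc.writable === false; },
  "15.1.1.3_enumerable": function () { return desc.enumerable === false; },
  "15.1.1.3_configurable": function () { return desc.configurable === false; },
};

for (var name in tests) {
  console.log(name + ": " + (tests[name]() ? "PASS" : "FAIL"));
}
```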
>>
>> Rick
>>
>> On Wed, Jul 27, 2011 at 8:02 AM, David Bruant <david.bruant at labri.fr>
>> wrote:
>>
>> [+Rick Waldron, because he had a discussion on this topic with Dave Fugate
>> on Twitter, IIRC]
>>
>> On 27/07/2011 13:28, Geoffrey Sneddon wrote:
>>
>> While the current test262 runner makes the assumption that there is only
>> one test per file (see the implementation of ES5Harness.registerTest), the
>> WebWorker-based demo MS showed off a while back allowed multiple tests per
>> file. Seeing as both are, as I understand it, by the same group of people,
>> this is an interesting change.
>>
>> Is it intended to allow multiple tests per file, or should there be a limit
>> of one test per file (and hence only one call to ES5Harness.registerTest)?
>>
>>  This is an interesting topic: granularity.
>> I have been annoyed once or twice by this issue myself. Typically, I
>> would run the tests, see one failing, and try to figure out what was wrong. I
>> can't remember which, but sometimes the test was testing several things at
>> once, which made it harder to track down where the non-conformance
>> came from. If I recall correctly, a good share of the imported Sputnik tests
>> tend to do this.
>>
>> I have not seen a rationale, guidelines, or rules discussing test
>> granularity, and I think that should be written down. I think that for the
>> purposes of a conformance test suite, the ultimate goal of a test should be
>> to make it possible to spot instantly where the non-conformance issue comes from.
>> There are two things that can help with this:
>> 1) Test descriptions.
>> I have noticed that they aren't always perfectly accurate. I will report
>> bugs on that as I find time.
>> 2) Test granularity.
>>
>> You may disagree, and I'd be happy to have a discussion on how tests should
>> be designed, perhaps producing a set of rules/good practices/guidelines.
>>
>> Off the top of my head, I see one problem, which is that some tests have
>> dependencies and rely on other parts of the spec being conformant. So a
>> failure in a test can be caused by what the test is testing or by one of its
>> "conformance dependencies". I have no idea how to help with this issue,
>> but I wanted to point it out in case others had ideas.
>>
>> David
>>
>>
>> _______________________________________________
>> test262-discuss mailing list
>> test262-discuss at mozilla.org
>> https://mail.mozilla.org/listinfo/test262-discuss
>>
>
>
> --
>     Cheers,
>     --MarkM
>