SpiderNode for Firefox chrome code
myk at mykzilla.org
Wed Dec 14 18:24:25 UTC 2016
> Gregory Szorc <gps at mozilla.com>
> 2016 December 13 at 22:36
> As a build system maintainer, I have a few more questions:
> * Do we need to compile Node and libuv as part of the build? If so,
> how will that impact build times?
Yes, we would need to compile Node and libuv (along with other libraries
that Node entrains) as part of the build, which would increase build times.
> * Will Node enforce a filesystem layout for e.g. packages? If so, is
> that compatible with our source layout? Or, how much overhead will
> putting it in place add to the build?
In my experience with Positron, Node's source layout has been compatible
with Mozilla's for the core modules. I suspect the same would be true
for third-party modules we vendor via NPM, although I haven't confirmed
that.
> * What about Windows (since Windows is typically an unloved OS for
> Node)?
Node supports Windows, but SpiderNode doesn't yet. We'd need to fix
that, perhaps by first enhancing mozbuild's GYP reader to generate
moz.build configurations from SpiderNode GYP files. Ted Mielczarek's
recent work on the GYP reader will help there, although there's more
work to do.
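For context, GYP build descriptions are Python-style dicts of targets;
a moz.build generator would consume input of roughly this shape. This
is an illustrative fragment (target and source names are made up, not
copied from the SpiderNode tree):

```
{
  'targets': [
    {
      'target_name': 'spidernode',      # illustrative name
      'type': 'static_library',
      'sources': [
        'src/node.cc',                  # illustrative sources
        'src/node_buffer.cc',
      ],
      'dependencies': [
        'deps/uv/uv.gyp:libuv',
      ],
    },
  ],
}
```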
> * Many tools in the Node ecosystem require running Node as part of
> building. How will this work with cross-compile builds? (See my
> comments in bug 1320607 on the challenges of requiring "host binaries.")
I'm unsure exactly which tools you're referring to, but Node itself (and
hence SpiderNode) doesn't require an existing Node binary in order to
build. Supporting other use cases, such as Node programs generally
(including Node-based build tools/toolchains), would be a different project.
> And some other general questions:
> * Will Node modules require yet another event loop? How does Node's
> event loop interact with existing event loops?
Yes, libuv has yet another event loop that we'd integrate with the Gecko
event loop by posting runnables to the Gecko loop when libuv events
occur. We've done this for Positron and SpiderWeb (our POC of SpiderNode
for Firefox). It's similar to what Electron did to integrate libuv and
Chromium, as the Electron team has described in a blog post.
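To make the "yet another event loop" point concrete: libuv's loop and
its phases are already observable from plain Node JS (nothing
SpiderNode-specific), and it's the turns of this loop that we'd bridge
to Gecko runnables:

```javascript
// Sketch: ordering across libuv loop phases in Node.
// process.nextTick runs before the loop turns; setImmediate runs in
// libuv's "check" phase on the next turn; synchronous code runs first.
const order = [];
process.nextTick(() => order.push('nextTick'));
setImmediate(() => {
  order.push('immediate');
  console.log('order:', order.join(' -> '));
  // → order: sync -> nextTick -> immediate
});
order.push('sync');
```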
> * What's the memory overhead? Does Node have granular memory
> management/monitoring that we've worked into JSMs via e.g.
> compartments and about:memory?
I don't have a good sense of this yet.
> * What's the multi-thread/process story?
Node programs themselves are single-threaded, although libuv uses
threads to parallelize operations. We've also prototyped running Node in
a child process, which would be useful for certain use cases, although I
suspect not for Firefox chrome code.
> * Is there overhead e.g. passing thousands of strings between Node
> modules and JSMs?
I don't think so, given that Node would be using the same JS
context/runtime as JSMs that are running in the same Firefox
chrome/parent process, so any string optimizations (like interning)
should apply. But there may be some overhead from the SpiderShim
functions that translate V8 string operations into their SpiderMonkey
equivalents.
> * Do we care about implications for sandboxing?
I'm unsure, but my current thinking would be to make this available
first in the parent process.
> * If we're going to perform a giant refactor of core components
> currently implemented in JS, would the time better be spent converting
> to Rust?
This is a difficult question to answer, as it depends on a variety of
factors. If Node were available, then I suspect that there would be a
subset of core components that it would make sense to refactor into Node
modules, another subset that would be better converted to Rust, and a
third subset that we should leave alone.