gps at mozilla.com
Wed Sep 11 22:36:29 UTC 2013
On 9/11/13 3:21 PM, Justin Dolske wrote:
> The recent discussion around minifying chrome JS (bug 903149) reminds me
> of another area for some easy mechanical wins... Image (PNG)
> optimization. See bugs 872082 and 631392 for the background, but the
> nutshell version is that PNG images often contain overhead, and there
> exist tools to losslessly reduce their filesize. But it's easy to forget
> to run these tools, and it's impossible to review such patches.
> The question is how to best make use of those tools in a more automated
> fashion. Some options:
> * Add an optimization step when building or packaging. This ensures we
> always ship optimized images, but is wasteful of resources. And it makes
> me vaguely uncomfortable to have an effectively-hidden step that changes
> what's checked in. (EG, if someone intentionally wants a color profile
> in some PNG but the tool strips it, that'll be confusing to track down.)
> * Fix everything with a one-time megapatch, then add an Hg hook to guard
> against checking in unoptimized images. This feels annoying -- people
> will forget (or not know in the first place), and you'd still need a
> separate way for people to actually fix images.
> * (Variation) Instead of a hook, make an automated test. But that feels
> even more annoying and resource-wasteful. It's a land-backout-fix-reland
> cycle.
> * Integrate this into the uplift process/checklist. Infrequent runs
> would be ok, but doesn't really feel like the kind of thing releng
> should have to deal with.
> * It just now occurs to me that we have an automated task for syncing
> blocklist and HSTS updates, maybe having it run a weekly optimization
> step would work.
> * Do nothing! Just make a script (or mach command?) to optimize images
> in the current patch and/or tree -- which we need to do anyway -- and
> hope people remember to use it. We don't require perfection (nor do we
> have it today), and this might turn out to be good enough.
> Thoughts? I'm leaning towards either of the last two.
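For concreteness, the "script or mach command" option might start out as something like the sketch below. A real tool would shell out to optipng/pngcrush; this illustrative pure-Python version only walks a tree and flags PNGs carrying text/time ancillary chunks that lossless optimizers typically strip (all names here are made up, and color-related chunks like iCCP/gAMA are deliberately left alone, per the color-profile concern above):

```python
#!/usr/bin/env python
# Sketch only: flags PNGs with strippable metadata chunks instead of
# actually optimizing them (a real mach command would run optipng).
import os
import struct
import sys

PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'
# Metadata chunks commonly removed by lossless optimizers. Color-affecting
# chunks (iCCP, gAMA, ...) are intentionally NOT listed here.
STRIPPABLE = {b'tEXt', b'zTXt', b'iTXt', b'tIME'}

def strippable_chunks(path):
    """Return the names of strippable ancillary chunks in a PNG file."""
    found = []
    with open(path, 'rb') as f:
        if f.read(8) != PNG_SIGNATURE:
            return found  # not a PNG; ignore
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            length, ctype = struct.unpack('>I4s', header)
            if ctype in STRIPPABLE:
                found.append(ctype.decode('ascii'))
            f.seek(length + 4, os.SEEK_CUR)  # skip payload + CRC
            if ctype == b'IEND':
                break
    return found

def scan_tree(root):
    """Print every PNG under root that still carries strippable chunks."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith('.png'):
                path = os.path.join(dirpath, name)
                chunks = strippable_chunks(path)
                if chunks:
                    print('%s: %s' % (path, ', '.join(chunks)))

if __name__ == '__main__':
    scan_tree(sys.argv[1] if len(sys.argv) > 1 else '.')
```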
Good idea! Assuming this is actually a good idea, my vote for
implementing would be to have the checked-in files match what is
shipped/packaged and to have automated verification (as part of |make
check|, packaging, etc) that the checked in files are in the proper
format. If you check in an unoptimized file, TBPL goes orange. Think
FAIL_ON_WARNINGS. We could even do it selectively, per-directory.
I prefer this because:
1) We want to minimize the differences between local developer builds
and shipped/packaged builds.
2) Repository hooks technically live outside the tree and can be
expensive on the version control server.
3) It's automated and enforced. "Regressions" are clearly visible and
the recourse is obvious.
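To sketch what such a |make check| gate could look like (the heuristic and names below are illustrative, not a proposal of record — a real check would invoke the actual optimizer and compare its output): treat a PNG as "unoptimized" if merely re-deflating its IDAT data at zlib's maximum level would already shrink it.

```python
# Sketch of an automated "is this PNG plausibly optimized?" check.
# Real tools (optipng, pngcrush) try many filter/strategy combinations;
# naive recompression is only an illustrative lower bound.
import struct
import zlib

PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'

def iter_chunks(data):
    """Yield (type, payload) for each chunk in a PNG byte string."""
    pos = 8  # skip the signature
    while pos + 8 <= len(data):
        length, ctype = struct.unpack_from('>I4s', data, pos)
        yield ctype, data[pos + 8:pos + 8 + length]
        pos += 12 + length  # length field + type + payload + CRC
        if ctype == b'IEND':
            break

def is_plausibly_optimized(data):
    """Return False if simply re-deflating IDAT beats the checked-in file."""
    if not data.startswith(PNG_SIGNATURE):
        return True  # not a PNG; nothing to enforce
    idat = b''.join(p for t, p in iter_chunks(data) if t == b'IDAT')
    if not idat:
        return True
    raw = zlib.decompress(idat)
    return len(zlib.compress(raw, 9)) >= len(idat)
```

A per-directory opt-in (a la FAIL_ON_WARNINGS) would then just decide which files get fed through this gate during |make check|.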