Websites/browserbench.org/ARES-6/Air/README.md

All about Air.js

Air.js is an ES6 benchmark. It tries to faithfully use new features like arrow functions, classes, for-of, and Map/Set, among others. Air.js doesn't avoid any features out of fear that they might be slow, in the hope that we might learn how to make those features fast by looking at how Air.js and other benchmarks use them.

This documents the motivation, design, and license of Air.js.

To run Air.js, simply open “Air.js/test.html” in your browser. It will only run correctly if your browser supports ES6.

Motivation

At the time that Air.js was written, most JavaScript benchmarks used ES5 or older versions of the language. ES6 testing mostly relied on microbenchmarks or conversions of existing tests to ES6. We try to use larger benchmarks to avoid over-optimizing for small pieces of code, and we avoid making changes to existing benchmarks because that approach has no limiting principle: if it's OK to change a benchmark to use a feature, does that mean we can also change it to remove the use of a feature we don't like? We feel that the best way to avoid falling into the trap of creating benchmarks that reinforce what some JS engine is already good at is to create a new benchmark from first principles.

We only recently completed our new JavaScript compiler, called B3. B3's backend, called Air, is very CPU-intensive and uses a combination of object-oriented and functional idioms in C++. Additionally, it relies heavily on high-speed maps and sets. It goes so far as to use customized map/set implementations - even more so than the rest of WebKit. This makes Air a great candidate for ES6 benchmarking. Air.js is a faithful ES6 implementation of Air. It pulls no punches: just as the original C++ Air was written with expressiveness as a top priority, Air.js is liberal in its use of modern ES6 idioms whenever this helps make the code more readable. Unlike the original C++ Air, Air.js doesn't exploit a deep understanding of compilers to make the code easy to compile.

Design

Air.js runs one of the more expensive Air phases, Air::allocateStack(). This turns abstract stack references into concrete stack references, by selecting how to lay out stack slots in the stack frame. This requires liveness analysis and an interference graph.

Air.js relies on three major ES6 features more heavily than most other benchmarks do:

  • Arrow functions. Like the C++ Air, Air.js uses a functional style of iterating most non-trivial data structures:

      inst.forEachArg((arg, role, type, width) => ...)
    

    This is because the functional style allows the callbacks to mutate the data being iterated: if the callback returns a non-null value, forEachArg() will replace the argument with that value. This would not have been possible with for-of.

  • For-of. Many Air data structures are amenable to for-of iteration. While the innermost loops tend to use functional iteration, pretty much all of the outer logic uses for-of heavily. For example:

      for (let block of code) // Iterate over the basic blocks
          for (let inst of block) // Iterate over the instructions in a block
              ...
    
  • Map/Set. The liveness analysis and Air::allocateStack() rely on maps and sets. For example, we use a liveAtHead map that is keyed by basic block. Its values are sets of live stack slots. This is a relatively crude way of doing liveness, but it is exactly how the original Air::LivenessAnalysis worked, so we view it as being quite faithful to how a sensible programmer might use Map and Set.
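The three idioms above can be illustrated together with a toy sketch. These are simplified stand-ins, not the real classes (those live in inst.js, basic_block.js, and liveness.js, and the real forEachArg() callback receives arg, role, type, and width rather than an index):

```javascript
// Toy versions of the Air.js idioms described above.
class Inst {
    constructor(args) { this.args = args; }
    // Functional iteration: if the callback returns a non-null value,
    // the argument is replaced in place, as in Air.js's forEachArg().
    forEachArg(func) {
        for (let i = 0; i < this.args.length; ++i) {
            let result = func(this.args[i], i);
            if (result != null)
                this.args[i] = result;
        }
    }
}

class Block {
    constructor(insts) { this.insts = insts; }
    // Exposing an iterator is what makes for-of work on blocks.
    [Symbol.iterator]() { return this.insts[Symbol.iterator](); }
}

let block = new Block([new Inst(["a", "b"]), new Inst(["b", "c"])]);

// For-of over the block; functional iteration over each instruction's args.
for (let inst of block)
    inst.forEachArg(arg => arg == "b" ? "B" : null); // rewrite "b" in place

// A Map keyed by an object, with Set values, in the style of liveAtHead.
let liveAtHead = new Map();
liveAtHead.set(block, new Set(["a", "B", "c"]));
```

Replacement via a non-null return value is the crux: for-of alone gives the callback no way to write back into the data structure being iterated.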

Air.js also uses some other ES6 features. For example, it uses a Proxy in one place, though we doubt that it's on a critical path. Air.js uses classes and let/const extensively, as well as symbols. Symbols are used as enumeration elements, and so they frequently show up as cases in switch statements.
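The symbols-as-enumeration pattern looks roughly like this (the names here are illustrative; the real opcode symbols are defined in symbols.js and opcode.js):

```javascript
// Symbols used as enumeration elements, appearing as switch cases.
const Add32 = Symbol("Add32");
const Move = Symbol("Move");

function describe(opcode) {
    switch (opcode) {
    case Add32:
        return "32-bit add";
    case Move:
        return "register move";
    default:
        throw new Error("Unknown opcode");
    }
}
```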

The workflow of an Air.js run is pretty simple: we do 150 runs of allocateStack on four IR payloads.

Each IR payload is a large piece of ES6 code that constructs an Air.js Code object, complete with blocks, temporaries, stack slots, and instructions. These payloads are generated by running the Air::dumpAsJS() phase just prior to the native allocateStack phase on the largest hot function in four major JS benchmarks, according to JavaScriptCore's internal profiling:

  • Octane/GBEmu, the executeIteration function.
  • Kraken/imaging-gaussian-blur, the gaussianBlur function.
  • Octane/Typescript, the scanIdentifier function.
  • Air.js, an anonymous closure identified by our profiler as ACLj8C.

These payloads allow Air.js to precisely replay allocateStack on those actual functions.
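Given that description, the run loop might be sketched as follows. The function names and the rebuild-per-iteration structure are assumptions based on the text (each payload reconstructs a fresh Code object, which is then handed to allocateStack); see test.js for the real driver:

```javascript
// Hypothetical sketch of the benchmark workflow: 150 runs of
// allocateStack over four pre-built IR payloads.
function runBenchmark(payloads, allocateStack, iterations = 150) {
    for (let i = 0; i < iterations; ++i) {
        for (let buildCode of payloads) {
            let code = buildCode(); // payload init: rebuilds the Code object
            allocateStack(code);    // the phase being measured
        }
    }
}
```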

It was an a priori goal of Air.js to spend most of the time in the allocateStack phase. This is a faithful reproduction of the C++ allocateStack phase, including its use of an abstract liveness analysis. It's abstract in the sense that the same liveness algorithm can be reused for temporaries, registers, or stack slots. In C++ this meant using templates, while in ES6 it means more run-time dynamic dispatch.
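One way that run-time dynamic dispatch could replace the C++ templates is an adapter object: the same liveness machinery works for temporaries, registers, or stack slots because the kind of thing being analyzed is described by an object with a common interface. The names below are illustrative, not the actual Air.js API:

```javascript
// Sketch of an "abstract" liveness analysis parameterized by an adapter
// at run time, where the C++ version was parameterized by a template.
class Liveness {
    constructor(adapter) { this.adapter = adapter; }
    // Every query about the underlying thing goes through the adapter.
    indexOf(thing) { return this.adapter.valueToIndex(thing); }
}

// An adapter describing stack slots; others could describe temporaries
// or registers without changing the Liveness class.
const stackSlotAdapter = {
    valueToIndex(slot) { return slot.index; }
};

let liveness = new Liveness(stackSlotAdapter);
```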

Each IR payload is executable code that allocates the IR, and about 15% of benchmark execution time is spent in that code. This is significant, but having learned this, we don't feel that it would be honest to try to change the efficiency of payload initialization. What if payload initialization were more expensive on our engine than on others? If it were, then such a change would not be fair.

Air.js validates its results. We added a Code hashing capability to both the C++ Air and Air.js, and we assert that each payload looks identical after allocateStack to what it would have looked like after the original C++ allocateStack. We also validate that payloads hash properly before allocateStack, to help catch bugs during payload initialization. We have not measured how long hashing takes, but it's an O(N) operation, while allocateStack is closer to O(N^2). We suspect that barring some engine pathologies, hashing should be much faster than allocateStack, and allocateStack should be where the bulk of time is spent.
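The shape of that validation is simple: hash the Code before the phase, run the phase, hash again, and compare both hashes against the expected values. The real Code hashing in Air.js is more involved; this sketch only shows the structure of the check:

```javascript
// Sketch of validation by hashing: catch payload-initialization bugs
// (before-hash) and phase bugs (after-hash) in one place.
function validate(code, expectedBefore, expectedAfter, phase) {
    if (code.hash() != expectedBefore)
        throw new Error("Payload hash mismatch before phase");
    phase(code);
    if (code.hash() != expectedAfter)
        throw new Error("Hash mismatch after phase");
}
```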

License

Copyright (C) 2016 Apple Inc. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Summary

At the time that Air.js was written, we weren't happy with the ES6 benchmarks that were available to us. Air.js uses some ES6 features in anger, in the hope that we can learn about possible optimization strategies by looking at this and other benchmarks.