Butterflies should be allocated in Auxiliary MarkedSpace instead of CopiedSpace and we should rewrite as much of the GC as needed to make this not a regression
https://bugs.webkit.org/show_bug.cgi?id=160125
Reviewed by Geoffrey Garen and Keith Miller.
JSTests:
Most of the things I did are properly covered by existing tests, but I found some simple cases of
unshifting that had sketchy coverage.
* stress/array-storage-array-unshift.js: Added.
* stress/contiguous-array-unshift.js: Added.
* stress/double-array-unshift.js: Added.
* stress/int32-array-unshift.js: Added.
Source/bmalloc:
I needed to tryMemalign, so I added such a thing.
* bmalloc/Allocator.cpp:
(bmalloc::Allocator::allocate):
(bmalloc::Allocator::tryAllocate):
(bmalloc::Allocator::allocateImpl):
* bmalloc/Allocator.h:
* bmalloc/Cache.h:
(bmalloc::Cache::tryAllocate):
* bmalloc/bmalloc.h:
(bmalloc::api::tryMemalign):
Source/JavaScriptCore:
In order to make the GC concurrent (bug 149432), we would either need to enable concurrent
copying or we would need to not copy. Concurrent copying carries a 1-2% throughput overhead
from the barriers alone. Considering that MarkedSpace does a decent job of avoiding
fragmentation, it's unlikely that it's worth paying 1-2% throughput for copying. So, we want
to get rid of copied space. This change moves copied space's biggest client over to marked
space.
Moving butterflies to marked space means having them use the new Auxiliary HeapCell
allocation path. This is a fairly mechanical change, but it caused performance regressions
everywhere, so this change also fixes MarkedSpace's performance issues.
At a high level the mechanical changes are:
- We use AuxiliaryBarrier instead of CopyBarrier.
- We use tryAllocateAuxiliary instead of tryAllocateStorage. I got rid of the silly
CheckedBoolean stuff, since it's so much more trouble than it's worth.
- The JITs have to emit inline marked space allocations instead of inline copied space
allocations.
- Everyone has to get used to zeroing their butterflies after allocation instead of relying
on them being pre-zeroed by the GC. Copied space would zero things for you, while marked
space doesn't.
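
As a concrete illustration of that last point, here is a minimal sketch of the new allocation
discipline, assuming the Heap::tryAllocateAuxiliary() entry point added by this patch; the
zeroing wrapper itself is hypothetical:

    // Sketch: marked space hands back dirty memory, so the client has to
    // zero (or otherwise fully initialize) the butterfly itself.
    void* tryAllocateZeroedAuxiliary(VM& vm, JSCell* intendedOwner, size_t bytes)
    {
        void* base = vm.heap.tryAllocateAuxiliary(intendedOwner, bytes);
        if (!base)
            return nullptr;
        memset(base, 0, bytes); // Copied space used to pre-zero this for us.
        return base;
    }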
That's about 1/3 of this change. But this led to performance problems, which I fixed with
optimizations that amounted to a major MarkedSpace rewrite:
- MarkedSpace always causes internal fragmentation for array allocations because the vector
length we choose when we resize usually leads to a cell size that doesn't correspond to any
size class. I got around this by making array allocations usually round up vectorLength to
the maximum allowed by the size class that we would have allocated in. Also,
ensureLengthSlow() and friends first make sure that the requested length can't just be
fulfilled with the current allocation size. This safeguard means that not every array
allocation has to do size class queries. For example, the fast path of new Array(length)
never does any size class queries, under the assumption that (1) the speed gained from
avoiding an ensureLengthSlow() call, which then just changes the vectorLength by doing the
size class query, is too small to offset the speed lost by doing the query on every
allocation and (2) new Array(length) is a pretty good hint that resizing is not very
likely.
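
In sketch form, the round-up amounts to something like the following; MarkedSpace::optimalSizeFor()
is the real query added by this patch, while the surrounding helper is illustrative:

    // Sketch: ask which size class the requested allocation would land in
    // anyway, and let the would-be slop become usable vector capacity.
    unsigned optimalVectorLength(size_t indexingHeaderSize, unsigned requestedLength)
    {
        size_t requestedBytes = indexingHeaderSize + requestedLength * sizeof(EncodedJSValue);
        size_t allocatedBytes = MarkedSpace::optimalSizeFor(requestedBytes);
        return static_cast<unsigned>((allocatedBytes - indexingHeaderSize) / sizeof(EncodedJSValue));
    }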
- Size classes in MarkedSpace were way too precise, which led to external fragmentation. This
changes MarkedSpace size classes to use a linear progression for very small sizes followed
by a geometric progression that naturally transitions to a hyperbolic progression. We want
hyperbolic sizes when we get close to blockSize: for example, the largest size we want is
payloadSize / 2 rounded down, to ensure we get exactly two cells with minimal slop. The
next size down should be payloadSize / 3 rounded down, and so on. After the last precise
size (80 bytes), we proceed using a geometric progression, but round up each size to
minimize slop at the end of the block. This naturally causes the geometric progression to
turn hyperbolic for large sizes. The size class configuration happens at VM start-up, so
it can be controlled with runtime options. I found that a base of 1.4 works pretty well.
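
The following sketch shows the shape of the progression. This is illustrative code, not the
actual initializer, which is parameterized by the runtime options mentioned below:

    #include <cstddef>
    #include <vector>

    // Sketch: precise 16-byte steps up to 80 bytes, then a geometric
    // progression with base ~1.4 in which each candidate is adjusted so
    // that cells tile the block payload evenly. Near blockSize that
    // adjustment makes the progression hyperbolic: payloadSize / 3, then
    // payloadSize / 2, each rounded down to an atom boundary.
    std::vector<size_t> computeSizeClasses(size_t payloadSize, double base = 1.4)
    {
        auto roundUp = [](size_t n) { return (n + 15) & ~static_cast<size_t>(15); };
        auto roundDown = [](size_t n) { return n & ~static_cast<size_t>(15); };

        std::vector<size_t> classes;
        for (size_t size = 16; size <= 80; size += 16)
            classes.push_back(size);
        for (double approx = 80 * base; approx <= payloadSize / 2; approx *= base) {
            size_t candidate = roundUp(static_cast<size_t>(approx));
            size_t cellsPerBlock = payloadSize / candidate;
            if (cellsPerBlock < 2)
                break; // The largest size we ever want is payloadSize / 2.
            size_t sizeClass = roundDown(payloadSize / cellsPerBlock);
            if (sizeClass != classes.back())
                classes.push_back(sizeClass);
        }
        return classes;
    }

The large allocation cutoff described below then truncates this list.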
- Large allocations caused massive internal fragmentation, since the smallest large
allocation had to use exactly blockSize, and the largest small allocation used
blockSize / 2. The next size up - the first large allocation size to require two blocks -
also had 50% internal fragmentation. This is because we required large allocations to be
blockSize aligned, so that MarkedBlock::blockFor() would work. I decided to rewrite all of
that. Cells no longer have to be owned by a MarkedBlock. They can now alternatively be
owned by a LargeAllocation. These two things are abstracted as CellContainer. You know that
a cell is owned by a LargeAllocation if the MarkedBlock::atomSize / 2 bit is set.
Basically, large allocations are deliberately misaligned by 8 bytes. This actually works
out great since (1) typed arrays won't use large allocations anyway since they have their
own malloc fallback and (2) large array butterflies already have an 8 byte header, which
means that the 8 byte base misalignment aligns the large array payload on a 16 byte
boundary. I took extreme care to make sure that the isLargeAllocation bit checks are as
rare as possible; for example, ExecState::vm() skips the check because we know that callees
must be small allocations. It's also possible to use template tricks to do one check for
cell container kind, and then invoke a function specialized for MarkedBlock or a function
specialized for LargeAllocation. LargeAllocation includes stubs for all MarkedBlock methods
that get used from functions that are template-specialized like this. That's mostly to
speed up the GC marking code. Most other code can use CellContainer API or HeapCell API
directly. That's another thing: HeapCell, the common base of JSCell and auxiliary
allocations, is now smart enough to do a lot of things for you, like HeapCell::vm(),
HeapCell::heap(), HeapCell::isLargeAllocation(), and HeapCell::cellContainer(). The size
cutoff for large allocations is runtime-configurable, so long as you don't choose something
so small that callees end up large. I found that 400 bytes is roughly optimal. This means
that the MarkedBlock size classes end up being:
16, 32, 48, 64, 80, 112, 160, 224, 320
The next size class would have been 432, but that's above the 400 byte cutoff. All of this
is configurable with --sizeClassProgression and --largeAllocationCutoff. You can see what
size classes you end up with by doing --dumpSizeClasses=true.
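
The container check itself is one bit test on the cell pointer. In sketch form (paraphrased; the
real predicates live in the new HeapCellInlines.h and their exact signatures may differ):

    // Sketch: MarkedBlock cells sit on 16-byte atom boundaries, while
    // LargeAllocation deliberately returns pointers misaligned by
    // atomSize / 2 = 8 bytes, so bit 3 of the address classifies a cell.
    inline bool HeapCell::isLargeAllocation() const
    {
        return reinterpret_cast<uintptr_t>(this) & (MarkedBlock::atomSize / 2);
    }

    inline CellContainer HeapCell::cellContainer() const
    {
        if (isLargeAllocation())
            return CellContainer(largeAllocation());
        return CellContainer(markedBlock());
    }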
- Copied space uses 64KB blocks, while marked space used to use 16KB blocks. Allocating a lot
of stuff in 16KB blocks was slower than allocating it in 64KB blocks because the GC had a
lot of per-block overhead. I removed this overhead: It's now 2x faster to scan all
MarkedBlocks because the list that contains the interesting meta-data is allocated on the
side, for better locality during a sequential walk. It's no longer necessary to scan
MarkedBlocks to find WeakSets, since the sets of WeakSets for eden scan and full scan are
maintained on-the-fly. It's no longer necessary to scan all MarkedBlocks to clear mark
bits because we now use versioned mark bits: to clear them, just increment the 64-bit
heap version. It's no longer necessary to scan retired MarkedBlocks while allocating
because marking retires them on-the-fly. It's no longer necessary to sort all blocks in
the IncrementalSweeper's snapshot because blocks now know if they are in the snapshot. Put
together, these optimizations allowed me to reduce block size to 16KB without losing much
performance. There is some small perf loss on JetStream/splay, but not enough to hurt
JetStream overall. I tried reducing block sizes further, to 4KB, since that is a
progression on membuster. That's not possible yet, since there is still enough per-block
overhead that such a reduction hurts JetStream too much. I filed a bug about improving
this further: https://bugs.webkit.org/show_bug.cgi?id=161581.
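
The versioned mark bits are the most important of these optimizations. Here is a sketch of the
idea, with illustrative names; a full collection bumps the heap's version, while eden
collections leave it alone:

    #include <bitset>
    #include <cstdint>

    // Sketch: a block remembers the heap mark version at which its bits
    // were last valid. "Clearing all mark bits" is a single increment of
    // the heap version; each block lazily flips (clears) itself the next
    // time it is touched.
    struct BlockSketch {
        uint64_t markVersion { 0 };
        std::bitset<1024> marks; // one bit per 16-byte atom in a 16KB block

        bool isMarked(uint64_t heapMarkVersion, size_t atomNumber) const
        {
            if (markVersion != heapMarkVersion)
                return false; // Stale bits read as cleared.
            return marks[atomNumber];
        }

        void flipIfNecessary(uint64_t heapMarkVersion)
        {
            if (markVersion == heapMarkVersion)
                return;
            marks.reset();
            markVersion = heapMarkVersion;
        }
    };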
- Even after all of that, copying butterflies was still faster because it allowed us to skip
sweeping dead space. A good GC allocates over dead bytes without explicitly freeing them,
so the GC pause is O(size of live), not O(size of live + dead). O(dead) is usually much
larger than O(live), especially in an eden collection. Copying satisfies this premise while
mark+sweep does not. So, I invented a new kind of allocator: bump'n'pop. Previously, our
MarkedSpace allocator was a freelist pop. That's simple and easy to inline but requires
that we walk the block to build a free list. This means walking dead space. The new
allocator allows totally free MarkedBlocks to simply set up a bump-pointer arena instead.
The allocator is a hybrid of bump-pointer and freelist pop. It tries bump first. The bump
pointer always bumps by cellSize, so the result of filling a block with bumping looks as if
we had used freelist popping to fill it. Additionally, each MarkedBlock now has a bit to
quickly tell if the block is entirely free. This makes sweeping O(1) whenever a MarkedBlock
is completely empty, which is the common case because of the generational hypothesis: the
number of objects that survive an eden collection is a tiny fraction of the number of
objects that had been allocated, and this fraction is so small that there are typically
fewer than one survivor per MarkedBlock. This optimization was enough to make this patch a net
win over tip-of-tree.
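
A sketch of the bump'n'pop fast path; the FreeList shape mirrors the new heap/FreeList.h, but
the code below is illustrative:

    // Sketch: an entirely-free block skips freelist construction and just
    // sets up a bump arena (payloadEnd, remaining = payloadSize), while a
    // partly-full block is swept into a freelist as before.
    struct FreeCell { FreeCell* next; };

    struct FreeList {
        FreeCell* head { nullptr };   // freelist-pop path
        char* payloadEnd { nullptr }; // bump path
        unsigned remaining { 0 };     // bytes left to bump; a multiple of cellSize
    };

    void* allocateFast(FreeList& freeList, unsigned cellSize)
    {
        if (unsigned remaining = freeList.remaining) {
            // Bumping by cellSize means a fully-bumped block looks exactly
            // as if it had been filled by freelist popping.
            remaining -= cellSize;
            freeList.remaining = remaining;
            return freeList.payloadEnd - remaining - cellSize;
        }
        FreeCell* cell = freeList.head;
        if (!cell)
            return nullptr; // Slow path: sweep another block or collect.
        freeList.head = cell->next;
        return cell;
    }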
- FTL now shares the same allocation fast paths as everything else, which is great, because
bump'n'pop has gnarly control flow. We don't really want B3 to have to think about that
control flow, since it won't be able to improve the machine code we write ourselves. GC
fast paths are best written in assembly. So, I've empowered B3 to have even better support
for Patchpoint terminals. It's now totally fine for a Patchpoint terminal to be non-Void.
So, the new FTL allocation fast paths are just Patchpoint terminals that call through to
AssemblyHelpers::emitAllocate(). B3 still reasons about things like constant-folding the
size class calculation and constant-hoisting the allocator. Also, I gave the FTL the
ability to constant-fold some allocator logic (in case we first assume that we're doing a
variable-length allocation but then realize that the length is known). I think it makes
sense to have constant folding rules in FTL::Output, or whatever the B3 IR builder is,
since this makes lowering easier (you can constant fold during lowering more easily) and it
reduces the amount of malloc traffic. In the future, we could teach B3 how to better
constant-fold this code. That would require allowing loads to be constant-folded, which is
doable but hella tricky.
- It used to be that if a logical object allocation required two physical allocations (first
the butterfly and then the cell), then the JIT would emit the code in such a way that a
failure in the second fast path would cause us to forget the successful first physical
allocation. This was pointlessly wasteful. It turns out that it's very cheap to devote a
register to storing either the butterfly or null, because the butterfly register is anyway
going to be free inside the first allocation. The only overhead here is zeroing the
butterfly register. With that in place, we can just pass the butterfly-or-null to the slow
path, which can then either allocate a butterfly or not. So now we never waste a successful
allocation. This patch implements such a solution both in DFG (where it's easy to do this
since we control registers already) and in FTL (where it's annoying, because mutable
"butterfly-or-null" variables are hard to say in SSA; also I realized that we had code
duplicated the JSArray allocation utility, so I deduplicated it). This came up because in
one version of this patch, this wastage would resonate with some Kraken benchmark: the
benchmark would always allocate N small things followed by one bigger thing. The problem
was I accidentally adjusted the various fixed overheads in MarkedBlock in such a way that
the JSObject size class, which both the small and big thing shared for their cell, could
hold exactly N cells per MarkedBlock. Then the benchmark would always call slow path when
it allocated the big thing. So, it would end up having to allocate the big thing's large
butterfly twice, every single time! Ouch!
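
The shape of the fix, in sketch form; every helper named here is hypothetical, and the real
work happens in the DFG and FTL allocation paths:

    // Sketch: the slow path takes butterfly-or-null, so failing the second
    // (cell) allocation never forfeits a successful first (butterfly) one.
    JSObject* allocateObjectWithButterfly(VM& vm, Structure* structure, size_t butterflyBytes)
    {
        Butterfly* butterfly = tryFastAllocateButterfly(vm, butterflyBytes); // null on failure
        JSObject* cell = tryFastAllocateCell(vm, structure);
        if (butterfly && cell) {
            cell->setButterfly(vm, butterfly);
            return cell;
        }
        // Only allocate the piece that is actually missing.
        return allocateObjectWithButterflySlow(vm, structure, butterfly, butterflyBytes);
    }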
- It used to be that we zeroed CopiedBlocks using memset, and so array allocations enjoyed
amortization of the cost of zeroing. This doesn't work anymore - it's now up to the client
of the allocator to initialize the object to whatever state they need. It used to be that
we would just use a dumb loop. I initially changed this so that we would end up in memset
for large allocations, but this didn't actually help performance that much. I got a much
better result by playing with different memsets written in assembly. First I wrote one
using non-temporal stores. That was a small speed-up over memset. Then I tried the classic
"rep stos" approach, and holy cow that version was fast. It's a ~20% speed-up on array
allocation microbenchmarks. So, this patch adds code paths to do "rep stos" on x86_64, or
memset, or use a loop, as appropriate, for both "contiguous" arrays (holes are zero) and
double arrays (holes are PNaN). Note that the JIT always emits either a loop or a flat slab
of stores (if the size is known), but those paths in the JIT won't trigger for
NewArrayWithSize() if the size is large, since that takes us to the
operationNewArrayWithSize() slow path, which calls into JSArray::create(). That's why the
optimizations here are all in JSArray::create() - that's the hot place for large arrays
that need to be filled with holes.
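
For x86_64, the fill is essentially the following (GCC/Clang inline assembly; the real code
also has the memset and plain-loop fallbacks and chooses between them by size and platform):

    #include <cstddef>
    #include <cstdint>

    // Sketch: store 'value' into 'count' consecutive 64-bit slots using
    // "rep stosq". Recent x86_64 chips service this with the fast-string
    // microcode path, which is where the ~20% win on array allocation
    // microbenchmarks came from. 'value' is 0 for contiguous arrays and
    // the PNaN bit pattern for double arrays.
    static void repeatFillWords(uint64_t* dst, uint64_t value, size_t count)
    {
        // The SysV ABI guarantees the direction flag is clear on entry.
        asm volatile(
            "rep stosq"
            : "+D"(dst), "+c"(count)
            : "a"(value)
            : "memory");
    }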
All of this put together gives us neutral perf on JetStream, membuster, and PLT3, a ~1%
regression on Speedometer, and up to a 4% regression on Kraken. The Kraken regression is
because Kraken was allocating exactly 1024 element arrays at a rate of 400MB/sec. This is a
best-case scenario for bump allocation. I think that we should fix bmalloc to make up the
difference, but take the hit for now because it's a crazy corner case. By comparison, the
alternative approach of using a copy barrier would have cost us 1-2%. That's the real
apples-to-apples comparison if your premise is that we should have a concurrent GC. After we
finish removing copied space, we will be barrier-ready for concurrent GC: we already have a
marking barrier and we simply won't need a copying barrier. This change gets us there for
the purposes of our benchmarks, since the remaining clients of copied space are not very
important. On the other hand, if we keep copying, then getting barrier-ready would mean
adding back the copy barrier, which costs more perf.
We might get bigger speed-ups once we remove CopiedSpace altogether. That requires moving
typed arrays and a few other weird things over to Aux MarkedSpace.
This also includes some header sanitization. The introduction of AuxiliaryBarrier, HeapCell,
and CellContainer meant that I had to include those files from everywhere. Fortunately,
just including JSCInlines.h (instead of manually including the files that it includes) is
usually enough. So, I made most of JSC's cpp files include JSCInlines.h, which is something
that we were already basically doing. In places where JSCInlines.h would be too much, I just
included HeapInlines.h. This got weird, because we previously included HeapInlines.h from
JSObject.h. That's bad because it led to some circular dependencies, so I fixed it - but that
meant having to manually include HeapInlines.h from the places that previously got it
implicitly via JSObject.h. But that led to more problems for some reason: I started getting
build errors because non-JSC files were having trouble including Opcode.h. That's just silly,
since Opcode.h is meant to be an internal JSC header. So, I made it an internal header and
made it impossible to include it from outside JSC. This was a lot of work, but it was
necessary to get the patch to build on all ports. It's also a net win. There were many places
in WebCore that were transitively including a *ton* of JSC headers just because of the
JSObject.h->HeapInlines.h edge and a bunch of dependency edges that arose from some public
(for WebCore) JSC headers needing Interpreter.h or Opcode.h for bad reasons.
* API/JSManagedValue.mm:
(-[JSManagedValue initWithValue:]):
* API/JSTypedArray.cpp:
* API/ObjCCallbackFunction.mm:
* API/tests/testapi.mm:
(testObjectiveCAPI):
(testWeakValue): Deleted.
* CMakeLists.txt:
* JavaScriptCore.xcodeproj/project.pbxproj:
* Scripts/builtins/builtins_generate_combined_implementation.py:
(BuiltinsCombinedImplementationGenerator.generate_secondary_header_includes):
* Scripts/builtins/builtins_generate_internals_wrapper_implementation.py:
(BuiltinsInternalsWrapperImplementationGenerator.generate_secondary_header_includes):
* Scripts/builtins/builtins_generate_separate_implementation.py:
(BuiltinsSeparateImplementationGenerator.generate_secondary_header_includes):
* assembler/AbstractMacroAssembler.h:
(JSC::AbstractMacroAssembler::JumpList::link):
(JSC::AbstractMacroAssembler::JumpList::linkTo):
* assembler/MacroAssembler.h:
* assembler/MacroAssemblerARM64.h:
(JSC::MacroAssemblerARM64::add32):
* assembler/MacroAssemblerCodeRef.cpp: Added.
(JSC::MacroAssemblerCodePtr::createLLIntCodePtr):
(JSC::MacroAssemblerCodePtr::dumpWithName):
(JSC::MacroAssemblerCodePtr::dump):
(JSC::MacroAssemblerCodeRef::createLLIntCodeRef):
(JSC::MacroAssemblerCodeRef::dump):
* assembler/MacroAssemblerCodeRef.h:
(JSC::MacroAssemblerCodePtr::createLLIntCodePtr): Deleted.
(JSC::MacroAssemblerCodePtr::dumpWithName): Deleted.
(JSC::MacroAssemblerCodePtr::dump): Deleted.
(JSC::MacroAssemblerCodeRef::createLLIntCodeRef): Deleted.
(JSC::MacroAssemblerCodeRef::dump): Deleted.
* b3/B3BasicBlock.cpp:
(JSC::B3::BasicBlock::appendBoolConstant):
* b3/B3BasicBlock.h:
* b3/B3DuplicateTails.cpp:
* b3/B3StackmapGenerationParams.h:
* b3/testb3.cpp:
(JSC::B3::testPatchpointTerminalReturnValue):
(JSC::B3::run):
* bindings/ScriptValue.cpp:
* bytecode/AdaptiveInferredPropertyValueWatchpointBase.cpp:
* bytecode/BytecodeBasicBlock.cpp:
* bytecode/BytecodeLivenessAnalysis.cpp:
* bytecode/BytecodeUseDef.h:
* bytecode/CallLinkInfo.cpp:
(JSC::CallLinkInfo::callTypeFor):
* bytecode/CallLinkInfo.h:
(JSC::CallLinkInfo::callTypeFor): Deleted.
* bytecode/CallLinkStatus.cpp:
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::finishCreation):
(JSC::CodeBlock::clearLLIntGetByIdCache):
(JSC::CodeBlock::predictedMachineCodeSize):
* bytecode/CodeBlock.h:
(JSC::CodeBlock::jitCodeMap): Deleted.
(JSC::clearLLIntGetByIdCache): Deleted.
* bytecode/ExecutionCounter.h:
* bytecode/Instruction.h:
* bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp:
(JSC::LLIntPrototypeLoadAdaptiveStructureWatchpoint::fireInternal):
* bytecode/ObjectAllocationProfile.h:
(JSC::ObjectAllocationProfile::isNull):
(JSC::ObjectAllocationProfile::initialize):
* bytecode/Opcode.h:
(JSC::padOpcodeName):
* bytecode/PolymorphicAccess.cpp:
(JSC::AccessCase::generateImpl):
(JSC::PolymorphicAccess::regenerate):
* bytecode/PolymorphicAccess.h:
* bytecode/PreciseJumpTargets.cpp:
* bytecode/StructureStubInfo.cpp:
* bytecode/StructureStubInfo.h:
* bytecode/UnlinkedCodeBlock.cpp:
(JSC::UnlinkedCodeBlock::vm): Deleted.
* bytecode/UnlinkedCodeBlock.h:
* bytecode/UnlinkedInstructionStream.cpp:
* bytecode/UnlinkedInstructionStream.h:
* dfg/DFGOperations.cpp:
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::emitAllocateRawObject):
(JSC::DFG::SpeculativeJIT::compileMakeRope):
(JSC::DFG::SpeculativeJIT::compileAllocatePropertyStorage):
(JSC::DFG::SpeculativeJIT::compileReallocatePropertyStorage):
* dfg/DFGSpeculativeJIT.h:
(JSC::DFG::SpeculativeJIT::emitAllocateJSCell):
(JSC::DFG::SpeculativeJIT::emitAllocateJSObject):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
(JSC::DFG::SpeculativeJIT::compileAllocateNewArrayWithSize):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
(JSC::DFG::SpeculativeJIT::compileAllocateNewArrayWithSize):
* dfg/DFGStrengthReductionPhase.cpp:
(JSC::DFG::StrengthReductionPhase::handleNode):
* ftl/FTLAbstractHeapRepository.h:
* ftl/FTLCompile.cpp:
* ftl/FTLJITFinalizer.cpp:
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::compileCreateDirectArguments):
(JSC::FTL::DFG::LowerDFGToB3::compileCreateRest):
(JSC::FTL::DFG::LowerDFGToB3::allocateArrayWithSize):
(JSC::FTL::DFG::LowerDFGToB3::compileNewArrayWithSize):
(JSC::FTL::DFG::LowerDFGToB3::compileMakeRope):
(JSC::FTL::DFG::LowerDFGToB3::compileMaterializeNewObject):
(JSC::FTL::DFG::LowerDFGToB3::initializeArrayElements):
(JSC::FTL::DFG::LowerDFGToB3::allocatePropertyStorageWithSizeImpl):
(JSC::FTL::DFG::LowerDFGToB3::allocateHeapCell):
(JSC::FTL::DFG::LowerDFGToB3::allocateCell):
(JSC::FTL::DFG::LowerDFGToB3::allocateObject):
(JSC::FTL::DFG::LowerDFGToB3::allocatorForSize):
(JSC::FTL::DFG::LowerDFGToB3::allocateVariableSizedObject):
(JSC::FTL::DFG::LowerDFGToB3::allocateJSArray):
(JSC::FTL::DFG::LowerDFGToB3::compileAllocateArrayWithSize): Deleted.
* ftl/FTLOutput.cpp:
(JSC::FTL::Output::constBool):
(JSC::FTL::Output::add):
(JSC::FTL::Output::shl):
(JSC::FTL::Output::aShr):
(JSC::FTL::Output::lShr):
(JSC::FTL::Output::zeroExt):
(JSC::FTL::Output::equal):
(JSC::FTL::Output::notEqual):
(JSC::FTL::Output::above):
(JSC::FTL::Output::aboveOrEqual):
(JSC::FTL::Output::below):
(JSC::FTL::Output::belowOrEqual):
(JSC::FTL::Output::greaterThan):
(JSC::FTL::Output::greaterThanOrEqual):
(JSC::FTL::Output::lessThan):
(JSC::FTL::Output::lessThanOrEqual):
(JSC::FTL::Output::select):
(JSC::FTL::Output::appendSuccessor):
(JSC::FTL::Output::addIncomingToPhi):
* ftl/FTLOutput.h:
* ftl/FTLValueFromBlock.h:
(JSC::FTL::ValueFromBlock::operator bool):
(JSC::FTL::ValueFromBlock::ValueFromBlock): Deleted.
* ftl/FTLWeightedTarget.h:
(JSC::FTL::WeightedTarget::frequentedBlock):
* heap/CellContainer.h: Added.
(JSC::CellContainer::CellContainer):
(JSC::CellContainer::operator bool):
(JSC::CellContainer::isMarkedBlock):
(JSC::CellContainer::isLargeAllocation):
(JSC::CellContainer::markedBlock):
(JSC::CellContainer::largeAllocation):
* heap/CellContainerInlines.h: Added.
(JSC::CellContainer::isMarked):
(JSC::CellContainer::isMarkedOrNewlyAllocated):
(JSC::CellContainer::noteMarked):
(JSC::CellContainer::cellSize):
(JSC::CellContainer::weakSet):
(JSC::CellContainer::flipIfNecessary):
* heap/ConservativeRoots.cpp:
(JSC::ConservativeRoots::ConservativeRoots):
(JSC::ConservativeRoots::~ConservativeRoots):
(JSC::ConservativeRoots::grow):
(JSC::ConservativeRoots::genericAddPointer):
(JSC::ConservativeRoots::genericAddSpan):
* heap/ConservativeRoots.h:
(JSC::ConservativeRoots::roots):
* heap/CopyToken.h:
* heap/FreeList.cpp: Added.
(JSC::FreeList::dump):
* heap/FreeList.h: Added.
(JSC::FreeList::FreeList):
(JSC::FreeList::list):
(JSC::FreeList::bump):
(JSC::FreeList::operator==):
(JSC::FreeList::operator!=):
(JSC::FreeList::operator bool):
(JSC::FreeList::allocationWillFail):
(JSC::FreeList::allocationWillSucceed):
* heap/GCTypeMap.h: Added.
(JSC::GCTypeMap::operator[]):
* heap/Heap.cpp:
(JSC::Heap::Heap):
(JSC::Heap::lastChanceToFinalize):
(JSC::Heap::finalizeUnconditionalFinalizers):
(JSC::Heap::markRoots):
(JSC::Heap::copyBackingStores):
(JSC::Heap::gatherStackRoots):
(JSC::Heap::gatherJSStackRoots):
(JSC::Heap::gatherScratchBufferRoots):
(JSC::Heap::clearLivenessData):
(JSC::Heap::visitSmallStrings):
(JSC::Heap::visitConservativeRoots):
(JSC::Heap::removeDeadCompilerWorklistEntries):
(JSC::Heap::gatherExtraHeapSnapshotData):
(JSC::Heap::removeDeadHeapSnapshotNodes):
(JSC::Heap::visitProtectedObjects):
(JSC::Heap::visitArgumentBuffers):
(JSC::Heap::visitException):
(JSC::Heap::visitStrongHandles):
(JSC::Heap::visitHandleStack):
(JSC::Heap::visitSamplingProfiler):
(JSC::Heap::traceCodeBlocksAndJITStubRoutines):
(JSC::Heap::converge):
(JSC::Heap::visitWeakHandles):
(JSC::Heap::updateObjectCounts):
(JSC::Heap::clearUnmarkedExecutables):
(JSC::Heap::deleteUnmarkedCompiledCode):
(JSC::Heap::collectAllGarbage):
(JSC::Heap::collect):
(JSC::Heap::collectWithoutAnySweep):
(JSC::Heap::collectImpl):
(JSC::Heap::suspendCompilerThreads):
(JSC::Heap::willStartCollection):
(JSC::Heap::flushOldStructureIDTables):
(JSC::Heap::flushWriteBarrierBuffer):
(JSC::Heap::stopAllocation):
(JSC::Heap::prepareForMarking):
(JSC::Heap::reapWeakHandles):
(JSC::Heap::pruneStaleEntriesFromWeakGCMaps):
(JSC::Heap::sweepArrayBuffers):
(JSC::MarkedBlockSnapshotFunctor::MarkedBlockSnapshotFunctor):
(JSC::MarkedBlockSnapshotFunctor::operator()):
(JSC::Heap::snapshotMarkedSpace):
(JSC::Heap::deleteSourceProviderCaches):
(JSC::Heap::notifyIncrementalSweeper):
(JSC::Heap::writeBarrierCurrentlyExecutingCodeBlocks):
(JSC::Heap::resetAllocators):
(JSC::Heap::updateAllocationLimits):
(JSC::Heap::didFinishCollection):
(JSC::Heap::resumeCompilerThreads):
(JSC::Zombify::visit):
(JSC::Heap::forEachCodeBlockImpl):
* heap/Heap.h:
(JSC::Heap::allocatorForObjectWithoutDestructor):
(JSC::Heap::allocatorForObjectWithDestructor):
(JSC::Heap::allocatorForAuxiliaryData):
(JSC::Heap::jitStubRoutines):
(JSC::Heap::codeBlockSet):
(JSC::Heap::storageAllocator): Deleted.
* heap/HeapCell.h:
(JSC::HeapCell::isZapped): Deleted.
* heap/HeapCellInlines.h: Added.
(JSC::HeapCell::isLargeAllocation):
(JSC::HeapCell::cellContainer):
(JSC::HeapCell::markedBlock):
(JSC::HeapCell::largeAllocation):
(JSC::HeapCell::heap):
(JSC::HeapCell::vm):
(JSC::HeapCell::cellSize):
(JSC::HeapCell::allocatorAttributes):
(JSC::HeapCell::destructionMode):
(JSC::HeapCell::cellKind):
* heap/HeapInlines.h:
(JSC::Heap::heap):
(JSC::Heap::isLive):
(JSC::Heap::isMarked):
(JSC::Heap::testAndSetMarked):
(JSC::Heap::setMarked):
(JSC::Heap::cellSize):
(JSC::Heap::forEachCodeBlock):
(JSC::Heap::allocateObjectOfType):
(JSC::Heap::subspaceForObjectOfType):
(JSC::Heap::allocatorForObjectOfType):
(JSC::Heap::allocateAuxiliary):
(JSC::Heap::tryAllocateAuxiliary):
(JSC::Heap::tryReallocateAuxiliary):
(JSC::Heap::isPointerGCObject): Deleted.
(JSC::Heap::isValueGCObject): Deleted.
* heap/HeapOperation.cpp: Added.
(WTF::printInternal):
* heap/HeapOperation.h:
* heap/HeapUtil.h: Added.
(JSC::HeapUtil::findGCObjectPointersForMarking):
(JSC::HeapUtil::isPointerGCObjectJSCell):
(JSC::HeapUtil::isValueGCObject):
* heap/IncrementalSweeper.cpp:
(JSC::IncrementalSweeper::sweepNextBlock):
* heap/IncrementalSweeper.h:
* heap/LargeAllocation.cpp: Added.
(JSC::LargeAllocation::tryCreate):
(JSC::LargeAllocation::LargeAllocation):
(JSC::LargeAllocation::lastChanceToFinalize):
(JSC::LargeAllocation::shrink):
(JSC::LargeAllocation::visitWeakSet):
(JSC::LargeAllocation::reapWeakSet):
(JSC::LargeAllocation::flip):
(JSC::LargeAllocation::isEmpty):
(JSC::LargeAllocation::sweep):
(JSC::LargeAllocation::destroy):
(JSC::LargeAllocation::dump):
* heap/LargeAllocation.h: Added.
(JSC::LargeAllocation::fromCell):
(JSC::LargeAllocation::cell):
(JSC::LargeAllocation::isLargeAllocation):
(JSC::LargeAllocation::heap):
(JSC::LargeAllocation::vm):
(JSC::LargeAllocation::weakSet):
(JSC::LargeAllocation::clearNewlyAllocated):
(JSC::LargeAllocation::isNewlyAllocated):
(JSC::LargeAllocation::isMarked):
(JSC::LargeAllocation::isMarkedOrNewlyAllocated):
(JSC::LargeAllocation::isLive):
(JSC::LargeAllocation::hasValidCell):
(JSC::LargeAllocation::cellSize):
(JSC::LargeAllocation::aboveLowerBound):
(JSC::LargeAllocation::belowUpperBound):
(JSC::LargeAllocation::contains):
(JSC::LargeAllocation::attributes):
(JSC::LargeAllocation::flipIfNecessary):
(JSC::LargeAllocation::flipIfNecessaryConcurrently):
(JSC::LargeAllocation::testAndSetMarked):
(JSC::LargeAllocation::setMarked):
(JSC::LargeAllocation::clearMarked):
(JSC::LargeAllocation::noteMarked):
(JSC::LargeAllocation::headerSize):
* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::MarkedAllocator):
(JSC::MarkedAllocator::isPagedOut):
(JSC::MarkedAllocator::retire):
(JSC::MarkedAllocator::filterNextBlock):
(JSC::MarkedAllocator::setNextBlockToSweep):
(JSC::MarkedAllocator::tryAllocateWithoutCollectingImpl):
(JSC::MarkedAllocator::tryAllocateWithoutCollecting):
(JSC::MarkedAllocator::allocateSlowCase):
(JSC::MarkedAllocator::tryAllocateSlowCase):
(JSC::MarkedAllocator::allocateSlowCaseImpl):
(JSC::blockHeaderSize):
(JSC::MarkedAllocator::blockSizeForBytes):
(JSC::MarkedAllocator::tryAllocateBlock):
(JSC::MarkedAllocator::addBlock):
(JSC::MarkedAllocator::removeBlock):
(JSC::MarkedAllocator::stopAllocating):
(JSC::MarkedAllocator::reset):
(JSC::MarkedAllocator::lastChanceToFinalize):
(JSC::MarkedAllocator::setFreeList):
(JSC::isListPagedOut): Deleted.
(JSC::MarkedAllocator::tryAllocateHelper): Deleted.
(JSC::MarkedAllocator::tryPopFreeList): Deleted.
(JSC::MarkedAllocator::tryAllocate): Deleted.
(JSC::MarkedAllocator::allocateBlock): Deleted.
* heap/MarkedAllocator.h:
(JSC::MarkedAllocator::takeLastActiveBlock):
(JSC::MarkedAllocator::offsetOfFreeList):
(JSC::MarkedAllocator::offsetOfCellSize):
(JSC::MarkedAllocator::tryAllocate):
(JSC::MarkedAllocator::allocate):
(JSC::MarkedAllocator::forEachBlock):
(JSC::MarkedAllocator::offsetOfFreeListHead): Deleted.
(JSC::MarkedAllocator::MarkedAllocator): Deleted.
(JSC::MarkedAllocator::init): Deleted.
(JSC::MarkedAllocator::stopAllocating): Deleted.
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::tryCreate):
(JSC::MarkedBlock::Handle::Handle):
(JSC::MarkedBlock::Handle::~Handle):
(JSC::MarkedBlock::MarkedBlock):
(JSC::MarkedBlock::Handle::specializedSweep):
(JSC::MarkedBlock::Handle::sweep):
(JSC::MarkedBlock::Handle::sweepHelperSelectScribbleMode):
(JSC::MarkedBlock::Handle::sweepHelperSelectStateAndSweepMode):
(JSC::MarkedBlock::Handle::unsweepWithNoNewlyAllocated):
(JSC::SetNewlyAllocatedFunctor::SetNewlyAllocatedFunctor):
(JSC::SetNewlyAllocatedFunctor::operator()):
(JSC::MarkedBlock::Handle::stopAllocating):
(JSC::MarkedBlock::Handle::lastChanceToFinalize):
(JSC::MarkedBlock::Handle::resumeAllocating):
(JSC::MarkedBlock::Handle::zap):
(JSC::MarkedBlock::Handle::forEachFreeCell):
(JSC::MarkedBlock::flipIfNecessary):
(JSC::MarkedBlock::Handle::flipIfNecessary):
(JSC::MarkedBlock::flipIfNecessarySlow):
(JSC::MarkedBlock::flipIfNecessaryConcurrentlySlow):
(JSC::MarkedBlock::clearMarks):
(JSC::MarkedBlock::assertFlipped):
(JSC::MarkedBlock::needsFlip):
(JSC::MarkedBlock::Handle::needsFlip):
(JSC::MarkedBlock::Handle::willRemoveBlock):
(JSC::MarkedBlock::Handle::didConsumeFreeList):
(JSC::MarkedBlock::markCount):
(JSC::MarkedBlock::Handle::isEmpty):
(JSC::MarkedBlock::clearHasAnyMarked):
(JSC::MarkedBlock::noteMarkedSlow):
(WTF::printInternal):
(JSC::MarkedBlock::create): Deleted.
(JSC::MarkedBlock::destroy): Deleted.
(JSC::MarkedBlock::callDestructor): Deleted.
(JSC::MarkedBlock::specializedSweep): Deleted.
(JSC::MarkedBlock::sweep): Deleted.
(JSC::MarkedBlock::sweepHelper): Deleted.
(JSC::MarkedBlock::stopAllocating): Deleted.
(JSC::MarkedBlock::clearMarksWithCollectionType): Deleted.
(JSC::MarkedBlock::lastChanceToFinalize): Deleted.
(JSC::MarkedBlock::resumeAllocating): Deleted.
(JSC::MarkedBlock::didRetireBlock): Deleted.
* heap/MarkedBlock.h:
(JSC::MarkedBlock::VoidFunctor::returnValue):
(JSC::MarkedBlock::CountFunctor::CountFunctor):
(JSC::MarkedBlock::CountFunctor::count):
(JSC::MarkedBlock::CountFunctor::returnValue):
(JSC::MarkedBlock::Handle::hasAnyNewlyAllocated):
(JSC::MarkedBlock::Handle::isOnBlocksToSweep):
(JSC::MarkedBlock::Handle::setIsOnBlocksToSweep):
(JSC::MarkedBlock::Handle::state):
(JSC::MarkedBlock::needsDestruction):
(JSC::MarkedBlock::handle):
(JSC::MarkedBlock::Handle::block):
(JSC::MarkedBlock::firstAtom):
(JSC::MarkedBlock::atoms):
(JSC::MarkedBlock::isAtomAligned):
(JSC::MarkedBlock::Handle::cellAlign):
(JSC::MarkedBlock::blockFor):
(JSC::MarkedBlock::Handle::allocator):
(JSC::MarkedBlock::Handle::heap):
(JSC::MarkedBlock::Handle::vm):
(JSC::MarkedBlock::vm):
(JSC::MarkedBlock::Handle::weakSet):
(JSC::MarkedBlock::weakSet):
(JSC::MarkedBlock::Handle::shrink):
(JSC::MarkedBlock::Handle::visitWeakSet):
(JSC::MarkedBlock::Handle::reapWeakSet):
(JSC::MarkedBlock::Handle::cellSize):
(JSC::MarkedBlock::cellSize):
(JSC::MarkedBlock::Handle::attributes):
(JSC::MarkedBlock::attributes):
(JSC::MarkedBlock::Handle::needsDestruction):
(JSC::MarkedBlock::Handle::destruction):
(JSC::MarkedBlock::Handle::cellKind):
(JSC::MarkedBlock::Handle::markCount):
(JSC::MarkedBlock::Handle::size):
(JSC::MarkedBlock::atomNumber):
(JSC::MarkedBlock::flipIfNecessary):
(JSC::MarkedBlock::flipIfNecessaryConcurrently):
(JSC::MarkedBlock::Handle::flipIfNecessary):
(JSC::MarkedBlock::Handle::flipIfNecessaryConcurrently):
(JSC::MarkedBlock::Handle::flipForEdenCollection):
(JSC::MarkedBlock::assertFlipped):
(JSC::MarkedBlock::Handle::assertFlipped):
(JSC::MarkedBlock::isMarked):
(JSC::MarkedBlock::testAndSetMarked):
(JSC::MarkedBlock::Handle::isNewlyAllocated):
(JSC::MarkedBlock::Handle::setNewlyAllocated):
(JSC::MarkedBlock::Handle::clearNewlyAllocated):
(JSC::MarkedBlock::Handle::isMarkedOrNewlyAllocated):
(JSC::MarkedBlock::isMarkedOrNewlyAllocated):
(JSC::MarkedBlock::Handle::isLive):
(JSC::MarkedBlock::isAtom):
(JSC::MarkedBlock::Handle::isLiveCell):
(JSC::MarkedBlock::Handle::forEachCell):
(JSC::MarkedBlock::Handle::forEachLiveCell):
(JSC::MarkedBlock::Handle::forEachDeadCell):
(JSC::MarkedBlock::Handle::needsSweeping):
(JSC::MarkedBlock::Handle::isAllocated):
(JSC::MarkedBlock::Handle::isMarked):
(JSC::MarkedBlock::Handle::isFreeListed):
(JSC::MarkedBlock::hasAnyMarked):
(JSC::MarkedBlock::noteMarked):
(WTF::MarkedBlockHash::hash):
(JSC::MarkedBlock::FreeList::FreeList): Deleted.
(JSC::MarkedBlock::allocator): Deleted.
(JSC::MarkedBlock::heap): Deleted.
(JSC::MarkedBlock::shrink): Deleted.
(JSC::MarkedBlock::visitWeakSet): Deleted.
(JSC::MarkedBlock::reapWeakSet): Deleted.
(JSC::MarkedBlock::willRemoveBlock): Deleted.
(JSC::MarkedBlock::didConsumeFreeList): Deleted.
(JSC::MarkedBlock::markCount): Deleted.
(JSC::MarkedBlock::isEmpty): Deleted.
(JSC::MarkedBlock::destruction): Deleted.
(JSC::MarkedBlock::cellKind): Deleted.
(JSC::MarkedBlock::size): Deleted.
(JSC::MarkedBlock::capacity): Deleted.
(JSC::MarkedBlock::setMarked): Deleted.
(JSC::MarkedBlock::clearMarked): Deleted.
(JSC::MarkedBlock::isNewlyAllocated): Deleted.
(JSC::MarkedBlock::setNewlyAllocated): Deleted.
(JSC::MarkedBlock::clearNewlyAllocated): Deleted.
(JSC::MarkedBlock::isLive): Deleted.
(JSC::MarkedBlock::isLiveCell): Deleted.
(JSC::MarkedBlock::forEachCell): Deleted.
(JSC::MarkedBlock::forEachLiveCell): Deleted.
(JSC::MarkedBlock::forEachDeadCell): Deleted.
(JSC::MarkedBlock::needsSweeping): Deleted.
(JSC::MarkedBlock::isAllocated): Deleted.
(JSC::MarkedBlock::isMarkedOrRetired): Deleted.
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::initializeSizeClassForStepSize):
(JSC::MarkedSpace::MarkedSpace):
(JSC::MarkedSpace::~MarkedSpace):
(JSC::MarkedSpace::lastChanceToFinalize):
(JSC::MarkedSpace::allocate):
(JSC::MarkedSpace::tryAllocate):
(JSC::MarkedSpace::allocateLarge):
(JSC::MarkedSpace::tryAllocateLarge):
(JSC::MarkedSpace::sweep):
(JSC::MarkedSpace::sweepLargeAllocations):
(JSC::MarkedSpace::zombifySweep):
(JSC::MarkedSpace::resetAllocators):
(JSC::MarkedSpace::visitWeakSets):
(JSC::MarkedSpace::reapWeakSets):
(JSC::MarkedSpace::stopAllocating):
(JSC::MarkedSpace::prepareForMarking):
(JSC::MarkedSpace::resumeAllocating):
(JSC::MarkedSpace::isPagedOut):
(JSC::MarkedSpace::freeBlock):
(JSC::MarkedSpace::freeOrShrinkBlock):
(JSC::MarkedSpace::shrink):
(JSC::MarkedSpace::clearNewlyAllocated):
(JSC::VerifyMarked::operator()):
(JSC::MarkedSpace::flip):
(JSC::MarkedSpace::objectCount):
(JSC::MarkedSpace::size):
(JSC::MarkedSpace::capacity):
(JSC::MarkedSpace::addActiveWeakSet):
(JSC::MarkedSpace::didAddBlock):
(JSC::MarkedSpace::didAllocateInBlock):
(JSC::MarkedSpace::forEachAllocator): Deleted.
(JSC::VerifyMarkedOrRetired::operator()): Deleted.
(JSC::MarkedSpace::clearMarks): Deleted.
* heap/MarkedSpace.h:
(JSC::MarkedSpace::sizeClassToIndex):
(JSC::MarkedSpace::indexToSizeClass):
(JSC::MarkedSpace::version):
(JSC::MarkedSpace::blocksWithNewObjects):
(JSC::MarkedSpace::largeAllocations):
(JSC::MarkedSpace::largeAllocationsNurseryOffset):
(JSC::MarkedSpace::largeAllocationsOffsetForThisCollection):
(JSC::MarkedSpace::largeAllocationsForThisCollectionBegin):
(JSC::MarkedSpace::largeAllocationsForThisCollectionEnd):
(JSC::MarkedSpace::largeAllocationsForThisCollectionSize):
(JSC::MarkedSpace::forEachLiveCell):
(JSC::MarkedSpace::forEachDeadCell):
(JSC::MarkedSpace::allocatorFor):
(JSC::MarkedSpace::destructorAllocatorFor):
(JSC::MarkedSpace::auxiliaryAllocatorFor):
(JSC::MarkedSpace::allocateWithoutDestructor):
(JSC::MarkedSpace::allocateWithDestructor):
(JSC::MarkedSpace::allocateAuxiliary):
(JSC::MarkedSpace::tryAllocateAuxiliary):
(JSC::MarkedSpace::forEachBlock):
(JSC::MarkedSpace::forEachAllocator):
(JSC::MarkedSpace::optimalSizeFor):
(JSC::MarkedSpace::didAddBlock): Deleted.
(JSC::MarkedSpace::didAllocateInBlock): Deleted.
(JSC::MarkedSpace::objectCount): Deleted.
(JSC::MarkedSpace::size): Deleted.
(JSC::MarkedSpace::capacity): Deleted.
* heap/SlotVisitor.cpp:
(JSC::SlotVisitor::SlotVisitor):
(JSC::SlotVisitor::didStartMarking):
(JSC::SlotVisitor::reset):
(JSC::SlotVisitor::append):
(JSC::SlotVisitor::appendJSCellOrAuxiliary):
(JSC::SlotVisitor::setMarkedAndAppendToMarkStack):
(JSC::SlotVisitor::appendToMarkStack):
(JSC::SlotVisitor::markAuxiliary):
(JSC::SlotVisitor::noteLiveAuxiliaryCell):
(JSC::SlotVisitor::visitChildren):
* heap/SlotVisitor.h:
* heap/WeakBlock.cpp:
(JSC::WeakBlock::create):
(JSC::WeakBlock::WeakBlock):
(JSC::WeakBlock::visit):
(JSC::WeakBlock::reap):
* heap/WeakBlock.h:
(JSC::WeakBlock::disconnectContainer):
(JSC::WeakBlock::disconnectMarkedBlock): Deleted.
* heap/WeakSet.cpp:
(JSC::WeakSet::~WeakSet):
(JSC::WeakSet::sweep):
(JSC::WeakSet::shrink):
(JSC::WeakSet::addAllocator):
* heap/WeakSet.h:
(JSC::WeakSet::container):
(JSC::WeakSet::setContainer):
(JSC::WeakSet::WeakSet):
(JSC::WeakSet::visit):
(JSC::WeakSet::shrink): Deleted.
* heap/WeakSetInlines.h:
(JSC::WeakSet::allocate):
* inspector/InjectedScriptManager.cpp:
* inspector/JSGlobalObjectInspectorController.cpp:
* inspector/JSJavaScriptCallFrame.cpp:
* inspector/ScriptDebugServer.cpp:
* inspector/agents/InspectorDebuggerAgent.cpp:
* interpreter/CachedCall.h:
(JSC::CachedCall::CachedCall):
* interpreter/Interpreter.cpp:
(JSC::loadVarargs):
(JSC::StackFrame::sourceID): Deleted.
(JSC::StackFrame::sourceURL): Deleted.
(JSC::StackFrame::functionName): Deleted.
(JSC::StackFrame::computeLineAndColumn): Deleted.
(JSC::StackFrame::toString): Deleted.
* interpreter/Interpreter.h:
(JSC::StackFrame::isNative): Deleted.
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::emitAllocateWithNonNullAllocator):
(JSC::AssemblyHelpers::emitAllocate):
(JSC::AssemblyHelpers::emitAllocateJSCell):
(JSC::AssemblyHelpers::emitAllocateJSObject):
(JSC::AssemblyHelpers::emitAllocateJSObjectWithKnownSize):
(JSC::AssemblyHelpers::emitAllocateVariableSized):
* jit/GCAwareJITStubRoutine.cpp:
(JSC::GCAwareJITStubRoutine::GCAwareJITStubRoutine):
* jit/JIT.cpp:
(JSC::JIT::compileCTINativeCall):
(JSC::JIT::link):
* jit/JIT.h:
(JSC::JIT::compileCTINativeCall): Deleted.
* jit/JITExceptions.cpp:
(JSC::genericUnwind):
* jit/JITExceptions.h:
* jit/JITOpcodes.cpp:
(JSC::JIT::emit_op_new_object):
(JSC::JIT::emitSlow_op_new_object):
(JSC::JIT::emit_op_create_this):
(JSC::JIT::emitSlow_op_create_this):
* jit/JITOpcodes32_64.cpp:
(JSC::JIT::emit_op_new_object):
(JSC::JIT::emitSlow_op_new_object):
(JSC::JIT::emit_op_create_this):
(JSC::JIT::emitSlow_op_create_this):
* jit/JITOperations.cpp:
* jit/JITOperations.h:
* jit/JITPropertyAccess.cpp:
(JSC::JIT::emitWriteBarrier):
* jit/JITThunks.cpp:
* jit/JITThunks.h:
* jsc.cpp:
(functionDescribeArray):
(main):
* llint/LLIntData.cpp:
(JSC::LLInt::Data::performAssertions):
* llint/LLIntExceptions.cpp:
* llint/LLIntThunks.cpp:
* llint/LLIntThunks.h:
* llint/LowLevelInterpreter.asm:
* llint/LowLevelInterpreter.cpp:
* llint/LowLevelInterpreter32_64.asm:
* llint/LowLevelInterpreter64.asm:
* parser/ModuleAnalyzer.cpp:
* parser/NodeConstructors.h:
* parser/Nodes.h:
* profiler/ProfilerBytecode.cpp:
* profiler/ProfilerBytecode.h:
* profiler/ProfilerBytecodeSequence.cpp:
* runtime/ArrayConventions.h:
(JSC::indexingHeaderForArrayStorage):
(JSC::baseIndexingHeaderForArrayStorage):
(JSC::indexingHeaderForArray): Deleted.
(JSC::baseIndexingHeaderForArray): Deleted.
* runtime/ArrayPrototype.cpp:
(JSC::arrayProtoFuncSplice):
(JSC::concatAppendOne):
(JSC::arrayProtoPrivateFuncConcatMemcpy):
* runtime/ArrayStorage.h:
(JSC::ArrayStorage::vectorLength):
(JSC::ArrayStorage::totalSizeFor):
(JSC::ArrayStorage::totalSize):
(JSC::ArrayStorage::availableVectorLength):
(JSC::ArrayStorage::optimalVectorLength):
(JSC::ArrayStorage::sizeFor): Deleted.
* runtime/AuxiliaryBarrier.h: Added.
(JSC::AuxiliaryBarrier::AuxiliaryBarrier):
(JSC::AuxiliaryBarrier::clear):
(JSC::AuxiliaryBarrier::get):
(JSC::AuxiliaryBarrier::slot):
(JSC::AuxiliaryBarrier::operator bool):
(JSC::AuxiliaryBarrier::setWithoutBarrier):
* runtime/AuxiliaryBarrierInlines.h: Added.
(JSC::AuxiliaryBarrier<T>::AuxiliaryBarrier):
(JSC::AuxiliaryBarrier<T>::set):
* runtime/Butterfly.h:
* runtime/ButterflyInlines.h:
(JSC::Butterfly::availableContiguousVectorLength):
(JSC::Butterfly::optimalContiguousVectorLength):
(JSC::Butterfly::createUninitialized):
(JSC::Butterfly::growArrayRight):
* runtime/ClonedArguments.cpp:
(JSC::ClonedArguments::createEmpty):
* runtime/CommonSlowPathsExceptions.cpp:
* runtime/CommonSlowPathsExceptions.h:
* runtime/DataView.cpp:
* runtime/DirectArguments.h:
* runtime/ECMAScriptSpecInternalFunctions.cpp:
* runtime/Error.cpp:
* runtime/Error.h:
* runtime/ErrorInstance.cpp:
* runtime/ErrorInstance.h:
* runtime/Exception.cpp:
* runtime/Exception.h:
* runtime/GeneratorFrame.cpp:
* runtime/GeneratorPrototype.cpp:
* runtime/InternalFunction.cpp:
(JSC::InternalFunction::InternalFunction):
* runtime/IntlCollator.cpp:
* runtime/IntlCollatorConstructor.cpp:
* runtime/IntlCollatorPrototype.cpp:
* runtime/IntlDateTimeFormat.cpp:
* runtime/IntlDateTimeFormatConstructor.cpp:
* runtime/IntlDateTimeFormatPrototype.cpp:
* runtime/IntlNumberFormat.cpp:
* runtime/IntlNumberFormatConstructor.cpp:
* runtime/IntlNumberFormatPrototype.cpp:
* runtime/IntlObject.cpp:
* runtime/IteratorPrototype.cpp:
* runtime/JSArray.cpp:
(JSC::JSArray::tryCreateUninitialized):
(JSC::JSArray::setLengthWritable):
(JSC::JSArray::unshiftCountSlowCase):
(JSC::JSArray::setLengthWithArrayStorage):
(JSC::JSArray::appendMemcpy):
(JSC::JSArray::setLength):
(JSC::JSArray::pop):
(JSC::JSArray::push):
(JSC::JSArray::fastSlice):
(JSC::JSArray::shiftCountWithArrayStorage):
(JSC::JSArray::shiftCountWithAnyIndexingType):
(JSC::JSArray::unshiftCountWithArrayStorage):
(JSC::JSArray::fillArgList):
(JSC::JSArray::copyToArguments):
* runtime/JSArray.h:
(JSC::createContiguousArrayButterfly):
(JSC::createArrayButterfly):
(JSC::JSArray::create):
(JSC::JSArray::tryCreateUninitialized): Deleted.
* runtime/JSArrayBufferView.h:
* runtime/JSCInlines.h:
* runtime/JSCJSValue.cpp:
(JSC::JSValue::dumpInContextAssumingStructure):
* runtime/JSCallee.cpp:
(JSC::JSCallee::JSCallee):
* runtime/JSCell.cpp:
(JSC::JSCell::estimatedSize):
* runtime/JSCell.h:
(JSC::JSCell::cellStateOffset): Deleted.
* runtime/JSCellInlines.h:
(JSC::ExecState::vm):
(JSC::JSCell::classInfo):
(JSC::JSCell::callDestructor):
(JSC::JSCell::vm): Deleted.
* runtime/JSFunction.cpp:
(JSC::JSFunction::create):
(JSC::JSFunction::allocateAndInitializeRareData):
(JSC::JSFunction::initializeRareData):
(JSC::JSFunction::getOwnPropertySlot):
(JSC::JSFunction::put):
(JSC::JSFunction::deleteProperty):
(JSC::JSFunction::defineOwnProperty):
(JSC::JSFunction::setFunctionName):
(JSC::JSFunction::reifyLength):
(JSC::JSFunction::reifyName):
(JSC::JSFunction::reifyLazyPropertyIfNeeded):
(JSC::JSFunction::reifyBoundNameIfNeeded):
* runtime/JSFunction.h:
* runtime/JSFunctionInlines.h:
(JSC::JSFunction::createWithInvalidatedReallocationWatchpoint):
(JSC::JSFunction::JSFunction):
* runtime/JSGenericTypedArrayViewInlines.h:
(JSC::JSGenericTypedArrayView<Adaptor>::slowDownAndWasteMemory):
* runtime/JSInternalPromise.cpp:
* runtime/JSInternalPromiseConstructor.cpp:
* runtime/JSInternalPromiseDeferred.cpp:
* runtime/JSInternalPromisePrototype.cpp:
* runtime/JSJob.cpp:
* runtime/JSMapIterator.cpp:
* runtime/JSModuleNamespaceObject.cpp:
* runtime/JSModuleRecord.cpp:
* runtime/JSObject.cpp:
(JSC::JSObject::visitButterfly):
(JSC::JSObject::notifyPresenceOfIndexedAccessors):
(JSC::JSObject::createInitialIndexedStorage):
(JSC::JSObject::createInitialUndecided):
(JSC::JSObject::createInitialInt32):
(JSC::JSObject::createInitialDouble):
(JSC::JSObject::createInitialContiguous):
(JSC::JSObject::createArrayStorage):
(JSC::JSObject::createInitialArrayStorage):
(JSC::JSObject::convertUndecidedToInt32):
(JSC::JSObject::convertUndecidedToContiguous):
(JSC::JSObject::convertUndecidedToArrayStorage):
(JSC::JSObject::convertInt32ToDouble):
(JSC::JSObject::convertInt32ToArrayStorage):
(JSC::JSObject::convertDoubleToArrayStorage):
(JSC::JSObject::convertContiguousToArrayStorage):
(JSC::JSObject::putByIndexBeyondVectorLength):
(JSC::JSObject::putDirectIndexBeyondVectorLength):
(JSC::JSObject::getNewVectorLength):
(JSC::JSObject::increaseVectorLength):
(JSC::JSObject::ensureLengthSlow):
(JSC::JSObject::growOutOfLineStorage):
(JSC::JSObject::copyButterfly): Deleted.
(JSC::JSObject::copyBackingStore): Deleted.
* runtime/JSObject.h:
(JSC::JSObject::globalObject):
(JSC::JSObject::putDirectInternal):
(JSC::JSObject::setStructureAndReallocateStorageIfNecessary): Deleted.
* runtime/JSObjectInlines.h:
* runtime/JSPromise.cpp:
* runtime/JSPromiseConstructor.cpp:
* runtime/JSPromiseDeferred.cpp:
* runtime/JSPromisePrototype.cpp:
* runtime/JSPropertyNameIterator.cpp:
* runtime/JSScope.cpp:
(JSC::JSScope::resolve):
* runtime/JSScope.h:
(JSC::JSScope::globalObject):
(JSC::JSScope::vm): Deleted.
* runtime/JSSetIterator.cpp:
* runtime/JSStringIterator.cpp:
* runtime/JSTemplateRegistryKey.cpp:
* runtime/JSTypedArrayViewConstructor.cpp:
* runtime/JSTypedArrayViewPrototype.cpp:
* runtime/JSWeakMap.cpp:
* runtime/JSWeakSet.cpp:
* runtime/MapConstructor.cpp:
* runtime/MapIteratorPrototype.cpp:
* runtime/MapPrototype.cpp:
* runtime/NativeErrorConstructor.cpp:
* runtime/NativeStdFunctionCell.cpp:
* runtime/Operations.h:
(JSC::scribbleFreeCells):
(JSC::scribble):
* runtime/Options.h:
* runtime/PropertyTable.cpp:
* runtime/ProxyConstructor.cpp:
* runtime/ProxyObject.cpp:
* runtime/ProxyRevoke.cpp:
* runtime/RegExp.cpp:
(JSC::RegExp::match):
(JSC::RegExp::matchConcurrently):
(JSC::RegExp::matchCompareWithInterpreter):
* runtime/RegExp.h:
* runtime/RegExpConstructor.h:
* runtime/RegExpInlines.h:
(JSC::RegExp::matchInline):
* runtime/RegExpMatchesArray.h:
(JSC::tryCreateUninitializedRegExpMatchesArray):
(JSC::createRegExpMatchesArray):
* runtime/RegExpPrototype.cpp:
(JSC::genericSplit):
* runtime/RuntimeType.cpp:
* runtime/SamplingProfiler.cpp:
(JSC::SamplingProfiler::processUnverifiedStackTraces):
* runtime/SetConstructor.cpp:
* runtime/SetIteratorPrototype.cpp:
* runtime/SetPrototype.cpp:
* runtime/StackFrame.cpp: Added.
(JSC::StackFrame::sourceID):
(JSC::StackFrame::sourceURL):
(JSC::StackFrame::functionName):
(JSC::StackFrame::computeLineAndColumn):
(JSC::StackFrame::toString):
* runtime/StackFrame.h: Added.
(JSC::StackFrame::isNative):
* runtime/StringConstructor.cpp:
* runtime/StringIteratorPrototype.cpp:
* runtime/StructureInlines.h:
(JSC::Structure::propertyTable):
* runtime/TemplateRegistry.cpp:
* runtime/TestRunnerUtils.cpp:
(JSC::finalizeStatsAtEndOfTesting):
* runtime/TestRunnerUtils.h:
* runtime/TypeProfilerLog.cpp:
* runtime/TypeSet.cpp:
* runtime/VM.cpp:
(JSC::VM::VM):
(JSC::VM::ensureStackCapacityForCLoop):
(JSC::VM::isSafeToRecurseSoftCLoop):
* runtime/VM.h:
* runtime/VMEntryScope.h:
* runtime/VMInlines.h:
(JSC::VM::ensureStackCapacityFor):
(JSC::VM::isSafeToRecurseSoft):
* runtime/WeakMapConstructor.cpp:
* runtime/WeakMapData.cpp:
* runtime/WeakMapPrototype.cpp:
* runtime/WeakSetConstructor.cpp:
* runtime/WeakSetPrototype.cpp:
* testRegExp.cpp:
(testOneRegExp):
* tools/JSDollarVM.cpp:
* tools/JSDollarVMPrototype.cpp:
(JSC::JSDollarVMPrototype::isInObjectSpace):
Source/WebCore:
No new tests because no new WebCore behavior.
Just rewiring #includes.
* ForwardingHeaders/heap/HeapInlines.h: Added.
* ForwardingHeaders/interpreter/Interpreter.h: Removed.
* ForwardingHeaders/runtime/AuxiliaryBarrierInlines.h: Added.
* Modules/indexeddb/IDBCursorWithValue.cpp:
* Modules/indexeddb/client/TransactionOperation.cpp:
* Modules/indexeddb/server/SQLiteIDBBackingStore.cpp:
* Modules/indexeddb/server/UniqueIDBDatabase.cpp:
* bindings/js/JSApplePayPaymentAuthorizedEventCustom.cpp:
* bindings/js/JSApplePayPaymentMethodSelectedEventCustom.cpp:
* bindings/js/JSApplePayShippingContactSelectedEventCustom.cpp:
* bindings/js/JSApplePayShippingMethodSelectedEventCustom.cpp:
* bindings/js/JSClientRectCustom.cpp:
* bindings/js/JSDOMBinding.cpp:
* bindings/js/JSDOMBinding.h:
* bindings/js/JSDeviceMotionEventCustom.cpp:
* bindings/js/JSDeviceOrientationEventCustom.cpp:
* bindings/js/JSErrorEventCustom.cpp:
* bindings/js/JSIDBCursorWithValueCustom.cpp:
* bindings/js/JSIDBIndexCustom.cpp:
* bindings/js/JSPopStateEventCustom.cpp:
* bindings/js/JSWebGL2RenderingContextCustom.cpp:
* bindings/js/JSWorkerGlobalScopeCustom.cpp:
* bindings/js/WorkerScriptController.cpp:
* contentextensions/ContentExtensionParser.cpp:
* dom/ErrorEvent.cpp:
* html/HTMLCanvasElement.cpp:
* html/MediaDocument.cpp:
* inspector/CommandLineAPIModule.cpp:
* loader/EmptyClients.cpp:
* page/CaptionUserPreferences.cpp:
* page/Frame.cpp:
* page/PageGroup.cpp:
* page/UserContentController.cpp:
* platform/mock/mediasource/MockBox.cpp:
* testing/GCObservation.cpp:
Source/WebKit2:
Just rewiring some #includes.
* UIProcess/ViewGestureController.cpp:
* UIProcess/WebPageProxy.cpp:
* UIProcess/WebProcessPool.cpp:
* UIProcess/WebProcessProxy.cpp:
* WebProcess/InjectedBundle/DOM/InjectedBundleRangeHandle.cpp:
* WebProcess/Plugins/Netscape/JSNPObject.cpp:
Source/WTF:
I needed tryFastAlignedMalloc(), so I added it.
* wtf/FastMalloc.cpp:
(WTF::tryFastAlignedMalloc):
* wtf/FastMalloc.h:
* wtf/ParkingLot.cpp:
(WTF::ParkingLot::forEachImpl):
(WTF::ParkingLot::forEach): Deleted.
* wtf/ParkingLot.h:
(WTF::ParkingLot::parkConditionally):
(WTF::ParkingLot::unparkOne):
(WTF::ParkingLot::forEach):
* wtf/ScopedLambda.h:
(WTF::scopedLambdaRef):
* wtf/SentinelLinkedList.h:
(WTF::SentinelLinkedList::forEach):
(WTF::RawNode>::takeFrom):
* wtf/SimpleStats.h:
(WTF::SimpleStats::operator bool):
(WTF::SimpleStats::operator!): Deleted.
Tools:
* DumpRenderTree/TestRunner.cpp:
* DumpRenderTree/mac/DumpRenderTree.mm:
(DumpRenderTreeMain):
* Scripts/run-jsc-stress-tests:
* TestWebKitAPI/Tests/WTF/Vector.cpp:
(TestWebKitAPI::TEST):
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@205462 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/Source/JavaScriptCore/API/JSManagedValue.mm b/Source/JavaScriptCore/API/JSManagedValue.mm
index e788b5c..d62dfa0 100644
--- a/Source/JavaScriptCore/API/JSManagedValue.mm
+++ b/Source/JavaScriptCore/API/JSManagedValue.mm
@@ -213,6 +213,7 @@
m_owners = [[NSMapTable alloc] initWithKeyOptions:weakIDOptions valueOptions:integerOptions capacity:1];
JSC::JSValue jsValue = toJS(exec, [value JSValueRef]);
+ dataLog("Creating managed value with value ", jsValue, "\n");
if (jsValue.isObject())
m_weakValue.setObject(JSC::jsCast<JSC::JSObject*>(jsValue.asCell()), self);
else if (jsValue.isString())
diff --git a/Source/JavaScriptCore/API/JSTypedArray.cpp b/Source/JavaScriptCore/API/JSTypedArray.cpp
index 87dd8fa..d488cc8 100644
--- a/Source/JavaScriptCore/API/JSTypedArray.cpp
+++ b/Source/JavaScriptCore/API/JSTypedArray.cpp
@@ -32,7 +32,7 @@
#include "ClassInfo.h"
#include "Error.h"
#include "JSArrayBufferViewInlines.h"
-#include "JSCJSValueInlines.h"
+#include "JSCInlines.h"
#include "JSDataView.h"
#include "JSGenericTypedArrayViewInlines.h"
#include "JSTypedArrays.h"
diff --git a/Source/JavaScriptCore/API/ObjCCallbackFunction.mm b/Source/JavaScriptCore/API/ObjCCallbackFunction.mm
index 10b3097..ffdc0d40 100644
--- a/Source/JavaScriptCore/API/ObjCCallbackFunction.mm
+++ b/Source/JavaScriptCore/API/ObjCCallbackFunction.mm
@@ -31,9 +31,8 @@
#import "APICallbackFunction.h"
#import "APICast.h"
#import "Error.h"
-#import "JSCJSValueInlines.h"
#import "JSCell.h"
-#import "JSCellInlines.h"
+#import "JSCInlines.h"
#import "JSContextInternal.h"
#import "JSWrapperMap.h"
#import "JSValueInternal.h"
diff --git a/Source/JavaScriptCore/API/tests/testapi.mm b/Source/JavaScriptCore/API/tests/testapi.mm
index 6d6ed17..13d4362 100644
--- a/Source/JavaScriptCore/API/tests/testapi.mm
+++ b/Source/JavaScriptCore/API/tests/testapi.mm
@@ -510,33 +510,6 @@
return nullptr;
}
-// This test is flaky. Since GC marks C stack and registers as roots conservatively,
-// objects not referenced logically can be accidentally marked and alive.
-// To avoid this situation as possible as we can,
-// 1. run this test first before stack is polluted,
-// 2. extract this test as a function to suppress stack height.
-static void testWeakValue()
-{
- @autoreleasepool {
- JSVirtualMachine *vm = [[JSVirtualMachine alloc] init];
- TestObject *testObject = [TestObject testObject];
- JSManagedValue *weakValue;
- @autoreleasepool {
- JSContext *context = [[JSContext alloc] initWithVirtualMachine:vm];
- context[@"testObject"] = testObject;
- weakValue = [[JSManagedValue alloc] initWithValue:context[@"testObject"]];
- }
-
- @autoreleasepool {
- JSContext *context = [[JSContext alloc] initWithVirtualMachine:vm];
- context[@"testObject"] = testObject;
- JSSynchronousGarbageCollectForDebugging([context JSGlobalContextRef]);
- checkResult(@"weak value == nil", ![weakValue value]);
- checkResult(@"root is still alive", !context[@"testObject"].isUndefined);
- }
- }
-}
-
static void testObjectiveCAPIMain()
{
@autoreleasepool {
@@ -1513,7 +1486,6 @@
{
NSLog(@"Testing Objective-C API");
checkNegativeNSIntegers();
- testWeakValue();
testObjectiveCAPIMain();
}
diff --git a/Source/JavaScriptCore/CMakeLists.txt b/Source/JavaScriptCore/CMakeLists.txt
index 5dec317..503af3a 100644
--- a/Source/JavaScriptCore/CMakeLists.txt
+++ b/Source/JavaScriptCore/CMakeLists.txt
@@ -66,6 +66,7 @@
assembler/MacroAssembler.cpp
assembler/MacroAssemblerARM.cpp
assembler/MacroAssemblerARMv7.cpp
+ assembler/MacroAssemblerCodeRef.cpp
assembler/MacroAssemblerPrinter.cpp
assembler/MacroAssemblerX86Common.cpp
@@ -447,6 +448,7 @@
heap/DestructionMode.cpp
heap/EdenGCActivityCallback.cpp
heap/FullGCActivityCallback.cpp
+ heap/FreeList.cpp
heap/GCActivityCallback.cpp
heap/GCLogging.cpp
heap/HandleSet.cpp
@@ -454,6 +456,7 @@
heap/Heap.cpp
heap/HeapCell.cpp
heap/HeapHelperPool.cpp
+ heap/HeapOperation.cpp
heap/HeapProfiler.cpp
heap/HeapSnapshot.cpp
heap/HeapSnapshotBuilder.cpp
@@ -462,6 +465,7 @@
heap/HeapVerifier.cpp
heap/IncrementalSweeper.cpp
heap/JITStubRoutineSet.cpp
+ heap/LargeAllocation.cpp
heap/LiveObjectList.cpp
heap/MachineStackMarker.cpp
heap/MarkStack.cpp
@@ -619,6 +623,7 @@
runtime/ArrayBufferNeuteringWatchpoint.cpp
runtime/ArrayBufferView.cpp
runtime/ArrayConstructor.cpp
+ runtime/ArrayConventions.cpp
runtime/ArrayIteratorPrototype.cpp
runtime/ArrayPrototype.cpp
runtime/BasicBlockLocation.cpp
@@ -799,6 +804,7 @@
runtime/SimpleTypedArrayController.cpp
runtime/SmallStrings.cpp
runtime/SparseArrayValueMap.cpp
+ runtime/StackFrame.cpp
runtime/StrictEvalActivation.cpp
runtime/StringConstructor.cpp
runtime/StringIteratorPrototype.cpp
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index 33a81b5..ec7c64f 100644
--- a/Source/JavaScriptCore/ChangeLog
+++ b/Source/JavaScriptCore/ChangeLog
@@ -1,3 +1,1098 @@
+2016-08-31 Filip Pizlo <fpizlo@apple.com>
+
+ Butterflies should be allocated in Auxiliary MarkedSpace instead of CopiedSpace and we should rewrite as much of the GC as needed to make this not a regression
+ https://bugs.webkit.org/show_bug.cgi?id=160125
+
+ Reviewed by Geoffrey Garen and Keith Miller.
+
+ In order to make the GC concurrent (bug 149432), we would either need to enable concurrent
+ copying or we would need to not copy. Concurrent copying carries a 1-2% throughput overhead
+ from the barriers alone. Considering that MarkedSpace does a decent job of avoiding
+ fragmentation, it's unlikely that it's worth paying 1-2% throughput for copying. So, we want
+ to get rid of copied space. This change moves copied space's biggest client over to marked
+ space.
+
+ Moving butterflies to marked space means having them use the new Auxiliary HeapCell
+ allocation path. This is a fairly mechanical change, but it caused performance regressions
+ everywhere, so this change also fixes MarkedSpace's performance issues.
+
+ At a high level the mechanical changes are:
+
+ - We use AuxiliaryBarrier instead of CopyBarrier.
+
+ - We use tryAllocateAuxiliary instead of tryAllocateStorage. I got rid of the silly
+ CheckedBoolean stuff, since it's so much more trouble than it's worth.
+
+ - The JITs have to emit inlined marked space allocations instead of inline copy space
+ allocations.
+
+ - Everyone has to get used to zeroing their butterflies after allocation instead of relying
+ on them being pre-zeroed by the GC. Copied space would zero things for you, while marked
+ space doesn't.
+
+ That's about 1/3 of this change. But this led to performance problems, which I fixed with
+ optimizations that amounted to a major MarkedSpace rewrite:
+
+ - MarkedSpace always causes internal fragmentation for array allocations because the vector
+ length we choose when we resize usually leads to a cell size that doesn't correspond to any
+ size class. I got around this by making array allocations usually round up vectorLength to
+ the maximum allowed by the size class that we would have allocated in. Also,
+ ensureLengthSlow() and friends first make sure that the requested length can't just be
+ fulfilled with the current allocation size. This safeguard means that not every array
+ allocation has to do size class queries. For example, the fast path of new Array(length)
+ never does any size class queries, under the assumption that (1) the speed gained from
+ avoiding an ensureLengthSlow() call, which then just changes the vectorLength by doing the
+ size class query, is too small to offset the speed lost by doing the query on every
+ allocation and (2) new Array(length) is a pretty good hint that resizing is not very
+ likely.
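+
+ As a sketch of the rounding (the real helpers are MarkedSpace::optimalSizeFor() and
+ Butterfly::optimalContiguousVectorLength(); the exact signatures below are my
+ paraphrase, not the patch's code):
+
+     // Round a requested vectorLength up to whatever fits in the size class
+     // that the butterfly allocation would land in anyway.
+     unsigned optimalContiguousVectorLength(size_t propertyCapacity, unsigned vectorLength)
+     {
+         size_t requestedBytes = Butterfly::totalSize(
+             0, propertyCapacity, true, sizeof(EncodedJSValue) * vectorLength);
+         size_t allocatedBytes = MarkedSpace::optimalSizeFor(requestedBytes);
+         // Whatever extra bytes the size class hands us are free vector capacity.
+         size_t overhead = Butterfly::totalSize(0, propertyCapacity, true, 0);
+         return static_cast<unsigned>((allocatedBytes - overhead) / sizeof(EncodedJSValue));
+     }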
+
+ - Size classes in MarkedSpace were way too precise, which led to external fragmentation. This
+ changes MarkedSpace size classes to use a linear progression for very small sizes followed
+ by a geometric progression that naturally transitions to a hyperbolic progression. We want
+ hyperbolic sizes when we get close to blockSize: for example the largest size we want is
+ payloadSize / 2 rounded down, to ensure we get exactly two cells with minimal slop. The
+ next size down should be payloadSize / 3 rounded down, and so on. After the last precise
+ size (80 bytes), we proceed using a geometric progression, but round up each size to
+ minimize slop at the end of the block. This naturally causes the geometric progression to
+ turn hyperbolic for large sizes. The size class configuration happens at VM start-up, so
+ it can be controlled with runtime options. I found that a base of 1.4 works pretty well.
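+
+ The size class computation, roughly (a sketch built from the description above;
+ the option names match the runtime flags mentioned below, but the loop itself is
+ my paraphrase):
+
+     static const size_t atomSize = 16;
+     static const size_t preciseCutoff = 80;
+     Vector<size_t> sizeClasses(size_t payloadSize)
+     {
+         Vector<size_t> result;
+         for (size_t size = atomSize; size <= preciseCutoff; size += atomSize)
+             result.append(size); // the linear part: 16, 32, 48, 64, 80
+         for (unsigned i = 1; ; ++i) {
+             double approximate = preciseCutoff * pow(Options::sizeClassProgression(), i);
+             size_t candidate = WTF::roundUpToMultipleOf<atomSize>(static_cast<size_t>(approximate));
+             if (candidate > Options::largeAllocationCutoff())
+                 break;
+             // Fit as many candidate-size cells as possible, then grow the cell
+             // size to soak up the block's leftover slop; near payloadSize this
+             // is what turns the geometric progression hyperbolic.
+             size_t cellsPerBlock = payloadSize / candidate;
+             result.append((payloadSize / cellsPerBlock) & ~(atomSize - 1));
+         }
+         return result;
+     }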
+
+ - Large allocations caused massive internal fragmentation, since the smallest large
+ allocation had to use exactly blockSize, and the largest small allocation used
+ blockSize / 2. The next size up - the first large allocation size to require two blocks -
+ also had 50% internal fragmentation. This is because we required large allocations to be
+ blockSize aligned, so that MarkedBlock::blockFor() would work. I decided to rewrite all of
+ that. Cells no longer have to be owned by a MarkedBlock. They can now alternatively be
+ owned by a LargeAllocation. These two things are abstracted as CellContainer. You know that
+ a cell is owned by a LargeAllocation if the MarkedBlock::atomSize / 2 bit is set.
+ Basically, large allocations are deliberately misaligned by 8 bytes. This actually works
+ out great since (1) typed arrays won't use large allocations anyway since they have their
+ own malloc fallback and (2) large array butterflies already have an 8 byte header, which
+ means that the 8 byte base misalignment aligns the large array payload on a 16 byte
+ boundary. I took extreme care to make sure that the isLargeAllocation bit checks are as
+ rare as possible; for example, ExecState::vm() skips the check because we know that callees
+ must be small allocations. It's also possible to use template tricks to do one check for
+ cell container kind, and then invoke a function specialized for MarkedBlock or a function
+ specialized for LargeAllocation. LargeAllocation includes stubs for all MarkedBlock methods
+ that get used from functions that are template-specialized like this. That's mostly to
+ speed up the GC marking code. Most other code can use CellContainer API or HeapCell API
+ directly. That's another thing: HeapCell, the common base of JSCell and auxiliary
+ allocations, is now smart enough to do a lot of things for you, like HeapCell::vm(),
+ HeapCell::heap(), HeapCell::isLargeAllocation(), and HeapCell::cellContainer(). The size
+ cutoff for large allocations is runtime-configurable, so long as you don't choose something
+ so small that callees end up large. I found that 400 bytes is roughly optimal. This means
+ that the MarkedBlock size classes end up being:
+
+ 16, 32, 48, 64, 80, 112, 160, 224, 320
+
+ The next size class would have been 432, but that's above the 400 byte cutoff. All of this
+ is configurable with --sizeClassProgression and --largeAllocationCutoff. You can see what
+ size classes you end up with by doing --dumpSizeClasses=true.
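+
+ The container check itself is cheap; here is a minimal sketch of the bit trick
+ (the real code lives in HeapCellInlines.h, this is my restatement):
+
+     // MarkedBlock atoms are atomSize (16 byte) aligned, and LargeAllocation
+     // deliberately hands out cells offset by atomSize / 2 = 8 bytes, so one
+     // pointer bit distinguishes the two container kinds.
+     inline bool HeapCell::isLargeAllocation() const
+     {
+         return reinterpret_cast<uintptr_t>(this) & (MarkedBlock::atomSize / 2);
+     }
+
+     inline CellContainer HeapCell::cellContainer() const
+     {
+         if (isLargeAllocation())
+             return largeAllocation(); // the header lives just before the cell
+         return markedBlock(); // mask the pointer down to blockSize alignment
+     }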
+
+ - Copied space uses 64KB blocks, while marked space used to use 16KB blocks. Allocating a lot
+ of stuff in 16KB blocks was slower than allocating it in 64KB blocks because the GC had a
+ lot of per-block overhead. I removed this overhead: It's now 2x faster to scan all
+ MarkedBlocks because the list that contains the interesting meta-data is allocated on the
+ side, for better locality during a sequential walk. It's no longer necessary to scan
+ MarkedBlocks to find WeakSets, since the sets of WeakSets for eden scan and full scan are
+ maintained on-the-fly. It's no longer necessary to scan all MarkedBlocks to clear mark
+ bits because we now use versioned mark bits: to clear them, just increment the 64-bit
+ heap version. It's no longer necessary to scan retired MarkedBlocks while allocating
+ because marking retires them on-the-fly. It's no longer necessary to sort all blocks in
+ the IncrementalSweeper's snapshot because blocks now know if they are in the snapshot. Put
+ together, these optimizations allowed me to reduce block size to 16KB without losing much
+ performance. There is some small perf loss on JetStream/splay, but not enough to hurt
+ JetStream overall. I tried reducing block sizes further, to 4KB, since that is a
+ progression on membuster. That's not possible yet, since there is still enough per-block
+ overhead that such a reduction hurts JetStream too much. I filed a bug about improving
+ this further: https://bugs.webkit.org/show_bug.cgi?id=161581.
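+
+ The versioned mark bits in a nutshell (a sketch; the member names are my
+ assumptions, flipIfNecessary() is the real entry point):
+
+     // Each block stamps its mark bits with the heap version that produced
+     // them. Bumping the heap's 64-bit version logically clears every block's
+     // marks at once; a block lazily clears its bits the first time it is
+     // touched in the new version.
+     inline bool MarkedBlock::isMarked(const void* cell)
+     {
+         if (m_version != heap()->version())
+             return false; // stale bits from an old cycle count as unmarked
+         return m_marks.get(atomNumber(cell));
+     }
+
+     inline void MarkedBlock::flipIfNecessary(uint64_t heapVersion)
+     {
+         if (UNLIKELY(heapVersion != m_version)) {
+             m_marks.clearAll();
+             m_version = heapVersion;
+         }
+     }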
+
+ - Even after all of that, copying butterflies was still faster because it allowed us to skip
+ sweeping dead space. A good GC allocates over dead bytes without explicitly freeing them,
+ so the GC pause is O(size of live), not O(size of live + dead). O(dead) is usually much
+ larger than O(live), especially in an eden collection. Copying satisfies this premise while
+ mark+sweep does not. So, I invented a new kind of allocator: bump'n'pop. Previously, our
+ MarkedSpace allocator was a freelist pop. That's simple and easy to inline but requires
+ that we walk the block to build a free list. This means walking dead space. The new
+ allocator allows totally free MarkedBlocks to simply set up a bump-pointer arena instead.
+ The allocator is a hybrid of bump-pointer and freelist pop. It tries bump first. The bump
+ pointer always bumps by cellSize, so the result of filling a block with bumping looks as if
+ we had used freelist popping to fill it. Additionally, each MarkedBlock now has a bit to
+ quickly tell if the block is entirely free. This makes sweeping O(1) whenever a MarkedBlock
+ is completely empty, which is the common case because of the generational hypothesis: the
+ number of objects that survive an eden collection is a tiny fraction of the number of
+ objects that had been allocated, and this fraction is so small that there are typically
+ fewer than one survivor per MarkedBlock. This optimization was enough to make the patch a
+ net win over tip-of-tree.
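+
+ The fast path of bump'n'pop, roughly (a sketch; the FreeList field names are
+ assumptions based on the description above):
+
+     struct FreeCell { FreeCell* next; };
+
+     ALWAYS_INLINE void* MarkedAllocator::tryAllocateFast()
+     {
+         // Bump first: a totally free block sets up payloadEnd/remaining instead
+         // of building a free list, and we bump by cellSize so the filled block
+         // looks exactly as if it had been filled by popping.
+         if (unsigned remaining = m_freeList.remaining) {
+             m_freeList.remaining = remaining - m_cellSize;
+             return m_freeList.payloadEnd - remaining;
+         }
+         // Otherwise fall back to the classic freelist pop.
+         FreeCell* head = m_freeList.head;
+         if (!head)
+             return nullptr; // slow path: sweep another block or collect
+         m_freeList.head = head->next;
+         return head;
+     }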
+
+ - FTL now shares the same allocation fast paths as everything else, which is great, because
+ bump'n'pop has gnarly control flow. We don't really want B3 to have to think about that
+ control flow, since it won't be able to improve the machine code we write ourselves. GC
+ fast paths are best written in assembly. So, I've empowered B3 to have even better support
+ for Patchpoint terminals. It's now totally fine for a Patchpoint terminal to be non-Void.
+ So, the new FTL allocation fast paths are just Patchpoint terminals that call through to
+ AssemblyHelpers::emitAllocate(). B3 still reasons about things like constant-folding the
+ size class calculation and constant-hoisting the allocator. Also, I gave the FTL the
+ ability to constant-fold some allocator logic (in case we first assume that we're doing a
+ variable-length allocation but then realize that the length is known). I think it makes
+ sense to have constant folding rules in FTL::Output, or whatever the B3 IR builder is,
+ since this makes lowering easier (you can constant fold during lowering more easily) and it
+ reduces the amount of malloc traffic. In the future, we could teach B3 how to better
+ constant-fold this code. That would require allowing loads to be constant-folded, which is
+ doable but hella tricky.
+
+ - It used to be that if a logical object allocation required two physical allocations (first
+ the butterfly and then the cell), then the JIT would emit the code in such a way that a
+ failure in the second fast path would cause us to forget the successful first physical
+ allocation. This was pointlessly wasteful. It turns out that it's very cheap to devote a
+ register to storing either the butterfly or null, because the butterfly register is anyway
+ going to be free inside the first allocation. The only overhead here is zeroing the
+ butterfly register. With that in place, we can just pass the butterfly-or-null to the slow
+ path, which can then either allocate a butterfly or not. So now we never waste a successful
+ allocation. This patch implements such a solution both in DFG (where it's easy to do this
+ since we control registers already) and in FTL (where it's annoying, because mutable
+ "butterfly-or-null" variables are hard to say in SSA; also I realized that we had code
+ duplicated the JSArray allocation utility, so I deduplicated it). This came up because in
+ one version of this patch, this wastage would resonate with some Kraken benchmark: the
+ benchmark would always allocate N small things followed by one bigger thing. The problem
+ was I accidentally adjusted the various fixed overheads in MarkedBlock in such a way that
+ the JSObject size class, which both the small and big thing shared for their cell, could
+ hold exactly N cells per MarkedBlock. Then the benchmark would always call slow path when
+ it allocated the big thing. So, it would end up having to allocate the big thing's large
+ butterfly twice, every single time! Ouch!
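+
+ What the slow path sees, as a sketch (the operation name and exact calls here
+ are illustrative, not the patch's code):
+
+     JSObject* operationNewObjectWithButterflyOrNull(
+         ExecState* exec, Structure* structure, Butterfly* butterflyOrNull)
+     {
+         VM& vm = exec->vm();
+         Butterfly* butterfly = butterflyOrNull;
+         if (!butterfly) {
+             // Only allocate the piece the fast path actually failed on; a
+             // butterfly the fast path already made is never thrown away.
+             butterfly = Butterfly::createUninitialized(
+                 vm, nullptr, 0, structure->outOfLineCapacity(), false, 0);
+         }
+         JSObject* cell = allocateObjectCell(vm, structure); // hypothetical helper
+         cell->setButterfly(vm, butterfly);
+         return cell;
+     }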
+
+ - It used to be that we zeroed CopiedBlocks using memset, and so array allocations enjoyed
+ amortization of the cost of zeroing. This doesn't work anymore - it's now up to the client
+ of the allocator to initialize the object to whatever state they need. It used to be that
+ we would just use a dumb loop. I initially changed this so that we would end up in memset
+ for large allocations, but this didn't actually help performance that much. I got a much
+ better result by playing with different memsets written in assembly. First I wrote one
+ using non-temporal stores. That was a small speed-up over memset. Then I tried the classic
+ "rep stos" approach, and holy cow that version was fast. It's a ~20% speed-up on array
+ allocation microbenchmarks. So, this patch adds code paths to do "rep stos" on x86_64, or
+ memset, or use a loop, as appropriate, for both "contiguous" arrays (holes are zero) and
+ double arrays (holes are PNaN). Note that the JIT always emits either a loop or a flat slab
+ of stores (if the size is known), but those paths in the JIT won't trigger for
+ NewArrayWithSize() if the size is large, since that takes us to the
+ operationNewArrayWithSize() slow path, which calls into JSArray::create(). That's why the
+ optimizations here are all in JSArray::create() - that's the hot place for large arrays
+ that need to be filled with holes.
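+
+ The "rep stos" fill, for reference (gcc/clang inline asm on x86_64; a minimal
+ sketch of the technique, not the patch's exact code):
+
+     // Fill count 64-bit slots with value: 0 for "contiguous" holes, the bit
+     // pattern of PNaN for double-array holes. "rep stosq" stores RAX to [RDI]
+     // RCX times, advancing RDI as it goes.
+     static ALWAYS_INLINE void fillSlots64(uint64_t* dst, uint64_t value, size_t count)
+     {
+     #if CPU(X86_64) && !COMPILER(MSVC)
+         asm volatile(
+             "rep stosq"
+             : "+D"(dst), "+c"(count)
+             : "a"(value)
+             : "memory");
+     #else
+         for (size_t i = 0; i < count; ++i)
+             dst[i] = value;
+     #endif
+     }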
+
+ All of this put together gives us neutral perf on JetStream, membuster, and PLT3, a ~1%
+ regression on Speedometer, and up to a 4% regression on Kraken. The Kraken regression is
+ because Kraken was allocating exactly 1024 element arrays at a rate of 400MB/sec. This is a
+ best-case scenario for bump allocation. I think that we should fix bmalloc to make up the
+ difference, but take the hit for now because it's a crazy corner case. By comparison, the
+ alternative approach of using a copy barrier would have cost us 1-2%. That's the real
+ apples-to-apples comparison if your premise is that we should have a concurrent GC. After we
+ finish removing copied space, we will be barrier-ready for concurrent GC: we already have a
+ marking barrier and we simply won't need a copying barrier. This change gets us there for
+ the purposes of our benchmarks, since the remaining clients of copied space are not very
+ important. On the other hand, if we keep copying, then getting barrier-ready would mean
+ adding back the copy barrier, which costs more perf.
+
+ We might get bigger speed-ups once we remove CopiedSpace altogether. That requires moving
+ typed arrays and a few other weird things over to Aux MarkedSpace.
+
+ This also includes some header sanitization. The introduction of AuxiliaryBarrier, HeapCell,
+ and CellContainer meant that I had to include those files from everywhere. Fortunately,
+ just including JSCInlines.h (instead of manually including the files that it includes) is
+ usually enough. So, I made most of JSC's cpp files include JSCInlines.h, which is something
+ that we were already basically doing. In places where JSCInlines.h would be too much, I just
+ included HeapInlines.h. This got weird, because we previously included HeapInlines.h from
+ JSObject.h. That's bad because it led to some circular dependencies, so I fixed it - but that
+ meant having to manually include HeapInlines.h from the places that previously got it
+ implicitly via JSObject.h. But that led to more problems for some reason: I started getting
+ build errors because non-JSC files were having trouble including Opcode.h. That's just silly,
+ since Opcode.h is meant to be an internal JSC header. So, I made it an internal header and
+ made it impossible to include it from outside JSC. This was a lot of work, but it was
+ necessary to get the patch to build on all ports. It's also a net win. There were many places
+ in WebCore that were transitively including a *ton* of JSC headers just because of the
+ JSObject.h->HeapInlines.h edge and a bunch of dependency edges that arose from some public
+ (for WebCore) JSC headers needing Interpreter.h or Opcode.h for bad reasons.
+
+ * API/JSManagedValue.mm:
+ (-[JSManagedValue initWithValue:]):
+ * API/JSTypedArray.cpp:
+ * API/ObjCCallbackFunction.mm:
+ * API/tests/testapi.mm:
+ (testObjectiveCAPI):
+ (testWeakValue): Deleted.
+ * CMakeLists.txt:
+ * JavaScriptCore.xcodeproj/project.pbxproj:
+ * Scripts/builtins/builtins_generate_combined_implementation.py:
+ (BuiltinsCombinedImplementationGenerator.generate_secondary_header_includes):
+ * Scripts/builtins/builtins_generate_internals_wrapper_implementation.py:
+ (BuiltinsInternalsWrapperImplementationGenerator.generate_secondary_header_includes):
+ * Scripts/builtins/builtins_generate_separate_implementation.py:
+ (BuiltinsSeparateImplementationGenerator.generate_secondary_header_includes):
+ * assembler/AbstractMacroAssembler.h:
+ (JSC::AbstractMacroAssembler::JumpList::link):
+ (JSC::AbstractMacroAssembler::JumpList::linkTo):
+ * assembler/MacroAssembler.h:
+ * assembler/MacroAssemblerARM64.h:
+ (JSC::MacroAssemblerARM64::add32):
+ * assembler/MacroAssemblerCodeRef.cpp: Added.
+ (JSC::MacroAssemblerCodePtr::createLLIntCodePtr):
+ (JSC::MacroAssemblerCodePtr::dumpWithName):
+ (JSC::MacroAssemblerCodePtr::dump):
+ (JSC::MacroAssemblerCodeRef::createLLIntCodeRef):
+ (JSC::MacroAssemblerCodeRef::dump):
+ * assembler/MacroAssemblerCodeRef.h:
+ (JSC::MacroAssemblerCodePtr::createLLIntCodePtr): Deleted.
+ (JSC::MacroAssemblerCodePtr::dumpWithName): Deleted.
+ (JSC::MacroAssemblerCodePtr::dump): Deleted.
+ (JSC::MacroAssemblerCodeRef::createLLIntCodeRef): Deleted.
+ (JSC::MacroAssemblerCodeRef::dump): Deleted.
+ * b3/B3BasicBlock.cpp:
+ (JSC::B3::BasicBlock::appendBoolConstant):
+ * b3/B3BasicBlock.h:
+ * b3/B3DuplicateTails.cpp:
+ * b3/B3StackmapGenerationParams.h:
+ * b3/testb3.cpp:
+ (JSC::B3::testPatchpointTerminalReturnValue):
+ (JSC::B3::run):
+ * bindings/ScriptValue.cpp:
+ * bytecode/AdaptiveInferredPropertyValueWatchpointBase.cpp:
+ * bytecode/BytecodeBasicBlock.cpp:
+ * bytecode/BytecodeLivenessAnalysis.cpp:
+ * bytecode/BytecodeUseDef.h:
+ * bytecode/CallLinkInfo.cpp:
+ (JSC::CallLinkInfo::callTypeFor):
+ * bytecode/CallLinkInfo.h:
+ (JSC::CallLinkInfo::callTypeFor): Deleted.
+ * bytecode/CallLinkStatus.cpp:
+ * bytecode/CodeBlock.cpp:
+ (JSC::CodeBlock::finishCreation):
+ (JSC::CodeBlock::clearLLIntGetByIdCache):
+ (JSC::CodeBlock::predictedMachineCodeSize):
+ * bytecode/CodeBlock.h:
+ (JSC::CodeBlock::jitCodeMap): Deleted.
+ (JSC::clearLLIntGetByIdCache): Deleted.
+ * bytecode/ExecutionCounter.h:
+ * bytecode/Instruction.h:
+ * bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp:
+ (JSC::LLIntPrototypeLoadAdaptiveStructureWatchpoint::fireInternal):
+ * bytecode/ObjectAllocationProfile.h:
+ (JSC::ObjectAllocationProfile::isNull):
+ (JSC::ObjectAllocationProfile::initialize):
+ * bytecode/Opcode.h:
+ (JSC::padOpcodeName):
+ * bytecode/PolymorphicAccess.cpp:
+ (JSC::AccessCase::generateImpl):
+ (JSC::PolymorphicAccess::regenerate):
+ * bytecode/PolymorphicAccess.h:
+ * bytecode/PreciseJumpTargets.cpp:
+ * bytecode/StructureStubInfo.cpp:
+ * bytecode/StructureStubInfo.h:
+ * bytecode/UnlinkedCodeBlock.cpp:
+ (JSC::UnlinkedCodeBlock::vm): Deleted.
+ * bytecode/UnlinkedCodeBlock.h:
+ * bytecode/UnlinkedInstructionStream.cpp:
+ * bytecode/UnlinkedInstructionStream.h:
+ * dfg/DFGOperations.cpp:
+ * dfg/DFGSpeculativeJIT.cpp:
+ (JSC::DFG::SpeculativeJIT::emitAllocateRawObject):
+ (JSC::DFG::SpeculativeJIT::compileMakeRope):
+ (JSC::DFG::SpeculativeJIT::compileAllocatePropertyStorage):
+ (JSC::DFG::SpeculativeJIT::compileReallocatePropertyStorage):
+ * dfg/DFGSpeculativeJIT.h:
+ (JSC::DFG::SpeculativeJIT::emitAllocateJSCell):
+ (JSC::DFG::SpeculativeJIT::emitAllocateJSObject):
+ * dfg/DFGSpeculativeJIT32_64.cpp:
+ (JSC::DFG::SpeculativeJIT::compile):
+ (JSC::DFG::SpeculativeJIT::compileAllocateNewArrayWithSize):
+ * dfg/DFGSpeculativeJIT64.cpp:
+ (JSC::DFG::SpeculativeJIT::compile):
+ (JSC::DFG::SpeculativeJIT::compileAllocateNewArrayWithSize):
+ * dfg/DFGStrengthReductionPhase.cpp:
+ (JSC::DFG::StrengthReductionPhase::handleNode):
+ * ftl/FTLAbstractHeapRepository.h:
+ * ftl/FTLCompile.cpp:
+ * ftl/FTLJITFinalizer.cpp:
+ * ftl/FTLLowerDFGToB3.cpp:
+ (JSC::FTL::DFG::LowerDFGToB3::compileCreateDirectArguments):
+ (JSC::FTL::DFG::LowerDFGToB3::compileCreateRest):
+ (JSC::FTL::DFG::LowerDFGToB3::allocateArrayWithSize):
+ (JSC::FTL::DFG::LowerDFGToB3::compileNewArrayWithSize):
+ (JSC::FTL::DFG::LowerDFGToB3::compileMakeRope):
+ (JSC::FTL::DFG::LowerDFGToB3::compileMaterializeNewObject):
+ (JSC::FTL::DFG::LowerDFGToB3::initializeArrayElements):
+ (JSC::FTL::DFG::LowerDFGToB3::allocatePropertyStorageWithSizeImpl):
+ (JSC::FTL::DFG::LowerDFGToB3::allocateHeapCell):
+ (JSC::FTL::DFG::LowerDFGToB3::allocateCell):
+ (JSC::FTL::DFG::LowerDFGToB3::allocateObject):
+ (JSC::FTL::DFG::LowerDFGToB3::allocatorForSize):
+ (JSC::FTL::DFG::LowerDFGToB3::allocateVariableSizedObject):
+ (JSC::FTL::DFG::LowerDFGToB3::allocateJSArray):
+ (JSC::FTL::DFG::LowerDFGToB3::compileAllocateArrayWithSize): Deleted.
+ * ftl/FTLOutput.cpp:
+ (JSC::FTL::Output::constBool):
+ (JSC::FTL::Output::add):
+ (JSC::FTL::Output::shl):
+ (JSC::FTL::Output::aShr):
+ (JSC::FTL::Output::lShr):
+ (JSC::FTL::Output::zeroExt):
+ (JSC::FTL::Output::equal):
+ (JSC::FTL::Output::notEqual):
+ (JSC::FTL::Output::above):
+ (JSC::FTL::Output::aboveOrEqual):
+ (JSC::FTL::Output::below):
+ (JSC::FTL::Output::belowOrEqual):
+ (JSC::FTL::Output::greaterThan):
+ (JSC::FTL::Output::greaterThanOrEqual):
+ (JSC::FTL::Output::lessThan):
+ (JSC::FTL::Output::lessThanOrEqual):
+ (JSC::FTL::Output::select):
+ (JSC::FTL::Output::appendSuccessor):
+ (JSC::FTL::Output::addIncomingToPhi):
+ * ftl/FTLOutput.h:
+ * ftl/FTLValueFromBlock.h:
+ (JSC::FTL::ValueFromBlock::operator bool):
+ (JSC::FTL::ValueFromBlock::ValueFromBlock): Deleted.
+ * ftl/FTLWeightedTarget.h:
+ (JSC::FTL::WeightedTarget::frequentedBlock):
+ * heap/CellContainer.h: Added.
+ (JSC::CellContainer::CellContainer):
+ (JSC::CellContainer::operator bool):
+ (JSC::CellContainer::isMarkedBlock):
+ (JSC::CellContainer::isLargeAllocation):
+ (JSC::CellContainer::markedBlock):
+ (JSC::CellContainer::largeAllocation):
+ * heap/CellContainerInlines.h: Added.
+ (JSC::CellContainer::isMarked):
+ (JSC::CellContainer::isMarkedOrNewlyAllocated):
+ (JSC::CellContainer::noteMarked):
+ (JSC::CellContainer::cellSize):
+ (JSC::CellContainer::weakSet):
+ (JSC::CellContainer::flipIfNecessary):
+ * heap/ConservativeRoots.cpp:
+ (JSC::ConservativeRoots::ConservativeRoots):
+ (JSC::ConservativeRoots::~ConservativeRoots):
+ (JSC::ConservativeRoots::grow):
+ (JSC::ConservativeRoots::genericAddPointer):
+ (JSC::ConservativeRoots::genericAddSpan):
+ * heap/ConservativeRoots.h:
+ (JSC::ConservativeRoots::roots):
+ * heap/CopyToken.h:
+ * heap/FreeList.cpp: Added.
+ (JSC::FreeList::dump):
+ * heap/FreeList.h: Added.
+ (JSC::FreeList::FreeList):
+ (JSC::FreeList::list):
+ (JSC::FreeList::bump):
+ (JSC::FreeList::operator==):
+ (JSC::FreeList::operator!=):
+ (JSC::FreeList::operator bool):
+ (JSC::FreeList::allocationWillFail):
+ (JSC::FreeList::allocationWillSucceed):
+ * heap/GCTypeMap.h: Added.
+ (JSC::GCTypeMap::operator[]):
+ * heap/Heap.cpp:
+ (JSC::Heap::Heap):
+ (JSC::Heap::lastChanceToFinalize):
+ (JSC::Heap::finalizeUnconditionalFinalizers):
+ (JSC::Heap::markRoots):
+ (JSC::Heap::copyBackingStores):
+ (JSC::Heap::gatherStackRoots):
+ (JSC::Heap::gatherJSStackRoots):
+ (JSC::Heap::gatherScratchBufferRoots):
+ (JSC::Heap::clearLivenessData):
+ (JSC::Heap::visitSmallStrings):
+ (JSC::Heap::visitConservativeRoots):
+ (JSC::Heap::removeDeadCompilerWorklistEntries):
+ (JSC::Heap::gatherExtraHeapSnapshotData):
+ (JSC::Heap::removeDeadHeapSnapshotNodes):
+ (JSC::Heap::visitProtectedObjects):
+ (JSC::Heap::visitArgumentBuffers):
+ (JSC::Heap::visitException):
+ (JSC::Heap::visitStrongHandles):
+ (JSC::Heap::visitHandleStack):
+ (JSC::Heap::visitSamplingProfiler):
+ (JSC::Heap::traceCodeBlocksAndJITStubRoutines):
+ (JSC::Heap::converge):
+ (JSC::Heap::visitWeakHandles):
+ (JSC::Heap::updateObjectCounts):
+ (JSC::Heap::clearUnmarkedExecutables):
+ (JSC::Heap::deleteUnmarkedCompiledCode):
+ (JSC::Heap::collectAllGarbage):
+ (JSC::Heap::collect):
+ (JSC::Heap::collectWithoutAnySweep):
+ (JSC::Heap::collectImpl):
+ (JSC::Heap::suspendCompilerThreads):
+ (JSC::Heap::willStartCollection):
+ (JSC::Heap::flushOldStructureIDTables):
+ (JSC::Heap::flushWriteBarrierBuffer):
+ (JSC::Heap::stopAllocation):
+ (JSC::Heap::prepareForMarking):
+ (JSC::Heap::reapWeakHandles):
+ (JSC::Heap::pruneStaleEntriesFromWeakGCMaps):
+ (JSC::Heap::sweepArrayBuffers):
+ (JSC::MarkedBlockSnapshotFunctor::MarkedBlockSnapshotFunctor):
+ (JSC::MarkedBlockSnapshotFunctor::operator()):
+ (JSC::Heap::snapshotMarkedSpace):
+ (JSC::Heap::deleteSourceProviderCaches):
+ (JSC::Heap::notifyIncrementalSweeper):
+ (JSC::Heap::writeBarrierCurrentlyExecutingCodeBlocks):
+ (JSC::Heap::resetAllocators):
+ (JSC::Heap::updateAllocationLimits):
+ (JSC::Heap::didFinishCollection):
+ (JSC::Heap::resumeCompilerThreads):
+ (JSC::Zombify::visit):
+ (JSC::Heap::forEachCodeBlockImpl):
+ * heap/Heap.h:
+ (JSC::Heap::allocatorForObjectWithoutDestructor):
+ (JSC::Heap::allocatorForObjectWithDestructor):
+ (JSC::Heap::allocatorForAuxiliaryData):
+ (JSC::Heap::jitStubRoutines):
+ (JSC::Heap::codeBlockSet):
+ (JSC::Heap::storageAllocator): Deleted.
+ * heap/HeapCell.h:
+ (JSC::HeapCell::isZapped): Deleted.
+ * heap/HeapCellInlines.h: Added.
+ (JSC::HeapCell::isLargeAllocation):
+ (JSC::HeapCell::cellContainer):
+ (JSC::HeapCell::markedBlock):
+ (JSC::HeapCell::largeAllocation):
+ (JSC::HeapCell::heap):
+ (JSC::HeapCell::vm):
+ (JSC::HeapCell::cellSize):
+ (JSC::HeapCell::allocatorAttributes):
+ (JSC::HeapCell::destructionMode):
+ (JSC::HeapCell::cellKind):
+ * heap/HeapInlines.h:
+ (JSC::Heap::heap):
+ (JSC::Heap::isLive):
+ (JSC::Heap::isMarked):
+ (JSC::Heap::testAndSetMarked):
+ (JSC::Heap::setMarked):
+ (JSC::Heap::cellSize):
+ (JSC::Heap::forEachCodeBlock):
+ (JSC::Heap::allocateObjectOfType):
+ (JSC::Heap::subspaceForObjectOfType):
+ (JSC::Heap::allocatorForObjectOfType):
+ (JSC::Heap::allocateAuxiliary):
+ (JSC::Heap::tryAllocateAuxiliary):
+ (JSC::Heap::tryReallocateAuxiliary):
+ (JSC::Heap::isPointerGCObject): Deleted.
+ (JSC::Heap::isValueGCObject): Deleted.
+ * heap/HeapOperation.cpp: Added.
+ (WTF::printInternal):
+ * heap/HeapOperation.h:
+ * heap/HeapUtil.h: Added.
+ (JSC::HeapUtil::findGCObjectPointersForMarking):
+ (JSC::HeapUtil::isPointerGCObjectJSCell):
+ (JSC::HeapUtil::isValueGCObject):
+ * heap/IncrementalSweeper.cpp:
+ (JSC::IncrementalSweeper::sweepNextBlock):
+ * heap/IncrementalSweeper.h:
+ * heap/LargeAllocation.cpp: Added.
+ (JSC::LargeAllocation::tryCreate):
+ (JSC::LargeAllocation::LargeAllocation):
+ (JSC::LargeAllocation::lastChanceToFinalize):
+ (JSC::LargeAllocation::shrink):
+ (JSC::LargeAllocation::visitWeakSet):
+ (JSC::LargeAllocation::reapWeakSet):
+ (JSC::LargeAllocation::flip):
+ (JSC::LargeAllocation::isEmpty):
+ (JSC::LargeAllocation::sweep):
+ (JSC::LargeAllocation::destroy):
+ (JSC::LargeAllocation::dump):
+ * heap/LargeAllocation.h: Added.
+ (JSC::LargeAllocation::fromCell):
+ (JSC::LargeAllocation::cell):
+ (JSC::LargeAllocation::isLargeAllocation):
+ (JSC::LargeAllocation::heap):
+ (JSC::LargeAllocation::vm):
+ (JSC::LargeAllocation::weakSet):
+ (JSC::LargeAllocation::clearNewlyAllocated):
+ (JSC::LargeAllocation::isNewlyAllocated):
+ (JSC::LargeAllocation::isMarked):
+ (JSC::LargeAllocation::isMarkedOrNewlyAllocated):
+ (JSC::LargeAllocation::isLive):
+ (JSC::LargeAllocation::hasValidCell):
+ (JSC::LargeAllocation::cellSize):
+ (JSC::LargeAllocation::aboveLowerBound):
+ (JSC::LargeAllocation::belowUpperBound):
+ (JSC::LargeAllocation::contains):
+ (JSC::LargeAllocation::attributes):
+ (JSC::LargeAllocation::flipIfNecessary):
+ (JSC::LargeAllocation::flipIfNecessaryConcurrently):
+ (JSC::LargeAllocation::testAndSetMarked):
+ (JSC::LargeAllocation::setMarked):
+ (JSC::LargeAllocation::clearMarked):
+ (JSC::LargeAllocation::noteMarked):
+ (JSC::LargeAllocation::headerSize):
+ * heap/MarkedAllocator.cpp:
+ (JSC::MarkedAllocator::MarkedAllocator):
+ (JSC::MarkedAllocator::isPagedOut):
+ (JSC::MarkedAllocator::retire):
+ (JSC::MarkedAllocator::filterNextBlock):
+ (JSC::MarkedAllocator::setNextBlockToSweep):
+ (JSC::MarkedAllocator::tryAllocateWithoutCollectingImpl):
+ (JSC::MarkedAllocator::tryAllocateWithoutCollecting):
+ (JSC::MarkedAllocator::allocateSlowCase):
+ (JSC::MarkedAllocator::tryAllocateSlowCase):
+ (JSC::MarkedAllocator::allocateSlowCaseImpl):
+ (JSC::blockHeaderSize):
+ (JSC::MarkedAllocator::blockSizeForBytes):
+ (JSC::MarkedAllocator::tryAllocateBlock):
+ (JSC::MarkedAllocator::addBlock):
+ (JSC::MarkedAllocator::removeBlock):
+ (JSC::MarkedAllocator::stopAllocating):
+ (JSC::MarkedAllocator::reset):
+ (JSC::MarkedAllocator::lastChanceToFinalize):
+ (JSC::MarkedAllocator::setFreeList):
+ (JSC::isListPagedOut): Deleted.
+ (JSC::MarkedAllocator::tryAllocateHelper): Deleted.
+ (JSC::MarkedAllocator::tryPopFreeList): Deleted.
+ (JSC::MarkedAllocator::tryAllocate): Deleted.
+ (JSC::MarkedAllocator::allocateBlock): Deleted.
+ * heap/MarkedAllocator.h:
+ (JSC::MarkedAllocator::takeLastActiveBlock):
+ (JSC::MarkedAllocator::offsetOfFreeList):
+ (JSC::MarkedAllocator::offsetOfCellSize):
+ (JSC::MarkedAllocator::tryAllocate):
+ (JSC::MarkedAllocator::allocate):
+ (JSC::MarkedAllocator::forEachBlock):
+ (JSC::MarkedAllocator::offsetOfFreeListHead): Deleted.
+ (JSC::MarkedAllocator::MarkedAllocator): Deleted.
+ (JSC::MarkedAllocator::init): Deleted.
+ (JSC::MarkedAllocator::stopAllocating): Deleted.
+ * heap/MarkedBlock.cpp:
+ (JSC::MarkedBlock::tryCreate):
+ (JSC::MarkedBlock::Handle::Handle):
+ (JSC::MarkedBlock::Handle::~Handle):
+ (JSC::MarkedBlock::MarkedBlock):
+ (JSC::MarkedBlock::Handle::specializedSweep):
+ (JSC::MarkedBlock::Handle::sweep):
+ (JSC::MarkedBlock::Handle::sweepHelperSelectScribbleMode):
+ (JSC::MarkedBlock::Handle::sweepHelperSelectStateAndSweepMode):
+ (JSC::MarkedBlock::Handle::unsweepWithNoNewlyAllocated):
+ (JSC::SetNewlyAllocatedFunctor::SetNewlyAllocatedFunctor):
+ (JSC::SetNewlyAllocatedFunctor::operator()):
+ (JSC::MarkedBlock::Handle::stopAllocating):
+ (JSC::MarkedBlock::Handle::lastChanceToFinalize):
+ (JSC::MarkedBlock::Handle::resumeAllocating):
+ (JSC::MarkedBlock::Handle::zap):
+ (JSC::MarkedBlock::Handle::forEachFreeCell):
+ (JSC::MarkedBlock::flipIfNecessary):
+ (JSC::MarkedBlock::Handle::flipIfNecessary):
+ (JSC::MarkedBlock::flipIfNecessarySlow):
+ (JSC::MarkedBlock::flipIfNecessaryConcurrentlySlow):
+ (JSC::MarkedBlock::clearMarks):
+ (JSC::MarkedBlock::assertFlipped):
+ (JSC::MarkedBlock::needsFlip):
+ (JSC::MarkedBlock::Handle::needsFlip):
+ (JSC::MarkedBlock::Handle::willRemoveBlock):
+ (JSC::MarkedBlock::Handle::didConsumeFreeList):
+ (JSC::MarkedBlock::markCount):
+ (JSC::MarkedBlock::Handle::isEmpty):
+ (JSC::MarkedBlock::clearHasAnyMarked):
+ (JSC::MarkedBlock::noteMarkedSlow):
+ (WTF::printInternal):
+ (JSC::MarkedBlock::create): Deleted.
+ (JSC::MarkedBlock::destroy): Deleted.
+ (JSC::MarkedBlock::callDestructor): Deleted.
+ (JSC::MarkedBlock::specializedSweep): Deleted.
+ (JSC::MarkedBlock::sweep): Deleted.
+ (JSC::MarkedBlock::sweepHelper): Deleted.
+ (JSC::MarkedBlock::stopAllocating): Deleted.
+ (JSC::MarkedBlock::clearMarksWithCollectionType): Deleted.
+ (JSC::MarkedBlock::lastChanceToFinalize): Deleted.
+ (JSC::MarkedBlock::resumeAllocating): Deleted.
+ (JSC::MarkedBlock::didRetireBlock): Deleted.
+ * heap/MarkedBlock.h:
+ (JSC::MarkedBlock::VoidFunctor::returnValue):
+ (JSC::MarkedBlock::CountFunctor::CountFunctor):
+ (JSC::MarkedBlock::CountFunctor::count):
+ (JSC::MarkedBlock::CountFunctor::returnValue):
+ (JSC::MarkedBlock::Handle::hasAnyNewlyAllocated):
+ (JSC::MarkedBlock::Handle::isOnBlocksToSweep):
+ (JSC::MarkedBlock::Handle::setIsOnBlocksToSweep):
+ (JSC::MarkedBlock::Handle::state):
+ (JSC::MarkedBlock::needsDestruction):
+ (JSC::MarkedBlock::handle):
+ (JSC::MarkedBlock::Handle::block):
+ (JSC::MarkedBlock::firstAtom):
+ (JSC::MarkedBlock::atoms):
+ (JSC::MarkedBlock::isAtomAligned):
+ (JSC::MarkedBlock::Handle::cellAlign):
+ (JSC::MarkedBlock::blockFor):
+ (JSC::MarkedBlock::Handle::allocator):
+ (JSC::MarkedBlock::Handle::heap):
+ (JSC::MarkedBlock::Handle::vm):
+ (JSC::MarkedBlock::vm):
+ (JSC::MarkedBlock::Handle::weakSet):
+ (JSC::MarkedBlock::weakSet):
+ (JSC::MarkedBlock::Handle::shrink):
+ (JSC::MarkedBlock::Handle::visitWeakSet):
+ (JSC::MarkedBlock::Handle::reapWeakSet):
+ (JSC::MarkedBlock::Handle::cellSize):
+ (JSC::MarkedBlock::cellSize):
+ (JSC::MarkedBlock::Handle::attributes):
+ (JSC::MarkedBlock::attributes):
+ (JSC::MarkedBlock::Handle::needsDestruction):
+ (JSC::MarkedBlock::Handle::destruction):
+ (JSC::MarkedBlock::Handle::cellKind):
+ (JSC::MarkedBlock::Handle::markCount):
+ (JSC::MarkedBlock::Handle::size):
+ (JSC::MarkedBlock::atomNumber):
+ (JSC::MarkedBlock::flipIfNecessary):
+ (JSC::MarkedBlock::flipIfNecessaryConcurrently):
+ (JSC::MarkedBlock::Handle::flipIfNecessary):
+ (JSC::MarkedBlock::Handle::flipIfNecessaryConcurrently):
+ (JSC::MarkedBlock::Handle::flipForEdenCollection):
+ (JSC::MarkedBlock::assertFlipped):
+ (JSC::MarkedBlock::Handle::assertFlipped):
+ (JSC::MarkedBlock::isMarked):
+ (JSC::MarkedBlock::testAndSetMarked):
+ (JSC::MarkedBlock::Handle::isNewlyAllocated):
+ (JSC::MarkedBlock::Handle::setNewlyAllocated):
+ (JSC::MarkedBlock::Handle::clearNewlyAllocated):
+ (JSC::MarkedBlock::Handle::isMarkedOrNewlyAllocated):
+ (JSC::MarkedBlock::isMarkedOrNewlyAllocated):
+ (JSC::MarkedBlock::Handle::isLive):
+ (JSC::MarkedBlock::isAtom):
+ (JSC::MarkedBlock::Handle::isLiveCell):
+ (JSC::MarkedBlock::Handle::forEachCell):
+ (JSC::MarkedBlock::Handle::forEachLiveCell):
+ (JSC::MarkedBlock::Handle::forEachDeadCell):
+ (JSC::MarkedBlock::Handle::needsSweeping):
+ (JSC::MarkedBlock::Handle::isAllocated):
+ (JSC::MarkedBlock::Handle::isMarked):
+ (JSC::MarkedBlock::Handle::isFreeListed):
+ (JSC::MarkedBlock::hasAnyMarked):
+ (JSC::MarkedBlock::noteMarked):
+ (WTF::MarkedBlockHash::hash):
+ (JSC::MarkedBlock::FreeList::FreeList): Deleted.
+ (JSC::MarkedBlock::allocator): Deleted.
+ (JSC::MarkedBlock::heap): Deleted.
+ (JSC::MarkedBlock::shrink): Deleted.
+ (JSC::MarkedBlock::visitWeakSet): Deleted.
+ (JSC::MarkedBlock::reapWeakSet): Deleted.
+ (JSC::MarkedBlock::willRemoveBlock): Deleted.
+ (JSC::MarkedBlock::didConsumeFreeList): Deleted.
+ (JSC::MarkedBlock::markCount): Deleted.
+ (JSC::MarkedBlock::isEmpty): Deleted.
+ (JSC::MarkedBlock::destruction): Deleted.
+ (JSC::MarkedBlock::cellKind): Deleted.
+ (JSC::MarkedBlock::size): Deleted.
+ (JSC::MarkedBlock::capacity): Deleted.
+ (JSC::MarkedBlock::setMarked): Deleted.
+ (JSC::MarkedBlock::clearMarked): Deleted.
+ (JSC::MarkedBlock::isNewlyAllocated): Deleted.
+ (JSC::MarkedBlock::setNewlyAllocated): Deleted.
+ (JSC::MarkedBlock::clearNewlyAllocated): Deleted.
+ (JSC::MarkedBlock::isLive): Deleted.
+ (JSC::MarkedBlock::isLiveCell): Deleted.
+ (JSC::MarkedBlock::forEachCell): Deleted.
+ (JSC::MarkedBlock::forEachLiveCell): Deleted.
+ (JSC::MarkedBlock::forEachDeadCell): Deleted.
+ (JSC::MarkedBlock::needsSweeping): Deleted.
+ (JSC::MarkedBlock::isAllocated): Deleted.
+ (JSC::MarkedBlock::isMarkedOrRetired): Deleted.
+ * heap/MarkedSpace.cpp:
+ (JSC::MarkedSpace::initializeSizeClassForStepSize):
+ (JSC::MarkedSpace::MarkedSpace):
+ (JSC::MarkedSpace::~MarkedSpace):
+ (JSC::MarkedSpace::lastChanceToFinalize):
+ (JSC::MarkedSpace::allocate):
+ (JSC::MarkedSpace::tryAllocate):
+ (JSC::MarkedSpace::allocateLarge):
+ (JSC::MarkedSpace::tryAllocateLarge):
+ (JSC::MarkedSpace::sweep):
+ (JSC::MarkedSpace::sweepLargeAllocations):
+ (JSC::MarkedSpace::zombifySweep):
+ (JSC::MarkedSpace::resetAllocators):
+ (JSC::MarkedSpace::visitWeakSets):
+ (JSC::MarkedSpace::reapWeakSets):
+ (JSC::MarkedSpace::stopAllocating):
+ (JSC::MarkedSpace::prepareForMarking):
+ (JSC::MarkedSpace::resumeAllocating):
+ (JSC::MarkedSpace::isPagedOut):
+ (JSC::MarkedSpace::freeBlock):
+ (JSC::MarkedSpace::freeOrShrinkBlock):
+ (JSC::MarkedSpace::shrink):
+ (JSC::MarkedSpace::clearNewlyAllocated):
+ (JSC::VerifyMarked::operator()):
+ (JSC::MarkedSpace::flip):
+ (JSC::MarkedSpace::objectCount):
+ (JSC::MarkedSpace::size):
+ (JSC::MarkedSpace::capacity):
+ (JSC::MarkedSpace::addActiveWeakSet):
+ (JSC::MarkedSpace::didAddBlock):
+ (JSC::MarkedSpace::didAllocateInBlock):
+ (JSC::MarkedSpace::forEachAllocator): Deleted.
+ (JSC::VerifyMarkedOrRetired::operator()): Deleted.
+ (JSC::MarkedSpace::clearMarks): Deleted.
+ * heap/MarkedSpace.h:
+ (JSC::MarkedSpace::sizeClassToIndex):
+ (JSC::MarkedSpace::indexToSizeClass):
+ (JSC::MarkedSpace::version):
+ (JSC::MarkedSpace::blocksWithNewObjects):
+ (JSC::MarkedSpace::largeAllocations):
+ (JSC::MarkedSpace::largeAllocationsNurseryOffset):
+ (JSC::MarkedSpace::largeAllocationsOffsetForThisCollection):
+ (JSC::MarkedSpace::largeAllocationsForThisCollectionBegin):
+ (JSC::MarkedSpace::largeAllocationsForThisCollectionEnd):
+ (JSC::MarkedSpace::largeAllocationsForThisCollectionSize):
+ (JSC::MarkedSpace::forEachLiveCell):
+ (JSC::MarkedSpace::forEachDeadCell):
+ (JSC::MarkedSpace::allocatorFor):
+ (JSC::MarkedSpace::destructorAllocatorFor):
+ (JSC::MarkedSpace::auxiliaryAllocatorFor):
+ (JSC::MarkedSpace::allocateWithoutDestructor):
+ (JSC::MarkedSpace::allocateWithDestructor):
+ (JSC::MarkedSpace::allocateAuxiliary):
+ (JSC::MarkedSpace::tryAllocateAuxiliary):
+ (JSC::MarkedSpace::forEachBlock):
+ (JSC::MarkedSpace::forEachAllocator):
+ (JSC::MarkedSpace::optimalSizeFor):
+ (JSC::MarkedSpace::didAddBlock): Deleted.
+ (JSC::MarkedSpace::didAllocateInBlock): Deleted.
+ (JSC::MarkedSpace::objectCount): Deleted.
+ (JSC::MarkedSpace::size): Deleted.
+ (JSC::MarkedSpace::capacity): Deleted.
+ * heap/SlotVisitor.cpp:
+ (JSC::SlotVisitor::SlotVisitor):
+ (JSC::SlotVisitor::didStartMarking):
+ (JSC::SlotVisitor::reset):
+ (JSC::SlotVisitor::append):
+ (JSC::SlotVisitor::appendJSCellOrAuxiliary):
+ (JSC::SlotVisitor::setMarkedAndAppendToMarkStack):
+ (JSC::SlotVisitor::appendToMarkStack):
+ (JSC::SlotVisitor::markAuxiliary):
+ (JSC::SlotVisitor::noteLiveAuxiliaryCell):
+ (JSC::SlotVisitor::visitChildren):
+ * heap/SlotVisitor.h:
+ * heap/WeakBlock.cpp:
+ (JSC::WeakBlock::create):
+ (JSC::WeakBlock::WeakBlock):
+ (JSC::WeakBlock::visit):
+ (JSC::WeakBlock::reap):
+ * heap/WeakBlock.h:
+ (JSC::WeakBlock::disconnectContainer):
+ (JSC::WeakBlock::disconnectMarkedBlock): Deleted.
+ * heap/WeakSet.cpp:
+ (JSC::WeakSet::~WeakSet):
+ (JSC::WeakSet::sweep):
+ (JSC::WeakSet::shrink):
+ (JSC::WeakSet::addAllocator):
+ * heap/WeakSet.h:
+ (JSC::WeakSet::container):
+ (JSC::WeakSet::setContainer):
+ (JSC::WeakSet::WeakSet):
+ (JSC::WeakSet::visit):
+ (JSC::WeakSet::shrink): Deleted.
+ * heap/WeakSetInlines.h:
+ (JSC::WeakSet::allocate):
+ * inspector/InjectedScriptManager.cpp:
+ * inspector/JSGlobalObjectInspectorController.cpp:
+ * inspector/JSJavaScriptCallFrame.cpp:
+ * inspector/ScriptDebugServer.cpp:
+ * inspector/agents/InspectorDebuggerAgent.cpp:
+ * interpreter/CachedCall.h:
+ (JSC::CachedCall::CachedCall):
+ * interpreter/Interpreter.cpp:
+ (JSC::loadVarargs):
+ (JSC::StackFrame::sourceID): Deleted.
+ (JSC::StackFrame::sourceURL): Deleted.
+ (JSC::StackFrame::functionName): Deleted.
+ (JSC::StackFrame::computeLineAndColumn): Deleted.
+ (JSC::StackFrame::toString): Deleted.
+ * interpreter/Interpreter.h:
+ (JSC::StackFrame::isNative): Deleted.
+ * jit/AssemblyHelpers.h:
+ (JSC::AssemblyHelpers::emitAllocateWithNonNullAllocator):
+ (JSC::AssemblyHelpers::emitAllocate):
+ (JSC::AssemblyHelpers::emitAllocateJSCell):
+ (JSC::AssemblyHelpers::emitAllocateJSObject):
+ (JSC::AssemblyHelpers::emitAllocateJSObjectWithKnownSize):
+ (JSC::AssemblyHelpers::emitAllocateVariableSized):
+ * jit/GCAwareJITStubRoutine.cpp:
+ (JSC::GCAwareJITStubRoutine::GCAwareJITStubRoutine):
+ * jit/JIT.cpp:
+ (JSC::JIT::compileCTINativeCall):
+ (JSC::JIT::link):
+ * jit/JIT.h:
+ (JSC::JIT::compileCTINativeCall): Deleted.
+ * jit/JITExceptions.cpp:
+ (JSC::genericUnwind):
+ * jit/JITExceptions.h:
+ * jit/JITOpcodes.cpp:
+ (JSC::JIT::emit_op_new_object):
+ (JSC::JIT::emitSlow_op_new_object):
+ (JSC::JIT::emit_op_create_this):
+ (JSC::JIT::emitSlow_op_create_this):
+ * jit/JITOpcodes32_64.cpp:
+ (JSC::JIT::emit_op_new_object):
+ (JSC::JIT::emitSlow_op_new_object):
+ (JSC::JIT::emit_op_create_this):
+ (JSC::JIT::emitSlow_op_create_this):
+ * jit/JITOperations.cpp:
+ * jit/JITOperations.h:
+ * jit/JITPropertyAccess.cpp:
+ (JSC::JIT::emitWriteBarrier):
+ * jit/JITThunks.cpp:
+ * jit/JITThunks.h:
+ * jsc.cpp:
+ (functionDescribeArray):
+ (main):
+ * llint/LLIntData.cpp:
+ (JSC::LLInt::Data::performAssertions):
+ * llint/LLIntExceptions.cpp:
+ * llint/LLIntThunks.cpp:
+ * llint/LLIntThunks.h:
+ * llint/LowLevelInterpreter.asm:
+ * llint/LowLevelInterpreter.cpp:
+ * llint/LowLevelInterpreter32_64.asm:
+ * llint/LowLevelInterpreter64.asm:
+ * parser/ModuleAnalyzer.cpp:
+ * parser/NodeConstructors.h:
+ * parser/Nodes.h:
+ * profiler/ProfilerBytecode.cpp:
+ * profiler/ProfilerBytecode.h:
+ * profiler/ProfilerBytecodeSequence.cpp:
+ * runtime/ArrayConventions.h:
+ (JSC::indexingHeaderForArrayStorage):
+ (JSC::baseIndexingHeaderForArrayStorage):
+ (JSC::indexingHeaderForArray): Deleted.
+ (JSC::baseIndexingHeaderForArray): Deleted.
+ * runtime/ArrayPrototype.cpp:
+ (JSC::arrayProtoFuncSplice):
+ (JSC::concatAppendOne):
+ (JSC::arrayProtoPrivateFuncConcatMemcpy):
+ * runtime/ArrayStorage.h:
+ (JSC::ArrayStorage::vectorLength):
+ (JSC::ArrayStorage::totalSizeFor):
+ (JSC::ArrayStorage::totalSize):
+ (JSC::ArrayStorage::availableVectorLength):
+ (JSC::ArrayStorage::optimalVectorLength):
+ (JSC::ArrayStorage::sizeFor): Deleted.
+ * runtime/AuxiliaryBarrier.h: Added.
+ (JSC::AuxiliaryBarrier::AuxiliaryBarrier):
+ (JSC::AuxiliaryBarrier::clear):
+ (JSC::AuxiliaryBarrier::get):
+ (JSC::AuxiliaryBarrier::slot):
+ (JSC::AuxiliaryBarrier::operator bool):
+ (JSC::AuxiliaryBarrier::setWithoutBarrier):
+ * runtime/AuxiliaryBarrierInlines.h: Added.
+ (JSC::AuxiliaryBarrier<T>::AuxiliaryBarrier):
+ (JSC::AuxiliaryBarrier<T>::set):
+ * runtime/Butterfly.h:
+ * runtime/ButterflyInlines.h:
+ (JSC::Butterfly::availableContiguousVectorLength):
+ (JSC::Butterfly::optimalContiguousVectorLength):
+ (JSC::Butterfly::createUninitialized):
+ (JSC::Butterfly::growArrayRight):
+ * runtime/ClonedArguments.cpp:
+ (JSC::ClonedArguments::createEmpty):
+ * runtime/CommonSlowPathsExceptions.cpp:
+ * runtime/CommonSlowPathsExceptions.h:
+ * runtime/DataView.cpp:
+ * runtime/DirectArguments.h:
+ * runtime/ECMAScriptSpecInternalFunctions.cpp:
+ * runtime/Error.cpp:
+ * runtime/Error.h:
+ * runtime/ErrorInstance.cpp:
+ * runtime/ErrorInstance.h:
+ * runtime/Exception.cpp:
+ * runtime/Exception.h:
+ * runtime/GeneratorFrame.cpp:
+ * runtime/GeneratorPrototype.cpp:
+ * runtime/InternalFunction.cpp:
+ (JSC::InternalFunction::InternalFunction):
+ * runtime/IntlCollator.cpp:
+ * runtime/IntlCollatorConstructor.cpp:
+ * runtime/IntlCollatorPrototype.cpp:
+ * runtime/IntlDateTimeFormat.cpp:
+ * runtime/IntlDateTimeFormatConstructor.cpp:
+ * runtime/IntlDateTimeFormatPrototype.cpp:
+ * runtime/IntlNumberFormat.cpp:
+ * runtime/IntlNumberFormatConstructor.cpp:
+ * runtime/IntlNumberFormatPrototype.cpp:
+ * runtime/IntlObject.cpp:
+ * runtime/IteratorPrototype.cpp:
+ * runtime/JSArray.cpp:
+ (JSC::JSArray::tryCreateUninitialized):
+ (JSC::JSArray::setLengthWritable):
+ (JSC::JSArray::unshiftCountSlowCase):
+ (JSC::JSArray::setLengthWithArrayStorage):
+ (JSC::JSArray::appendMemcpy):
+ (JSC::JSArray::setLength):
+ (JSC::JSArray::pop):
+ (JSC::JSArray::push):
+ (JSC::JSArray::fastSlice):
+ (JSC::JSArray::shiftCountWithArrayStorage):
+ (JSC::JSArray::shiftCountWithAnyIndexingType):
+ (JSC::JSArray::unshiftCountWithArrayStorage):
+ (JSC::JSArray::fillArgList):
+ (JSC::JSArray::copyToArguments):
+ * runtime/JSArray.h:
+ (JSC::createContiguousArrayButterfly):
+ (JSC::createArrayButterfly):
+ (JSC::JSArray::create):
+ (JSC::JSArray::tryCreateUninitialized): Deleted.
+ * runtime/JSArrayBufferView.h:
+ * runtime/JSCInlines.h:
+ * runtime/JSCJSValue.cpp:
+ (JSC::JSValue::dumpInContextAssumingStructure):
+ * runtime/JSCallee.cpp:
+ (JSC::JSCallee::JSCallee):
+ * runtime/JSCell.cpp:
+ (JSC::JSCell::estimatedSize):
+ * runtime/JSCell.h:
+ (JSC::JSCell::cellStateOffset): Deleted.
+ * runtime/JSCellInlines.h:
+ (JSC::ExecState::vm):
+ (JSC::JSCell::classInfo):
+ (JSC::JSCell::callDestructor):
+ (JSC::JSCell::vm): Deleted.
+ * runtime/JSFunction.cpp:
+ (JSC::JSFunction::create):
+ (JSC::JSFunction::allocateAndInitializeRareData):
+ (JSC::JSFunction::initializeRareData):
+ (JSC::JSFunction::getOwnPropertySlot):
+ (JSC::JSFunction::put):
+ (JSC::JSFunction::deleteProperty):
+ (JSC::JSFunction::defineOwnProperty):
+ (JSC::JSFunction::setFunctionName):
+ (JSC::JSFunction::reifyLength):
+ (JSC::JSFunction::reifyName):
+ (JSC::JSFunction::reifyLazyPropertyIfNeeded):
+ (JSC::JSFunction::reifyBoundNameIfNeeded):
+ * runtime/JSFunction.h:
+ * runtime/JSFunctionInlines.h:
+ (JSC::JSFunction::createWithInvalidatedReallocationWatchpoint):
+ (JSC::JSFunction::JSFunction):
+ * runtime/JSGenericTypedArrayViewInlines.h:
+ (JSC::JSGenericTypedArrayView<Adaptor>::slowDownAndWasteMemory):
+ * runtime/JSInternalPromise.cpp:
+ * runtime/JSInternalPromiseConstructor.cpp:
+ * runtime/JSInternalPromiseDeferred.cpp:
+ * runtime/JSInternalPromisePrototype.cpp:
+ * runtime/JSJob.cpp:
+ * runtime/JSMapIterator.cpp:
+ * runtime/JSModuleNamespaceObject.cpp:
+ * runtime/JSModuleRecord.cpp:
+ * runtime/JSObject.cpp:
+ (JSC::JSObject::visitButterfly):
+ (JSC::JSObject::notifyPresenceOfIndexedAccessors):
+ (JSC::JSObject::createInitialIndexedStorage):
+ (JSC::JSObject::createInitialUndecided):
+ (JSC::JSObject::createInitialInt32):
+ (JSC::JSObject::createInitialDouble):
+ (JSC::JSObject::createInitialContiguous):
+ (JSC::JSObject::createArrayStorage):
+ (JSC::JSObject::createInitialArrayStorage):
+ (JSC::JSObject::convertUndecidedToInt32):
+ (JSC::JSObject::convertUndecidedToContiguous):
+ (JSC::JSObject::convertUndecidedToArrayStorage):
+ (JSC::JSObject::convertInt32ToDouble):
+ (JSC::JSObject::convertInt32ToArrayStorage):
+ (JSC::JSObject::convertDoubleToArrayStorage):
+ (JSC::JSObject::convertContiguousToArrayStorage):
+ (JSC::JSObject::putByIndexBeyondVectorLength):
+ (JSC::JSObject::putDirectIndexBeyondVectorLength):
+ (JSC::JSObject::getNewVectorLength):
+ (JSC::JSObject::increaseVectorLength):
+ (JSC::JSObject::ensureLengthSlow):
+ (JSC::JSObject::growOutOfLineStorage):
+ (JSC::JSObject::copyButterfly): Deleted.
+ (JSC::JSObject::copyBackingStore): Deleted.
+ * runtime/JSObject.h:
+ (JSC::JSObject::globalObject):
+ (JSC::JSObject::putDirectInternal):
+ (JSC::JSObject::setStructureAndReallocateStorageIfNecessary): Deleted.
+ * runtime/JSObjectInlines.h:
+ * runtime/JSPromise.cpp:
+ * runtime/JSPromiseConstructor.cpp:
+ * runtime/JSPromiseDeferred.cpp:
+ * runtime/JSPromisePrototype.cpp:
+ * runtime/JSPropertyNameIterator.cpp:
+ * runtime/JSScope.cpp:
+ (JSC::JSScope::resolve):
+ * runtime/JSScope.h:
+ (JSC::JSScope::globalObject):
+ (JSC::JSScope::vm): Deleted.
+ * runtime/JSSetIterator.cpp:
+ * runtime/JSStringIterator.cpp:
+ * runtime/JSTemplateRegistryKey.cpp:
+ * runtime/JSTypedArrayViewConstructor.cpp:
+ * runtime/JSTypedArrayViewPrototype.cpp:
+ * runtime/JSWeakMap.cpp:
+ * runtime/JSWeakSet.cpp:
+ * runtime/MapConstructor.cpp:
+ * runtime/MapIteratorPrototype.cpp:
+ * runtime/MapPrototype.cpp:
+ * runtime/NativeErrorConstructor.cpp:
+ * runtime/NativeStdFunctionCell.cpp:
+ * runtime/Operations.h:
+ (JSC::scribbleFreeCells):
+ (JSC::scribble):
+ * runtime/Options.h:
+ * runtime/PropertyTable.cpp:
+ * runtime/ProxyConstructor.cpp:
+ * runtime/ProxyObject.cpp:
+ * runtime/ProxyRevoke.cpp:
+ * runtime/RegExp.cpp:
+ (JSC::RegExp::match):
+ (JSC::RegExp::matchConcurrently):
+ (JSC::RegExp::matchCompareWithInterpreter):
+ * runtime/RegExp.h:
+ * runtime/RegExpConstructor.h:
+ * runtime/RegExpInlines.h:
+ (JSC::RegExp::matchInline):
+ * runtime/RegExpMatchesArray.h:
+ (JSC::tryCreateUninitializedRegExpMatchesArray):
+ (JSC::createRegExpMatchesArray):
+ * runtime/RegExpPrototype.cpp:
+ (JSC::genericSplit):
+ * runtime/RuntimeType.cpp:
+ * runtime/SamplingProfiler.cpp:
+ (JSC::SamplingProfiler::processUnverifiedStackTraces):
+ * runtime/SetConstructor.cpp:
+ * runtime/SetIteratorPrototype.cpp:
+ * runtime/SetPrototype.cpp:
+ * runtime/StackFrame.cpp: Added.
+ (JSC::StackFrame::sourceID):
+ (JSC::StackFrame::sourceURL):
+ (JSC::StackFrame::functionName):
+ (JSC::StackFrame::computeLineAndColumn):
+ (JSC::StackFrame::toString):
+ * runtime/StackFrame.h: Added.
+ (JSC::StackFrame::isNative):
+ * runtime/StringConstructor.cpp:
+ * runtime/StringIteratorPrototype.cpp:
+ * runtime/StructureInlines.h:
+ (JSC::Structure::propertyTable):
+ * runtime/TemplateRegistry.cpp:
+ * runtime/TestRunnerUtils.cpp:
+ (JSC::finalizeStatsAtEndOfTesting):
+ * runtime/TestRunnerUtils.h:
+ * runtime/TypeProfilerLog.cpp:
+ * runtime/TypeSet.cpp:
+ * runtime/VM.cpp:
+ (JSC::VM::VM):
+ (JSC::VM::ensureStackCapacityForCLoop):
+ (JSC::VM::isSafeToRecurseSoftCLoop):
+ * runtime/VM.h:
+ * runtime/VMEntryScope.h:
+ * runtime/VMInlines.h:
+ (JSC::VM::ensureStackCapacityFor):
+ (JSC::VM::isSafeToRecurseSoft):
+ * runtime/WeakMapConstructor.cpp:
+ * runtime/WeakMapData.cpp:
+ * runtime/WeakMapPrototype.cpp:
+ * runtime/WeakSetConstructor.cpp:
+ * runtime/WeakSetPrototype.cpp:
+ * testRegExp.cpp:
+ (testOneRegExp):
+ * tools/JSDollarVM.cpp:
+ * tools/JSDollarVMPrototype.cpp:
+ (JSC::JSDollarVMPrototype::isInObjectSpace):
+
2016-09-04 Commit Queue <commit-queue@webkit.org>
Unreviewed, rolling out r205415.
diff --git a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
index f49ab60..1c84b78 100644
--- a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
+++ b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
@@ -88,6 +88,11 @@
0F04396D1B03DC0B009598B7 /* DFGCombinedLiveness.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F04396B1B03DC0B009598B7 /* DFGCombinedLiveness.cpp */; };
0F04396E1B03DC0B009598B7 /* DFGCombinedLiveness.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F04396C1B03DC0B009598B7 /* DFGCombinedLiveness.h */; };
0F05C3B41683CF9200BAF45B /* DFGArrayifySlowPathGenerator.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F05C3B21683CF8F00BAF45B /* DFGArrayifySlowPathGenerator.h */; };
+ 0F070A471D543A8B006E7232 /* CellContainer.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F070A421D543A89006E7232 /* CellContainer.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0F070A481D543A90006E7232 /* CellContainerInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F070A431D543A89006E7232 /* CellContainerInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0F070A491D543A93006E7232 /* HeapCellInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F070A441D543A89006E7232 /* HeapCellInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0F070A4A1D543A95006E7232 /* LargeAllocation.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F070A451D543A89006E7232 /* LargeAllocation.cpp */; };
+ 0F070A4B1D543A98006E7232 /* LargeAllocation.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F070A461D543A89006E7232 /* LargeAllocation.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F0776BF14FF002B00102332 /* JITCompilationEffort.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0776BD14FF002800102332 /* JITCompilationEffort.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F0A75221B94BFA900110660 /* InferredType.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F0A75201B94BFA900110660 /* InferredType.cpp */; };
0F0A75231B94BFA900110660 /* InferredType.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0A75211B94BFA900110660 /* InferredType.h */; settings = {ATTRIBUTES = (Private, ); }; };
@@ -323,6 +328,8 @@
0F38B01817CFE75500B144D3 /* DFGCompilationKey.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F38B01417CFE75500B144D3 /* DFGCompilationKey.h */; };
0F38B01917CFE75500B144D3 /* DFGCompilationMode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F38B01517CFE75500B144D3 /* DFGCompilationMode.cpp */; };
0F38B01A17CFE75500B144D3 /* DFGCompilationMode.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F38B01617CFE75500B144D3 /* DFGCompilationMode.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0F38D2A21D44196800680499 /* AuxiliaryBarrier.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F38D2A01D44196600680499 /* AuxiliaryBarrier.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0F38D2A31D44196D00680499 /* AuxiliaryBarrierInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F38D2A11D44196600680499 /* AuxiliaryBarrierInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F392C891B46188400844728 /* DFGOSRExitFuzz.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F392C871B46188400844728 /* DFGOSRExitFuzz.cpp */; };
0F392C8A1B46188400844728 /* DFGOSRExitFuzz.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F392C881B46188400844728 /* DFGOSRExitFuzz.h */; };
0F3A1BF91A9ECB7D000DE01A /* DFGPutStackSinkingPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F3A1BF71A9ECB7D000DE01A /* DFGPutStackSinkingPhase.cpp */; };
@@ -364,9 +371,9 @@
0F4680CA14BBB16C00BFE272 /* LLIntCommon.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680C514BBB16900BFE272 /* LLIntCommon.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F4680CB14BBB17200BFE272 /* LLIntOfflineAsmConfig.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680C614BBB16900BFE272 /* LLIntOfflineAsmConfig.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F4680CC14BBB17A00BFE272 /* LowLevelInterpreter.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F4680C714BBB16900BFE272 /* LowLevelInterpreter.cpp */; };
- 0F4680CD14BBB17D00BFE272 /* LowLevelInterpreter.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680C814BBB16900BFE272 /* LowLevelInterpreter.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0F4680CD14BBB17D00BFE272 /* LowLevelInterpreter.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680C814BBB16900BFE272 /* LowLevelInterpreter.h */; };
0F4680D214BBD16500BFE272 /* LLIntData.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F4680CE14BBB3D100BFE272 /* LLIntData.cpp */; };
- 0F4680D314BBD16700BFE272 /* LLIntData.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680CF14BBB3D100BFE272 /* LLIntData.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0F4680D314BBD16700BFE272 /* LLIntData.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680CF14BBB3D100BFE272 /* LLIntData.h */; };
0F4680D414BBD24900BFE272 /* HostCallReturnValue.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F4680D014BBC5F800BFE272 /* HostCallReturnValue.cpp */; };
0F4680D514BBD24B00BFE272 /* HostCallReturnValue.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680D114BBC5F800BFE272 /* HostCallReturnValue.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F485321187750560083B687 /* DFGArithMode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F48531F187750560083B687 /* DFGArithMode.cpp */; };
@@ -386,6 +393,8 @@
0F4F29DF18B6AD1C0057BC15 /* DFGStaticExecutionCountEstimationPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F4F29DD18B6AD1C0057BC15 /* DFGStaticExecutionCountEstimationPhase.cpp */; };
0F4F29E018B6AD1C0057BC15 /* DFGStaticExecutionCountEstimationPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4F29DE18B6AD1C0057BC15 /* DFGStaticExecutionCountEstimationPhase.h */; };
0F50AF3C193E8B3900674EE8 /* DFGStructureClobberState.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F50AF3B193E8B3900674EE8 /* DFGStructureClobberState.h */; };
+ 0F5513A61D5A682C00C32BD8 /* FreeList.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5513A51D5A682A00C32BD8 /* FreeList.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0F5513A81D5A68CD00C32BD8 /* FreeList.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5513A71D5A68CB00C32BD8 /* FreeList.cpp */; };
0F5541B11613C1FB00CE3E25 /* SpecialPointer.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5541AF1613C1FB00CE3E25 /* SpecialPointer.cpp */; };
0F5541B21613C1FB00CE3E25 /* SpecialPointer.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5541B01613C1FB00CE3E25 /* SpecialPointer.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F55989817C86C5800A1E543 /* ToNativeFromValue.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F55989717C86C5600A1E543 /* ToNativeFromValue.h */; settings = {ATTRIBUTES = (Private, ); }; };
@@ -458,6 +467,9 @@
0F6B8AE51C4EFE1700969052 /* B3FixSSA.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6B8AE11C4EFE1700969052 /* B3FixSSA.h */; };
0F6C73501AC9F99F00BE1682 /* VariableWriteFireDetail.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6C734E1AC9F99F00BE1682 /* VariableWriteFireDetail.cpp */; };
0F6C73511AC9F99F00BE1682 /* VariableWriteFireDetail.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6C734F1AC9F99F00BE1682 /* VariableWriteFireDetail.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0F6DB7E91D6124B500CDBF8E /* StackFrame.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6DB7E81D6124B200CDBF8E /* StackFrame.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0F6DB7EA1D6124B800CDBF8E /* StackFrame.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6DB7E71D6124B200CDBF8E /* StackFrame.cpp */; };
+ 0F6DB7EC1D617D1100CDBF8E /* MacroAssemblerCodeRef.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6DB7EB1D617D0F00CDBF8E /* MacroAssemblerCodeRef.cpp */; };
0F6E845A19030BEF00562741 /* DFGVariableAccessData.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6E845919030BEF00562741 /* DFGVariableAccessData.cpp */; };
0F6FC750196110A800E1D02D /* ComplexGetStatus.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6FC74E196110A800E1D02D /* ComplexGetStatus.cpp */; };
0F6FC751196110A800E1D02D /* ComplexGetStatus.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6FC74F196110A800E1D02D /* ComplexGetStatus.h */; settings = {ATTRIBUTES = (Private, ); }; };
@@ -493,6 +505,8 @@
0F8335B71639C1E6001443B5 /* ArrayAllocationProfile.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F8335B41639C1E3001443B5 /* ArrayAllocationProfile.cpp */; };
0F8335B81639C1EA001443B5 /* ArrayAllocationProfile.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F8335B51639C1E3001443B5 /* ArrayAllocationProfile.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F8364B7164B0C110053329A /* DFGBranchDirection.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F8364B5164B0C0E0053329A /* DFGBranchDirection.h */; };
+ 0F86A26D1D6F796500CB0C92 /* HeapOperation.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F86A26C1D6F796200CB0C92 /* HeapOperation.cpp */; };
+ 0F86A26F1D6F7B3300CB0C92 /* GCTypeMap.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F86A26E1D6F7B3100CB0C92 /* GCTypeMap.h */; };
0F86AE201C5311C5006BE8EC /* B3ComputeDivisionMagic.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F86AE1F1C5311C5006BE8EC /* B3ComputeDivisionMagic.h */; };
0F885E111849A3BE00F1E3FA /* BytecodeUseDef.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F885E101849A3BE00F1E3FA /* BytecodeUseDef.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F893BDB1936E23C001211F4 /* DFGStructureAbstractValue.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F893BDA1936E23C001211F4 /* DFGStructureAbstractValue.cpp */; };
@@ -573,6 +587,7 @@
0FA7A8EB18B413C80052371D /* Reg.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FA7A8E918B413C80052371D /* Reg.cpp */; };
0FA7A8EC18B413C80052371D /* Reg.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA7A8EA18B413C80052371D /* Reg.h */; settings = {ATTRIBUTES = (Private, ); }; };
0FA7A8EE18CE4FD80052371D /* ScratchRegisterAllocator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FA7A8ED18CE4FD80052371D /* ScratchRegisterAllocator.cpp */; };
+ 0FADE6731D4D23BE00768457 /* HeapUtil.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FADE6721D4D23BC00768457 /* HeapUtil.h */; };
0FAF7EFD165BA91B000C8455 /* JITDisassembler.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FAF7EFA165BA919000C8455 /* JITDisassembler.cpp */; };
0FAF7EFE165BA91F000C8455 /* JITDisassembler.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FAF7EFB165BA919000C8455 /* JITDisassembler.h */; settings = {ATTRIBUTES = (Private, ); }; };
0FB105851675480F00F8AB6E /* ExitKind.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FB105821675480C00F8AB6E /* ExitKind.cpp */; };
@@ -593,6 +608,7 @@
0FB3878F1BFBC44D00E3AB1E /* AirOptimizeBlockOrder.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FB3878C1BFBC44D00E3AB1E /* AirOptimizeBlockOrder.cpp */; };
0FB387901BFBC44D00E3AB1E /* AirOptimizeBlockOrder.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB3878D1BFBC44D00E3AB1E /* AirOptimizeBlockOrder.h */; };
0FB387921BFD31A100E3AB1E /* FTLCompile.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FB387911BFD31A100E3AB1E /* FTLCompile.cpp */; };
+ 0FB415841D78FB4C00DF8D09 /* ArrayConventions.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FB415831D78F98200DF8D09 /* ArrayConventions.cpp */; };
0FB438A319270B1D00E1FBC9 /* StructureSet.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FB438A219270B1D00E1FBC9 /* StructureSet.cpp */; };
0FB4FB731BC843140025CA5A /* FTLLazySlowPath.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FB4FB701BC843140025CA5A /* FTLLazySlowPath.cpp */; };
0FB4FB741BC843140025CA5A /* FTLLazySlowPath.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB4FB711BC843140025CA5A /* FTLLazySlowPath.h */; };
@@ -989,7 +1005,7 @@
14280865107EC11A0013E7B2 /* BooleanPrototype.cpp in Sources */ = {isa = PBXBuildFile; fileRef = BC7952340E15EB5600A898AB /* BooleanPrototype.cpp */; };
14280870107EC1340013E7B2 /* JSWrapperObject.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65C7A1710A8EAACB00FA37EA /* JSWrapperObject.cpp */; };
14280875107EC13E0013E7B2 /* JSLock.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65EA4C99092AF9E20093D800 /* JSLock.cpp */; };
- 1429D77C0ED20D7300B89619 /* Interpreter.h in Headers */ = {isa = PBXBuildFile; fileRef = 1429D77B0ED20D7300B89619 /* Interpreter.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 1429D77C0ED20D7300B89619 /* Interpreter.h in Headers */ = {isa = PBXBuildFile; fileRef = 1429D77B0ED20D7300B89619 /* Interpreter.h */; };
1429D7D40ED2128200B89619 /* Interpreter.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 1429D7D30ED2128200B89619 /* Interpreter.cpp */; };
1429D8780ED21ACD00B89619 /* ExceptionHelpers.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 1429D8770ED21ACD00B89619 /* ExceptionHelpers.cpp */; };
1429D8DD0ED2205B00B89619 /* CallFrame.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 1429D8DB0ED2205B00B89619 /* CallFrame.cpp */; };
@@ -1449,7 +1465,7 @@
969A07980ED1D3AE00F1F681 /* EvalCodeCache.h in Headers */ = {isa = PBXBuildFile; fileRef = 969A07920ED1D3AE00F1F681 /* EvalCodeCache.h */; settings = {ATTRIBUTES = (Private, ); }; };
969A07990ED1D3AE00F1F681 /* Instruction.h in Headers */ = {isa = PBXBuildFile; fileRef = 969A07930ED1D3AE00F1F681 /* Instruction.h */; settings = {ATTRIBUTES = (Private, ); }; };
969A079A0ED1D3AE00F1F681 /* Opcode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 969A07940ED1D3AE00F1F681 /* Opcode.cpp */; };
- 969A079B0ED1D3AE00F1F681 /* Opcode.h in Headers */ = {isa = PBXBuildFile; fileRef = 969A07950ED1D3AE00F1F681 /* Opcode.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 969A079B0ED1D3AE00F1F681 /* Opcode.h in Headers */ = {isa = PBXBuildFile; fileRef = 969A07950ED1D3AE00F1F681 /* Opcode.h */; };
978801401471AD920041B016 /* JSDateMath.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 9788FC221471AD0C0068CE2D /* JSDateMath.cpp */; };
978801411471AD920041B016 /* JSDateMath.h in Headers */ = {isa = PBXBuildFile; fileRef = 9788FC231471AD0C0068CE2D /* JSDateMath.h */; settings = {ATTRIBUTES = (Private, ); }; };
990DA67F1C8E316A00295159 /* generate_objc_protocol_type_conversions_implementation.py in Headers */ = {isa = PBXBuildFile; fileRef = 990DA67E1C8E311D00295159 /* generate_objc_protocol_type_conversions_implementation.py */; settings = {ATTRIBUTES = (Private, ); }; };
@@ -2313,6 +2329,11 @@
0F04396B1B03DC0B009598B7 /* DFGCombinedLiveness.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGCombinedLiveness.cpp; path = dfg/DFGCombinedLiveness.cpp; sourceTree = "<group>"; };
0F04396C1B03DC0B009598B7 /* DFGCombinedLiveness.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGCombinedLiveness.h; path = dfg/DFGCombinedLiveness.h; sourceTree = "<group>"; };
0F05C3B21683CF8F00BAF45B /* DFGArrayifySlowPathGenerator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGArrayifySlowPathGenerator.h; path = dfg/DFGArrayifySlowPathGenerator.h; sourceTree = "<group>"; };
+ 0F070A421D543A89006E7232 /* CellContainer.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CellContainer.h; sourceTree = "<group>"; };
+ 0F070A431D543A89006E7232 /* CellContainerInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CellContainerInlines.h; sourceTree = "<group>"; };
+ 0F070A441D543A89006E7232 /* HeapCellInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HeapCellInlines.h; sourceTree = "<group>"; };
+ 0F070A451D543A89006E7232 /* LargeAllocation.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = LargeAllocation.cpp; sourceTree = "<group>"; };
+ 0F070A461D543A89006E7232 /* LargeAllocation.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LargeAllocation.h; sourceTree = "<group>"; };
0F0776BD14FF002800102332 /* JITCompilationEffort.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITCompilationEffort.h; sourceTree = "<group>"; };
0F0A75201B94BFA900110660 /* InferredType.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = InferredType.cpp; sourceTree = "<group>"; };
0F0A75211B94BFA900110660 /* InferredType.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = InferredType.h; sourceTree = "<group>"; };
@@ -2547,6 +2568,8 @@
0F38B01417CFE75500B144D3 /* DFGCompilationKey.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGCompilationKey.h; path = dfg/DFGCompilationKey.h; sourceTree = "<group>"; };
0F38B01517CFE75500B144D3 /* DFGCompilationMode.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGCompilationMode.cpp; path = dfg/DFGCompilationMode.cpp; sourceTree = "<group>"; };
0F38B01617CFE75500B144D3 /* DFGCompilationMode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGCompilationMode.h; path = dfg/DFGCompilationMode.h; sourceTree = "<group>"; };
+ 0F38D2A01D44196600680499 /* AuxiliaryBarrier.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AuxiliaryBarrier.h; sourceTree = "<group>"; };
+ 0F38D2A11D44196600680499 /* AuxiliaryBarrierInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AuxiliaryBarrierInlines.h; sourceTree = "<group>"; };
0F392C871B46188400844728 /* DFGOSRExitFuzz.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGOSRExitFuzz.cpp; path = dfg/DFGOSRExitFuzz.cpp; sourceTree = "<group>"; };
0F392C881B46188400844728 /* DFGOSRExitFuzz.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGOSRExitFuzz.h; path = dfg/DFGOSRExitFuzz.h; sourceTree = "<group>"; };
0F3A1BF71A9ECB7D000DE01A /* DFGPutStackSinkingPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGPutStackSinkingPhase.cpp; path = dfg/DFGPutStackSinkingPhase.cpp; sourceTree = "<group>"; };
@@ -2608,6 +2631,8 @@
0F4F29DD18B6AD1C0057BC15 /* DFGStaticExecutionCountEstimationPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGStaticExecutionCountEstimationPhase.cpp; path = dfg/DFGStaticExecutionCountEstimationPhase.cpp; sourceTree = "<group>"; };
0F4F29DE18B6AD1C0057BC15 /* DFGStaticExecutionCountEstimationPhase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGStaticExecutionCountEstimationPhase.h; path = dfg/DFGStaticExecutionCountEstimationPhase.h; sourceTree = "<group>"; };
0F50AF3B193E8B3900674EE8 /* DFGStructureClobberState.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGStructureClobberState.h; path = dfg/DFGStructureClobberState.h; sourceTree = "<group>"; };
+ 0F5513A51D5A682A00C32BD8 /* FreeList.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = FreeList.h; sourceTree = "<group>"; };
+ 0F5513A71D5A68CB00C32BD8 /* FreeList.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = FreeList.cpp; sourceTree = "<group>"; };
0F5541AF1613C1FB00CE3E25 /* SpecialPointer.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = SpecialPointer.cpp; sourceTree = "<group>"; };
0F5541B01613C1FB00CE3E25 /* SpecialPointer.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SpecialPointer.h; sourceTree = "<group>"; };
0F55989717C86C5600A1E543 /* ToNativeFromValue.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = ToNativeFromValue.h; sourceTree = "<group>"; };
@@ -2682,6 +2707,9 @@
0F6B8AE11C4EFE1700969052 /* B3FixSSA.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = B3FixSSA.h; path = b3/B3FixSSA.h; sourceTree = "<group>"; };
0F6C734E1AC9F99F00BE1682 /* VariableWriteFireDetail.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = VariableWriteFireDetail.cpp; sourceTree = "<group>"; };
0F6C734F1AC9F99F00BE1682 /* VariableWriteFireDetail.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = VariableWriteFireDetail.h; sourceTree = "<group>"; };
+ 0F6DB7E71D6124B200CDBF8E /* StackFrame.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = StackFrame.cpp; sourceTree = "<group>"; };
+ 0F6DB7E81D6124B200CDBF8E /* StackFrame.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StackFrame.h; sourceTree = "<group>"; };
+ 0F6DB7EB1D617D0F00CDBF8E /* MacroAssemblerCodeRef.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = MacroAssemblerCodeRef.cpp; sourceTree = "<group>"; };
0F6E845919030BEF00562741 /* DFGVariableAccessData.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGVariableAccessData.cpp; path = dfg/DFGVariableAccessData.cpp; sourceTree = "<group>"; };
0F6FC74E196110A800E1D02D /* ComplexGetStatus.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ComplexGetStatus.cpp; sourceTree = "<group>"; };
0F6FC74F196110A800E1D02D /* ComplexGetStatus.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ComplexGetStatus.h; sourceTree = "<group>"; };
@@ -2715,6 +2743,8 @@
0F8335B41639C1E3001443B5 /* ArrayAllocationProfile.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ArrayAllocationProfile.cpp; sourceTree = "<group>"; };
0F8335B51639C1E3001443B5 /* ArrayAllocationProfile.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ArrayAllocationProfile.h; sourceTree = "<group>"; };
0F8364B5164B0C0E0053329A /* DFGBranchDirection.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGBranchDirection.h; path = dfg/DFGBranchDirection.h; sourceTree = "<group>"; };
+ 0F86A26C1D6F796200CB0C92 /* HeapOperation.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = HeapOperation.cpp; sourceTree = "<group>"; };
+ 0F86A26E1D6F7B3100CB0C92 /* GCTypeMap.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GCTypeMap.h; sourceTree = "<group>"; };
0F86AE1F1C5311C5006BE8EC /* B3ComputeDivisionMagic.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = B3ComputeDivisionMagic.h; path = b3/B3ComputeDivisionMagic.h; sourceTree = "<group>"; };
0F885E101849A3BE00F1E3FA /* BytecodeUseDef.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BytecodeUseDef.h; sourceTree = "<group>"; };
0F893BDA1936E23C001211F4 /* DFGStructureAbstractValue.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGStructureAbstractValue.cpp; path = dfg/DFGStructureAbstractValue.cpp; sourceTree = "<group>"; };
@@ -2794,6 +2824,7 @@
0FA7A8E918B413C80052371D /* Reg.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = Reg.cpp; sourceTree = "<group>"; };
0FA7A8EA18B413C80052371D /* Reg.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Reg.h; sourceTree = "<group>"; };
0FA7A8ED18CE4FD80052371D /* ScratchRegisterAllocator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ScratchRegisterAllocator.cpp; sourceTree = "<group>"; };
+ 0FADE6721D4D23BC00768457 /* HeapUtil.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HeapUtil.h; sourceTree = "<group>"; };
0FAF7EFA165BA919000C8455 /* JITDisassembler.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITDisassembler.cpp; sourceTree = "<group>"; };
0FAF7EFB165BA919000C8455 /* JITDisassembler.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITDisassembler.h; sourceTree = "<group>"; };
0FB105821675480C00F8AB6E /* ExitKind.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ExitKind.cpp; sourceTree = "<group>"; };
@@ -2814,6 +2845,7 @@
0FB3878C1BFBC44D00E3AB1E /* AirOptimizeBlockOrder.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = AirOptimizeBlockOrder.cpp; path = b3/air/AirOptimizeBlockOrder.cpp; sourceTree = "<group>"; };
0FB3878D1BFBC44D00E3AB1E /* AirOptimizeBlockOrder.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = AirOptimizeBlockOrder.h; path = b3/air/AirOptimizeBlockOrder.h; sourceTree = "<group>"; };
0FB387911BFD31A100E3AB1E /* FTLCompile.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLCompile.cpp; path = ftl/FTLCompile.cpp; sourceTree = "<group>"; };
+ 0FB415831D78F98200DF8D09 /* ArrayConventions.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ArrayConventions.cpp; sourceTree = "<group>"; };
0FB438A219270B1D00E1FBC9 /* StructureSet.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = StructureSet.cpp; sourceTree = "<group>"; };
0FB4B51016B3A964003F696B /* DFGMinifiedID.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGMinifiedID.h; path = dfg/DFGMinifiedID.h; sourceTree = "<group>"; };
0FB4B51916B62772003F696B /* DFGAllocator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGAllocator.h; path = dfg/DFGAllocator.h; sourceTree = "<group>"; };
@@ -5239,6 +5271,8 @@
children = (
0F9630351D4192C3005609D9 /* AllocatorAttributes.cpp */,
0F9630361D4192C3005609D9 /* AllocatorAttributes.h */,
+ 0F070A421D543A89006E7232 /* CellContainer.h */,
+ 0F070A431D543A89006E7232 /* CellContainerInlines.h */,
0F1C3DD91BBCE09E00E523E4 /* CellState.h */,
0FD8A31117D4326C00CA2C40 /* CodeBlockSet.cpp */,
0FD8A31217D4326C00CA2C40 /* CodeBlockSet.h */,
@@ -5263,6 +5297,8 @@
0F9630381D4192C3005609D9 /* DestructionMode.h */,
2A83638318D7D0EE0000EBCC /* EdenGCActivityCallback.cpp */,
2A83638418D7D0EE0000EBCC /* EdenGCActivityCallback.h */,
+ 0F5513A71D5A68CB00C32BD8 /* FreeList.cpp */,
+ 0F5513A51D5A682A00C32BD8 /* FreeList.h */,
2A83638718D7D0FE0000EBCC /* FullGCActivityCallback.cpp */,
2A83638818D7D0FE0000EBCC /* FullGCActivityCallback.h */,
2AACE63A18CA5A0300ED0191 /* GCActivityCallback.cpp */,
@@ -5276,6 +5312,7 @@
2AABCDE618EF294200002096 /* GCLogging.h */,
2A343F7418A1748B0039B085 /* GCSegmentedArray.h */,
2A343F7718A1749D0039B085 /* GCSegmentedArrayInlines.h */,
+ 0F86A26E1D6F7B3100CB0C92 /* GCTypeMap.h */,
142E312B134FF0A600AFADB5 /* Handle.h */,
C28318FF16FE4B7D00157BFD /* HandleBlock.h */,
C283190116FE533E00157BFD /* HandleBlockInlines.h */,
@@ -5288,11 +5325,13 @@
14BA7A9613AADFF8005B7C2C /* Heap.h */,
DC3D2B0B1D34376E00BA918C /* HeapCell.cpp */,
DC3D2B091D34316100BA918C /* HeapCell.h */,
+ 0F070A441D543A89006E7232 /* HeapCellInlines.h */,
0F32BD0E1BB34F190093A57F /* HeapHelperPool.cpp */,
0F32BD0F1BB34F190093A57F /* HeapHelperPool.h */,
C2DA778218E259990066FCB6 /* HeapInlines.h */,
2AD8932917E3868F00668276 /* HeapIterationScope.h */,
A5339EC81BB4B4510054F005 /* HeapObserver.h */,
+ 0F86A26C1D6F796200CB0C92 /* HeapOperation.cpp */,
2A6F462517E959CE00C45C98 /* HeapOperation.h */,
A5398FA91C750D950060A963 /* HeapProfiler.cpp */,
A5398FAA1C750D950060A963 /* HeapProfiler.h */,
@@ -5305,12 +5344,15 @@
C24D31E1161CD695002AA4DB /* HeapStatistics.h */,
C2E526BB1590EF000054E48D /* HeapTimer.cpp */,
C2E526BC1590EF000054E48D /* HeapTimer.h */,
+ 0FADE6721D4D23BC00768457 /* HeapUtil.h */,
FE7BA60D1A1A7CEC00F1F7B4 /* HeapVerifier.cpp */,
FE7BA60E1A1A7CEC00F1F7B4 /* HeapVerifier.h */,
C25F8BCB157544A900245B71 /* IncrementalSweeper.cpp */,
C25F8BCC157544A900245B71 /* IncrementalSweeper.h */,
0F766D2915A8CC34008F363E /* JITStubRoutineSet.cpp */,
0F766D2A15A8CC34008F363E /* JITStubRoutineSet.h */,
+ 0F070A451D543A89006E7232 /* LargeAllocation.cpp */,
+ 0F070A461D543A89006E7232 /* LargeAllocation.h */,
0F431736146BAC65007E3890 /* ListableHandler.h */,
FE3913511B794AC900EDAF71 /* LiveObjectData.h */,
FE3913521B794AC900EDAF71 /* LiveObjectList.cpp */,
@@ -5662,12 +5704,15 @@
A7A8AF2817ADB5F3005AB174 /* ArrayBufferView.h */,
BC7952060E15E8A800A898AB /* ArrayConstructor.cpp */,
BC7952070E15E8A800A898AB /* ArrayConstructor.h */,
+ 0FB415831D78F98200DF8D09 /* ArrayConventions.cpp */,
0FB7F38915ED8E3800F167B2 /* ArrayConventions.h */,
A7BDAEC217F4EA1400F6140C /* ArrayIteratorPrototype.cpp */,
A7BDAEC317F4EA1400F6140C /* ArrayIteratorPrototype.h */,
F692A84D0255597D01FF60F7 /* ArrayPrototype.cpp */,
F692A84E0255597D01FF60F7 /* ArrayPrototype.h */,
0FB7F38A15ED8E3800F167B2 /* ArrayStorage.h */,
+ 0F38D2A01D44196600680499 /* AuxiliaryBarrier.h */,
+ 0F38D2A11D44196600680499 /* AuxiliaryBarrierInlines.h */,
52678F8C1A031009006A306D /* BasicBlockLocation.cpp */,
52678F8D1A031009006A306D /* BasicBlockLocation.h */,
147B83AA0E6DB8C9004775A4 /* BatchedTransitionOptimizer.h */,
@@ -5832,7 +5877,6 @@
70DC3E081B2DF2C700054299 /* IteratorPrototype.h */,
93ADFCE60CCBD7AC00D30B08 /* JSArray.cpp */,
938772E5038BFE19008635CE /* JSArray.h */,
- 539FB8B91C99DA7C00940FA1 /* JSArrayInlines.h */,
0F2B66B417B6B5AB00A7AE3F /* JSArrayBuffer.cpp */,
0F2B66B517B6B5AB00A7AE3F /* JSArrayBuffer.h */,
0F2B66B617B6B5AB00A7AE3F /* JSArrayBufferConstructor.cpp */,
@@ -5842,6 +5886,7 @@
0F2B66BA17B6B5AB00A7AE3F /* JSArrayBufferView.cpp */,
0F2B66BB17B6B5AB00A7AE3F /* JSArrayBufferView.h */,
0F2B66BC17B6B5AB00A7AE3F /* JSArrayBufferViewInlines.h */,
+ 539FB8B91C99DA7C00940FA1 /* JSArrayInlines.h */,
86FA9E8F142BBB2D001773B7 /* JSBoundFunction.cpp */,
86FA9E90142BBB2E001773B7 /* JSBoundFunction.h */,
657CF45619BF6662004ACBF2 /* JSCallee.cpp */,
@@ -5910,6 +5955,8 @@
A74DEF90182D991400522C22 /* JSMapIterator.h */,
E3D239C61B829C1C00BBEF67 /* JSModuleEnvironment.cpp */,
E3D239C71B829C1C00BBEF67 /* JSModuleEnvironment.h */,
+ 1879510614C540FFB561C124 /* JSModuleLoader.cpp */,
+ 77B25CB2C3094A92A38E1DB3 /* JSModuleLoader.h */,
E318CBBE1B8AEF5100A2929D /* JSModuleNamespaceObject.cpp */,
E318CBBF1B8AEF5100A2929D /* JSModuleNamespaceObject.h */,
E39DA4A41B7E8B7C0084F33A /* JSModuleRecord.cpp */,
@@ -6102,6 +6149,8 @@
0F0CD4C315F6B6B50032F1C0 /* SparseArrayValueMap.cpp */,
0FB7F39215ED8E3800F167B2 /* SparseArrayValueMap.h */,
0F3AC751183EA1040032029F /* StackAlignment.h */,
+ 0F6DB7E71D6124B200CDBF8E /* StackFrame.cpp */,
+ 0F6DB7E81D6124B200CDBF8E /* StackFrame.h */,
A730B6111250068F009D25B1 /* StrictEvalActivation.cpp */,
A730B6101250068F009D25B1 /* StrictEvalActivation.h */,
BC18C3C00E16EE3300B34460 /* StringConstructor.cpp */,
@@ -6190,8 +6239,6 @@
709FB8661AE335C60039D069 /* WeakSetPrototype.h */,
A7DCB77912E3D90500911940 /* WriteBarrier.h */,
C2B6D75218A33793004A9301 /* WriteBarrierInlines.h */,
- 77B25CB2C3094A92A38E1DB3 /* JSModuleLoader.h */,
- 1879510614C540FFB561C124 /* JSModuleLoader.cpp */,
);
path = runtime;
sourceTree = "<group>";
@@ -6634,6 +6681,7 @@
8640923C156EED3B00566CB2 /* MacroAssemblerARM64.h */,
A729009B17976C6000317298 /* MacroAssemblerARMv7.cpp */,
86ADD1440FDDEA980006EEC2 /* MacroAssemblerARMv7.h */,
+ 0F6DB7EB1D617D0F00CDBF8E /* MacroAssemblerCodeRef.cpp */,
863B23DF0FC60E6200703AA4 /* MacroAssemblerCodeRef.h */,
86C568DE11A213EE0007F7F0 /* MacroAssemblerMIPS.h */,
FE68C6351B90DDD90042BCB3 /* MacroAssemblerPrinter.cpp */,
@@ -7245,6 +7293,7 @@
99DA00A91BD5993100F4575C /* builtins_generate_separate_header.py in Headers */,
0F338E111BF0276C0013C88F /* B3OpaqueByproduct.h in Headers */,
FEA0C4031CDD7D1D00481991 /* FunctionWhitelist.h in Headers */,
+ 0F6DB7E91D6124B500CDBF8E /* StackFrame.h in Headers */,
E3A421431D6F58930007C617 /* PreciseJumpTargetsInlines.h in Headers */,
99DA00AA1BD5993100F4575C /* builtins_generate_separate_implementation.py in Headers */,
99DA00A31BD5993100F4575C /* builtins_generator.py in Headers */,
@@ -7337,6 +7386,7 @@
0F338E1C1BF286EA0013C88F /* B3BlockInsertionSet.h in Headers */,
0F9495881C57F47500413A48 /* B3StackSlot.h in Headers */,
C4F4B6F31A05C944005CAB76 /* cpp_generator_templates.py in Headers */,
+ 0F38D2A21D44196800680499 /* AuxiliaryBarrier.h in Headers */,
5DE6E5B30E1728EC00180407 /* create_hash_table in Headers */,
9959E92B1BD17FA4001AA413 /* cssmin.py in Headers */,
2A111246192FCE79005EE18D /* CustomGetterSetter.h in Headers */,
@@ -7466,6 +7516,7 @@
A7D9A29817A0BC7400EE2618 /* DFGLICMPhase.h in Headers */,
99D6A1161BEAD34D00E25C37 /* RemoteAutomationTarget.h in Headers */,
79C4B15E1BA2158F00FD592E /* DFGLiveCatchVariablePreservationPhase.h in Headers */,
+ 0F86A26F1D6F7B3300CB0C92 /* GCTypeMap.h in Headers */,
A7D89CFC17A0B8CC00773AD8 /* DFGLivenessAnalysisPhase.h in Headers */,
0FF0F19B16B729FA005DF95B /* DFGLongLivedState.h in Headers */,
0F338DF21BE93AD10013C88F /* B3StackmapValue.h in Headers */,
@@ -7562,6 +7613,7 @@
0FFFC96014EF90BD00C72532 /* DFGVirtualRegisterAllocationPhase.h in Headers */,
0FC97F4218202119002C9B26 /* DFGWatchpointCollectionPhase.h in Headers */,
0FDB2CE8174830A2007B3C1B /* DFGWorklist.h in Headers */,
+ 0F070A491D543A93006E7232 /* HeapCellInlines.h in Headers */,
0FE050181AA9091100D33B33 /* DirectArguments.h in Headers */,
0FE050161AA9091100D33B33 /* DirectArgumentsOffset.h in Headers */,
0FF42731158EBD54004CB9FF /* Disassembler.h in Headers */,
@@ -7692,6 +7744,7 @@
0FE0501A1AA9091100D33B33 /* GenericArgumentsInlines.h in Headers */,
FE3A06C01C11041A00390FDD /* JITRightShiftGenerator.h in Headers */,
708EBE241CE8F35800453146 /* IntlObjectInlines.h in Headers */,
+ 0F070A481D543A90006E7232 /* CellContainerInlines.h in Headers */,
FE6029D91D6E1E4F0030204D /* ThrowScopeLocation.h in Headers */,
0FE0501B1AA9091100D33B33 /* GenericOffset.h in Headers */,
0F2B66E017B6B5AB00A7AE3F /* GenericTypedArrayView.h in Headers */,
@@ -7779,6 +7832,7 @@
A1587D6E1B4DC14100D69849 /* IntlDateTimeFormat.h in Headers */,
FE187A0F1C030D6C0038BBCA /* SnippetOperand.h in Headers */,
A1587D701B4DC14100D69849 /* IntlDateTimeFormatConstructor.h in Headers */,
+ 0FADE6731D4D23BE00768457 /* HeapUtil.h in Headers */,
A1587D751B4DC1C600D69849 /* IntlDateTimeFormatConstructor.lut.h in Headers */,
A5398FAB1C750DA40060A963 /* HeapProfiler.h in Headers */,
A1587D721B4DC14100D69849 /* IntlDateTimeFormatPrototype.h in Headers */,
@@ -7903,6 +7957,7 @@
C25D709C16DE99F400FCA6BC /* JSManagedValue.h in Headers */,
2A4BB7F318A41179008A0FCD /* JSManagedValueInternal.h in Headers */,
A700874217CBE8EB00C3E643 /* JSMap.h in Headers */,
+ 0F38D2A31D44196D00680499 /* AuxiliaryBarrierInlines.h in Headers */,
A74DEF96182D991400522C22 /* JSMapIterator.h in Headers */,
9959E92D1BD17FA4001AA413 /* jsmin.py in Headers */,
E3D239C91B829C1C00BBEF67 /* JSModuleEnvironment.h in Headers */,
@@ -7940,6 +7995,7 @@
BC18C4270E16F5CD00B34460 /* JSString.h in Headers */,
86E85539111B9968001AF51E /* JSStringBuilder.h in Headers */,
70EC0EC31AA0D7DA00B6AAFA /* JSStringIterator.h in Headers */,
+ 0F070A471D543A8B006E7232 /* CellContainer.h in Headers */,
2600B5A7152BAAA70091EE5F /* JSStringJoiner.h in Headers */,
BC18C4280E16F5CD00B34460 /* JSStringRef.h in Headers */,
43AB26C61C1A535900D82AE6 /* B3MathExtras.h in Headers */,
@@ -8010,6 +8066,7 @@
14B723B812D7DA6F003BD5ED /* MachineStackMarker.h in Headers */,
86C36EEA0EE1289D00B3DF59 /* MacroAssembler.h in Headers */,
43422A671C16267800E2EB98 /* B3ReduceDoubleToFloat.h in Headers */,
+ 0F070A4B1D543A98006E7232 /* LargeAllocation.h in Headers */,
86D3B2C610156BDE002865E7 /* MacroAssemblerARM.h in Headers */,
A1A009C01831A22D00CF8711 /* MacroAssemblerARM64.h in Headers */,
86ADD1460FDDEA980006EEC2 /* MacroAssemblerARMv7.h in Headers */,
@@ -8057,6 +8114,7 @@
BC18C4440E16F5CD00B34460 /* NumberPrototype.h in Headers */,
996B73211BDA08EF00331B84 /* NumberPrototype.lut.h in Headers */,
142D3939103E4560007DCB52 /* NumericStrings.h in Headers */,
+ 0F5513A61D5A682C00C32BD8 /* FreeList.h in Headers */,
A5EA710C19F6DE820098F5EC /* objc_generator.py in Headers */,
C4F4B6F61A05C984005CAB76 /* objc_generator_templates.py in Headers */,
86F3EEBD168CDE930077B92A /* ObjCCallbackFunction.h in Headers */,
@@ -8839,6 +8897,7 @@
0FEC856F1BDACDC70080FF74 /* AirArg.cpp in Sources */,
0F4DE1CE1C4C1B54004D6C11 /* AirFixObviousSpills.cpp in Sources */,
0FEC85711BDACDC70080FF74 /* AirBasicBlock.cpp in Sources */,
+ 0F070A4A1D543A95006E7232 /* LargeAllocation.cpp in Sources */,
0FEC85731BDACDC70080FF74 /* AirCCallSpecial.cpp in Sources */,
0FEC85751BDACDC70080FF74 /* AirCode.cpp in Sources */,
0F4570381BE44C910062A629 /* AirEliminateDeadCode.cpp in Sources */,
@@ -9100,6 +9159,7 @@
A78A977A179738B8009DF744 /* DFGPlan.cpp in Sources */,
0FBE0F7416C1DB090082C5E8 /* DFGPredictionInjectionPhase.cpp in Sources */,
0FFFC95D14EF90B300C72532 /* DFGPredictionPropagationPhase.cpp in Sources */,
+ 0F86A26D1D6F796500CB0C92 /* HeapOperation.cpp in Sources */,
0F3E01AA19D353A500F61B7F /* DFGPrePostNumbering.cpp in Sources */,
0F2B9CEC19D0BA7D00B1D1B5 /* DFGPromotedHeapLocation.cpp in Sources */,
0FB17662196B8F9E0091052A /* DFGPureValue.cpp in Sources */,
@@ -9261,6 +9321,7 @@
0FE34C191C4B39AE0003A512 /* AirLogRegisterPressure.cpp in Sources */,
A1B9E2391B4E0D6700BC7FED /* IntlCollator.cpp in Sources */,
A1B9E23B1B4E0D6700BC7FED /* IntlCollatorConstructor.cpp in Sources */,
+ 0F6DB7EA1D6124B800CDBF8E /* StackFrame.cpp in Sources */,
A1B9E23D1B4E0D6700BC7FED /* IntlCollatorPrototype.cpp in Sources */,
A1587D6D1B4DC14100D69849 /* IntlDateTimeFormat.cpp in Sources */,
A1587D6F1B4DC14100D69849 /* IntlDateTimeFormatConstructor.cpp in Sources */,
@@ -9284,14 +9345,17 @@
146FE51211A710430087AE66 /* JITCall32_64.cpp in Sources */,
0F8F94441667635400D61971 /* JITCode.cpp in Sources */,
0FAF7EFD165BA91B000C8455 /* JITDisassembler.cpp in Sources */,
+ 0F6DB7EC1D617D1100CDBF8E /* MacroAssemblerCodeRef.cpp in Sources */,
0F46808314BA573100BFE272 /* JITExceptions.cpp in Sources */,
0FB14E1E18124ACE009B6B4D /* JITInlineCacheGenerator.cpp in Sources */,
BCDD51EB0FB8DF74004A8BDC /* JITOpcodes.cpp in Sources */,
A71236E51195F33C00BD2174 /* JITOpcodes32_64.cpp in Sources */,
0F24E54C17EE274900ABB217 /* JITOperations.cpp in Sources */,
+ 0F5513A81D5A68CD00C32BD8 /* FreeList.cpp in Sources */,
FE99B24A1C24C3D700C82159 /* JITNegGenerator.cpp in Sources */,
86CC85C40EE7A89400288682 /* JITPropertyAccess.cpp in Sources */,
A7C1E8E4112E72EF00A37F98 /* JITPropertyAccess32_64.cpp in Sources */,
+ 0FB415841D78FB4C00DF8D09 /* ArrayConventions.cpp in Sources */,
0F766D2815A8CC1E008F363E /* JITStubRoutine.cpp in Sources */,
0F766D2B15A8CC38008F363E /* JITStubRoutineSet.cpp in Sources */,
FE4238901BE18C3C00514737 /* JITSubGenerator.cpp in Sources */,
diff --git a/Source/JavaScriptCore/Scripts/builtins/builtins_generate_combined_implementation.py b/Source/JavaScriptCore/Scripts/builtins/builtins_generate_combined_implementation.py
index 5331031..c60ee4d 100644
--- a/Source/JavaScriptCore/Scripts/builtins/builtins_generate_combined_implementation.py
+++ b/Source/JavaScriptCore/Scripts/builtins/builtins_generate_combined_implementation.py
@@ -72,6 +72,9 @@
("JavaScriptCore", "builtins/BuiltinExecutables.h"),
),
(["JavaScriptCore", "WebCore"],
+ ("JavaScriptCore", "heap/HeapInlines.h"),
+ ),
+ (["JavaScriptCore", "WebCore"],
("JavaScriptCore", "runtime/Executable.h"),
),
(["JavaScriptCore", "WebCore"],
diff --git a/Source/JavaScriptCore/Scripts/builtins/builtins_generate_internals_wrapper_implementation.py b/Source/JavaScriptCore/Scripts/builtins/builtins_generate_internals_wrapper_implementation.py
index 9f23f49..2597f28 100644
--- a/Source/JavaScriptCore/Scripts/builtins/builtins_generate_internals_wrapper_implementation.py
+++ b/Source/JavaScriptCore/Scripts/builtins/builtins_generate_internals_wrapper_implementation.py
@@ -68,6 +68,9 @@
("WebCore", "WebCoreJSClientData.h"),
),
(["WebCore"],
+ ("JavaScriptCore", "heap/HeapInlines.h"),
+ ),
+ (["WebCore"],
("JavaScriptCore", "heap/SlotVisitorInlines.h"),
),
(["WebCore"],
diff --git a/Source/JavaScriptCore/Scripts/builtins/builtins_generate_separate_implementation.py b/Source/JavaScriptCore/Scripts/builtins/builtins_generate_separate_implementation.py
index 7789fe3..5c7b554 100644
--- a/Source/JavaScriptCore/Scripts/builtins/builtins_generate_separate_implementation.py
+++ b/Source/JavaScriptCore/Scripts/builtins/builtins_generate_separate_implementation.py
@@ -84,6 +84,9 @@
("JavaScriptCore", "builtins/BuiltinExecutables.h"),
),
(["JavaScriptCore", "WebCore"],
+ ("JavaScriptCore", "heap/HeapInlines.h"),
+ ),
+ (["JavaScriptCore", "WebCore"],
("JavaScriptCore", "runtime/Executable.h"),
),
(["JavaScriptCore", "WebCore"],
diff --git a/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h b/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h
index 50bc264..d05c913 100644
--- a/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h
+++ b/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h
@@ -725,20 +725,18 @@
append(jump);
}
- void link(AbstractMacroAssemblerType* masm)
+ void link(AbstractMacroAssemblerType* masm) const
{
size_t size = m_jumps.size();
for (size_t i = 0; i < size; ++i)
m_jumps[i].link(masm);
- m_jumps.clear();
}
- void linkTo(Label label, AbstractMacroAssemblerType* masm)
+ void linkTo(Label label, AbstractMacroAssemblerType* masm) const
{
size_t size = m_jumps.size();
for (size_t i = 0; i < size; ++i)
m_jumps[i].linkTo(label, masm);
- m_jumps.clear();
}
void append(Jump jump)
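(link() and linkTo() used to consume the JumpList by clearing m_jumps on every call. Making them const leaves the list intact, presumably so that jumps captured by value in non-mutable generator lambdas can still be linked in late paths; callers that reuse a list now own the reset. A minimal sketch of the new contract, using names from this patch with the surrounding JIT plumbing elided:

    CCallHelpers::JumpList fallThrough;
    for (unsigned i = cases.size(); i--;) {
        fallThrough.link(&jit); // const now: m_jumps survives the call
        fallThrough.clear();    // explicit reset before the next case refills it
        cases[i]->generateWithGuard(state, fallThrough);
    }

The PolymorphicAccess.cpp hunk at the end of this patch adds exactly this explicit clear().)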
diff --git a/Source/JavaScriptCore/assembler/MacroAssembler.h b/Source/JavaScriptCore/assembler/MacroAssembler.h
index 3e2bb18..bfd483e 100644
--- a/Source/JavaScriptCore/assembler/MacroAssembler.h
+++ b/Source/JavaScriptCore/assembler/MacroAssembler.h
@@ -28,6 +28,8 @@
#if ENABLE(ASSEMBLER)
+#include "JSCJSValue.h"
+
#if CPU(ARM_THUMB2)
#include "MacroAssemblerARMv7.h"
namespace JSC { typedef MacroAssemblerARMv7 MacroAssemblerBase; };
diff --git a/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h b/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
index d306f62..92ffdbc 100644
--- a/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
+++ b/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
@@ -166,7 +166,10 @@
m_assembler.add<32>(dest, src, UInt12(imm.m_value));
else if (isUInt12(-imm.m_value))
m_assembler.sub<32>(dest, src, UInt12(-imm.m_value));
- else {
+ else if (src != dest) {
+ move(imm, dest);
+ add32(src, dest);
+ } else {
move(imm, getCachedDataTempRegisterIDAndInvalidate());
m_assembler.add<32>(dest, src, dataTempRegister);
}
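(The new src != dest case avoids materializing the immediate into the data temp register, whose cached value getCachedDataTempRegisterIDAndInvalidate() would otherwise throw away. Roughly, and assuming conventional ARM64 lowering rather than anything this patch pins down:

    // add32(TrustedImm32(bigImm), src, dest) with src != dest now emits:
    //     mov  w_dest, #bigImm
    //     add  w_dest, w_src, w_dest
    // instead of routing #bigImm through (and invalidating) dataTempRegister.
)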
diff --git a/Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.cpp b/Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.cpp
new file mode 100644
index 0000000..168b328
--- /dev/null
+++ b/Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.cpp
@@ -0,0 +1,68 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "MacroAssemblerCodeRef.h"
+
+#include "JSCInlines.h"
+#include "LLIntData.h"
+
+namespace JSC {
+
+MacroAssemblerCodePtr MacroAssemblerCodePtr::createLLIntCodePtr(OpcodeID codeId)
+{
+ return createFromExecutableAddress(LLInt::getCodePtr(codeId));
+}
+
+void MacroAssemblerCodePtr::dumpWithName(const char* name, PrintStream& out) const
+{
+ if (!m_value) {
+ out.print(name, "(null)");
+ return;
+ }
+ if (executableAddress() == dataLocation()) {
+ out.print(name, "(", RawPointer(executableAddress()), ")");
+ return;
+ }
+ out.print(name, "(executable = ", RawPointer(executableAddress()), ", dataLocation = ", RawPointer(dataLocation()), ")");
+}
+
+void MacroAssemblerCodePtr::dump(PrintStream& out) const
+{
+ dumpWithName("CodePtr", out);
+}
+
+MacroAssemblerCodeRef MacroAssemblerCodeRef::createLLIntCodeRef(OpcodeID codeId)
+{
+ return createSelfManagedCodeRef(MacroAssemblerCodePtr::createFromExecutableAddress(LLInt::getCodePtr(codeId)));
+}
+
+void MacroAssemblerCodeRef::dump(PrintStream& out) const
+{
+ m_codePtr.dumpWithName("CodeRef", out);
+}
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.h b/Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.h
index 94cabc7..f155f7d 100644
--- a/Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.h
+++ b/Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.h
@@ -28,7 +28,6 @@
#include "Disassembler.h"
#include "ExecutableAllocator.h"
-#include "LLIntData.h"
#include <wtf/DataLog.h>
#include <wtf/PassRefPtr.h>
#include <wtf/PrintStream.h>
@@ -53,6 +52,8 @@
namespace JSC {
+enum OpcodeID : unsigned;
+
// FunctionPtr:
//
// FunctionPtr should be used to wrap pointers to C/C++ functions in JSC
@@ -273,10 +274,7 @@
return result;
}
- static MacroAssemblerCodePtr createLLIntCodePtr(OpcodeID codeId)
- {
- return createFromExecutableAddress(LLInt::getCodePtr(codeId));
- }
+ static MacroAssemblerCodePtr createLLIntCodePtr(OpcodeID codeId);
explicit MacroAssemblerCodePtr(ReturnAddressPtr ra)
: m_value(ra.value())
@@ -299,23 +297,9 @@
return m_value == other.m_value;
}
- void dumpWithName(const char* name, PrintStream& out) const
- {
- if (!m_value) {
- out.print(name, "(null)");
- return;
- }
- if (executableAddress() == dataLocation()) {
- out.print(name, "(", RawPointer(executableAddress()), ")");
- return;
- }
- out.print(name, "(executable = ", RawPointer(executableAddress()), ", dataLocation = ", RawPointer(dataLocation()), ")");
- }
+ void dumpWithName(const char* name, PrintStream& out) const;
- void dump(PrintStream& out) const
- {
- dumpWithName("CodePtr", out);
- }
+ void dump(PrintStream& out) const;
enum EmptyValueTag { EmptyValue };
enum DeletedValueTag { DeletedValue };
@@ -389,10 +373,7 @@
}
// Helper for creating self-managed code refs from LLInt.
- static MacroAssemblerCodeRef createLLIntCodeRef(OpcodeID codeId)
- {
- return createSelfManagedCodeRef(MacroAssemblerCodePtr::createFromExecutableAddress(LLInt::getCodePtr(codeId)));
- }
+ static MacroAssemblerCodeRef createLLIntCodeRef(OpcodeID codeId);
ExecutableMemoryHandle* executableMemory() const
{
@@ -418,10 +399,7 @@
explicit operator bool() const { return !!m_codePtr; }
- void dump(PrintStream& out) const
- {
- m_codePtr.dumpWithName("CodeRef", out);
- }
+ void dump(PrintStream& out) const;
private:
MacroAssemblerCodePtr m_codePtr;
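(Moving createLLIntCodePtr(), createLLIntCodeRef(), and the dump methods out of line lets this header drop its LLIntData.h include; all it needs now is the opaque declaration "enum OpcodeID : unsigned;". That declaration is only legal because Opcode.h, later in this patch, gives OpcodeID a fixed underlying type, which C++11 requires before an enum can be re-declared without its enumerator list; CallLinkInfo.h below plays the same trick. A standalone sketch of the pattern, not code from this patch:

    // widget.h: no include of the enum's defining header needed.
    enum Color : unsigned;               // opaque declaration; the type is complete
    Color darken(Color);                 // by-value parameter and return are fine

    // widget.cpp:
    enum Color : unsigned { Red, Green, Blue };
    Color darken(Color c) { return c; }  // enumerators only needed here
)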
diff --git a/Source/JavaScriptCore/b3/B3BasicBlock.cpp b/Source/JavaScriptCore/b3/B3BasicBlock.cpp
index 0317c81..63a4e58 100644
--- a/Source/JavaScriptCore/b3/B3BasicBlock.cpp
+++ b/Source/JavaScriptCore/b3/B3BasicBlock.cpp
@@ -85,6 +85,11 @@
return appendIntConstant(proc, likeValue->origin(), likeValue->type(), value);
}
+Value* BasicBlock::appendBoolConstant(Procedure& proc, Origin origin, bool value)
+{
+ return appendIntConstant(proc, origin, Int32, value ? 1 : 0);
+}
+
void BasicBlock::clearSuccessors()
{
m_successors.clear();
diff --git a/Source/JavaScriptCore/b3/B3BasicBlock.h b/Source/JavaScriptCore/b3/B3BasicBlock.h
index 7d46bb9..9fb666a 100644
--- a/Source/JavaScriptCore/b3/B3BasicBlock.h
+++ b/Source/JavaScriptCore/b3/B3BasicBlock.h
@@ -82,6 +82,7 @@
JS_EXPORT_PRIVATE Value* appendIntConstant(Procedure&, Origin, Type, int64_t value);
Value* appendIntConstant(Procedure&, Value* likeValue, int64_t value);
+ Value* appendBoolConstant(Procedure&, Origin, bool);
void removeLast(Procedure&);
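(B3 has no separate boolean type; predicates are Int32 values that are zero or one, so appendBoolConstant() is just sugar over appendIntConstant() with Int32. Hypothetical use, with Procedure and Origin as in the testb3 code later in this patch:

    Value* alwaysTrue = block->appendBoolConstant(proc, Origin(), true); // Int32 constant 1
)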
diff --git a/Source/JavaScriptCore/b3/B3DuplicateTails.cpp b/Source/JavaScriptCore/b3/B3DuplicateTails.cpp
index 95492bc..fe94a60 100644
--- a/Source/JavaScriptCore/b3/B3DuplicateTails.cpp
+++ b/Source/JavaScriptCore/b3/B3DuplicateTails.cpp
@@ -71,7 +71,11 @@
IndexSet<BasicBlock> candidates;
for (BasicBlock* block : m_proc) {
- if (block->size() > m_maxSize || block->numSuccessors() > m_maxSuccessors)
+ if (block->size() > m_maxSize)
+ continue;
+ if (block->numSuccessors() > m_maxSuccessors)
+ continue;
+ if (block->last()->type() != Void) // Demoting doesn't handle terminals with values.
continue;
candidates.add(block);
diff --git a/Source/JavaScriptCore/b3/B3StackmapGenerationParams.h b/Source/JavaScriptCore/b3/B3StackmapGenerationParams.h
index af8c3fb..77e17b8 100644
--- a/Source/JavaScriptCore/b3/B3StackmapGenerationParams.h
+++ b/Source/JavaScriptCore/b3/B3StackmapGenerationParams.h
@@ -92,7 +92,7 @@
// This is computed lazily, so it won't work if you capture StackmapGenerationParams by value.
// Returns true if the successor at the given index is going to be emitted right after the
// patchpoint.
- bool fallsThroughToSuccessor(unsigned successorIndex) const;
+ JS_EXPORT_PRIVATE bool fallsThroughToSuccessor(unsigned successorIndex) const;
// This is provided for convenience; it means that you don't have to capture it if you don't want to.
JS_EXPORT_PRIVATE Procedure& proc() const;
diff --git a/Source/JavaScriptCore/b3/testb3.cpp b/Source/JavaScriptCore/b3/testb3.cpp
index 6f134dc..e7b8ca8 100644
--- a/Source/JavaScriptCore/b3/testb3.cpp
+++ b/Source/JavaScriptCore/b3/testb3.cpp
@@ -12920,6 +12920,80 @@
CHECK(terminal.args[2].kind() == Air::Arg::BitImm || terminal.args[2].kind() == Air::Arg::BitImm64);
}
+void testPatchpointTerminalReturnValue(bool successIsRare)
+{
+ // This is a unit test for how FTL's heap allocation fast paths behave.
+ Procedure proc;
+
+ BasicBlock* root = proc.addBlock();
+ BasicBlock* success = proc.addBlock();
+ BasicBlock* slowPath = proc.addBlock();
+ BasicBlock* continuation = proc.addBlock();
+
+ Value* arg = root->appendNew<Value>(
+ proc, Trunc, Origin(),
+ root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR0));
+
+ PatchpointValue* patchpoint = root->appendNew<PatchpointValue>(proc, Int32, Origin());
+ patchpoint->effects.terminal = true;
+ patchpoint->clobber(RegisterSet::macroScratchRegisters());
+
+ if (successIsRare) {
+ root->appendSuccessor(FrequentedBlock(success, FrequencyClass::Rare));
+ root->appendSuccessor(slowPath);
+ } else {
+ root->appendSuccessor(success);
+ root->appendSuccessor(FrequentedBlock(slowPath, FrequencyClass::Rare));
+ }
+
+ patchpoint->appendSomeRegister(arg);
+
+ patchpoint->setGenerator(
+ [&] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+ AllowMacroScratchRegisterUsage allowScratch(jit);
+
+ CCallHelpers::Jump jumpToSlow =
+ jit.branch32(CCallHelpers::Above, params[1].gpr(), CCallHelpers::TrustedImm32(42));
+
+ jit.add32(CCallHelpers::TrustedImm32(31), params[1].gpr(), params[0].gpr());
+
+ CCallHelpers::Jump jumpToSuccess;
+ if (!params.fallsThroughToSuccessor(0))
+ jumpToSuccess = jit.jump();
+
+ Vector<Box<CCallHelpers::Label>> labels = params.successorLabels();
+
+ params.addLatePath(
+ [=] (CCallHelpers& jit) {
+ jumpToSlow.linkTo(*labels[1], &jit);
+ if (jumpToSuccess.isSet())
+ jumpToSuccess.linkTo(*labels[0], &jit);
+ });
+ });
+
+ UpsilonValue* successUpsilon = success->appendNew<UpsilonValue>(proc, Origin(), patchpoint);
+ success->appendNew<Value>(proc, Jump, Origin());
+ success->setSuccessors(continuation);
+
+ UpsilonValue* slowPathUpsilon = slowPath->appendNew<UpsilonValue>(
+ proc, Origin(), slowPath->appendNew<Const32Value>(proc, Origin(), 666));
+ slowPath->appendNew<Value>(proc, Jump, Origin());
+ slowPath->setSuccessors(continuation);
+
+ Value* phi = continuation->appendNew<Value>(proc, Phi, Int32, Origin());
+ successUpsilon->setPhi(phi);
+ slowPathUpsilon->setPhi(phi);
+ continuation->appendNew<Value>(proc, Return, Origin(), phi);
+
+ auto code = compile(proc);
+ CHECK_EQ(invoke<int>(*code, 0), 31);
+ CHECK_EQ(invoke<int>(*code, 1), 32);
+ CHECK_EQ(invoke<int>(*code, 41), 72);
+ CHECK_EQ(invoke<int>(*code, 42), 73);
+ CHECK_EQ(invoke<int>(*code, 43), 666);
+ CHECK_EQ(invoke<int>(*code, -1), 666);
+}
+
// Make sure the compiler does not try to optimize anything out.
NEVER_INLINE double zero()
{
@@ -14337,6 +14411,8 @@
RUN(testEntrySwitchLoop());
RUN(testSomeEarlyRegister());
+ RUN(testPatchpointTerminalReturnValue(true));
+ RUN(testPatchpointTerminalReturnValue(false));
if (isX86()) {
RUN(testBranchBitAndImmFusion(Identity, Int64, 1, Air::BranchTest32, Air::Arg::Tmp));
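(Worked values for testPatchpointTerminalReturnValue() above, derived from its generator. branch32(Above) is an unsigned compare, so:

    // arg <= 42 (unsigned): fast path, result = arg + 31  -> 0->31, 1->32, 41->72, 42->73
    // arg >  42 (unsigned): slow path, result = 666       -> 43->666, (unsigned)-1->666

which is exactly what the CHECK_EQ lines assert, including -1 landing on the slow path.)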
diff --git a/Source/JavaScriptCore/bindings/ScriptValue.cpp b/Source/JavaScriptCore/bindings/ScriptValue.cpp
index 7da8673..7f6ea16 100644
--- a/Source/JavaScriptCore/bindings/ScriptValue.cpp
+++ b/Source/JavaScriptCore/bindings/ScriptValue.cpp
@@ -32,9 +32,8 @@
#include "APICast.h"
#include "InspectorValues.h"
+#include "JSCInlines.h"
#include "JSLock.h"
-#include "JSObjectInlines.h"
-#include "StructureInlines.h"
using namespace JSC;
using namespace Inspector;
diff --git a/Source/JavaScriptCore/bytecode/AdaptiveInferredPropertyValueWatchpointBase.cpp b/Source/JavaScriptCore/bytecode/AdaptiveInferredPropertyValueWatchpointBase.cpp
index 04d98f6..2fd7031 100644
--- a/Source/JavaScriptCore/bytecode/AdaptiveInferredPropertyValueWatchpointBase.cpp
+++ b/Source/JavaScriptCore/bytecode/AdaptiveInferredPropertyValueWatchpointBase.cpp
@@ -26,8 +26,7 @@
#include "config.h"
#include "AdaptiveInferredPropertyValueWatchpointBase.h"
-#include "JSCellInlines.h"
-#include "StructureInlines.h"
+#include "JSCInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.cpp b/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.cpp
index b13e80b..d7868a8 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.cpp
+++ b/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.cpp
@@ -32,6 +32,7 @@
#include "CodeBlock.h"
#include "FullBytecodeLiveness.h"
#include "InterpreterInlines.h"
+#include "PreciseJumpTargets.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/bytecode/BytecodeRewriter.cpp b/Source/JavaScriptCore/bytecode/BytecodeRewriter.cpp
index 28dc20b3..6dadb6e 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeRewriter.cpp
+++ b/Source/JavaScriptCore/bytecode/BytecodeRewriter.cpp
@@ -27,6 +27,7 @@
#include "config.h"
#include "BytecodeRewriter.h"
+#include "HeapInlines.h"
#include "PreciseJumpTargetsInlines.h"
#include <wtf/BubbleSort.h>
diff --git a/Source/JavaScriptCore/bytecode/BytecodeUseDef.h b/Source/JavaScriptCore/bytecode/BytecodeUseDef.h
index 4f14b23..0eb5ec6 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeUseDef.h
+++ b/Source/JavaScriptCore/bytecode/BytecodeUseDef.h
@@ -27,6 +27,7 @@
#define BytecodeUseDef_h
#include "CodeBlock.h"
+#include "Interpreter.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/bytecode/CallLinkInfo.cpp b/Source/JavaScriptCore/bytecode/CallLinkInfo.cpp
index c004d8a..67248fe 100644
--- a/Source/JavaScriptCore/bytecode/CallLinkInfo.cpp
+++ b/Source/JavaScriptCore/bytecode/CallLinkInfo.cpp
@@ -30,12 +30,29 @@
#include "DFGOperations.h"
#include "DFGThunks.h"
#include "JSCInlines.h"
+#include "Opcode.h"
#include "Repatch.h"
#include <wtf/ListDump.h>
#if ENABLE(JIT)
namespace JSC {
+CallLinkInfo::CallType CallLinkInfo::callTypeFor(OpcodeID opcodeID)
+{
+ if (opcodeID == op_call || opcodeID == op_call_eval)
+ return Call;
+ if (opcodeID == op_call_varargs)
+ return CallVarargs;
+ if (opcodeID == op_construct)
+ return Construct;
+ if (opcodeID == op_construct_varargs)
+ return ConstructVarargs;
+ if (opcodeID == op_tail_call)
+ return TailCall;
+ ASSERT(opcodeID == op_tail_call_varargs || opcodeID == op_tail_call_forward_arguments);
+ return TailCallVarargs;
+}
+
CallLinkInfo::CallLinkInfo()
: m_hasSeenShouldRepatch(false)
, m_hasSeenClosure(false)
diff --git a/Source/JavaScriptCore/bytecode/CallLinkInfo.h b/Source/JavaScriptCore/bytecode/CallLinkInfo.h
index f7e6e73..d5402eb 100644
--- a/Source/JavaScriptCore/bytecode/CallLinkInfo.h
+++ b/Source/JavaScriptCore/bytecode/CallLinkInfo.h
@@ -31,7 +31,6 @@
#include "CodeSpecializationKind.h"
#include "JITWriteBarrier.h"
#include "JSFunction.h"
-#include "Opcode.h"
#include "PolymorphicCallStubRoutine.h"
#include "WriteBarrier.h"
#include <wtf/SentinelLinkedList.h>
@@ -40,26 +39,13 @@
#if ENABLE(JIT)
+enum OpcodeID : unsigned;
struct CallFrameShuffleData;
class CallLinkInfo : public BasicRawSentinelNode<CallLinkInfo> {
public:
enum CallType { None, Call, CallVarargs, Construct, ConstructVarargs, TailCall, TailCallVarargs };
- static CallType callTypeFor(OpcodeID opcodeID)
- {
- if (opcodeID == op_call || opcodeID == op_call_eval)
- return Call;
- if (opcodeID == op_call_varargs)
- return CallVarargs;
- if (opcodeID == op_construct)
- return Construct;
- if (opcodeID == op_construct_varargs)
- return ConstructVarargs;
- if (opcodeID == op_tail_call)
- return TailCall;
- ASSERT(opcodeID == op_tail_call_varargs || op_tail_call_forward_arguments);
- return TailCallVarargs;
- }
+ static CallType callTypeFor(OpcodeID opcodeID);
static bool isVarargsCallType(CallType callType)
{
diff --git a/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp b/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
index 7ce79ab..d909508 100644
--- a/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
+++ b/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
@@ -30,6 +30,7 @@
#include "CodeBlock.h"
#include "DFGJITCode.h"
#include "InlineCallFrame.h"
+#include "Interpreter.h"
#include "LLIntCallLinkInfo.h"
#include "JSCInlines.h"
#include <wtf/CommaPrinter.h>
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.cpp b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
index d23a234..043c946 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
@@ -52,6 +52,7 @@
#include "JSFunction.h"
#include "JSLexicalEnvironment.h"
#include "JSModuleEnvironment.h"
+#include "LLIntData.h"
#include "LLIntEntrypoint.h"
#include "LLIntPrototypeLoadAdaptiveStructureWatchpoint.h"
#include "LowLevelInterpreter.h"
@@ -70,6 +71,7 @@
#include "VMInlines.h"
#include <wtf/BagToHashMap.h>
#include <wtf/CommaPrinter.h>
+#include <wtf/SimpleStats.h>
#include <wtf/StringExtras.h>
#include <wtf/StringPrintStream.h>
#include <wtf/text/UniquedStringImpl.h>
@@ -1876,7 +1878,7 @@
m_rareData->m_stringSwitchJumpTables = other.m_rareData->m_stringSwitchJumpTables;
}
- heap()->m_codeBlocks.add(this);
+ heap()->m_codeBlocks->add(this);
}
CodeBlock::CodeBlock(VM* vm, Structure* structure, ScriptExecutable* ownerExecutable, UnlinkedCodeBlock* unlinkedCodeBlock,
@@ -2342,7 +2344,7 @@
if (Options::dumpGeneratedBytecodes())
dumpBytecode();
- heap()->m_codeBlocks.add(this);
+ heap()->m_codeBlocks->add(this);
heap()->reportExtraMemoryAllocated(m_instructions.size() * sizeof(Instruction));
}
@@ -2379,7 +2381,7 @@
{
Base::finishCreation(vm);
- heap()->m_codeBlocks.add(this);
+ heap()->m_codeBlocks->add(this);
}
#endif
@@ -2783,6 +2785,14 @@
codeBlock->determineLiveness(visitor);
}
+void CodeBlock::clearLLIntGetByIdCache(Instruction* instruction)
+{
+ instruction[0].u.opcode = LLInt::getOpcode(op_get_by_id);
+ instruction[4].u.pointer = nullptr;
+ instruction[5].u.pointer = nullptr;
+ instruction[6].u.pointer = nullptr;
+}
+
void CodeBlock::finalizeLLIntInlineCaches()
{
#if ENABLE(WEBASSEMBLY)
@@ -4184,12 +4194,12 @@
if (!m_vm)
return 0;
- if (!m_vm->machineCodeBytesPerBytecodeWordForBaselineJIT)
+ if (!*m_vm->machineCodeBytesPerBytecodeWordForBaselineJIT)
return 0; // It's as good of a prediction as we'll get.
// Be conservative: return a size that will be an overestimation 84% of the time.
- double multiplier = m_vm->machineCodeBytesPerBytecodeWordForBaselineJIT.mean() +
- m_vm->machineCodeBytesPerBytecodeWordForBaselineJIT.standardDeviation();
+ double multiplier = m_vm->machineCodeBytesPerBytecodeWordForBaselineJIT->mean() +
+ m_vm->machineCodeBytesPerBytecodeWordForBaselineJIT->standardDeviation();
// Be paranoid: silently reject bogus multipliers. Silently doing the "wrong" thing
// here is OK, since this whole method is just a heuristic.
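(The 84% in the comment above assumes the per-block ratios are roughly normal: for a Gaussian, P(X <= mu + sigma) = Phi(1) ~= 0.8413, so mean() + standardDeviation() overestimates about 84% of the time. The only behavioral change in this hunk is that the SimpleStats object now sits behind an indirection, hence the -> calls and the explicit dereference in the validity check; this is presumably what lets ExecutionCounter.h drop wtf/SimpleStats.h below while CodeBlock.cpp includes it directly above.)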
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.h b/Source/JavaScriptCore/bytecode/CodeBlock.h
index 9e5d607..e3431a3 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.h
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.h
@@ -293,6 +293,8 @@
{
return m_jitCodeMap.get();
}
+
+ static void clearLLIntGetByIdCache(Instruction*);
unsigned bytecodeOffset(Instruction* returnAddress)
{
@@ -1283,14 +1285,6 @@
};
#endif
-inline void clearLLIntGetByIdCache(Instruction* instruction)
-{
- instruction[0].u.opcode = LLInt::getOpcode(op_get_by_id);
- instruction[4].u.pointer = nullptr;
- instruction[5].u.pointer = nullptr;
- instruction[6].u.pointer = nullptr;
-}
-
inline Register& ExecState::r(int index)
{
CodeBlock* codeBlock = this->codeBlock();
diff --git a/Source/JavaScriptCore/bytecode/ExecutionCounter.h b/Source/JavaScriptCore/bytecode/ExecutionCounter.h
index 5002c6c..2ab450a 100644
--- a/Source/JavaScriptCore/bytecode/ExecutionCounter.h
+++ b/Source/JavaScriptCore/bytecode/ExecutionCounter.h
@@ -29,7 +29,6 @@
#include "JSGlobalObject.h"
#include "Options.h"
#include <wtf/PrintStream.h>
-#include <wtf/SimpleStats.h>
namespace JSC {
diff --git a/Source/JavaScriptCore/bytecode/Instruction.h b/Source/JavaScriptCore/bytecode/Instruction.h
index 494b000..4f74320 100644
--- a/Source/JavaScriptCore/bytecode/Instruction.h
+++ b/Source/JavaScriptCore/bytecode/Instruction.h
@@ -31,7 +31,6 @@
#include "BasicBlockLocation.h"
#include "MacroAssembler.h"
-#include "Opcode.h"
#include "PutByIdFlags.h"
#include "SymbolTable.h"
#include "TypeLocation.h"
@@ -52,6 +51,12 @@
struct LLIntCallLinkInfo;
struct ValueProfile;
+#if ENABLE(COMPUTED_GOTO_OPCODES)
+typedef void* Opcode;
+#else
+typedef OpcodeID Opcode;
+#endif
+
struct Instruction {
Instruction()
{
diff --git a/Source/JavaScriptCore/bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp b/Source/JavaScriptCore/bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp
index 7ae2c0d..9a5ac01 100644
--- a/Source/JavaScriptCore/bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp
+++ b/Source/JavaScriptCore/bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp
@@ -28,7 +28,7 @@
#include "CodeBlock.h"
#include "Instruction.h"
-#include "StructureInlines.h"
+#include "JSCInlines.h"
namespace JSC {
@@ -59,7 +59,7 @@
StringFireDetail stringDetail(out.toCString().data());
- clearLLIntGetByIdCache(m_getByIdInstruction);
+ CodeBlock::clearLLIntGetByIdCache(m_getByIdInstruction);
}
} // namespace JSC
diff --git a/Source/JavaScriptCore/bytecode/ObjectAllocationProfile.h b/Source/JavaScriptCore/bytecode/ObjectAllocationProfile.h
index 5fa706d..2beeaa7 100644
--- a/Source/JavaScriptCore/bytecode/ObjectAllocationProfile.h
+++ b/Source/JavaScriptCore/bytecode/ObjectAllocationProfile.h
@@ -45,7 +45,7 @@
{
}
- bool isNull() { return !m_allocator; }
+ bool isNull() { return !m_structure; }
void initialize(VM& vm, JSCell* owner, JSObject* prototype, unsigned inferredInlineCapacity)
{
@@ -80,14 +80,15 @@
ASSERT(inlineCapacity <= JSFinalObject::maxInlineCapacity());
size_t allocationSize = JSFinalObject::allocationSize(inlineCapacity);
- MarkedAllocator* allocator = &vm.heap.allocatorForObjectWithoutDestructor(allocationSize);
- ASSERT(allocator->cellSize());
-
+ MarkedAllocator* allocator = vm.heap.allocatorForObjectWithoutDestructor(allocationSize);
+
// Take advantage of extra inline capacity available in the size class.
- size_t slop = (allocator->cellSize() - allocationSize) / sizeof(WriteBarrier<Unknown>);
- inlineCapacity += slop;
- if (inlineCapacity > JSFinalObject::maxInlineCapacity())
- inlineCapacity = JSFinalObject::maxInlineCapacity();
+ if (allocator) {
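+        // A null allocator means this allocation will always take the slow path, so there
+        // is no size-class slop to exploit.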
+ size_t slop = (allocator->cellSize() - allocationSize) / sizeof(WriteBarrier<Unknown>);
+ inlineCapacity += slop;
+ if (inlineCapacity > JSFinalObject::maxInlineCapacity())
+ inlineCapacity = JSFinalObject::maxInlineCapacity();
+ }
Structure* structure = vm.prototypeMap.emptyObjectStructureForPrototype(prototype, inlineCapacity);
diff --git a/Source/JavaScriptCore/bytecode/Opcode.h b/Source/JavaScriptCore/bytecode/Opcode.h
index 4db610e..545d8e9 100644
--- a/Source/JavaScriptCore/bytecode/Opcode.h
+++ b/Source/JavaScriptCore/bytecode/Opcode.h
@@ -55,7 +55,7 @@
#define OPCODE_ID_ENUM(opcode, length) opcode,
- typedef enum { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) } OpcodeID;
+ enum OpcodeID : unsigned { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) };
#undef OPCODE_ID_ENUM
const int maxOpcodeLength = 9;
diff --git a/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp b/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp
index 5f057ee..5caca06 100644
--- a/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp
+++ b/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp
@@ -1206,34 +1206,25 @@
size_t newSize = newStructure()->outOfLineCapacity() * sizeof(JSValue);
if (allocatingInline) {
- CopiedAllocator* copiedAllocator = &vm.heap.storageAllocator();
-
- if (!reallocating) {
- jit.loadPtr(&copiedAllocator->m_currentRemaining, scratchGPR);
- slowPath.append(
- jit.branchSubPtr(
- CCallHelpers::Signed, CCallHelpers::TrustedImm32(newSize), scratchGPR));
- jit.storePtr(scratchGPR, &copiedAllocator->m_currentRemaining);
- jit.negPtr(scratchGPR);
- jit.addPtr(
- CCallHelpers::AbsoluteAddress(&copiedAllocator->m_currentPayloadEnd), scratchGPR);
- jit.addPtr(CCallHelpers::TrustedImm32(sizeof(JSValue)), scratchGPR);
- } else {
+ MarkedAllocator* allocator = vm.heap.allocatorForAuxiliaryData(newSize);
+
+ if (!allocator) {
+ // Yuck, this case would suck!
+ slowPath.append(jit.jump());
+ }
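+            // With a null allocator the unconditional jump above makes the code below
+            // unreachable, but emitting it anyway keeps this path simple.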
+
+ jit.move(CCallHelpers::TrustedImmPtr(allocator), scratchGPR2);
+ jit.emitAllocate(scratchGPR, allocator, scratchGPR2, scratchGPR3, slowPath);
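+            // The butterfly pointer points past the out-of-line slots and the IndexingHeader.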
+ jit.addPtr(CCallHelpers::TrustedImm32(newSize + sizeof(IndexingHeader)), scratchGPR);
+
+ if (reallocating) {
// Handle the case where we are reallocating (i.e. the old structure/butterfly
// already had out-of-line property storage).
size_t oldSize = structure()->outOfLineCapacity() * sizeof(JSValue);
ASSERT(newSize > oldSize);
jit.loadPtr(CCallHelpers::Address(baseGPR, JSObject::butterflyOffset()), scratchGPR3);
- jit.loadPtr(&copiedAllocator->m_currentRemaining, scratchGPR);
- slowPath.append(
- jit.branchSubPtr(
- CCallHelpers::Signed, CCallHelpers::TrustedImm32(newSize), scratchGPR));
- jit.storePtr(scratchGPR, &copiedAllocator->m_currentRemaining);
- jit.negPtr(scratchGPR);
- jit.addPtr(
- CCallHelpers::AbsoluteAddress(&copiedAllocator->m_currentPayloadEnd), scratchGPR);
- jit.addPtr(CCallHelpers::TrustedImm32(sizeof(JSValue)), scratchGPR);
+
// We have scratchGPR = new storage, scratchGPR3 = old storage,
// scratchGPR2 = available
for (size_t offset = 0; offset < oldSize; offset += sizeof(void*)) {
@@ -1659,6 +1650,7 @@
// Cascade through the list, preferring newer entries.
for (unsigned i = cases.size(); i--;) {
fallThrough.link(&jit);
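+        // link() leaves the jumps in the list, so empty it before the next case reuses it.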
+ fallThrough.clear();
cases[i]->generateWithGuard(state, fallThrough);
}
state.failAndRepatch.append(fallThrough);
diff --git a/Source/JavaScriptCore/bytecode/PolymorphicAccess.h b/Source/JavaScriptCore/bytecode/PolymorphicAccess.h
index b602c00..2ca4adc 100644
--- a/Source/JavaScriptCore/bytecode/PolymorphicAccess.h
+++ b/Source/JavaScriptCore/bytecode/PolymorphicAccess.h
@@ -29,10 +29,10 @@
#if ENABLE(JIT)
#include "CodeOrigin.h"
+#include "JITStubRoutine.h"
#include "JSFunctionInlines.h"
#include "MacroAssembler.h"
#include "ObjectPropertyConditionSet.h"
-#include "Opcode.h"
#include "ScratchRegisterAllocator.h"
#include "Structure.h"
#include <wtf/Vector.h>
diff --git a/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp b/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp
index 30dff72..38e5cef 100644
--- a/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp
+++ b/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp
@@ -27,6 +27,7 @@
#include "StructureStubInfo.h"
#include "JSObject.h"
+#include "JSCInlines.h"
#include "PolymorphicAccess.h"
#include "Repatch.h"
diff --git a/Source/JavaScriptCore/bytecode/StructureStubInfo.h b/Source/JavaScriptCore/bytecode/StructureStubInfo.h
index 0138466..3dcc51a 100644
--- a/Source/JavaScriptCore/bytecode/StructureStubInfo.h
+++ b/Source/JavaScriptCore/bytecode/StructureStubInfo.h
@@ -31,7 +31,6 @@
#include "JITStubRoutine.h"
#include "MacroAssembler.h"
#include "ObjectPropertyConditionSet.h"
-#include "Opcode.h"
#include "Options.h"
#include "RegisterSet.h"
#include "Structure.h"
diff --git a/Source/JavaScriptCore/bytecode/SuperSampler.cpp b/Source/JavaScriptCore/bytecode/SuperSampler.cpp
index 919259a..81e51d4 100644
--- a/Source/JavaScriptCore/bytecode/SuperSampler.cpp
+++ b/Source/JavaScriptCore/bytecode/SuperSampler.cpp
@@ -36,6 +36,7 @@
volatile uint32_t g_superSamplerCount;
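+// Guards in and out, which are now written by both the sampler thread and
+// resetSuperSamplerState().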
+static StaticLock lock;
static double in;
static double out;
@@ -51,10 +52,13 @@
const int printingPeriod = 1000;
for (;;) {
for (int ms = 0; ms < printingPeriod; ms += sleepQuantum) {
- if (g_superSamplerCount)
- in++;
- else
- out++;
+ {
+ LockHolder locker(lock);
+ if (g_superSamplerCount)
+ in++;
+ else
+ out++;
+ }
sleepMS(sleepQuantum);
}
printSuperSamplerState();
@@ -64,11 +68,19 @@
});
}
+void resetSuperSamplerState()
+{
+ LockHolder locker(lock);
+ in = 0;
+ out = 0;
+}
+
void printSuperSamplerState()
{
if (!Options::useSuperSampler())
return;
+ LockHolder locker(lock);
double percentage = 100.0 * in / (in + out);
if (percentage != percentage)
percentage = 0.0;
diff --git a/Source/JavaScriptCore/bytecode/SuperSampler.h b/Source/JavaScriptCore/bytecode/SuperSampler.h
index 85e6ce7..22780ea 100644
--- a/Source/JavaScriptCore/bytecode/SuperSampler.h
+++ b/Source/JavaScriptCore/bytecode/SuperSampler.h
@@ -53,6 +53,7 @@
bool m_doSample;
};
+JS_EXPORT_PRIVATE void resetSuperSamplerState();
JS_EXPORT_PRIVATE void printSuperSamplerState();
} // namespace JSC
diff --git a/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.cpp b/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.cpp
index a6e0e82..1486d2a 100644
--- a/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.cpp
+++ b/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.cpp
@@ -90,11 +90,6 @@
ASSERT(m_constructorKind == static_cast<unsigned>(info.constructorKind()));
}
-VM* UnlinkedCodeBlock::vm() const
-{
- return MarkedBlock::blockFor(this)->vm();
-}
-
void UnlinkedCodeBlock::visitChildren(JSCell* cell, SlotVisitor& visitor)
{
UnlinkedCodeBlock* thisObject = jsCast<UnlinkedCodeBlock*>(cell);
diff --git a/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.h b/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.h
index 72d1696..9487ba6 100644
--- a/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.h
+++ b/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.h
@@ -284,8 +284,6 @@
void addExceptionHandler(const UnlinkedHandlerInfo& handler) { createRareDataIfNecessary(); return m_rareData->m_exceptionHandlers.append(handler); }
UnlinkedHandlerInfo& exceptionHandler(int index) { ASSERT(m_rareData); return m_rareData->m_exceptionHandlers[index]; }
- VM* vm() const;
-
UnlinkedArrayProfile addArrayProfile() { return m_arrayProfileCount++; }
unsigned numberOfArrayProfiles() { return m_arrayProfileCount; }
UnlinkedArrayAllocationProfile addArrayAllocationProfile() { return m_arrayAllocationProfileCount++; }
diff --git a/Source/JavaScriptCore/bytecode/UnlinkedInstructionStream.cpp b/Source/JavaScriptCore/bytecode/UnlinkedInstructionStream.cpp
index 6a300d3..e8762ff 100644
--- a/Source/JavaScriptCore/bytecode/UnlinkedInstructionStream.cpp
+++ b/Source/JavaScriptCore/bytecode/UnlinkedInstructionStream.cpp
@@ -26,6 +26,8 @@
#include "config.h"
#include "UnlinkedInstructionStream.h"
+#include "Opcode.h"
+
namespace JSC {
static void append8(unsigned char*& ptr, unsigned char value)
diff --git a/Source/JavaScriptCore/bytecode/UnlinkedInstructionStream.h b/Source/JavaScriptCore/bytecode/UnlinkedInstructionStream.h
index a875e49..07dc6783 100644
--- a/Source/JavaScriptCore/bytecode/UnlinkedInstructionStream.h
+++ b/Source/JavaScriptCore/bytecode/UnlinkedInstructionStream.h
@@ -27,6 +27,7 @@
#ifndef UnlinkedInstructionStream_h
#define UnlinkedInstructionStream_h
+#include "Opcode.h"
#include "UnlinkedCodeBlock.h"
#include <wtf/RefCountedArray.h>
diff --git a/Source/JavaScriptCore/dfg/DFGCallArrayAllocatorSlowPathGenerator.h b/Source/JavaScriptCore/dfg/DFGCallArrayAllocatorSlowPathGenerator.h
index 01d834c..b537a83 100644
--- a/Source/JavaScriptCore/dfg/DFGCallArrayAllocatorSlowPathGenerator.h
+++ b/Source/JavaScriptCore/dfg/DFGCallArrayAllocatorSlowPathGenerator.h
@@ -38,7 +38,7 @@
class CallArrayAllocatorSlowPathGenerator : public JumpingSlowPathGenerator<MacroAssembler::JumpList> {
public:
CallArrayAllocatorSlowPathGenerator(
- MacroAssembler::JumpList from, SpeculativeJIT* jit, P_JITOperation_EStZ function,
+ MacroAssembler::JumpList from, SpeculativeJIT* jit, P_JITOperation_EStZB function,
GPRReg resultGPR, GPRReg storageGPR, Structure* structure, size_t size)
: JumpingSlowPathGenerator<MacroAssembler::JumpList>(from, jit)
, m_function(function)
@@ -57,7 +57,7 @@
linkFrom(jit);
for (unsigned i = 0; i < m_plans.size(); ++i)
jit->silentSpill(m_plans[i]);
- jit->callOperation(m_function, m_resultGPR, m_structure, m_size);
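+        // The extra argument passes along the butterfly (or null) from the inline fast path.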
+ jit->callOperation(m_function, m_resultGPR, m_structure, m_size, m_storageGPR);
GPRReg canTrample = SpeculativeJIT::pickCanTrample(m_resultGPR);
for (unsigned i = m_plans.size(); i--;)
jit->silentFill(m_plans[i], canTrample);
@@ -67,7 +67,7 @@
}
private:
- P_JITOperation_EStZ m_function;
+ P_JITOperation_EStZB m_function;
GPRReg m_resultGPR;
GPRReg m_storageGPR;
Structure* m_structure;
@@ -78,14 +78,15 @@
class CallArrayAllocatorWithVariableSizeSlowPathGenerator : public JumpingSlowPathGenerator<MacroAssembler::JumpList> {
public:
CallArrayAllocatorWithVariableSizeSlowPathGenerator(
- MacroAssembler::JumpList from, SpeculativeJIT* jit, P_JITOperation_EStZ function,
- GPRReg resultGPR, Structure* contiguousStructure, Structure* arrayStorageStructure, GPRReg sizeGPR)
+ MacroAssembler::JumpList from, SpeculativeJIT* jit, P_JITOperation_EStZB function,
+ GPRReg resultGPR, Structure* contiguousStructure, Structure* arrayStorageStructure, GPRReg sizeGPR, GPRReg storageGPR)
: JumpingSlowPathGenerator<MacroAssembler::JumpList>(from, jit)
, m_function(function)
, m_resultGPR(resultGPR)
, m_contiguousStructure(contiguousStructure)
, m_arrayStorageOrContiguousStructure(arrayStorageStructure)
, m_sizeGPR(sizeGPR)
+ , m_storageGPR(storageGPR)
{
jit->silentSpillAllRegistersImpl(false, m_plans, resultGPR);
}
@@ -96,7 +97,7 @@
linkFrom(jit);
for (unsigned i = 0; i < m_plans.size(); ++i)
jit->silentSpill(m_plans[i]);
- GPRReg scratchGPR = AssemblyHelpers::selectScratchGPR(m_sizeGPR);
+ GPRReg scratchGPR = AssemblyHelpers::selectScratchGPR(m_sizeGPR, m_storageGPR);
if (m_contiguousStructure != m_arrayStorageOrContiguousStructure) {
MacroAssembler::Jump bigLength = jit->m_jit.branch32(MacroAssembler::AboveOrEqual, m_sizeGPR, MacroAssembler::TrustedImm32(MIN_ARRAY_STORAGE_CONSTRUCTION_LENGTH));
jit->m_jit.move(MacroAssembler::TrustedImmPtr(m_contiguousStructure), scratchGPR);
@@ -106,7 +107,7 @@
done.link(&jit->m_jit);
} else
jit->m_jit.move(MacroAssembler::TrustedImmPtr(m_contiguousStructure), scratchGPR);
- jit->callOperation(m_function, m_resultGPR, scratchGPR, m_sizeGPR);
+ jit->callOperation(m_function, m_resultGPR, scratchGPR, m_sizeGPR, m_storageGPR);
GPRReg canTrample = SpeculativeJIT::pickCanTrample(m_resultGPR);
for (unsigned i = m_plans.size(); i--;)
jit->silentFill(m_plans[i], canTrample);
@@ -115,11 +116,12 @@
}
private:
- P_JITOperation_EStZ m_function;
+ P_JITOperation_EStZB m_function;
GPRReg m_resultGPR;
Structure* m_contiguousStructure;
Structure* m_arrayStorageOrContiguousStructure;
GPRReg m_sizeGPR;
+ GPRReg m_storageGPR;
Vector<SilentRegisterSavePlan, 2> m_plans;
};
diff --git a/Source/JavaScriptCore/dfg/DFGOperations.cpp b/Source/JavaScriptCore/dfg/DFGOperations.cpp
index bfa8353..b387aa6 100644
--- a/Source/JavaScriptCore/dfg/DFGOperations.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOperations.cpp
@@ -933,17 +933,20 @@
return bitwise_cast<char*>(JSArray::create(*vm, arrayStructure));
}
-char* JIT_OPERATION operationNewArrayWithSize(ExecState* exec, Structure* arrayStructure, int32_t size)
+char* JIT_OPERATION operationNewArrayWithSize(ExecState* exec, Structure* arrayStructure, int32_t size, Butterfly* butterfly)
{
- VM* vm = &exec->vm();
- NativeCallFrameTracer tracer(vm, exec);
- auto scope = DECLARE_THROW_SCOPE(*vm);
+ VM& vm = exec->vm();
+ NativeCallFrameTracer tracer(&vm, exec);
+ auto scope = DECLARE_THROW_SCOPE(vm);
if (UNLIKELY(size < 0))
return bitwise_cast<char*>(throwException(exec, scope, createRangeError(exec, ASCIILiteral("Array size is not a small enough positive integer."))));
- JSArray* result = JSArray::create(*vm, arrayStructure, size);
- result->butterfly(); // Ensure that the backing store is in to-space.
+ JSArray* result;
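+    // A non-null butterfly means the JIT fast path already allocated the storage;
+    // adopt it instead of allocating again.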
+ if (butterfly)
+ result = JSArray::createWithButterfly(vm, arrayStructure, butterfly);
+ else
+ result = JSArray::create(vm, arrayStructure, size);
return bitwise_cast<char*>(result);
}
@@ -1629,13 +1632,13 @@
return 0;
}
-char* JIT_OPERATION operationNewRawObject(ExecState* exec, Structure* structure, int32_t length)
+char* JIT_OPERATION operationNewRawObject(ExecState* exec, Structure* structure, int32_t length, Butterfly* butterfly)
{
VM& vm = exec->vm();
NativeCallFrameTracer tracer(&vm, exec);
- Butterfly* butterfly;
- if (structure->outOfLineCapacity() || hasIndexedProperties(structure->indexingType())) {
+ if (!butterfly
+ && (structure->outOfLineCapacity() || hasIndexedProperties(structure->indexingType()))) {
IndexingHeader header;
header.setVectorLength(length);
header.setPublicLength(0);
@@ -1644,28 +1647,29 @@
vm, nullptr, 0, structure->outOfLineCapacity(),
hasIndexedProperties(structure->indexingType()), header,
length * sizeof(EncodedJSValue));
- } else
- butterfly = nullptr;
+ }
JSObject* result = JSObject::createRawObject(exec, structure, butterfly);
result->butterfly(); // Ensure that the butterfly is in to-space.
return bitwise_cast<char*>(result);
}
-JSCell* JIT_OPERATION operationNewObjectWithButterfly(ExecState* exec, Structure* structure)
+JSCell* JIT_OPERATION operationNewObjectWithButterfly(ExecState* exec, Structure* structure, Butterfly* butterfly)
{
VM& vm = exec->vm();
NativeCallFrameTracer tracer(&vm, exec);
- Butterfly* butterfly = Butterfly::create(
- vm, nullptr, 0, structure->outOfLineCapacity(), false, IndexingHeader(), 0);
+ if (!butterfly) {
+ butterfly = Butterfly::create(
+ vm, nullptr, 0, structure->outOfLineCapacity(), false, IndexingHeader(), 0);
+ }
JSObject* result = JSObject::createRawObject(exec, structure, butterfly);
result->butterfly(); // Ensure that the butterfly is in to-space.
return result;
}
-JSCell* JIT_OPERATION operationNewObjectWithButterflyWithIndexingHeaderAndVectorLength(ExecState* exec, Structure* structure, unsigned length)
+JSCell* JIT_OPERATION operationNewObjectWithButterflyWithIndexingHeaderAndVectorLength(ExecState* exec, Structure* structure, unsigned length, Butterfly* butterfly)
{
VM& vm = exec->vm();
NativeCallFrameTracer tracer(&vm, exec);
@@ -1673,10 +1677,14 @@
IndexingHeader header;
header.setVectorLength(length);
header.setPublicLength(0);
- Butterfly* butterfly = Butterfly::create(
- vm, nullptr, 0, structure->outOfLineCapacity(), true, header,
- sizeof(EncodedJSValue) * length);
-
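+    // If the caller passed a pre-allocated butterfly, just stamp the indexing header into it.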
+ if (butterfly)
+ *butterfly->indexingHeader() = header;
+ else {
+ butterfly = Butterfly::create(
+ vm, nullptr, 0, structure->outOfLineCapacity(), true, header,
+ sizeof(EncodedJSValue) * length);
+ }
+
// Paradoxically this may allocate a JSArray. That's totally cool.
JSObject* result = JSObject::createRawObject(exec, structure, butterfly);
result->butterfly(); // Ensure that the butterfly is in to-space.
diff --git a/Source/JavaScriptCore/dfg/DFGOperations.h b/Source/JavaScriptCore/dfg/DFGOperations.h
index b66fc1e..2c392c5 100644
--- a/Source/JavaScriptCore/dfg/DFGOperations.h
+++ b/Source/JavaScriptCore/dfg/DFGOperations.h
@@ -70,7 +70,7 @@
char* JIT_OPERATION operationNewArray(ExecState*, Structure*, void*, size_t) WTF_INTERNAL;
char* JIT_OPERATION operationNewArrayBuffer(ExecState*, Structure*, size_t, size_t) WTF_INTERNAL;
char* JIT_OPERATION operationNewEmptyArray(ExecState*, Structure*) WTF_INTERNAL;
-char* JIT_OPERATION operationNewArrayWithSize(ExecState*, Structure*, int32_t) WTF_INTERNAL;
+char* JIT_OPERATION operationNewArrayWithSize(ExecState*, Structure*, int32_t, Butterfly*) WTF_INTERNAL;
char* JIT_OPERATION operationNewInt8ArrayWithSize(ExecState*, Structure*, int32_t) WTF_INTERNAL;
char* JIT_OPERATION operationNewInt8ArrayWithOneArgument(ExecState*, Structure*, EncodedJSValue) WTF_INTERNAL;
char* JIT_OPERATION operationNewInt16ArrayWithSize(ExecState*, Structure*, int32_t) WTF_INTERNAL;
@@ -176,9 +176,9 @@
size_t JIT_OPERATION operationDefaultHasInstance(ExecState*, JSCell* value, JSCell* proto);
-char* JIT_OPERATION operationNewRawObject(ExecState*, Structure*, int32_t) WTF_INTERNAL;
-JSCell* JIT_OPERATION operationNewObjectWithButterfly(ExecState*, Structure*) WTF_INTERNAL;
-JSCell* JIT_OPERATION operationNewObjectWithButterflyWithIndexingHeaderAndVectorLength(ExecState*, Structure*, unsigned length) WTF_INTERNAL;
+char* JIT_OPERATION operationNewRawObject(ExecState*, Structure*, int32_t, Butterfly*) WTF_INTERNAL;
+JSCell* JIT_OPERATION operationNewObjectWithButterfly(ExecState*, Structure*, Butterfly*) WTF_INTERNAL;
+JSCell* JIT_OPERATION operationNewObjectWithButterflyWithIndexingHeaderAndVectorLength(ExecState*, Structure*, unsigned length, Butterfly*) WTF_INTERNAL;
void JIT_OPERATION operationProcessTypeProfilerLogDFG(ExecState*) WTF_INTERNAL;
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
index 91dfe82..87fe277 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
@@ -94,7 +94,7 @@
GPRReg scratch2GPR = scratch2.gpr();
ASSERT(vectorLength >= numElements);
- vectorLength = std::max(BASE_VECTOR_LEN, vectorLength);
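+    // Round the vector length up to fill out the MarkedSpace size class we allocate from.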
+ vectorLength = Butterfly::optimalContiguousVectorLength(structure, vectorLength);
JITCompiler::JumpList slowCases;
@@ -103,23 +103,30 @@
size += vectorLength * sizeof(JSValue) + sizeof(IndexingHeader);
size += outOfLineCapacity * sizeof(JSValue);
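+    // Null storageGPR up front so the slow path can tell whether a butterfly was allocated.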
+ m_jit.move(TrustedImmPtr(0), storageGPR);
+
if (size) {
- slowCases.append(
- emitAllocateBasicStorage(TrustedImm32(size), storageGPR));
- if (hasIndexingHeader)
- m_jit.subPtr(TrustedImm32(vectorLength * sizeof(JSValue)), storageGPR);
- else
- m_jit.addPtr(TrustedImm32(sizeof(IndexingHeader)), storageGPR);
- } else
- m_jit.move(TrustedImmPtr(0), storageGPR);
+ if (MarkedAllocator* allocator = m_jit.vm()->heap.allocatorForAuxiliaryData(size)) {
+ m_jit.move(TrustedImmPtr(allocator), scratchGPR);
+ m_jit.emitAllocate(storageGPR, allocator, scratchGPR, scratch2GPR, slowCases);
+
+ m_jit.addPtr(
+ TrustedImm32(outOfLineCapacity * sizeof(JSValue) + sizeof(IndexingHeader)),
+ storageGPR);
+
+ if (hasIndexingHeader)
+ m_jit.store32(TrustedImm32(vectorLength), MacroAssembler::Address(storageGPR, Butterfly::offsetOfVectorLength()));
+ } else
+ slowCases.append(m_jit.jump());
+ }
size_t allocationSize = JSFinalObject::allocationSize(inlineCapacity);
- MarkedAllocator* allocatorPtr = &m_jit.vm()->heap.allocatorForObjectWithoutDestructor(allocationSize);
- m_jit.move(TrustedImmPtr(allocatorPtr), scratchGPR);
- emitAllocateJSObject(resultGPR, scratchGPR, TrustedImmPtr(structure), storageGPR, scratch2GPR, slowCases);
-
- if (hasIndexingHeader)
- m_jit.store32(TrustedImm32(vectorLength), MacroAssembler::Address(storageGPR, Butterfly::offsetOfVectorLength()));
+ MarkedAllocator* allocatorPtr = m_jit.vm()->heap.allocatorForObjectWithoutDestructor(allocationSize);
+ if (allocatorPtr) {
+ m_jit.move(TrustedImmPtr(allocatorPtr), scratchGPR);
+ emitAllocateJSObject(resultGPR, allocatorPtr, scratchGPR, TrustedImmPtr(structure), storageGPR, scratch2GPR, slowCases);
+ } else
+ slowCases.append(m_jit.jump());
// I want a slow path that also loads out the storage pointer, and that's
// what this custom CallArrayAllocatorSlowPathGenerator gives me. It's a lot
@@ -128,14 +135,20 @@
slowCases, this, operationNewRawObject, resultGPR, storageGPR,
structure, vectorLength));
- if (hasDouble(structure->indexingType()) && numElements < vectorLength) {
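+    // Marked space does not pre-zero storage, so unused slots must be filled with holes
+    // for every indexing type, not just double.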
+ if (numElements < vectorLength) {
#if USE(JSVALUE64)
- m_jit.move(TrustedImm64(bitwise_cast<int64_t>(PNaN)), scratchGPR);
+ if (hasDouble(structure->indexingType()))
+ m_jit.move(TrustedImm64(bitwise_cast<int64_t>(PNaN)), scratchGPR);
+ else
+ m_jit.move(TrustedImm64(JSValue::encode(JSValue())), scratchGPR);
for (unsigned i = numElements; i < vectorLength; ++i)
m_jit.store64(scratchGPR, MacroAssembler::Address(storageGPR, sizeof(double) * i));
#else
EncodedValueDescriptor value;
- value.asInt64 = JSValue::encode(JSValue(JSValue::EncodeAsDouble, PNaN));
+ if (hasDouble(structure->indexingType()))
+ value.asInt64 = JSValue::encode(JSValue(JSValue::EncodeAsDouble, PNaN));
+ else
+ value.asInt64 = JSValue::encode(JSValue());
for (unsigned i = numElements; i < vectorLength; ++i) {
m_jit.store32(TrustedImm32(value.asBits.tag), MacroAssembler::Address(storageGPR, sizeof(double) * i + OBJECT_OFFSETOF(JSValue, u.asBits.tag)));
m_jit.store32(TrustedImm32(value.asBits.payload), MacroAssembler::Address(storageGPR, sizeof(double) * i + OBJECT_OFFSETOF(JSValue, u.asBits.payload)));
@@ -3823,9 +3836,10 @@
GPRReg scratchGPR = scratch.gpr();
JITCompiler::JumpList slowPath;
- MarkedAllocator& markedAllocator = m_jit.vm()->heap.allocatorForObjectWithDestructor(sizeof(JSRopeString));
- m_jit.move(TrustedImmPtr(&markedAllocator), allocatorGPR);
- emitAllocateJSCell(resultGPR, allocatorGPR, TrustedImmPtr(m_jit.vm()->stringStructure.get()), scratchGPR, slowPath);
+ MarkedAllocator* markedAllocator = m_jit.vm()->heap.allocatorForObjectWithDestructor(sizeof(JSRopeString));
+ RELEASE_ASSERT(markedAllocator);
+ m_jit.move(TrustedImmPtr(markedAllocator), allocatorGPR);
+ emitAllocateJSCell(resultGPR, markedAllocator, allocatorGPR, TrustedImmPtr(m_jit.vm()->stringStructure.get()), scratchGPR, slowPath);
m_jit.storePtr(TrustedImmPtr(0), JITCompiler::Address(resultGPR, JSString::offsetOfValue()));
for (unsigned i = 0; i < numOpGPRs; ++i)
@@ -6908,7 +6922,14 @@
void SpeculativeJIT::compileAllocatePropertyStorage(Node* node)
{
- if (node->transition()->previous->couldHaveIndexingHeader()) {
+ ASSERT(!node->transition()->previous->outOfLineCapacity());
+ ASSERT(initialOutOfLineCapacity == node->transition()->next->outOfLineCapacity());
+
+ size_t size = initialOutOfLineCapacity * sizeof(JSValue);
+
+ MarkedAllocator* allocator = m_jit.vm()->heap.allocatorForAuxiliaryData(size);
+
+ if (!allocator || node->transition()->previous->couldHaveIndexingHeader()) {
SpeculateCellOperand base(this, node->child1());
GPRReg baseGPR = base.gpr();
@@ -6925,18 +6946,18 @@
SpeculateCellOperand base(this, node->child1());
GPRTemporary scratch1(this);
+ GPRTemporary scratch2(this);
+ GPRTemporary scratch3(this);
GPRReg baseGPR = base.gpr();
GPRReg scratchGPR1 = scratch1.gpr();
+ GPRReg scratchGPR2 = scratch2.gpr();
+ GPRReg scratchGPR3 = scratch3.gpr();
- ASSERT(!node->transition()->previous->outOfLineCapacity());
- ASSERT(initialOutOfLineCapacity == node->transition()->next->outOfLineCapacity());
-
- JITCompiler::Jump slowPath =
- emitAllocateBasicStorage(
- TrustedImm32(initialOutOfLineCapacity * sizeof(JSValue)), scratchGPR1);
-
- m_jit.addPtr(JITCompiler::TrustedImm32(sizeof(IndexingHeader)), scratchGPR1);
+ m_jit.move(JITCompiler::TrustedImmPtr(allocator), scratchGPR2);
+ JITCompiler::JumpList slowPath;
+ m_jit.emitAllocate(scratchGPR1, allocator, scratchGPR2, scratchGPR3, slowPath);
+ m_jit.addPtr(JITCompiler::TrustedImm32(size + sizeof(IndexingHeader)), scratchGPR1);
addSlowPathGenerator(
slowPathCall(slowPath, this, operationAllocatePropertyStorageWithInitialCapacity, scratchGPR1));
@@ -6951,8 +6972,10 @@
size_t oldSize = node->transition()->previous->outOfLineCapacity() * sizeof(JSValue);
size_t newSize = oldSize * outOfLineGrowthFactor;
ASSERT(newSize == node->transition()->next->outOfLineCapacity() * sizeof(JSValue));
+
+ MarkedAllocator* allocator = m_jit.vm()->heap.allocatorForAuxiliaryData(newSize);
- if (node->transition()->previous->couldHaveIndexingHeader()) {
+ if (!allocator || node->transition()->previous->couldHaveIndexingHeader()) {
SpeculateCellOperand base(this, node->child1());
GPRReg baseGPR = base.gpr();
@@ -6971,16 +6994,19 @@
StorageOperand oldStorage(this, node->child2());
GPRTemporary scratch1(this);
GPRTemporary scratch2(this);
+ GPRTemporary scratch3(this);
GPRReg baseGPR = base.gpr();
GPRReg oldStorageGPR = oldStorage.gpr();
GPRReg scratchGPR1 = scratch1.gpr();
GPRReg scratchGPR2 = scratch2.gpr();
-
- JITCompiler::Jump slowPath =
- emitAllocateBasicStorage(TrustedImm32(newSize), scratchGPR1);
-
- m_jit.addPtr(JITCompiler::TrustedImm32(sizeof(IndexingHeader)), scratchGPR1);
+ GPRReg scratchGPR3 = scratch3.gpr();
+
+ JITCompiler::JumpList slowPath;
+ m_jit.move(JITCompiler::TrustedImmPtr(allocator), scratchGPR2);
+ m_jit.emitAllocate(scratchGPR1, allocator, scratchGPR2, scratchGPR3, slowPath);
+
+ m_jit.addPtr(JITCompiler::TrustedImm32(newSize + sizeof(IndexingHeader)), scratchGPR1);
addSlowPathGenerator(
slowPathCall(slowPath, this, operationAllocatePropertyStorage, scratchGPR1, newSize / sizeof(JSValue)));
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
index b9a8d0f..5be01c9 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
@@ -1003,6 +1003,26 @@
m_jit.setupArgumentsWithExecState(arg1, arg2);
return appendCallSetResult(operation, result);
}
+ JITCompiler::Call callOperation(P_JITOperation_EStZB operation, GPRReg result, Structure* structure, GPRReg arg2, GPRReg butterfly)
+ {
+ m_jit.setupArgumentsWithExecState(TrustedImmPtr(structure), arg2, butterfly);
+ return appendCallSetResult(operation, result);
+ }
+ JITCompiler::Call callOperation(P_JITOperation_EStZB operation, GPRReg result, Structure* structure, size_t arg2, GPRReg butterfly)
+ {
+ m_jit.setupArgumentsWithExecState(TrustedImmPtr(structure), TrustedImm32(arg2), butterfly);
+ return appendCallSetResult(operation, result);
+ }
+ JITCompiler::Call callOperation(P_JITOperation_EStZB operation, GPRReg result, GPRReg arg1, GPRReg arg2, GPRReg butterfly)
+ {
+ m_jit.setupArgumentsWithExecState(arg1, arg2, butterfly);
+ return appendCallSetResult(operation, result);
+ }
+ JITCompiler::Call callOperation(P_JITOperation_EStZB operation, GPRReg result, GPRReg arg1, GPRReg arg2, Butterfly* butterfly)
+ {
+ m_jit.setupArgumentsWithExecState(arg1, arg2, TrustedImmPtr(butterfly));
+ return appendCallSetResult(operation, result);
+ }
JITCompiler::Call callOperation(P_JITOperation_EStPS operation, GPRReg result, Structure* structure, void* pointer, size_t size)
{
m_jit.setupArgumentsWithExecState(TrustedImmPtr(structure), TrustedImmPtr(pointer), TrustedImmPtr(size));
@@ -2557,18 +2577,21 @@
// Allocator for a cell of a specific size.
template <typename StructureType> // StructureType can be GPR or ImmPtr.
- void emitAllocateJSCell(GPRReg resultGPR, GPRReg allocatorGPR, StructureType structure,
+ void emitAllocateJSCell(
+ GPRReg resultGPR, MarkedAllocator* allocator, GPRReg allocatorGPR, StructureType structure,
GPRReg scratchGPR, MacroAssembler::JumpList& slowPath)
{
- m_jit.emitAllocateJSCell(resultGPR, allocatorGPR, structure, scratchGPR, slowPath);
+ m_jit.emitAllocateJSCell(resultGPR, allocator, allocatorGPR, structure, scratchGPR, slowPath);
}
// Allocator for an object of a specific size.
template <typename StructureType, typename StorageType> // StructureType and StorageType can be GPR or ImmPtr.
- void emitAllocateJSObject(GPRReg resultGPR, GPRReg allocatorGPR, StructureType structure,
+ void emitAllocateJSObject(
+ GPRReg resultGPR, MarkedAllocator* allocator, GPRReg allocatorGPR, StructureType structure,
StorageType storage, GPRReg scratchGPR, MacroAssembler::JumpList& slowPath)
{
- m_jit.emitAllocateJSObject(resultGPR, allocatorGPR, structure, storage, scratchGPR, slowPath);
+ m_jit.emitAllocateJSObject(
+ resultGPR, allocator, allocatorGPR, structure, storage, scratchGPR, slowPath);
}
template <typename ClassType, typename StructureType, typename StorageType> // StructureType and StorageType can be GPR or ImmPtr.
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
index f5a790da..e37fa13 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
@@ -3866,8 +3866,7 @@
bigLength.link(&m_jit);
m_jit.move(TrustedImmPtr(globalObject->arrayStructureForIndexingTypeDuringAllocation(ArrayWithArrayStorage)), structureGPR);
done.link(&m_jit);
- callOperation(
- operationNewArrayWithSize, resultGPR, structureGPR, sizeGPR);
+ callOperation(operationNewArrayWithSize, resultGPR, structureGPR, sizeGPR, nullptr);
m_jit.exceptionCheck();
cellResult(resultGPR, node);
break;
@@ -4033,7 +4032,7 @@
m_jit.loadPtr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorGPR);
m_jit.loadPtr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfStructure()), structureGPR);
slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, allocatorGPR));
- emitAllocateJSObject(resultGPR, allocatorGPR, structureGPR, TrustedImmPtr(0), scratchGPR, slowPath);
+ emitAllocateJSObject(resultGPR, nullptr, allocatorGPR, structureGPR, TrustedImmPtr(0), scratchGPR, slowPath);
addSlowPathGenerator(slowPathCall(slowPath, this, operationCreateThis, resultGPR, calleeGPR, node->inlineCapacity()));
@@ -4054,10 +4053,10 @@
Structure* structure = node->structure();
size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
- MarkedAllocator* allocatorPtr = &m_jit.vm()->heap.allocatorForObjectWithoutDestructor(allocationSize);
+ MarkedAllocator* allocatorPtr = m_jit.vm()->heap.allocatorForObjectWithoutDestructor(allocationSize);
m_jit.move(TrustedImmPtr(allocatorPtr), allocatorGPR);
- emitAllocateJSObject(resultGPR, allocatorGPR, TrustedImmPtr(structure), TrustedImmPtr(0), scratchGPR, slowPath);
+ emitAllocateJSObject(resultGPR, allocatorPtr, allocatorGPR, TrustedImmPtr(structure), TrustedImmPtr(0), scratchGPR, slowPath);
addSlowPathGenerator(slowPathCall(slowPath, this, operationNewObject, resultGPR, structure));
@@ -5364,41 +5363,47 @@
GPRReg scratchGPR = scratch.gpr();
GPRReg scratch2GPR = scratch2.gpr();
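+    // Null storageGPR so the slow path can detect that no butterfly was allocated.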
+ m_jit.move(TrustedImmPtr(0), storageGPR);
+
MacroAssembler::JumpList slowCases;
if (shouldConvertLargeSizeToArrayStorage)
slowCases.append(m_jit.branch32(MacroAssembler::AboveOrEqual, sizeGPR, TrustedImm32(MIN_ARRAY_STORAGE_CONSTRUCTION_LENGTH)));
-
+
ASSERT((1 << 3) == sizeof(JSValue));
m_jit.move(sizeGPR, scratchGPR);
m_jit.lshift32(TrustedImm32(3), scratchGPR);
m_jit.add32(TrustedImm32(sizeof(IndexingHeader)), scratchGPR, resultGPR);
- slowCases.append(
- emitAllocateBasicStorage(resultGPR, storageGPR));
- m_jit.subPtr(scratchGPR, storageGPR);
- Structure* structure = globalObject->arrayStructureForIndexingTypeDuringAllocation(indexingType);
- emitAllocateJSObject<JSArray>(resultGPR, TrustedImmPtr(structure), storageGPR, scratchGPR, scratch2GPR, slowCases);
-
+ m_jit.emitAllocateVariableSized(
+ storageGPR, m_jit.vm()->heap.subspaceForAuxiliaryData(), resultGPR, scratchGPR,
+ scratch2GPR, slowCases);
+ m_jit.addPtr(TrustedImm32(sizeof(IndexingHeader)), storageGPR);
+
m_jit.store32(sizeGPR, MacroAssembler::Address(storageGPR, Butterfly::offsetOfPublicLength()));
m_jit.store32(sizeGPR, MacroAssembler::Address(storageGPR, Butterfly::offsetOfVectorLength()));
+
+ JSValue hole;
+ if (hasDouble(indexingType))
+ hole = JSValue(JSValue::EncodeAsDouble, PNaN);
+ else
+ hole = JSValue();
+
+ m_jit.move(sizeGPR, scratchGPR);
+ MacroAssembler::Jump done = m_jit.branchTest32(MacroAssembler::Zero, scratchGPR);
+ MacroAssembler::Label loop = m_jit.label();
+ m_jit.sub32(TrustedImm32(1), scratchGPR);
+ m_jit.store32(TrustedImm32(hole.u.asBits.tag), MacroAssembler::BaseIndex(storageGPR, scratchGPR, MacroAssembler::TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag)));
+ m_jit.store32(TrustedImm32(hole.u.asBits.payload), MacroAssembler::BaseIndex(storageGPR, scratchGPR, MacroAssembler::TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload)));
+ m_jit.branchTest32(MacroAssembler::NonZero, scratchGPR).linkTo(loop, &m_jit);
+ done.link(&m_jit);
- if (hasDouble(indexingType)) {
- JSValue nan = JSValue(JSValue::EncodeAsDouble, PNaN);
-
- m_jit.move(sizeGPR, scratchGPR);
- MacroAssembler::Jump done = m_jit.branchTest32(MacroAssembler::Zero, scratchGPR);
- MacroAssembler::Label loop = m_jit.label();
- m_jit.sub32(TrustedImm32(1), scratchGPR);
- m_jit.store32(TrustedImm32(nan.u.asBits.tag), MacroAssembler::BaseIndex(storageGPR, scratchGPR, MacroAssembler::TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag)));
- m_jit.store32(TrustedImm32(nan.u.asBits.payload), MacroAssembler::BaseIndex(storageGPR, scratchGPR, MacroAssembler::TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload)));
- m_jit.branchTest32(MacroAssembler::NonZero, scratchGPR).linkTo(loop, &m_jit);
- done.link(&m_jit);
- }
-
+ Structure* structure = globalObject->arrayStructureForIndexingTypeDuringAllocation(indexingType);
+ emitAllocateJSObject<JSArray>(resultGPR, TrustedImmPtr(structure), storageGPR, scratchGPR, scratch2GPR, slowCases);
+
addSlowPathGenerator(std::make_unique<CallArrayAllocatorWithVariableSizeSlowPathGenerator>(
slowCases, this, operationNewArrayWithSize, resultGPR,
structure,
shouldConvertLargeSizeToArrayStorage ? globalObject->arrayStructureForIndexingTypeDuringAllocation(ArrayWithArrayStorage) : structure,
- sizeGPR));
+ sizeGPR, storageGPR));
}
#endif
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
index 45cb64a..bfb61ef 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
@@ -3818,7 +3818,7 @@
bigLength.link(&m_jit);
m_jit.move(TrustedImmPtr(globalObject->arrayStructureForIndexingTypeDuringAllocation(ArrayWithArrayStorage)), structureGPR);
done.link(&m_jit);
- callOperation(operationNewArrayWithSize, resultGPR, structureGPR, sizeGPR);
+ callOperation(operationNewArrayWithSize, resultGPR, structureGPR, sizeGPR, nullptr);
m_jit.exceptionCheck();
cellResult(resultGPR, node);
break;
@@ -3975,7 +3975,7 @@
m_jit.loadPtr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorGPR);
m_jit.loadPtr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfStructure()), structureGPR);
slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, allocatorGPR));
- emitAllocateJSObject(resultGPR, allocatorGPR, structureGPR, TrustedImmPtr(0), scratchGPR, slowPath);
+ emitAllocateJSObject(resultGPR, nullptr, allocatorGPR, structureGPR, TrustedImmPtr(0), scratchGPR, slowPath);
addSlowPathGenerator(slowPathCall(slowPath, this, operationCreateThis, resultGPR, calleeGPR, node->inlineCapacity()));
@@ -3996,10 +3996,10 @@
Structure* structure = node->structure();
size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
- MarkedAllocator* allocatorPtr = &m_jit.vm()->heap.allocatorForObjectWithoutDestructor(allocationSize);
+ MarkedAllocator* allocatorPtr = m_jit.vm()->heap.allocatorForObjectWithoutDestructor(allocationSize);
m_jit.move(TrustedImmPtr(allocatorPtr), allocatorGPR);
- emitAllocateJSObject(resultGPR, allocatorGPR, TrustedImmPtr(structure), TrustedImmPtr(0), scratchGPR, slowPath);
+ emitAllocateJSObject(resultGPR, allocatorPtr, allocatorGPR, TrustedImmPtr(structure), TrustedImmPtr(0), scratchGPR, slowPath);
addSlowPathGenerator(slowPathCall(slowPath, this, operationNewObject, resultGPR, structure));
@@ -5236,7 +5236,7 @@
unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex;
auto triggerIterator = m_jit.jitCode()->tierUpEntryTriggers.find(bytecodeIndex);
- RELEASE_ASSERT(triggerIterator != m_jit.jitCode()->tierUpEntryTriggers.end());
+ DFG_ASSERT(m_jit.graph(), node, triggerIterator != m_jit.jitCode()->tierUpEntryTriggers.end());
uint8_t* forceEntryTrigger = &(m_jit.jitCode()->tierUpEntryTriggers.find(bytecodeIndex)->value);
MacroAssembler::Jump forceOSREntry = m_jit.branchTest8(MacroAssembler::NonZero, MacroAssembler::AbsoluteAddress(forceEntryTrigger));
@@ -5420,39 +5420,45 @@
GPRReg scratchGPR = scratch.gpr();
GPRReg scratch2GPR = scratch2.gpr();
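+    // As in the 32-bit path, a null storageGPR means the fast path allocated no butterfly.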
+ m_jit.move(TrustedImmPtr(0), storageGPR);
+
MacroAssembler::JumpList slowCases;
if (shouldConvertLargeSizeToArrayStorage)
slowCases.append(m_jit.branch32(MacroAssembler::AboveOrEqual, sizeGPR, TrustedImm32(MIN_ARRAY_STORAGE_CONSTRUCTION_LENGTH)));
-
+
ASSERT((1 << 3) == sizeof(JSValue));
m_jit.move(sizeGPR, scratchGPR);
m_jit.lshift32(TrustedImm32(3), scratchGPR);
m_jit.add32(TrustedImm32(sizeof(IndexingHeader)), scratchGPR, resultGPR);
- slowCases.append(
- emitAllocateBasicStorage(resultGPR, storageGPR));
- m_jit.subPtr(scratchGPR, storageGPR);
- Structure* structure = globalObject->arrayStructureForIndexingTypeDuringAllocation(indexingType);
- emitAllocateJSObject<JSArray>(resultGPR, TrustedImmPtr(structure), storageGPR, scratchGPR, scratch2GPR, slowCases);
-
+ m_jit.emitAllocateVariableSized(
+ storageGPR, m_jit.vm()->heap.subspaceForAuxiliaryData(), resultGPR, scratchGPR,
+ scratch2GPR, slowCases);
+ m_jit.addPtr(TrustedImm32(sizeof(IndexingHeader)), storageGPR);
+
m_jit.store32(sizeGPR, MacroAssembler::Address(storageGPR, Butterfly::offsetOfPublicLength()));
m_jit.store32(sizeGPR, MacroAssembler::Address(storageGPR, Butterfly::offsetOfVectorLength()));
-
- if (hasDouble(indexingType)) {
+
+ if (hasDouble(indexingType))
m_jit.move(TrustedImm64(bitwise_cast<int64_t>(PNaN)), scratchGPR);
- m_jit.move(sizeGPR, scratch2GPR);
- MacroAssembler::Jump done = m_jit.branchTest32(MacroAssembler::Zero, scratch2GPR);
- MacroAssembler::Label loop = m_jit.label();
- m_jit.sub32(TrustedImm32(1), scratch2GPR);
- m_jit.store64(scratchGPR, MacroAssembler::BaseIndex(storageGPR, scratch2GPR, MacroAssembler::TimesEight));
- m_jit.branchTest32(MacroAssembler::NonZero, scratch2GPR).linkTo(loop, &m_jit);
- done.link(&m_jit);
- }
+ else
+ m_jit.move(TrustedImm64(JSValue::encode(JSValue())), scratchGPR);
+ m_jit.move(sizeGPR, scratch2GPR);
+ MacroAssembler::Jump done = m_jit.branchTest32(MacroAssembler::Zero, scratch2GPR);
+ MacroAssembler::Label loop = m_jit.label();
+ m_jit.sub32(TrustedImm32(1), scratch2GPR);
+ m_jit.store64(scratchGPR, MacroAssembler::BaseIndex(storageGPR, scratch2GPR, MacroAssembler::TimesEight));
+ m_jit.branchTest32(MacroAssembler::NonZero, scratch2GPR).linkTo(loop, &m_jit);
+ done.link(&m_jit);
+
+ Structure* structure = globalObject->arrayStructureForIndexingTypeDuringAllocation(indexingType);
+
+ emitAllocateJSObject<JSArray>(resultGPR, TrustedImmPtr(structure), storageGPR, scratchGPR, scratch2GPR, slowCases);
addSlowPathGenerator(std::make_unique<CallArrayAllocatorWithVariableSizeSlowPathGenerator>(
slowCases, this, operationNewArrayWithSize, resultGPR,
structure,
shouldConvertLargeSizeToArrayStorage ? globalObject->arrayStructureForIndexingTypeDuringAllocation(ArrayWithArrayStorage) : structure,
- sizeGPR));
+ sizeGPR, storageGPR));
}
#endif
diff --git a/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp b/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
index 5002b79..bfd7849 100644
--- a/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
@@ -40,6 +40,7 @@
#include "RegExpConstructor.h"
#include "StringPrototype.h"
#include <cstdlib>
+#include <wtf/text/StringBuilder.h>
namespace JSC { namespace DFG {
@@ -420,7 +421,7 @@
dataLog("Giving up because of pattern limit.\n");
break;
}
-
+
unsigned lastIndex;
if (regExp->globalOrSticky()) {
// This will only work if we can prove what the value of lastIndex is. To do this
@@ -468,7 +469,7 @@
FrozenValue* constructorFrozenValue = m_graph.freeze(constructor);
MatchResult result;
- Vector<int, 32> ovector;
+ Vector<int> ovector;
// We have to call the kind of match function that the main thread would have called.
// Otherwise, we might not have the desired Yarr code compiled, and the match will fail.
if (m_node->op() == RegExpExec) {
@@ -514,7 +515,8 @@
}
unsigned publicLength = resultArray.size();
- unsigned vectorLength = std::max(BASE_VECTOR_LEN, publicLength);
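+        // Round up to the size-class capacity, presumably so the constant-folded array
+        // gets the same vector length a runtime allocation would.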
+ unsigned vectorLength =
+ Butterfly::optimalContiguousVectorLength(structure, publicLength);
UniquedStringImpl* indexUID = vm().propertyNames->index.impl();
UniquedStringImpl* inputUID = vm().propertyNames->input.impl();
@@ -649,7 +651,7 @@
bool ok = true;
do {
MatchResult result;
- Vector<int, 32> ovector;
+ Vector<int> ovector;
// Model which version of match() is called by the main thread.
if (replace.isEmpty() && regExp->global()) {
if (!regExp->matchConcurrently(vm(), string, startPosition, result)) {
diff --git a/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h b/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h
index cff05a8..d2e16f3 100644
--- a/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h
+++ b/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h
@@ -76,7 +76,6 @@
macro(JSString_value, JSString::offsetOfValue()) \
macro(JSSymbolTableObject_symbolTable, JSSymbolTableObject::offsetOfSymbolTable()) \
macro(JSWrapperObject_internalValue, JSWrapperObject::internalValueOffset()) \
- macro(MarkedAllocator_freeListHead, MarkedAllocator::offsetOfFreeListHead()) \
macro(RegExpConstructor_cachedResult_lastRegExp, RegExpConstructor::offsetOfCachedResult() + RegExpCachedResult::offsetOfLastRegExp()) \
macro(RegExpConstructor_cachedResult_lastInput, RegExpConstructor::offsetOfCachedResult() + RegExpCachedResult::offsetOfLastInput()) \
macro(RegExpConstructor_cachedResult_result_start, RegExpConstructor::offsetOfCachedResult() + RegExpCachedResult::offsetOfResult() + OBJECT_OFFSETOF(MatchResult, start)) \
@@ -109,8 +108,7 @@
macro(JSEnvironmentRecord_variables, JSEnvironmentRecord::offsetOfVariables(), sizeof(EncodedJSValue)) \
macro(JSPropertyNameEnumerator_cachedPropertyNamesVectorContents, 0, sizeof(WriteBarrier<JSString>)) \
macro(JSRopeString_fibers, JSRopeString::offsetOfFibers(), sizeof(WriteBarrier<JSString>)) \
- macro(MarkedSpace_Subspace_impreciseAllocators, OBJECT_OFFSETOF(MarkedSpace::Subspace, impreciseAllocators), sizeof(MarkedAllocator)) \
- macro(MarkedSpace_Subspace_preciseAllocators, OBJECT_OFFSETOF(MarkedSpace::Subspace, preciseAllocators), sizeof(MarkedAllocator)) \
+ macro(MarkedSpace_Subspace_allocatorForSizeStep, OBJECT_OFFSETOF(MarkedSpace::Subspace, allocatorForSizeStep), sizeof(MarkedAllocator*)) \
macro(ScopedArguments_overflowStorage, ScopedArguments::overflowStorageOffset(), sizeof(EncodedJSValue)) \
macro(WriteBarrierBuffer_bufferContents, 0, sizeof(JSCell*)) \
macro(characters8, 0, sizeof(LChar)) \
diff --git a/Source/JavaScriptCore/ftl/FTLCompile.cpp b/Source/JavaScriptCore/ftl/FTLCompile.cpp
index 5f8ae85..39eb666 100644
--- a/Source/JavaScriptCore/ftl/FTLCompile.cpp
+++ b/Source/JavaScriptCore/ftl/FTLCompile.cpp
@@ -42,6 +42,7 @@
#include "FTLJITCode.h"
#include "FTLThunks.h"
#include "JITSubGenerator.h"
+#include "JSCInlines.h"
#include "LinkBuffer.h"
#include "PCToCodeOriginMap.h"
#include "ScratchRegisterAllocator.h"
diff --git a/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp b/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp
index 00a88f0..dcf3dad 100644
--- a/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp
+++ b/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp
@@ -32,6 +32,7 @@
#include "DFGPlan.h"
#include "FTLState.h"
#include "FTLThunks.h"
+#include "JSCInlines.h"
#include "ProfilerDatabase.h"
namespace JSC { namespace FTL {
diff --git a/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp b/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
index 28e63ff..e4a6f14 100644
--- a/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
+++ b/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
@@ -3815,7 +3815,7 @@
size, m_out.constInt32(DirectArguments::allocationSize(minCapacity)));
fastObject = allocateVariableSizedObject<DirectArguments>(
- size, structure, m_out.intPtrZero, slowPath);
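+        // The size is an Int32; the variable-sized allocator wants a pointer-width value.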
+ m_out.zeroExtPtr(size), structure, m_out.intPtrZero, slowPath);
}
m_out.store32(length.value, fastObject, m_heaps.DirectArguments_length);
@@ -3915,10 +3915,11 @@
LBasicBlock continuation = m_out.newBlock();
LValue arrayLength = lowInt32(m_node->child1());
LBasicBlock loopStart = m_out.newBlock();
- bool shouldLargeArraySizeCreateArrayStorage = false;
- LValue array = compileAllocateArrayWithSize(arrayLength, ArrayWithContiguous, shouldLargeArraySizeCreateArrayStorage);
-
- LValue butterfly = m_out.loadPtr(array, m_heaps.JSObject_butterfly);
+ JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node->origin.semantic);
+ Structure* structure = globalObject->arrayStructureForIndexingTypeDuringAllocation(ArrayWithContiguous);
+ ArrayValues arrayValues = allocateUninitializedContiguousJSArray(arrayLength, structure);
+ LValue array = arrayValues.array;
+ LValue butterfly = arrayValues.butterfly;
ValueFromBlock startLength = m_out.anchor(arrayLength);
LValue argumentRegion = m_out.add(getArgumentsStart(), m_out.constInt64(sizeof(Register) * m_node->numberOfArgumentsToSkip()));
m_out.branch(m_out.equal(arrayLength, m_out.constInt32(0)),
@@ -3930,7 +3931,7 @@
m_out.addIncomingToPhi(phiOffset, m_out.anchor(currentOffset));
LValue loadedValue = m_out.load64(m_out.baseIndex(m_heaps.variables, argumentRegion, m_out.zeroExtPtr(currentOffset)));
IndexedAbstractHeap& heap = m_heaps.indexedContiguousProperties;
- m_out.store(loadedValue, m_out.baseIndex(heap, butterfly, m_out.zeroExtPtr(currentOffset)), Output::Store64);
+ m_out.store64(loadedValue, m_out.baseIndex(heap, butterfly, m_out.zeroExtPtr(currentOffset)));
m_out.branch(m_out.equal(currentOffset, m_out.constInt32(0)), unsure(continuation), unsure(loopStart));
m_out.appendTo(continuation, lastNext);
@@ -3987,7 +3988,8 @@
if (!globalObject->isHavingABadTime() && !hasAnyArrayStorage(m_node->indexingType())) {
unsigned numElements = m_node->numChildren();
- ArrayValues arrayValues = allocateJSArray(structure, numElements);
+ ArrayValues arrayValues =
+ allocateUninitializedContiguousJSArray(m_out.constInt32(numElements), structure);
for (unsigned operandIndex = 0; operandIndex < m_node->numChildren(); ++operandIndex) {
Edge edge = m_graph.varArgChild(m_node, operandIndex);
@@ -4063,7 +4065,8 @@
if (!globalObject->isHavingABadTime() && !hasAnyArrayStorage(m_node->indexingType())) {
unsigned numElements = m_node->numConstants();
- ArrayValues arrayValues = allocateJSArray(structure, numElements);
+ ArrayValues arrayValues =
+ allocateUninitializedContiguousJSArray(m_out.constInt32(numElements), structure);
JSValue* data = codeBlock()->constantBuffer(m_node->startConstant());
for (unsigned index = 0; index < m_node->numConstants(); ++index) {
@@ -4089,88 +4092,6 @@
m_out.constIntPtr(m_node->numConstants())));
}
- LValue compileAllocateArrayWithSize(LValue publicLength, IndexingType indexingType, bool shouldLargeArraySizeCreateArrayStorage = true)
- {
- JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node->origin.semantic);
- Structure* structure = globalObject->arrayStructureForIndexingTypeDuringAllocation(indexingType);
- ASSERT(
- hasUndecided(structure->indexingType())
- || hasInt32(structure->indexingType())
- || hasDouble(structure->indexingType())
- || hasContiguous(structure->indexingType()));
-
- LBasicBlock fastCase = m_out.newBlock();
- LBasicBlock largeCase = shouldLargeArraySizeCreateArrayStorage ? m_out.newBlock() : nullptr;
- LBasicBlock failCase = m_out.newBlock();
- LBasicBlock continuation = m_out.newBlock();
- LBasicBlock lastNext = nullptr;
- if (shouldLargeArraySizeCreateArrayStorage) {
- m_out.branch(
- m_out.aboveOrEqual(publicLength, m_out.constInt32(MIN_ARRAY_STORAGE_CONSTRUCTION_LENGTH)),
- rarely(largeCase), usually(fastCase));
- lastNext = m_out.appendTo(fastCase, largeCase);
- }
-
-
- // We don't round up to BASE_VECTOR_LEN for new Array(blah).
- LValue vectorLength = publicLength;
-
- LValue payloadSize =
- m_out.shl(m_out.zeroExt(vectorLength, pointerType()), m_out.constIntPtr(3));
-
- LValue butterflySize = m_out.add(
- payloadSize, m_out.constIntPtr(sizeof(IndexingHeader)));
-
- LValue endOfStorage = allocateBasicStorageAndGetEnd(butterflySize, failCase);
-
- LValue butterfly = m_out.sub(endOfStorage, payloadSize);
-
- LValue object = allocateObject<JSArray>(structure, butterfly, failCase);
-
- m_out.store32(publicLength, butterfly, m_heaps.Butterfly_publicLength);
- m_out.store32(vectorLength, butterfly, m_heaps.Butterfly_vectorLength);
-
- initializeArrayElements(indexingType, vectorLength, butterfly);
-
- ValueFromBlock fastResult = m_out.anchor(object);
- m_out.jump(continuation);
-
- LValue structureValue;
- if (shouldLargeArraySizeCreateArrayStorage) {
- LBasicBlock slowCase = m_out.newBlock();
-
- m_out.appendTo(largeCase, failCase);
- ValueFromBlock largeStructure = m_out.anchor(m_out.constIntPtr(
- globalObject->arrayStructureForIndexingTypeDuringAllocation(ArrayWithArrayStorage)));
- m_out.jump(slowCase);
-
- m_out.appendTo(failCase, slowCase);
- ValueFromBlock failStructure = m_out.anchor(m_out.constIntPtr(structure));
- m_out.jump(slowCase);
-
- m_out.appendTo(slowCase, continuation);
- structureValue = m_out.phi(
- pointerType(), largeStructure, failStructure);
- } else {
- ASSERT(!lastNext);
- lastNext = m_out.appendTo(failCase, continuation);
- structureValue = m_out.constIntPtr(structure);
- }
-
- LValue slowResultValue = lazySlowPath(
- [=] (const Vector<Location>& locations) -> RefPtr<LazySlowPath::Generator> {
- return createLazyCallGenerator(
- operationNewArrayWithSize, locations[0].directGPR(),
- locations[1].directGPR(), locations[2].directGPR());
- },
- structureValue, publicLength);
- ValueFromBlock slowResult = m_out.anchor(slowResultValue);
- m_out.jump(continuation);
-
- m_out.appendTo(continuation, lastNext);
- return m_out.phi(pointerType(), fastResult, slowResult);
- }
-
void compileNewArrayWithSize()
{
LValue publicLength = lowInt32(m_node->child1());
@@ -4180,7 +4101,11 @@
m_node->indexingType());
if (!globalObject->isHavingABadTime() && !hasAnyArrayStorage(m_node->indexingType())) {
- setJSValue(compileAllocateArrayWithSize(publicLength, m_node->indexingType()));
+ setJSValue(
+ allocateJSArray(
+ publicLength,
+ globalObject->arrayStructureForIndexingTypeDuringAllocation(
+ m_node->indexingType())).array);
return;
}
@@ -4189,7 +4114,7 @@
m_out.constIntPtr(
globalObject->arrayStructureForIndexingTypeDuringAllocation(ArrayWithArrayStorage)),
m_out.constIntPtr(structure));
- setJSValue(vmCall(Int64, m_out.operation(operationNewArrayWithSize), m_callFrame, structureValue, publicLength));
+ setJSValue(vmCall(Int64, m_out.operation(operationNewArrayWithSize), m_callFrame, structureValue, publicLength, m_out.intPtrZero));
}
void compileNewTypedArray()
@@ -4449,13 +4374,12 @@
LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
- MarkedAllocator& allocator =
+ MarkedAllocator* allocator =
vm().heap.allocatorForObjectWithDestructor(sizeof(JSRopeString));
+ DFG_ASSERT(m_graph, m_node, allocator);
LValue result = allocateCell(
- m_out.constIntPtr(&allocator),
- vm().stringStructure.get(),
- slowPath);
+ m_out.constIntPtr(allocator), vm().stringStructure.get(), slowPath);
m_out.storePtr(m_out.intPtrZero, result, m_heaps.JSString_value);
for (unsigned i = 0; i < numKids; ++i)
@@ -7004,7 +6928,8 @@
if (structure->outOfLineCapacity() || hasIndexedProperties(structure->indexingType())) {
size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
- MarkedAllocator* allocator = &vm().heap.allocatorForObjectWithoutDestructor(allocationSize);
+ MarkedAllocator* cellAllocator = vm().heap.allocatorForObjectWithoutDestructor(allocationSize);
+ DFG_ASSERT(m_graph, m_node, cellAllocator);
bool hasIndexingHeader = hasIndexedProperties(structure->indexingType());
unsigned indexingHeaderSize = 0;
@@ -7029,7 +6954,7 @@
indexingPayloadSizeInBytes =
m_out.mul(m_out.zeroExtPtr(vectorLength), m_out.intPtrEight);
}
-
+
LValue butterflySize = m_out.add(
m_out.constIntPtr(
structure->outOfLineCapacity() * sizeof(JSValue) + indexingHeaderSize),
@@ -7040,22 +6965,31 @@
LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
- LValue endOfStorage = allocateBasicStorageAndGetEnd(butterflySize, slowPath);
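+        // Anchor a null butterfly for the case where we reach the slow path before allocating.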
+ ValueFromBlock noButterfly = m_out.anchor(m_out.intPtrZero);
+
+ LValue startOfStorage = allocateHeapCell(
+ allocatorForSize(vm().heap.subspaceForAuxiliaryData(), butterflySize, slowPath),
+ slowPath);
LValue fastButterflyValue = m_out.add(
- m_out.sub(endOfStorage, indexingPayloadSizeInBytes),
- m_out.constIntPtr(sizeof(IndexingHeader) - indexingHeaderSize));
+ startOfStorage,
+ m_out.constIntPtr(
+ structure->outOfLineCapacity() * sizeof(JSValue) + sizeof(IndexingHeader)));
+
+ ValueFromBlock haveButterfly = m_out.anchor(fastButterflyValue);
m_out.store32(vectorLength, fastButterflyValue, m_heaps.Butterfly_vectorLength);
LValue fastObjectValue = allocateObject(
- m_out.constIntPtr(allocator), structure, fastButterflyValue, slowPath);
+ m_out.constIntPtr(cellAllocator), structure, fastButterflyValue, slowPath);
ValueFromBlock fastObject = m_out.anchor(fastObjectValue);
ValueFromBlock fastButterfly = m_out.anchor(fastButterflyValue);
m_out.jump(continuation);
m_out.appendTo(slowPath, continuation);
+
+ LValue butterflyValue = m_out.phi(pointerType(), noButterfly, haveButterfly);
LValue slowObjectValue;
if (hasIndexingHeader) {
@@ -7064,16 +6998,17 @@
return createLazyCallGenerator(
operationNewObjectWithButterflyWithIndexingHeaderAndVectorLength,
locations[0].directGPR(), CCallHelpers::TrustedImmPtr(structure),
- locations[1].directGPR());
+ locations[1].directGPR(), locations[2].directGPR());
},
- vectorLength);
+ vectorLength, butterflyValue);
} else {
slowObjectValue = lazySlowPath(
[=] (const Vector<Location>& locations) -> RefPtr<LazySlowPath::Generator> {
return createLazyCallGenerator(
operationNewObjectWithButterfly, locations[0].directGPR(),
- CCallHelpers::TrustedImmPtr(structure));
- });
+ CCallHelpers::TrustedImmPtr(structure), locations[1].directGPR());
+ },
+ butterflyValue);
}
ValueFromBlock slowObject = m_out.anchor(slowObjectValue);
ValueFromBlock slowButterfly = m_out.anchor(
@@ -7088,7 +7023,7 @@
m_out.store32(publicLength, butterfly, m_heaps.Butterfly_publicLength);
- initializeArrayElements(structure->indexingType(), vectorLength, butterfly);
+ initializeArrayElements(structure->indexingType(), m_out.int32Zero, vectorLength, butterfly);
HashMap<int32_t, LValue, DefaultHash<int32_t>::Hash, WTF::UnsignedWithZeroKeyHashTraits<int32_t>> indexMap;
Vector<int32_t> indices;
@@ -7833,35 +7768,54 @@
return result;
}
- void initializeArrayElements(IndexingType indexingType, LValue vectorLength, LValue butterfly)
+ void initializeArrayElements(IndexingType indexingType, LValue begin, LValue end, LValue butterfly)
{
- if (!hasDouble(indexingType)) {
- // The GC already initialized everything to JSValue() for us.
+ if (hasUndecided(indexingType))
return;
+
+ if (begin == end)
+ return;
+
+ IndexedAbstractHeap* heap = m_heaps.forIndexingType(indexingType);
+ DFG_ASSERT(m_graph, m_node, heap);
+
+ LValue hole;
+ if (hasDouble(indexingType))
+ hole = m_out.constInt64(bitwise_cast<int64_t>(PNaN));
+ else
+ hole = m_out.constInt64(JSValue::encode(JSValue()));
+
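+        // For small constant ranges, unroll into straight-line stores instead of a loop.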
+ const uint64_t unrollingLimit = 10;
+ if (begin->hasInt() && end->hasInt()) {
+ uint64_t beginConst = static_cast<uint64_t>(begin->asInt());
+ uint64_t endConst = static_cast<uint64_t>(end->asInt());
+
+ if (endConst - beginConst <= unrollingLimit) {
+ for (uint64_t i = beginConst; i < endConst; ++i)
+ m_out.store64(hole, butterfly, heap->at(i));
+ return;
+ }
}
- // Doubles must be initialized to PNaN.
LBasicBlock initLoop = m_out.newBlock();
LBasicBlock initDone = m_out.newBlock();
- ValueFromBlock originalIndex = m_out.anchor(vectorLength);
+ ValueFromBlock originalIndex = m_out.anchor(end);
- ValueFromBlock originalPointer = m_out.anchor(butterfly);
+ // Start the fill at butterfly + begin * 8 so that a non-zero begin fills
+ // [begin, end) rather than [0, end - begin).
+ ValueFromBlock originalPointer = m_out.anchor(
+ m_out.add(butterfly, m_out.shl(m_out.zeroExt(begin, pointerType()), m_out.constIntPtr(3))));
- m_out.branch(
- m_out.notZero32(vectorLength), unsure(initLoop), unsure(initDone));
+ m_out.branch(m_out.notEqual(end, begin), unsure(initLoop), unsure(initDone));
LBasicBlock initLastNext = m_out.appendTo(initLoop, initDone);
LValue index = m_out.phi(Int32, originalIndex);
LValue pointer = m_out.phi(pointerType(), originalPointer);
- m_out.store64(
- m_out.constInt64(bitwise_cast<int64_t>(PNaN)),
- TypedPointer(m_heaps.indexedDoubleProperties.atAnyIndex(), pointer));
+ m_out.store64(hole, TypedPointer(heap->atAnyIndex(), pointer));
LValue nextIndex = m_out.sub(index, m_out.int32One);
m_out.addIncomingToPhi(index, m_out.anchor(nextIndex));
m_out.addIncomingToPhi(pointer, m_out.anchor(m_out.add(pointer, m_out.intPtrEight)));
m_out.branch(
- m_out.notZero32(nextIndex), unsure(initLoop), unsure(initDone));
+ m_out.notEqual(nextIndex, begin), unsure(initLoop), unsure(initDone));
m_out.appendTo(initDone, initLastNext);
}
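To make the fill contract above concrete, here is a sketch in plain C++ of what the emitted code computes. fillHoles is an illustrative name, not a helper in this patch; EncodedJSValue is the 8-byte slot type.

    // Sketch only: the begin/end hole fill that initializeArrayElements() emits.
    static void fillHoles(EncodedJSValue* payload, unsigned begin, unsigned end,
        IndexingType indexingType)
    {
        if (hasUndecided(indexingType))
            return; // Undecided arrays have no observable element slots to fill.
        EncodedJSValue hole;
        if (hasDouble(indexingType))
            hole = bitwise_cast<EncodedJSValue>(PNaN); // The hole for double arrays is PNaN.
        else
            hole = JSValue::encode(JSValue()); // Other shapes use the empty value.
        for (unsigned i = begin; i < end; ++i)
            payload[i] = hole;
    }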
@@ -7916,13 +7870,12 @@
LBasicBlock continuation = m_out.newBlock();
LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
-
- LValue endOfStorage = allocateBasicStorageAndGetEnd(
- m_out.constIntPtr(sizeInValues * sizeof(JSValue)), slowPath);
-
+
+ size_t sizeInBytes = sizeInValues * sizeof(JSValue);
+ MarkedAllocator* allocator = vm().heap.allocatorForAuxiliaryData(sizeInBytes);
+ LValue startOfStorage = allocateHeapCell(m_out.constIntPtr(allocator), slowPath);
ValueFromBlock fastButterfly = m_out.anchor(
- m_out.add(m_out.constIntPtr(sizeof(IndexingHeader)), endOfStorage));
-
+ m_out.add(m_out.constIntPtr(sizeInBytes + sizeof(IndexingHeader)), startOfStorage));
m_out.jump(continuation);
m_out.appendTo(slowPath, continuation);
@@ -8487,29 +8440,64 @@
setJSValue(patchpoint);
}
- LValue allocateCell(LValue allocator, LBasicBlock slowPath)
+ LValue allocateHeapCell(LValue allocator, LBasicBlock slowPath)
{
- LBasicBlock success = m_out.newBlock();
-
- LValue result;
- LValue condition;
- if (Options::forceGCSlowPaths()) {
- result = m_out.intPtrZero;
- condition = m_out.booleanFalse;
- } else {
- result = m_out.loadPtr(
- allocator, m_heaps.MarkedAllocator_freeListHead);
- condition = m_out.notNull(result);
+ MarkedAllocator* actualAllocator = nullptr;
+ if (allocator->hasIntPtr())
+ actualAllocator = bitwise_cast<MarkedAllocator*>(allocator->asIntPtr());
+
+ if (!actualAllocator) {
+ // This means that either we know that the allocator is null or we don't know what the
+ // allocator is. In either case, we need the null check.
+ LBasicBlock haveAllocator = m_out.newBlock();
+ LBasicBlock lastNext = m_out.insertNewBlocksBefore(haveAllocator);
+ m_out.branch(allocator, usually(haveAllocator), rarely(slowPath));
+ m_out.appendTo(haveAllocator, lastNext);
}
- m_out.branch(condition, usually(success), rarely(slowPath));
- m_out.appendTo(success);
+ LBasicBlock continuation = m_out.newBlock();
- m_out.storePtr(
- m_out.loadPtr(result, m_heaps.JSCell_freeListNext),
- allocator, m_heaps.MarkedAllocator_freeListHead);
-
- return result;
+ LBasicBlock lastNext = m_out.insertNewBlocksBefore(continuation);
+
+ PatchpointValue* patchpoint = m_out.patchpoint(pointerType());
+ patchpoint->effects.terminal = true;
+ patchpoint->appendSomeRegister(allocator);
+ patchpoint->numGPScratchRegisters++;
+ patchpoint->resultConstraint = ValueRep::SomeEarlyRegister;
+
+ m_out.appendSuccessor(usually(continuation));
+ m_out.appendSuccessor(rarely(slowPath));
+
+ patchpoint->setGenerator(
+ [=] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+ CCallHelpers::JumpList jumpToSlowPath;
+
+ // We use a patchpoint to emit the allocation path because whenever we mess with
+ // allocation paths, we already reason about them at the machine code level. We know
+ // exactly what instruction sequence we want. We're confident that no compiler
+ // optimization could make this code better. So, it's best to have the code in
+ // AssemblyHelpers::emitAllocate(). That way, the same optimized path is shared by
+ // all of the compiler tiers.
+ jit.emitAllocateWithNonNullAllocator(
+ params[0].gpr(), actualAllocator, params[1].gpr(), params.gpScratch(0),
+ jumpToSlowPath);
+
+ CCallHelpers::Jump jumpToSuccess;
+ if (!params.fallsThroughToSuccessor(0))
+ jumpToSuccess = jit.jump();
+
+ Vector<Box<CCallHelpers::Label>> labels = params.successorLabels();
+
+ params.addLatePath(
+ [=] (CCallHelpers& jit) {
+ jumpToSlowPath.linkTo(*labels[1], &jit);
+ if (jumpToSuccess.isSet())
+ jumpToSuccess.linkTo(*labels[0], &jit);
+ });
+ });
+
+ m_out.appendTo(continuation, lastNext);
+ return patchpoint;
}
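The machine code that emitAllocateWithNonNullAllocator() produces is not part of this hunk. Conceptually it pops the allocator's FreeList, which is either a bump range or a linked list of dead cells; here is a hedged C++ sketch using the FreeList fields this patch introduces (tryAllocateFast is an illustrative name):

    // Sketch of the inline fast path; the JIT emits the equivalent in assembly.
    void* tryAllocateFast(FreeList& freeList, unsigned cellSize)
    {
        if (freeList.remaining) {
            // Bump path: cells are carved from payloadEnd - remaining upward.
            char* result = freeList.payloadEnd - freeList.remaining;
            freeList.remaining -= cellSize;
            return result;
        }
        // Pop path: take the head of the singly linked list of dead cells.
        FreeCell* cell = freeList.head;
        if (!cell)
            return nullptr; // Corresponds to branching to the slow-path successor.
        freeList.head = cell->next;
        return cell;
    }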
void storeStructure(LValue object, Structure* structure)
@@ -8522,7 +8510,7 @@
LValue allocateCell(LValue allocator, Structure* structure, LBasicBlock slowPath)
{
- LValue result = allocateCell(allocator, slowPath);
+ LValue result = allocateHeapCell(allocator, slowPath);
storeStructure(result, structure);
return result;
}
@@ -8539,7 +8527,7 @@
LValue allocateObject(
size_t size, Structure* structure, LValue butterfly, LBasicBlock slowPath)
{
- MarkedAllocator* allocator = &vm().heap.allocatorForObjectOfType<ClassType>(size);
+ MarkedAllocator* allocator = vm().heap.allocatorForObjectOfType<ClassType>(size);
return allocateObject(m_out.constIntPtr(allocator), structure, butterfly, slowPath);
}
@@ -8550,46 +8538,60 @@
ClassType::allocationSize(0), structure, butterfly, slowPath);
}
+ LValue allocatorForSize(LValue subspace, LValue size, LBasicBlock slowPath)
+ {
+ static_assert(!(MarkedSpace::sizeStep & (MarkedSpace::sizeStep - 1)), "MarkedSpace::sizeStep must be a power of two.");
+
+ // Try to do some constant-folding here.
+ if (subspace->hasIntPtr() && size->hasIntPtr()) {
+ MarkedSpace::Subspace* actualSubspace = bitwise_cast<MarkedSpace::Subspace*>(subspace->asIntPtr());
+ size_t actualSize = size->asIntPtr();
+
+ MarkedAllocator* actualAllocator = MarkedSpace::allocatorFor(*actualSubspace, actualSize);
+ if (!actualAllocator) {
+ LBasicBlock continuation = m_out.newBlock();
+ LBasicBlock lastNext = m_out.insertNewBlocksBefore(continuation);
+ m_out.jump(slowPath);
+ m_out.appendTo(continuation, lastNext);
+ return m_out.intPtrZero;
+ }
+
+ return m_out.constIntPtr(actualAllocator);
+ }
+
+ unsigned stepShift = getLSBSet(MarkedSpace::sizeStep);
+
+ LBasicBlock continuation = m_out.newBlock();
+
+ LBasicBlock lastNext = m_out.insertNewBlocksBefore(continuation);
+
+ LValue sizeClassIndex = m_out.lShr(
+ m_out.add(size, m_out.constIntPtr(MarkedSpace::sizeStep - 1)),
+ m_out.constInt32(stepShift));
+
+ m_out.branch(
+ m_out.above(sizeClassIndex, m_out.constIntPtr(MarkedSpace::largeCutoff >> stepShift)),
+ rarely(slowPath), usually(continuation));
+
+ m_out.appendTo(continuation, lastNext);
+
+ return m_out.loadPtr(
+ m_out.baseIndex(
+ m_heaps.MarkedSpace_Subspace_allocatorForSizeStep,
+ subspace, m_out.sub(sizeClassIndex, m_out.intPtrOne)));
+ }
+
+ LValue allocatorForSize(MarkedSpace::Subspace& subspace, LValue size, LBasicBlock slowPath)
+ {
+ return allocatorForSize(m_out.constIntPtr(&subspace), size, slowPath);
+ }
+
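In scalar terms, the code above rounds the size up to a multiple of MarkedSpace::sizeStep and uses that as a table index. A hedged sketch, assuming Subspace's allocatorForSizeStep table (added by this patch) is indexable as shown:

    // Sketch of the size class math. With sizeStep == 16, sizes 1..16 map to
    // index 1 and 17..32 to index 2, which is why the baseIndex lookup above
    // subtracts one from sizeClassIndex.
    MarkedAllocator* allocatorForSizeSketch(MarkedSpace::Subspace& subspace, size_t size)
    {
        unsigned stepShift = getLSBSet(MarkedSpace::sizeStep); // sizeStep is a power of two.
        size_t sizeClassIndex = (size + MarkedSpace::sizeStep - 1) >> stepShift;
        if (sizeClassIndex > (MarkedSpace::largeCutoff >> stepShift))
            return nullptr; // Too large for any per-size-class allocator; take the slow path.
        return subspace.allocatorForSizeStep[sizeClassIndex - 1];
    }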
template<typename ClassType>
LValue allocateVariableSizedObject(
LValue size, Structure* structure, LValue butterfly, LBasicBlock slowPath)
{
- static_assert(!(MarkedSpace::preciseStep & (MarkedSpace::preciseStep - 1)), "MarkedSpace::preciseStep must be a power of two.");
- static_assert(!(MarkedSpace::impreciseStep & (MarkedSpace::impreciseStep - 1)), "MarkedSpace::impreciseStep must be a power of two.");
-
- LValue subspace = m_out.constIntPtr(&vm().heap.subspaceForObjectOfType<ClassType>());
-
- LBasicBlock smallCaseBlock = m_out.newBlock();
- LBasicBlock largeOrOversizeCaseBlock = m_out.newBlock();
- LBasicBlock largeCaseBlock = m_out.newBlock();
- LBasicBlock continuation = m_out.newBlock();
-
- LValue uproundedSize = m_out.add(size, m_out.constInt32(MarkedSpace::preciseStep - 1));
- LValue isSmall = m_out.below(uproundedSize, m_out.constInt32(MarkedSpace::preciseCutoff));
- m_out.branch(isSmall, unsure(smallCaseBlock), unsure(largeOrOversizeCaseBlock));
-
- LBasicBlock lastNext = m_out.appendTo(smallCaseBlock, largeOrOversizeCaseBlock);
- TypedPointer address = m_out.baseIndex(
- m_heaps.MarkedSpace_Subspace_preciseAllocators, subspace,
- m_out.zeroExtPtr(m_out.lShr(uproundedSize, m_out.constInt32(getLSBSet(MarkedSpace::preciseStep)))));
- ValueFromBlock smallAllocator = m_out.anchor(address.value());
- m_out.jump(continuation);
-
- m_out.appendTo(largeOrOversizeCaseBlock, largeCaseBlock);
- m_out.branch(
- m_out.below(uproundedSize, m_out.constInt32(MarkedSpace::impreciseCutoff)),
- usually(largeCaseBlock), rarely(slowPath));
-
- m_out.appendTo(largeCaseBlock, continuation);
- address = m_out.baseIndex(
- m_heaps.MarkedSpace_Subspace_impreciseAllocators, subspace,
- m_out.zeroExtPtr(m_out.lShr(uproundedSize, m_out.constInt32(getLSBSet(MarkedSpace::impreciseStep)))));
- ValueFromBlock largeAllocator = m_out.anchor(address.value());
- m_out.jump(continuation);
-
- m_out.appendTo(continuation, lastNext);
- LValue allocator = m_out.phi(pointerType(), smallAllocator, largeAllocator);
-
+ LValue allocator = allocatorForSize(
+ vm().heap.subspaceForObjectOfType<ClassType>(), size, slowPath);
return allocateObject(allocator, structure, butterfly, slowPath);
}
@@ -8622,7 +8624,11 @@
LValue allocateObject(Structure* structure)
{
size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
- MarkedAllocator* allocator = &vm().heap.allocatorForObjectWithoutDestructor(allocationSize);
+ MarkedAllocator* allocator = vm().heap.allocatorForObjectWithoutDestructor(allocationSize);
+
+ // FIXME: If the allocator is null, we could simply emit a normal C call to the allocator
+ // instead of putting it on the slow path.
+ // https://bugs.webkit.org/show_bug.cgi?id=161062
LBasicBlock slowPath = m_out.newBlock();
LBasicBlock continuation = m_out.newBlock();
@@ -8665,75 +8671,122 @@
LValue array;
LValue butterfly;
};
- ArrayValues allocateJSArray(
- Structure* structure, unsigned numElements, LBasicBlock slowPath)
+
+ ArrayValues allocateJSArray(LValue publicLength, Structure* structure, bool shouldInitializeElements = true, bool shouldLargeArraySizeCreateArrayStorage = true)
{
+ JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node->origin.semantic);
+ IndexingType indexingType = structure->indexingType();
ASSERT(
- hasUndecided(structure->indexingType())
- || hasInt32(structure->indexingType())
- || hasDouble(structure->indexingType())
- || hasContiguous(structure->indexingType()));
+ hasUndecided(indexingType)
+ || hasInt32(indexingType)
+ || hasDouble(indexingType)
+ || hasContiguous(indexingType));
+
+ LBasicBlock fastCase = m_out.newBlock();
+ LBasicBlock largeCase = m_out.newBlock();
+ LBasicBlock failCase = m_out.newBlock();
+ LBasicBlock continuation = m_out.newBlock();
+ LBasicBlock slowCase = m_out.newBlock();
- unsigned vectorLength = std::max(BASE_VECTOR_LEN, numElements);
+ LBasicBlock lastNext = m_out.insertNewBlocksBefore(fastCase);
- LValue endOfStorage = allocateBasicStorageAndGetEnd(
- m_out.constIntPtr(sizeof(JSValue) * vectorLength + sizeof(IndexingHeader)),
- slowPath);
+ ValueFromBlock noButterfly = m_out.anchor(m_out.intPtrZero);
- LValue butterfly = m_out.sub(
- endOfStorage, m_out.constIntPtr(sizeof(JSValue) * vectorLength));
+ LValue predicate;
+ if (shouldLargeArraySizeCreateArrayStorage)
+ predicate = m_out.aboveOrEqual(publicLength, m_out.constInt32(MIN_ARRAY_STORAGE_CONSTRUCTION_LENGTH));
+ else
+ predicate = m_out.booleanFalse;
- LValue object = allocateObject<JSArray>(
- structure, butterfly, slowPath);
+ m_out.branch(predicate, rarely(largeCase), usually(fastCase));
- m_out.store32(m_out.constInt32(numElements), butterfly, m_heaps.Butterfly_publicLength);
- m_out.store32(m_out.constInt32(vectorLength), butterfly, m_heaps.Butterfly_vectorLength);
-
- if (hasDouble(structure->indexingType())) {
- for (unsigned i = numElements; i < vectorLength; ++i) {
- m_out.store64(
- m_out.constInt64(bitwise_cast<int64_t>(PNaN)),
- butterfly, m_heaps.indexedDoubleProperties[i]);
+ m_out.appendTo(fastCase, largeCase);
+
+ LValue vectorLength = nullptr;
+ if (publicLength->hasInt32()) {
+ unsigned publicLengthConst = static_cast<unsigned>(publicLength->asInt32());
+ if (publicLengthConst <= MAX_STORAGE_VECTOR_LENGTH) {
+ vectorLength = m_out.constInt32(
+ Butterfly::optimalContiguousVectorLength(
+ structure->outOfLineCapacity(), publicLengthConst));
}
}
- return ArrayValues(object, butterfly);
- }
+ if (!vectorLength) {
+ // We don't compute the optimal vector length for new Array(blah) where blah is not
+ // statically known, since the computational effort of doing it at runtime is probably
+ // not worth it.
+ vectorLength = publicLength;
+ }
+
+ LValue payloadSize =
+ m_out.shl(m_out.zeroExt(vectorLength, pointerType()), m_out.constIntPtr(3));
+
+ LValue butterflySize = m_out.add(
+ payloadSize, m_out.constIntPtr(sizeof(IndexingHeader)));
+
+ LValue allocator = allocatorForSize(
+ vm().heap.subspaceForAuxiliaryData(), butterflySize, failCase);
+ LValue startOfStorage = allocateHeapCell(allocator, failCase);
+
+ LValue butterfly = m_out.add(startOfStorage, m_out.constIntPtr(sizeof(IndexingHeader)));
+
+ m_out.store32(publicLength, butterfly, m_heaps.Butterfly_publicLength);
+ m_out.store32(vectorLength, butterfly, m_heaps.Butterfly_vectorLength);
- ArrayValues allocateJSArray(Structure* structure, unsigned numElements)
- {
- LBasicBlock slowPath = m_out.newBlock();
- LBasicBlock continuation = m_out.newBlock();
+ initializeArrayElements(
+ indexingType,
+ shouldInitializeElements ? m_out.int32Zero : publicLength, vectorLength,
+ butterfly);
- LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
+ ValueFromBlock haveButterfly = m_out.anchor(butterfly);
- ArrayValues fastValues = allocateJSArray(structure, numElements, slowPath);
- ValueFromBlock fastArray = m_out.anchor(fastValues.array);
- ValueFromBlock fastButterfly = m_out.anchor(fastValues.butterfly);
-
+ LValue object = allocateObject<JSArray>(structure, butterfly, failCase);
+
+ ValueFromBlock fastResult = m_out.anchor(object);
+ ValueFromBlock fastButterfly = m_out.anchor(butterfly);
m_out.jump(continuation);
- m_out.appendTo(slowPath, continuation);
+ m_out.appendTo(largeCase, failCase);
+ ValueFromBlock largeStructure = m_out.anchor(
+ m_out.constIntPtr(
+ globalObject->arrayStructureForIndexingTypeDuringAllocation(ArrayWithArrayStorage)));
+ m_out.jump(slowCase);
+
+ m_out.appendTo(failCase, slowCase);
+ ValueFromBlock failStructure = m_out.anchor(m_out.constIntPtr(structure));
+ m_out.jump(slowCase);
+
+ m_out.appendTo(slowCase, continuation);
+ LValue structureValue = m_out.phi(pointerType(), largeStructure, failStructure);
+ LValue butterflyValue = m_out.phi(pointerType(), noButterfly, haveButterfly);
- LValue slowArrayValue = lazySlowPath(
+ LValue slowResultValue = lazySlowPath(
[=] (const Vector<Location>& locations) -> RefPtr<LazySlowPath::Generator> {
return createLazyCallGenerator(
operationNewArrayWithSize, locations[0].directGPR(),
- CCallHelpers::TrustedImmPtr(structure), CCallHelpers::TrustedImm32(numElements));
- });
- ValueFromBlock slowArray = m_out.anchor(slowArrayValue);
+ locations[1].directGPR(), locations[2].directGPR(), locations[3].directGPR());
+ },
+ structureValue, publicLength, butterflyValue);
+ ValueFromBlock slowResult = m_out.anchor(slowResultValue);
ValueFromBlock slowButterfly = m_out.anchor(
- m_out.loadPtr(slowArrayValue, m_heaps.JSObject_butterfly));
-
+ m_out.loadPtr(slowResultValue, m_heaps.JSObject_butterfly));
m_out.jump(continuation);
m_out.appendTo(continuation, lastNext);
-
return ArrayValues(
- m_out.phi(pointerType(), fastArray, slowArray),
+ m_out.phi(pointerType(), fastResult, slowResult),
m_out.phi(pointerType(), fastButterfly, slowButterfly));
}
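A note on Butterfly::optimalContiguousVectorLength(): the butterfly allocation is outOfLineCapacity * sizeof(JSValue) of property storage, then an IndexingHeader, then vectorLength 8-byte slots. When publicLength is a constant we round vectorLength up so the butterfly consumes its entire MarkedSpace size class. A hedged sketch of that rounding, assuming MarkedSpace::optimalSizeFor() returns the size-class-rounded byte count:

    // Sketch: turn size class slack into extra vector capacity we already paid for.
    unsigned optimalContiguousVectorLengthSketch(size_t propertyCapacity, unsigned vectorLength)
    {
        size_t overhead = propertyCapacity * sizeof(JSValue) + sizeof(IndexingHeader);
        size_t requestedSize = overhead + vectorLength * sizeof(JSValue);
        size_t allocatedSize = MarkedSpace::optimalSizeFor(requestedSize);
        return static_cast<unsigned>((allocatedSize - overhead) / sizeof(JSValue));
    }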
+ ArrayValues allocateUninitializedContiguousJSArray(LValue publicLength, Structure* structure)
+ {
+ bool shouldInitializeElements = false;
+ bool shouldLargeArraySizeCreateArrayStorage = false;
+ return allocateJSArray(
+ publicLength, structure, shouldInitializeElements,
+ shouldLargeArraySizeCreateArrayStorage);
+ }
+
LValue ensureShadowChickenPacket()
{
LBasicBlock slowCase = m_out.newBlock();
diff --git a/Source/JavaScriptCore/ftl/FTLOutput.cpp b/Source/JavaScriptCore/ftl/FTLOutput.cpp
index fea12dc..2b1f163 100644
--- a/Source/JavaScriptCore/ftl/FTLOutput.cpp
+++ b/Source/JavaScriptCore/ftl/FTLOutput.cpp
@@ -100,7 +100,9 @@
LValue Output::constBool(bool value)
{
- return m_block->appendNew<B3::Const32Value>(m_proc, origin(), value);
+ if (value)
+ return booleanTrue;
+ return booleanFalse;
}
LValue Output::constInt32(int32_t value)
@@ -125,6 +127,10 @@
LValue Output::add(LValue left, LValue right)
{
+ if (Value* result = left->addConstant(m_proc, right)) {
+ m_block->append(result);
+ return result;
+ }
return m_block->appendNew<B3::Value>(m_proc, B3::Add, origin(), left, right);
}
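These Output-level folds are what let the allocation code above constant-fold: butterfly sizes are often built entirely from constIntPtr() pieces, and folding here means allocatorForSize() sees a value whose hasIntPtr() is true. An illustrative consequence:

    // Illustrative: both operands are constants, so add() yields a constant Value
    // rather than a B3 Add node, and downstream hasIntPtr() checks succeed.
    LValue butterflySize = m_out.add(
        m_out.constIntPtr(8 * sizeof(JSValue)), m_out.constIntPtr(sizeof(IndexingHeader)));
    ASSERT(butterflySize->hasIntPtr());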
@@ -205,17 +211,32 @@
LValue Output::shl(LValue left, LValue right)
{
- return m_block->appendNew<B3::Value>(m_proc, B3::Shl, origin(), left, castToInt32(right));
+ right = castToInt32(right);
+ if (Value* result = left->shlConstant(m_proc, right)) {
+ m_block->append(result);
+ return result;
+ }
+ return m_block->appendNew<B3::Value>(m_proc, B3::Shl, origin(), left, right);
}
LValue Output::aShr(LValue left, LValue right)
{
- return m_block->appendNew<B3::Value>(m_proc, B3::SShr, origin(), left, castToInt32(right));
+ right = castToInt32(right);
+ if (Value* result = left->sShrConstant(m_proc, right)) {
+ m_block->append(result);
+ return result;
+ }
+ return m_block->appendNew<B3::Value>(m_proc, B3::SShr, origin(), left, right);
}
LValue Output::lShr(LValue left, LValue right)
{
- return m_block->appendNew<B3::Value>(m_proc, B3::ZShr, origin(), left, castToInt32(right));
+ right = castToInt32(right);
+ if (Value* result = left->zShrConstant(m_proc, right)) {
+ m_block->append(result);
+ return result;
+ }
+ return m_block->appendNew<B3::Value>(m_proc, B3::ZShr, origin(), left, right);
}
LValue Output::bitNot(LValue value)
@@ -343,6 +364,8 @@
{
if (value->type() == type)
return value;
+ if (value->hasInt32())
+ return m_block->appendIntConstant(m_proc, origin(), Int64, static_cast<uint64_t>(static_cast<uint32_t>(value->asInt32())));
return m_block->appendNew<B3::Value>(m_proc, B3::ZExt32, origin(), value);
}
@@ -358,8 +381,11 @@
LValue Output::castToInt32(LValue value)
{
- return value->type() == B3::Int32 ? value :
- m_block->appendNew<B3::Value>(m_proc, B3::Trunc, origin(), value);
+ if (value->type() == Int32)
+ return value;
+ if (value->hasInt64())
+ return constInt32(static_cast<int32_t>(value->asInt64()));
+ return m_block->appendNew<B3::Value>(m_proc, B3::Trunc, origin(), value);
}
LValue Output::doubleToFloat(LValue value)
@@ -453,51 +479,81 @@
LValue Output::equal(LValue left, LValue right)
{
+ TriState result = left->equalConstant(right);
+ if (result != MixedTriState)
+ return constBool(result == TrueTriState);
return m_block->appendNew<B3::Value>(m_proc, B3::Equal, origin(), left, right);
}
LValue Output::notEqual(LValue left, LValue right)
{
+ TriState result = left->notEqualConstant(right);
+ if (result != MixedTriState)
+ return constBool(result == TrueTriState);
return m_block->appendNew<B3::Value>(m_proc, B3::NotEqual, origin(), left, right);
}
LValue Output::above(LValue left, LValue right)
{
+ TriState result = left->aboveConstant(right);
+ if (result != MixedTriState)
+ return constBool(result == TrueTriState);
return m_block->appendNew<B3::Value>(m_proc, B3::Above, origin(), left, right);
}
LValue Output::aboveOrEqual(LValue left, LValue right)
{
+ TriState result = left->aboveEqualConstant(right);
+ if (result != MixedTriState)
+ return constBool(result == TrueTriState);
return m_block->appendNew<B3::Value>(m_proc, B3::AboveEqual, origin(), left, right);
}
LValue Output::below(LValue left, LValue right)
{
+ TriState result = left->belowConstant(right);
+ if (result != MixedTriState)
+ return constBool(result == TrueTriState);
return m_block->appendNew<B3::Value>(m_proc, B3::Below, origin(), left, right);
}
LValue Output::belowOrEqual(LValue left, LValue right)
{
+ TriState result = left->belowEqualConstant(right);
+ if (result != MixedTriState)
+ return constBool(result == TrueTriState);
return m_block->appendNew<B3::Value>(m_proc, B3::BelowEqual, origin(), left, right);
}
LValue Output::greaterThan(LValue left, LValue right)
{
+ TriState result = left->greaterThanConstant(right);
+ if (result != MixedTriState)
+ return constBool(result == TrueTriState);
return m_block->appendNew<B3::Value>(m_proc, B3::GreaterThan, origin(), left, right);
}
LValue Output::greaterThanOrEqual(LValue left, LValue right)
{
+ TriState result = left->greaterEqualConstant(right);
+ if (result != MixedTriState)
+ return constBool(result == TrueTriState);
return m_block->appendNew<B3::Value>(m_proc, B3::GreaterEqual, origin(), left, right);
}
LValue Output::lessThan(LValue left, LValue right)
{
+ TriState result = left->lessThanConstant(right);
+ if (result != MixedTriState)
+ return constBool(result == TrueTriState);
return m_block->appendNew<B3::Value>(m_proc, B3::LessThan, origin(), left, right);
}
LValue Output::lessThanOrEqual(LValue left, LValue right)
{
+ TriState result = left->lessEqualConstant(right);
+ if (result != MixedTriState)
+ return constBool(result == TrueTriState);
return m_block->appendNew<B3::Value>(m_proc, B3::LessEqual, origin(), left, right);
}
@@ -583,6 +639,12 @@
LValue Output::select(LValue value, LValue taken, LValue notTaken)
{
+ if (value->hasInt32()) {
+ if (value->asInt32())
+ return taken;
+ return notTaken;
+ }
return m_block->appendNew<B3::Value>(m_proc, B3::Select, origin(), value, taken, notTaken);
}
@@ -621,6 +683,11 @@
m_block->appendNewControlValue(m_proc, B3::Oops, origin());
}
+void Output::appendSuccessor(WeightedTarget target)
+{
+ m_block->appendSuccessor(target.frequentedBlock());
+}
+
CheckValue* Output::speculate(LValue value)
{
return m_block->appendNew<B3::CheckValue>(m_proc, B3::Check, origin(), value);
@@ -741,7 +808,8 @@
void Output::addIncomingToPhi(LValue phi, ValueFromBlock value)
{
- value.value()->as<B3::UpsilonValue>()->setPhi(phi);
+ if (value)
+ value.value()->as<B3::UpsilonValue>()->setPhi(phi);
}
} } // namespace JSC::FTL
diff --git a/Source/JavaScriptCore/ftl/FTLOutput.h b/Source/JavaScriptCore/ftl/FTLOutput.h
index 39d3edf..d0c867b 100644
--- a/Source/JavaScriptCore/ftl/FTLOutput.h
+++ b/Source/JavaScriptCore/ftl/FTLOutput.h
@@ -398,6 +398,8 @@
void ret(LValue);
void unreachable();
+
+ void appendSuccessor(WeightedTarget);
B3::CheckValue* speculate(LValue);
B3::CheckValue* speculateAdd(LValue, LValue);
diff --git a/Source/JavaScriptCore/ftl/FTLValueFromBlock.h b/Source/JavaScriptCore/ftl/FTLValueFromBlock.h
index 31246fd..bf39dd4 100644
--- a/Source/JavaScriptCore/ftl/FTLValueFromBlock.h
+++ b/Source/JavaScriptCore/ftl/FTLValueFromBlock.h
@@ -45,6 +45,8 @@
, m_block(block)
{
}
+
+ explicit operator bool() const { return m_value || m_block; }
LValue value() const { return m_value; }
LBasicBlock block() const { return m_block; }
diff --git a/Source/JavaScriptCore/ftl/FTLWeightedTarget.h b/Source/JavaScriptCore/ftl/FTLWeightedTarget.h
index dab5a7e..a57fffe 100644
--- a/Source/JavaScriptCore/ftl/FTLWeightedTarget.h
+++ b/Source/JavaScriptCore/ftl/FTLWeightedTarget.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -55,6 +55,11 @@
LBasicBlock target() const { return m_target; }
Weight weight() const { return m_weight; }
+ B3::FrequentedBlock frequentedBlock() const
+ {
+ return B3::FrequentedBlock(target(), weight().frequencyClass());
+ }
+
private:
LBasicBlock m_target;
Weight m_weight;
diff --git a/Source/JavaScriptCore/heap/CellContainer.h b/Source/JavaScriptCore/heap/CellContainer.h
new file mode 100644
index 0000000..df09815
--- /dev/null
+++ b/Source/JavaScriptCore/heap/CellContainer.h
@@ -0,0 +1,93 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include <wtf/StdLibExtras.h>
+
+namespace JSC {
+
+class HeapCell;
+class LargeAllocation;
+class MarkedBlock;
+class WeakSet;
+
+// This is how we abstract over either MarkedBlock& or LargeAllocation&. Put things in here as you
+// find need for them.
+
+class CellContainer {
+public:
+ CellContainer()
+ : m_encodedPointer(0)
+ {
+ }
+
+ CellContainer(MarkedBlock& markedBlock)
+ : m_encodedPointer(bitwise_cast<uintptr_t>(&markedBlock))
+ {
+ }
+
+ CellContainer(LargeAllocation& largeAllocation)
+ : m_encodedPointer(bitwise_cast<uintptr_t>(&largeAllocation) | isLargeAllocationBit)
+ {
+ }
+
+ explicit operator bool() const { return !!m_encodedPointer; }
+
+ bool isMarkedBlock() const { return m_encodedPointer && !(m_encodedPointer & isLargeAllocationBit); }
+ bool isLargeAllocation() const { return m_encodedPointer & isLargeAllocationBit; }
+
+ MarkedBlock& markedBlock() const
+ {
+ ASSERT(isMarkedBlock());
+ return *bitwise_cast<MarkedBlock*>(m_encodedPointer);
+ }
+
+ LargeAllocation& largeAllocation() const
+ {
+ ASSERT(isLargeAllocation());
+ return *bitwise_cast<LargeAllocation*>(m_encodedPointer - isLargeAllocationBit);
+ }
+
+ void flipIfNecessary(uint64_t heapVersion);
+ void flipIfNecessary();
+
+ bool isMarked() const;
+ bool isMarked(HeapCell*) const;
+ bool isMarkedOrNewlyAllocated(HeapCell*) const;
+
+ void noteMarked();
+
+ size_t cellSize() const;
+
+ WeakSet& weakSet() const;
+
+private:
+ static const uintptr_t isLargeAllocationBit = 1;
+ uintptr_t m_encodedPointer;
+};
+
+} // namespace JSC
+
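The tag works because MarkedBlock and LargeAllocation are both at least 2-byte aligned, leaving the low bit free. A small illustrative use, mirroring the inline functions added below:

    // Illustrative: callers dispatch on the tag bit without caring how the cell
    // was allocated; this is the same shape as CellContainer::cellSize().
    size_t cellSizeOf(JSC::CellContainer container)
    {
        if (container.isLargeAllocation())
            return container.largeAllocation().cellSize();
        return container.markedBlock().cellSize();
    }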
diff --git a/Source/JavaScriptCore/heap/CellContainerInlines.h b/Source/JavaScriptCore/heap/CellContainerInlines.h
new file mode 100644
index 0000000..86a741b
--- /dev/null
+++ b/Source/JavaScriptCore/heap/CellContainerInlines.h
@@ -0,0 +1,89 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "CellContainer.h"
+#include "JSCell.h"
+#include "LargeAllocation.h"
+#include "MarkedBlock.h"
+
+namespace JSC {
+
+inline bool CellContainer::isMarked() const
+{
+ if (isLargeAllocation())
+ return true;
+ return markedBlock().handle().isMarked();
+}
+
+inline bool CellContainer::isMarked(HeapCell* cell) const
+{
+ if (isLargeAllocation())
+ return largeAllocation().isMarked();
+ return markedBlock().isMarked(cell);
+}
+
+inline bool CellContainer::isMarkedOrNewlyAllocated(HeapCell* cell) const
+{
+ if (isLargeAllocation())
+ return largeAllocation().isMarkedOrNewlyAllocated();
+ return markedBlock().isMarkedOrNewlyAllocated(cell);
+}
+
+inline void CellContainer::noteMarked()
+{
+ if (!isLargeAllocation())
+ markedBlock().noteMarked();
+}
+
+inline size_t CellContainer::cellSize() const
+{
+ if (isLargeAllocation())
+ return largeAllocation().cellSize();
+ return markedBlock().cellSize();
+}
+
+inline WeakSet& CellContainer::weakSet() const
+{
+ if (isLargeAllocation())
+ return largeAllocation().weakSet();
+ return markedBlock().weakSet();
+}
+
+inline void CellContainer::flipIfNecessary(uint64_t heapVersion)
+{
+ if (!isLargeAllocation())
+ markedBlock().flipIfNecessary(heapVersion);
+}
+
+inline void CellContainer::flipIfNecessary()
+{
+ if (!isLargeAllocation())
+ markedBlock().flipIfNecessary();
+}
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/heap/ConservativeRoots.cpp b/Source/JavaScriptCore/heap/ConservativeRoots.cpp
index e1e1200..87cb398 100644
--- a/Source/JavaScriptCore/heap/ConservativeRoots.cpp
+++ b/Source/JavaScriptCore/heap/ConservativeRoots.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -31,6 +31,8 @@
#include "CopiedSpace.h"
#include "CopiedSpaceInlines.h"
#include "HeapInlines.h"
+#include "HeapUtil.h"
+#include "JITStubRoutineSet.h"
#include "JSCell.h"
#include "JSObject.h"
#include "JSCInlines.h"
@@ -39,46 +41,46 @@
namespace JSC {
-ConservativeRoots::ConservativeRoots(MarkedBlockSet* blocks, CopiedSpace* copiedSpace)
+ConservativeRoots::ConservativeRoots(Heap& heap)
: m_roots(m_inlineRoots)
, m_size(0)
, m_capacity(inlineCapacity)
- , m_blocks(blocks)
- , m_copiedSpace(copiedSpace)
+ , m_heap(heap)
{
}
ConservativeRoots::~ConservativeRoots()
{
if (m_roots != m_inlineRoots)
- OSAllocator::decommitAndRelease(m_roots, m_capacity * sizeof(JSCell*));
+ OSAllocator::decommitAndRelease(m_roots, m_capacity * sizeof(HeapCell*));
}
void ConservativeRoots::grow()
{
size_t newCapacity = m_capacity == inlineCapacity ? nonInlineCapacity : m_capacity * 2;
- JSCell** newRoots = static_cast<JSCell**>(OSAllocator::reserveAndCommit(newCapacity * sizeof(JSCell*)));
- memcpy(newRoots, m_roots, m_size * sizeof(JSCell*));
+ HeapCell** newRoots = static_cast<HeapCell**>(OSAllocator::reserveAndCommit(newCapacity * sizeof(HeapCell*)));
+ memcpy(newRoots, m_roots, m_size * sizeof(HeapCell*));
if (m_roots != m_inlineRoots)
- OSAllocator::decommitAndRelease(m_roots, m_capacity * sizeof(JSCell*));
+ OSAllocator::decommitAndRelease(m_roots, m_capacity * sizeof(HeapCell*));
m_capacity = newCapacity;
m_roots = newRoots;
}
template<typename MarkHook>
-inline void ConservativeRoots::genericAddPointer(void* p, TinyBloomFilter filter, MarkHook& markHook)
+inline void ConservativeRoots::genericAddPointer(void* p, int64_t version, TinyBloomFilter filter, MarkHook& markHook)
{
markHook.mark(p);
- m_copiedSpace->pinIfNecessary(p);
+ m_heap.storageSpace().pinIfNecessary(p);
- if (!Heap::isPointerGCObject(filter, *m_blocks, p))
- return;
-
- if (m_size == m_capacity)
- grow();
-
- m_roots[m_size++] = static_cast<JSCell*>(p);
+ HeapUtil::findGCObjectPointersForMarking(
+ m_heap, version, filter, p,
+ [&] (void* p) {
+ if (m_size == m_capacity)
+ grow();
+
+ m_roots[m_size++] = bitwise_cast<HeapCell*>(p);
+ });
}
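HeapUtil::findGCObjectPointersForMarking() (added elsewhere in this patch) must be more permissive than the old isPointerGCObject() check: a conservative root that keeps a butterfly alive points past the start of its auxiliary cell, not at it. A toy model of the extra probing, with illustrative names; the committed helper does the real block and version filtering:

    // Toy model only: a butterfly pointer points just past its IndexingHeader, so
    // the scanner probes the shifted address as well as the raw one.
    template<typename Func>
    void addCandidateCellAddresses(char* pointer, const Func& tryCellAddress)
    {
        tryCellAddress(pointer); // The pointer may be a cell address itself.
        tryCellAddress(pointer - sizeof(IndexingHeader)); // Or a butterfly into one.
    }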
template<typename MarkHook>
@@ -94,9 +96,10 @@
RELEASE_ASSERT(isPointerAligned(begin));
RELEASE_ASSERT(isPointerAligned(end));
- TinyBloomFilter filter = m_blocks->filter(); // Make a local copy of filter to show the compiler it won't alias, and can be register-allocated.
+ TinyBloomFilter filter = m_heap.objectSpace().blocks().filter(); // Make a local copy of filter to show the compiler it won't alias, and can be register-allocated.
+ int64_t version = m_heap.objectSpace().version();
for (char** it = static_cast<char**>(begin); it != static_cast<char**>(end); ++it)
- genericAddPointer(*it, filter, markHook);
+ genericAddPointer(*it, version, filter, markHook);
}
class DummyMarkHook {
diff --git a/Source/JavaScriptCore/heap/ConservativeRoots.h b/Source/JavaScriptCore/heap/ConservativeRoots.h
index 6343548..b570cb3 100644
--- a/Source/JavaScriptCore/heap/ConservativeRoots.h
+++ b/Source/JavaScriptCore/heap/ConservativeRoots.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2009 Apple Inc. All rights reserved.
+ * Copyright (C) 2009, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -31,12 +31,12 @@
namespace JSC {
class CodeBlockSet;
+class HeapCell;
class JITStubRoutineSet;
-class JSCell;
class ConservativeRoots {
public:
- ConservativeRoots(MarkedBlockSet*, CopiedSpace*);
+ ConservativeRoots(Heap&);
~ConservativeRoots();
void add(void* begin, void* end);
@@ -44,26 +44,25 @@
void add(void* begin, void* end, JITStubRoutineSet&, CodeBlockSet&);
size_t size();
- JSCell** roots();
+ HeapCell** roots();
private:
static const size_t inlineCapacity = 128;
- static const size_t nonInlineCapacity = 8192 / sizeof(JSCell*);
+ static const size_t nonInlineCapacity = 8192 / sizeof(HeapCell*);
template<typename MarkHook>
- void genericAddPointer(void*, TinyBloomFilter, MarkHook&);
+ void genericAddPointer(void*, int64_t heapVersion, TinyBloomFilter, MarkHook&);
template<typename MarkHook>
void genericAddSpan(void*, void* end, MarkHook&);
void grow();
- JSCell** m_roots;
+ HeapCell** m_roots;
size_t m_size;
size_t m_capacity;
- MarkedBlockSet* m_blocks;
- CopiedSpace* m_copiedSpace;
- JSCell* m_inlineRoots[inlineCapacity];
+ Heap& m_heap;
+ HeapCell* m_inlineRoots[inlineCapacity];
};
inline size_t ConservativeRoots::size()
@@ -71,7 +70,7 @@
return m_size;
}
-inline JSCell** ConservativeRoots::roots()
+inline HeapCell** ConservativeRoots::roots()
{
return m_roots;
}
diff --git a/Source/JavaScriptCore/heap/CopyToken.h b/Source/JavaScriptCore/heap/CopyToken.h
index e8f8109..927b24a 100644
--- a/Source/JavaScriptCore/heap/CopyToken.h
+++ b/Source/JavaScriptCore/heap/CopyToken.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2015-2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -29,7 +29,6 @@
namespace JSC {
enum CopyToken {
- ButterflyCopyToken,
TypedArrayVectorCopyToken,
MapBackingStoreCopyToken,
DirectArgumentsOverridesCopyToken
diff --git a/Source/JavaScriptCore/heap/FreeList.cpp b/Source/JavaScriptCore/heap/FreeList.cpp
new file mode 100644
index 0000000..43bc7ae
--- /dev/null
+++ b/Source/JavaScriptCore/heap/FreeList.cpp
@@ -0,0 +1,37 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "FreeList.h"
+
+namespace JSC {
+
+void FreeList::dump(PrintStream& out) const
+{
+ out.print("{head = ", RawPointer(head), ", payloadEnd = ", RawPointer(payloadEnd), ", remaining = ", remaining, ", originalSize = ", originalSize, "}");
+}
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/heap/FreeList.h b/Source/JavaScriptCore/heap/FreeList.h
new file mode 100644
index 0000000..842caa6
--- /dev/null
+++ b/Source/JavaScriptCore/heap/FreeList.h
@@ -0,0 +1,91 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include <wtf/PrintStream.h>
+
+namespace JSC {
+
+struct FreeCell {
+ FreeCell* next;
+};
+
+// This representation of a FreeList is convenient for the MarkedAllocator.
+
+struct FreeList {
+ FreeCell* head { nullptr };
+ char* payloadEnd { nullptr };
+ unsigned remaining { 0 };
+ unsigned originalSize { 0 };
+
+ FreeList()
+ {
+ }
+
+ static FreeList list(FreeCell* head, unsigned bytes)
+ {
+ FreeList result;
+ result.head = head;
+ result.remaining = 0;
+ result.originalSize = bytes;
+ return result;
+ }
+
+ static FreeList bump(char* payloadEnd, unsigned remaining)
+ {
+ FreeList result;
+ result.payloadEnd = payloadEnd;
+ result.remaining = remaining;
+ result.originalSize = remaining;
+ return result;
+ }
+
+ bool operator==(const FreeList& other) const
+ {
+ return head == other.head
+ && payloadEnd == other.payloadEnd
+ && remaining == other.remaining
+ && originalSize == other.originalSize;
+ }
+
+ bool operator!=(const FreeList& other) const
+ {
+ return !(*this == other);
+ }
+
+ explicit operator bool() const
+ {
+ return *this != FreeList();
+ }
+
+ bool allocationWillFail() const { return !head && !remaining; }
+ bool allocationWillSucceed() const { return !allocationWillFail(); }
+
+ void dump(PrintStream&) const;
+};
+
+} // namespace JSC
+
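The two factory functions correspond to the two ways a swept block can hand out memory; a hedged sketch of the choice (illustrative, the real decision lives in MarkedBlock's sweep):

    // Illustrative: an entirely-empty block becomes a bump range; a partially
    // live block becomes a list threaded through its dead cells.
    FreeList freeListForSweptBlock(char* payloadBegin, char* payloadEnd,
        bool blockIsCompletelyEmpty, FreeCell* deadCellsHead, unsigned deadBytes)
    {
        if (blockIsCompletelyEmpty)
            return FreeList::bump(payloadEnd, static_cast<unsigned>(payloadEnd - payloadBegin));
        return FreeList::list(deadCellsHead, deadBytes);
    }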
diff --git a/Source/JavaScriptCore/heap/GCTypeMap.h b/Source/JavaScriptCore/heap/GCTypeMap.h
new file mode 100644
index 0000000..d0df25c
--- /dev/null
+++ b/Source/JavaScriptCore/heap/GCTypeMap.h
@@ -0,0 +1,56 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "HeapOperation.h"
+#include <wtf/Assertions.h>
+
+namespace JSC {
+
+template<typename T>
+struct GCTypeMap {
+ T eden;
+ T full;
+
+ T& operator[](HeapOperation operation)
+ {
+ if (operation == FullCollection)
+ return full;
+ ASSERT(operation == EdenCollection);
+ return eden;
+ }
+
+ const T& operator[](HeapOperation operation) const
+ {
+ if (operation == FullCollection)
+ return full;
+ ASSERT(operation == EdenCollection);
+ return eden;
+ }
+};
+
+} // namespace JSC
+
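Heap.cpp below uses this to keep a separate timing aggregate per collection type; minimal usage looks like:

    // Minimal usage: one SimpleStats per collection type, selected by HeapOperation.
    GCTypeMap<WTF::SimpleStats> stats;
    stats[FullCollection].add(12.5); // e.g. a full collection took 12.5 ms
    stats[EdenCollection].add(1.5);
    dataLog("full mean: ", stats[FullCollection].mean(), " ms\n");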
diff --git a/Source/JavaScriptCore/heap/Heap.cpp b/Source/JavaScriptCore/heap/Heap.cpp
index 09611aa..1e69a07 100644
--- a/Source/JavaScriptCore/heap/Heap.cpp
+++ b/Source/JavaScriptCore/heap/Heap.cpp
@@ -31,6 +31,7 @@
#include "FullGCActivityCallback.h"
#include "GCActivityCallback.h"
#include "GCIncomingRefCountedSetInlines.h"
+#include "GCTypeMap.h"
#include "HeapHelperPool.h"
#include "HeapIterationScope.h"
#include "HeapProfiler.h"
@@ -40,6 +41,7 @@
#include "HeapVerifier.h"
#include "IncrementalSweeper.h"
#include "Interpreter.h"
+#include "JITStubRoutineSet.h"
#include "JITWorklist.h"
#include "JSCInlines.h"
#include "JSGlobalObject.h"
@@ -47,6 +49,7 @@
#include "JSVirtualMachineInternal.h"
#include "SamplingProfiler.h"
#include "ShadowChicken.h"
+#include "SuperSampler.h"
#include "TypeProfilerLog.h"
#include "UnlinkedCodeBlock.h"
#include "VM.h"
@@ -57,6 +60,7 @@
#include <wtf/ParallelVectorIterator.h>
#include <wtf/ProcessID.h>
#include <wtf/RAMSize.h>
+#include <wtf/SimpleStats.h>
#if USE(FOUNDATION)
#if __has_include(<objc/objc-internal.h>)
@@ -74,159 +78,16 @@
namespace {
static const size_t largeHeapSize = 32 * MB; // About 1.5X the average webpage.
-static const size_t smallHeapSize = 1 * MB; // Matches the FastMalloc per-thread cache.
+const size_t smallHeapSize = 1 * MB; // Matches the FastMalloc per-thread cache.
-#define ENABLE_GC_LOGGING 0
-
-#if ENABLE(GC_LOGGING)
-#if COMPILER(CLANG)
-#define DEFINE_GC_LOGGING_GLOBAL(type, name, arguments) \
-_Pragma("clang diagnostic push") \
-_Pragma("clang diagnostic ignored \"-Wglobal-constructors\"") \
-_Pragma("clang diagnostic ignored \"-Wexit-time-destructors\"") \
-static type name arguments; \
-_Pragma("clang diagnostic pop")
-#else
-#define DEFINE_GC_LOGGING_GLOBAL(type, name, arguments) \
-static type name arguments;
-#endif // COMPILER(CLANG)
-
-struct GCTimer {
- GCTimer(const char* name)
- : name(name)
- {
- }
- ~GCTimer()
- {
- logData(allCollectionData, "(All)");
- logData(edenCollectionData, "(Eden)");
- logData(fullCollectionData, "(Full)");
- }
-
- struct TimeRecord {
- TimeRecord()
- : time(0)
- , min(std::numeric_limits<double>::infinity())
- , max(0)
- , count(0)
- {
- }
-
- double time;
- double min;
- double max;
- size_t count;
- };
-
- void logData(const TimeRecord& data, const char* extra)
- {
- dataLogF("[%d] %s (Parent: %s) %s: %.2lfms (avg. %.2lf, min. %.2lf, max. %.2lf, count %lu)\n",
- getCurrentProcessID(),
- name,
- parent ? parent->name : "nullptr",
- extra,
- data.time * 1000,
- data.time * 1000 / data.count,
- data.min * 1000,
- data.max * 1000,
- data.count);
- }
-
- void updateData(TimeRecord& data, double duration)
- {
- if (duration < data.min)
- data.min = duration;
- if (duration > data.max)
- data.max = duration;
- data.count++;
- data.time += duration;
- }
-
- void didFinishPhase(HeapOperation collectionType, double duration)
- {
- TimeRecord& data = collectionType == EdenCollection ? edenCollectionData : fullCollectionData;
- updateData(data, duration);
- updateData(allCollectionData, duration);
- }
-
- static GCTimer* s_currentGlobalTimer;
-
- TimeRecord allCollectionData;
- TimeRecord fullCollectionData;
- TimeRecord edenCollectionData;
- const char* name;
- GCTimer* parent { nullptr };
-};
-
-GCTimer* GCTimer::s_currentGlobalTimer = nullptr;
-
-struct GCTimerScope {
- GCTimerScope(GCTimer& timer, HeapOperation collectionType)
- : timer(timer)
- , start(WTF::monotonicallyIncreasingTime())
- , collectionType(collectionType)
- {
- timer.parent = GCTimer::s_currentGlobalTimer;
- GCTimer::s_currentGlobalTimer = &timer;
- }
- ~GCTimerScope()
- {
- double delta = WTF::monotonicallyIncreasingTime() - start;
- timer.didFinishPhase(collectionType, delta);
- GCTimer::s_currentGlobalTimer = timer.parent;
- }
- GCTimer& timer;
- double start;
- HeapOperation collectionType;
-};
-
-struct GCCounter {
- GCCounter(const char* name)
- : name(name)
- , count(0)
- , total(0)
- , min(10000000)
- , max(0)
- {
- }
-
- void add(size_t amount)
- {
- count++;
- total += amount;
- if (amount < min)
- min = amount;
- if (amount > max)
- max = amount;
- }
- ~GCCounter()
- {
- dataLogF("[%d] %s: %zu values (avg. %zu, min. %zu, max. %zu)\n", getCurrentProcessID(), name, total, total / count, min, max);
- }
- const char* name;
- size_t count;
- size_t total;
- size_t min;
- size_t max;
-};
-
-#define GCPHASE(name) DEFINE_GC_LOGGING_GLOBAL(GCTimer, name##Timer, (#name)); GCTimerScope name##TimerScope(name##Timer, m_operationInProgress)
-#define GCCOUNTER(name, value) do { DEFINE_GC_LOGGING_GLOBAL(GCCounter, name##Counter, (#name)); name##Counter.add(value); } while (false)
-
-#else
-
-#define GCPHASE(name) do { } while (false)
-#define GCCOUNTER(name, value) do { } while (false)
-#endif
-
-static inline size_t minHeapSize(HeapType heapType, size_t ramSize)
+size_t minHeapSize(HeapType heapType, size_t ramSize)
{
if (heapType == LargeHeap)
return min(largeHeapSize, ramSize / 4);
return smallHeapSize;
}
-static inline size_t proportionalHeapSize(size_t heapSize, size_t ramSize)
+size_t proportionalHeapSize(size_t heapSize, size_t ramSize)
{
// Try to stay under 1/2 RAM size to leave room for the DOM, rendering, networking, etc.
if (heapSize < ramSize / 4)
@@ -236,12 +97,12 @@
return 1.25 * heapSize;
}
-static inline bool isValidSharedInstanceThreadState(VM* vm)
+bool isValidSharedInstanceThreadState(VM* vm)
{
return vm->currentThreadIsHoldingAPILock();
}
-static inline bool isValidThreadState(VM* vm)
+bool isValidThreadState(VM* vm)
{
if (vm->atomicStringTable() != wtfThreadData().atomicStringTable())
return false;
@@ -252,7 +113,7 @@
return true;
}
-static inline void recordType(TypeCountSet& set, JSCell* cell)
+void recordType(TypeCountSet& set, JSCell* cell)
{
const char* typeName = "[unknown]";
const ClassInfo* info = cell->classInfo();
@@ -261,6 +122,69 @@
set.add(typeName);
}
+bool measurePhaseTiming()
+{
+ return false;
+}
+
+HashMap<const char*, GCTypeMap<SimpleStats>>& timingStats()
+{
+ static HashMap<const char*, GCTypeMap<SimpleStats>>* result;
+ static std::once_flag once;
+ std::call_once(
+ once,
+ [] {
+ result = new HashMap<const char*, GCTypeMap<SimpleStats>>();
+ });
+ return *result;
+}
+
+SimpleStats& timingStats(const char* name, HeapOperation operation)
+{
+ return timingStats().add(name, GCTypeMap<SimpleStats>()).iterator->value[operation];
+}
+
+class TimingScope {
+public:
+ TimingScope(HeapOperation operation, const char* name)
+ : m_operation(operation)
+ , m_name(name)
+ {
+ if (measurePhaseTiming())
+ m_before = monotonicallyIncreasingTimeMS();
+ }
+
+ TimingScope(Heap& heap, const char* name)
+ : TimingScope(heap.operationInProgress(), name)
+ {
+ }
+
+ void setOperation(HeapOperation operation)
+ {
+ m_operation = operation;
+ }
+
+ void setOperation(Heap& heap)
+ {
+ setOperation(heap.operationInProgress());
+ }
+
+ ~TimingScope()
+ {
+ if (measurePhaseTiming()) {
+ double after = monotonicallyIncreasingTimeMS();
+ double timing = after - m_before;
+ SimpleStats& stats = timingStats(m_name, m_operation);
+ stats.add(timing);
+ dataLog("[GC:", m_operation, "] ", m_name, " took: ", timing, " ms (average ", stats.mean(), " ms).\n");
+ }
+ }
+private:
+ HeapOperation m_operation;
+ double m_before;
+ const char* m_name;
+};
+
} // anonymous namespace
Heap::Heap(VM* vm, HeapType heapType)
@@ -287,6 +211,8 @@
, m_machineThreads(this)
, m_slotVisitor(*this)
, m_handleSet(vm)
+ , m_codeBlocks(std::make_unique<CodeBlockSet>())
+ , m_jitStubRoutines(std::make_unique<JITStubRoutineSet>())
, m_isSafeToCollect(false)
, m_writeBarrierBuffer(256)
, m_vm(vm)
@@ -331,7 +257,7 @@
RELEASE_ASSERT(m_operationInProgress == NoOperation);
m_arrayBuffers.lastChanceToFinalize();
- m_codeBlocks.lastChanceToFinalize();
+ m_codeBlocks->lastChanceToFinalize();
m_objectSpace.lastChanceToFinalize();
releaseDelayedReleasedObjects();
@@ -434,7 +360,6 @@
void Heap::finalizeUnconditionalFinalizers()
{
- GCPHASE(FinalizeUnconditionalFinalizers);
m_slotVisitor.finalizeUnconditionalFinalizers();
}
@@ -460,76 +385,86 @@
void Heap::markRoots(double gcStartTime, void* stackOrigin, void* stackTop, MachineThreads::RegisterState& calleeSavedRegisters)
{
- GCPHASE(MarkRoots);
+ TimingScope markRootsTimingScope(*this, "Heap::markRoots");
+
ASSERT(isValidThreadState(m_vm));
- // We gather conservative roots before clearing mark bits because conservative
- // gathering uses the mark bits to determine whether a reference is valid.
- ConservativeRoots conservativeRoots(&m_objectSpace.blocks(), &m_storageSpace);
- gatherStackRoots(conservativeRoots, stackOrigin, stackTop, calleeSavedRegisters);
- gatherJSStackRoots(conservativeRoots);
- gatherScratchBufferRoots(conservativeRoots);
+ HeapRootVisitor heapRootVisitor(m_slotVisitor);
+
+ ConservativeRoots conservativeRoots(*this);
+ {
+ TimingScope preConvergenceTimingScope(*this, "Heap::markRoots before convergence");
+ // We gather conservative roots before clearing mark bits because conservative
+ // gathering uses the mark bits to determine whether a reference is valid.
+ {
+ TimingScope preConvergenceTimingScope(*this, "Heap::markRoots conservative scan");
+ SuperSamplerScope superSamplerScope(false);
+ gatherStackRoots(conservativeRoots, stackOrigin, stackTop, calleeSavedRegisters);
+ gatherJSStackRoots(conservativeRoots);
+ gatherScratchBufferRoots(conservativeRoots);
+ }
#if ENABLE(DFG_JIT)
- DFG::rememberCodeBlocks(*m_vm);
+ DFG::rememberCodeBlocks(*m_vm);
#endif
#if ENABLE(SAMPLING_PROFILER)
- if (SamplingProfiler* samplingProfiler = m_vm->samplingProfiler()) {
- // Note that we need to own the lock from now until we're done
- // marking the SamplingProfiler's data because once we verify the
- // SamplingProfiler's stack traces, we don't want it to accumulate
- // more stack traces before we get the chance to mark it.
- // This lock is released inside visitSamplingProfiler().
- samplingProfiler->getLock().lock();
- samplingProfiler->processUnverifiedStackTraces();
- }
+ if (SamplingProfiler* samplingProfiler = m_vm->samplingProfiler()) {
+ // Note that we need to own the lock from now until we're done
+ // marking the SamplingProfiler's data because once we verify the
+ // SamplingProfiler's stack traces, we don't want it to accumulate
+ // more stack traces before we get the chance to mark it.
+ // This lock is released inside visitSamplingProfiler().
+ samplingProfiler->getLock().lock();
+ samplingProfiler->processUnverifiedStackTraces();
+ }
#endif // ENABLE(SAMPLING_PROFILER)
- if (m_operationInProgress == FullCollection) {
- m_opaqueRoots.clear();
- m_slotVisitor.clearMarkStack();
+ if (m_operationInProgress == FullCollection) {
+ m_opaqueRoots.clear();
+ m_slotVisitor.clearMarkStack();
+ }
+
+ clearLivenessData();
+
+ m_parallelMarkersShouldExit = false;
+
+ m_helperClient.setFunction(
+ [this] () {
+ SlotVisitor* slotVisitor;
+ {
+ LockHolder locker(m_parallelSlotVisitorLock);
+ if (m_availableParallelSlotVisitors.isEmpty()) {
+ std::unique_ptr<SlotVisitor> newVisitor =
+ std::make_unique<SlotVisitor>(*this);
+ slotVisitor = newVisitor.get();
+ m_parallelSlotVisitors.append(WTFMove(newVisitor));
+ } else
+ slotVisitor = m_availableParallelSlotVisitors.takeLast();
+ }
+
+ WTF::registerGCThread();
+
+ {
+ ParallelModeEnabler parallelModeEnabler(*slotVisitor);
+ slotVisitor->didStartMarking();
+ slotVisitor->drainFromShared(SlotVisitor::SlaveDrain);
+ }
+
+ {
+ LockHolder locker(m_parallelSlotVisitorLock);
+ m_availableParallelSlotVisitors.append(slotVisitor);
+ }
+ });
+
+ m_slotVisitor.didStartMarking();
}
-
- clearLivenessData();
-
- m_parallelMarkersShouldExit = false;
-
- m_helperClient.setFunction(
- [this] () {
- SlotVisitor* slotVisitor;
- {
- LockHolder locker(m_parallelSlotVisitorLock);
- if (m_availableParallelSlotVisitors.isEmpty()) {
- std::unique_ptr<SlotVisitor> newVisitor =
- std::make_unique<SlotVisitor>(*this);
- slotVisitor = newVisitor.get();
- m_parallelSlotVisitors.append(WTFMove(newVisitor));
- } else
- slotVisitor = m_availableParallelSlotVisitors.takeLast();
- }
-
- WTF::registerGCThread();
-
- {
- ParallelModeEnabler parallelModeEnabler(*slotVisitor);
- slotVisitor->didStartMarking();
- slotVisitor->drainFromShared(SlotVisitor::SlaveDrain);
- }
-
- {
- LockHolder locker(m_parallelSlotVisitorLock);
- m_availableParallelSlotVisitors.append(slotVisitor);
- }
- });
-
- m_slotVisitor.didStartMarking();
- HeapRootVisitor heapRootVisitor(m_slotVisitor);
-
{
+ SuperSamplerScope superSamplerScope(false);
+ TimingScope convergenceTimingScope(*this, "Heap::markRoots convergence");
ParallelModeEnabler enabler(m_slotVisitor);
-
+
m_slotVisitor.donateAndDrain();
visitExternalRememberedSet();
visitSmallStrings();
@@ -544,6 +479,8 @@
traceCodeBlocksAndJITStubRoutines();
converge();
}
+
+ TimingScope postConvergenceTimingScope(*this, "Heap::markRoots after convergence");
// Weak references must be marked last because their liveness depends on
// the liveness of the rest of the object graph.
@@ -561,7 +498,7 @@
void Heap::copyBackingStores()
{
- GCPHASE(CopyBackingStores);
+ SuperSamplerScope superSamplerScope(false);
if (m_operationInProgress == EdenCollection)
m_storageSpace.startedCopying<EdenCollection>();
else {
@@ -598,12 +535,6 @@
CopyWorkList& workList = block->workList();
for (CopyWorklistItem item : workList) {
- if (item.token() == ButterflyCopyToken) {
- JSObject::copyBackingStore(
- item.cell(), copyVisitor, ButterflyCopyToken);
- continue;
- }
-
item.cell()->methodTable()->copyBackingStore(
item.cell(), copyVisitor, item.token());
}
@@ -619,16 +550,14 @@
void Heap::gatherStackRoots(ConservativeRoots& roots, void* stackOrigin, void* stackTop, MachineThreads::RegisterState& calleeSavedRegisters)
{
- GCPHASE(GatherStackRoots);
- m_jitStubRoutines.clearMarks();
- m_machineThreads.gatherConservativeRoots(roots, m_jitStubRoutines, m_codeBlocks, stackOrigin, stackTop, calleeSavedRegisters);
+ m_jitStubRoutines->clearMarks();
+ m_machineThreads.gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks, stackOrigin, stackTop, calleeSavedRegisters);
}
void Heap::gatherJSStackRoots(ConservativeRoots& roots)
{
#if !ENABLE(JIT)
- GCPHASE(GatherJSStackRoots);
- m_vm->interpreter->cloopStack().gatherConservativeRoots(roots, m_jitStubRoutines, m_codeBlocks);
+ m_vm->interpreter->cloopStack().gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks);
#else
UNUSED_PARAM(roots);
#endif
@@ -637,7 +566,6 @@
void Heap::gatherScratchBufferRoots(ConservativeRoots& roots)
{
#if ENABLE(DFG_JIT)
- GCPHASE(GatherScratchBufferRoots);
m_vm->gatherConservativeRoots(roots);
#else
UNUSED_PARAM(roots);
@@ -646,12 +574,19 @@
void Heap::clearLivenessData()
{
- GCPHASE(ClearLivenessData);
+ TimingScope timingScope(*this, "Heap::clearLivenessData");
if (m_operationInProgress == FullCollection)
- m_codeBlocks.clearMarksForFullCollection();
-
- m_objectSpace.clearNewlyAllocated();
- m_objectSpace.clearMarks();
+ m_codeBlocks->clearMarksForFullCollection();
+
+ {
+ TimingScope clearNewlyAllocatedTimingScope(*this, "m_objectSpace.clearNewlyAllocated");
+ m_objectSpace.clearNewlyAllocated();
+ }
+
+ {
+ TimingScope clearMarksTimingScope(*this, "m_objectSpace.clearMarks");
+ m_objectSpace.flip();
+ }
}
void Heap::visitExternalRememberedSet()
@@ -663,7 +598,6 @@
void Heap::visitSmallStrings()
{
- GCPHASE(VisitSmallStrings);
if (!m_vm->smallStrings.needsToBeVisited(m_operationInProgress))
return;
@@ -675,7 +609,6 @@
void Heap::visitConservativeRoots(ConservativeRoots& roots)
{
- GCPHASE(VisitConservativeRoots);
m_slotVisitor.append(roots);
if (Options::logGC() == GCLogging::Verbose)
@@ -698,7 +631,6 @@
void Heap::removeDeadCompilerWorklistEntries()
{
#if ENABLE(DFG_JIT)
- GCPHASE(FinalizeDFGWorklists);
for (auto worklist : m_suspendedCompilerWorklists)
worklist->removeDeadPlans(*m_vm);
#endif
@@ -732,7 +664,6 @@
void Heap::gatherExtraHeapSnapshotData(HeapProfiler& heapProfiler)
{
- GCPHASE(GatherExtraHeapSnapshotData);
if (HeapSnapshotBuilder* builder = heapProfiler.activeSnapshotBuilder()) {
HeapIterationScope heapIterationScope(*this);
GatherHeapSnapshotData functor(*builder);
@@ -758,7 +689,6 @@
void Heap::removeDeadHeapSnapshotNodes(HeapProfiler& heapProfiler)
{
- GCPHASE(RemoveDeadHeapSnapshotNodes);
if (HeapSnapshot* snapshot = heapProfiler.mostRecentSnapshot()) {
HeapIterationScope heapIterationScope(*this);
RemoveDeadHeapSnapshotNodes functor(*snapshot);
@@ -769,8 +699,6 @@
void Heap::visitProtectedObjects(HeapRootVisitor& heapRootVisitor)
{
- GCPHASE(VisitProtectedObjects);
-
for (auto& pair : m_protectedValues)
heapRootVisitor.visit(&pair.key);
@@ -782,7 +710,6 @@
void Heap::visitArgumentBuffers(HeapRootVisitor& visitor)
{
- GCPHASE(MarkingArgumentBuffers);
if (!m_markListSet || !m_markListSet->size())
return;
@@ -796,7 +723,6 @@
void Heap::visitException(HeapRootVisitor& visitor)
{
- GCPHASE(MarkingException);
if (!m_vm->exception() && !m_vm->lastException())
return;
@@ -811,7 +737,6 @@
void Heap::visitStrongHandles(HeapRootVisitor& visitor)
{
- GCPHASE(VisitStrongHandles);
m_handleSet.visitStrongHandles(visitor);
if (Options::logGC() == GCLogging::Verbose)
@@ -822,7 +747,6 @@
void Heap::visitHandleStack(HeapRootVisitor& visitor)
{
- GCPHASE(VisitHandleStack);
m_handleStack.visit(visitor);
if (Options::logGC() == GCLogging::Verbose)
@@ -836,7 +760,6 @@
#if ENABLE(SAMPLING_PROFILER)
if (SamplingProfiler* samplingProfiler = m_vm->samplingProfiler()) {
ASSERT(samplingProfiler->getLock().isLocked());
- GCPHASE(VisitSamplingProfiler);
samplingProfiler->visit(m_slotVisitor);
if (Options::logGC() == GCLogging::Verbose)
dataLog("Sampling Profiler data:\n", m_slotVisitor);
@@ -854,8 +777,7 @@
void Heap::traceCodeBlocksAndJITStubRoutines()
{
- GCPHASE(TraceCodeBlocksAndJITStubRoutines);
- m_jitStubRoutines.traceMarkedStubRoutines(m_slotVisitor);
+ m_jitStubRoutines->traceMarkedStubRoutines(m_slotVisitor);
if (Options::logGC() == GCLogging::Verbose)
dataLog("Code Blocks and JIT Stub Routines:\n", m_slotVisitor);
@@ -865,15 +787,17 @@
void Heap::converge()
{
- GCPHASE(Convergence);
m_slotVisitor.drainFromShared(SlotVisitor::MasterDrain);
}
void Heap::visitWeakHandles(HeapRootVisitor& visitor)
{
- GCPHASE(VisitingLiveWeakHandles);
+ TimingScope timingScope(*this, "Heap::visitWeakHandles");
while (true) {
- m_objectSpace.visitWeakSets(visitor);
+ {
+ TimingScope timingScope(*this, "m_objectSpace.visitWeakSets");
+ m_objectSpace.visitWeakSets(visitor);
+ }
harvestWeakReferences();
visitCompilerWorklistWeakReferences();
if (m_slotVisitor.isEmpty())
@@ -892,8 +816,6 @@
void Heap::updateObjectCounts(double gcStartTime)
{
- GCCOUNTER(VisitedValueCount, m_slotVisitor.visitCount() + threadVisitCount());
-
if (Options::logGC() == GCLogging::Verbose) {
size_t visitCount = m_slotVisitor.visitCount();
visitCount += threadVisitCount();
@@ -1033,7 +955,6 @@
void Heap::clearUnmarkedExecutables()
{
- GCPHASE(ClearUnmarkedExecutables);
for (unsigned i = m_executables.size(); i--;) {
ExecutableBase* current = m_executables[i];
if (isMarked(current))
@@ -1051,10 +972,9 @@
void Heap::deleteUnmarkedCompiledCode()
{
- GCPHASE(DeleteCodeBlocks);
clearUnmarkedExecutables();
- m_codeBlocks.deleteUnmarkedAndUnreferenced(m_operationInProgress);
- m_jitStubRoutines.deleteUnmarkedJettisonedStubRoutines();
+ m_codeBlocks->deleteUnmarkedAndUnreferenced(m_operationInProgress);
+ m_jitStubRoutines->deleteUnmarkedJettisonedStubRoutines();
}
void Heap::addToRememberedSet(const JSCell* cell)
@@ -1073,10 +993,11 @@
void Heap::collectAllGarbage()
{
+ SuperSamplerScope superSamplerScope(false);
if (!m_isSafeToCollect)
return;
- collect(FullCollection);
+ collectWithoutAnySweep(FullCollection);
DeferGCForAWhile deferGC(*this);
if (UNLIKELY(Options::useImmortalObjects()))
@@ -1090,7 +1011,16 @@
sweepAllLogicallyEmptyWeakBlocks();
}
-NEVER_INLINE void Heap::collect(HeapOperation collectionType)
+void Heap::collect(HeapOperation collectionType)
+{
+ SuperSamplerScope superSamplerScope(false);
+ if (!m_isSafeToCollect)
+ return;
+
+ collectWithoutAnySweep(collectionType);
+}
+
+NEVER_INLINE void Heap::collectWithoutAnySweep(HeapOperation collectionType)
{
void* stackTop;
ALLOCATE_AND_GET_REGISTER_STATE(registers);
@@ -1102,6 +1032,9 @@
NEVER_INLINE void Heap::collectImpl(HeapOperation collectionType, void* stackOrigin, void* stackTop, MachineThreads::RegisterState& calleeSavedRegisters)
{
+ SuperSamplerScope superSamplerScope(false);
+ TimingScope collectImplTimingScope(collectionType, "Heap::collectImpl");
+
#if ENABLE(ALLOCATION_LOGGING)
dataLogF("JSC GC starting collection.\n");
#endif
@@ -1112,45 +1045,55 @@
before = currentTimeMS();
}
- if (vm()->typeProfiler()) {
- DeferGCForAWhile awhile(*this);
- vm()->typeProfilerLog()->processLogEntries(ASCIILiteral("GC"));
- }
+ double gcStartTime;
+ {
+ TimingScope earlyTimingScope(collectionType, "Heap::collectImpl before markRoots");
+
+ if (vm()->typeProfiler()) {
+ DeferGCForAWhile awhile(*this);
+ vm()->typeProfilerLog()->processLogEntries(ASCIILiteral("GC"));
+ }
#if ENABLE(JIT)
- {
- DeferGCForAWhile awhile(*this);
- JITWorklist::instance()->completeAllForVM(*m_vm);
- }
+ {
+ DeferGCForAWhile awhile(*this);
+ JITWorklist::instance()->completeAllForVM(*m_vm);
+ }
#endif // ENABLE(JIT)
- vm()->shadowChicken().update(*vm(), vm()->topCallFrame);
+ vm()->shadowChicken().update(*vm(), vm()->topCallFrame);
- RELEASE_ASSERT(!m_deferralDepth);
- ASSERT(vm()->currentThreadIsHoldingAPILock());
- RELEASE_ASSERT(vm()->atomicStringTable() == wtfThreadData().atomicStringTable());
- ASSERT(m_isSafeToCollect);
- RELEASE_ASSERT(m_operationInProgress == NoOperation);
+ RELEASE_ASSERT(!m_deferralDepth);
+ ASSERT(vm()->currentThreadIsHoldingAPILock());
+ RELEASE_ASSERT(vm()->atomicStringTable() == wtfThreadData().atomicStringTable());
+ ASSERT(m_isSafeToCollect);
+ RELEASE_ASSERT(m_operationInProgress == NoOperation);
- suspendCompilerThreads();
- willStartCollection(collectionType);
- GCPHASE(Collect);
+ suspendCompilerThreads();
+ willStartCollection(collectionType);
+
+ collectImplTimingScope.setOperation(*this);
+ earlyTimingScope.setOperation(*this);
- double gcStartTime = WTF::monotonicallyIncreasingTime();
- if (m_verifier) {
- // Verify that live objects from the last GC cycle haven't been corrupted by
- // mutators before we begin this new GC cycle.
- m_verifier->verify(HeapVerifier::Phase::BeforeGC);
+ gcStartTime = WTF::monotonicallyIncreasingTime();
+ if (m_verifier) {
+ // Verify that live objects from the last GC cycle haven't been corrupted by
+ // mutators before we begin this new GC cycle.
+ m_verifier->verify(HeapVerifier::Phase::BeforeGC);
- m_verifier->initializeGCCycle();
- m_verifier->gatherLiveObjects(HeapVerifier::Phase::BeforeMarking);
+ m_verifier->initializeGCCycle();
+ m_verifier->gatherLiveObjects(HeapVerifier::Phase::BeforeMarking);
+ }
+
+ flushOldStructureIDTables();
+ stopAllocation();
+ prepareForMarking();
+ flushWriteBarrierBuffer();
}
- flushOldStructureIDTables();
- stopAllocation();
- flushWriteBarrierBuffer();
-
markRoots(gcStartTime, stackOrigin, stackTop, calleeSavedRegisters);
+
+ TimingScope lateTimingScope(*this, "Heap::collectImpl after markRoots");
if (m_verifier) {
m_verifier->gatherLiveObjects(HeapVerifier::Phase::AfterMarking);
@@ -1164,9 +1107,7 @@
pruneStaleEntriesFromWeakGCMaps();
sweepArrayBuffers();
snapshotMarkedSpace();
-
copyBackingStores();
-
finalizeUnconditionalFinalizers();
removeDeadCompilerWorklistEntries();
deleteUnmarkedCompiledCode();
@@ -1179,7 +1120,8 @@
updateAllocationLimits();
didFinishCollection(gcStartTime);
resumeCompilerThreads();
-
+ sweepLargeAllocations();
+
if (m_verifier) {
m_verifier->trimDeadObjects();
m_verifier->verify(HeapVerifier::Phase::AfterGC);
@@ -1191,10 +1133,14 @@
}
}
+void Heap::sweepLargeAllocations()
+{
+ m_objectSpace.sweepLargeAllocations();
+}
+
void Heap::suspendCompilerThreads()
{
#if ENABLE(DFG_JIT)
- GCPHASE(SuspendCompilerThreads);
ASSERT(m_suspendedCompilerWorklists.isEmpty());
for (unsigned i = DFG::numberOfWorklists(); i--;) {
if (DFG::Worklist* worklist = DFG::worklistForIndexOrNull(i)) {
@@ -1207,8 +1153,6 @@
void Heap::willStartCollection(HeapOperation collectionType)
{
- GCPHASE(StartingCollection);
-
if (Options::logGC())
dataLog("=> ");
@@ -1246,13 +1190,11 @@
void Heap::flushOldStructureIDTables()
{
- GCPHASE(FlushOldStructureIDTables);
m_structureIDTable.flushOldTables();
}
void Heap::flushWriteBarrierBuffer()
{
- GCPHASE(FlushWriteBarrierBuffer);
if (m_operationInProgress == EdenCollection) {
m_writeBarrierBuffer.flush(*this);
return;
@@ -1262,21 +1204,23 @@
void Heap::stopAllocation()
{
- GCPHASE(StopAllocation);
m_objectSpace.stopAllocating();
if (m_operationInProgress == FullCollection)
m_storageSpace.didStartFullCollection();
}
+void Heap::prepareForMarking()
+{
+ m_objectSpace.prepareForMarking();
+}
+
void Heap::reapWeakHandles()
{
- GCPHASE(ReapingWeakHandles);
m_objectSpace.reapWeakSets();
}
void Heap::pruneStaleEntriesFromWeakGCMaps()
{
- GCPHASE(PruningStaleEntriesFromWeakGCMaps);
if (m_operationInProgress != FullCollection)
return;
for (auto& pruneCallback : m_weakGCMaps.values())
@@ -1285,34 +1229,42 @@
void Heap::sweepArrayBuffers()
{
- GCPHASE(SweepingArrayBuffers);
m_arrayBuffers.sweep();
}
struct MarkedBlockSnapshotFunctor : public MarkedBlock::VoidFunctor {
- MarkedBlockSnapshotFunctor(Vector<MarkedBlock*>& blocks)
+ MarkedBlockSnapshotFunctor(Vector<MarkedBlock::Handle*>& blocks)
: m_index(0)
, m_blocks(blocks)
{
}
- void operator()(MarkedBlock* block) const { m_blocks[m_index++] = block; }
+ void operator()(MarkedBlock::Handle* block) const
+ {
+ block->setIsOnBlocksToSweep(true);
+ m_blocks[m_index++] = block;
+ }
// FIXME: This is a mutable field because this isn't a C++ lambda.
// https://bugs.webkit.org/show_bug.cgi?id=159644
mutable size_t m_index;
- Vector<MarkedBlock*>& m_blocks;
+ Vector<MarkedBlock::Handle*>& m_blocks;
};
void Heap::snapshotMarkedSpace()
{
- GCPHASE(SnapshotMarkedSpace);
-
+ TimingScope timingScope(*this, "Heap::snapshotMarkedSpace");
+ // FIXME: This should probably be renamed. It's not actually snapshotting all of MarkedSpace.
+ // This is used by IncrementalSweeper, so it only needs to snapshot blocks. However, if we ever
+ // wanted to add other snapshotting logic, we'd probably put it here.
+
if (m_operationInProgress == EdenCollection) {
- m_blockSnapshot.appendVector(m_objectSpace.blocksWithNewObjects());
- // Sort and deduplicate the block snapshot since we might be appending to an unfinished work list.
- std::sort(m_blockSnapshot.begin(), m_blockSnapshot.end());
- m_blockSnapshot.shrink(std::unique(m_blockSnapshot.begin(), m_blockSnapshot.end()) - m_blockSnapshot.begin());
+ for (MarkedBlock::Handle* handle : m_objectSpace.blocksWithNewObjects()) {
+ if (handle->isOnBlocksToSweep())
+ continue;
+ m_blockSnapshot.append(handle);
+ handle->setIsOnBlocksToSweep(true);
+ }
} else {
m_blockSnapshot.resizeToFit(m_objectSpace.blocks().set().size());
MarkedBlockSnapshotFunctor functor(m_blockSnapshot);
@@ -1322,14 +1274,11 @@
void Heap::deleteSourceProviderCaches()
{
- GCPHASE(DeleteSourceProviderCaches);
m_vm->clearSourceProviderCaches();
}
void Heap::notifyIncrementalSweeper()
{
- GCPHASE(NotifyIncrementalSweeper);
-
if (m_operationInProgress == FullCollection) {
if (!m_logicallyEmptyWeakBlocks.isEmpty())
m_indexOfNextLogicallyEmptyWeakBlockToSweep = 0;
@@ -1340,19 +1289,22 @@
void Heap::writeBarrierCurrentlyExecutingCodeBlocks()
{
- GCPHASE(WriteBarrierCurrentlyExecutingCodeBlocks);
- m_codeBlocks.writeBarrierCurrentlyExecutingCodeBlocks(this);
+ m_codeBlocks->writeBarrierCurrentlyExecutingCodeBlocks(this);
}
void Heap::resetAllocators()
{
- GCPHASE(ResetAllocators);
m_objectSpace.resetAllocators();
}
void Heap::updateAllocationLimits()
{
- GCPHASE(UpdateAllocationLimits);
+ static const bool verbose = false;
+
+ if (verbose) {
+ dataLog("\n");
+ dataLog("bytesAllocatedThisCycle = ", m_bytesAllocatedThisCycle, "\n");
+ }
// Calculate our current heap size threshold for the purpose of figuring out when we should
// run another collection. This isn't the same as either size() or capacity(), though it should
@@ -1369,6 +1321,8 @@
// of fragmentation, this may be substantial. Fortunately, marked space rarely fragments because
// cells usually have a narrow range of sizes. So, the underestimation is probably OK.
currentHeapSize += m_totalBytesVisited;
+ if (verbose)
+ dataLog("totalBytesVisited = ", m_totalBytesVisited, ", currentHeapSize = ", currentHeapSize, "\n");
// For copied space, we use the capacity of storage space. This is because copied space may get
// badly fragmented between full collections. This arises when each eden collection evacuates
@@ -1384,10 +1338,15 @@
// https://bugs.webkit.org/show_bug.cgi?id=150268
ASSERT(m_totalBytesCopied <= m_storageSpace.size());
currentHeapSize += m_storageSpace.capacity();
+ if (verbose)
+ dataLog("storageSpace.capacity() = ", m_storageSpace.capacity(), ", currentHeapSize = ", currentHeapSize, "\n");
// It's up to the user to ensure that extraMemorySize() ends up corresponding to allocation-time
// extra memory reporting.
currentHeapSize += extraMemorySize();
+
+ if (verbose)
+ dataLog("extraMemorySize() = ", extraMemorySize(), ", currentHeapSize = ", currentHeapSize, "\n");
if (Options::gcMaxHeapSize() && currentHeapSize > Options::gcMaxHeapSize())
HeapStatistics::exitWithFailure();
@@ -1397,29 +1356,38 @@
// the new allocation limit based on the current size of the heap, with a
// fixed minimum.
m_maxHeapSize = max(minHeapSize(m_heapType, m_ramSize), proportionalHeapSize(currentHeapSize, m_ramSize));
- m_maxEdenSize = m_maxHeapSize - currentHeapSize;
- m_sizeAfterLastFullCollect = currentHeapSize;
- m_bytesAbandonedSinceLastFullCollect = 0;
- } else {
- static const bool verbose = false;
-
- ASSERT(currentHeapSize >= m_sizeAfterLastCollect);
- m_maxEdenSize = m_maxHeapSize - currentHeapSize;
- m_sizeAfterLastEdenCollect = currentHeapSize;
- if (verbose) {
- dataLog("Max heap size: ", m_maxHeapSize, "\n");
- dataLog("Current heap size: ", currentHeapSize, "\n");
- dataLog("Size after last eden collection: ", m_sizeAfterLastEdenCollect, "\n");
- }
- double edenToOldGenerationRatio = (double)m_maxEdenSize / (double)m_maxHeapSize;
if (verbose)
- dataLog("Eden to old generation ratio: ", edenToOldGenerationRatio, "\n");
+ dataLog("Full: maxHeapSize = ", m_maxHeapSize, "\n");
+ m_maxEdenSize = m_maxHeapSize - currentHeapSize;
+ if (verbose)
+ dataLog("Full: maxEdenSize = ", m_maxEdenSize, "\n");
+ m_sizeAfterLastFullCollect = currentHeapSize;
+ if (verbose)
+ dataLog("Full: sizeAfterLastFullCollect = ", currentHeapSize, "\n");
+ m_bytesAbandonedSinceLastFullCollect = 0;
+ if (verbose)
+ dataLog("Full: bytesAbandonedSinceLastFullCollect = ", 0, "\n");
+ } else {
+ ASSERT(currentHeapSize >= m_sizeAfterLastCollect);
+ // Theoretically, we shouldn't ever scan more memory than the heap size we planned to have.
+ // But we are sloppy, so we have to defend against the overflow.
+ m_maxEdenSize = currentHeapSize > m_maxHeapSize ? 0 : m_maxHeapSize - currentHeapSize;
+ if (verbose)
+ dataLog("Eden: maxEdenSize = ", m_maxEdenSize, "\n");
+ m_sizeAfterLastEdenCollect = currentHeapSize;
+ if (verbose)
+ dataLog("Eden: sizeAfterLastEdenCollect = ", currentHeapSize, "\n");
+ double edenToOldGenerationRatio = (double)m_maxEdenSize / (double)m_maxHeapSize;
double minEdenToOldGenerationRatio = 1.0 / 3.0;
if (edenToOldGenerationRatio < minEdenToOldGenerationRatio)
m_shouldDoFullCollection = true;
// This seems suspect at first, but what it does is ensure that the nursery size is fixed.
m_maxHeapSize += currentHeapSize - m_sizeAfterLastCollect;
+ if (verbose)
+ dataLog("Eden: maxHeapSize = ", m_maxHeapSize, "\n");
m_maxEdenSize = m_maxHeapSize - currentHeapSize;
+ if (verbose)
+ dataLog("Eden: maxEdenSize = ", m_maxEdenSize, "\n");
if (m_fullActivityCallback) {
ASSERT(currentHeapSize >= m_sizeAfterLastFullCollect);
m_fullActivityCallback->didAllocate(currentHeapSize - m_sizeAfterLastFullCollect);
@@ -1427,6 +1395,8 @@
}
m_sizeAfterLastCollect = currentHeapSize;
+ if (verbose)
+ dataLog("sizeAfterLastCollect = ", m_sizeAfterLastCollect, "\n");
m_bytesAllocatedThisCycle = 0;
if (Options::logGC())
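As a sanity check on the eden branch above, here is a minimal arithmetic sketch of the policy with made-up numbers (none of these values come from a real run; they were chosen to make the ratio check fire):

    // Hedged sketch of Heap::updateAllocationLimits()'s eden branch.
    // Suppose the plan was a 100MB heap, this eden collection left 80MB live,
    // and the previous collection left 70MB.
    size_t maxHeapSize = 100, currentHeapSize = 80, sizeAfterLastCollect = 70; // MB
    size_t maxEdenSize = currentHeapSize > maxHeapSize ? 0 : maxHeapSize - currentHeapSize; // 20
    double edenToOldRatio = (double)maxEdenSize / (double)maxHeapSize; // 0.2, below 1/3
    bool shouldDoFullCollection = edenToOldRatio < 1.0 / 3.0; // true: eden is getting squeezed
    maxHeapSize += currentHeapSize - sizeAfterLastCollect; // 110: keeps the nursery size fixed
    maxEdenSize = maxHeapSize - currentHeapSize; // 30MB nursery for the next cycle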
@@ -1435,7 +1405,6 @@
void Heap::didFinishCollection(double gcStartTime)
{
- GCPHASE(FinishingCollection);
double gcEndTime = WTF::monotonicallyIncreasingTime();
HeapOperation operation = m_operationInProgress;
if (m_operationInProgress == FullCollection)
@@ -1471,7 +1440,6 @@
void Heap::resumeCompilerThreads()
{
#if ENABLE(DFG_JIT)
- GCPHASE(ResumeCompilerThreads);
for (auto worklist : m_suspendedCompilerWorklists)
worklist->resumeAllThreads();
m_suspendedCompilerWorklists.clear();
@@ -1580,7 +1548,7 @@
if (cell->isZapped())
current++;
- void* limit = static_cast<void*>(reinterpret_cast<char*>(cell) + MarkedBlock::blockFor(cell)->cellSize());
+ void* limit = static_cast<void*>(reinterpret_cast<char*>(cell) + cell->cellSize());
for (; current < limit; current++)
*current = zombifiedBits;
}
@@ -1686,4 +1654,12 @@
return result;
}
+void Heap::forEachCodeBlockImpl(const ScopedLambda<bool(CodeBlock*)>& func)
+{
+ // We don't know the full set of CodeBlocks until compilation has terminated.
+ completeAllJITPlans();
+
+ return m_codeBlocks->iterate(func);
+}
+
} // namespace JSC
diff --git a/Source/JavaScriptCore/heap/Heap.h b/Source/JavaScriptCore/heap/Heap.h
index 5554a1f..5439161 100644
--- a/Source/JavaScriptCore/heap/Heap.h
+++ b/Source/JavaScriptCore/heap/Heap.h
@@ -23,14 +23,12 @@
#define Heap_h
#include "ArrayBuffer.h"
-#include "CodeBlockSet.h"
#include "CopyVisitor.h"
#include "GCIncomingRefCountedSet.h"
#include "HandleSet.h"
#include "HandleStack.h"
#include "HeapObserver.h"
#include "HeapOperation.h"
-#include "JITStubRoutineSet.h"
#include "ListableHandler.h"
#include "MachineStackMarker.h"
#include "MarkedAllocator.h"
@@ -53,6 +51,7 @@
namespace JSC {
class CodeBlock;
+class CodeBlockSet;
class CopiedSpace;
class EdenGCActivityCallback;
class ExecutableBase;
@@ -65,6 +64,7 @@
class HeapVerifier;
class IncrementalSweeper;
class JITStubRoutine;
+class JITStubRoutineSet;
class JSCell;
class JSValue;
class LLIntOffsetsExtractor;
@@ -83,13 +83,15 @@
enum HeapType { SmallHeap, LargeHeap };
+class HeapUtil;
+
class Heap {
WTF_MAKE_NONCOPYABLE(Heap);
public:
friend class JIT;
friend class DFG::SpeculativeJIT;
static Heap* heap(const JSValue); // 0 for immediate values
- static Heap* heap(const JSCell*);
+ static Heap* heap(const HeapCell*);
// This constant determines how many blocks we iterate between checks of our
// deadline when calling Heap::isPagedOut. Decreasing it will cause us to detect
@@ -99,13 +101,10 @@
static bool isLive(const void*);
static bool isMarked(const void*);
- static bool testAndSetMarked(const void*);
+ static bool testAndSetMarked(int64_t, const void*);
static void setMarked(const void*);
-
- // This function must be run after stopAllocation() is called and
- // before liveness data is cleared to be accurate.
- static bool isPointerGCObject(TinyBloomFilter, MarkedBlockSet&, void* pointer);
- static bool isValueGCObject(TinyBloomFilter, MarkedBlockSet&, JSValue);
+
+ static size_t cellSize(const void*);
void writeBarrier(const JSCell*);
void writeBarrier(const JSCell*, JSValue);
@@ -147,10 +146,14 @@
MarkedSpace::Subspace& subspaceForObjectDestructor() { return m_objectSpace.subspaceForObjectsWithDestructor(); }
MarkedSpace::Subspace& subspaceForAuxiliaryData() { return m_objectSpace.subspaceForAuxiliaryData(); }
template<typename ClassType> MarkedSpace::Subspace& subspaceForObjectOfType();
- MarkedAllocator& allocatorForObjectWithoutDestructor(size_t bytes) { return m_objectSpace.allocatorFor(bytes); }
- MarkedAllocator& allocatorForObjectWithDestructor(size_t bytes) { return m_objectSpace.destructorAllocatorFor(bytes); }
- template<typename ClassType> MarkedAllocator& allocatorForObjectOfType(size_t bytes);
+ MarkedAllocator* allocatorForObjectWithoutDestructor(size_t bytes) { return m_objectSpace.allocatorFor(bytes); }
+ MarkedAllocator* allocatorForObjectWithDestructor(size_t bytes) { return m_objectSpace.destructorAllocatorFor(bytes); }
+ template<typename ClassType> MarkedAllocator* allocatorForObjectOfType(size_t bytes);
+ MarkedAllocator* allocatorForAuxiliaryData(size_t bytes) { return m_objectSpace.auxiliaryAllocatorFor(bytes); }
CopiedAllocator& storageAllocator() { return m_storageSpace.allocator(); }
+ void* allocateAuxiliary(JSCell* intendedOwner, size_t);
+ void* tryAllocateAuxiliary(JSCell* intendedOwner, size_t);
+ void* tryReallocateAuxiliary(JSCell* intendedOwner, void* oldBase, size_t oldSize, size_t newSize);
CheckedBoolean tryAllocateStorage(JSCell* intendedOwner, size_t, void**);
CheckedBoolean tryReallocateStorage(JSCell* intendedOwner, void**, size_t, size_t);
void ascribeOwner(JSCell* intendedOwner, void*);
@@ -230,7 +233,7 @@
void didAllocate(size_t);
bool isPagedOut(double deadline);
- const JITStubRoutineSet& jitStubRoutines() { return m_jitStubRoutines; }
+ const JITStubRoutineSet& jitStubRoutines() { return *m_jitStubRoutines; }
void addReference(JSCell*, ArrayBuffer*);
@@ -238,7 +241,7 @@
StructureIDTable& structureIDTable() { return m_structureIDTable; }
- CodeBlockSet& codeBlockSet() { return m_codeBlocks; }
+ CodeBlockSet& codeBlockSet() { return *m_codeBlocks; }
#if USE(FOUNDATION)
template<typename T> void releaseSoon(RetainPtr<T>&&);
@@ -267,6 +270,7 @@
friend class GCLogging;
friend class GCThread;
friend class HandleSet;
+ friend class HeapUtil;
friend class HeapVerifier;
friend class JITStubRoutine;
friend class LLIntOffsetsExtractor;
@@ -283,6 +287,8 @@
template<typename T> friend void* allocateCell(Heap&);
template<typename T> friend void* allocateCell(Heap&, size_t);
+ void collectWithoutAnySweep(HeapOperation collectionType = AnyCollection);
+
void* allocateWithDestructor(size_t); // For use with objects with destructors.
void* allocateWithoutDestructor(size_t); // For use with objects without destructors.
template<typename ClassType> void* allocateObjectOfType(size_t); // Chooses one of the methods above based on type.
@@ -304,6 +310,7 @@
void flushOldStructureIDTables();
void flushWriteBarrierBuffer();
void stopAllocation();
+ void prepareForMarking();
void markRoots(double gcStartTime, void* stackOrigin, void* stackTop, MachineThreads::RegisterState&);
void gatherStackRoots(ConservativeRoots&, void* stackOrigin, void* stackTop, MachineThreads::RegisterState&);
@@ -348,7 +355,8 @@
void zombifyDeadObjects();
void gatherExtraHeapSnapshotData(HeapProfiler&);
void removeDeadHeapSnapshotNodes(HeapProfiler&);
-
+ void sweepLargeAllocations();
+
void sweepAllLogicallyEmptyWeakBlocks();
bool sweepNextLogicallyEmptyWeakBlock();
@@ -361,6 +369,8 @@
size_t threadVisitCount();
size_t threadBytesVisited();
size_t threadBytesCopied();
+
+ void forEachCodeBlockImpl(const ScopedLambda<bool(CodeBlock*)>&);
const HeapType m_heapType;
const size_t m_ramSize;
@@ -408,8 +418,8 @@
HandleSet m_handleSet;
HandleStack m_handleStack;
- CodeBlockSet m_codeBlocks;
- JITStubRoutineSet m_jitStubRoutines;
+ std::unique_ptr<CodeBlockSet> m_codeBlocks;
+ std::unique_ptr<JITStubRoutineSet> m_jitStubRoutines;
FinalizerOwner m_finalizerOwner;
bool m_isSafeToCollect;
@@ -428,7 +438,7 @@
RefPtr<FullGCActivityCallback> m_fullActivityCallback;
RefPtr<GCActivityCallback> m_edenActivityCallback;
std::unique_ptr<IncrementalSweeper> m_sweeper;
- Vector<MarkedBlock*> m_blockSnapshot;
+ Vector<MarkedBlock::Handle*> m_blockSnapshot;
Vector<HeapObserver*> m_observers;
diff --git a/Source/JavaScriptCore/heap/HeapCell.h b/Source/JavaScriptCore/heap/HeapCell.h
index 242cf45..73feeb1 100644
--- a/Source/JavaScriptCore/heap/HeapCell.h
+++ b/Source/JavaScriptCore/heap/HeapCell.h
@@ -25,8 +25,17 @@
#pragma once
+#include "DestructionMode.h"
+
namespace JSC {
+class CellContainer;
+class Heap;
+class LargeAllocation;
+class MarkedBlock;
+class VM;
+struct AllocatorAttributes;
+
class HeapCell {
public:
enum Kind : int8_t {
@@ -38,6 +47,25 @@
void zap() { *reinterpret_cast<uintptr_t**>(this) = 0; }
bool isZapped() const { return !*reinterpret_cast<uintptr_t* const*>(this); }
+
+ bool isLargeAllocation() const;
+ CellContainer cellContainer() const;
+ MarkedBlock& markedBlock() const;
+ LargeAllocation& largeAllocation() const;
+
+ // If you want performance and you know that your cell is small, you can do this instead:
+ // ASSERT(!cell->isLargeAllocation());
+ // cell->markedBlock().vm()
+ // We currently only use this hack for callees to make ExecState::vm() fast. It's not
+ // recommended to use it for too many other things, since the large allocation cutoff is
+ // a runtime option and its default value is small (400 bytes).
+ Heap* heap() const;
+ VM* vm() const;
+
+ size_t cellSize() const;
+ AllocatorAttributes allocatorAttributes() const;
+ DestructionMode destructionMode() const;
+ Kind cellKind() const;
};
} // namespace JSC
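The small-cell fast path described in the comment above is easiest to see at a call site. A hedged sketch (the helper name is invented for illustration; only the ASSERT-then-markedBlock pattern comes from the comment):

    // Sketch: valid only when the cell is known to be under the large-allocation
    // cutoff, so it must live inside a MarkedBlock.
    VM* fastVMFor(HeapCell* cell)
    {
        ASSERT(!cell->isLargeAllocation());
        return cell->markedBlock().vm(); // skips the branch inside HeapCell::vm()
    }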
diff --git a/Source/JavaScriptCore/heap/HeapCellInlines.h b/Source/JavaScriptCore/heap/HeapCellInlines.h
new file mode 100644
index 0000000..dd36d12
--- /dev/null
+++ b/Source/JavaScriptCore/heap/HeapCellInlines.h
@@ -0,0 +1,94 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "CellContainer.h"
+#include "HeapCell.h"
+#include "LargeAllocation.h"
+#include "MarkedBlock.h"
+
+namespace JSC {
+
+ALWAYS_INLINE bool HeapCell::isLargeAllocation() const
+{
+ return LargeAllocation::isLargeAllocation(const_cast<HeapCell*>(this));
+}
+
+ALWAYS_INLINE CellContainer HeapCell::cellContainer() const
+{
+ if (isLargeAllocation())
+ return largeAllocation();
+ return markedBlock();
+}
+
+ALWAYS_INLINE MarkedBlock& HeapCell::markedBlock() const
+{
+ return *MarkedBlock::blockFor(this);
+}
+
+ALWAYS_INLINE LargeAllocation& HeapCell::largeAllocation() const
+{
+ return *LargeAllocation::fromCell(const_cast<HeapCell*>(this));
+}
+
+ALWAYS_INLINE Heap* HeapCell::heap() const
+{
+ return &vm()->heap;
+}
+
+ALWAYS_INLINE VM* HeapCell::vm() const
+{
+ if (isLargeAllocation())
+ return largeAllocation().vm();
+ return markedBlock().vm();
+}
+
+ALWAYS_INLINE size_t HeapCell::cellSize() const
+{
+ if (isLargeAllocation())
+ return largeAllocation().cellSize();
+ return markedBlock().cellSize();
+}
+
+ALWAYS_INLINE AllocatorAttributes HeapCell::allocatorAttributes() const
+{
+ if (isLargeAllocation())
+ return largeAllocation().attributes();
+ return markedBlock().attributes();
+}
+
+ALWAYS_INLINE DestructionMode HeapCell::destructionMode() const
+{
+ return allocatorAttributes().destruction;
+}
+
+ALWAYS_INLINE HeapCell::Kind HeapCell::cellKind() const
+{
+ return allocatorAttributes().cellKind;
+}
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/heap/HeapInlines.h b/Source/JavaScriptCore/heap/HeapInlines.h
index 3367957..842fc09 100644
--- a/Source/JavaScriptCore/heap/HeapInlines.h
+++ b/Source/JavaScriptCore/heap/HeapInlines.h
@@ -28,6 +28,9 @@
#include "CopyBarrier.h"
#include "Heap.h"
+#include "HeapCellInlines.h"
+#include "IndexingHeader.h"
+#include "JSCallee.h"
#include "JSCell.h"
#include "Structure.h"
#include <type_traits>
@@ -59,9 +62,9 @@
return m_operationInProgress == FullCollection || m_operationInProgress == EdenCollection;
}
-inline Heap* Heap::heap(const JSCell* cell)
+ALWAYS_INLINE Heap* Heap::heap(const HeapCell* cell)
{
- return MarkedBlock::blockFor(cell)->heap();
+ return cell->heap();
}
inline Heap* Heap::heap(const JSValue v)
@@ -71,24 +74,51 @@
return heap(v.asCell());
}
-inline bool Heap::isLive(const void* cell)
+inline bool Heap::isLive(const void* rawCell)
{
- return MarkedBlock::blockFor(cell)->isLiveCell(cell);
+ HeapCell* cell = bitwise_cast<HeapCell*>(rawCell);
+ if (cell->isLargeAllocation())
+ return cell->largeAllocation().isLive();
+ MarkedBlock& block = cell->markedBlock();
+ block.flipIfNecessary(block.vm()->heap.objectSpace().version());
+ return block.handle().isLiveCell(cell);
}
-inline bool Heap::isMarked(const void* cell)
+ALWAYS_INLINE bool Heap::isMarked(const void* rawCell)
{
- return MarkedBlock::blockFor(cell)->isMarked(cell);
+ HeapCell* cell = bitwise_cast<HeapCell*>(rawCell);
+ if (cell->isLargeAllocation())
+ return cell->largeAllocation().isMarked();
+ MarkedBlock& block = cell->markedBlock();
+ block.flipIfNecessary(block.vm()->heap.objectSpace().version());
+ return block.isMarked(cell);
}
-inline bool Heap::testAndSetMarked(const void* cell)
+ALWAYS_INLINE bool Heap::testAndSetMarked(int64_t version, const void* rawCell)
{
- return MarkedBlock::blockFor(cell)->testAndSetMarked(cell);
+ HeapCell* cell = bitwise_cast<HeapCell*>(rawCell);
+ if (cell->isLargeAllocation())
+ return cell->largeAllocation().testAndSetMarked();
+ MarkedBlock& block = cell->markedBlock();
+ block.flipIfNecessaryConcurrently(version);
+ return block.testAndSetMarked(cell);
}
-inline void Heap::setMarked(const void* cell)
+inline void Heap::setMarked(const void* rawCell)
{
- MarkedBlock::blockFor(cell)->setMarked(cell);
+ HeapCell* cell = bitwise_cast<HeapCell*>(rawCell);
+ if (cell->isLargeAllocation()) {
+ cell->largeAllocation().setMarked();
+ return;
+ }
+ MarkedBlock& block = cell->markedBlock();
+ block.flipIfNecessary(block.vm()->heap.objectSpace().version());
+ block.setMarked(cell);
+}
+
+ALWAYS_INLINE size_t Heap::cellSize(const void* rawCell)
+{
+ return bitwise_cast<HeapCell*>(rawCell)->cellSize();
}
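A note on the flipIfNecessary() calls threaded through these predicates: rather than eagerly clearing mark bits in every block at the start of a full collection, MarkedSpace bumps a version (see m_objectSpace.flip() in Heap::clearLivenessData()) and each block clears its bits lazily the first time it is touched under the new version. A hedged sketch of the idea, with invented field names and a stand-in bitmap:

    #include <bitset>
    #include <cstdint>

    // Sketch only; not the real MarkedBlock. Lazy mark-bit clearing by version.
    struct BlockSketch {
        int64_t version { 0 };       // version at which the marks were last valid
        std::bitset<1024> marks;     // stand-in for the per-atom mark bits
        void flipIfNecessary(int64_t heapVersion)
        {
            if (version == heapVersion)
                return;              // marks already belong to this cycle
            marks.reset();           // clear lazily, only on first touch
            version = heapVersion;
        }
    };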
inline void Heap::writeBarrier(const JSCell* from, JSValue to)
@@ -165,12 +195,9 @@
deprecatedReportExtraMemorySlowCase(size);
}
-template<typename Functor> inline void Heap::forEachCodeBlock(const Functor& functor)
+template<typename Functor> inline void Heap::forEachCodeBlock(const Functor& func)
{
- // We don't know the full set of CodeBlocks until compilation has terminated.
- completeAllJITPlans();
-
- return m_codeBlocks.iterate<Functor>(functor);
+ forEachCodeBlockImpl(scopedLambdaRef<bool(CodeBlock*)>(func));
}
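This rewrite is what lets Heap.h forward-declare CodeBlockSet instead of including CodeBlockSet.h: the template wrapper stays inline, while the loop that needs CodeBlockSet's definition moves behind the non-template forEachCodeBlockImpl() in Heap.cpp. Call sites do not change; a hedged usage sketch (the meaning of the bool return is assumed from the ScopedLambda<bool(CodeBlock*)> signature, not spelled out in this patch):

    heap.forEachCodeBlock([] (CodeBlock* codeBlock) {
        dataLog(*codeBlock, "\n");
        return false; // assumed protocol: false means keep iterating
    });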
template<typename Functor> inline void Heap::forEachProtectedCell(const Functor& functor)
@@ -199,7 +226,7 @@
}
template<typename ClassType>
-void* Heap::allocateObjectOfType(size_t bytes)
+inline void* Heap::allocateObjectOfType(size_t bytes)
{
// JSCell::classInfo() expects objects allocated with normal destructor to derive from JSDestructibleObject.
ASSERT((!ClassType::needsDestruction || (ClassType::StructureFlags & StructureIsImmortal) || std::is_convertible<ClassType, JSDestructibleObject>::value));
@@ -210,7 +237,7 @@
}
template<typename ClassType>
-MarkedSpace::Subspace& Heap::subspaceForObjectOfType()
+inline MarkedSpace::Subspace& Heap::subspaceForObjectOfType()
{
// JSCell::classInfo() expects objects allocated with normal destructor to derive from JSDestructibleObject.
ASSERT((!ClassType::needsDestruction || (ClassType::StructureFlags & StructureIsImmortal) || std::is_convertible<ClassType, JSDestructibleObject>::value));
@@ -221,14 +248,50 @@
}
template<typename ClassType>
-MarkedAllocator& Heap::allocatorForObjectOfType(size_t bytes)
+inline MarkedAllocator* Heap::allocatorForObjectOfType(size_t bytes)
{
// JSCell::classInfo() expects objects allocated with normal destructor to derive from JSDestructibleObject.
ASSERT((!ClassType::needsDestruction || (ClassType::StructureFlags & StructureIsImmortal) || std::is_convertible<ClassType, JSDestructibleObject>::value));
-
+
+ MarkedAllocator* result;
if (ClassType::needsDestruction)
- return allocatorForObjectWithDestructor(bytes);
- return allocatorForObjectWithoutDestructor(bytes);
+ result = allocatorForObjectWithDestructor(bytes);
+ else
+ result = allocatorForObjectWithoutDestructor(bytes);
+
+ ASSERT(result || !ClassType::info()->isSubClassOf(JSCallee::info()));
+ return result;
+}
+
+inline void* Heap::allocateAuxiliary(JSCell* intendedOwner, size_t bytes)
+{
+ void* result = m_objectSpace.allocateAuxiliary(bytes);
+#if ENABLE(ALLOCATION_LOGGING)
+ dataLogF("JSC GC allocating %lu bytes of auxiliary for %p: %p.\n", bytes, intendedOwner, result);
+#else
+ UNUSED_PARAM(intendedOwner);
+#endif
+ return result;
+}
+
+inline void* Heap::tryAllocateAuxiliary(JSCell* intendedOwner, size_t bytes)
+{
+ void* result = m_objectSpace.tryAllocateAuxiliary(bytes);
+#if ENABLE(ALLOCATION_LOGGING)
+ dataLogF("JSC GC allocating %lu bytes of auxiliary for %p: %p.\n", bytes, intendedOwner, result);
+#else
+ UNUSED_PARAM(intendedOwner);
+#endif
+ return result;
+}
+
+inline void* Heap::tryReallocateAuxiliary(JSCell* intendedOwner, void* oldBase, size_t oldSize, size_t newSize)
+{
+ void* newBase = tryAllocateAuxiliary(intendedOwner, newSize);
+ if (!newBase)
+ return nullptr;
+ memcpy(newBase, oldBase, oldSize);
+ return newBase;
}
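Note that tryReallocateAuxiliary() never frees oldBase: marked space has no explicit deallocation, so the old butterfly simply becomes unreachable and is reclaimed by a later collection. A hedged usage sketch (variable names invented):

    // Sketch: growing a cell's out-of-line storage.
    void* newBase = heap.tryReallocateAuxiliary(owner, oldBase, oldSize, newSize);
    if (!newBase)
        return false; // allocation failed; oldBase is still fully valid
    // oldBase is abandoned rather than freed; the GC reclaims it once dead.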
inline CheckedBoolean Heap::tryAllocateStorage(JSCell* intendedOwner, size_t bytes, void** outPtr)
@@ -354,33 +417,6 @@
#endif
}
-inline bool Heap::isPointerGCObject(TinyBloomFilter filter, MarkedBlockSet& markedBlockSet, void* pointer)
-{
- MarkedBlock* candidate = MarkedBlock::blockFor(pointer);
- if (filter.ruleOut(bitwise_cast<Bits>(candidate))) {
- ASSERT(!candidate || !markedBlockSet.set().contains(candidate));
- return false;
- }
-
- if (!MarkedBlock::isAtomAligned(pointer))
- return false;
-
- if (!markedBlockSet.set().contains(candidate))
- return false;
-
- if (!candidate->isLiveCell(pointer))
- return false;
-
- return true;
-}
-
-inline bool Heap::isValueGCObject(TinyBloomFilter filter, MarkedBlockSet& markedBlockSet, JSValue value)
-{
- if (!value.isCell())
- return false;
- return isPointerGCObject(filter, markedBlockSet, static_cast<void*>(value.asCell()));
-}
-
} // namespace JSC
#endif // HeapInlines_h
diff --git a/Source/JavaScriptCore/heap/HeapOperation.cpp b/Source/JavaScriptCore/heap/HeapOperation.cpp
new file mode 100644
index 0000000..8715314
--- /dev/null
+++ b/Source/JavaScriptCore/heap/HeapOperation.cpp
@@ -0,0 +1,60 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "HeapOperation.h"
+
+#include <wtf/PrintStream.h>
+
+namespace WTF {
+
+using namespace JSC;
+
+void printInternal(PrintStream& out, HeapOperation operation)
+{
+ switch (operation) {
+ case NoOperation:
+ out.print("None");
+ return;
+ case Allocation:
+ out.print("Alloc");
+ return;
+ case FullCollection:
+ out.print("Full");
+ return;
+ case EdenCollection:
+ out.print("Eden");
+ return;
+ case AnyCollection:
+ out.print("Any");
+ return;
+ }
+ RELEASE_ASSERT_NOT_REACHED();
+}
+
+} // namespace WTF
+
+
+
diff --git a/Source/JavaScriptCore/heap/HeapOperation.h b/Source/JavaScriptCore/heap/HeapOperation.h
index 272e3c0..03d14aa 100644
--- a/Source/JavaScriptCore/heap/HeapOperation.h
+++ b/Source/JavaScriptCore/heap/HeapOperation.h
@@ -32,4 +32,12 @@
} // namespace JSC
+namespace WTF {
+
+class PrintStream;
+
+void printInternal(PrintStream& out, JSC::HeapOperation);
+
+} // namespace WTF
+
#endif // HeapOperation_h
diff --git a/Source/JavaScriptCore/heap/HeapUtil.h b/Source/JavaScriptCore/heap/HeapUtil.h
new file mode 100644
index 0000000..9c78f1e
--- /dev/null
+++ b/Source/JavaScriptCore/heap/HeapUtil.h
@@ -0,0 +1,190 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+namespace JSC {
+
+// Are you tired of waiting for all of WebKit to build because you changed the implementation of a
+// function in HeapInlines.h? Does it bother you that you're waiting on rebuilding the JS DOM
+// bindings even though your change is in a function called from only 2 .cpp files? Then HeapUtil.h
+// is for you! Everything in this class should be a static method that takes a Heap& if needed.
+// This is a friend of Heap, so you can access all of Heap's privates.
+//
+// This ends up being an issue because Heap exposes a lot of methods that ought to be inline for
+// performance or that must be inline because they are templates. This class ought to contain
+// methods that are used for the implementation of the collector, or for unusual clients that need
+// to reach deep into the collector for some reason. Don't put things in here that would cause you
+// to have to include it from more than a handful of places, since that would defeat the purpose.
+// This class isn't here to look pretty. It's to let us hack the GC more easily!
+
+class HeapUtil {
+public:
+ // This function must be run after stopAllocation() is called and
+ // before liveness data is cleared to be accurate.
+ template<typename Func>
+ static void findGCObjectPointersForMarking(
+ Heap& heap, int64_t heapVersion, TinyBloomFilter filter, void* passedPointer,
+ const Func& func)
+ {
+ const HashSet<MarkedBlock*>& set = heap.objectSpace().blocks().set();
+
+ char* pointer = static_cast<char*>(passedPointer);
+
+ // It could point to a large allocation.
+ if (heap.objectSpace().largeAllocationsForThisCollectionSize()) {
+ if (heap.objectSpace().largeAllocationsForThisCollectionBegin()[0]->aboveLowerBound(pointer)
+ && heap.objectSpace().largeAllocationsForThisCollectionEnd()[-1]->belowUpperBound(pointer)) {
+ LargeAllocation** result = approximateBinarySearch<LargeAllocation*>(
+ heap.objectSpace().largeAllocationsForThisCollectionBegin(),
+ heap.objectSpace().largeAllocationsForThisCollectionSize(),
+ LargeAllocation::fromCell(pointer),
+ [] (LargeAllocation** ptr) -> LargeAllocation* { return *ptr; });
+ if (result) {
+ if (result > heap.objectSpace().largeAllocationsForThisCollectionBegin()
+ && result[-1]->contains(pointer))
+ func(result[-1]->cell());
+ if (result[0]->contains(pointer))
+ func(result[0]->cell());
+ if (result + 1 < heap.objectSpace().largeAllocationsForThisCollectionEnd()
+ && result[1]->contains(pointer))
+ func(result[1]->cell());
+ }
+ }
+ }
+
+ MarkedBlock* candidate = MarkedBlock::blockFor(pointer);
+ // It's possible for a butterfly pointer to point past the end of a butterfly. Check this now.
+ if (pointer <= bitwise_cast<char*>(candidate) + sizeof(IndexingHeader)) {
+ // We may be interested in the last cell of the previous MarkedBlock.
+ char* previousPointer = pointer - sizeof(IndexingHeader) - 1;
+ MarkedBlock* previousCandidate = MarkedBlock::blockFor(previousPointer);
+ if (!filter.ruleOut(bitwise_cast<Bits>(previousCandidate))
+ && set.contains(previousCandidate)
+ && previousCandidate->handle().cellKind() == HeapCell::Auxiliary) {
+ previousCandidate->flipIfNecessary(heapVersion);
+ previousPointer = static_cast<char*>(previousCandidate->handle().cellAlign(previousPointer));
+ if (previousCandidate->handle().isLiveCell(previousPointer))
+ func(previousPointer);
+ }
+ }
+
+ if (filter.ruleOut(bitwise_cast<Bits>(candidate))) {
+ ASSERT(!candidate || !set.contains(candidate));
+ return;
+ }
+
+ if (!set.contains(candidate))
+ return;
+
+ candidate->flipIfNecessary(heapVersion);
+
+ auto tryPointer = [&] (void* pointer) {
+ if (candidate->handle().isLiveCell(pointer))
+ func(pointer);
+ };
+
+ if (candidate->handle().cellKind() == HeapCell::JSCell) {
+ if (!MarkedBlock::isAtomAligned(pointer))
+ return;
+
+ tryPointer(pointer);
+ return;
+ }
+
+ // A butterfly could point into the middle of an object.
+ char* alignedPointer = static_cast<char*>(candidate->handle().cellAlign(pointer));
+ tryPointer(alignedPointer);
+
+ // Also, a butterfly could point at the end of an object plus sizeof(IndexingHeader). In that
+ // case, this is pointing to the object to the right of the one we should be marking.
+ if (candidate->atomNumber(alignedPointer) > MarkedBlock::firstAtom()
+ && pointer <= alignedPointer + sizeof(IndexingHeader))
+ tryPointer(alignedPointer - candidate->cellSize());
+ }
+
+ static bool isPointerGCObjectJSCell(
+ Heap& heap, TinyBloomFilter filter, const void* pointer)
+ {
+ // It could point to a large allocation.
+ const Vector<LargeAllocation*>& largeAllocations = heap.objectSpace().largeAllocations();
+ if (!largeAllocations.isEmpty()) {
+ if (largeAllocations[0]->aboveLowerBound(pointer)
+ && largeAllocations.last()->belowUpperBound(pointer)) {
+ LargeAllocation*const* result = approximateBinarySearch<LargeAllocation*const>(
+ largeAllocations.begin(), largeAllocations.size(),
+ LargeAllocation::fromCell(pointer),
+ [] (LargeAllocation*const* ptr) -> LargeAllocation* { return *ptr; });
+ if (result) {
+ if (result > largeAllocations.begin()
+ && result[-1]->cell() == pointer
+ && result[-1]->attributes().cellKind == HeapCell::JSCell)
+ return true;
+ if (result[0]->cell() == pointer
+ && result[0]->attributes().cellKind == HeapCell::JSCell)
+ return true;
+ if (result + 1 < largeAllocations.end()
+ && result[1]->cell() == pointer
+ && result[1]->attributes().cellKind == HeapCell::JSCell)
+ return true;
+ }
+ }
+ }
+
+ const HashSet<MarkedBlock*>& set = heap.objectSpace().blocks().set();
+
+ MarkedBlock* candidate = MarkedBlock::blockFor(pointer);
+ if (filter.ruleOut(bitwise_cast<Bits>(candidate))) {
+ ASSERT(!candidate || !set.contains(candidate));
+ return false;
+ }
+
+ if (!MarkedBlock::isAtomAligned(pointer))
+ return false;
+
+ if (!set.contains(candidate))
+ return false;
+
+ if (candidate->handle().cellKind() != HeapCell::JSCell)
+ return false;
+
+ candidate->flipIfNecessary();
+ if (!candidate->handle().isLiveCell(pointer))
+ return false;
+
+ return true;
+ }
+
+ static bool isValueGCObject(
+ Heap& heap, TinyBloomFilter filter, JSValue value)
+ {
+ if (!value.isCell())
+ return false;
+ return isPointerGCObjectJSCell(heap, filter, static_cast<void*>(value.asCell()));
+ }
+};
+
+} // namespace JSC
+
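The cellAlign()/IndexingHeader probing above exists because a butterfly pointer does not point at the base of its allocation: out-of-line named properties sit at negative offsets, then the 8-byte IndexingHeader, then indexed storage, with the butterfly pointer aimed at the indexed-storage boundary. A sketch of the layout being decoded (the 8-byte IndexingHeader size is the same assumption noted in LargeAllocation.h):

    allocation base                butterfly pointer
    |                              |
    v                              v
    [ out-of-line properties ... ][IndexingHeader][ indexed storage ... ]

So a conservatively scanned register holding a butterfly with no indexed storage can legally point up to sizeof(IndexingHeader) past the end of a cell, possibly into the first bytes of the next cell or even the next MarkedBlock, which is why the scan also probes the cell (and block) immediately to the left of the aligned candidate.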
diff --git a/Source/JavaScriptCore/heap/IncrementalSweeper.cpp b/Source/JavaScriptCore/heap/IncrementalSweeper.cpp
index eee7f85..5204e34 100644
--- a/Source/JavaScriptCore/heap/IncrementalSweeper.cpp
+++ b/Source/JavaScriptCore/heap/IncrementalSweeper.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -128,7 +128,8 @@
bool IncrementalSweeper::sweepNextBlock()
{
while (!m_blocksToSweep.isEmpty()) {
- MarkedBlock* block = m_blocksToSweep.takeLast();
+ MarkedBlock::Handle* block = m_blocksToSweep.takeLast();
+ block->setIsOnBlocksToSweep(false);
if (!block->needsSweeping())
continue;
diff --git a/Source/JavaScriptCore/heap/IncrementalSweeper.h b/Source/JavaScriptCore/heap/IncrementalSweeper.h
index 4447bca..c610c8d 100644
--- a/Source/JavaScriptCore/heap/IncrementalSweeper.h
+++ b/Source/JavaScriptCore/heap/IncrementalSweeper.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -27,6 +27,7 @@
#define IncrementalSweeper_h
#include "HeapTimer.h"
+#include "MarkedBlock.h"
#include <wtf/Vector.h>
namespace JSC {
@@ -55,7 +56,7 @@
void scheduleTimer();
void cancelTimer();
- Vector<MarkedBlock*>& m_blocksToSweep;
+ Vector<MarkedBlock::Handle*>& m_blocksToSweep;
#endif
};
diff --git a/Source/JavaScriptCore/heap/LargeAllocation.cpp b/Source/JavaScriptCore/heap/LargeAllocation.cpp
new file mode 100644
index 0000000..09ca109
--- /dev/null
+++ b/Source/JavaScriptCore/heap/LargeAllocation.cpp
@@ -0,0 +1,112 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "LargeAllocation.h"
+
+#include "Heap.h"
+#include "JSCInlines.h"
+#include "Operations.h"
+
+namespace JSC {
+
+LargeAllocation* LargeAllocation::tryCreate(Heap& heap, size_t size, const AllocatorAttributes& attributes)
+{
+ void* space = tryFastAlignedMalloc(alignment, headerSize() + size);
+ if (!space)
+ return nullptr;
+ if (scribbleFreeCells())
+ scribble(space, size);
+ return new (NotNull, space) LargeAllocation(heap, size, attributes);
+}
+
+LargeAllocation::LargeAllocation(Heap& heap, size_t size, const AllocatorAttributes& attributes)
+ : m_cellSize(size)
+ , m_isNewlyAllocated(true)
+ , m_hasValidCell(true)
+ , m_attributes(attributes)
+ , m_weakSet(heap.vm(), *this)
+{
+ m_isMarked.store(0);
+}
+
+void LargeAllocation::lastChanceToFinalize()
+{
+ m_weakSet.lastChanceToFinalize();
+ clearMarked();
+ clearNewlyAllocated();
+ sweep();
+}
+
+void LargeAllocation::shrink()
+{
+ m_weakSet.shrink();
+}
+
+void LargeAllocation::visitWeakSet(HeapRootVisitor& visitor)
+{
+ m_weakSet.visit(visitor);
+}
+
+void LargeAllocation::reapWeakSet()
+{
+ return m_weakSet.reap();
+}
+
+void LargeAllocation::flip()
+{
+ ASSERT(heap()->operationInProgress() == FullCollection);
+ clearMarked();
+}
+
+bool LargeAllocation::isEmpty()
+{
+ return !isMarked() && m_weakSet.isEmpty() && !isNewlyAllocated();
+}
+
+void LargeAllocation::sweep()
+{
+ m_weakSet.sweep();
+
+ if (m_hasValidCell && !isLive()) {
+ if (m_attributes.destruction == NeedsDestruction)
+ static_cast<JSCell*>(cell())->callDestructor(*vm());
+ m_hasValidCell = false;
+ }
+}
+
+void LargeAllocation::destroy()
+{
+ this->~LargeAllocation();
+ fastAlignedFree(this);
+}
+
+void LargeAllocation::dump(PrintStream& out) const
+{
+ out.print(RawPointer(this), ":(cell at ", RawPointer(cell()), " with size ", m_cellSize, " and attributes ", m_attributes, ")");
+}
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/heap/LargeAllocation.h b/Source/JavaScriptCore/heap/LargeAllocation.h
new file mode 100644
index 0000000..cf353b0
--- /dev/null
+++ b/Source/JavaScriptCore/heap/LargeAllocation.h
@@ -0,0 +1,153 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "MarkedBlock.h"
+#include "WeakSet.h"
+
+namespace JSC {
+
+// WebKit has a good malloc that already knows what to do for large allocations. The GC shouldn't
+// have to think about such things. That's where LargeAllocation comes in. We will allocate large
+// objects directly using malloc, and put the LargeAllocation header just before them. We can detect
+// when a HeapCell* is a LargeAllocation because it will have the MarkedBlock::atomSize / 2 bit set.
+
+class LargeAllocation {
+public:
+ static LargeAllocation* tryCreate(Heap&, size_t, const AllocatorAttributes&);
+
+ static LargeAllocation* fromCell(const void* cell)
+ {
+ return bitwise_cast<LargeAllocation*>(bitwise_cast<char*>(cell) - headerSize());
+ }
+
+ HeapCell* cell() const
+ {
+ return bitwise_cast<HeapCell*>(bitwise_cast<char*>(this) + headerSize());
+ }
+
+ static bool isLargeAllocation(HeapCell* cell)
+ {
+ return bitwise_cast<uintptr_t>(cell) & halfAlignment;
+ }
+
+ void lastChanceToFinalize();
+
+ Heap* heap() const { return m_weakSet.heap(); }
+ VM* vm() const { return m_weakSet.vm(); }
+ WeakSet& weakSet() { return m_weakSet; }
+
+ void shrink();
+
+ void visitWeakSet(HeapRootVisitor&);
+ void reapWeakSet();
+
+ void clearNewlyAllocated() { m_isNewlyAllocated = false; }
+ void flip();
+
+ bool isNewlyAllocated() const { return m_isNewlyAllocated; }
+ ALWAYS_INLINE bool isMarked() { return m_isMarked.load(std::memory_order_relaxed); }
+ bool isMarkedOrNewlyAllocated() { return isMarked() || isNewlyAllocated(); }
+ bool isLive() { return isMarkedOrNewlyAllocated(); }
+
+ bool hasValidCell() const { return m_hasValidCell; }
+
+ bool isEmpty();
+
+ size_t cellSize() const { return m_cellSize; }
+
+ bool aboveLowerBound(const void* rawPtr)
+ {
+ char* ptr = bitwise_cast<char*>(rawPtr);
+ char* begin = bitwise_cast<char*>(cell());
+ return ptr >= begin;
+ }
+
+ bool belowUpperBound(const void* rawPtr)
+ {
+ char* ptr = bitwise_cast<char*>(rawPtr);
+ char* begin = bitwise_cast<char*>(cell());
+ char* end = begin + cellSize();
+ // We cannot #include IndexingHeader.h here for header-layering reasons. The fact that IndexingHeader is 8
+ // bytes is wired deep into our engine, so this isn't so bad.
+ size_t sizeOfIndexingHeader = 8;
+ return ptr <= end + sizeOfIndexingHeader;
+ }
+
+ bool contains(const void* rawPtr)
+ {
+ return aboveLowerBound(rawPtr) && belowUpperBound(rawPtr);
+ }
+
+ const AllocatorAttributes& attributes() const { return m_attributes; }
+
+ void flipIfNecessary(uint64_t) { }
+ void flipIfNecessaryConcurrently(uint64_t) { }
+
+ ALWAYS_INLINE bool testAndSetMarked()
+ {
+ // This method is usually called when the object is already marked. This avoids us
+ // having to CAS in that case. It's profitable to reduce the total amount of CAS
+ // traffic.
+ if (isMarked())
+ return true;
+ return !m_isMarked.compareExchangeStrong(false, true);
+ }
+ ALWAYS_INLINE bool testAndSetMarked(HeapCell*) { return testAndSetMarked(); }
+ void setMarked() { m_isMarked.store(true); }
+ void clearMarked() { m_isMarked.store(false); }
+
+ void noteMarked() { }
+
+ void sweep();
+
+ void destroy();
+
+ void dump(PrintStream&) const;
+
+private:
+ LargeAllocation(Heap&, size_t, const AllocatorAttributes&);
+
+ static const unsigned alignment = MarkedBlock::atomSize;
+ static const unsigned halfAlignment = alignment / 2;
+
+ static unsigned headerSize();
+
+ size_t m_cellSize;
+ bool m_isNewlyAllocated;
+ bool m_hasValidCell;
+ Atomic<bool> m_isMarked;
+ AllocatorAttributes m_attributes;
+ WeakSet m_weakSet;
+};
+
+inline unsigned LargeAllocation::headerSize()
+{
+ return ((sizeof(LargeAllocation) + halfAlignment - 1) & ~(halfAlignment - 1)) | halfAlignment;
+}
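+
+// A sketch of why the bit trick works (sizes illustrative, not guaranteed):
+// rounding sizeof(LargeAllocation) up to a multiple of halfAlignment and then
+// OR-ing in halfAlignment always yields a headerSize() that is 8 mod 16. So,
+// assuming the base of the underlying allocation is atomSize-aligned, cell()
+// == base + headerSize() always has the MarkedBlock::atomSize / 2 bit set,
+// which is exactly the bit that isLargeAllocation() tests. Cells inside a
+// MarkedBlock are atomSize-aligned, so they never have that bit set.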
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/heap/MarkedAllocator.cpp b/Source/JavaScriptCore/heap/MarkedAllocator.cpp
index d9e03af..375395f 100644
--- a/Source/JavaScriptCore/heap/MarkedAllocator.cpp
+++ b/Source/JavaScriptCore/heap/MarkedAllocator.cpp
@@ -30,17 +30,31 @@
#include "Heap.h"
#include "IncrementalSweeper.h"
#include "JSCInlines.h"
+#include "SuperSampler.h"
#include "VM.h"
#include <wtf/CurrentTime.h>
namespace JSC {
-static bool isListPagedOut(double deadline, DoublyLinkedList<MarkedBlock>& list)
+MarkedAllocator::MarkedAllocator(Heap* heap, MarkedSpace* markedSpace, size_t cellSize, const AllocatorAttributes& attributes)
+ : m_currentBlock(0)
+ , m_lastActiveBlock(0)
+ , m_nextBlockToSweep(nullptr)
+ , m_cellSize(static_cast<unsigned>(cellSize))
+ , m_attributes(attributes)
+ , m_heap(heap)
+ , m_markedSpace(markedSpace)
+{
+}
+
+bool MarkedAllocator::isPagedOut(double deadline)
{
unsigned itersSinceLastTimeCheck = 0;
- MarkedBlock* block = list.head();
+ MarkedBlock::Handle* block = m_blockList.begin();
while (block) {
- block = block->next();
+ block = filterNextBlock(block->next());
+ if (block)
+ block->flipIfNecessary(); // Forces us to touch the memory of the block, but has no semantic effect.
++itersSinceLastTimeCheck;
if (itersSinceLastTimeCheck >= Heap::s_timeCheckResolution) {
double currentTime = WTF::monotonicallyIncreasingTime();
@@ -52,79 +66,104 @@
return false;
}
-bool MarkedAllocator::isPagedOut(double deadline)
+void MarkedAllocator::retire(MarkedBlock::Handle* block)
{
- if (isListPagedOut(deadline, m_blockList))
- return true;
- return false;
-}
-
-void MarkedAllocator::retire(MarkedBlock* block, MarkedBlock::FreeList& freeList)
-{
- m_blockList.remove(block);
+ LockHolder locker(m_lock); // This will be called in parallel during GC.
+ if (block == m_currentBlock) {
+        // This happens when the mutator is running. We finished a full GC and marked too few
+        // things in this block to retire it. Then we started allocating in this block. Then a
+        // barrier ran, which marked an
+ // object in this block, which put it over the retirement threshold. It's OK to simply do
+ // nothing in that case.
+ return;
+ }
+ if (block == m_lastActiveBlock) {
+ // This can easily happen during marking. It would be easy to handle this case, but it's
+ // just as easy to ignore it.
+ return;
+ }
+ RELEASE_ASSERT(block->isOnList());
+ if (block == m_nextBlockToSweep)
+ m_nextBlockToSweep = filterNextBlock(block->next());
+ block->remove();
m_retiredBlocks.push(block);
- block->didRetireBlock(freeList);
}
-inline void* MarkedAllocator::tryAllocateHelper(size_t bytes)
+MarkedBlock::Handle* MarkedAllocator::filterNextBlock(MarkedBlock::Handle* block)
{
+ if (block == m_blockList.end())
+ return nullptr;
+ return block;
+}
+
+void MarkedAllocator::setNextBlockToSweep(MarkedBlock::Handle* block)
+{
+ m_nextBlockToSweep = filterNextBlock(block);
+}
+
+void* MarkedAllocator::tryAllocateWithoutCollectingImpl()
+{
+ SuperSamplerScope superSamplerScope(false);
+
if (m_currentBlock) {
ASSERT(m_currentBlock == m_nextBlockToSweep);
m_currentBlock->didConsumeFreeList();
- m_nextBlockToSweep = m_currentBlock->next();
+ setNextBlockToSweep(m_currentBlock->next());
}
+
+ setFreeList(FreeList());
- MarkedBlock* next;
- for (MarkedBlock*& block = m_nextBlockToSweep; block; block = next) {
- next = block->next();
+ RELEASE_ASSERT(m_nextBlockToSweep != m_blockList.end());
- MarkedBlock::FreeList freeList = block->sweep(MarkedBlock::SweepToFreeList);
+ MarkedBlock::Handle* next;
+ for (MarkedBlock::Handle*& block = m_nextBlockToSweep; block; block = next) {
+ next = filterNextBlock(block->next());
+
+        // It would be super weird if the blocks we are sweeping had anything allocated during
+        // this cycle.
+ ASSERT(!block->hasAnyNewlyAllocated());
- double utilization = ((double)MarkedBlock::blockSize - (double)freeList.bytes) / (double)MarkedBlock::blockSize;
- if (utilization >= Options::minMarkedBlockUtilization()) {
- ASSERT(freeList.bytes || !freeList.head);
- retire(block, freeList);
- continue;
- }
-
- if (bytes > block->cellSize()) {
- block->stopAllocating(freeList);
+ FreeList freeList = block->sweep(MarkedBlock::Handle::SweepToFreeList);
+
+        // It's possible to stumble on a completely full block. Marking tries to retire these, but
+ // that algorithm is racy and may forget to do it sometimes.
+ if (freeList.allocationWillFail()) {
+ ASSERT(block->isFreeListed());
+ block->unsweepWithNoNewlyAllocated();
+ ASSERT(block->isMarked());
+ retire(block);
continue;
}
m_currentBlock = block;
- m_freeList = freeList;
+ setFreeList(freeList);
break;
}
- if (!m_freeList.head) {
+ if (!m_freeList) {
m_currentBlock = 0;
return 0;
}
- ASSERT(m_freeList.head);
- void* head = tryPopFreeList(bytes);
- ASSERT(head);
+ void* result;
+ if (m_freeList.remaining) {
+ unsigned cellSize = m_cellSize;
+ m_freeList.remaining -= cellSize;
+ result = m_freeList.payloadEnd - m_freeList.remaining - cellSize;
+ } else {
+ FreeCell* head = m_freeList.head;
+ m_freeList.head = head->next;
+ result = head;
+ }
+ RELEASE_ASSERT(result);
m_markedSpace->didAllocateInBlock(m_currentBlock);
- return head;
+ return result;
}
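+
+// A note on the two FreeList shapes consumed above (a reading aid; FreeList.h
+// holds the authoritative definition). A "bump" free list describes a fully
+// empty run of cells as (payloadEnd, remaining): each allocation decrements
+// remaining by cellSize and returns payloadEnd - remaining - cellSize, so the
+// first allocation returns the start of the payload and cells are handed out
+// front-to-back with no per-cell links. A "list" free list is the classic
+// chain of FreeCell nodes threaded through dead cells, popped off the head.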
-inline void* MarkedAllocator::tryPopFreeList(size_t bytes)
-{
- ASSERT(m_currentBlock);
- if (bytes > m_currentBlock->cellSize())
- return 0;
-
- MarkedBlock::FreeCell* head = m_freeList.head;
- m_freeList.head = head->next;
- return head;
-}
-
-inline void* MarkedAllocator::tryAllocate(size_t bytes)
+inline void* MarkedAllocator::tryAllocateWithoutCollecting()
{
ASSERT(!m_heap->isBusy());
m_heap->m_operationInProgress = Allocation;
- void* result = tryAllocateHelper(bytes);
+ void* result = tryAllocateWithoutCollectingImpl();
m_heap->m_operationInProgress = NoOperation;
ASSERT(result || !m_currentBlock);
@@ -146,100 +185,145 @@
allocationCount = 0;
}
-void* MarkedAllocator::allocateSlowCase(size_t bytes)
+void* MarkedAllocator::allocateSlowCase()
{
+ bool crashOnFailure = true;
+ return allocateSlowCaseImpl(crashOnFailure);
+}
+
+void* MarkedAllocator::tryAllocateSlowCase()
+{
+ bool crashOnFailure = false;
+ return allocateSlowCaseImpl(crashOnFailure);
+}
+
+void* MarkedAllocator::allocateSlowCaseImpl(bool crashOnFailure)
+{
+ SuperSamplerScope superSamplerScope(false);
ASSERT(m_heap->vm()->currentThreadIsHoldingAPILock());
doTestCollectionsIfNeeded();
ASSERT(!m_markedSpace->isIterating());
- ASSERT(!m_freeList.head);
- m_heap->didAllocate(m_freeList.bytes);
+ m_heap->didAllocate(m_freeList.originalSize);
- void* result = tryAllocate(bytes);
+ void* result = tryAllocateWithoutCollecting();
if (LIKELY(result != 0))
return result;
if (m_heap->collectIfNecessaryOrDefer()) {
- result = tryAllocate(bytes);
+ result = tryAllocateWithoutCollecting();
if (result)
return result;
}
ASSERT(!m_heap->shouldCollect());
- MarkedBlock* block = allocateBlock(bytes);
- ASSERT(block);
+ MarkedBlock::Handle* block = tryAllocateBlock();
+ if (!block) {
+ if (crashOnFailure)
+ RELEASE_ASSERT_NOT_REACHED();
+ else
+ return nullptr;
+ }
addBlock(block);
- result = tryAllocate(bytes);
+ result = tryAllocateWithoutCollecting();
ASSERT(result);
return result;
}
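+
+// The slow path above, in outline: (1) sweep existing blocks looking for free
+// memory, (2) if that fails, let the GC run via collectIfNecessaryOrDefer()
+// and retry, (3) otherwise grow the heap with a fresh block and allocate from
+// it. Only a failed block allocation distinguishes allocate() (which crashes)
+// from tryAllocate() (which returns null).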
-MarkedBlock* MarkedAllocator::allocateBlock(size_t bytes)
+static size_t blockHeaderSize()
{
- size_t minBlockSize = MarkedBlock::blockSize;
- size_t minAllocationSize = WTF::roundUpToMultipleOf<MarkedBlock::atomSize>(sizeof(MarkedBlock)) + WTF::roundUpToMultipleOf<MarkedBlock::atomSize>(bytes);
- minAllocationSize = WTF::roundUpToMultipleOf(WTF::pageSize(), minAllocationSize);
- size_t blockSize = std::max(minBlockSize, minAllocationSize);
-
- size_t cellSize = m_cellSize ? m_cellSize : WTF::roundUpToMultipleOf<MarkedBlock::atomSize>(bytes);
-
- return MarkedBlock::create(*m_heap, this, blockSize, cellSize, m_attributes);
+ return WTF::roundUpToMultipleOf<MarkedBlock::atomSize>(sizeof(MarkedBlock));
}
-void MarkedAllocator::addBlock(MarkedBlock* block)
+size_t MarkedAllocator::blockSizeForBytes(size_t bytes)
+{
+ size_t minBlockSize = MarkedBlock::blockSize;
+ size_t minAllocationSize = blockHeaderSize() + WTF::roundUpToMultipleOf<MarkedBlock::atomSize>(bytes);
+ minAllocationSize = WTF::roundUpToMultipleOf(WTF::pageSize(), minAllocationSize);
+ return std::max(minBlockSize, minAllocationSize);
+}
+
+MarkedBlock::Handle* MarkedAllocator::tryAllocateBlock()
+{
+ SuperSamplerScope superSamplerScope(false);
+ return MarkedBlock::tryCreate(*m_heap, this, m_cellSize, m_attributes);
+}
+
+void MarkedAllocator::addBlock(MarkedBlock::Handle* block)
{
ASSERT(!m_currentBlock);
- ASSERT(!m_freeList.head);
+ ASSERT(!m_freeList);
m_blockList.append(block);
- m_nextBlockToSweep = block;
+ setNextBlockToSweep(block);
m_markedSpace->didAddBlock(block);
}
-void MarkedAllocator::removeBlock(MarkedBlock* block)
+void MarkedAllocator::removeBlock(MarkedBlock::Handle* block)
{
if (m_currentBlock == block) {
- m_currentBlock = m_currentBlock->next();
- m_freeList = MarkedBlock::FreeList();
+ m_currentBlock = filterNextBlock(m_currentBlock->next());
+ setFreeList(FreeList());
}
if (m_nextBlockToSweep == block)
- m_nextBlockToSweep = m_nextBlockToSweep->next();
+ setNextBlockToSweep(m_nextBlockToSweep->next());
block->willRemoveBlock();
m_blockList.remove(block);
}
+void MarkedAllocator::stopAllocating()
+{
+ if (m_heap->operationInProgress() == FullCollection)
+ m_blockList.takeFrom(m_retiredBlocks);
+
+ ASSERT(!m_lastActiveBlock);
+ if (!m_currentBlock) {
+ ASSERT(!m_freeList);
+ return;
+ }
+
+ m_currentBlock->stopAllocating(m_freeList);
+ m_lastActiveBlock = m_currentBlock;
+ m_currentBlock = 0;
+ m_freeList = FreeList();
+}
+
void MarkedAllocator::reset()
{
m_lastActiveBlock = 0;
m_currentBlock = 0;
- m_freeList = MarkedBlock::FreeList();
- if (m_heap->operationInProgress() == FullCollection)
- m_blockList.append(m_retiredBlocks);
+ setFreeList(FreeList());
- m_nextBlockToSweep = m_blockList.head();
+ setNextBlockToSweep(m_blockList.begin());
if (UNLIKELY(Options::useImmortalObjects())) {
- MarkedBlock* next;
- for (MarkedBlock*& block = m_nextBlockToSweep; block; block = next) {
- next = block->next();
+ MarkedBlock::Handle* next;
+ for (MarkedBlock::Handle*& block = m_nextBlockToSweep; block; block = next) {
+ next = filterNextBlock(block->next());
- MarkedBlock::FreeList freeList = block->sweep(MarkedBlock::SweepToFreeList);
- retire(block, freeList);
+ FreeList freeList = block->sweep(MarkedBlock::Handle::SweepToFreeList);
+ block->zap(freeList);
+ retire(block);
}
}
}
void MarkedAllocator::lastChanceToFinalize()
{
- m_blockList.append(m_retiredBlocks);
+ m_blockList.takeFrom(m_retiredBlocks);
forEachBlock(
- [&] (MarkedBlock* block) {
+ [&] (MarkedBlock::Handle* block) {
block->lastChanceToFinalize();
});
}
+void MarkedAllocator::setFreeList(const FreeList& freeList)
+{
+ m_freeList = freeList;
+}
+
} // namespace JSC
diff --git a/Source/JavaScriptCore/heap/MarkedAllocator.h b/Source/JavaScriptCore/heap/MarkedAllocator.h
index 60bdaa0..d49d1f8 100644
--- a/Source/JavaScriptCore/heap/MarkedAllocator.h
+++ b/Source/JavaScriptCore/heap/MarkedAllocator.h
@@ -27,8 +27,9 @@
#define MarkedAllocator_h
#include "AllocatorAttributes.h"
+#include "FreeList.h"
#include "MarkedBlock.h"
-#include <wtf/DoublyLinkedList.h>
+#include <wtf/SentinelLinkedList.h>
namespace JSC {
@@ -40,9 +41,10 @@
friend class LLIntOffsetsExtractor;
public:
- static ptrdiff_t offsetOfFreeListHead();
+ static ptrdiff_t offsetOfFreeList();
+ static ptrdiff_t offsetOfCellSize();
- MarkedAllocator();
+ MarkedAllocator(Heap*, MarkedSpace*, size_t cellSize, const AllocatorAttributes&);
void lastChanceToFinalize();
void reset();
void stopAllocating();
@@ -52,97 +54,99 @@
bool needsDestruction() const { return m_attributes.destruction == NeedsDestruction; }
DestructionMode destruction() const { return m_attributes.destruction; }
HeapCell::Kind cellKind() const { return m_attributes.cellKind; }
- void* allocate(size_t);
+ void* allocate();
+ void* tryAllocate();
Heap* heap() { return m_heap; }
- MarkedBlock* takeLastActiveBlock()
+ MarkedBlock::Handle* takeLastActiveBlock()
{
- MarkedBlock* block = m_lastActiveBlock;
+ MarkedBlock::Handle* block = m_lastActiveBlock;
m_lastActiveBlock = 0;
return block;
}
template<typename Functor> void forEachBlock(const Functor&);
- void addBlock(MarkedBlock*);
- void removeBlock(MarkedBlock*);
- void init(Heap*, MarkedSpace*, size_t cellSize, const AllocatorAttributes&);
+ void addBlock(MarkedBlock::Handle*);
+ void removeBlock(MarkedBlock::Handle*);
bool isPagedOut(double deadline);
+
+ static size_t blockSizeForBytes(size_t);
private:
- JS_EXPORT_PRIVATE void* allocateSlowCase(size_t);
- void* tryAllocate(size_t);
- void* tryAllocateHelper(size_t);
- void* tryPopFreeList(size_t);
- MarkedBlock* allocateBlock(size_t);
- ALWAYS_INLINE void doTestCollectionsIfNeeded();
- void retire(MarkedBlock*, MarkedBlock::FreeList&);
+ friend class MarkedBlock;
- MarkedBlock::FreeList m_freeList;
- MarkedBlock* m_currentBlock;
- MarkedBlock* m_lastActiveBlock;
- MarkedBlock* m_nextBlockToSweep;
- DoublyLinkedList<MarkedBlock> m_blockList;
- DoublyLinkedList<MarkedBlock> m_retiredBlocks;
- size_t m_cellSize;
+ JS_EXPORT_PRIVATE void* allocateSlowCase();
+ JS_EXPORT_PRIVATE void* tryAllocateSlowCase();
+ void* allocateSlowCaseImpl(bool crashOnFailure);
+ void* tryAllocateWithoutCollecting();
+ void* tryAllocateWithoutCollectingImpl();
+ MarkedBlock::Handle* tryAllocateBlock();
+ ALWAYS_INLINE void doTestCollectionsIfNeeded();
+ void retire(MarkedBlock::Handle*);
+
+ void setFreeList(const FreeList&);
+
+ MarkedBlock::Handle* filterNextBlock(MarkedBlock::Handle*);
+ void setNextBlockToSweep(MarkedBlock::Handle*);
+
+ FreeList m_freeList;
+ MarkedBlock::Handle* m_currentBlock;
+ MarkedBlock::Handle* m_lastActiveBlock;
+ MarkedBlock::Handle* m_nextBlockToSweep;
+ SentinelLinkedList<MarkedBlock::Handle, BasicRawSentinelNode<MarkedBlock::Handle>> m_blockList;
+ SentinelLinkedList<MarkedBlock::Handle, BasicRawSentinelNode<MarkedBlock::Handle>> m_retiredBlocks;
+ Lock m_lock;
+ unsigned m_cellSize;
AllocatorAttributes m_attributes;
Heap* m_heap;
MarkedSpace* m_markedSpace;
};
-inline ptrdiff_t MarkedAllocator::offsetOfFreeListHead()
+inline ptrdiff_t MarkedAllocator::offsetOfFreeList()
{
- return OBJECT_OFFSETOF(MarkedAllocator, m_freeList) + OBJECT_OFFSETOF(MarkedBlock::FreeList, head);
+ return OBJECT_OFFSETOF(MarkedAllocator, m_freeList);
}
-inline MarkedAllocator::MarkedAllocator()
- : m_currentBlock(0)
- , m_lastActiveBlock(0)
- , m_nextBlockToSweep(0)
- , m_cellSize(0)
- , m_heap(0)
- , m_markedSpace(0)
+inline ptrdiff_t MarkedAllocator::offsetOfCellSize()
{
+ return OBJECT_OFFSETOF(MarkedAllocator, m_cellSize);
}
-inline void MarkedAllocator::init(Heap* heap, MarkedSpace* markedSpace, size_t cellSize, const AllocatorAttributes& attributes)
+ALWAYS_INLINE void* MarkedAllocator::tryAllocate()
{
- m_heap = heap;
- m_markedSpace = markedSpace;
- m_cellSize = cellSize;
- m_attributes = attributes;
-}
-
-inline void* MarkedAllocator::allocate(size_t bytes)
-{
- MarkedBlock::FreeCell* head = m_freeList.head;
- if (UNLIKELY(!head)) {
- void* result = allocateSlowCase(bytes);
-#ifndef NDEBUG
- memset(result, 0xCD, bytes);
-#endif
- return result;
+ unsigned remaining = m_freeList.remaining;
+ if (remaining) {
+ unsigned cellSize = m_cellSize;
+ remaining -= cellSize;
+ m_freeList.remaining = remaining;
+ return m_freeList.payloadEnd - remaining - cellSize;
}
+ FreeCell* head = m_freeList.head;
+ if (UNLIKELY(!head))
+ return tryAllocateSlowCase();
+
m_freeList.head = head->next;
-#ifndef NDEBUG
- memset(head, 0xCD, bytes);
-#endif
return head;
}
-inline void MarkedAllocator::stopAllocating()
+ALWAYS_INLINE void* MarkedAllocator::allocate()
{
- ASSERT(!m_lastActiveBlock);
- if (!m_currentBlock) {
- ASSERT(!m_freeList.head);
- return;
+ unsigned remaining = m_freeList.remaining;
+ if (remaining) {
+ unsigned cellSize = m_cellSize;
+ remaining -= cellSize;
+ m_freeList.remaining = remaining;
+ return m_freeList.payloadEnd - remaining - cellSize;
}
- m_currentBlock->stopAllocating(m_freeList);
- m_lastActiveBlock = m_currentBlock;
- m_currentBlock = 0;
- m_freeList = MarkedBlock::FreeList();
+ FreeCell* head = m_freeList.head;
+ if (UNLIKELY(!head))
+ return allocateSlowCase();
+
+ m_freeList.head = head->next;
+ return head;
}
inline void MarkedAllocator::resumeAllocating()
@@ -157,16 +161,8 @@
template <typename Functor> inline void MarkedAllocator::forEachBlock(const Functor& functor)
{
- MarkedBlock* next;
- for (MarkedBlock* block = m_blockList.head(); block; block = next) {
- next = block->next();
- functor(block);
- }
-
- for (MarkedBlock* block = m_retiredBlocks.head(); block; block = next) {
- next = block->next();
- functor(block);
- }
+ m_blockList.forEach(functor);
+ m_retiredBlocks.forEach(functor);
}
} // namespace JSC
diff --git a/Source/JavaScriptCore/heap/MarkedBlock.cpp b/Source/JavaScriptCore/heap/MarkedBlock.cpp
index ae48499..874cec3 100644
--- a/Source/JavaScriptCore/heap/MarkedBlock.cpp
+++ b/Source/JavaScriptCore/heap/MarkedBlock.cpp
@@ -29,74 +29,107 @@
#include "JSCell.h"
#include "JSDestructibleObject.h"
#include "JSCInlines.h"
+#include "SuperSampler.h"
namespace JSC {
static const bool computeBalance = false;
static size_t balance;
-MarkedBlock* MarkedBlock::create(Heap& heap, MarkedAllocator* allocator, size_t capacity, size_t cellSize, const AllocatorAttributes& attributes)
+MarkedBlock::Handle* MarkedBlock::tryCreate(Heap& heap, MarkedAllocator* allocator, size_t cellSize, const AllocatorAttributes& attributes)
{
if (computeBalance) {
balance++;
if (!(balance % 10))
dataLog("MarkedBlock Balance: ", balance, "\n");
}
- MarkedBlock* block = new (NotNull, fastAlignedMalloc(blockSize, capacity)) MarkedBlock(allocator, capacity, cellSize, attributes);
- heap.didAllocateBlock(capacity);
- return block;
+ void* blockSpace = tryFastAlignedMalloc(blockSize, blockSize);
+ if (!blockSpace)
+ return nullptr;
+ if (scribbleFreeCells())
+ scribble(blockSpace, blockSize);
+ return new Handle(heap, allocator, cellSize, attributes, blockSpace);
}
-void MarkedBlock::destroy(Heap& heap, MarkedBlock* block)
+MarkedBlock::Handle::Handle(Heap& heap, MarkedAllocator* allocator, size_t cellSize, const AllocatorAttributes& attributes, void* blockSpace)
+ : m_atomsPerCell((cellSize + atomSize - 1) / atomSize)
+ , m_endAtom(atomsPerBlock - m_atomsPerCell + 1)
+ , m_attributes(attributes)
+ , m_state(New) // All cells start out unmarked.
+ , m_allocator(allocator)
+ , m_weakSet(allocator->heap()->vm(), CellContainer())
{
+ m_block = new (NotNull, blockSpace) MarkedBlock(*heap.vm(), *this);
+
+ m_weakSet.setContainer(*m_block);
+
+ heap.didAllocateBlock(blockSize);
+ HEAP_LOG_BLOCK_STATE_TRANSITION(this);
+ ASSERT(allocator);
+ if (m_attributes.cellKind != HeapCell::JSCell)
+ RELEASE_ASSERT(m_attributes.destruction == DoesNotNeedDestruction);
+}
+
+MarkedBlock::Handle::~Handle()
+{
+ Heap& heap = *this->heap();
if (computeBalance) {
balance--;
if (!(balance % 10))
dataLog("MarkedBlock Balance: ", balance, "\n");
}
- size_t capacity = block->capacity();
- block->~MarkedBlock();
- fastAlignedFree(block);
- heap.didFreeBlock(capacity);
+ m_block->~MarkedBlock();
+ fastAlignedFree(m_block);
+ heap.didFreeBlock(blockSize);
}
-MarkedBlock::MarkedBlock(MarkedAllocator* allocator, size_t capacity, size_t cellSize, const AllocatorAttributes& attributes)
- : DoublyLinkedListNode<MarkedBlock>()
- , m_atomsPerCell((cellSize + atomSize - 1) / atomSize)
- , m_endAtom((allocator->cellSize() ? atomsPerBlock - m_atomsPerCell : firstAtom()) + 1)
- , m_capacity(capacity)
- , m_attributes(attributes)
- , m_allocator(allocator)
- , m_state(New) // All cells start out unmarked.
- , m_weakSet(allocator->heap()->vm(), *this)
+MarkedBlock::MarkedBlock(VM& vm, Handle& handle)
+ : m_needsDestruction(handle.needsDestruction())
+ , m_handle(handle)
+ , m_vm(&vm)
+ , m_version(vm.heap.objectSpace().version())
{
- ASSERT(allocator);
- HEAP_LOG_BLOCK_STATE_TRANSITION(this);
- if (m_attributes.cellKind != HeapCell::JSCell)
- RELEASE_ASSERT(m_attributes.destruction == DoesNotNeedDestruction);
-}
-
-inline void MarkedBlock::callDestructor(HeapCell* cell)
-{
- // A previous eager sweep may already have run cell's destructor.
- if (cell->isZapped())
- return;
+ unsigned cellsPerBlock = MarkedSpace::blockPayload / handle.cellSize();
+ double markCountBias = -(Options::minMarkedBlockUtilization() * cellsPerBlock);
- JSCell* jsCell = static_cast<JSCell*>(cell);
-
- ASSERT(jsCell->structureID());
- if (jsCell->inlineTypeFlags() & StructureIsImmortal)
- jsCell->structure(*vm())->classInfo()->methodTable.destroy(jsCell);
- else
- jsCast<JSDestructibleObject*>(jsCell)->classInfo()->methodTable.destroy(jsCell);
- cell->zap();
+ // The mark count bias should be comfortably within this range.
+ RELEASE_ASSERT(markCountBias > static_cast<double>(std::numeric_limits<int16_t>::min()));
+ RELEASE_ASSERT(markCountBias < 0);
+
+ m_markCountBias = static_cast<int16_t>(markCountBias);
+
+ m_biasedMarkCount = m_markCountBias; // This means we haven't marked anything yet.
}
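+
+// An illustrative calculation (numbers assumed, not normative): with a block
+// payload of roughly 16KB and 32-byte cells, cellsPerBlock is about 512; at a
+// minMarkedBlockUtilization of 0.9 the bias is about -460, comfortably inside
+// the int16_t range that the RELEASE_ASSERTs above demand.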
-template<MarkedBlock::BlockState blockState, MarkedBlock::SweepMode sweepMode, bool callDestructors>
-MarkedBlock::FreeList MarkedBlock::specializedSweep()
+template<MarkedBlock::BlockState blockState, MarkedBlock::Handle::SweepMode sweepMode, DestructionMode destructionMode, MarkedBlock::Handle::ScribbleMode scribbleMode, MarkedBlock::Handle::NewlyAllocatedMode newlyAllocatedMode>
+FreeList MarkedBlock::Handle::specializedSweep()
{
- ASSERT(blockState != Allocated && blockState != FreeListed);
- ASSERT(!(!callDestructors && sweepMode == SweepOnly));
+ SuperSamplerScope superSamplerScope(false);
+ ASSERT(blockState == New || blockState == Marked);
+ ASSERT(!(destructionMode == DoesNotNeedDestruction && sweepMode == SweepOnly));
+
+ assertFlipped();
+ MarkedBlock& block = this->block();
+
+ bool isNewBlock = blockState == New;
+ bool isEmptyBlock = !block.hasAnyMarked()
+ && newlyAllocatedMode == DoesNotHaveNewlyAllocated
+ && destructionMode == DoesNotNeedDestruction;
+ if (Options::useBumpAllocator() && (isNewBlock || isEmptyBlock)) {
+ ASSERT(block.m_marks.isEmpty());
+
+ char* startOfLastCell = static_cast<char*>(cellAlign(block.atoms() + m_endAtom - 1));
+ char* payloadEnd = startOfLastCell + cellSize();
+ RELEASE_ASSERT(payloadEnd - MarkedBlock::blockSize <= bitwise_cast<char*>(&block));
+ char* payloadBegin = bitwise_cast<char*>(block.atoms() + firstAtom());
+ if (scribbleMode == Scribble)
+ scribble(payloadBegin, payloadEnd - payloadBegin);
+ m_state = ((sweepMode == SweepToFreeList) ? FreeListed : Marked);
+ FreeList result = FreeList::bump(payloadEnd, payloadEnd - payloadBegin);
+ if (false)
+ dataLog("Quickly swept block ", RawPointer(this), " with cell size ", cellSize(), " and attributes ", m_attributes, ": ", result, "\n");
+ return result;
+ }
// This produces a free list that is ordered in reverse through the block.
// This is fine, since the allocation code makes no assumptions about the
@@ -104,16 +137,20 @@
FreeCell* head = 0;
size_t count = 0;
for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
- if (blockState == Marked && (m_marks.get(i) || (m_newlyAllocated && m_newlyAllocated->get(i))))
+ if (blockState == Marked
+ && (block.m_marks.get(i)
+ || (newlyAllocatedMode == HasNewlyAllocated && m_newlyAllocated->get(i))))
continue;
- HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&atoms()[i]);
+ HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&block.atoms()[i]);
- if (callDestructors && blockState != New)
- callDestructor(cell);
+ if (destructionMode == NeedsDestruction && blockState != New)
+ static_cast<JSCell*>(cell)->callDestructor(*vm());
if (sweepMode == SweepToFreeList) {
FreeCell* freeCell = reinterpret_cast<FreeCell*>(cell);
+ if (scribbleMode == Scribble)
+ scribble(freeCell, cellSize());
freeCell->next = head;
head = freeCell;
++count;
@@ -122,15 +159,20 @@
// We only want to discard the newlyAllocated bits if we're creating a FreeList,
// otherwise we would lose information on what's currently alive.
- if (sweepMode == SweepToFreeList && m_newlyAllocated)
+ if (sweepMode == SweepToFreeList && newlyAllocatedMode == HasNewlyAllocated)
m_newlyAllocated = nullptr;
- m_state = ((sweepMode == SweepToFreeList) ? FreeListed : Marked);
- return FreeList(head, count * cellSize());
+ FreeList result = FreeList::list(head, count * cellSize());
+ m_state = (sweepMode == SweepToFreeList ? FreeListed : Marked);
+ if (false)
+ dataLog("Slowly swept block ", RawPointer(&block), " with cell size ", cellSize(), " and attributes ", m_attributes, ": ", result, "\n");
+ return result;
}
-MarkedBlock::FreeList MarkedBlock::sweep(SweepMode sweepMode)
+FreeList MarkedBlock::Handle::sweep(SweepMode sweepMode)
{
+ flipIfNecessary();
+
HEAP_LOG_BLOCK_STATE_TRANSITION(this);
m_weakSet.sweep();
@@ -139,68 +181,95 @@
return FreeList();
if (m_attributes.destruction == NeedsDestruction)
- return sweepHelper<true>(sweepMode);
- return sweepHelper<false>(sweepMode);
+ return sweepHelperSelectScribbleMode<NeedsDestruction>(sweepMode);
+ return sweepHelperSelectScribbleMode<DoesNotNeedDestruction>(sweepMode);
}
-template<bool callDestructors>
-MarkedBlock::FreeList MarkedBlock::sweepHelper(SweepMode sweepMode)
+template<DestructionMode destructionMode>
+FreeList MarkedBlock::Handle::sweepHelperSelectScribbleMode(SweepMode sweepMode)
+{
+ if (scribbleFreeCells())
+ return sweepHelperSelectStateAndSweepMode<destructionMode, Scribble>(sweepMode);
+ return sweepHelperSelectStateAndSweepMode<destructionMode, DontScribble>(sweepMode);
+}
+
+template<DestructionMode destructionMode, MarkedBlock::Handle::ScribbleMode scribbleMode>
+FreeList MarkedBlock::Handle::sweepHelperSelectStateAndSweepMode(SweepMode sweepMode)
{
switch (m_state) {
case New:
ASSERT(sweepMode == SweepToFreeList);
- return specializedSweep<New, SweepToFreeList, callDestructors>();
+ return specializedSweep<New, SweepToFreeList, destructionMode, scribbleMode, DoesNotHaveNewlyAllocated>();
case FreeListed:
// Happens when a block transitions to fully allocated.
ASSERT(sweepMode == SweepToFreeList);
return FreeList();
- case Retired:
case Allocated:
RELEASE_ASSERT_NOT_REACHED();
return FreeList();
case Marked:
- return sweepMode == SweepToFreeList
- ? specializedSweep<Marked, SweepToFreeList, callDestructors>()
- : specializedSweep<Marked, SweepOnly, callDestructors>();
+ if (m_newlyAllocated) {
+ return sweepMode == SweepToFreeList
+ ? specializedSweep<Marked, SweepToFreeList, destructionMode, scribbleMode, HasNewlyAllocated>()
+ : specializedSweep<Marked, SweepOnly, destructionMode, scribbleMode, HasNewlyAllocated>();
+ } else {
+ return sweepMode == SweepToFreeList
+ ? specializedSweep<Marked, SweepToFreeList, destructionMode, scribbleMode, DoesNotHaveNewlyAllocated>()
+ : specializedSweep<Marked, SweepOnly, destructionMode, scribbleMode, DoesNotHaveNewlyAllocated>();
+ }
}
RELEASE_ASSERT_NOT_REACHED();
return FreeList();
}
+void MarkedBlock::Handle::unsweepWithNoNewlyAllocated()
+{
+ flipIfNecessary();
+
+ HEAP_LOG_BLOCK_STATE_TRANSITION(this);
+
+ RELEASE_ASSERT(m_state == FreeListed);
+ m_state = Marked;
+}
+
class SetNewlyAllocatedFunctor : public MarkedBlock::VoidFunctor {
public:
- SetNewlyAllocatedFunctor(MarkedBlock* block)
+ SetNewlyAllocatedFunctor(MarkedBlock::Handle* block)
: m_block(block)
{
}
IterationStatus operator()(HeapCell* cell, HeapCell::Kind) const
{
- ASSERT(MarkedBlock::blockFor(cell) == m_block);
+ ASSERT(MarkedBlock::blockFor(cell) == &m_block->block());
m_block->setNewlyAllocated(cell);
return IterationStatus::Continue;
}
private:
- MarkedBlock* m_block;
+ MarkedBlock::Handle* m_block;
};
-void MarkedBlock::stopAllocating(const FreeList& freeList)
+void MarkedBlock::Handle::stopAllocating(const FreeList& freeList)
{
+ flipIfNecessary();
HEAP_LOG_BLOCK_STATE_TRANSITION(this);
- FreeCell* head = freeList.head;
if (m_state == Marked) {
- // If the block is in the Marked state then we know that:
- // 1) It was not used for allocation during the previous allocation cycle.
- // 2) It may have dead objects, and we only know them to be dead by the
- // fact that their mark bits are unset.
+ // If the block is in the Marked state then we know that one of these
+ // conditions holds:
+ //
+ // - It was not used for allocation during the previous allocation cycle.
+ // It may have dead objects, and we only know them to be dead by the
+ // fact that their mark bits are unset.
+ //
+ // - Someone had already done stopAllocating(), for example because of
+    //   heap iteration, and they had already put the block back into the
+    //   Marked state.
// Hence if the block is Marked we need to leave it Marked.
-
- ASSERT(!head);
+ ASSERT(freeList.allocationWillFail());
return;
}
-
+
ASSERT(m_state == FreeListed);
// Roll back to a coherent state for Heap introspection. Cells newly
@@ -213,57 +282,29 @@
SetNewlyAllocatedFunctor functor(this);
forEachCell(functor);
- FreeCell* next;
- for (FreeCell* current = head; current; current = next) {
- next = current->next;
- if (m_attributes.destruction == NeedsDestruction)
- reinterpret_cast<HeapCell*>(current)->zap();
- clearNewlyAllocated(current);
- }
+ forEachFreeCell(
+ freeList,
+ [&] (HeapCell* cell) {
+ if (m_attributes.destruction == NeedsDestruction)
+ cell->zap();
+ clearNewlyAllocated(cell);
+ });
m_state = Marked;
}
-void MarkedBlock::clearMarks()
+void MarkedBlock::Handle::lastChanceToFinalize()
{
- if (heap()->operationInProgress() == JSC::EdenCollection)
- this->clearMarksWithCollectionType<EdenCollection>();
- else
- this->clearMarksWithCollectionType<FullCollection>();
-}
-
-template <HeapOperation collectionType>
-void MarkedBlock::clearMarksWithCollectionType()
-{
- ASSERT(collectionType == FullCollection || collectionType == EdenCollection);
- HEAP_LOG_BLOCK_STATE_TRANSITION(this);
-
- ASSERT(m_state != New && m_state != FreeListed);
- if (collectionType == FullCollection) {
- m_marks.clearAll();
- // This will become true at the end of the mark phase. We set it now to
- // avoid an extra pass to do so later.
- m_state = Marked;
- return;
- }
-
- ASSERT(collectionType == EdenCollection);
- // If a block was retired then there's no way an EdenCollection can un-retire it.
- if (m_state != Retired)
- m_state = Marked;
-}
-
-void MarkedBlock::lastChanceToFinalize()
-{
+ m_block->clearMarks();
m_weakSet.lastChanceToFinalize();
clearNewlyAllocated();
- clearMarksWithCollectionType<FullCollection>();
sweep();
}
-MarkedBlock::FreeList MarkedBlock::resumeAllocating()
+FreeList MarkedBlock::Handle::resumeAllocating()
{
+ flipIfNecessary();
HEAP_LOG_BLOCK_STATE_TRANSITION(this);
ASSERT(m_state == Marked);
@@ -274,32 +315,148 @@
return FreeList();
}
- // Re-create our free list from before stopping allocation.
+ // Re-create our free list from before stopping allocation. Note that this may return an empty
+ // freelist, in which case the block will still be Marked!
return sweep(SweepToFreeList);
}
-void MarkedBlock::didRetireBlock(const FreeList& freeList)
+void MarkedBlock::Handle::zap(const FreeList& freeList)
{
- HEAP_LOG_BLOCK_STATE_TRANSITION(this);
- FreeCell* head = freeList.head;
+ forEachFreeCell(
+ freeList,
+ [&] (HeapCell* cell) {
+ if (m_attributes.destruction == NeedsDestruction)
+ cell->zap();
+ });
+}
- // Currently we don't notify the Heap that we're giving up on this block.
- // The Heap might be able to make a better decision about how many bytes should
- // be allocated before the next collection if it knew about this retired block.
- // On the other hand we'll waste at most 10% of our Heap space between FullCollections
- // and only under heavy fragmentation.
-
- // We need to zap the free list when retiring a block so that we don't try to destroy
- // previously destroyed objects when we re-sweep the block in the future.
- FreeCell* next;
- for (FreeCell* current = head; current; current = next) {
- next = current->next;
- if (m_attributes.destruction == NeedsDestruction)
- reinterpret_cast<HeapCell*>(current)->zap();
+template<typename Func>
+void MarkedBlock::Handle::forEachFreeCell(const FreeList& freeList, const Func& func)
+{
+ if (freeList.remaining) {
+ for (unsigned remaining = freeList.remaining; remaining; remaining -= cellSize())
+ func(bitwise_cast<HeapCell*>(freeList.payloadEnd - remaining));
+ } else {
+ for (FreeCell* current = freeList.head; current;) {
+ FreeCell* next = current->next;
+ func(bitwise_cast<HeapCell*>(current));
+ current = next;
+ }
}
+}
+void MarkedBlock::flipIfNecessary()
+{
+ flipIfNecessary(vm()->heap.objectSpace().version());
+}
+
+void MarkedBlock::Handle::flipIfNecessary()
+{
+ block().flipIfNecessary();
+}
+
+void MarkedBlock::flipIfNecessarySlow()
+{
+ ASSERT(m_version != vm()->heap.objectSpace().version());
+ clearMarks();
+}
+
+void MarkedBlock::flipIfNecessaryConcurrentlySlow()
+{
+ LockHolder locker(m_lock);
+ if (m_version != vm()->heap.objectSpace().version())
+ clearMarks();
+}
+
+void MarkedBlock::clearMarks()
+{
+ m_marks.clearAll();
+ clearHasAnyMarked();
+ // This will become true at the end of the mark phase. We set it now to
+ // avoid an extra pass to do so later.
+ handle().m_state = Marked;
+ WTF::storeStoreFence();
+ m_version = vm()->heap.objectSpace().version();
+}
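+
+// The versioning scheme above, in short: the heap publishes a version number
+// (bumped elsewhere, presumably once per marking cycle; that code is outside
+// this hunk), and a block whose m_version lags behind it has stale mark bits.
+// Flipping clears the marks and stamps the block as current, so blocks that
+// are never touched after a collection never pay to have their marks cleared.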
+
+#if !ASSERT_DISABLED
+void MarkedBlock::assertFlipped()
+{
+ ASSERT(m_version == vm()->heap.objectSpace().version());
+}
+#endif // !ASSERT_DISABLED
+
+bool MarkedBlock::needsFlip()
+{
+ return vm()->heap.objectSpace().version() != m_version;
+}
+
+bool MarkedBlock::Handle::needsFlip()
+{
+ return m_block->needsFlip();
+}
+
+void MarkedBlock::Handle::willRemoveBlock()
+{
+ flipIfNecessary();
+}
+
+void MarkedBlock::Handle::didConsumeFreeList()
+{
+ flipIfNecessary();
+ HEAP_LOG_BLOCK_STATE_TRANSITION(this);
+
ASSERT(m_state == FreeListed);
- m_state = Retired;
+
+ m_state = Allocated;
+}
+
+size_t MarkedBlock::markCount()
+{
+ flipIfNecessary();
+ return m_marks.count();
+}
+
+bool MarkedBlock::Handle::isEmpty()
+{
+ flipIfNecessary();
+ return m_state == Marked && !block().hasAnyMarked() && m_weakSet.isEmpty() && (!m_newlyAllocated || m_newlyAllocated->isEmpty());
+}
+
+void MarkedBlock::clearHasAnyMarked()
+{
+ m_biasedMarkCount = m_markCountBias;
+}
+
+void MarkedBlock::noteMarkedSlow()
+{
+ handle().m_allocator->retire(&handle());
}
} // namespace JSC
+
+namespace WTF {
+
+using namespace JSC;
+
+void printInternal(PrintStream& out, MarkedBlock::BlockState blockState)
+{
+ switch (blockState) {
+ case MarkedBlock::New:
+ out.print("New");
+ return;
+ case MarkedBlock::FreeListed:
+ out.print("FreeListed");
+ return;
+ case MarkedBlock::Allocated:
+ out.print("Allocated");
+ return;
+ case MarkedBlock::Marked:
+ out.print("Marked");
+ return;
+ }
+ RELEASE_ASSERT_NOT_REACHED();
+}
+
+} // namespace WTF
+
diff --git a/Source/JavaScriptCore/heap/MarkedBlock.h b/Source/JavaScriptCore/heap/MarkedBlock.h
index 7ef1014..68d767e 100644
--- a/Source/JavaScriptCore/heap/MarkedBlock.h
+++ b/Source/JavaScriptCore/heap/MarkedBlock.h
@@ -24,6 +24,7 @@
#include "AllocatorAttributes.h"
#include "DestructionMode.h"
+#include "FreeList.h"
#include "HeapCell.h"
#include "HeapOperation.h"
#include "IterationStatus.h"
@@ -34,484 +35,668 @@
#include <wtf/HashFunctions.h>
#include <wtf/StdLibExtras.h>
+namespace JSC {
+
+class Heap;
+class JSCell;
+class MarkedAllocator;
+
+typedef uintptr_t Bits;
+
// Set to log state transitions of blocks.
#define HEAP_LOG_BLOCK_STATE_TRANSITIONS 0
#if HEAP_LOG_BLOCK_STATE_TRANSITIONS
-#define HEAP_LOG_BLOCK_STATE_TRANSITION(block) do { \
- dataLogF( \
- "%s:%d %s: block %s = %p, %d\n", \
- __FILE__, __LINE__, __FUNCTION__, \
- #block, (block), (block)->m_state); \
+#define HEAP_LOG_BLOCK_STATE_TRANSITION(handle) do { \
+ dataLogF( \
+ "%s:%d %s: block %s = %p, %d\n", \
+ __FILE__, __LINE__, __FUNCTION__, \
+ #handle, &(handle)->block(), (handle)->m_state); \
} while (false)
#else
-#define HEAP_LOG_BLOCK_STATE_TRANSITION(block) ((void)0)
+#define HEAP_LOG_BLOCK_STATE_TRANSITION(handle) ((void)0)
#endif
-namespace JSC {
-
- class Heap;
- class JSCell;
- class MarkedAllocator;
+// A marked block is a page-aligned container for heap-allocated objects.
+// Objects are allocated within cells of the marked block. For a given
+// marked block, all cells have the same size. Objects smaller than the
+// cell size may be allocated in the marked block, in which case the
+// allocation suffers from internal fragmentation: wasted space whose
+// size is equal to the difference between the cell size and the object
+// size.
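+//
+// For example (sizes purely illustrative): a block with 48-byte cells that
+// stores 40-byte objects wastes 8 bytes per cell to internal fragmentation.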
- typedef uintptr_t Bits;
+class MarkedBlock {
+ WTF_MAKE_NONCOPYABLE(MarkedBlock);
+ friend class LLIntOffsetsExtractor;
+ friend struct VerifyMarked;
- // A marked block is a page-aligned container for heap-allocated objects.
- // Objects are allocated within cells of the marked block. For a given
- // marked block, all cells have the same size. Objects smaller than the
- // cell size may be allocated in the marked block, in which case the
- // allocation suffers from internal fragmentation: wasted space whose
- // size is equal to the difference between the cell size and the object
- // size.
+public:
+ class Handle;
+private:
+ friend class Handle;
+public:
+ enum BlockState : uint8_t { New, FreeListed, Allocated, Marked };
+
+ static const size_t atomSize = 16; // bytes
+ static const size_t blockSize = 16 * KB;
+ static const size_t blockMask = ~(blockSize - 1); // blockSize must be a power of two.
- class MarkedBlock : public DoublyLinkedListNode<MarkedBlock> {
- friend class WTF::DoublyLinkedListNode<MarkedBlock>;
- friend class LLIntOffsetsExtractor;
- friend struct VerifyMarkedOrRetired;
+ static const size_t atomsPerBlock = blockSize / atomSize;
+
+ static_assert(!(MarkedBlock::atomSize & (MarkedBlock::atomSize - 1)), "MarkedBlock::atomSize must be a power of two.");
+ static_assert(!(MarkedBlock::blockSize & (MarkedBlock::blockSize - 1)), "MarkedBlock::blockSize must be a power of two.");
+
+ struct VoidFunctor {
+ typedef void ReturnType;
+ void returnValue() { }
+ };
+
+ class CountFunctor {
public:
- static const size_t atomSize = 16; // bytes
- static const size_t blockSize = 16 * KB;
- static const size_t blockMask = ~(blockSize - 1); // blockSize must be a power of two.
+ typedef size_t ReturnType;
- static const size_t atomsPerBlock = blockSize / atomSize;
+ CountFunctor() : m_count(0) { }
+ void count(size_t count) const { m_count += count; }
+ ReturnType returnValue() const { return m_count; }
- static_assert(!(MarkedBlock::atomSize & (MarkedBlock::atomSize - 1)), "MarkedBlock::atomSize must be a power of two.");
- static_assert(!(MarkedBlock::blockSize & (MarkedBlock::blockSize - 1)), "MarkedBlock::blockSize must be a power of two.");
-
- struct FreeCell {
- FreeCell* next;
- };
+ private:
+ // FIXME: This is mutable because we're using a functor rather than C++ lambdas.
+ // https://bugs.webkit.org/show_bug.cgi?id=159644
+ mutable ReturnType m_count;
+ };
- struct FreeList {
- FreeCell* head;
- size_t bytes;
+ class Handle : public BasicRawSentinelNode<Handle> {
+ WTF_MAKE_NONCOPYABLE(Handle);
+ WTF_MAKE_FAST_ALLOCATED;
+ friend class DoublyLinkedListNode<Handle>;
+ friend class LLIntOffsetsExtractor;
+ friend class MarkedBlock;
+ friend struct VerifyMarked;
+ public:
+
+ ~Handle();
+
+ MarkedBlock& block();
+
+ void* cellAlign(void*);
+
+ bool isEmpty();
- FreeList();
- FreeList(FreeCell*, size_t);
- };
-
- struct VoidFunctor {
- typedef void ReturnType;
- void returnValue() { }
- };
-
- class CountFunctor {
- public:
- typedef size_t ReturnType;
-
- CountFunctor() : m_count(0) { }
- void count(size_t count) const { m_count += count; }
- ReturnType returnValue() const { return m_count; }
-
- private:
- // FIXME: This is mutable because we're using a functor rather than C++ lambdas.
- // https://bugs.webkit.org/show_bug.cgi?id=159644
- mutable ReturnType m_count;
- };
-
- static MarkedBlock* create(Heap&, MarkedAllocator*, size_t capacity, size_t cellSize, const AllocatorAttributes&);
- static void destroy(Heap&, MarkedBlock*);
-
- static bool isAtomAligned(const void*);
- static MarkedBlock* blockFor(const void*);
- static size_t firstAtom();
-
void lastChanceToFinalize();
MarkedAllocator* allocator() const;
Heap* heap() const;
VM* vm() const;
WeakSet& weakSet();
-
+
enum SweepMode { SweepOnly, SweepToFreeList };
FreeList sweep(SweepMode = SweepOnly);
-
+
+ void unsweepWithNoNewlyAllocated();
+
+ void zap(const FreeList&);
+
void shrink();
-
- void visitWeakSet(HeapRootVisitor&);
+
+ unsigned visitWeakSet(HeapRootVisitor&);
void reapWeakSet();
-
+
// While allocating from a free list, MarkedBlock temporarily has bogus
// cell liveness data. To restore accurate cell liveness data, call one
// of these functions:
void didConsumeFreeList(); // Call this once you've allocated all the items in the free list.
void stopAllocating(const FreeList&);
FreeList resumeAllocating(); // Call this if you canonicalized a block for some non-collection related purpose.
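+
+        // A sketch of the intended call order (an illustration distilled from
+        // this file, not new API):
+        //
+        //     FreeList list = handle->sweep(MarkedBlock::Handle::SweepToFreeList);
+        //     // ... allocate from list ...
+        //     handle->stopAllocating(list);      // pause, e.g. for heap iteration
+        //     list = handle->resumeAllocating(); // rebuild the free list
+        //     // ... allocate from list ...
+        //     handle->didConsumeFreeList();      // once the list is exhausted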
-
+
// Returns true if the "newly allocated" bitmap was non-null
// and was successfully cleared and false otherwise.
bool clearNewlyAllocated();
- void clearMarks();
- template <HeapOperation collectionType>
- void clearMarksWithCollectionType();
-
- size_t markCount();
- bool isEmpty();
-
+
+ void flipForEdenCollection();
+
size_t cellSize();
const AllocatorAttributes& attributes() const;
DestructionMode destruction() const;
bool needsDestruction() const;
HeapCell::Kind cellKind() const;
-
+
+ size_t markCount();
size_t size();
- size_t capacity();
-
- bool isMarked(const void*);
- bool testAndSetMarked(const void*);
+
bool isLive(const HeapCell*);
bool isLiveCell(const void*);
- bool isAtom(const void*);
bool isMarkedOrNewlyAllocated(const HeapCell*);
- void setMarked(const void*);
- void clearMarked(const void*);
-
+
bool isNewlyAllocated(const void*);
void setNewlyAllocated(const void*);
void clearNewlyAllocated(const void*);
-
+
+ bool hasAnyNewlyAllocated() const { return !!m_newlyAllocated; }
+
bool isAllocated() const;
- bool isMarkedOrRetired() const;
+ bool isMarked() const;
+ bool isFreeListed() const;
bool needsSweeping() const;
- void didRetireBlock(const FreeList&);
void willRemoveBlock();
template <typename Functor> IterationStatus forEachCell(const Functor&);
template <typename Functor> IterationStatus forEachLiveCell(const Functor&);
template <typename Functor> IterationStatus forEachDeadCell(const Functor&);
-
- private:
- static const size_t atomAlignmentMask = atomSize - 1;
-
- // During allocation, we look for available space in free lists in blocks.
- // If a block's utilization is sufficiently high (i.e. it's almost full),
- // we want to remove that block as a candidate for allocating to reduce
- // the likelihood of allocation having to take a slow path. When the
- // block is in this state, we say that it is "Retired".
- //
- // A full GC can take a Retired blocks out of retirement. An eden GC
- // will simply ignore Retired blocks (i.e. they will not be swept even
- // if they no longer have live objects).
-
- enum BlockState { New, FreeListed, Allocated, Marked, Retired };
- template<bool callDestructors> FreeList sweepHelper(SweepMode = SweepOnly);
-
- typedef char Atom[atomSize];
-
- MarkedBlock(MarkedAllocator*, size_t capacity, size_t cellSize, const AllocatorAttributes&);
- Atom* atoms();
- size_t atomNumber(const void*);
- void callDestructor(HeapCell*);
- template<BlockState, SweepMode, bool callDestructors> FreeList specializedSweep();
+
+ bool needsFlip();
+
+ void flipIfNecessaryConcurrently(uint64_t heapVersion);
+ void flipIfNecessary(uint64_t heapVersion);
+ void flipIfNecessary();
+
+ void assertFlipped();
+
+ bool isOnBlocksToSweep() const { return m_isOnBlocksToSweep; }
+ void setIsOnBlocksToSweep(bool value) { m_isOnBlocksToSweep = value; }
- MarkedBlock* m_prev;
- MarkedBlock* m_next;
-
+ BlockState state() const { return m_state; }
+
+ private:
+ Handle(Heap&, MarkedAllocator*, size_t cellSize, const AllocatorAttributes&, void*);
+
+ template<DestructionMode>
+ FreeList sweepHelperSelectScribbleMode(SweepMode = SweepOnly);
+
+ enum ScribbleMode { DontScribble, Scribble };
+
+ template<DestructionMode, ScribbleMode>
+ FreeList sweepHelperSelectStateAndSweepMode(SweepMode = SweepOnly);
+
+ enum NewlyAllocatedMode { HasNewlyAllocated, DoesNotHaveNewlyAllocated };
+
+ template<BlockState, SweepMode, DestructionMode, ScribbleMode, NewlyAllocatedMode>
+ FreeList specializedSweep();
+
+ template<typename Func>
+ void forEachFreeCell(const FreeList&, const Func&);
+
+ MarkedBlock::Handle* m_prev;
+ MarkedBlock::Handle* m_next;
+
size_t m_atomsPerCell;
size_t m_endAtom; // This is a fuzzy end. Always test for < m_endAtom.
- WTF::Bitmap<atomsPerBlock, WTF::BitmapAtomic, uint8_t> m_marks;
+
std::unique_ptr<WTF::Bitmap<atomsPerBlock>> m_newlyAllocated;
-
- size_t m_capacity;
+
AllocatorAttributes m_attributes;
- MarkedAllocator* m_allocator;
BlockState m_state;
+ bool m_isOnBlocksToSweep { false };
+
+ MarkedAllocator* m_allocator;
WeakSet m_weakSet;
+
+ MarkedBlock* m_block;
};
+
+ static MarkedBlock::Handle* tryCreate(Heap&, MarkedAllocator*, size_t cellSize, const AllocatorAttributes&);
+
+ Handle& handle();
+
+ VM* vm() const;
- inline MarkedBlock::FreeList::FreeList()
- : head(0)
- , bytes(0)
- {
+ static bool isAtomAligned(const void*);
+ static MarkedBlock* blockFor(const void*);
+ static size_t firstAtom();
+ size_t atomNumber(const void*);
+
+ size_t markCount();
+
+ bool isMarked(const void*);
+ bool testAndSetMarked(const void*);
+
+ bool isMarkedOrNewlyAllocated(const HeapCell*);
+
+ bool isAtom(const void*);
+ void setMarked(const void*);
+ void clearMarked(const void*);
+
+ size_t cellSize();
+ const AllocatorAttributes& attributes() const;
+
+ bool hasAnyMarked() const;
+ void noteMarked();
+
+ WeakSet& weakSet();
+
+ bool needsFlip();
+
+ void flipIfNecessaryConcurrently(uint64_t heapVersion);
+ void flipIfNecessary(uint64_t heapVersion);
+ void flipIfNecessary();
+
+ void assertFlipped();
+
+ bool needsDestruction() const { return m_needsDestruction; }
+
+private:
+ static const size_t atomAlignmentMask = atomSize - 1;
+
+ typedef char Atom[atomSize];
+
+ MarkedBlock(VM&, Handle&);
+ Atom* atoms();
+
+ void flipIfNecessaryConcurrentlySlow();
+ void flipIfNecessarySlow();
+ void clearMarks();
+ void clearHasAnyMarked();
+
+ void noteMarkedSlow();
+
+ WTF::Bitmap<atomsPerBlock, WTF::BitmapAtomic, uint8_t> m_marks;
+
+ bool m_needsDestruction;
+ Lock m_lock;
+
+ // The actual mark count can be computed by doing: m_biasedMarkCount - m_markCountBias. Note
+ // that this count is racy. It will accurately detect whether or not exactly zero things were
+    // marked, but if N things got marked, then this may report anything in the range [1, N] (or,
+    // before unbiasing, [1 + m_markCountBias, N + m_markCountBias]).
+ int16_t m_biasedMarkCount;
+
+ // We bias the mark count so that if m_biasedMarkCount >= 0 then the block should be retired.
+ // We go to all this trouble to make marking a bit faster: this way, marking knows when to
+ // retire a block using a js/jns on m_biasedMarkCount.
+ //
+ // For example, if a block has room for 100 objects and retirement happens whenever 90% are
+ // live, then m_markCountBias will be -90. This way, when marking begins, this will cause us to
+ // set m_biasedMarkCount to -90 as well, since:
+ //
+ // m_biasedMarkCount = actualMarkCount + m_markCountBias.
+ //
+ // Marking an object will increment m_biasedMarkCount. Once 90 objects get marked, we will have
+ // m_biasedMarkCount = 0, which will trigger retirement. In other words, we want to set
+ // m_markCountBias like so:
+ //
+ // m_markCountBias = -(minMarkedBlockUtilization * cellsPerBlock)
+ //
+ // All of this also means that you can detect if any objects are marked by doing:
+ //
+ // m_biasedMarkCount != m_markCountBias
+ int16_t m_markCountBias;
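+
+    // A sketch of the fast path this enables (assumed shape for illustration;
+    // the real noteMarked() inline is not in this hunk):
+    //
+    //     if (UNLIKELY(++m_biasedMarkCount >= 0))
+    //         noteMarkedSlow(); // the block crossed its retirement threshold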
+
+ Handle& m_handle;
+ VM* m_vm;
+
+ uint64_t m_version;
+};
+
+inline MarkedBlock::Handle& MarkedBlock::handle()
+{
+ return m_handle;
+}
+
+inline MarkedBlock& MarkedBlock::Handle::block()
+{
+ return *m_block;
+}
+
+inline size_t MarkedBlock::firstAtom()
+{
+ return WTF::roundUpToMultipleOf<atomSize>(sizeof(MarkedBlock)) / atomSize;
+}
+
+inline MarkedBlock::Atom* MarkedBlock::atoms()
+{
+ return reinterpret_cast<Atom*>(this);
+}
+
+inline bool MarkedBlock::isAtomAligned(const void* p)
+{
+ return !(reinterpret_cast<Bits>(p) & atomAlignmentMask);
+}
+
+inline void* MarkedBlock::Handle::cellAlign(void* p)
+{
+ Bits base = reinterpret_cast<Bits>(block().atoms() + firstAtom());
+ Bits bits = reinterpret_cast<Bits>(p);
+ bits -= base;
+ bits -= bits % cellSize();
+ bits += base;
+ return reinterpret_cast<void*>(bits);
+}
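+
+// A worked example (offsets purely illustrative): if the payload begins at
+// byte offset 256 within the block and cellSize() == 48, a pointer at offset
+// 356 is 100 bytes past the base; 100 - (100 % 48) == 96, so cellAlign()
+// returns offset 256 + 96 == 352, the start of that pointer's cell.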
+
+inline MarkedBlock* MarkedBlock::blockFor(const void* p)
+{
+ return reinterpret_cast<MarkedBlock*>(reinterpret_cast<Bits>(p) & blockMask);
+}
+
+inline MarkedAllocator* MarkedBlock::Handle::allocator() const
+{
+ return m_allocator;
+}
+
+inline Heap* MarkedBlock::Handle::heap() const
+{
+ return m_weakSet.heap();
+}
+
+inline VM* MarkedBlock::Handle::vm() const
+{
+ return m_weakSet.vm();
+}
+
+inline VM* MarkedBlock::vm() const
+{
+ return m_vm;
+}
+
+inline WeakSet& MarkedBlock::Handle::weakSet()
+{
+ return m_weakSet;
+}
+
+inline WeakSet& MarkedBlock::weakSet()
+{
+ return m_handle.weakSet();
+}
+
+inline void MarkedBlock::Handle::shrink()
+{
+ m_weakSet.shrink();
+}
+
+inline unsigned MarkedBlock::Handle::visitWeakSet(HeapRootVisitor& heapRootVisitor)
+{
+ return m_weakSet.visit(heapRootVisitor);
+}
+
+inline void MarkedBlock::Handle::reapWeakSet()
+{
+ m_weakSet.reap();
+}
+
+inline size_t MarkedBlock::Handle::cellSize()
+{
+ return m_atomsPerCell * atomSize;
+}
+
+inline size_t MarkedBlock::cellSize()
+{
+ return m_handle.cellSize();
+}
+
+inline const AllocatorAttributes& MarkedBlock::Handle::attributes() const
+{
+ return m_attributes;
+}
+
+inline const AllocatorAttributes& MarkedBlock::attributes() const
+{
+ return m_handle.attributes();
+}
+
+inline bool MarkedBlock::Handle::needsDestruction() const
+{
+ return m_attributes.destruction == NeedsDestruction;
+}
+
+inline DestructionMode MarkedBlock::Handle::destruction() const
+{
+ return m_attributes.destruction;
+}
+
+inline HeapCell::Kind MarkedBlock::Handle::cellKind() const
+{
+ return m_attributes.cellKind;
+}
+
+inline size_t MarkedBlock::Handle::markCount()
+{
+ return m_block->markCount();
+}
+
+inline size_t MarkedBlock::Handle::size()
+{
+ return markCount() * cellSize();
+}
+
+inline size_t MarkedBlock::atomNumber(const void* p)
+{
+ return (reinterpret_cast<Bits>(p) - reinterpret_cast<Bits>(this)) / atomSize;
+}
+
+inline void MarkedBlock::flipIfNecessary(uint64_t heapVersion)
+{
+ if (UNLIKELY(heapVersion != m_version))
+ flipIfNecessarySlow();
+}
+
+inline void MarkedBlock::flipIfNecessaryConcurrently(uint64_t heapVersion)
+{
+ if (UNLIKELY(heapVersion != m_version))
+ flipIfNecessaryConcurrentlySlow();
+ WTF::loadLoadFence();
+}
+
+inline void MarkedBlock::Handle::flipIfNecessary(uint64_t heapVersion)
+{
+ block().flipIfNecessary(heapVersion);
+}
+
+inline void MarkedBlock::Handle::flipIfNecessaryConcurrently(uint64_t heapVersion)
+{
+ block().flipIfNecessaryConcurrently(heapVersion);
+}
+
+inline void MarkedBlock::Handle::flipForEdenCollection()
+{
+ assertFlipped();
+
+ HEAP_LOG_BLOCK_STATE_TRANSITION(this);
+
+ ASSERT(m_state != New && m_state != FreeListed);
+
+ m_state = Marked;
+}
+
+#if ASSERT_DISABLED
+inline void MarkedBlock::assertFlipped()
+{
+}
+#endif // ASSERT_DISABLED
+
+inline void MarkedBlock::Handle::assertFlipped()
+{
+ block().assertFlipped();
+}
+
+inline bool MarkedBlock::isMarked(const void* p)
+{
+ assertFlipped();
+ return m_marks.get(atomNumber(p));
+}
+
+inline bool MarkedBlock::testAndSetMarked(const void* p)
+{
+ assertFlipped();
+ return m_marks.concurrentTestAndSet(atomNumber(p));
+}
+
+inline bool MarkedBlock::Handle::isNewlyAllocated(const void* p)
+{
+ return m_newlyAllocated->get(m_block->atomNumber(p));
+}
+
+inline void MarkedBlock::Handle::setNewlyAllocated(const void* p)
+{
+ m_newlyAllocated->set(m_block->atomNumber(p));
+}
+
+inline void MarkedBlock::Handle::clearNewlyAllocated(const void* p)
+{
+ m_newlyAllocated->clear(m_block->atomNumber(p));
+}
+
+inline bool MarkedBlock::Handle::clearNewlyAllocated()
+{
+ if (m_newlyAllocated) {
+ m_newlyAllocated = nullptr;
+ return true;
}
+ return false;
+}
- inline MarkedBlock::FreeList::FreeList(FreeCell* head, size_t bytes)
- : head(head)
- , bytes(bytes)
- {
- }
+inline bool MarkedBlock::Handle::isMarkedOrNewlyAllocated(const HeapCell* cell)
+{
+ ASSERT(m_state == Marked);
+ return m_block->isMarked(cell) || (m_newlyAllocated && isNewlyAllocated(cell));
+}
- inline size_t MarkedBlock::firstAtom()
- {
- return WTF::roundUpToMultipleOf<atomSize>(sizeof(MarkedBlock)) / atomSize;
- }
+inline bool MarkedBlock::isMarkedOrNewlyAllocated(const HeapCell* cell)
+{
+ ASSERT(m_handle.m_state == Marked);
+ return isMarked(cell) || (m_handle.m_newlyAllocated && m_handle.isNewlyAllocated(cell));
+}
- inline MarkedBlock::Atom* MarkedBlock::atoms()
- {
- return reinterpret_cast<Atom*>(this);
- }
+inline bool MarkedBlock::Handle::isLive(const HeapCell* cell)
+{
+ assertFlipped();
+ switch (m_state) {
+ case Allocated:
+ return true;
- inline bool MarkedBlock::isAtomAligned(const void* p)
- {
- return !(reinterpret_cast<Bits>(p) & atomAlignmentMask);
- }
+ case Marked:
+ return isMarkedOrNewlyAllocated(cell);
- inline MarkedBlock* MarkedBlock::blockFor(const void* p)
- {
- return reinterpret_cast<MarkedBlock*>(reinterpret_cast<Bits>(p) & blockMask);
- }
-
- inline MarkedAllocator* MarkedBlock::allocator() const
- {
- return m_allocator;
- }
-
- inline Heap* MarkedBlock::heap() const
- {
- return m_weakSet.heap();
- }
-
- inline VM* MarkedBlock::vm() const
- {
- return m_weakSet.vm();
- }
-
- inline WeakSet& MarkedBlock::weakSet()
- {
- return m_weakSet;
- }
-
- inline void MarkedBlock::shrink()
- {
- m_weakSet.shrink();
- }
-
- inline void MarkedBlock::visitWeakSet(HeapRootVisitor& heapRootVisitor)
- {
- m_weakSet.visit(heapRootVisitor);
- }
-
- inline void MarkedBlock::reapWeakSet()
- {
- m_weakSet.reap();
- }
-
- inline void MarkedBlock::willRemoveBlock()
- {
- ASSERT(m_state != Retired);
- }
-
- inline void MarkedBlock::didConsumeFreeList()
- {
- HEAP_LOG_BLOCK_STATE_TRANSITION(this);
-
- ASSERT(m_state == FreeListed);
- m_state = Allocated;
- }
-
- inline size_t MarkedBlock::markCount()
- {
- return m_marks.count();
- }
-
- inline bool MarkedBlock::isEmpty()
- {
- return m_marks.isEmpty() && m_weakSet.isEmpty() && (!m_newlyAllocated || m_newlyAllocated->isEmpty());
- }
-
- inline size_t MarkedBlock::cellSize()
- {
- return m_atomsPerCell * atomSize;
- }
-
- inline const AllocatorAttributes& MarkedBlock::attributes() const
- {
- return m_attributes;
- }
-
- inline bool MarkedBlock::needsDestruction() const
- {
- return m_attributes.destruction == NeedsDestruction;
- }
-
- inline DestructionMode MarkedBlock::destruction() const
- {
- return m_attributes.destruction;
- }
-
- inline HeapCell::Kind MarkedBlock::cellKind() const
- {
- return m_attributes.cellKind;
- }
-
- inline size_t MarkedBlock::size()
- {
- return markCount() * cellSize();
- }
-
- inline size_t MarkedBlock::capacity()
- {
- return m_capacity;
- }
-
- inline size_t MarkedBlock::atomNumber(const void* p)
- {
- return (reinterpret_cast<Bits>(p) - reinterpret_cast<Bits>(this)) / atomSize;
- }
-
- inline bool MarkedBlock::isMarked(const void* p)
- {
- return m_marks.get(atomNumber(p));
- }
-
- inline bool MarkedBlock::testAndSetMarked(const void* p)
- {
- return m_marks.concurrentTestAndSet(atomNumber(p));
- }
-
- inline void MarkedBlock::setMarked(const void* p)
- {
- m_marks.set(atomNumber(p));
- }
-
- inline void MarkedBlock::clearMarked(const void* p)
- {
- ASSERT(m_marks.get(atomNumber(p)));
- m_marks.clear(atomNumber(p));
- }
-
- inline bool MarkedBlock::isNewlyAllocated(const void* p)
- {
- return m_newlyAllocated->get(atomNumber(p));
- }
-
- inline void MarkedBlock::setNewlyAllocated(const void* p)
- {
- m_newlyAllocated->set(atomNumber(p));
- }
-
- inline void MarkedBlock::clearNewlyAllocated(const void* p)
- {
- m_newlyAllocated->clear(atomNumber(p));
- }
-
- inline bool MarkedBlock::clearNewlyAllocated()
- {
- if (m_newlyAllocated) {
- m_newlyAllocated = nullptr;
- return true;
- }
- return false;
- }
-
- inline bool MarkedBlock::isMarkedOrNewlyAllocated(const HeapCell* cell)
- {
- ASSERT(m_state == Retired || m_state == Marked);
- return m_marks.get(atomNumber(cell)) || (m_newlyAllocated && isNewlyAllocated(cell));
- }
-
- inline bool MarkedBlock::isLive(const HeapCell* cell)
- {
- switch (m_state) {
- case Allocated:
- return true;
-
- case Retired:
- case Marked:
- return isMarkedOrNewlyAllocated(cell);
-
- case New:
- case FreeListed:
- RELEASE_ASSERT_NOT_REACHED();
- return false;
- }
-
+ case New:
+ case FreeListed:
RELEASE_ASSERT_NOT_REACHED();
return false;
}
- inline bool MarkedBlock::isAtom(const void* p)
- {
- ASSERT(MarkedBlock::isAtomAligned(p));
- size_t atomNumber = this->atomNumber(p);
- size_t firstAtom = this->firstAtom();
- if (atomNumber < firstAtom) // Filters pointers into MarkedBlock metadata.
- return false;
- if ((atomNumber - firstAtom) % m_atomsPerCell) // Filters pointers into cell middles.
- return false;
- if (atomNumber >= m_endAtom) // Filters pointers into invalid cells out of the range.
- return false;
- return true;
+ RELEASE_ASSERT_NOT_REACHED();
+ return false;
+}
+
+inline bool MarkedBlock::isAtom(const void* p)
+{
+ ASSERT(MarkedBlock::isAtomAligned(p));
+ size_t atomNumber = this->atomNumber(p);
+ size_t firstAtom = MarkedBlock::firstAtom();
+ if (atomNumber < firstAtom) // Filters pointers into MarkedBlock metadata.
+ return false;
+ if ((atomNumber - firstAtom) % m_handle.m_atomsPerCell) // Filters pointers into cell middles.
+ return false;
+ if (atomNumber >= m_handle.m_endAtom) // Filters pointers into invalid cells out of the range.
+ return false;
+ return true;
+}
+
+inline bool MarkedBlock::Handle::isLiveCell(const void* p)
+{
+ if (!m_block->isAtom(p))
+ return false;
+ return isLive(static_cast<const HeapCell*>(p));
+}
+
+template <typename Functor>
+inline IterationStatus MarkedBlock::Handle::forEachCell(const Functor& functor)
+{
+ HeapCell::Kind kind = m_attributes.cellKind;
+ for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+ HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
+ if (functor(cell, kind) == IterationStatus::Done)
+ return IterationStatus::Done;
}
+ return IterationStatus::Continue;
+}
- inline bool MarkedBlock::isLiveCell(const void* p)
- {
- if (!isAtom(p))
- return false;
- return isLive(static_cast<const HeapCell*>(p));
+template <typename Functor>
+inline IterationStatus MarkedBlock::Handle::forEachLiveCell(const Functor& functor)
+{
+ flipIfNecessary();
+ HeapCell::Kind kind = m_attributes.cellKind;
+ for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+ HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
+ if (!isLive(cell))
+ continue;
+
+ if (functor(cell, kind) == IterationStatus::Done)
+ return IterationStatus::Done;
}
+ return IterationStatus::Continue;
+}
- template <typename Functor> inline IterationStatus MarkedBlock::forEachCell(const Functor& functor)
- {
- HeapCell::Kind kind = m_attributes.cellKind;
- for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
- HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&atoms()[i]);
- if (functor(cell, kind) == IterationStatus::Done)
- return IterationStatus::Done;
- }
- return IterationStatus::Continue;
+template <typename Functor>
+inline IterationStatus MarkedBlock::Handle::forEachDeadCell(const Functor& functor)
+{
+ flipIfNecessary();
+ HeapCell::Kind kind = m_attributes.cellKind;
+ for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+ HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
+ if (isLive(cell))
+ continue;
+
+ if (functor(cell, kind) == IterationStatus::Done)
+ return IterationStatus::Done;
}
+ return IterationStatus::Continue;
+}
- template <typename Functor> inline IterationStatus MarkedBlock::forEachLiveCell(const Functor& functor)
- {
- HeapCell::Kind kind = m_attributes.cellKind;
- for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
- HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&atoms()[i]);
- if (!isLive(cell))
- continue;
+inline bool MarkedBlock::Handle::needsSweeping() const
+{
+ const_cast<MarkedBlock::Handle*>(this)->flipIfNecessary();
+ return m_state == Marked;
+}
- if (functor(cell, kind) == IterationStatus::Done)
- return IterationStatus::Done;
- }
- return IterationStatus::Continue;
- }
+inline bool MarkedBlock::Handle::isAllocated() const
+{
+ const_cast<MarkedBlock::Handle*>(this)->flipIfNecessary();
+ return m_state == Allocated;
+}
- template <typename Functor> inline IterationStatus MarkedBlock::forEachDeadCell(const Functor& functor)
- {
- HeapCell::Kind kind = m_attributes.cellKind;
- for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
- HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&atoms()[i]);
- if (isLive(cell))
- continue;
+inline bool MarkedBlock::Handle::isMarked() const
+{
+ const_cast<MarkedBlock::Handle*>(this)->flipIfNecessary();
+ return m_state == Marked;
+}
- if (functor(cell, kind) == IterationStatus::Done)
- return IterationStatus::Done;
- }
- return IterationStatus::Continue;
- }
+inline bool MarkedBlock::Handle::isFreeListed() const
+{
+ const_cast<MarkedBlock::Handle*>(this)->flipIfNecessary();
+ return m_state == FreeListed;
+}
- inline bool MarkedBlock::needsSweeping() const
- {
- return m_state == Marked;
- }
+inline bool MarkedBlock::hasAnyMarked() const
+{
+ return m_biasedMarkCount != m_markCountBias;
+}
- inline bool MarkedBlock::isAllocated() const
- {
- return m_state == Allocated;
- }
-
- inline bool MarkedBlock::isMarkedOrRetired() const
- {
- return m_state == Marked || m_state == Retired;
- }
+inline void MarkedBlock::noteMarked()
+{
+ // This is racy by design. We don't want to pay the price of an atomic increment!
+ int16_t biasedMarkCount = m_biasedMarkCount;
+ ++biasedMarkCount;
+ m_biasedMarkCount = biasedMarkCount;
+ if (UNLIKELY(!biasedMarkCount))
+ noteMarkedSlow();
+}
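
The bias trick above turns the retirement check into a zero test: m_markCountBias is precomputed as a negative number whose magnitude is the mark-count threshold, so the counter wraps to zero exactly when the threshold is reached, and hasAnyMarked() is a single compare against the bias. A minimal standalone sketch, assuming a hypothetical 90% retirement fraction (the real fraction is a tuning parameter not shown in this hunk):

    #include <cstddef>
    #include <cstdint>

    // Choose a negative bias so "markCount == threshold" becomes "biasedCount == 0".
    static int16_t computeMarkCountBias(size_t cellsPerBlock)
    {
        size_t threshold = cellsPerBlock * 9 / 10; // assumed retirement point
        return static_cast<int16_t>(-static_cast<intptr_t>(threshold));
    }

    // Hot path: one non-atomic increment and one zero test, no limit comparison.
    static void noteMarkedSketch(int16_t& biasedMarkCount, bool& reachedThreshold)
    {
        if (!++biasedMarkCount)
            reachedThreshold = true; // stands in for noteMarkedSlow()
    }
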
} // namespace JSC
namespace WTF {
- struct MarkedBlockHash : PtrHash<JSC::MarkedBlock*> {
- static unsigned hash(JSC::MarkedBlock* const& key)
- {
- // Aligned VM regions tend to be monotonically increasing integers,
- // which is a great hash function, but we have to remove the low bits,
- // since they're always zero, which is a terrible hash function!
- return reinterpret_cast<JSC::Bits>(key) / JSC::MarkedBlock::blockSize;
- }
- };
+struct MarkedBlockHash : PtrHash<JSC::MarkedBlock*> {
+ static unsigned hash(JSC::MarkedBlock* const& key)
+ {
+ // Aligned VM regions tend to be monotonically increasing integers,
+ // which is a great hash function, but we have to remove the low bits,
+ // since they're always zero, which is a terrible hash function!
+ return reinterpret_cast<JSC::Bits>(key) / JSC::MarkedBlock::blockSize;
+ }
+};
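
Dividing instead of hashing the raw pointer works because MarkedBlocks are blockSize-aligned, so the low bits carry no information. Worked example, assuming a 16KB blockSize:

    // 0x00007f8a4bff4000 % 0x4000 == 0           (alignment: low 14 bits are zero)
    // 0x00007f8a4bff4000 / 0x4000 == 0x1fe292ffd (the informative high bits survive)
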
- template<> struct DefaultHash<JSC::MarkedBlock*> {
- typedef MarkedBlockHash Hash;
- };
+template<> struct DefaultHash<JSC::MarkedBlock*> {
+ typedef MarkedBlockHash Hash;
+};
+
+void printInternal(PrintStream& out, JSC::MarkedBlock::BlockState);
} // namespace WTF
diff --git a/Source/JavaScriptCore/heap/MarkedSpace.cpp b/Source/JavaScriptCore/heap/MarkedSpace.cpp
index bbf7557..c6c01b4 100644
--- a/Source/JavaScriptCore/heap/MarkedSpace.cpp
+++ b/Source/JavaScriptCore/heap/MarkedSpace.cpp
@@ -24,17 +24,175 @@
#include "IncrementalSweeper.h"
#include "JSObject.h"
#include "JSCInlines.h"
+#include "SuperSampler.h"
+#include <wtf/ListDump.h>
namespace JSC {
+std::array<size_t, MarkedSpace::numSizeClasses> MarkedSpace::s_sizeClassForSizeStep;
+
+namespace {
+
+const Vector<size_t>& sizeClasses()
+{
+ static Vector<size_t>* result;
+ static std::once_flag once;
+ std::call_once(
+ once,
+ [] {
+ result = new Vector<size_t>();
+
+ auto add = [&] (size_t sizeClass) {
+ if (Options::dumpSizeClasses())
+ dataLog("Adding JSC MarkedSpace size class: ", sizeClass, "\n");
+ // Perform some validation as we go.
+ RELEASE_ASSERT(!(sizeClass % MarkedSpace::sizeStep));
+ if (result->isEmpty())
+ RELEASE_ASSERT(sizeClass == MarkedSpace::sizeStep);
+ else
+ RELEASE_ASSERT(sizeClass > result->last());
+ result->append(sizeClass);
+ };
+
+ // This is a definition of the size classes in our GC. It must define all of the
+ // size classes from sizeStep up to largeCutoff.
+
+ // Have very precise size classes for the small stuff. This is a loop to make it easy to reduce
+ // atomSize.
+ for (size_t size = MarkedSpace::sizeStep; size < MarkedSpace::preciseCutoff; size += MarkedSpace::sizeStep)
+ add(size);
+
+ // We want to make sure that the remaining size classes minimize internal fragmentation (i.e.
+ // the wasted space at the tail end of a MarkedBlock) while proceeding roughly exponentially,
+ // starting just above the precise size classes and ending at four cells per block.
+
+ if (Options::dumpSizeClasses())
+ dataLog(" Marked block payload size: ", static_cast<size_t>(MarkedSpace::blockPayload), "\n");
+
+ for (unsigned i = 0; ; ++i) {
+ double approximateSize = MarkedSpace::preciseCutoff * pow(Options::sizeClassProgression(), i);
+
+ if (Options::dumpSizeClasses())
+ dataLog(" Next size class as a double: ", approximateSize, "\n");
+
+ size_t approximateSizeInBytes = static_cast<size_t>(approximateSize);
+
+ if (Options::dumpSizeClasses())
+ dataLog(" Next size class as bytes: ", approximateSizeInBytes, "\n");
+
+ // Make sure that the computer did the math correctly.
+ RELEASE_ASSERT(approximateSizeInBytes >= MarkedSpace::preciseCutoff);
+
+ if (approximateSizeInBytes > MarkedSpace::largeCutoff)
+ break;
+
+ size_t sizeClass =
+ WTF::roundUpToMultipleOf<MarkedSpace::sizeStep>(approximateSizeInBytes);
+
+ if (Options::dumpSizeClasses())
+ dataLog(" Size class: ", sizeClass, "\n");
+
+ // Optimize the size class so that there isn't any slop at the end of the block's
+ // payload.
+ unsigned cellsPerBlock = MarkedSpace::blockPayload / sizeClass;
+ size_t possiblyBetterSizeClass = (MarkedSpace::blockPayload / cellsPerBlock) & ~(MarkedSpace::sizeStep - 1);
+
+ if (Options::dumpSizeClasses())
+ dataLog(" Possibly better size class: ", possiblyBetterSizeClass, "\n");
+
+ // The size class we just came up with is better than the other one if it reduces
+ // total wastage assuming we only allocate cells of that size.
+ size_t originalWastage = MarkedSpace::blockPayload - cellsPerBlock * sizeClass;
+ size_t newWastage = (possiblyBetterSizeClass - sizeClass) * cellsPerBlock;
+
+ if (Options::dumpSizeClasses())
+ dataLog(" Original wastage: ", originalWastage, ", new wastage: ", newWastage, "\n");
+
+ size_t betterSizeClass;
+ if (newWastage > originalWastage)
+ betterSizeClass = sizeClass;
+ else
+ betterSizeClass = possiblyBetterSizeClass;
+
+ if (Options::dumpSizeClasses())
+ dataLog(" Choosing size class: ", betterSizeClass, "\n");
+
+ if (betterSizeClass == result->last()) {
+ // Defense for when expStep is small.
+ continue;
+ }
+
+ // This is usually how we get out of the loop.
+ if (betterSizeClass > MarkedSpace::largeCutoff
+ || betterSizeClass > Options::largeAllocationCutoff())
+ break;
+
+ add(betterSizeClass);
+ }
+
+ if (Options::dumpSizeClasses())
+ dataLog("JSC Heap MarkedSpace size class dump: ", listDump(*result), "\n");
+
+ // We have an optimization in MarkedSpace::optimalSizeFor() that assumes things about
+ // the size class table. This checks our results against that function's assumptions.
+ for (size_t size = MarkedSpace::sizeStep, i = 0; size <= MarkedSpace::preciseCutoff; size += MarkedSpace::sizeStep, i++)
+ RELEASE_ASSERT(result->at(i) == size);
+ });
+ return *result;
+}
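
One iteration of the wastage optimization above, worked by hand; the numbers assume sizeStep == 16 and an illustrative blockPayload of 16352:

    size_t sizeClass = 1344;                            // candidate from the geometric progression
    unsigned cellsPerBlock = 16352 / 1344;              // 12 cells fit
    size_t possiblyBetter = (16352 / 12) & ~size_t(15); // 1360, rounded down to sizeStep
    size_t originalWastage = 16352 - 12 * 1344;         // 224 bytes of unusable block tail
    size_t newWastage = (1360 - 1344) * 12;             // 192 bytes of per-cell padding
    // 192 < 224, so 1360 wins: padding inside cells beats dead space at the end
    // of the block, because callers can query the size class and use the padding.
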
+
+template<typename TableType, typename SizeClassCons, typename DefaultCons>
+void buildSizeClassTable(TableType& table, const SizeClassCons& cons, const DefaultCons& defaultCons)
+{
+ size_t nextIndex = 0;
+ for (size_t sizeClass : sizeClasses()) {
+ auto entry = cons(sizeClass);
+ size_t index = MarkedSpace::sizeClassToIndex(sizeClass);
+ for (size_t i = nextIndex; i <= index; ++i)
+ table[i] = entry;
+ nextIndex = index + 1;
+ }
+ for (size_t i = nextIndex; i < MarkedSpace::numSizeClasses; ++i)
+ table[i] = defaultCons(MarkedSpace::indexToSizeClass(i));
+}
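
The fill loop above writes each size class's entry into every table slot at or below its index that hasn't been claimed by a smaller class, so any lookup lands on the smallest size class that fits; slots past the last class get the default entry. Assuming sizeStep == 16 and a class list beginning 16, 32, 48, 64, 80, 112 (illustrative), the table behaves like:

    // table[sizeClassToIndex(16)]  -> entry for 16   (exact fit)
    // table[sizeClassToIndex(81)]  -> entry for 112  (81..112 share the next class up)
    // table[sizeClassToIndex(112)] -> entry for 112
    // indexes beyond the last class -> defaultCons(...), e.g. nullptr allocators
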
+
+} // anonymous namespace
+
+void MarkedSpace::initializeSizeClassForStepSize()
+{
+ // We call this multiple times and we may call it simultaneously from multiple threads. That's
+ // OK, since it always stores the same values into the table.
+
+ buildSizeClassTable(
+ s_sizeClassForSizeStep,
+ [&] (size_t sizeClass) -> size_t {
+ return sizeClass;
+ },
+ [&] (size_t sizeClass) -> size_t {
+ return sizeClass;
+ });
+}
+
MarkedSpace::MarkedSpace(Heap* heap)
: m_heap(heap)
, m_capacity(0)
, m_isIterating(false)
{
- forEachAllocator(
- [&] (MarkedAllocator& allocator, size_t cellSize, AllocatorAttributes attributes) -> IterationStatus {
- allocator.init(heap, this, cellSize, attributes);
+ initializeSizeClassForStepSize();
+
+ forEachSubspace(
+ [&] (Subspace& subspace, AllocatorAttributes attributes) -> IterationStatus {
+ subspace.attributes = attributes;
+
+ buildSizeClassTable(
+ subspace.allocatorForSizeStep,
+ [&] (size_t sizeClass) -> MarkedAllocator* {
+ return subspace.bagOfAllocators.add(heap, this, sizeClass, attributes);
+ },
+ [&] (size_t) -> MarkedAllocator* {
+ return nullptr;
+ });
+
return IterationStatus::Continue;
});
}
@@ -42,7 +200,7 @@
MarkedSpace::~MarkedSpace()
{
forEachBlock(
- [&] (MarkedBlock* block) {
+ [&] (MarkedBlock::Handle* block) {
freeBlock(block);
});
ASSERT(!m_blocks.set().size());
@@ -52,28 +210,85 @@
{
stopAllocating();
forEachAllocator(
- [&] (MarkedAllocator& allocator, size_t, AllocatorAttributes) -> IterationStatus {
+ [&] (MarkedAllocator& allocator) -> IterationStatus {
allocator.lastChanceToFinalize();
return IterationStatus::Continue;
});
+ for (LargeAllocation* allocation : m_largeAllocations)
+ allocation->lastChanceToFinalize();
+}
+
+void* MarkedSpace::allocate(Subspace& subspace, size_t bytes)
+{
+ if (MarkedAllocator* allocator = allocatorFor(subspace, bytes))
+ return allocator->allocate();
+ return allocateLarge(subspace, bytes);
+}
+
+void* MarkedSpace::tryAllocate(Subspace& subspace, size_t bytes)
+{
+ if (MarkedAllocator* allocator = allocatorFor(subspace, bytes))
+ return allocator->tryAllocate();
+ return tryAllocateLarge(subspace, bytes);
+}
+
+void* MarkedSpace::allocateLarge(Subspace& subspace, size_t size)
+{
+ void* result = tryAllocateLarge(subspace, size);
+ RELEASE_ASSERT(result);
+ return result;
+}
+
+void* MarkedSpace::tryAllocateLarge(Subspace& subspace, size_t size)
+{
+ m_heap->collectIfNecessaryOrDefer();
+
+ size = WTF::roundUpToMultipleOf<sizeStep>(size);
+ LargeAllocation* allocation = LargeAllocation::tryCreate(*m_heap, size, subspace.attributes);
+ if (!allocation)
+ return nullptr;
+
+ m_largeAllocations.append(allocation);
+ m_heap->didAllocate(size);
+ m_capacity += size;
+ return allocation->cell();
}
void MarkedSpace::sweep()
{
m_heap->sweeper()->willFinishSweeping();
forEachBlock(
- [&] (MarkedBlock* block) {
+ [&] (MarkedBlock::Handle* block) {
block->sweep();
});
}
+void MarkedSpace::sweepLargeAllocations()
+{
+ RELEASE_ASSERT(m_largeAllocationsNurseryOffset == m_largeAllocations.size());
+ unsigned srcIndex = m_largeAllocationsNurseryOffsetForSweep;
+ unsigned dstIndex = srcIndex;
+ while (srcIndex < m_largeAllocations.size()) {
+ LargeAllocation* allocation = m_largeAllocations[srcIndex++];
+ allocation->sweep();
+ if (allocation->isEmpty()) {
+ m_capacity -= allocation->cellSize();
+ allocation->destroy();
+ continue;
+ }
+ m_largeAllocations[dstIndex++] = allocation;
+ }
+ m_largeAllocations.resize(dstIndex);
+ m_largeAllocationsNurseryOffset = m_largeAllocations.size();
+}
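
The src/dst walk above is the usual in-place filter: survivors are compacted toward the front in order and the vector is truncated, giving an O(n) sweep with no side storage. Its generic form, as a sketch:

    #include <vector>

    // Keep elements from `start` on that satisfy `keep`, preserving order.
    template<typename T, typename Keep>
    void compactInPlace(std::vector<T>& v, size_t start, const Keep& keep)
    {
        size_t dst = start;
        for (size_t src = start; src < v.size(); ++src) {
            if (keep(v[src]))
                v[dst++] = v[src];
        }
        v.resize(dst);
    }

The explicit loop in the patch additionally destroys dead allocations and debits m_capacity as it filters.
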
+
void MarkedSpace::zombifySweep()
{
if (Options::logGC())
dataLog("Zombifying sweep...");
m_heap->sweeper()->willFinishSweeping();
forEachBlock(
- [&] (MarkedBlock* block) {
+ [&] (MarkedBlock::Handle* block) {
if (block->needsSweeping())
block->sweep();
});
@@ -82,103 +297,106 @@
void MarkedSpace::resetAllocators()
{
forEachAllocator(
- [&] (MarkedAllocator& allocator, size_t, AllocatorAttributes) -> IterationStatus {
+ [&] (MarkedAllocator& allocator) -> IterationStatus {
allocator.reset();
return IterationStatus::Continue;
});
m_blocksWithNewObjects.clear();
+ m_activeWeakSets.takeFrom(m_newActiveWeakSets);
+ if (m_heap->operationInProgress() == EdenCollection)
+ m_largeAllocationsNurseryOffsetForSweep = m_largeAllocationsNurseryOffset;
+ else
+ m_largeAllocationsNurseryOffsetForSweep = 0;
+ m_largeAllocationsNurseryOffset = m_largeAllocations.size();
}
void MarkedSpace::visitWeakSets(HeapRootVisitor& heapRootVisitor)
{
- if (m_heap->operationInProgress() == EdenCollection) {
- for (unsigned i = 0; i < m_blocksWithNewObjects.size(); ++i)
- m_blocksWithNewObjects[i]->visitWeakSet(heapRootVisitor);
- } else {
- forEachBlock(
- [&] (MarkedBlock* block) {
- block->visitWeakSet(heapRootVisitor);
- });
- }
+ auto visit = [&] (WeakSet* weakSet) {
+ weakSet->visit(heapRootVisitor);
+ };
+
+ m_newActiveWeakSets.forEach(visit);
+
+ if (m_heap->operationInProgress() == FullCollection)
+ m_activeWeakSets.forEach(visit);
}
void MarkedSpace::reapWeakSets()
{
- if (m_heap->operationInProgress() == EdenCollection) {
- for (unsigned i = 0; i < m_blocksWithNewObjects.size(); ++i)
- m_blocksWithNewObjects[i]->reapWeakSet();
- } else {
- forEachBlock(
- [&] (MarkedBlock* block) {
- block->reapWeakSet();
- });
- }
-}
-
-template <typename Functor>
-void MarkedSpace::forEachAllocator(const Functor& functor)
-{
- forEachSubspace(
- [&] (Subspace& subspace, AllocatorAttributes attributes) -> IterationStatus {
- for (size_t cellSize = preciseStep; cellSize <= preciseCutoff; cellSize += preciseStep) {
- if (functor(allocatorFor(subspace, cellSize), cellSize, attributes) == IterationStatus::Done)
- return IterationStatus::Done;
- }
- for (size_t cellSize = impreciseStart; cellSize <= impreciseCutoff; cellSize += impreciseStep) {
- if (functor(allocatorFor(subspace, cellSize), cellSize, attributes) == IterationStatus::Done)
- return IterationStatus::Done;
- }
- if (functor(subspace.largeAllocator, 0, attributes) == IterationStatus::Done)
- return IterationStatus::Done;
-
- return IterationStatus::Continue;
- });
+ auto visit = [&] (WeakSet* weakSet) {
+ weakSet->reap();
+ };
+
+ m_newActiveWeakSets.forEach(visit);
+
+ if (m_heap->operationInProgress() == FullCollection)
+ m_activeWeakSets.forEach(visit);
}
void MarkedSpace::stopAllocating()
{
ASSERT(!isIterating());
forEachAllocator(
- [&] (MarkedAllocator& allocator, size_t, AllocatorAttributes) -> IterationStatus {
+ [&] (MarkedAllocator& allocator) -> IterationStatus {
allocator.stopAllocating();
return IterationStatus::Continue;
});
}
+void MarkedSpace::prepareForMarking()
+{
+ if (m_heap->operationInProgress() == EdenCollection)
+ m_largeAllocationsOffsetForThisCollection = m_largeAllocationsNurseryOffset;
+ else
+ m_largeAllocationsOffsetForThisCollection = 0;
+ m_largeAllocationsForThisCollectionBegin = m_largeAllocations.begin() + m_largeAllocationsOffsetForThisCollection;
+ m_largeAllocationsForThisCollectionSize = m_largeAllocations.size() - m_largeAllocationsOffsetForThisCollection;
+ m_largeAllocationsForThisCollectionEnd = m_largeAllocations.end();
+ RELEASE_ASSERT(m_largeAllocationsForThisCollectionEnd == m_largeAllocationsForThisCollectionBegin + m_largeAllocationsForThisCollectionSize);
+ std::sort(
+ m_largeAllocationsForThisCollectionBegin, m_largeAllocationsForThisCollectionEnd,
+ [&] (LargeAllocation* a, LargeAllocation* b) {
+ return a < b;
+ });
+}
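
The offsets here implement a nursery for large allocations: entries below m_largeAllocationsNurseryOffset survived a previous collection, entries at or above it are new since then, so an eden collection only needs to consider the suffix. A sketch of the assumed invariant:

    // m_largeAllocations[0 .. nurseryOffset)      old: survived a prior collection
    // m_largeAllocations[nurseryOffset .. size)   new: allocated since then
    unsigned begin = (operation == EdenCollection) ? nurseryOffset : 0;
    for (unsigned i = begin; i < allocations.size(); ++i)
        consider(allocations[i]); // hypothetical per-allocation marking work

The sort at the end keeps the per-collection range ordered, which is what makes the cached begin/end pointers useful for quickly searching it.
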
+
void MarkedSpace::resumeAllocating()
{
ASSERT(isIterating());
forEachAllocator(
- [&] (MarkedAllocator& allocator, size_t, AllocatorAttributes) -> IterationStatus {
+ [&] (MarkedAllocator& allocator) -> IterationStatus {
allocator.resumeAllocating();
return IterationStatus::Continue;
});
+ // Nothing to do for LargeAllocations.
}
bool MarkedSpace::isPagedOut(double deadline)
{
bool result = false;
forEachAllocator(
- [&] (MarkedAllocator& allocator, size_t, AllocatorAttributes) -> IterationStatus {
+ [&] (MarkedAllocator& allocator) -> IterationStatus {
if (allocator.isPagedOut(deadline)) {
result = true;
return IterationStatus::Done;
}
return IterationStatus::Continue;
});
+ // FIXME: Consider taking LargeAllocations into account here.
return result;
}
-void MarkedSpace::freeBlock(MarkedBlock* block)
+void MarkedSpace::freeBlock(MarkedBlock::Handle* block)
{
block->allocator()->removeBlock(block);
- m_capacity -= block->capacity();
- m_blocks.remove(block);
- MarkedBlock::destroy(*m_heap, block);
+ m_capacity -= MarkedBlock::blockSize;
+ m_blocks.remove(&block->block());
+ delete block;
}
-void MarkedSpace::freeOrShrinkBlock(MarkedBlock* block)
+void MarkedSpace::freeOrShrinkBlock(MarkedBlock::Handle* block)
{
if (!block->isEmpty()) {
block->shrink();
@@ -191,44 +409,43 @@
void MarkedSpace::shrink()
{
forEachBlock(
- [&] (MarkedBlock* block) {
+ [&] (MarkedBlock::Handle* block) {
freeOrShrinkBlock(block);
});
+ // For LargeAllocations, we do the moral equivalent in sweepLargeAllocations().
}
void MarkedSpace::clearNewlyAllocated()
{
forEachAllocator(
- [&] (MarkedAllocator& allocator, size_t size, AllocatorAttributes) -> IterationStatus {
- if (!size) {
- // This means it's a largeAllocator.
- allocator.forEachBlock(
- [&] (MarkedBlock* block) {
- block->clearNewlyAllocated();
- });
- return IterationStatus::Continue;
- }
-
- if (MarkedBlock* block = allocator.takeLastActiveBlock())
+ [&] (MarkedAllocator& allocator) -> IterationStatus {
+ if (MarkedBlock::Handle* block = allocator.takeLastActiveBlock())
block->clearNewlyAllocated();
return IterationStatus::Continue;
});
+
+ for (unsigned i = m_largeAllocationsOffsetForThisCollection; i < m_largeAllocations.size(); ++i)
+ m_largeAllocations[i]->clearNewlyAllocated();
#if !ASSERT_DISABLED
forEachBlock(
- [&] (MarkedBlock* block) {
+ [&] (MarkedBlock::Handle* block) {
ASSERT(!block->clearNewlyAllocated());
});
+
+ for (LargeAllocation* allocation : m_largeAllocations)
+ ASSERT(!allocation->isNewlyAllocated());
#endif // !ASSERT_DISABLED
}
#ifndef NDEBUG
-struct VerifyMarkedOrRetired : MarkedBlock::VoidFunctor {
- void operator()(MarkedBlock* block) const
+struct VerifyMarked : MarkedBlock::VoidFunctor {
+ void operator()(MarkedBlock::Handle* block) const
{
+ if (block->needsFlip())
+ return;
switch (block->m_state) {
case MarkedBlock::Marked:
- case MarkedBlock::Retired:
return;
default:
RELEASE_ASSERT_NOT_REACHED();
@@ -237,20 +454,19 @@
};
#endif
-void MarkedSpace::clearMarks()
+void MarkedSpace::flip()
{
if (m_heap->operationInProgress() == EdenCollection) {
for (unsigned i = 0; i < m_blocksWithNewObjects.size(); ++i)
- m_blocksWithNewObjects[i]->clearMarks();
+ m_blocksWithNewObjects[i]->flipForEdenCollection();
} else {
- forEachBlock(
- [&] (MarkedBlock* block) {
- block->clearMarks();
- });
+ m_version++; // Henceforth, flipIfNecessary() will trigger on all blocks.
+ for (LargeAllocation* allocation : m_largeAllocations)
+ allocation->flip();
}
#ifndef NDEBUG
- VerifyMarkedOrRetired verifyFunctor;
+ VerifyMarked verifyFunctor;
forEachBlock(verifyFunctor);
#endif
}
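
Bumping m_version is what makes the full-collection case cheap here: no MarkedBlock is touched eagerly (large allocations are still flipped in place), and each block lazily clears its stale mark bits the first time flipIfNecessary() notices a version mismatch. A minimal sketch of the pattern, with simplified names:

    #include <cstdint>

    struct BlockVersionSketch {
        uint64_t myVersion { 0 };

        void flipIfNecessary(uint64_t heapVersion)
        {
            if (myVersion == heapVersion)
                return;          // already reconciled with this collection cycle
            clearMarkBits();     // the deferred per-block flip work
            myVersion = heapVersion;
        }

        void clearMarkBits() { /* bitmap clear elided */ }
    };
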
@@ -269,4 +485,63 @@
m_isIterating = false;
}
+size_t MarkedSpace::objectCount()
+{
+ size_t result = 0;
+ forEachBlock(
+ [&] (MarkedBlock::Handle* block) {
+ result += block->markCount();
+ });
+ for (LargeAllocation* allocation : m_largeAllocations) {
+ if (allocation->isMarked())
+ result++;
+ }
+ return result;
+}
+
+size_t MarkedSpace::size()
+{
+ size_t result = 0;
+ forEachBlock(
+ [&] (MarkedBlock::Handle* block) {
+ result += block->markCount() * block->cellSize();
+ });
+ for (LargeAllocation* allocation : m_largeAllocations) {
+ if (allocation->isMarked())
+ result += allocation->cellSize();
+ }
+ return result;
+}
+
+size_t MarkedSpace::capacity()
+{
+ return m_capacity;
+}
+
+void MarkedSpace::addActiveWeakSet(WeakSet* weakSet)
+{
+ // We conservatively assume that the WeakSet should belong in the new set. In fact, some weak
+ // sets might contain new weak handles even though they are tied to old objects. This slightly
+ // increases the amount of scanning that an eden collection would have to do, but the effect
+ // ought to be small.
+ m_newActiveWeakSets.append(weakSet);
+}
+
+void MarkedSpace::didAddBlock(MarkedBlock::Handle* block)
+{
+ m_capacity += MarkedBlock::blockSize;
+ m_blocks.add(&block->block());
+}
+
+void MarkedSpace::didAllocateInBlock(MarkedBlock::Handle* block)
+{
+ block->assertFlipped();
+ m_blocksWithNewObjects.append(block);
+
+ if (block->weakSet().isOnList()) {
+ block->weakSet().remove();
+ m_newActiveWeakSets.append(&block->weakSet());
+ }
+}
+
} // namespace JSC
diff --git a/Source/JavaScriptCore/heap/MarkedSpace.h b/Source/JavaScriptCore/heap/MarkedSpace.h
index 56c7b52..6ddd12e 100644
--- a/Source/JavaScriptCore/heap/MarkedSpace.h
+++ b/Source/JavaScriptCore/heap/MarkedSpace.h
@@ -23,13 +23,16 @@
#define MarkedSpace_h
#include "IterationStatus.h"
+#include "LargeAllocation.h"
#include "MarkedAllocator.h"
#include "MarkedBlock.h"
#include "MarkedBlockSet.h"
#include <array>
+#include <wtf/Bag.h>
#include <wtf/HashSet.h>
#include <wtf/Noncopyable.h>
#include <wtf/RetainPtr.h>
+#include <wtf/SentinelLinkedList.h>
#include <wtf/Vector.h>
namespace JSC {
@@ -37,42 +40,76 @@
class Heap;
class HeapIterationScope;
class LLIntOffsetsExtractor;
+class WeakSet;
class MarkedSpace {
WTF_MAKE_NONCOPYABLE(MarkedSpace);
public:
- // [ 16 ... 768 ]
- static const size_t preciseStep = MarkedBlock::atomSize;
- static const size_t preciseCutoff = 768;
- static const size_t preciseCount = preciseCutoff / preciseStep;
+ // sizeStep is really a synonym for atomSize; it's no accident that they are the same.
+ static const size_t sizeStep = MarkedBlock::atomSize;
+
+ // Sizes up to this amount get a size class for each size step.
+ static const size_t preciseCutoff = 80;
+
+ // The amount of available payload in a block is the block's size minus the header. But the
+ // header size might not be atom size aligned, so we round down the result accordingly.
+ static const size_t blockPayload = (MarkedBlock::blockSize - sizeof(MarkedBlock)) & ~(MarkedBlock::atomSize - 1);
+
+ // The largest cell we're willing to allocate in a MarkedBlock the "normal way" (i.e. using size
+ // classes, rather than a large allocation) is half the size of the payload, rounded down. This
+ // ensures that we only use the size class approach if it means being able to pack two things
+ // into one block.
+ static const size_t largeCutoff = (blockPayload / 2) & ~(sizeStep - 1);
- // [ 1024 ... blockSize/2 ]
- static const size_t impreciseStart = 1024;
- static const size_t impreciseStep = 256;
- static const size_t impreciseCutoff = MarkedBlock::blockSize / 2;
- static const size_t impreciseCount = impreciseCutoff / impreciseStep;
-
+ static const size_t numSizeClasses = largeCutoff / sizeStep;
+
+ static size_t sizeClassToIndex(size_t size)
+ {
+ ASSERT(size);
+ return (size + sizeStep - 1) / sizeStep - 1;
+ }
+
+ static size_t indexToSizeClass(size_t index)
+ {
+ return (index + 1) * sizeStep;
+ }
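
The two mappings above are inverses on size-class boundaries, with sizeClassToIndex() rounding odd sizes up. A standalone check of the arithmetic, with sizeStep assumed to be 16 (i.e. MarkedBlock::atomSize):

    #include <cstddef>

    constexpr size_t step = 16;
    constexpr size_t toIndex(size_t size) { return (size + step - 1) / step - 1; }
    constexpr size_t toSize(size_t index) { return (index + 1) * step; }
    static_assert(toIndex(16) == 0 && toIndex(17) == 1, "rounds up to the next step");
    static_assert(toSize(toIndex(48)) == 48, "inverse on exact boundaries");
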
+
+ // Each Subspace corresponds to all of the blocks for all of the sizes for some "class" of
+ // objects. There are three classes: non-destructor JSCells, destructor JSCells, and auxiliary.
+ // MarkedSpace is set up to make it relatively easy to add new Subspaces.
struct Subspace {
- std::array<MarkedAllocator, preciseCount> preciseAllocators;
- std::array<MarkedAllocator, impreciseCount> impreciseAllocators;
- MarkedAllocator largeAllocator;
+ std::array<MarkedAllocator*, numSizeClasses> allocatorForSizeStep;
+
+ // Each MarkedAllocator is a size class.
+ Bag<MarkedAllocator> bagOfAllocators;
+
+ AllocatorAttributes attributes;
};
-
+
MarkedSpace(Heap*);
~MarkedSpace();
void lastChanceToFinalize();
- MarkedAllocator& allocatorFor(size_t);
- MarkedAllocator& destructorAllocatorFor(size_t);
- MarkedAllocator& auxiliaryAllocatorFor(size_t);
+ static size_t optimalSizeFor(size_t);
+
+ static MarkedAllocator* allocatorFor(Subspace&, size_t);
+
+ MarkedAllocator* allocatorFor(size_t);
+ MarkedAllocator* destructorAllocatorFor(size_t);
+ MarkedAllocator* auxiliaryAllocatorFor(size_t);
+
+ JS_EXPORT_PRIVATE void* allocate(Subspace&, size_t);
+ JS_EXPORT_PRIVATE void* tryAllocate(Subspace&, size_t);
+
void* allocateWithDestructor(size_t);
void* allocateWithoutDestructor(size_t);
void* allocateAuxiliary(size_t);
-
+ void* tryAllocateAuxiliary(size_t);
+
Subspace& subspaceForObjectsWithDestructor() { return m_destructorSpace; }
Subspace& subspaceForObjectsWithoutDestructor() { return m_normalSpace; }
Subspace& subspaceForAuxiliaryData() { return m_auxiliarySpace; }
-
+
void resetAllocators();
void visitWeakSets(HeapRootVisitor&);
@@ -86,6 +123,8 @@
void stopAllocating();
void resumeAllocating(); // If we just stopped allocation but we didn't do a collection, we need to resume allocation.
+
+ void prepareForMarking();
typedef HashSet<MarkedBlock*>::iterator BlockIterator;
@@ -94,42 +133,76 @@
template<typename Functor> void forEachBlock(const Functor&);
void shrink();
- void freeBlock(MarkedBlock*);
- void freeOrShrinkBlock(MarkedBlock*);
+ void freeBlock(MarkedBlock::Handle*);
+ void freeOrShrinkBlock(MarkedBlock::Handle*);
- void didAddBlock(MarkedBlock*);
- void didConsumeFreeList(MarkedBlock*);
- void didAllocateInBlock(MarkedBlock*);
+ void didAddBlock(MarkedBlock::Handle*);
+ void didConsumeFreeList(MarkedBlock::Handle*);
+ void didAllocateInBlock(MarkedBlock::Handle*);
- void clearMarks();
+ void flip();
void clearNewlyAllocated();
void sweep();
+ void sweepLargeAllocations();
void zombifySweep();
size_t objectCount();
size_t size();
size_t capacity();
bool isPagedOut(double deadline);
+
+ uint64_t version() const { return m_version; }
- const Vector<MarkedBlock*>& blocksWithNewObjects() const { return m_blocksWithNewObjects; }
+ const Vector<MarkedBlock::Handle*>& blocksWithNewObjects() const { return m_blocksWithNewObjects; }
+
+ const Vector<LargeAllocation*>& largeAllocations() const { return m_largeAllocations; }
+ unsigned largeAllocationsNurseryOffset() const { return m_largeAllocationsNurseryOffset; }
+ unsigned largeAllocationsOffsetForThisCollection() const { return m_largeAllocationsOffsetForThisCollection; }
+
+ // These are cached pointers and offsets for quickly searching the large allocations that are
+ // relevant to this collection.
+ LargeAllocation** largeAllocationsForThisCollectionBegin() const { return m_largeAllocationsForThisCollectionBegin; }
+ LargeAllocation** largeAllocationsForThisCollectionEnd() const { return m_largeAllocationsForThisCollectionEnd; }
+ unsigned largeAllocationsForThisCollectionSize() const { return m_largeAllocationsForThisCollectionSize; }
private:
friend class LLIntOffsetsExtractor;
friend class JIT;
+ friend class WeakSet;
+
+ JS_EXPORT_PRIVATE static std::array<size_t, numSizeClasses> s_sizeClassForSizeStep;
+
+ JS_EXPORT_PRIVATE void* allocateLarge(Subspace&, size_t);
+ JS_EXPORT_PRIVATE void* tryAllocateLarge(Subspace&, size_t);
+
+ static void initializeSizeClassForStepSize();
+
+ void initializeSubspace(Subspace&);
template<typename Functor> void forEachAllocator(const Functor&);
template<typename Functor> void forEachSubspace(const Functor&);
- MarkedAllocator& allocatorFor(Subspace&, size_t);
+
+ void addActiveWeakSet(WeakSet*);
Subspace m_destructorSpace;
Subspace m_normalSpace;
Subspace m_auxiliarySpace;
Heap* m_heap;
+ uint64_t m_version { 42 }; // This can start at any value, including random garbage values.
size_t m_capacity;
bool m_isIterating;
MarkedBlockSet m_blocks;
- Vector<MarkedBlock*> m_blocksWithNewObjects;
+ Vector<MarkedBlock::Handle*> m_blocksWithNewObjects;
+ Vector<LargeAllocation*> m_largeAllocations;
+ unsigned m_largeAllocationsNurseryOffset { 0 };
+ unsigned m_largeAllocationsOffsetForThisCollection { 0 };
+ unsigned m_largeAllocationsNurseryOffsetForSweep { 0 };
+ LargeAllocation** m_largeAllocationsForThisCollectionBegin { nullptr };
+ LargeAllocation** m_largeAllocationsForThisCollectionEnd { nullptr };
+ unsigned m_largeAllocationsForThisCollectionSize { 0 };
+ SentinelLinkedList<WeakSet, BasicRawSentinelNode<WeakSet>> m_activeWeakSets;
+ SentinelLinkedList<WeakSet, BasicRawSentinelNode<WeakSet>> m_newActiveWeakSets;
};
template<typename Functor> inline void MarkedSpace::forEachLiveCell(HeapIterationScope&, const Functor& functor)
@@ -137,8 +210,14 @@
ASSERT(isIterating());
BlockIterator end = m_blocks.set().end();
for (BlockIterator it = m_blocks.set().begin(); it != end; ++it) {
- if ((*it)->forEachLiveCell(functor) == IterationStatus::Done)
- break;
+ if ((*it)->handle().forEachLiveCell(functor) == IterationStatus::Done)
+ return;
+ }
+ for (LargeAllocation* allocation : m_largeAllocations) {
+ if (allocation->isLive()) {
+ if (functor(allocation->cell(), allocation->attributes().cellKind) == IterationStatus::Done)
+ return;
+ }
}
}
@@ -147,88 +226,81 @@
ASSERT(isIterating());
BlockIterator end = m_blocks.set().end();
for (BlockIterator it = m_blocks.set().begin(); it != end; ++it) {
- if ((*it)->forEachDeadCell(functor) == IterationStatus::Done)
- break;
+ if ((*it)->handle().forEachDeadCell(functor) == IterationStatus::Done)
+ return;
+ }
+ for (LargeAllocation* allocation : m_largeAllocations) {
+ if (!allocation->isLive()) {
+ if (functor(allocation->cell(), allocation->attributes().cellKind) == IterationStatus::Done)
+ return;
+ }
}
}
-inline MarkedAllocator& MarkedSpace::allocatorFor(size_t bytes)
+inline MarkedAllocator* MarkedSpace::allocatorFor(Subspace& space, size_t bytes)
+{
+ ASSERT(bytes);
+ if (bytes <= largeCutoff)
+ return space.allocatorForSizeStep[sizeClassToIndex(bytes)];
+ return nullptr;
+}
+
+inline MarkedAllocator* MarkedSpace::allocatorFor(size_t bytes)
{
return allocatorFor(m_normalSpace, bytes);
}
-inline MarkedAllocator& MarkedSpace::destructorAllocatorFor(size_t bytes)
+inline MarkedAllocator* MarkedSpace::destructorAllocatorFor(size_t bytes)
{
return allocatorFor(m_destructorSpace, bytes);
}
-inline MarkedAllocator& MarkedSpace::auxiliaryAllocatorFor(size_t bytes)
+inline MarkedAllocator* MarkedSpace::auxiliaryAllocatorFor(size_t bytes)
{
return allocatorFor(m_auxiliarySpace, bytes);
}
inline void* MarkedSpace::allocateWithoutDestructor(size_t bytes)
{
- return allocatorFor(bytes).allocate(bytes);
+ return allocate(m_normalSpace, bytes);
}
inline void* MarkedSpace::allocateWithDestructor(size_t bytes)
{
- return destructorAllocatorFor(bytes).allocate(bytes);
+ return allocate(m_destructorSpace, bytes);
}
inline void* MarkedSpace::allocateAuxiliary(size_t bytes)
{
- return auxiliaryAllocatorFor(bytes).allocate(bytes);
+ return allocate(m_auxiliarySpace, bytes);
+}
+
+inline void* MarkedSpace::tryAllocateAuxiliary(size_t bytes)
+{
+ return tryAllocate(m_auxiliarySpace, bytes);
}
template <typename Functor> inline void MarkedSpace::forEachBlock(const Functor& functor)
{
- forEachSubspace(
- [&] (Subspace& subspace, AllocatorAttributes) -> IterationStatus {
- for (size_t i = 0; i < preciseCount; ++i)
- subspace.preciseAllocators[i].forEachBlock(functor);
- for (size_t i = 0; i < impreciseCount; ++i)
- subspace.impreciseAllocators[i].forEachBlock(functor);
- subspace.largeAllocator.forEachBlock(functor);
+ forEachAllocator(
+ [&] (MarkedAllocator& allocator) -> IterationStatus {
+ allocator.forEachBlock(functor);
return IterationStatus::Continue;
});
}
-inline void MarkedSpace::didAddBlock(MarkedBlock* block)
+template <typename Functor>
+void MarkedSpace::forEachAllocator(const Functor& functor)
{
- m_capacity += block->capacity();
- m_blocks.add(block);
-}
-
-inline void MarkedSpace::didAllocateInBlock(MarkedBlock* block)
-{
- m_blocksWithNewObjects.append(block);
-}
-
-inline size_t MarkedSpace::objectCount()
-{
- size_t result = 0;
- forEachBlock(
- [&] (MarkedBlock* block) {
- result += block->markCount();
+ forEachSubspace(
+ [&] (Subspace& subspace, AllocatorAttributes) -> IterationStatus {
+ for (MarkedAllocator* allocator : subspace.bagOfAllocators) {
+ if (functor(*allocator) == IterationStatus::Done)
+ return IterationStatus::Done;
+ }
+
+ return IterationStatus::Continue;
});
- return result;
-}
-
-inline size_t MarkedSpace::size()
-{
- size_t result = 0;
- forEachBlock(
- [&] (MarkedBlock* block) {
- result += block->markCount() * block->cellSize();
- });
- return result;
-}
-
-inline size_t MarkedSpace::capacity()
-{
- return m_capacity;
}
template<typename Functor>
@@ -251,14 +323,14 @@
func(m_auxiliarySpace, attributes);
}
-inline MarkedAllocator& MarkedSpace::allocatorFor(Subspace& space, size_t bytes)
+ALWAYS_INLINE size_t MarkedSpace::optimalSizeFor(size_t bytes)
{
ASSERT(bytes);
if (bytes <= preciseCutoff)
- return space.preciseAllocators[(bytes - 1) / preciseStep];
- if (bytes <= impreciseCutoff)
- return space.impreciseAllocators[(bytes - 1) / impreciseStep];
- return space.largeAllocator;
+ return WTF::roundUpToMultipleOf<sizeStep>(bytes);
+ if (bytes <= largeCutoff)
+ return s_sizeClassForSizeStep[sizeClassToIndex(bytes)];
+ return bytes;
}
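
optimalSizeFor() reports the largest size that costs no more than the requested one, which lets a caller grow its request to fill out the size class it will land in anyway. A hypothetical caller (names and layout illustrative, not the real butterfly code):

    size_t headerBytes = 16;                 // assumed header preceding the payload
    size_t desiredLength = 100;              // elements of 8 bytes each
    size_t requestedBytes = headerBytes + desiredLength * 8;
    size_t allocBytes = MarkedSpace::optimalSizeFor(requestedBytes);
    size_t grownLength = (allocBytes - headerBytes) / 8; // slop becomes capacity
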
} // namespace JSC
diff --git a/Source/JavaScriptCore/heap/SlotVisitor.cpp b/Source/JavaScriptCore/heap/SlotVisitor.cpp
index 9959af7..45d886e 100644
--- a/Source/JavaScriptCore/heap/SlotVisitor.cpp
+++ b/Source/JavaScriptCore/heap/SlotVisitor.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2015-2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -31,14 +31,16 @@
#include "CopiedBlockInlines.h"
#include "CopiedSpace.h"
#include "CopiedSpaceInlines.h"
+#include "HeapCellInlines.h"
#include "HeapProfiler.h"
#include "HeapSnapshotBuilder.h"
#include "JSArray.h"
#include "JSDestructibleObject.h"
-#include "VM.h"
#include "JSObject.h"
#include "JSString.h"
#include "JSCInlines.h"
+#include "SuperSampler.h"
+#include "VM.h"
#include <wtf/Lock.h>
namespace JSC {
@@ -79,6 +81,7 @@
, m_bytesCopied(0)
, m_visitCount(0)
, m_isInParallelMode(false)
+ , m_version(42)
, m_heap(heap)
#if !ASSERT_DISABLED
, m_isCheckingForDefaultMarkViolation(false)
@@ -96,9 +99,13 @@
{
if (heap()->operationInProgress() == FullCollection)
ASSERT(m_opaqueRoots.isEmpty()); // Should have merged by now.
+ else
+ reset();
if (HeapProfiler* heapProfiler = vm().heapProfiler())
m_heapSnapshotBuilder = heapProfiler->activeSnapshotBuilder();
+
+ m_version = heap()->objectSpace().version();
}
void SlotVisitor::reset()
@@ -108,7 +115,6 @@
m_visitCount = 0;
m_heapSnapshotBuilder = nullptr;
ASSERT(!m_currentCell);
- ASSERT(m_stack.isEmpty());
}
void SlotVisitor::clearMarkStack()
@@ -118,10 +124,41 @@
void SlotVisitor::append(ConservativeRoots& conservativeRoots)
{
- JSCell** roots = conservativeRoots.roots();
+ HeapCell** roots = conservativeRoots.roots();
size_t size = conservativeRoots.size();
for (size_t i = 0; i < size; ++i)
- append(roots[i]);
+ appendJSCellOrAuxiliary(roots[i]);
+}
+
+void SlotVisitor::appendJSCellOrAuxiliary(HeapCell* heapCell)
+{
+ if (!heapCell)
+ return;
+
+ ASSERT(!m_isCheckingForDefaultMarkViolation);
+
+ if (Heap::testAndSetMarked(m_version, heapCell))
+ return;
+
+ switch (heapCell->cellKind()) {
+ case HeapCell::JSCell: {
+ JSCell* jsCell = static_cast<JSCell*>(heapCell);
+
+ if (!jsCell->structure()) {
+ ASSERT_NOT_REACHED();
+ return;
+ }
+
+ jsCell->setCellState(CellState::NewGrey);
+
+ appendToMarkStack(jsCell);
+ return;
+ }
+
+ case HeapCell::Auxiliary: {
+ noteLiveAuxiliaryCell(heapCell);
+ return;
+ } }
}
void SlotVisitor::append(JSValue value)
@@ -145,6 +182,8 @@
void SlotVisitor::setMarkedAndAppendToMarkStack(JSCell* cell)
{
+ SuperSamplerScope superSamplerScope(false);
+
ASSERT(!m_isCheckingForDefaultMarkViolation);
if (!cell)
return;
@@ -152,33 +191,86 @@
#if ENABLE(GC_VALIDATION)
validate(cell);
#endif
+
+ if (cell->isLargeAllocation())
+ setMarkedAndAppendToMarkStack(cell->largeAllocation(), cell);
+ else
+ setMarkedAndAppendToMarkStack(cell->markedBlock(), cell);
+}
- if (Heap::testAndSetMarked(cell) || !cell->structure()) {
- ASSERT(cell->structure());
+template<typename ContainerType>
+ALWAYS_INLINE void SlotVisitor::setMarkedAndAppendToMarkStack(ContainerType& container, JSCell* cell)
+{
+ container.flipIfNecessaryConcurrently(m_version);
+
+ if (container.testAndSetMarked(cell))
return;
- }
-
+
+ ASSERT(cell->structure());
+
// Indicate that the object is grey and that:
// In case of concurrent GC: it's the first time it is grey in this GC cycle.
// In case of eden collection: it's a new object that became grey rather than an old remembered object.
cell->setCellState(CellState::NewGrey);
-
- appendToMarkStack(cell);
+
+ appendToMarkStack(container, cell);
}
void SlotVisitor::appendToMarkStack(JSCell* cell)
{
+ if (cell->isLargeAllocation())
+ appendToMarkStack(cell->largeAllocation(), cell);
+ else
+ appendToMarkStack(cell->markedBlock(), cell);
+}
+
+template<typename ContainerType>
+ALWAYS_INLINE void SlotVisitor::appendToMarkStack(ContainerType& container, JSCell* cell)
+{
ASSERT(Heap::isMarked(cell));
ASSERT(!cell->isZapped());
-
+
+ container.noteMarked();
+
+ // FIXME: These "just work" because the GC resets these fields before doing anything else. But
+ // that won't be the case when we do concurrent GC.
m_visitCount++;
- m_bytesVisited += MarkedBlock::blockFor(cell)->cellSize();
+ m_bytesVisited += container.cellSize();
+
m_stack.append(cell);
if (UNLIKELY(m_heapSnapshotBuilder))
m_heapSnapshotBuilder->appendNode(cell);
}
+void SlotVisitor::markAuxiliary(const void* base)
+{
+ HeapCell* cell = bitwise_cast<HeapCell*>(base);
+
+ if (Heap::testAndSetMarked(m_version, cell)) {
+ RELEASE_ASSERT(Heap::isMarked(cell));
+ return;
+ }
+
+ noteLiveAuxiliaryCell(cell);
+}
+
+void SlotVisitor::noteLiveAuxiliaryCell(HeapCell* cell)
+{
+ // We get here once per GC under these circumstances:
+ //
+ // Eden collection: if the cell was allocated since the last collection and is live somehow.
+ //
+ // Full collection: if the cell is live somehow.
+
+ CellContainer container = cell->cellContainer();
+
+ container.noteMarked();
+
+ m_visitCount++;
+ m_bytesVisited += container.cellSize();
+}
+
class SetCurrentCellScope {
public:
SetCurrentCellScope(SlotVisitor& visitor, const JSCell* cell)
@@ -202,9 +294,9 @@
ALWAYS_INLINE void SlotVisitor::visitChildren(const JSCell* cell)
{
ASSERT(Heap::isMarked(cell));
-
+
SetCurrentCellScope currentCellScope(*this, cell);
-
+
m_currentObjectCellStateBeforeVisiting = cell->cellState();
cell->setCellState(CellState::OldBlack);
diff --git a/Source/JavaScriptCore/heap/SlotVisitor.h b/Source/JavaScriptCore/heap/SlotVisitor.h
index 1515463..cb0c1ed 100644
--- a/Source/JavaScriptCore/heap/SlotVisitor.h
+++ b/Source/JavaScriptCore/heap/SlotVisitor.h
@@ -37,8 +37,10 @@
class ConservativeRoots;
class GCThreadSharedData;
class Heap;
+class HeapCell;
class HeapSnapshotBuilder;
template<typename T> class JITWriteBarrier;
+class MarkedBlock;
class UnconditionalFinalizer;
template<typename T> class Weak;
class WeakReferenceHarvester;
@@ -104,6 +106,10 @@
void harvestWeakReferences();
void finalizeUnconditionalFinalizers();
+
+ // This informs the GC about an auxiliary allocation of some size that we are keeping alive. If
+ // you don't do this, then the space will be freed at the end of GC.
+ void markAuxiliary(const void* base);
void copyLater(JSCell*, CopyToken, void*, size_t);
@@ -123,11 +129,21 @@
friend class ParallelModeEnabler;
JS_EXPORT_PRIVATE void append(JSValue); // This is private to encourage clients to use WriteBarrier<T>.
+ void appendJSCellOrAuxiliary(HeapCell*);
void appendHidden(JSValue);
JS_EXPORT_PRIVATE void setMarkedAndAppendToMarkStack(JSCell*);
+
+ template<typename ContainerType>
+ void setMarkedAndAppendToMarkStack(ContainerType&, JSCell*);
+
void appendToMarkStack(JSCell*);
+ template<typename ContainerType>
+ void appendToMarkStack(ContainerType&, JSCell*);
+
+ void noteLiveAuxiliaryCell(HeapCell*);
+
JS_EXPORT_PRIVATE void mergeOpaqueRoots();
void mergeOpaqueRootsIfNecessary();
void mergeOpaqueRootsIfProfitable();
@@ -144,6 +160,8 @@
size_t m_visitCount;
bool m_isInParallelMode;
+ uint64_t m_version;
+
Heap& m_heap;
HeapSnapshotBuilder* m_heapSnapshotBuilder { nullptr };
diff --git a/Source/JavaScriptCore/heap/WeakBlock.cpp b/Source/JavaScriptCore/heap/WeakBlock.cpp
index ddbbc8c..19e99ce 100644
--- a/Source/JavaScriptCore/heap/WeakBlock.cpp
+++ b/Source/JavaScriptCore/heap/WeakBlock.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -26,6 +26,7 @@
#include "config.h"
#include "WeakBlock.h"
+#include "CellContainerInlines.h"
#include "Heap.h"
#include "HeapRootVisitor.h"
#include "JSCInlines.h"
@@ -34,10 +35,10 @@
namespace JSC {
-WeakBlock* WeakBlock::create(Heap& heap, MarkedBlock& markedBlock)
+WeakBlock* WeakBlock::create(Heap& heap, CellContainer container)
{
heap.didAllocateBlock(WeakBlock::blockSize);
- return new (NotNull, fastMalloc(blockSize)) WeakBlock(markedBlock);
+ return new (NotNull, fastMalloc(blockSize)) WeakBlock(container);
}
void WeakBlock::destroy(Heap& heap, WeakBlock* block)
@@ -47,9 +48,9 @@
heap.didFreeBlock(WeakBlock::blockSize);
}
-WeakBlock::WeakBlock(MarkedBlock& markedBlock)
+WeakBlock::WeakBlock(CellContainer container)
: DoublyLinkedListNode<WeakBlock>()
- , m_markedBlock(&markedBlock)
+ , m_container(container)
{
for (size_t i = 0; i < weakImplCount(); ++i) {
WeakImpl* weakImpl = &weakImpls()[i];
@@ -101,11 +102,13 @@
if (isEmpty())
return;
- // If this WeakBlock doesn't belong to a MarkedBlock, we won't even be here.
- ASSERT(m_markedBlock);
+ // If this WeakBlock doesn't belong to a CellContainer, we won't even be here.
+ ASSERT(m_container);
+
+ m_container.flipIfNecessary();
// We only visit after marking.
- ASSERT(m_markedBlock->isMarkedOrRetired());
+ ASSERT(m_container.isMarked());
SlotVisitor& visitor = heapRootVisitor.visitor();
@@ -119,9 +122,9 @@
continue;
const JSValue& jsValue = weakImpl->jsValue();
- if (m_markedBlock->isMarkedOrNewlyAllocated(jsValue.asCell()))
+ if (m_container.isMarkedOrNewlyAllocated(jsValue.asCell()))
continue;
-
+
if (!weakHandleOwner->isReachableFromOpaqueRoots(Handle<Unknown>::wrapSlot(&const_cast<JSValue&>(jsValue)), weakImpl->context(), visitor))
continue;
@@ -135,18 +138,20 @@
if (isEmpty())
return;
- // If this WeakBlock doesn't belong to a MarkedBlock, we won't even be here.
- ASSERT(m_markedBlock);
+ // If this WeakBlock doesn't belong to a CellContainer, we won't even be here.
+ ASSERT(m_container);
+
+ m_container.flipIfNecessary();
// We only reap after marking.
- ASSERT(m_markedBlock->isMarkedOrRetired());
+ ASSERT(m_container.isMarked());
for (size_t i = 0; i < weakImplCount(); ++i) {
WeakImpl* weakImpl = &weakImpls()[i];
if (weakImpl->state() > WeakImpl::Dead)
continue;
- if (m_markedBlock->isMarkedOrNewlyAllocated(weakImpl->jsValue().asCell())) {
+ if (m_container.isMarkedOrNewlyAllocated(weakImpl->jsValue().asCell())) {
ASSERT(weakImpl->state() == WeakImpl::Live);
continue;
}
diff --git a/Source/JavaScriptCore/heap/WeakBlock.h b/Source/JavaScriptCore/heap/WeakBlock.h
index f5fbfdc..1878f18 100644
--- a/Source/JavaScriptCore/heap/WeakBlock.h
+++ b/Source/JavaScriptCore/heap/WeakBlock.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -26,6 +26,7 @@
#ifndef WeakBlock_h
#define WeakBlock_h
+#include "CellContainer.h"
#include "WeakImpl.h"
#include <wtf/DoublyLinkedList.h>
#include <wtf/StdLibExtras.h>
@@ -34,12 +35,11 @@
class Heap;
class HeapRootVisitor;
-class MarkedBlock;
class WeakBlock : public DoublyLinkedListNode<WeakBlock> {
public:
friend class WTF::DoublyLinkedListNode<WeakBlock>;
- static const size_t blockSize = 1 * KB; // 1/16 of MarkedBlock size
+ static const size_t blockSize = 256; // 1/16 of MarkedBlock size
struct FreeCell {
FreeCell* next;
@@ -53,7 +53,7 @@
FreeCell* freeList { nullptr };
};
- static WeakBlock* create(Heap&, MarkedBlock&);
+ static WeakBlock* create(Heap&, CellContainer);
static void destroy(Heap&, WeakBlock*);
static WeakImpl* asWeakImpl(FreeCell*);
@@ -68,18 +68,18 @@
void reap();
void lastChanceToFinalize();
- void disconnectMarkedBlock() { m_markedBlock = nullptr; }
+ void disconnectContainer() { m_container = CellContainer(); }
private:
static FreeCell* asFreeCell(WeakImpl*);
- explicit WeakBlock(MarkedBlock&);
+ explicit WeakBlock(CellContainer);
void finalize(WeakImpl*);
WeakImpl* weakImpls();
size_t weakImplCount();
void addToFreeList(FreeCell**, WeakImpl*);
- MarkedBlock* m_markedBlock;
+ CellContainer m_container;
WeakBlock* m_prev;
WeakBlock* m_next;
SweepResult m_sweepResult;
diff --git a/Source/JavaScriptCore/heap/WeakSet.cpp b/Source/JavaScriptCore/heap/WeakSet.cpp
index c845192..93baae3 100644
--- a/Source/JavaScriptCore/heap/WeakSet.cpp
+++ b/Source/JavaScriptCore/heap/WeakSet.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -34,6 +34,9 @@
WeakSet::~WeakSet()
{
+ if (isOnList())
+ remove();
+
Heap& heap = *this->heap();
WeakBlock* next = 0;
for (WeakBlock* block = m_blocks.head(); block; block = next) {
@@ -53,10 +56,10 @@
if (block->isLogicallyEmptyButNotFree()) {
// If this WeakBlock is logically empty, but still has Weaks pointing into it,
// we can't destroy it just yet. Detach it from the WeakSet and hand ownership
- // to the Heap so we don't pin down the entire 64kB MarkedBlock.
+ // to the Heap so we don't pin down the entire MarkedBlock or LargeAllocation.
m_blocks.remove(block);
heap()->addLogicallyEmptyWeakBlock(block);
- block->disconnectMarkedBlock();
+ block->disconnectContainer();
}
block = nextBlock;
}
@@ -64,6 +67,22 @@
resetAllocator();
}
+void WeakSet::shrink()
+{
+ WeakBlock* next;
+ for (WeakBlock* block = m_blocks.head(); block; block = next) {
+ next = block->next();
+
+ if (block->isEmpty())
+ removeAllocator(block);
+ }
+
+ resetAllocator();
+
+ if (m_blocks.isEmpty() && isOnList())
+ remove();
+}
+
WeakBlock::FreeCell* WeakSet::findAllocator()
{
if (WeakBlock::FreeCell* allocator = tryFindAllocator())
@@ -88,7 +107,10 @@
WeakBlock::FreeCell* WeakSet::addAllocator()
{
- WeakBlock* block = WeakBlock::create(*heap(), m_markedBlock);
+ if (m_blocks.isEmpty() && !isOnList())
+ heap()->objectSpace().addActiveWeakSet(this);
+
+ WeakBlock* block = WeakBlock::create(*heap(), m_container);
heap()->didAllocate(WeakBlock::blockSize);
m_blocks.append(block);
WeakBlock::SweepResult sweepResult = block->takeSweepResult();
diff --git a/Source/JavaScriptCore/heap/WeakSet.h b/Source/JavaScriptCore/heap/WeakSet.h
index dbde510..82efa7f 100644
--- a/Source/JavaScriptCore/heap/WeakSet.h
+++ b/Source/JavaScriptCore/heap/WeakSet.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -26,31 +26,35 @@
#ifndef WeakSet_h
#define WeakSet_h
+#include "CellContainer.h"
#include "WeakBlock.h"
+#include <wtf/SentinelLinkedList.h>
namespace JSC {
class Heap;
-class MarkedBlock;
class WeakImpl;
-class WeakSet {
+class WeakSet : public BasicRawSentinelNode<WeakSet> {
friend class LLIntOffsetsExtractor;
public:
static WeakImpl* allocate(JSValue, WeakHandleOwner* = 0, void* context = 0);
static void deallocate(WeakImpl*);
- WeakSet(VM*, MarkedBlock&);
+ WeakSet(VM*, CellContainer);
~WeakSet();
void lastChanceToFinalize();
+
+ CellContainer container() const { return m_container; }
+ void setContainer(CellContainer container) { m_container = container; }
Heap* heap() const;
VM* vm() const;
bool isEmpty() const;
- void visit(HeapRootVisitor&);
+ unsigned visit(HeapRootVisitor&);
void reap();
void sweep();
void shrink();
@@ -66,14 +70,14 @@
WeakBlock* m_nextAllocator;
DoublyLinkedList<WeakBlock> m_blocks;
VM* m_vm;
- MarkedBlock& m_markedBlock;
+ CellContainer m_container;
};
-inline WeakSet::WeakSet(VM* vm, MarkedBlock& markedBlock)
+inline WeakSet::WeakSet(VM* vm, CellContainer container)
: m_allocator(0)
, m_nextAllocator(0)
, m_vm(vm)
- , m_markedBlock(markedBlock)
+ , m_container(container)
{
}
@@ -103,10 +107,14 @@
block->lastChanceToFinalize();
}
-inline void WeakSet::visit(HeapRootVisitor& visitor)
+inline unsigned WeakSet::visit(HeapRootVisitor& visitor)
{
- for (WeakBlock* block = m_blocks.head(); block; block = block->next())
+ unsigned count = 0;
+ for (WeakBlock* block = m_blocks.head(); block; block = block->next()) {
+ count++;
block->visit(visitor);
+ }
+ return count;
}
inline void WeakSet::reap()
@@ -115,19 +123,6 @@
block->reap();
}
-inline void WeakSet::shrink()
-{
- WeakBlock* next;
- for (WeakBlock* block = m_blocks.head(); block; block = next) {
- next = block->next();
-
- if (block->isEmpty())
- removeAllocator(block);
- }
-
- resetAllocator();
-}
-
inline void WeakSet::resetAllocator()
{
m_allocator = 0;
diff --git a/Source/JavaScriptCore/heap/WeakSetInlines.h b/Source/JavaScriptCore/heap/WeakSetInlines.h
index f239224..d478aec 100644
--- a/Source/JavaScriptCore/heap/WeakSetInlines.h
+++ b/Source/JavaScriptCore/heap/WeakSetInlines.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -26,13 +26,14 @@
#ifndef WeakSetInlines_h
#define WeakSetInlines_h
+#include "CellContainerInlines.h"
#include "MarkedBlock.h"
namespace JSC {
inline WeakImpl* WeakSet::allocate(JSValue jsValue, WeakHandleOwner* weakHandleOwner, void* context)
{
- WeakSet& weakSet = MarkedBlock::blockFor(jsValue.asCell())->weakSet();
+ WeakSet& weakSet = jsValue.asCell()->cellContainer().weakSet();
WeakBlock::FreeCell* allocator = weakSet.m_allocator;
if (UNLIKELY(!allocator))
allocator = weakSet.findAllocator();
diff --git a/Source/JavaScriptCore/inspector/InjectedScriptManager.cpp b/Source/JavaScriptCore/inspector/InjectedScriptManager.cpp
index d0d1263..940cd0b 100644
--- a/Source/JavaScriptCore/inspector/InjectedScriptManager.cpp
+++ b/Source/JavaScriptCore/inspector/InjectedScriptManager.cpp
@@ -35,6 +35,7 @@
#include "InjectedScriptHost.h"
#include "InjectedScriptSource.h"
#include "InspectorValues.h"
+#include "JSCInlines.h"
#include "JSInjectedScriptHost.h"
#include "JSLock.h"
#include "ScriptObject.h"
diff --git a/Source/JavaScriptCore/inspector/JSGlobalObjectInspectorController.cpp b/Source/JavaScriptCore/inspector/JSGlobalObjectInspectorController.cpp
index 53c603f..ef543ed 100644
--- a/Source/JavaScriptCore/inspector/JSGlobalObjectInspectorController.cpp
+++ b/Source/JavaScriptCore/inspector/JSGlobalObjectInspectorController.cpp
@@ -38,6 +38,7 @@
#include "InspectorFrontendRouter.h"
#include "InspectorHeapAgent.h"
#include "InspectorScriptProfilerAgent.h"
+#include "JSCInlines.h"
#include "JSGlobalObject.h"
#include "JSGlobalObjectConsoleAgent.h"
#include "JSGlobalObjectConsoleClient.h"
diff --git a/Source/JavaScriptCore/inspector/JSJavaScriptCallFrame.cpp b/Source/JavaScriptCore/inspector/JSJavaScriptCallFrame.cpp
index 504af44..86fe9bb 100644
--- a/Source/JavaScriptCore/inspector/JSJavaScriptCallFrame.cpp
+++ b/Source/JavaScriptCore/inspector/JSJavaScriptCallFrame.cpp
@@ -29,11 +29,9 @@
#include "DebuggerScope.h"
#include "Error.h"
#include "IdentifierInlines.h"
-#include "JSCJSValue.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSJavaScriptCallFramePrototype.h"
#include "ObjectConstructor.h"
-#include "StructureInlines.h"
using namespace JSC;
diff --git a/Source/JavaScriptCore/inspector/ScriptDebugServer.cpp b/Source/JavaScriptCore/inspector/ScriptDebugServer.cpp
index 4c40737..80261e8 100644
--- a/Source/JavaScriptCore/inspector/ScriptDebugServer.cpp
+++ b/Source/JavaScriptCore/inspector/ScriptDebugServer.cpp
@@ -34,6 +34,7 @@
#include "DebuggerCallFrame.h"
#include "DebuggerScope.h"
#include "Exception.h"
+#include "JSCInlines.h"
#include "JSJavaScriptCallFrame.h"
#include "JSLock.h"
#include "JavaScriptCallFrame.h"
diff --git a/Source/JavaScriptCore/inspector/agents/InspectorDebuggerAgent.cpp b/Source/JavaScriptCore/inspector/agents/InspectorDebuggerAgent.cpp
index 2c9a8d2..2812103 100644
--- a/Source/JavaScriptCore/inspector/agents/InspectorDebuggerAgent.cpp
+++ b/Source/JavaScriptCore/inspector/agents/InspectorDebuggerAgent.cpp
@@ -35,6 +35,7 @@
#include "InjectedScriptManager.h"
#include "InspectorFrontendRouter.h"
#include "InspectorValues.h"
+#include "JSCInlines.h"
#include "RegularExpression.h"
#include "ScriptDebugServer.h"
#include "ScriptObject.h"
diff --git a/Source/JavaScriptCore/interpreter/CachedCall.h b/Source/JavaScriptCore/interpreter/CachedCall.h
index e5b69dc..471985a 100644
--- a/Source/JavaScriptCore/interpreter/CachedCall.h
+++ b/Source/JavaScriptCore/interpreter/CachedCall.h
@@ -42,7 +42,8 @@
CachedCall(CallFrame* callFrame, JSFunction* function, int argumentCount)
: m_valid(false)
, m_interpreter(callFrame->interpreter())
- , m_entryScope(callFrame->vm(), function->scope()->globalObject())
+ , m_vm(callFrame->vm())
+ , m_entryScope(m_vm, function->scope()->globalObject(m_vm))
{
VM& vm = m_entryScope.vm();
auto scope = DECLARE_THROW_SCOPE(vm);
@@ -67,6 +68,7 @@
private:
bool m_valid;
Interpreter* m_interpreter;
+ VM& m_vm;
VMEntryScope m_entryScope;
ProtoCallFrame m_protoCallFrame;
Vector<JSValue> m_arguments;
diff --git a/Source/JavaScriptCore/interpreter/Interpreter.cpp b/Source/JavaScriptCore/interpreter/Interpreter.cpp
index 4ba7b7b..7394b92 100644
--- a/Source/JavaScriptCore/interpreter/Interpreter.cpp
+++ b/Source/JavaScriptCore/interpreter/Interpreter.cpp
@@ -51,6 +51,7 @@
#include "JSString.h"
#include "JSWithScope.h"
#include "LLIntCLoop.h"
+#include "LLIntData.h"
#include "LLIntThunks.h"
#include "LiteralParser.h"
#include "ObjectPrototype.h"
@@ -85,46 +86,6 @@
namespace JSC {
-intptr_t StackFrame::sourceID() const
-{
- if (!codeBlock)
- return noSourceID;
- return codeBlock->ownerScriptExecutable()->sourceID();
-}
-
-String StackFrame::sourceURL() const
-{
- if (!codeBlock)
- return ASCIILiteral("[native code]");
-
- String sourceURL = codeBlock->ownerScriptExecutable()->sourceURL();
- if (!sourceURL.isNull())
- return sourceURL;
- return emptyString();
-}
-
-String StackFrame::functionName(VM& vm) const
-{
- if (codeBlock) {
- switch (codeBlock->codeType()) {
- case EvalCode:
- return ASCIILiteral("eval code");
- case ModuleCode:
- return ASCIILiteral("module code");
- case FunctionCode:
- break;
- case GlobalCode:
- return ASCIILiteral("global code");
- default:
- ASSERT_NOT_REACHED();
- }
- }
- String name;
- if (callee)
- name = getCalculatedDisplayName(vm, callee.get()).impl();
- return name.isNull() ? emptyString() : name;
-}
-
JSValue eval(CallFrame* callFrame)
{
VM& vm = callFrame->vm();
@@ -274,6 +235,7 @@
return;
JSCell* cell = arguments.asCell();
+
switch (cell->type()) {
case DirectArgumentsType:
jsCast<DirectArguments*>(cell)->copyToArguments(callFrame, firstElementDest, offset, length);
@@ -482,48 +444,6 @@
#endif
}
-void StackFrame::computeLineAndColumn(unsigned& line, unsigned& column) const
-{
- if (!codeBlock) {
- line = 0;
- column = 0;
- return;
- }
-
- int divot = 0;
- int unusedStartOffset = 0;
- int unusedEndOffset = 0;
- codeBlock->expressionRangeForBytecodeOffset(bytecodeOffset, divot, unusedStartOffset, unusedEndOffset, line, column);
-
- ScriptExecutable* executable = codeBlock->ownerScriptExecutable();
- if (executable->hasOverrideLineNumber())
- line = executable->overrideLineNumber();
-}
-
-String StackFrame::toString(VM& vm) const
-{
- StringBuilder traceBuild;
- String functionName = this->functionName(vm);
- String sourceURL = this->sourceURL();
- traceBuild.append(functionName);
- if (!sourceURL.isEmpty()) {
- if (!functionName.isEmpty())
- traceBuild.append('@');
- traceBuild.append(sourceURL);
- if (codeBlock) {
- unsigned line;
- unsigned column;
- computeLineAndColumn(line, column);
-
- traceBuild.append(':');
- traceBuild.appendNumber(line);
- traceBuild.append(':');
- traceBuild.appendNumber(column);
- }
- }
- return traceBuild.toString().impl();
-}
-
static inline bool isWebAssemblyExecutable(ExecutableBase* executable)
{
#if !ENABLE(WEBASSEMBLY)
diff --git a/Source/JavaScriptCore/interpreter/Interpreter.h b/Source/JavaScriptCore/interpreter/Interpreter.h
index 3df0067..89331e9 100644
--- a/Source/JavaScriptCore/interpreter/Interpreter.h
+++ b/Source/JavaScriptCore/interpreter/Interpreter.h
@@ -37,6 +37,7 @@
#include "Opcode.h"
#include "SourceProvider.h"
#include "StackAlignment.h"
+#include "StackFrame.h"
#include <wtf/HashMap.h>
#include <wtf/text/StringBuilder.h>
@@ -67,7 +68,7 @@
struct ProtoCallFrame;
struct UnlinkedInstruction;
- enum UnwindStart { UnwindFromCurrentFrame, UnwindFromCallerFrame };
+ enum UnwindStart : uint8_t { UnwindFromCurrentFrame, UnwindFromCallerFrame };
enum DebugHookID {
WillExecuteProgram,
@@ -86,20 +87,6 @@
StackFrameNativeCode
};
- struct StackFrame {
- Strong<JSObject> callee;
- Strong<CodeBlock> codeBlock;
- unsigned bytecodeOffset;
-
- bool isNative() const { return !codeBlock; }
-
- void computeLineAndColumn(unsigned& line, unsigned& column) const;
- String functionName(VM&) const;
- intptr_t sourceID() const;
- String sourceURL() const;
- String toString(VM&) const;
- };
-
class SuspendExceptionScope {
public:
SuspendExceptionScope(VM* vm)
diff --git a/Source/JavaScriptCore/jit/AssemblyHelpers.h b/Source/JavaScriptCore/jit/AssemblyHelpers.h
index a3a0d21..4590a73 100644
--- a/Source/JavaScriptCore/jit/AssemblyHelpers.h
+++ b/Source/JavaScriptCore/jit/AssemblyHelpers.h
@@ -1405,33 +1405,74 @@
void emitRandomThunk(JSGlobalObject*, GPRReg scratch0, GPRReg scratch1, GPRReg scratch2, FPRReg result);
void emitRandomThunk(GPRReg scratch0, GPRReg scratch1, GPRReg scratch2, GPRReg scratch3, FPRReg result);
#endif
-
- void emitAllocate(GPRReg resultGPR, GPRReg allocatorGPR, GPRReg scratchGPR, JumpList& slowPath)
+
+ // Call this if you know that the value held in allocatorGPR is non-null. This DOES NOT
+ // mean that allocator is non-null; allocator can be null as a signal that we don't know
+ // what the value of allocatorGPR is at compile time.
+ void emitAllocateWithNonNullAllocator(GPRReg resultGPR, MarkedAllocator* allocator, GPRReg allocatorGPR, GPRReg scratchGPR, JumpList& slowPath)
{
- if (Options::forceGCSlowPaths())
+ // NOTE: This is carefully written so that we can call it while we disallow scratch
+ // register usage.
+
+ if (Options::forceGCSlowPaths()) {
slowPath.append(jump());
- else {
- loadPtr(Address(allocatorGPR, MarkedAllocator::offsetOfFreeListHead()), resultGPR);
- slowPath.append(branchTestPtr(Zero, resultGPR));
+ return;
}
+ Jump popPath;
+ Jump done;
+
+ load32(Address(allocatorGPR, MarkedAllocator::offsetOfFreeList() + OBJECT_OFFSETOF(FreeList, remaining)), resultGPR);
+ popPath = branchTest32(Zero, resultGPR);
+ if (allocator)
+ add32(TrustedImm32(-allocator->cellSize()), resultGPR, scratchGPR);
+ else {
+ move(resultGPR, scratchGPR);
+ sub32(Address(allocatorGPR, MarkedAllocator::offsetOfCellSize()), scratchGPR);
+ }
+ negPtr(resultGPR);
+ store32(scratchGPR, Address(allocatorGPR, MarkedAllocator::offsetOfFreeList() + OBJECT_OFFSETOF(FreeList, remaining)));
+ Address payloadEndAddr = Address(allocatorGPR, MarkedAllocator::offsetOfFreeList() + OBJECT_OFFSETOF(FreeList, payloadEnd));
+ if (isX86())
+ addPtr(payloadEndAddr, resultGPR);
+ else {
+ loadPtr(payloadEndAddr, scratchGPR);
+ addPtr(scratchGPR, resultGPR);
+ }
+
+ done = jump();
+
+ popPath.link(this);
+
+ loadPtr(Address(allocatorGPR, MarkedAllocator::offsetOfFreeList() + OBJECT_OFFSETOF(FreeList, head)), resultGPR);
+ slowPath.append(branchTestPtr(Zero, resultGPR));
+
// The object is half-allocated: we have what we know is a fresh object, but
// it's still on the GC's free list.
loadPtr(Address(resultGPR), scratchGPR);
- storePtr(scratchGPR, Address(allocatorGPR, MarkedAllocator::offsetOfFreeListHead()));
+ storePtr(scratchGPR, Address(allocatorGPR, MarkedAllocator::offsetOfFreeList() + OBJECT_OFFSETOF(FreeList, head)));
+
+ done.link(this);
+ }
+
+ void emitAllocate(GPRReg resultGPR, MarkedAllocator* allocator, GPRReg allocatorGPR, GPRReg scratchGPR, JumpList& slowPath)
+ {
+ if (!allocator)
+ slowPath.append(branchTestPtr(Zero, allocatorGPR));
+ emitAllocateWithNonNullAllocator(resultGPR, allocator, allocatorGPR, scratchGPR, slowPath);
}
template<typename StructureType>
- void emitAllocateJSCell(GPRReg resultGPR, GPRReg allocatorGPR, StructureType structure, GPRReg scratchGPR, JumpList& slowPath)
+ void emitAllocateJSCell(GPRReg resultGPR, MarkedAllocator* allocator, GPRReg allocatorGPR, StructureType structure, GPRReg scratchGPR, JumpList& slowPath)
{
- emitAllocate(resultGPR, allocatorGPR, scratchGPR, slowPath);
+ emitAllocate(resultGPR, allocator, allocatorGPR, scratchGPR, slowPath);
emitStoreStructureWithTypeInfo(structure, resultGPR, scratchGPR);
}
template<typename StructureType, typename StorageType>
- void emitAllocateJSObject(GPRReg resultGPR, GPRReg allocatorGPR, StructureType structure, StorageType storage, GPRReg scratchGPR, JumpList& slowPath)
+ void emitAllocateJSObject(GPRReg resultGPR, MarkedAllocator* allocator, GPRReg allocatorGPR, StructureType structure, StorageType storage, GPRReg scratchGPR, JumpList& slowPath)
{
- emitAllocateJSCell(resultGPR, allocatorGPR, structure, scratchGPR, slowPath);
+ emitAllocateJSCell(resultGPR, allocator, allocatorGPR, structure, scratchGPR, slowPath);
storePtr(storage, Address(resultGPR, JSObject::butterflyOffset()));
}
@@ -1440,9 +1481,13 @@
GPRReg resultGPR, StructureType structure, StorageType storage, GPRReg scratchGPR1,
GPRReg scratchGPR2, JumpList& slowPath, size_t size)
{
- MarkedAllocator* allocator = &vm()->heap.allocatorForObjectOfType<ClassType>(size);
+ MarkedAllocator* allocator = vm()->heap.allocatorForObjectOfType<ClassType>(size);
+ if (!allocator) {
+ slowPath.append(jump());
+ return;
+ }
move(TrustedImmPtr(allocator), scratchGPR1);
- emitAllocateJSObject(resultGPR, scratchGPR1, structure, storage, scratchGPR2, slowPath);
+ emitAllocateJSObject(resultGPR, allocator, scratchGPR1, structure, storage, scratchGPR2, slowPath);
}
template<typename ClassType, typename StructureType, typename StorageType>
@@ -1451,27 +1496,21 @@
emitAllocateJSObjectWithKnownSize<ClassType>(resultGPR, structure, storage, scratchGPR1, scratchGPR2, slowPath, ClassType::allocationSize(0));
}
+ // allocationSize can be aliased with any of the other input GPRs. If it's not aliased then it
+ // won't be clobbered.
void emitAllocateVariableSized(GPRReg resultGPR, MarkedSpace::Subspace& subspace, GPRReg allocationSize, GPRReg scratchGPR1, GPRReg scratchGPR2, JumpList& slowPath)
{
- static_assert(!(MarkedSpace::preciseStep & (MarkedSpace::preciseStep - 1)), "MarkedSpace::preciseStep must be a power of two.");
- static_assert(!(MarkedSpace::impreciseStep & (MarkedSpace::impreciseStep - 1)), "MarkedSpace::impreciseStep must be a power of two.");
+ static_assert(!(MarkedSpace::sizeStep & (MarkedSpace::sizeStep - 1)), "MarkedSpace::sizeStep must be a power of two.");
- add32(TrustedImm32(MarkedSpace::preciseStep - 1), allocationSize);
- Jump notSmall = branch32(AboveOrEqual, allocationSize, TrustedImm32(MarkedSpace::preciseCutoff));
- rshift32(allocationSize, TrustedImm32(getLSBSet(MarkedSpace::preciseStep)), scratchGPR1);
- mul32(TrustedImm32(sizeof(MarkedAllocator)), scratchGPR1, scratchGPR1);
- addPtr(TrustedImmPtr(&subspace.preciseAllocators[0]), scratchGPR1);
-
- Jump selectedSmallSpace = jump();
- notSmall.link(this);
- slowPath.append(branch32(AboveOrEqual, allocationSize, TrustedImm32(MarkedSpace::impreciseCutoff)));
- rshift32(allocationSize, TrustedImm32(getLSBSet(MarkedSpace::impreciseStep)), scratchGPR1);
- mul32(TrustedImm32(sizeof(MarkedAllocator)), scratchGPR1, scratchGPR1);
- addPtr(TrustedImmPtr(&subspace.impreciseAllocators[0]), scratchGPR1);
-
- selectedSmallSpace.link(this);
+ unsigned stepShift = getLSBSet(MarkedSpace::sizeStep);
- emitAllocate(resultGPR, scratchGPR1, scratchGPR2, slowPath);
+ add32(TrustedImm32(MarkedSpace::sizeStep - 1), allocationSize, scratchGPR1);
+ urshift32(TrustedImm32(stepShift), scratchGPR1);
+ slowPath.append(branch32(Above, scratchGPR1, TrustedImm32(MarkedSpace::largeCutoff >> stepShift)));
+ move(TrustedImmPtr(&subspace.allocatorForSizeStep[0] - 1), scratchGPR2);
+ loadPtr(BaseIndex(scratchGPR2, scratchGPR1, timesPtr()), scratchGPR1);
+
+ emitAllocate(resultGPR, nullptr, scratchGPR1, scratchGPR2, slowPath);
}
template<typename ClassType, typename StructureType>
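A note on the new inline allocation fast path: in C++ terms, the logic that
emitAllocateWithNonNullAllocator emits is roughly the sketch below. This is a
hand-written analogue for exposition, not code from this patch; the FreeCell and
FreeList layouts follow the offsets the assembly uses (the next pointer at offset
0 of a free cell, and FreeList's head/payloadEnd/remaining fields).

    struct FreeCell { FreeCell* next; };
    struct FreeList { FreeCell* head; char* payloadEnd; unsigned remaining; };

    static void* allocateFast(FreeList& freeList, unsigned cellSize)
    {
        if (unsigned remaining = freeList.remaining) {
            // Bump path: the JIT negates 'remaining' and adds payloadEnd, so the
            // result is payloadEnd - remaining; one cell is debited from remaining.
            freeList.remaining = remaining - cellSize;
            return freeList.payloadEnd - remaining;
        }
        // Pop path: take the head of the free list; if the list is empty,
        // returning nullptr here corresponds to taking the slow path.
        FreeCell* cell = freeList.head;
        if (!cell)
            return nullptr;
        freeList.head = cell->next;
        return cell;
    }

The variable-sized case first picks an allocator by rounding the byte size up to
a multiple of MarkedSpace::sizeStep and indexing the subspace's table, which is
what the stepShift arithmetic in emitAllocateVariableSized computes (again a
sketch, not patch code):

    MarkedAllocator* allocatorFor(MarkedSpace::Subspace& subspace, unsigned bytes, unsigned stepShift)
    {
        unsigned index = (bytes + MarkedSpace::sizeStep - 1) >> stepShift;
        if (index > (MarkedSpace::largeCutoff >> stepShift))
            return nullptr; // too large to allocate inline; take the slow path
        // The "- 1" on the table base in the diff compensates for index 1
        // mapping to the first sizeStep-sized class.
        return subspace.allocatorForSizeStep[index - 1];
    }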
diff --git a/Source/JavaScriptCore/jit/CCallHelpers.h b/Source/JavaScriptCore/jit/CCallHelpers.h
index 07cd0f4..864df74 100644
--- a/Source/JavaScriptCore/jit/CCallHelpers.h
+++ b/Source/JavaScriptCore/jit/CCallHelpers.h
@@ -296,6 +296,15 @@
addCallArgument(arg3);
}
+ ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImmPtr arg1, TrustedImm32 arg2, GPRReg arg3)
+ {
+ resetCallArguments();
+ addCallArgument(GPRInfo::callFrameRegister);
+ addCallArgument(arg1);
+ addCallArgument(arg2);
+ addCallArgument(arg3);
+ }
+
ALWAYS_INLINE void setupArgumentsWithExecState(GPRReg arg1, GPRReg arg2, GPRReg arg3)
{
resetCallArguments();
@@ -1408,6 +1417,14 @@
move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
}
+ ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImmPtr arg1, TrustedImm32 arg2, GPRReg arg3)
+ {
+ move(arg3, GPRInfo::argumentGPR3);
+ move(arg1, GPRInfo::argumentGPR1);
+ move(arg2, GPRInfo::argumentGPR2);
+ move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
+ }
+
ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImmPtr arg1, TrustedImm32 arg2, TrustedImm32 arg3)
{
move(arg1, GPRInfo::argumentGPR1);
diff --git a/Source/JavaScriptCore/jit/GCAwareJITStubRoutine.cpp b/Source/JavaScriptCore/jit/GCAwareJITStubRoutine.cpp
index 199cec5..e98f271 100644
--- a/Source/JavaScriptCore/jit/GCAwareJITStubRoutine.cpp
+++ b/Source/JavaScriptCore/jit/GCAwareJITStubRoutine.cpp
@@ -32,6 +32,7 @@
#include "DFGCommonData.h"
#include "Heap.h"
#include "VM.h"
+#include "JITStubRoutineSet.h"
#include "JSCInlines.h"
#include "SlotVisitor.h"
#include "Structure.h"
@@ -45,7 +46,7 @@
, m_mayBeExecuting(false)
, m_isJettisoned(false)
{
- vm.heap.m_jitStubRoutines.add(this);
+ vm.heap.m_jitStubRoutines->add(this);
}
GCAwareJITStubRoutine::~GCAwareJITStubRoutine() { }
diff --git a/Source/JavaScriptCore/jit/JIT.cpp b/Source/JavaScriptCore/jit/JIT.cpp
index 12405f7..1061811 100644
--- a/Source/JavaScriptCore/jit/JIT.cpp
+++ b/Source/JavaScriptCore/jit/JIT.cpp
@@ -48,6 +48,7 @@
#include "SuperSampler.h"
#include "TypeProfilerLog.h"
#include <wtf/CryptographicallyRandomNumber.h>
+#include <wtf/SimpleStats.h>
using namespace std;
@@ -66,6 +67,14 @@
newCalleeFunction);
}
+JIT::CodeRef JIT::compileCTINativeCall(VM* vm, NativeFunction func)
+{
+ if (!vm->canUseJIT())
+ return CodeRef::createLLIntCodeRef(llint_native_call_trampoline);
+ JIT jit(vm, 0);
+ return jit.privateCompileCTINativeCall(vm, func);
+}
+
JIT::JIT(VM* vm, CodeBlock* codeBlock)
: JSInterfaceJIT(vm, codeBlock)
, m_interpreter(vm->interpreter)
@@ -786,7 +795,7 @@
patchBuffer,
("Baseline JIT code for %s", toCString(CodeBlockWithJITType(m_codeBlock, JITCode::BaselineJIT)).data()));
- m_vm->machineCodeBytesPerBytecodeWordForBaselineJIT.add(
+ m_vm->machineCodeBytesPerBytecodeWordForBaselineJIT->add(
static_cast<double>(result.size()) /
static_cast<double>(m_instructions.size()));
diff --git a/Source/JavaScriptCore/jit/JIT.h b/Source/JavaScriptCore/jit/JIT.h
index 51fcf60..f683942 100644
--- a/Source/JavaScriptCore/jit/JIT.h
+++ b/Source/JavaScriptCore/jit/JIT.h
@@ -40,17 +40,17 @@
#include "CodeBlock.h"
#include "CompactJITCodeMap.h"
-#include "Interpreter.h"
#include "JITDisassembler.h"
#include "JITInlineCacheGenerator.h"
#include "JITMathIC.h"
#include "JSInterfaceJIT.h"
-#include "Opcode.h"
#include "PCToCodeOriginMap.h"
#include "UnusedPointer.h"
namespace JSC {
+ enum OpcodeID : unsigned;
+
class ArrayAllocationProfile;
class CallLinkInfo;
class CodeBlock;
@@ -248,14 +248,7 @@
jit.privateCompileHasIndexedProperty(byValInfo, returnAddress, arrayMode);
}
- static CodeRef compileCTINativeCall(VM* vm, NativeFunction func)
- {
- if (!vm->canUseJIT()) {
- return CodeRef::createLLIntCodeRef(llint_native_call_trampoline);
- }
- JIT jit(vm, 0);
- return jit.privateCompileCTINativeCall(vm, func);
- }
+ static CodeRef compileCTINativeCall(VM*, NativeFunction);
static unsigned frameRegisterCountFor(CodeBlock*);
static int stackPointerOffsetFor(CodeBlock*);
diff --git a/Source/JavaScriptCore/jit/JITExceptions.cpp b/Source/JavaScriptCore/jit/JITExceptions.cpp
index 31b3e5c..fd6364e 100644
--- a/Source/JavaScriptCore/jit/JITExceptions.cpp
+++ b/Source/JavaScriptCore/jit/JITExceptions.cpp
@@ -90,4 +90,9 @@
RELEASE_ASSERT(catchRoutine);
}
+void genericUnwind(VM* vm, ExecState* callFrame)
+{
+ genericUnwind(vm, callFrame, UnwindFromCurrentFrame);
+}
+
} // namespace JSC
diff --git a/Source/JavaScriptCore/jit/JITExceptions.h b/Source/JavaScriptCore/jit/JITExceptions.h
index 3ccac84..b16f31a 100644
--- a/Source/JavaScriptCore/jit/JITExceptions.h
+++ b/Source/JavaScriptCore/jit/JITExceptions.h
@@ -26,15 +26,15 @@
#ifndef JITExceptions_h
#define JITExceptions_h
-#include "Interpreter.h"
-#include "JSCJSValue.h"
-
namespace JSC {
+enum UnwindStart : uint8_t;
+
class ExecState;
class VM;
-void genericUnwind(VM*, ExecState*, UnwindStart = UnwindFromCurrentFrame);
+void genericUnwind(VM*, ExecState*, UnwindStart);
+void genericUnwind(VM*, ExecState*);
} // namespace JSC
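The reason this header can now drop its Interpreter.h include is that giving
UnwindStart a fixed underlying type makes it forward-declarable, while the
default argument (which requires the enumerator to be visible) is replaced by an
explicit overload. A minimal illustration of the language rule, using made-up
names rather than anything from the patch:

    enum Color : uint8_t;        // OK: an enum with a fixed underlying type
                                 // can be forward-declared
    void paint(Color);           // fine with only the forward declaration
    // void paint(Color = Red);  // would not compile here: Red is not visible,
                                 // which is why genericUnwind gained a separate
                                 // overload without the UnwindStart parameter.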
diff --git a/Source/JavaScriptCore/jit/JITOpcodes.cpp b/Source/JavaScriptCore/jit/JITOpcodes.cpp
index 2dbe86c..fb33e45 100644
--- a/Source/JavaScriptCore/jit/JITOpcodes.cpp
+++ b/Source/JavaScriptCore/jit/JITOpcodes.cpp
@@ -32,6 +32,7 @@
#include "CopiedSpaceInlines.h"
#include "Exception.h"
#include "Heap.h"
+#include "Interpreter.h"
#include "JITInlines.h"
#include "JSArray.h"
#include "JSCell.h"
@@ -83,15 +84,17 @@
{
Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure();
size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
- MarkedAllocator* allocator = &m_vm->heap.allocatorForObjectWithoutDestructor(allocationSize);
+ MarkedAllocator* allocator = m_vm->heap.allocatorForObjectWithoutDestructor(allocationSize);
RegisterID resultReg = regT0;
RegisterID allocatorReg = regT1;
RegisterID scratchReg = regT2;
move(TrustedImmPtr(allocator), allocatorReg);
+ if (allocator)
+ addSlowCase(Jump());
JumpList slowCases;
- emitAllocateJSObject(resultReg, allocatorReg, TrustedImmPtr(structure), TrustedImmPtr(0), scratchReg, slowCases);
+ emitAllocateJSObject(resultReg, allocator, allocatorReg, TrustedImmPtr(structure), TrustedImmPtr(0), scratchReg, slowCases);
addSlowCase(slowCases);
emitPutVirtualRegister(currentInstruction[1].u.operand);
}
@@ -99,6 +102,7 @@
void JIT::emitSlow_op_new_object(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
{
linkSlowCase(iter);
+ linkSlowCase(iter);
int dst = currentInstruction[1].u.operand;
Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure();
callOperation(operationNewObject, structure);
@@ -772,7 +776,7 @@
hasSeenMultipleCallees.link(this);
JumpList slowCases;
- emitAllocateJSObject(resultReg, allocatorReg, structureReg, TrustedImmPtr(0), scratchReg, slowCases);
+ emitAllocateJSObject(resultReg, nullptr, allocatorReg, structureReg, TrustedImmPtr(0), scratchReg, slowCases);
addSlowCase(slowCases);
emitPutVirtualRegister(currentInstruction[1].u.operand);
}
@@ -782,7 +786,8 @@
linkSlowCase(iter); // Callee::m_type != JSFunctionType.
linkSlowCase(iter); // doesn't have rare data
linkSlowCase(iter); // doesn't have an allocation profile
- linkSlowCase(iter); // allocation failed
+ linkSlowCase(iter); // allocation failed (no allocator)
+ linkSlowCase(iter); // allocation failed (allocator empty)
linkSlowCase(iter); // cached function didn't match
JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_create_this);
diff --git a/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp b/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
index 4b34981..c9933f1 100644
--- a/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
+++ b/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
@@ -39,6 +39,7 @@
#include "JSPropertyNameEnumerator.h"
#include "LinkBuffer.h"
#include "MaxFrameExtentForSlowPathCall.h"
+#include "Opcode.h"
#include "SlowPathCall.h"
#include "TypeProfilerLog.h"
#include "VirtualRegister.h"
@@ -163,15 +164,17 @@
{
Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure();
size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
- MarkedAllocator* allocator = &m_vm->heap.allocatorForObjectWithoutDestructor(allocationSize);
+ MarkedAllocator* allocator = m_vm->heap.allocatorForObjectWithoutDestructor(allocationSize);
RegisterID resultReg = returnValueGPR;
RegisterID allocatorReg = regT1;
RegisterID scratchReg = regT3;
move(TrustedImmPtr(allocator), allocatorReg);
+ if (allocator)
+ addSlowCase(Jump());
JumpList slowCases;
- emitAllocateJSObject(resultReg, allocatorReg, TrustedImmPtr(structure), TrustedImmPtr(0), scratchReg, slowCases);
+ emitAllocateJSObject(resultReg, allocator, allocatorReg, TrustedImmPtr(structure), TrustedImmPtr(0), scratchReg, slowCases);
addSlowCase(slowCases);
emitStoreCell(currentInstruction[1].u.operand, resultReg);
}
@@ -179,6 +182,7 @@
void JIT::emitSlow_op_new_object(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
{
linkSlowCase(iter);
+ linkSlowCase(iter);
int dst = currentInstruction[1].u.operand;
Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure();
callOperation(operationNewObject, structure);
@@ -1032,7 +1036,7 @@
hasSeenMultipleCallees.link(this);
JumpList slowCases;
- emitAllocateJSObject(resultReg, allocatorReg, structureReg, TrustedImmPtr(0), scratchReg, slowCases);
+ emitAllocateJSObject(resultReg, nullptr, allocatorReg, structureReg, TrustedImmPtr(0), scratchReg, slowCases);
addSlowCase(slowCases);
emitStoreCell(currentInstruction[1].u.operand, resultReg);
}
@@ -1042,7 +1046,8 @@
linkSlowCase(iter); // Callee::m_type != JSFunctionType.
linkSlowCase(iter); // doesn't have rare data
linkSlowCase(iter); // doesn't have an allocation profile
- linkSlowCase(iter); // allocation failed
+ linkSlowCase(iter); // allocation failed (no allocator)
+ linkSlowCase(iter); // allocation failed (allocator empty)
linkSlowCase(iter); // cached function didn't match
JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_create_this);
diff --git a/Source/JavaScriptCore/jit/JITOperations.cpp b/Source/JavaScriptCore/jit/JITOperations.cpp
index f648561..4ddd111 100644
--- a/Source/JavaScriptCore/jit/JITOperations.cpp
+++ b/Source/JavaScriptCore/jit/JITOperations.cpp
@@ -44,6 +44,7 @@
#include "GetterSetter.h"
#include "HostCallReturnValue.h"
#include "ICStats.h"
+#include "Interpreter.h"
#include "JIT.h"
#include "JITExceptions.h"
#include "JITToDFGDeferredCompilationCallback.h"
@@ -480,17 +481,6 @@
repatchPutByID(exec, baseObject, structure, ident, slot, *stubInfo, Direct);
}
-void JIT_OPERATION operationReallocateStorageAndFinishPut(ExecState* exec, JSObject* base, Structure* structure, PropertyOffset offset, EncodedJSValue value)
-{
- VM& vm = exec->vm();
- NativeCallFrameTracer tracer(&vm, exec);
-
- ASSERT(structure->outOfLineCapacity() > base->structure(vm)->outOfLineCapacity());
- ASSERT(!vm.heap.storageAllocator().fastPathShouldSucceed(structure->outOfLineCapacity() * sizeof(JSValue)));
- base->setStructureAndReallocateStorageIfNecessary(vm, structure);
- base->putDirect(vm, offset, JSValue::decode(value));
-}
-
ALWAYS_INLINE static bool isStringOrSymbol(JSValue value)
{
return value.isString() || value.isSymbol();
@@ -2157,7 +2147,6 @@
NativeCallFrameTracer tracer(&vm, exec);
ASSERT(!object->structure()->outOfLineCapacity());
- DeferGC deferGC(vm.heap);
Butterfly* result = object->growOutOfLineStorage(vm, 0, initialOutOfLineCapacity);
object->setButterflyWithoutChangingStructure(vm, result);
return reinterpret_cast<char*>(result);
@@ -2168,7 +2157,6 @@
VM& vm = exec->vm();
NativeCallFrameTracer tracer(&vm, exec);
- DeferGC deferGC(vm.heap);
Butterfly* result = object->growOutOfLineStorage(vm, object->structure()->outOfLineCapacity(), newSize);
object->setButterflyWithoutChangingStructure(vm, result);
return reinterpret_cast<char*>(result);
diff --git a/Source/JavaScriptCore/jit/JITOperations.h b/Source/JavaScriptCore/jit/JITOperations.h
index dd9dc57..16a81d7 100644
--- a/Source/JavaScriptCore/jit/JITOperations.h
+++ b/Source/JavaScriptCore/jit/JITOperations.h
@@ -38,24 +38,34 @@
namespace JSC {
+typedef int64_t EncodedJSValue;
+
class ArrayAllocationProfile;
class ArrayProfile;
+class Butterfly;
class CallLinkInfo;
class CodeBlock;
class ExecState;
class JITAddGenerator;
class JSArray;
+class JSCell;
class JSFunction;
+class JSGlobalObject;
class JSLexicalEnvironment;
+class JSObject;
class JSScope;
+class JSString;
+class JSValue;
class RegExpObject;
class Register;
+class Structure;
class StructureStubInfo;
class SymbolTable;
class WatchpointSet;
struct ByValInfo;
struct InlineCallFrame;
+struct Instruction;
struct ArithProfile;
typedef ExecState CallFrame;
@@ -72,6 +82,7 @@
Aap: ArrayAllocationProfile*
Ap: ArrayProfile*
Arp: ArithProfile*
+ B: Butterfly*
By: ByValInfo*
C: JSCell*
Cb: CodeBlock*
@@ -279,6 +290,7 @@
typedef char* (JIT_OPERATION *P_JITOperation_EStPS)(ExecState*, Structure*, void*, size_t);
typedef char* (JIT_OPERATION *P_JITOperation_EStSS)(ExecState*, Structure*, size_t, size_t);
typedef char* (JIT_OPERATION *P_JITOperation_EStZ)(ExecState*, Structure*, int32_t);
+typedef char* (JIT_OPERATION *P_JITOperation_EStZB)(ExecState*, Structure*, int32_t, Butterfly*);
typedef char* (JIT_OPERATION *P_JITOperation_EZZ)(ExecState*, int32_t, int32_t);
typedef SlowPathReturnType (JIT_OPERATION *Sprt_JITOperation_ECli)(ExecState*, CallLinkInfo*);
typedef StringImpl* (JIT_OPERATION *T_JITOperation_EJss)(ExecState*, JSString*);
@@ -320,7 +332,6 @@
void JIT_OPERATION operationPutByIdNonStrictBuildList(ExecState*, StructureStubInfo*, EncodedJSValue encodedValue, EncodedJSValue encodedBase, UniquedStringImpl*) WTF_INTERNAL;
void JIT_OPERATION operationPutByIdDirectStrictBuildList(ExecState*, StructureStubInfo*, EncodedJSValue encodedValue, EncodedJSValue encodedBase, UniquedStringImpl*) WTF_INTERNAL;
void JIT_OPERATION operationPutByIdDirectNonStrictBuildList(ExecState*, StructureStubInfo*, EncodedJSValue encodedValue, EncodedJSValue encodedBase, UniquedStringImpl*) WTF_INTERNAL;
-void JIT_OPERATION operationReallocateStorageAndFinishPut(ExecState*, JSObject*, Structure*, PropertyOffset, EncodedJSValue) WTF_INTERNAL;
void JIT_OPERATION operationPutByValOptimize(ExecState*, EncodedJSValue, EncodedJSValue, EncodedJSValue, ByValInfo*) WTF_INTERNAL;
void JIT_OPERATION operationDirectPutByValOptimize(ExecState*, EncodedJSValue, EncodedJSValue, EncodedJSValue, ByValInfo*) WTF_INTERNAL;
void JIT_OPERATION operationPutByValGeneric(ExecState*, EncodedJSValue, EncodedJSValue, EncodedJSValue, ByValInfo*) WTF_INTERNAL;
diff --git a/Source/JavaScriptCore/jit/JITPropertyAccess.cpp b/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
index 3343642..560268b 100644
--- a/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
+++ b/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
@@ -1246,7 +1246,7 @@
void JIT::emitWriteBarrier(JSCell* owner)
{
- if (!MarkedBlock::blockFor(owner)->isMarked(owner)) {
+ if (!owner->cellContainer().isMarked(owner)) {
Jump ownerIsRememberedOrInEden = jumpIfIsRememberedOrInEden(owner);
callOperation(operationUnconditionalWriteBarrier, owner);
ownerIsRememberedOrInEden.link(this);
diff --git a/Source/JavaScriptCore/jit/JITThunks.cpp b/Source/JavaScriptCore/jit/JITThunks.cpp
index 1e6c71a..1b8a874 100644
--- a/Source/JavaScriptCore/jit/JITThunks.cpp
+++ b/Source/JavaScriptCore/jit/JITThunks.cpp
@@ -30,8 +30,9 @@
#include "Executable.h"
#include "JIT.h"
-#include "VM.h"
#include "JSCInlines.h"
+#include "LLIntData.h"
+#include "VM.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/jit/JITThunks.h b/Source/JavaScriptCore/jit/JITThunks.h
index d2238c7..4176d23 100644
--- a/Source/JavaScriptCore/jit/JITThunks.h
+++ b/Source/JavaScriptCore/jit/JITThunks.h
@@ -30,7 +30,6 @@
#include "CallData.h"
#include "Intrinsic.h"
-#include "LowLevelInterpreter.h"
#include "MacroAssemblerCodeRef.h"
#include "ThunkGenerator.h"
#include "Weak.h"
diff --git a/Source/JavaScriptCore/jsc.cpp b/Source/JavaScriptCore/jsc.cpp
index 22c9c09..236cfd3 100644
--- a/Source/JavaScriptCore/jsc.cpp
+++ b/Source/JavaScriptCore/jsc.cpp
@@ -636,6 +636,8 @@
static EncodedJSValue JSC_HOST_CALL functionCheckModuleSyntax(ExecState*);
static EncodedJSValue JSC_HOST_CALL functionPlatformSupportsSamplingProfiler(ExecState*);
static EncodedJSValue JSC_HOST_CALL functionGenerateHeapSnapshot(ExecState*);
+static EncodedJSValue JSC_HOST_CALL functionResetSuperSamplerState(ExecState*);
+static EncodedJSValue JSC_HOST_CALL functionEnsureArrayStorage(ExecState*);
#if ENABLE(SAMPLING_PROFILER)
static EncodedJSValue JSC_HOST_CALL functionStartSamplingProfiler(ExecState*);
static EncodedJSValue JSC_HOST_CALL functionSamplingProfilerStackTraces(ExecState*);
@@ -872,6 +874,8 @@
addFunction(vm, "platformSupportsSamplingProfiler", functionPlatformSupportsSamplingProfiler, 0);
addFunction(vm, "generateHeapSnapshot", functionGenerateHeapSnapshot, 0);
+ addFunction(vm, "resetSuperSamplerState", functionResetSuperSamplerState, 0);
+ addFunction(vm, "ensureArrayStorage", functionEnsureArrayStorage, 0);
#if ENABLE(SAMPLING_PROFILER)
addFunction(vm, "startSamplingProfiler", functionStartSamplingProfiler, 0);
addFunction(vm, "samplingProfilerStackTraces", functionSamplingProfilerStackTraces, 0);
@@ -1213,7 +1217,7 @@
JSObject* object = jsDynamicCast<JSObject*>(exec->argument(0));
if (!object)
return JSValue::encode(jsNontrivialString(exec, ASCIILiteral("<not object>")));
- return JSValue::encode(jsNontrivialString(exec, toString("<Public length: ", object->getArrayLength(), "; vector length: ", object->getVectorLength(), ">")));
+ return JSValue::encode(jsNontrivialString(exec, toString("<Butterfly: ", RawPointer(object->butterfly()), "; public length: ", object->getArrayLength(), "; vector length: ", object->getVectorLength(), ">")));
}
class FunctionJSCStackFunctor {
@@ -1960,6 +1964,21 @@
return result;
}
+EncodedJSValue JSC_HOST_CALL functionResetSuperSamplerState(ExecState*)
+{
+ resetSuperSamplerState();
+ return JSValue::encode(jsUndefined());
+}
+
+EncodedJSValue JSC_HOST_CALL functionEnsureArrayStorage(ExecState* exec)
+{
+ for (unsigned i = 0; i < exec->argumentCount(); ++i) {
+ if (JSObject* object = jsDynamicCast<JSObject*>(exec->argument(i)))
+ object->ensureArrayStorage(exec->vm());
+ }
+ return JSValue::encode(jsUndefined());
+}
+
#if ENABLE(SAMPLING_PROFILER)
EncodedJSValue JSC_HOST_CALL functionStartSamplingProfiler(ExecState* exec)
{
@@ -2071,10 +2090,7 @@
TRY
res = jscmain(argc, argv);
EXCEPT(res = 3)
- if (Options::logHeapStatisticsAtExit())
- HeapStatistics::reportSuccess();
- if (Options::reportLLIntStats())
- LLInt::Data::finalizeStats();
+ finalizeStatsAtEndOfTesting();
#if PLATFORM(EFL)
ecore_shutdown();
diff --git a/Source/JavaScriptCore/llint/LLIntData.cpp b/Source/JavaScriptCore/llint/LLIntData.cpp
index ea79e33..6bb3e65 100644
--- a/Source/JavaScriptCore/llint/LLIntData.cpp
+++ b/Source/JavaScriptCore/llint/LLIntData.cpp
@@ -211,7 +211,7 @@
STATIC_ASSERT(GetPutInfo::initializationShift == 10);
STATIC_ASSERT(GetPutInfo::initializationBits == 0xffc00);
- STATIC_ASSERT(MarkedBlock::blockMask == ~static_cast<decltype(MarkedBlock::blockMask)>(0x3fff));
+ STATIC_ASSERT(MarkedBlock::blockSize == 16 * 1024);
ASSERT(bitwise_cast<uintptr_t>(ShadowChicken::Packet::tailMarker()) == static_cast<uintptr_t>(0x7a11));
diff --git a/Source/JavaScriptCore/llint/LLIntExceptions.cpp b/Source/JavaScriptCore/llint/LLIntExceptions.cpp
index 039936e..0450d56 100644
--- a/Source/JavaScriptCore/llint/LLIntExceptions.cpp
+++ b/Source/JavaScriptCore/llint/LLIntExceptions.cpp
@@ -29,6 +29,7 @@
#include "CodeBlock.h"
#include "Instruction.h"
#include "LLIntCommon.h"
+#include "LLIntData.h"
#include "LowLevelInterpreter.h"
#include "JSCInlines.h"
diff --git a/Source/JavaScriptCore/llint/LLIntThunks.cpp b/Source/JavaScriptCore/llint/LLIntThunks.cpp
index b333414..fef4488 100644
--- a/Source/JavaScriptCore/llint/LLIntThunks.cpp
+++ b/Source/JavaScriptCore/llint/LLIntThunks.cpp
@@ -33,6 +33,7 @@
#include "JSInterfaceJIT.h"
#include "JSObject.h"
#include "LLIntCLoop.h"
+#include "LLIntData.h"
#include "LinkBuffer.h"
#include "LowLevelInterpreter.h"
#include "ProtoCallFrame.h"
diff --git a/Source/JavaScriptCore/llint/LLIntThunks.h b/Source/JavaScriptCore/llint/LLIntThunks.h
index 9b7e266..070441e 100644
--- a/Source/JavaScriptCore/llint/LLIntThunks.h
+++ b/Source/JavaScriptCore/llint/LLIntThunks.h
@@ -32,6 +32,7 @@
class VM;
struct ProtoCallFrame;
+typedef int64_t EncodedJSValue;
extern "C" {
EncodedJSValue vmEntryToJavaScript(void*, VM*, ProtoCallFrame*);
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
index 28433d2..68a44e7 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
@@ -1068,24 +1068,6 @@
.argumentProfileDone:
end
-macro allocateJSObject(allocator, structure, result, scratch1, slowCase)
- const offsetOfFirstFreeCell =
- MarkedAllocator::m_freeList +
- MarkedBlock::FreeList::head
-
- # Get the object from the free list.
- loadp offsetOfFirstFreeCell[allocator], result
- btpz result, slowCase
-
- # Remove the object from the free list.
- loadp [result], scratch1
- storep scratch1, offsetOfFirstFreeCell[allocator]
-
- # Initialize the object.
- storep 0, JSObject::m_butterfly[result]
- storeStructureWithTypeInfo(result, structure, scratch1)
-end
-
macro doReturn()
restoreCalleeSavesUsedByLLInt()
restoreCallerPCAndCFR()
@@ -1307,6 +1289,18 @@
dispatch(2)
+_llint_op_create_this:
+ traceExecution()
+ callOpcodeSlowPath(_slow_path_create_this)
+ dispatch(5)
+
+
+_llint_op_new_object:
+ traceExecution()
+ callOpcodeSlowPath(_llint_slow_path_new_object)
+ dispatch(4)
+
+
_llint_op_new_func:
traceExecution()
callOpcodeSlowPath(_llint_slow_path_new_func)
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp b/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp
index 409604f..3220e88 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp
@@ -25,13 +25,17 @@
#include "config.h"
#include "LowLevelInterpreter.h"
+
#include "LLIntOfflineAsmConfig.h"
#include <wtf/InlineASM.h>
#if !ENABLE(JIT)
+#include "CLoopStackInlines.h"
#include "CodeBlock.h"
#include "CommonSlowPaths.h"
+#include "Interpreter.h"
#include "LLIntCLoop.h"
+#include "LLIntData.h"
#include "LLIntSlowPaths.h"
#include "JSCInlines.h"
#include <wtf/Assertions.h>
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
index ba59f83..0732987 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
@@ -305,7 +305,7 @@
_handleUncaughtException:
loadp Callee + PayloadOffset[cfr], t3
andp MarkedBlockMask, t3
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
+ loadp MarkedBlock::m_vm[t3], t3
restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(t3, t0)
loadp VM::callFrameForCatch[t3], cfr
storep 0, VM::callFrameForCatch[t3]
@@ -653,7 +653,7 @@
macro branchIfException(label)
loadp Callee + PayloadOffset[cfr], t3
andp MarkedBlockMask, t3
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
+ loadp MarkedBlock::m_vm[t3], t3
btiz VM::m_exception[t3], .noException
jmp label
.noException:
@@ -702,31 +702,6 @@
dispatch(2)
-_llint_op_create_this:
- traceExecution()
- loadi 8[PC], t0
- loadp PayloadOffset[cfr, t0, 8], t0
- bbneq JSCell::m_type[t0], JSFunctionType, .opCreateThisSlow
- loadp JSFunction::m_rareData[t0], t5
- btpz t5, .opCreateThisSlow
- loadp FunctionRareData::m_objectAllocationProfile + ObjectAllocationProfile::m_allocator[t5], t1
- loadp FunctionRareData::m_objectAllocationProfile + ObjectAllocationProfile::m_structure[t5], t2
- btpz t1, .opCreateThisSlow
- loadpFromInstruction(4, t5)
- bpeq t5, 1, .hasSeenMultipleCallee
- bpneq t5, t0, .opCreateThisSlow
-.hasSeenMultipleCallee:
- allocateJSObject(t1, t2, t0, t3, .opCreateThisSlow)
- loadi 4[PC], t1
- storei CellTag, TagOffset[cfr, t1, 8]
- storei t0, PayloadOffset[cfr, t1, 8]
- dispatch(5)
-
-.opCreateThisSlow:
- callOpcodeSlowPath(_slow_path_create_this)
- dispatch(5)
-
-
_llint_op_to_this:
traceExecution()
loadi 4[PC], t0
@@ -742,22 +717,6 @@
dispatch(4)
-_llint_op_new_object:
- traceExecution()
- loadpFromInstruction(3, t0)
- loadp ObjectAllocationProfile::m_allocator[t0], t1
- loadp ObjectAllocationProfile::m_structure[t0], t2
- allocateJSObject(t1, t2, t0, t3, .opNewObjectSlow)
- loadi 4[PC], t1
- storei CellTag, TagOffset[cfr, t1, 8]
- storei t0, PayloadOffset[cfr, t1, 8]
- dispatch(4)
-
-.opNewObjectSlow:
- callOpcodeSlowPath(_llint_slow_path_new_object)
- dispatch(4)
-
-
_llint_op_check_tdz:
traceExecution()
loadisFromInstruction(1, t0)
@@ -1997,7 +1956,7 @@
# and have set VM::targetInterpreterPCForThrow.
loadp Callee + PayloadOffset[cfr], t3
andp MarkedBlockMask, t3
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
+ loadp MarkedBlock::m_vm[t3], t3
restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(t3, t0)
loadp VM::callFrameForCatch[t3], cfr
storep 0, VM::callFrameForCatch[t3]
@@ -2012,7 +1971,7 @@
.isCatchableException:
loadp Callee + PayloadOffset[cfr], t3
andp MarkedBlockMask, t3
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
+ loadp MarkedBlock::m_vm[t3], t3
loadi VM::m_exception[t3], t0
storei 0, VM::m_exception[t3]
@@ -2047,7 +2006,7 @@
# This essentially emulates the JIT's throwing protocol.
loadp Callee[cfr], t1
andp MarkedBlockMask, t1
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t1], t1
+ loadp MarkedBlock::m_vm[t1], t1
copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(t1, t2)
jmp VM::targetMachinePCForThrow[t1]
@@ -2066,7 +2025,7 @@
if X86 or X86_WIN
subp 8, sp # align stack pointer
andp MarkedBlockMask, t1
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t1], t3
+ loadp MarkedBlock::m_vm[t1], t3
storep cfr, VM::topCallFrame[t3]
move cfr, a0 # a0 = ecx
storep a0, [sp]
@@ -2076,13 +2035,13 @@
call executableOffsetToFunction[t1]
loadp Callee + PayloadOffset[cfr], t3
andp MarkedBlockMask, t3
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
+ loadp MarkedBlock::m_vm[t3], t3
addp 8, sp
elsif ARM or ARMv7 or ARMv7_TRADITIONAL or C_LOOP or MIPS or SH4
subp 8, sp # align stack pointer
# t1 already contains the Callee.
andp MarkedBlockMask, t1
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t1], t1
+ loadp MarkedBlock::m_vm[t1], t1
storep cfr, VM::topCallFrame[t1]
move cfr, a0
loadi Callee + PayloadOffset[cfr], t1
@@ -2095,7 +2054,7 @@
end
loadp Callee + PayloadOffset[cfr], t3
andp MarkedBlockMask, t3
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
+ loadp MarkedBlock::m_vm[t3], t3
addp 8, sp
else
error
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
index 9ec57c8..fc2923a 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
@@ -277,7 +277,7 @@
_handleUncaughtException:
loadp Callee[cfr], t3
andp MarkedBlockMask, t3
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
+ loadp MarkedBlock::m_vm[t3], t3
restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(t3, t0)
loadp VM::callFrameForCatch[t3], cfr
storep 0, VM::callFrameForCatch[t3]
@@ -559,7 +559,7 @@
macro branchIfException(label)
loadp Callee[cfr], t3
andp MarkedBlockMask, t3
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
+ loadp MarkedBlock::m_vm[t3], t3
btqz VM::m_exception[t3], .noException
jmp label
.noException:
@@ -607,30 +607,6 @@
dispatch(2)
-_llint_op_create_this:
- traceExecution()
- loadisFromInstruction(2, t0)
- loadp [cfr, t0, 8], t0
- bbneq JSCell::m_type[t0], JSFunctionType, .opCreateThisSlow
- loadp JSFunction::m_rareData[t0], t3
- btpz t3, .opCreateThisSlow
- loadp FunctionRareData::m_objectAllocationProfile + ObjectAllocationProfile::m_allocator[t3], t1
- loadp FunctionRareData::m_objectAllocationProfile + ObjectAllocationProfile::m_structure[t3], t2
- btpz t1, .opCreateThisSlow
- loadpFromInstruction(4, t3)
- bpeq t3, 1, .hasSeenMultipleCallee
- bpneq t3, t0, .opCreateThisSlow
-.hasSeenMultipleCallee:
- allocateJSObject(t1, t2, t0, t3, .opCreateThisSlow)
- loadisFromInstruction(1, t1)
- storeq t0, [cfr, t1, 8]
- dispatch(5)
-
-.opCreateThisSlow:
- callOpcodeSlowPath(_slow_path_create_this)
- dispatch(5)
-
-
_llint_op_to_this:
traceExecution()
loadisFromInstruction(1, t0)
@@ -647,21 +623,6 @@
dispatch(4)
-_llint_op_new_object:
- traceExecution()
- loadpFromInstruction(3, t0)
- loadp ObjectAllocationProfile::m_allocator[t0], t1
- loadp ObjectAllocationProfile::m_structure[t0], t2
- allocateJSObject(t1, t2, t0, t3, .opNewObjectSlow)
- loadisFromInstruction(1, t1)
- storeq t0, [cfr, t1, 8]
- dispatch(4)
-
-.opNewObjectSlow:
- callOpcodeSlowPath(_llint_slow_path_new_object)
- dispatch(4)
-
-
_llint_op_check_tdz:
traceExecution()
loadisFromInstruction(1, t0)
@@ -1958,7 +1919,7 @@
# and have set VM::targetInterpreterPCForThrow.
loadp Callee[cfr], t3
andp MarkedBlockMask, t3
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
+ loadp MarkedBlock::m_vm[t3], t3
restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(t3, t0)
loadp VM::callFrameForCatch[t3], cfr
storep 0, VM::callFrameForCatch[t3]
@@ -1977,7 +1938,7 @@
.isCatchableException:
loadp Callee[cfr], t3
andp MarkedBlockMask, t3
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
+ loadp MarkedBlock::m_vm[t3], t3
loadq VM::m_exception[t3], t0
storeq 0, VM::m_exception[t3]
@@ -2004,7 +1965,7 @@
_llint_throw_from_slow_path_trampoline:
loadp Callee[cfr], t1
andp MarkedBlockMask, t1
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t1], t1
+ loadp MarkedBlock::m_vm[t1], t1
copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(t1, t2)
callSlowPath(_llint_slow_path_handle_exception)
@@ -2014,7 +1975,7 @@
# This essentially emulates the JIT's throwing protocol.
loadp Callee[cfr], t1
andp MarkedBlockMask, t1
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t1], t1
+ loadp MarkedBlock::m_vm[t1], t1
jmp VM::targetMachinePCForThrow[t1]
@@ -2029,7 +1990,7 @@
storep 0, CodeBlock[cfr]
loadp Callee[cfr], t0
andp MarkedBlockMask, t0, t1
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t1], t1
+ loadp MarkedBlock::m_vm[t1], t1
storep cfr, VM::topCallFrame[t1]
if ARM64 or C_LOOP
storep lr, ReturnPC[cfr]
@@ -2051,7 +2012,7 @@
end
loadp Callee[cfr], t3
andp MarkedBlockMask, t3
- loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
+ loadp MarkedBlock::m_vm[t3], t3
functionEpilogue()
diff --git a/Source/JavaScriptCore/parser/ModuleAnalyzer.cpp b/Source/JavaScriptCore/parser/ModuleAnalyzer.cpp
index 1c70366..e646936 100644
--- a/Source/JavaScriptCore/parser/ModuleAnalyzer.cpp
+++ b/Source/JavaScriptCore/parser/ModuleAnalyzer.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -26,9 +26,7 @@
#include "config.h"
#include "ModuleAnalyzer.h"
-#include "IdentifierInlines.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSGlobalObject.h"
#include "JSModuleRecord.h"
#include "ModuleScopeData.h"
diff --git a/Source/JavaScriptCore/parser/NodeConstructors.h b/Source/JavaScriptCore/parser/NodeConstructors.h
index a755639..69b4428 100644
--- a/Source/JavaScriptCore/parser/NodeConstructors.h
+++ b/Source/JavaScriptCore/parser/NodeConstructors.h
@@ -23,6 +23,7 @@
#include "Nodes.h"
#include "Lexer.h"
+#include "Opcode.h"
#include "Parser.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/parser/Nodes.h b/Source/JavaScriptCore/parser/Nodes.h
index 786a7ce..58e2745 100644
--- a/Source/JavaScriptCore/parser/Nodes.h
+++ b/Source/JavaScriptCore/parser/Nodes.h
@@ -29,7 +29,6 @@
#include "BuiltinNames.h"
#include "Error.h"
#include "JITCode.h"
-#include "Opcode.h"
#include "ParserArena.h"
#include "ParserTokens.h"
#include "ResultType.h"
@@ -41,6 +40,8 @@
namespace JSC {
+ enum OpcodeID : unsigned;
+
class ArgumentListNode;
class BytecodeGenerator;
class FunctionMetadataNode;
diff --git a/Source/JavaScriptCore/profiler/ProfilerBytecode.cpp b/Source/JavaScriptCore/profiler/ProfilerBytecode.cpp
index 6eeeb27..b76e78f 100644
--- a/Source/JavaScriptCore/profiler/ProfilerBytecode.cpp
+++ b/Source/JavaScriptCore/profiler/ProfilerBytecode.cpp
@@ -28,6 +28,7 @@
#include "JSGlobalObject.h"
#include "ObjectConstructor.h"
+#include "Opcode.h"
#include "JSCInlines.h"
namespace JSC { namespace Profiler {
diff --git a/Source/JavaScriptCore/profiler/ProfilerBytecode.h b/Source/JavaScriptCore/profiler/ProfilerBytecode.h
index 8e99c9a..2989089 100644
--- a/Source/JavaScriptCore/profiler/ProfilerBytecode.h
+++ b/Source/JavaScriptCore/profiler/ProfilerBytecode.h
@@ -27,10 +27,13 @@
#define ProfilerBytecode_h
#include "JSCJSValue.h"
-#include "Opcode.h"
#include <wtf/text/CString.h>
-namespace JSC { namespace Profiler {
+namespace JSC {
+
+enum OpcodeID : unsigned;
+
+namespace Profiler {
class Bytecode {
public:
diff --git a/Source/JavaScriptCore/profiler/ProfilerBytecodeSequence.cpp b/Source/JavaScriptCore/profiler/ProfilerBytecodeSequence.cpp
index bad72f1..07545ba 100644
--- a/Source/JavaScriptCore/profiler/ProfilerBytecodeSequence.cpp
+++ b/Source/JavaScriptCore/profiler/ProfilerBytecodeSequence.cpp
@@ -27,9 +27,10 @@
#include "ProfilerBytecodeSequence.h"
#include "CodeBlock.h"
+#include "Interpreter.h"
+#include "JSCInlines.h"
#include "JSGlobalObject.h"
#include "Operands.h"
-#include "JSCInlines.h"
#include <wtf/StringPrintStream.h>
namespace JSC { namespace Profiler {
diff --git a/Source/JavaScriptCore/runtime/ArrayConventions.cpp b/Source/JavaScriptCore/runtime/ArrayConventions.cpp
new file mode 100644
index 0000000..d1492efa
--- /dev/null
+++ b/Source/JavaScriptCore/runtime/ArrayConventions.cpp
@@ -0,0 +1,68 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "ArrayConventions.h"
+
+#include "JSCInlines.h"
+
+namespace JSC {
+
+#if USE(JSVALUE64)
+void clearArrayMemset(WriteBarrier<Unknown>* base, unsigned count)
+{
+#if CPU(X86_64)
+ uint64_t zero = 0;
+ asm volatile (
+ "rep stosq\n\t"
+ : "+D"(base), "+c"(count)
+ : "a"(zero)
+ : "memory"
+ );
+#else // not CPU(X86_64)
+ memset(base, 0, count * sizeof(WriteBarrier<Unknown>));
+#endif // generic CPU
+}
+
+void clearArrayMemset(double* base, unsigned count)
+{
+#if CPU(X86_64)
+ uint64_t pnan = bitwise_cast<uint64_t>(PNaN);
+ asm volatile (
+ "rep stosq\n\t"
+ : "+D"(base), "+c"(count)
+ : "a"(pnan)
+ : "memory"
+ );
+#else // not CPU(X86_64)
+ // Oh no, we can't actually do any better than this!
+ for (unsigned i = count; i--;)
+ base[i] = PNaN;
+#endif // generic CPU
+}
+#endif // USE(JSVALUE64)
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/runtime/ArrayConventions.h b/Source/JavaScriptCore/runtime/ArrayConventions.h
index 9c62ea9..2d7b377 100644
--- a/Source/JavaScriptCore/runtime/ArrayConventions.h
+++ b/Source/JavaScriptCore/runtime/ArrayConventions.h
@@ -1,6 +1,6 @@
/*
* Copyright (C) 1999-2000 Harri Porten (porten@kde.org)
- * Copyright (C) 2003, 2007, 2008, 2009, 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2003, 2007, 2008, 2009, 2012, 2016 Apple Inc. All rights reserved.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -70,13 +70,15 @@
// 0xFFFFFFFF is a bit weird -- is not an array index even though it's an integer.
#define MAX_ARRAY_INDEX 0xFFFFFFFEU
-// The value BASE_VECTOR_LEN is the maximum number of vector elements we'll allocate
+// The value BASE_XXX_VECTOR_LEN is the maximum number of vector elements we'll allocate
// for an array that was created with a specified length (e.g. a = new Array(123))
-#define BASE_VECTOR_LEN 4U
-
+#define BASE_CONTIGUOUS_VECTOR_LEN 3U
+#define BASE_CONTIGUOUS_VECTOR_LEN_EMPTY 5U
+#define BASE_ARRAY_STORAGE_VECTOR_LEN 4U
+
// The upper bound to the size we'll grow a zero length array when the first element
// is added.
-#define FIRST_VECTOR_GROW 4U
+#define FIRST_ARRAY_STORAGE_VECTOR_GROW 4U
#define MIN_BEYOND_LENGTH_SPARSE_INDEX 1000
@@ -96,7 +98,7 @@
return i >= MIN_BEYOND_LENGTH_SPARSE_INDEX && i > length;
}
-inline IndexingHeader indexingHeaderForArray(unsigned length, unsigned vectorLength)
+inline IndexingHeader indexingHeaderForArrayStorage(unsigned length, unsigned vectorLength)
{
IndexingHeader result;
result.setPublicLength(length);
@@ -104,9 +106,42 @@
return result;
}
-inline IndexingHeader baseIndexingHeaderForArray(unsigned length)
+inline IndexingHeader baseIndexingHeaderForArrayStorage(unsigned length)
{
- return indexingHeaderForArray(length, BASE_VECTOR_LEN);
+ return indexingHeaderForArrayStorage(length, BASE_ARRAY_STORAGE_VECTOR_LEN);
+}
+
+#if USE(JSVALUE64)
+JS_EXPORT_PRIVATE void clearArrayMemset(WriteBarrier<Unknown>* base, unsigned count);
+JS_EXPORT_PRIVATE void clearArrayMemset(double* base, unsigned count);
+#endif // USE(JSVALUE64)
+
+ALWAYS_INLINE void clearArray(WriteBarrier<Unknown>* base, unsigned count)
+{
+#if USE(JSVALUE64)
+ const unsigned minCountForMemset = 100;
+ if (count >= minCountForMemset) {
+ clearArrayMemset(base, count);
+ return;
+ }
+#endif
+
+ for (unsigned i = count; i--;)
+ base[i].clear();
+}
+
+ALWAYS_INLINE void clearArray(double* base, unsigned count)
+{
+#if USE(JSVALUE64)
+ const unsigned minCountForMemset = 100;
+ if (count >= minCountForMemset) {
+ clearArrayMemset(base, count);
+ return;
+ }
+#endif
+
+ for (unsigned i = count; i--;)
+ base[i] = PNaN;
}
} // namespace JSC
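Since marked space does not pre-zero memory the way copied space did, callers
that allocate an uninitialized butterfly must clear the vector themselves, and
the helpers above encode the two hole representations: JSValue slots clear to
the empty value while double slots clear to PNaN. A sketch of how a caller might
use them (the function and its parameters are illustrative, not from this
patch):

    void clearVector(Butterfly* butterfly, IndexingType type, unsigned vectorLength)
    {
        if (hasDouble(type)) {
            // Double arrays use PNaN as the hole, so "clearing" writes PNaN.
            clearArray(butterfly->contiguousDouble().data(), vectorLength);
        } else {
            // JSValue arrays clear each WriteBarrier<Unknown> slot.
            clearArray(butterfly->contiguous().data(), vectorLength);
        }
    }

The rep-stosq path only kicks in at 100 or more elements, presumably because a
plain store loop is cheaper for short vectors.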
diff --git a/Source/JavaScriptCore/runtime/ArrayPrototype.cpp b/Source/JavaScriptCore/runtime/ArrayPrototype.cpp
index 00de361..3d36d3a 100644
--- a/Source/JavaScriptCore/runtime/ArrayPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/ArrayPrototype.cpp
@@ -1006,7 +1006,7 @@
if (UNLIKELY(vm.exception()))
return JSValue::encode(jsUndefined());
}
-
+
setLength(exec, thisObj, length - deleteCount + additionalArgs);
return JSValue::encode(result);
}
@@ -1143,6 +1143,7 @@
unsigned firstArraySize = firstButterfly->publicLength();
IndexingType type = first->mergeIndexingTypeForCopying(indexingTypeForValue(second) | IsArray);
+
if (type == NonArray)
type = first->indexingType();
@@ -1171,7 +1172,7 @@
auto scope = DECLARE_THROW_SCOPE(vm);
JSArray* firstArray = jsCast<JSArray*>(exec->uncheckedArgument(0));
-
+
// This code assumes that neither array has set Symbol.isConcatSpreadable. If the first array
// has indexed accessors then one of those accessors might change the value of Symbol.isConcatSpreadable
// on the second argument.
@@ -1187,14 +1188,15 @@
return concatAppendOne(exec, vm, firstArray, second);
JSArray* secondArray = jsCast<JSArray*>(second);
-
+
Butterfly* firstButterfly = firstArray->butterfly();
Butterfly* secondButterfly = secondArray->butterfly();
unsigned firstArraySize = firstButterfly->publicLength();
unsigned secondArraySize = secondButterfly->publicLength();
- IndexingType type = firstArray->mergeIndexingTypeForCopying(secondArray->indexingType());
+ IndexingType secondType = secondArray->indexingType();
+ IndexingType type = firstArray->mergeIndexingTypeForCopying(secondType);
if (type == NonArray || !firstArray->canFastCopy(vm, secondArray) || firstArraySize + secondArraySize >= MIN_SPARSE_ARRAY_INDEX) {
JSArray* result = constructEmptyArray(exec, nullptr, firstArraySize + secondArraySize);
if (vm.exception())
@@ -1213,7 +1215,7 @@
JSArray* result = JSArray::tryCreateUninitialized(vm, resultStructure, firstArraySize + secondArraySize);
if (!result)
return JSValue::encode(throwOutOfMemoryError(exec, scope));
-
+
if (type == ArrayWithDouble) {
double* buffer = result->butterfly()->contiguousDouble().data();
memcpy(buffer, firstButterfly->contiguousDouble().data(), sizeof(JSValue) * firstArraySize);
@@ -1221,7 +1223,12 @@
} else if (type != ArrayWithUndecided) {
WriteBarrier<Unknown>* buffer = result->butterfly()->contiguous().data();
memcpy(buffer, firstButterfly->contiguous().data(), sizeof(JSValue) * firstArraySize);
- memcpy(buffer + firstArraySize, secondButterfly->contiguous().data(), sizeof(JSValue) * secondArraySize);
+ if (secondType != ArrayWithUndecided)
+ memcpy(buffer + firstArraySize, secondButterfly->contiguous().data(), sizeof(JSValue) * secondArraySize);
+ else {
+ for (unsigned i = secondArraySize; i--;)
+ buffer[i + firstArraySize].clear();
+ }
}
result->butterfly()->setPublicLength(firstArraySize + secondArraySize);
diff --git a/Source/JavaScriptCore/runtime/ArrayStorage.h b/Source/JavaScriptCore/runtime/ArrayStorage.h
index c93dc3b..6da59fc 100644
--- a/Source/JavaScriptCore/runtime/ArrayStorage.h
+++ b/Source/JavaScriptCore/runtime/ArrayStorage.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -29,7 +29,9 @@
#include "ArrayConventions.h"
#include "Butterfly.h"
#include "IndexingHeader.h"
+#include "MarkedSpace.h"
#include "SparseArrayValueMap.h"
+#include "Structure.h"
#include "WriteBarrier.h"
#include <wtf/Noncopyable.h>
@@ -58,7 +60,7 @@
// We steal two fields from the indexing header: vectorLength and length.
unsigned length() const { return indexingHeader()->publicLength(); }
void setLength(unsigned length) { indexingHeader()->setPublicLength(length); }
- unsigned vectorLength() { return indexingHeader()->vectorLength(); }
+ unsigned vectorLength() const { return indexingHeader()->vectorLength(); }
void setVectorLength(unsigned length) { indexingHeader()->setVectorLength(length); }
ALWAYS_INLINE void copyHeaderFromDuringGC(const ArrayStorage& other)
@@ -99,6 +101,66 @@
{
return ArrayStorage::vectorOffset() + vectorLength * sizeof(WriteBarrier<Unknown>);
}
+
+ static size_t totalSizeFor(unsigned indexBias, size_t propertyCapacity, unsigned vectorLength)
+ {
+ return Butterfly::totalSize(indexBias, propertyCapacity, true, sizeFor(vectorLength));
+ }
+
+ size_t totalSize(size_t propertyCapacity) const
+ {
+ return totalSizeFor(m_indexBias, propertyCapacity, vectorLength());
+ }
+
+ size_t totalSize(Structure* structure) const
+ {
+ return totalSize(structure->outOfLineCapacity());
+ }
+
+ static unsigned availableVectorLength(unsigned indexBias, size_t propertyCapacity, unsigned vectorLength)
+ {
+ size_t cellSize = MarkedSpace::optimalSizeFor(totalSizeFor(indexBias, propertyCapacity, vectorLength));
+
+ vectorLength = (cellSize - totalSizeFor(indexBias, propertyCapacity, 0)) / sizeof(WriteBarrier<Unknown>);
+
+ return vectorLength;
+ }
+
+ static unsigned availableVectorLength(unsigned indexBias, Structure* structure, unsigned vectorLength)
+ {
+ return availableVectorLength(indexBias, structure->outOfLineCapacity(), vectorLength);
+ }
+
+ unsigned availableVectorLength(size_t propertyCapacity, unsigned vectorLength)
+ {
+ return availableVectorLength(m_indexBias, propertyCapacity, vectorLength);
+ }
+
+ unsigned availableVectorLength(Structure* structure, unsigned vectorLength)
+ {
+ return availableVectorLength(structure->outOfLineCapacity(), vectorLength);
+ }
+
+ static unsigned optimalVectorLength(unsigned indexBias, size_t propertyCapacity, unsigned vectorLength)
+ {
+ vectorLength = std::max(BASE_ARRAY_STORAGE_VECTOR_LEN, vectorLength);
+ return availableVectorLength(indexBias, propertyCapacity, vectorLength);
+ }
+
+ static unsigned optimalVectorLength(unsigned indexBias, Structure* structure, unsigned vectorLength)
+ {
+ return optimalVectorLength(indexBias, structure->outOfLineCapacity(), vectorLength);
+ }
+
+ unsigned optimalVectorLength(size_t propertyCapacity, unsigned vectorLength)
+ {
+ return optimalVectorLength(m_indexBias, propertyCapacity, vectorLength);
+ }
+
+ unsigned optimalVectorLength(Structure* structure, unsigned vectorLength)
+ {
+ return optimalVectorLength(structure->outOfLineCapacity(), vectorLength);
+ }
};
} // namespace JSC
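The new availableVectorLength()/optimalVectorLength() helpers are how ArrayStorage recovers the internal fragmentation that size classes would otherwise waste: MarkedSpace::optimalSizeFor() rounds the requested byte size up to the cell size of the size class the allocation will land in, and dividing the slack back by the slot size turns it into extra vector capacity. A minimal sketch of that round trip (the request of 100 slots is illustrative):

    size_t requested = ArrayStorage::totalSizeFor(0, 0, 100);   // bytes for 100 slots
    size_t cellSize = MarkedSpace::optimalSizeFor(requested);   // rounded up to a size class
    unsigned usable = (cellSize - ArrayStorage::totalSizeFor(0, 0, 0))
        / sizeof(WriteBarrier<Unknown>);                        // slack becomes extra slots
    ASSERT(usable >= 100);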
diff --git a/Source/JavaScriptCore/runtime/AuxiliaryBarrier.h b/Source/JavaScriptCore/runtime/AuxiliaryBarrier.h
new file mode 100644
index 0000000..193c63c
--- /dev/null
+++ b/Source/JavaScriptCore/runtime/AuxiliaryBarrier.h
@@ -0,0 +1,63 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+namespace JSC {
+
+class JSCell;
+class VM;
+
+// An Auxiliary barrier is a barrier that does not try to reason about the value being stored into
+// it, other than interpreting a falsy value as not needing a barrier. It's OK to use this for either
+// JSCells or any other kind of data, so long as it responds to operator!().
+template<typename T>
+class AuxiliaryBarrier {
+public:
+ AuxiliaryBarrier() { }
+
+ template<typename U>
+ AuxiliaryBarrier(VM&, JSCell*, U&&);
+
+ void clear() { m_value = T(); }
+
+ template<typename U>
+ void set(VM&, JSCell*, U&&);
+
+ const T& get() const { return m_value; }
+
+ T* slot() { return &m_value; }
+
+ explicit operator bool() const { return !!m_value; }
+
+ template<typename U>
+ void setWithoutBarrier(U&& value) { m_value = std::forward<U>(value); }
+
+private:
+ T m_value;
+};
+
+} // namespace JSC
+
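The header above only declares the barrier; a minimal usage sketch, assuming a hypothetical owner cell with a butterfly field (SomeCell and m_butterfly are illustrative, not from the patch):

    class SomeCell : public JSCell {
        AuxiliaryBarrier<Butterfly*> m_butterfly;
    public:
        void setButterfly(VM& vm, Butterfly* butterfly)
        {
            // set() stores the pointer and then write-barriers the owner cell:
            // the GC discovers auxiliary memory only through its owner.
            m_butterfly.set(vm, this, butterfly);
        }
        Butterfly* butterfly() const { return m_butterfly.get(); }
    };

Note the contrast with WriteBarrier<T>, which reasons about the value being stored; AuxiliaryBarrier only needs the owner, which is why it also works for non-cell payloads like butterflies.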
diff --git a/Source/JavaScriptCore/runtime/AuxiliaryBarrierInlines.h b/Source/JavaScriptCore/runtime/AuxiliaryBarrierInlines.h
new file mode 100644
index 0000000..6d4e789
--- /dev/null
+++ b/Source/JavaScriptCore/runtime/AuxiliaryBarrierInlines.h
@@ -0,0 +1,51 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "AuxiliaryBarrier.h"
+#include "Heap.h"
+#include "VM.h"
+
+namespace JSC {
+
+template<typename T>
+template<typename U>
+AuxiliaryBarrier<T>::AuxiliaryBarrier(VM& vm, JSCell* owner, U&& value)
+{
+ m_value = std::forward<U>(value);
+ vm.heap.writeBarrier(owner);
+}
+
+template<typename T>
+template<typename U>
+void AuxiliaryBarrier<T>::set(VM& vm, JSCell* owner, U&& value)
+{
+ m_value = std::forward<U>(value);
+ vm.heap.writeBarrier(owner);
+}
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/runtime/Butterfly.h b/Source/JavaScriptCore/runtime/Butterfly.h
index 20dccd9..94e17a1 100644
--- a/Source/JavaScriptCore/runtime/Butterfly.h
+++ b/Source/JavaScriptCore/runtime/Butterfly.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -90,6 +90,12 @@
return reinterpret_cast<Butterfly*>(static_cast<EncodedJSValue*>(base) + preCapacity + propertyCapacity + 1);
}
+ ALWAYS_INLINE static unsigned availableContiguousVectorLength(size_t propertyCapacity, unsigned vectorLength);
+ static unsigned availableContiguousVectorLength(Structure*, unsigned vectorLength);
+
+ ALWAYS_INLINE static unsigned optimalContiguousVectorLength(size_t propertyCapacity, unsigned vectorLength);
+ static unsigned optimalContiguousVectorLength(Structure*, unsigned vectorLength);
+
// This method is here not just because it's handy, but to remind you that
// the whole point of butterflies is to do evil pointer arithmetic.
static Butterfly* fromPointer(char* ptr)
diff --git a/Source/JavaScriptCore/runtime/ButterflyInlines.h b/Source/JavaScriptCore/runtime/ButterflyInlines.h
index 3fd8dc1..b34ad0e 100644
--- a/Source/JavaScriptCore/runtime/ButterflyInlines.h
+++ b/Source/JavaScriptCore/runtime/ButterflyInlines.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -35,12 +35,38 @@
namespace JSC {
+ALWAYS_INLINE unsigned Butterfly::availableContiguousVectorLength(size_t propertyCapacity, unsigned vectorLength)
+{
+ size_t cellSize = totalSize(0, propertyCapacity, true, sizeof(EncodedJSValue) * vectorLength);
+ cellSize = MarkedSpace::optimalSizeFor(cellSize);
+ vectorLength = (cellSize - totalSize(0, propertyCapacity, true, 0)) / sizeof(EncodedJSValue);
+ return vectorLength;
+}
+
+ALWAYS_INLINE unsigned Butterfly::availableContiguousVectorLength(Structure* structure, unsigned vectorLength)
+{
+ return availableContiguousVectorLength(structure ? structure->outOfLineCapacity() : 0, vectorLength);
+}
+
+ALWAYS_INLINE unsigned Butterfly::optimalContiguousVectorLength(size_t propertyCapacity, unsigned vectorLength)
+{
+ if (!vectorLength)
+ vectorLength = BASE_CONTIGUOUS_VECTOR_LEN_EMPTY;
+ else
+ vectorLength = std::max(BASE_CONTIGUOUS_VECTOR_LEN, vectorLength);
+ return availableContiguousVectorLength(propertyCapacity, vectorLength);
+}
+
+ALWAYS_INLINE unsigned Butterfly::optimalContiguousVectorLength(Structure* structure, unsigned vectorLength)
+{
+ return optimalContiguousVectorLength(structure ? structure->outOfLineCapacity() : 0, vectorLength);
+}
+
inline Butterfly* Butterfly::createUninitialized(VM& vm, JSCell* intendedOwner, size_t preCapacity, size_t propertyCapacity, bool hasIndexingHeader, size_t indexingPayloadSizeInBytes)
{
- void* temp;
size_t size = totalSize(preCapacity, propertyCapacity, hasIndexingHeader, indexingPayloadSizeInBytes);
- RELEASE_ASSERT(vm.heap.tryAllocateStorage(intendedOwner, size, &temp));
- Butterfly* result = fromBase(temp, preCapacity, propertyCapacity);
+ void* base = vm.heap.allocateAuxiliary(intendedOwner, size);
+ Butterfly* result = fromBase(base, preCapacity, propertyCapacity);
return result;
}
@@ -119,7 +145,8 @@
void* theBase = base(0, propertyCapacity);
size_t oldSize = totalSize(0, propertyCapacity, hadIndexingHeader, oldIndexingPayloadSizeInBytes);
size_t newSize = totalSize(0, propertyCapacity, true, newIndexingPayloadSizeInBytes);
- if (!vm.heap.tryReallocateStorage(intendedOwner, &theBase, oldSize, newSize))
+ theBase = vm.heap.tryReallocateAuxiliary(intendedOwner, theBase, oldSize, newSize);
+ if (!theBase)
return 0;
return fromBase(theBase, 0, propertyCapacity);
}
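A note on the two minimums in optimalContiguousVectorLength(): BASE_CONTIGUOUS_VECTOR_LEN and BASE_CONTIGUOUS_VECTOR_LEN_EMPTY are defined in ArrayConventions.h (their values are not shown in this hunk); the separate empty-array constant presumably exists so that a freshly created empty array still gets a few slots and the first appends do not immediately reallocate. Illustratively:

    unsigned forEmpty = Butterfly::optimalContiguousVectorLength(nullptr, 0);
    // starts from BASE_CONTIGUOUS_VECTOR_LEN_EMPTY, then size-class rounding may add more
    unsigned forTen = Butterfly::optimalContiguousVectorLength(nullptr, 10);
    // starts from max(BASE_CONTIGUOUS_VECTOR_LEN, 10), then size-class rounding may add more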
diff --git a/Source/JavaScriptCore/runtime/ClonedArguments.cpp b/Source/JavaScriptCore/runtime/ClonedArguments.cpp
index 75e10a0..39ca4f3 100644
--- a/Source/JavaScriptCore/runtime/ClonedArguments.cpp
+++ b/Source/JavaScriptCore/runtime/ClonedArguments.cpp
@@ -44,16 +44,19 @@
ClonedArguments* ClonedArguments::createEmpty(
VM& vm, Structure* structure, JSFunction* callee, unsigned length)
{
- unsigned vectorLength = std::max(BASE_VECTOR_LEN, length);
+ unsigned vectorLength = length;
if (vectorLength > MAX_STORAGE_VECTOR_LENGTH)
return 0;
- void* temp;
- if (!vm.heap.tryAllocateStorage(0, Butterfly::totalSize(0, structure->outOfLineCapacity(), true, vectorLength * sizeof(EncodedJSValue)), &temp))
+ void* temp = vm.heap.tryAllocateAuxiliary(nullptr, Butterfly::totalSize(0, structure->outOfLineCapacity(), true, vectorLength * sizeof(EncodedJSValue)));
+ if (!temp)
return 0;
Butterfly* butterfly = Butterfly::fromBase(temp, 0, structure->outOfLineCapacity());
butterfly->setVectorLength(vectorLength);
butterfly->setPublicLength(length);
+
+ for (unsigned i = length; i < vectorLength; ++i)
+ butterfly->contiguous()[i].clear();
ClonedArguments* result =
new (NotNull, allocateCell<ClonedArguments>(vm.heap))
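The new clearing loop is the "zero your own butterfly" rule in action: tryAllocateAuxiliary() hands back raw, non-zeroed memory, so every slot in [length, vectorLength) must hold a valid (empty) JSValue before the GC can see the object. A sketch of the invariant this establishes (the first length slots are expected to be populated by the caller, hence "createEmpty"):

    // after createEmpty() returns:
    //   slots [0, length)            -- filled in by the caller
    //   slots [length, vectorLength) -- cleared here so the GC never scans garbage
    for (unsigned i = length; i < vectorLength; ++i)
        butterfly->contiguous()[i].clear(); // store the empty JSValue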
diff --git a/Source/JavaScriptCore/runtime/CommonSlowPathsExceptions.cpp b/Source/JavaScriptCore/runtime/CommonSlowPathsExceptions.cpp
index b0c8fd7..f6799c1 100644
--- a/Source/JavaScriptCore/runtime/CommonSlowPathsExceptions.cpp
+++ b/Source/JavaScriptCore/runtime/CommonSlowPathsExceptions.cpp
@@ -28,6 +28,7 @@
#include "CallFrame.h"
#include "CodeBlock.h"
+#include "Interpreter.h"
#include "JITExceptions.h"
#include "LLIntCommon.h"
#include "JSCInlines.h"
diff --git a/Source/JavaScriptCore/runtime/CommonSlowPathsExceptions.h b/Source/JavaScriptCore/runtime/CommonSlowPathsExceptions.h
index adcbfd4..bf7d498 100644
--- a/Source/JavaScriptCore/runtime/CommonSlowPathsExceptions.h
+++ b/Source/JavaScriptCore/runtime/CommonSlowPathsExceptions.h
@@ -26,11 +26,10 @@
#ifndef CommonSlowPathExceptions_h
#define CommonSlowPathExceptions_h
-#include "MacroAssemblerCodeRef.h"
-
namespace JSC {
class ExecState;
+class JSObject;
namespace CommonSlowPaths {
diff --git a/Source/JavaScriptCore/runtime/DataView.cpp b/Source/JavaScriptCore/runtime/DataView.cpp
index 78e743f..4fc73bf 100644
--- a/Source/JavaScriptCore/runtime/DataView.cpp
+++ b/Source/JavaScriptCore/runtime/DataView.cpp
@@ -26,6 +26,7 @@
#include "config.h"
#include "DataView.h"
+#include "JSCInlines.h"
#include "JSDataView.h"
#include "JSGlobalObject.h"
diff --git a/Source/JavaScriptCore/runtime/DirectArguments.h b/Source/JavaScriptCore/runtime/DirectArguments.h
index e6d60a8..33bb0a3 100644
--- a/Source/JavaScriptCore/runtime/DirectArguments.h
+++ b/Source/JavaScriptCore/runtime/DirectArguments.h
@@ -26,6 +26,7 @@
#ifndef DirectArguments_h
#define DirectArguments_h
+#include "CopyBarrier.h"
#include "DirectArgumentsOffset.h"
#include "GenericArguments.h"
diff --git a/Source/JavaScriptCore/runtime/ECMAScriptSpecInternalFunctions.cpp b/Source/JavaScriptCore/runtime/ECMAScriptSpecInternalFunctions.cpp
index 0aab6c7..306a6f9 100644
--- a/Source/JavaScriptCore/runtime/ECMAScriptSpecInternalFunctions.cpp
+++ b/Source/JavaScriptCore/runtime/ECMAScriptSpecInternalFunctions.cpp
@@ -28,7 +28,7 @@
#include "CallFrame.h"
#include "ConstructData.h"
-#include "JSCJSValueInlines.h"
+#include "JSCInlines.h"
#include "RegExpObject.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/Error.cpp b/Source/JavaScriptCore/runtime/Error.cpp
index 529e20c..5cb7197 100644
--- a/Source/JavaScriptCore/runtime/Error.cpp
+++ b/Source/JavaScriptCore/runtime/Error.cpp
@@ -28,14 +28,16 @@
#include "ErrorConstructor.h"
#include "ExceptionHelpers.h"
#include "FunctionPrototype.h"
+#include "Interpreter.h"
#include "JSArray.h"
#include "JSFunction.h"
#include "JSGlobalObject.h"
#include "JSObject.h"
#include "JSString.h"
-#include "NativeErrorConstructor.h"
#include "JSCInlines.h"
+#include "NativeErrorConstructor.h"
#include "SourceCode.h"
+#include "StackFrame.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/Error.h b/Source/JavaScriptCore/runtime/Error.h
index a5a0e7b..9ddd5eb 100644
--- a/Source/JavaScriptCore/runtime/Error.h
+++ b/Source/JavaScriptCore/runtime/Error.h
@@ -25,7 +25,6 @@
#include "ErrorInstance.h"
#include "InternalFunction.h"
-#include "Interpreter.h"
#include "JSObject.h"
#include "ThrowScope.h"
#include <stdint.h>
diff --git a/Source/JavaScriptCore/runtime/ErrorInstance.cpp b/Source/JavaScriptCore/runtime/ErrorInstance.cpp
index 24a7f53..6c0ea29 100644
--- a/Source/JavaScriptCore/runtime/ErrorInstance.cpp
+++ b/Source/JavaScriptCore/runtime/ErrorInstance.cpp
@@ -26,6 +26,7 @@
#include "JSScope.h"
#include "JSCInlines.h"
#include "JSGlobalObjectFunctions.h"
+#include <wtf/text/StringBuilder.h>
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/ErrorInstance.h b/Source/JavaScriptCore/runtime/ErrorInstance.h
index c823d83..d21f3be 100644
--- a/Source/JavaScriptCore/runtime/ErrorInstance.h
+++ b/Source/JavaScriptCore/runtime/ErrorInstance.h
@@ -21,7 +21,7 @@
#ifndef ErrorInstance_h
#define ErrorInstance_h
-#include "Interpreter.h"
+#include "JSObject.h"
#include "RuntimeType.h"
#include "SourceProvider.h"
diff --git a/Source/JavaScriptCore/runtime/Exception.cpp b/Source/JavaScriptCore/runtime/Exception.cpp
index 051d762..7169abc 100644
--- a/Source/JavaScriptCore/runtime/Exception.cpp
+++ b/Source/JavaScriptCore/runtime/Exception.cpp
@@ -26,6 +26,7 @@
#include "config.h"
#include "Exception.h"
+#include "Interpreter.h"
#include "JSCInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/Exception.h b/Source/JavaScriptCore/runtime/Exception.h
index a73c680..4972757 100644
--- a/Source/JavaScriptCore/runtime/Exception.h
+++ b/Source/JavaScriptCore/runtime/Exception.h
@@ -26,7 +26,8 @@
#ifndef Exception_h
#define Exception_h
-#include "Interpreter.h"
+#include "JSObject.h"
+#include "StackFrame.h"
#include <wtf/Vector.h>
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/GeneratorPrototype.cpp b/Source/JavaScriptCore/runtime/GeneratorPrototype.cpp
index e823b3d..4d14c49 100644
--- a/Source/JavaScriptCore/runtime/GeneratorPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/GeneratorPrototype.cpp
@@ -27,10 +27,8 @@
#include "GeneratorPrototype.h"
#include "JSCBuiltins.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSGlobalObject.h"
-#include "StructureInlines.h"
#include "GeneratorPrototype.lut.h"
diff --git a/Source/JavaScriptCore/runtime/InternalFunction.cpp b/Source/JavaScriptCore/runtime/InternalFunction.cpp
index 1801b61..6d7ecf3 100644
--- a/Source/JavaScriptCore/runtime/InternalFunction.cpp
+++ b/Source/JavaScriptCore/runtime/InternalFunction.cpp
@@ -37,6 +37,8 @@
InternalFunction::InternalFunction(VM& vm, Structure* structure)
: JSDestructibleObject(vm, structure)
{
+ // exec->vm() wants callees to not be large allocations.
+ RELEASE_ASSERT(!isLargeAllocation());
}
void InternalFunction::finishCreation(VM& vm, const String& name)
diff --git a/Source/JavaScriptCore/runtime/IntlCollator.cpp b/Source/JavaScriptCore/runtime/IntlCollator.cpp
index a468951..feb1f64 100644
--- a/Source/JavaScriptCore/runtime/IntlCollator.cpp
+++ b/Source/JavaScriptCore/runtime/IntlCollator.cpp
@@ -1,6 +1,7 @@
/*
* Copyright (C) 2015 Andy VanWagoner (thetalecrafter@gmail.com)
* Copyright (C) 2015 Sukolsak Sakshuwong (sukolsak@gmail.com)
+ * Copyright (C) 2016 Apple Inc. All Rights Reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -33,8 +34,7 @@
#include "IntlCollatorConstructor.h"
#include "IntlObject.h"
#include "JSBoundFunction.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "ObjectConstructor.h"
#include "SlotVisitorInlines.h"
#include "StructureInlines.h"
diff --git a/Source/JavaScriptCore/runtime/IntlCollatorConstructor.cpp b/Source/JavaScriptCore/runtime/IntlCollatorConstructor.cpp
index 2b899b2..f11cca0 100644
--- a/Source/JavaScriptCore/runtime/IntlCollatorConstructor.cpp
+++ b/Source/JavaScriptCore/runtime/IntlCollatorConstructor.cpp
@@ -33,11 +33,8 @@
#include "IntlCollator.h"
#include "IntlCollatorPrototype.h"
#include "IntlObject.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "Lookup.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/IntlCollatorPrototype.cpp b/Source/JavaScriptCore/runtime/IntlCollatorPrototype.cpp
index 7c16f5c..489b9f7 100644
--- a/Source/JavaScriptCore/runtime/IntlCollatorPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/IntlCollatorPrototype.cpp
@@ -32,10 +32,7 @@
#include "Error.h"
#include "IntlCollator.h"
#include "JSBoundFunction.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
-#include "JSObject.h"
-#include "StructureInlines.h"
+#include "JSCInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/IntlDateTimeFormat.cpp b/Source/JavaScriptCore/runtime/IntlDateTimeFormat.cpp
index 27677ae..85913af 100644
--- a/Source/JavaScriptCore/runtime/IntlDateTimeFormat.cpp
+++ b/Source/JavaScriptCore/runtime/IntlDateTimeFormat.cpp
@@ -34,12 +34,12 @@
#include "IntlDateTimeFormatConstructor.h"
#include "IntlObject.h"
#include "JSBoundFunction.h"
-#include "JSCellInlines.h"
#include "JSCInlines.h"
#include "ObjectConstructor.h"
#include <unicode/ucal.h>
#include <unicode/udatpg.h>
#include <unicode/uenum.h>
+#include <wtf/text/StringBuilder.h>
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/IntlDateTimeFormatConstructor.cpp b/Source/JavaScriptCore/runtime/IntlDateTimeFormatConstructor.cpp
index d1b894e..da83a71 100644
--- a/Source/JavaScriptCore/runtime/IntlDateTimeFormatConstructor.cpp
+++ b/Source/JavaScriptCore/runtime/IntlDateTimeFormatConstructor.cpp
@@ -33,11 +33,8 @@
#include "IntlDateTimeFormatPrototype.h"
#include "IntlObject.h"
#include "IntlObjectInlines.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "Lookup.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/IntlDateTimeFormatPrototype.cpp b/Source/JavaScriptCore/runtime/IntlDateTimeFormatPrototype.cpp
index 8ba608d..455d17f 100644
--- a/Source/JavaScriptCore/runtime/IntlDateTimeFormatPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/IntlDateTimeFormatPrototype.cpp
@@ -35,10 +35,8 @@
#include "IntlDateTimeFormat.h"
#include "IntlObject.h"
#include "JSBoundFunction.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSObjectInlines.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/IntlNumberFormat.cpp b/Source/JavaScriptCore/runtime/IntlNumberFormat.cpp
index b7c06ee..ee5d7a9 100644
--- a/Source/JavaScriptCore/runtime/IntlNumberFormat.cpp
+++ b/Source/JavaScriptCore/runtime/IntlNumberFormat.cpp
@@ -34,7 +34,6 @@
#include "IntlNumberFormatConstructor.h"
#include "IntlObject.h"
#include "JSBoundFunction.h"
-#include "JSCellInlines.h"
#include "JSCInlines.h"
#include "ObjectConstructor.h"
diff --git a/Source/JavaScriptCore/runtime/IntlNumberFormatConstructor.cpp b/Source/JavaScriptCore/runtime/IntlNumberFormatConstructor.cpp
index 2639fe7..98e500b 100644
--- a/Source/JavaScriptCore/runtime/IntlNumberFormatConstructor.cpp
+++ b/Source/JavaScriptCore/runtime/IntlNumberFormatConstructor.cpp
@@ -33,11 +33,8 @@
#include "IntlNumberFormatPrototype.h"
#include "IntlObject.h"
#include "IntlObjectInlines.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "Lookup.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/IntlNumberFormatPrototype.cpp b/Source/JavaScriptCore/runtime/IntlNumberFormatPrototype.cpp
index 6114e9e..4fee9e2 100644
--- a/Source/JavaScriptCore/runtime/IntlNumberFormatPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/IntlNumberFormatPrototype.cpp
@@ -33,10 +33,8 @@
#include "Error.h"
#include "IntlNumberFormat.h"
#include "JSBoundFunction.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSObjectInlines.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/IntlObject.cpp b/Source/JavaScriptCore/runtime/IntlObject.cpp
index fede118..e19499c 100644
--- a/Source/JavaScriptCore/runtime/IntlObject.cpp
+++ b/Source/JavaScriptCore/runtime/IntlObject.cpp
@@ -50,6 +50,7 @@
#include <wtf/Assertions.h>
#include <wtf/NeverDestroyed.h>
#include <wtf/PlatformUserPreferredLanguages.h>
+#include <wtf/text/StringBuilder.h>
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/IteratorPrototype.cpp b/Source/JavaScriptCore/runtime/IteratorPrototype.cpp
index aa95069..121a01e 100644
--- a/Source/JavaScriptCore/runtime/IteratorPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/IteratorPrototype.cpp
@@ -27,11 +27,9 @@
#include "IteratorPrototype.h"
#include "JSCBuiltins.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSGlobalObject.h"
#include "ObjectConstructor.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSArray.cpp b/Source/JavaScriptCore/runtime/JSArray.cpp
index f6d7350..a79fad9 100644
--- a/Source/JavaScriptCore/runtime/JSArray.cpp
+++ b/Source/JavaScriptCore/runtime/JSArray.cpp
@@ -60,6 +60,54 @@
return butterfly;
}
+JSArray* JSArray::tryCreateUninitialized(VM& vm, Structure* structure, unsigned initialLength)
+{
+ if (initialLength > MAX_STORAGE_VECTOR_LENGTH)
+ return 0;
+
+ unsigned outOfLineStorage = structure->outOfLineCapacity();
+
+ Butterfly* butterfly;
+ IndexingType indexingType = structure->indexingType();
+ if (LIKELY(!hasAnyArrayStorage(indexingType))) {
+ ASSERT(
+ hasUndecided(indexingType)
+ || hasInt32(indexingType)
+ || hasDouble(indexingType)
+ || hasContiguous(indexingType));
+
+ unsigned vectorLength = Butterfly::optimalContiguousVectorLength(structure, initialLength);
+ void* temp = vm.heap.tryAllocateAuxiliary(nullptr, Butterfly::totalSize(0, outOfLineStorage, true, vectorLength * sizeof(EncodedJSValue)));
+ if (!temp)
+ return nullptr;
+ butterfly = Butterfly::fromBase(temp, 0, outOfLineStorage);
+ butterfly->setVectorLength(vectorLength);
+ butterfly->setPublicLength(initialLength);
+ if (hasDouble(indexingType)) {
+ for (unsigned i = initialLength; i < vectorLength; ++i)
+ butterfly->contiguousDouble()[i] = PNaN;
+ } else {
+ for (unsigned i = initialLength; i < vectorLength; ++i)
+ butterfly->contiguous()[i].clear();
+ }
+ } else {
+ unsigned vectorLength = ArrayStorage::optimalVectorLength(0, structure, initialLength);
+ void* temp = vm.heap.tryAllocateAuxiliary(nullptr, Butterfly::totalSize(0, outOfLineStorage, true, ArrayStorage::sizeFor(vectorLength)));
+ if (!temp)
+ return nullptr;
+ butterfly = Butterfly::fromBase(temp, 0, outOfLineStorage);
+ *butterfly->indexingHeader() = indexingHeaderForArrayStorage(initialLength, vectorLength);
+ ArrayStorage* storage = butterfly->arrayStorage();
+ storage->m_indexBias = 0;
+ storage->m_sparseMap.clear();
+ storage->m_numValuesInVector = initialLength;
+ for (unsigned i = initialLength; i < vectorLength; ++i)
+ storage->m_vector[i].clear();
+ }
+
+ return createWithButterfly(vm, structure, butterfly);
+}
+
void JSArray::setLengthWritable(ExecState* exec, bool writable)
{
ASSERT(isLengthWritable() || !writable);
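In the out-of-line tryCreateUninitialized() above, the tail-clearing sentinel depends on the indexing shape: double butterflies store raw doubles, so the hole is PNaN (double arrays cannot hold a genuine NaN, which is what keeps the sentinel unambiguous), while Int32/Contiguous/Undecided shapes store JSValues and use clear(). A condensed sketch of the per-shape rule, assuming the PNaN-as-hole convention from the surrounding code:

    for (unsigned i = initialLength; i < vectorLength; ++i) {
        if (hasDouble(indexingType))
            butterfly->contiguousDouble()[i] = PNaN;  // raw double storage: NaN == hole
        else
            butterfly->contiguous()[i].clear();       // JSValue storage: empty value == hole
    }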
@@ -243,13 +291,14 @@
}
// This method makes room in the vector, but leaves the new space for count slots uncleared.
-bool JSArray::unshiftCountSlowCase(VM& vm, bool addToFront, unsigned count)
+bool JSArray::unshiftCountSlowCase(VM& vm, DeferGC&, bool addToFront, unsigned count)
{
ArrayStorage* storage = ensureArrayStorage(vm);
Butterfly* butterfly = storage->butterfly();
- unsigned propertyCapacity = structure(vm)->outOfLineCapacity();
- unsigned propertySize = structure(vm)->outOfLineSize();
-
+ Structure* structure = this->structure(vm);
+ unsigned propertyCapacity = structure->outOfLineCapacity();
+ unsigned propertySize = structure->outOfLineSize();
+
// If not, we should have handled this on the fast path.
ASSERT(!addToFront || count > storage->m_indexBias);
@@ -261,7 +310,8 @@
// * desiredCapacity - how large should we like to grow the vector to - based on 2x requiredVectorLength.
unsigned length = storage->length();
- unsigned usedVectorLength = min(storage->vectorLength(), length);
+ unsigned oldVectorLength = storage->vectorLength();
+ unsigned usedVectorLength = min(oldVectorLength, length);
ASSERT(usedVectorLength <= MAX_STORAGE_VECTOR_LENGTH);
// Check that required vector length is possible, in an overflow-safe fashion.
if (count > MAX_STORAGE_VECTOR_LENGTH - usedVectorLength)
@@ -272,23 +322,29 @@
ASSERT(storage->vectorLength() <= MAX_STORAGE_VECTOR_LENGTH && (MAX_STORAGE_VECTOR_LENGTH - storage->vectorLength()) >= storage->m_indexBias);
unsigned currentCapacity = storage->vectorLength() + storage->m_indexBias;
// The calculation of desiredCapacity won't overflow, due to the range of MAX_STORAGE_VECTOR_LENGTH.
- unsigned desiredCapacity = min(MAX_STORAGE_VECTOR_LENGTH, max(BASE_VECTOR_LEN, requiredVectorLength) << 1);
+ // FIXME: This code should be fixed to avoid internal fragmentation. It's not super high
+ // priority since increaseVectorLength() will "fix" any mistakes we make, but it would be cool
+ // to get this right eventually.
+ unsigned desiredCapacity = min(MAX_STORAGE_VECTOR_LENGTH, max(BASE_ARRAY_STORAGE_VECTOR_LEN, requiredVectorLength) << 1);
// Step 2:
// We're either going to choose to allocate a new ArrayStorage, or we're going to reuse the existing one.
- DeferGC deferGC(vm.heap);
void* newAllocBase = 0;
unsigned newStorageCapacity;
+ bool allocatedNewStorage;
// If the current storage array is sufficiently large (but not too large!) then just keep using it.
if (currentCapacity > desiredCapacity && isDenseEnoughForVector(currentCapacity, requiredVectorLength)) {
- newAllocBase = butterfly->base(structure(vm));
+ newAllocBase = butterfly->base(structure);
newStorageCapacity = currentCapacity;
+ allocatedNewStorage = false;
} else {
size_t newSize = Butterfly::totalSize(0, propertyCapacity, true, ArrayStorage::sizeFor(desiredCapacity));
- if (!vm.heap.tryAllocateStorage(this, newSize, &newAllocBase))
+ newAllocBase = vm.heap.tryAllocateAuxiliary(this, newSize);
+ if (!newAllocBase)
return false;
newStorageCapacity = desiredCapacity;
+ allocatedNewStorage = true;
}
// Step 3:
@@ -306,7 +362,7 @@
// Atomic decay, + the post-capacity cannot be greater than what is available.
postCapacity = min((storage->vectorLength() - length) >> 1, newStorageCapacity - requiredVectorLength);
// If we're moving contents within the same allocation, the post-capacity is being reduced.
- ASSERT(newAllocBase != butterfly->base(structure(vm)) || postCapacity < storage->vectorLength() - length);
+ ASSERT(newAllocBase != butterfly->base(structure) || postCapacity < storage->vectorLength() - length);
}
unsigned newVectorLength = requiredVectorLength + postCapacity;
@@ -318,17 +374,24 @@
ASSERT(count + usedVectorLength <= newVectorLength);
memmove(newButterfly->arrayStorage()->m_vector + count, storage->m_vector, sizeof(JSValue) * usedVectorLength);
memmove(newButterfly->propertyStorage() - propertySize, butterfly->propertyStorage() - propertySize, sizeof(JSValue) * propertySize + sizeof(IndexingHeader) + ArrayStorage::sizeFor(0));
- } else if ((newAllocBase != butterfly->base(structure(vm))) || (newIndexBias != storage->m_indexBias)) {
+
+ if (allocatedNewStorage) {
+ // We will set the vectorLength to newVectorLength. We populated requiredVectorLength
+ // (usedVectorLength + count), which is less. Clear the difference.
+ for (unsigned i = requiredVectorLength; i < newVectorLength; ++i)
+ newButterfly->arrayStorage()->m_vector[i].clear();
+ }
+ } else if ((newAllocBase != butterfly->base(structure)) || (newIndexBias != storage->m_indexBias)) {
memmove(newButterfly->propertyStorage() - propertySize, butterfly->propertyStorage() - propertySize, sizeof(JSValue) * propertySize + sizeof(IndexingHeader) + ArrayStorage::sizeFor(0));
memmove(newButterfly->arrayStorage()->m_vector, storage->m_vector, sizeof(JSValue) * usedVectorLength);
-
- WriteBarrier<Unknown>* newVector = newButterfly->arrayStorage()->m_vector;
+
for (unsigned i = requiredVectorLength; i < newVectorLength; i++)
- newVector[i].clear();
+ newButterfly->arrayStorage()->m_vector[i].clear();
}
newButterfly->arrayStorage()->setVectorLength(newVectorLength);
newButterfly->arrayStorage()->m_indexBias = newIndexBias;
+
setButterflyWithoutChangingStructure(vm, newButterfly);
return true;
@@ -337,7 +400,7 @@
bool JSArray::setLengthWithArrayStorage(ExecState* exec, unsigned newLength, bool throwException, ArrayStorage* storage)
{
unsigned length = storage->length();
-
+
// If the length is read only then we enter sparse mode, so should enter the following 'if'.
ASSERT(isLengthWritable() || storage->m_sparseMap);
@@ -997,6 +1060,10 @@
unsigned vectorLength = storage->vectorLength();
+ // Need to have GC deferred around the unshiftCountSlowCase(), since that leaves the butterfly in
+ // a weird state: some parts of it will be left uninitialized, which we will fill in here.
+ DeferGC deferGC(vm.heap);
+
if (moveFront && storage->m_indexBias >= count) {
Butterfly* newButterfly = storage->butterfly()->unshift(structure(), count);
storage = newButterfly->arrayStorage();
@@ -1005,7 +1072,7 @@
setButterflyWithoutChangingStructure(vm, newButterfly);
} else if (!moveFront && vectorLength - length >= count)
storage = storage->butterfly()->arrayStorage();
- else if (unshiftCountSlowCase(vm, moveFront, count))
+ else if (unshiftCountSlowCase(vm, deferGC, moveFront, count))
storage = arrayStorage();
else {
throwOutOfMemoryError(exec, scope);
@@ -1199,7 +1266,6 @@
ASSERT(length == this->length());
Butterfly* butterfly = m_butterfly.get();
-
switch (indexingType()) {
case ArrayClass:
return;
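The DeferGC& parameter threaded into unshiftCountSlowCase() is deliberate API design: the function leaves count slots uninitialized for the caller to fill, so the caller must already hold a DeferGC that spans both the call and the subsequent writes, and requiring the reference makes that precondition impossible to forget. The caller-side pattern, as a sketch (names follow the hunks above):

    DeferGC deferGC(vm.heap);                       // no GC until deferGC goes out of scope
    if (unshiftCountSlowCase(vm, deferGC, moveFront, count)) {
        // The butterfly now contains 'count' uninitialized slots; fill them
        // before deferGC is destroyed, so the GC never scans uninitialized memory.
    }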
diff --git a/Source/JavaScriptCore/runtime/JSArray.h b/Source/JavaScriptCore/runtime/JSArray.h
index 6ee8e65..c3c5fb5 100644
--- a/Source/JavaScriptCore/runtime/JSArray.h
+++ b/Source/JavaScriptCore/runtime/JSArray.h
@@ -1,6 +1,6 @@
/*
* Copyright (C) 1999-2000 Harri Porten (porten@kde.org)
- * Copyright (C) 2003, 2007, 2008, 2009, 2012, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2003, 2007, 2008, 2009, 2012, 2015-2016 Apple Inc. All rights reserved.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -60,7 +60,7 @@
// contents are known at time of creation. Clients of this interface must:
// - null-check the result (indicating out of memory, or otherwise unable to allocate vector).
// - call 'initializeIndex' for all properties in sequence, for 0 <= i < initialLength.
- static JSArray* tryCreateUninitialized(VM&, Structure*, unsigned initialLength);
+ JS_EXPORT_PRIVATE static JSArray* tryCreateUninitialized(VM&, Structure*, unsigned initialLength);
JS_EXPORT_PRIVATE static bool defineOwnProperty(JSObject*, ExecState*, PropertyName, const PropertyDescriptor&, bool throwException);
@@ -168,7 +168,7 @@
bool unshiftCountWithAnyIndexingType(ExecState*, unsigned startIndex, unsigned count);
bool unshiftCountWithArrayStorage(ExecState*, unsigned startIndex, unsigned count, ArrayStorage*);
- bool unshiftCountSlowCase(VM&, bool, unsigned);
+ bool unshiftCountSlowCase(VM&, DeferGC&, bool, unsigned);
bool setLengthWithArrayStorage(ExecState*, unsigned newLength, bool throwException, ArrayStorage*);
void setLengthWritable(ExecState*, bool writable);
@@ -177,7 +177,8 @@
inline Butterfly* createContiguousArrayButterfly(VM& vm, JSCell* intendedOwner, unsigned length, unsigned& vectorLength)
{
IndexingHeader header;
- vectorLength = std::max(length, BASE_VECTOR_LEN);
+ vectorLength = Butterfly::optimalContiguousVectorLength(
+ intendedOwner ? intendedOwner->structure(vm) : 0, length);
header.setVectorLength(vectorLength);
header.setPublicLength(length);
Butterfly* result = Butterfly::create(
@@ -188,11 +189,11 @@
inline Butterfly* createArrayButterfly(VM& vm, JSCell* intendedOwner, unsigned initialLength)
{
Butterfly* butterfly = Butterfly::create(
- vm, intendedOwner, 0, 0, true, baseIndexingHeaderForArray(initialLength),
- ArrayStorage::sizeFor(BASE_VECTOR_LEN));
+ vm, intendedOwner, 0, 0, true, baseIndexingHeaderForArrayStorage(initialLength),
+ ArrayStorage::sizeFor(BASE_ARRAY_STORAGE_VECTOR_LEN));
ArrayStorage* storage = butterfly->arrayStorage();
- storage->m_indexBias = 0;
storage->m_sparseMap.clear();
+ storage->m_indexBias = 0;
storage->m_numValuesInVector = 0;
return butterfly;
}
@@ -211,57 +212,17 @@
|| hasContiguous(structure->indexingType()));
unsigned vectorLength;
butterfly = createContiguousArrayButterfly(vm, 0, initialLength, vectorLength);
- ASSERT(initialLength < MIN_ARRAY_STORAGE_CONSTRUCTION_LENGTH);
- if (hasDouble(structure->indexingType())) {
- for (unsigned i = 0; i < vectorLength; ++i)
- butterfly->contiguousDouble()[i] = PNaN;
- }
+ if (hasDouble(structure->indexingType()))
+ clearArray(butterfly->contiguousDouble().data(), vectorLength);
+ else
+ clearArray(butterfly->contiguous().data(), vectorLength);
} else {
ASSERT(
structure->indexingType() == ArrayWithSlowPutArrayStorage
|| structure->indexingType() == ArrayWithArrayStorage);
butterfly = createArrayButterfly(vm, 0, initialLength);
- }
-
- return createWithButterfly(vm, structure, butterfly);
-}
-
-inline JSArray* JSArray::tryCreateUninitialized(VM& vm, Structure* structure, unsigned initialLength)
-{
- unsigned vectorLength = std::max(BASE_VECTOR_LEN, initialLength);
- if (vectorLength > MAX_STORAGE_VECTOR_LENGTH)
- return 0;
-
- unsigned outOfLineStorage = structure->outOfLineCapacity();
-
- Butterfly* butterfly;
- if (LIKELY(!hasAnyArrayStorage(structure->indexingType()))) {
- ASSERT(
- hasUndecided(structure->indexingType())
- || hasInt32(structure->indexingType())
- || hasDouble(structure->indexingType())
- || hasContiguous(structure->indexingType()));
-
- void* temp;
- if (!vm.heap.tryAllocateStorage(0, Butterfly::totalSize(0, outOfLineStorage, true, vectorLength * sizeof(EncodedJSValue)), &temp))
- return 0;
- butterfly = Butterfly::fromBase(temp, 0, outOfLineStorage);
- butterfly->setVectorLength(vectorLength);
- butterfly->setPublicLength(initialLength);
- if (hasDouble(structure->indexingType())) {
- for (unsigned i = initialLength; i < vectorLength; ++i)
- butterfly->contiguousDouble()[i] = PNaN;
- }
- } else {
- void* temp;
- if (!vm.heap.tryAllocateStorage(0, Butterfly::totalSize(0, outOfLineStorage, true, ArrayStorage::sizeFor(vectorLength)), &temp))
- return 0;
- butterfly = Butterfly::fromBase(temp, 0, outOfLineStorage);
- *butterfly->indexingHeader() = indexingHeaderForArray(initialLength, vectorLength);
- ArrayStorage* storage = butterfly->arrayStorage();
- storage->m_indexBias = 0;
- storage->m_sparseMap.clear();
- storage->m_numValuesInVector = initialLength;
+ for (unsigned i = 0; i < BASE_ARRAY_STORAGE_VECTOR_LEN; ++i)
+ butterfly->arrayStorage()->m_vector[i].clear();
}
return createWithButterfly(vm, structure, butterfly);
diff --git a/Source/JavaScriptCore/runtime/JSArrayBufferView.h b/Source/JavaScriptCore/runtime/JSArrayBufferView.h
index 9520c0b..2aa62a3 100644
--- a/Source/JavaScriptCore/runtime/JSArrayBufferView.h
+++ b/Source/JavaScriptCore/runtime/JSArrayBufferView.h
@@ -26,6 +26,7 @@
#ifndef JSArrayBufferView_h
#define JSArrayBufferView_h
+#include "CopyBarrier.h"
#include "JSObject.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSCInlines.h b/Source/JavaScriptCore/runtime/JSCInlines.h
index 0408b16..4ca0ec5 100644
--- a/Source/JavaScriptCore/runtime/JSCInlines.h
+++ b/Source/JavaScriptCore/runtime/JSCInlines.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -41,7 +41,6 @@
#include "GCIncomingRefCountedInlines.h"
#include "HeapInlines.h"
#include "IdentifierInlines.h"
-#include "Interpreter.h"
#include "JSArrayBufferViewInlines.h"
#include "JSCJSValueInlines.h"
#include "JSFunctionInlines.h"
diff --git a/Source/JavaScriptCore/runtime/JSCJSValue.cpp b/Source/JavaScriptCore/runtime/JSCJSValue.cpp
index 36ff545..68e4c86 100644
--- a/Source/JavaScriptCore/runtime/JSCJSValue.cpp
+++ b/Source/JavaScriptCore/runtime/JSCJSValue.cpp
@@ -29,11 +29,10 @@
#include "Error.h"
#include "ExceptionHelpers.h"
#include "GetterSetter.h"
-#include "JSCJSValueInlines.h"
+#include "JSCInlines.h"
#include "JSFunction.h"
#include "JSGlobalObject.h"
#include "NumberObject.h"
-#include "StructureInlines.h"
#include <wtf/MathExtras.h>
#include <wtf/StringExtras.h>
@@ -278,7 +277,11 @@
out.print("Symbol: ", RawPointer(asCell()));
else if (structure->classInfo()->isSubClassOf(Structure::info()))
out.print("Structure: ", inContext(*jsCast<Structure*>(asCell()), context));
- else {
+ else if (structure->classInfo()->isSubClassOf(JSObject::info())) {
+ out.print("Object: ", RawPointer(asCell()));
+ out.print(" with butterfly ", RawPointer(asObject(asCell())->butterfly()));
+ out.print(" (", inContext(*structure, context), ")");
+ } else {
out.print("Cell: ", RawPointer(asCell()));
out.print(" (", inContext(*structure, context), ")");
}
diff --git a/Source/JavaScriptCore/runtime/JSCallee.cpp b/Source/JavaScriptCore/runtime/JSCallee.cpp
index d303296..9192925 100644
--- a/Source/JavaScriptCore/runtime/JSCallee.cpp
+++ b/Source/JavaScriptCore/runtime/JSCallee.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -27,13 +27,9 @@
#include "JSCallee.h"
#include "GetterSetter.h"
-#include "JSCJSValueInlines.h"
-#include "JSCell.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSGlobalObject.h"
-#include "SlotVisitorInlines.h"
#include "StackVisitor.h"
-#include "StructureInlines.h"
namespace JSC {
@@ -43,6 +39,7 @@
: Base(vm, structure)
, m_scope(vm, this, globalObject)
{
+ RELEASE_ASSERT(!isLargeAllocation());
}
JSCallee::JSCallee(VM& vm, JSScope* scope, Structure* structure)
diff --git a/Source/JavaScriptCore/runtime/JSCell.cpp b/Source/JavaScriptCore/runtime/JSCell.cpp
index 906b240..73719d3 100644
--- a/Source/JavaScriptCore/runtime/JSCell.cpp
+++ b/Source/JavaScriptCore/runtime/JSCell.cpp
@@ -58,7 +58,7 @@
size_t JSCell::estimatedSize(JSCell* cell)
{
- return MarkedBlock::blockFor(cell)->cellSize();
+ return cell->cellSize();
}
void JSCell::copyBackingStore(JSCell*, CopyVisitor&, CopyToken)
diff --git a/Source/JavaScriptCore/runtime/JSCell.h b/Source/JavaScriptCore/runtime/JSCell.h
index b5063e8..2fd7117 100644
--- a/Source/JavaScriptCore/runtime/JSCell.h
+++ b/Source/JavaScriptCore/runtime/JSCell.h
@@ -80,7 +80,7 @@
enum CreatingEarlyCellTag { CreatingEarlyCell };
JSCell(CreatingEarlyCellTag);
-
+
protected:
JSCell(VM&, Structure*);
JS_EXPORT_PRIVATE static void destroy(JSCell*);
@@ -108,8 +108,6 @@
const char* className() const;
- VM* vm() const;
-
// Extracting the value.
JS_EXPORT_PRIVATE bool getString(ExecState*, String&) const;
JS_EXPORT_PRIVATE String getString(ExecState*) const; // null string if not a string
@@ -190,6 +188,8 @@
{
return OBJECT_OFFSETOF(JSCell, m_cellState);
}
+
+ void callDestructor(VM&);
static const TypedArrayType TypedArrayStorageType = NotTypedArray;
protected:
diff --git a/Source/JavaScriptCore/runtime/JSCellInlines.h b/Source/JavaScriptCore/runtime/JSCellInlines.h
index d66bfb6..edaac4b 100644
--- a/Source/JavaScriptCore/runtime/JSCellInlines.h
+++ b/Source/JavaScriptCore/runtime/JSCellInlines.h
@@ -113,16 +113,13 @@
visitor.appendUnbarrieredPointer(&structure);
}
-inline VM* JSCell::vm() const
-{
- return MarkedBlock::blockFor(this)->vm();
-}
-
ALWAYS_INLINE VM& ExecState::vm() const
{
ASSERT(callee());
ASSERT(callee()->vm());
- return *calleeAsValue().asCell()->vm();
+ ASSERT(!callee()->isLargeAllocation());
+ // This is an important optimization since we access this so often.
+ return *calleeAsValue().asCell()->markedBlock().vm();
}
template<typename T>
@@ -233,12 +230,19 @@
&& !structure.typeInfo().overridesGetOwnPropertySlot();
}
-inline const ClassInfo* JSCell::classInfo() const
+ALWAYS_INLINE const ClassInfo* JSCell::classInfo() const
{
- MarkedBlock* block = MarkedBlock::blockFor(this);
- if (block->needsDestruction() && !(inlineTypeFlags() & StructureIsImmortal))
+ if (isLargeAllocation()) {
+ LargeAllocation& allocation = largeAllocation();
+ if (allocation.attributes().destruction == NeedsDestruction
+ && !(inlineTypeFlags() & StructureIsImmortal))
+ return static_cast<const JSDestructibleObject*>(this)->classInfo();
+ return structure(*allocation.vm())->classInfo();
+ }
+ MarkedBlock& block = markedBlock();
+ if (block.needsDestruction() && !(inlineTypeFlags() & StructureIsImmortal))
return static_cast<const JSDestructibleObject*>(this)->classInfo();
- return structure(*block->vm())->classInfo();
+ return structure(*block.vm())->classInfo();
}
inline bool JSCell::toBoolean(ExecState* exec) const
@@ -257,6 +261,18 @@
return MixedTriState;
}
+inline void JSCell::callDestructor(VM& vm)
+{
+ if (isZapped())
+ return;
+ ASSERT(structureID());
+ if (inlineTypeFlags() & StructureIsImmortal)
+ structure(vm)->classInfo()->methodTable.destroy(this);
+ else
+ jsCast<JSDestructibleObject*>(this)->classInfo()->methodTable.destroy(this);
+ zap();
+}
+
} // namespace JSC
#endif // JSCellInlines_h
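With large allocations in the mix, a cell's VM can no longer be found by unconditionally masking the cell pointer down to a MarkedBlock. The classInfo() rewrite above shows the general pattern, and ExecState::vm() keeps the fast masking path only because callees are guaranteed (by the RELEASE_ASSERTs added to InternalFunction and JSCallee) never to be large allocations. The general lookup, as a hedged sketch:

    VM* vmFor(JSCell* cell)
    {
        if (cell->isLargeAllocation())
            return cell->largeAllocation().vm(); // header stored out of line
        return cell->markedBlock().vm();         // header found by pointer masking
    }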
diff --git a/Source/JavaScriptCore/runtime/JSFunction.cpp b/Source/JavaScriptCore/runtime/JSFunction.cpp
index be1ac0b..e746c96 100644
--- a/Source/JavaScriptCore/runtime/JSFunction.cpp
+++ b/Source/JavaScriptCore/runtime/JSFunction.cpp
@@ -64,7 +64,7 @@
JSFunction* JSFunction::create(VM& vm, FunctionExecutable* executable, JSScope* scope)
{
- return create(vm, executable, scope, scope->globalObject()->functionStructure());
+ return create(vm, executable, scope, scope->globalObject(vm)->functionStructure());
}
JSFunction* JSFunction::create(VM& vm, FunctionExecutable* executable, JSScope* scope, Structure* structure)
@@ -78,7 +78,7 @@
JSFunction* JSFunction::create(VM& vm, WebAssemblyExecutable* executable, JSScope* scope)
{
JSFunction* function = new (NotNull, allocateCell<JSFunction>(vm.heap)) JSFunction(vm, executable, scope);
- ASSERT(function->structure()->globalObject());
+ ASSERT(function->structure(vm)->globalObject());
function->finishCreation(vm);
return function;
}
@@ -145,9 +145,9 @@
VM& vm = exec->vm();
JSObject* prototype = jsDynamicCast<JSObject*>(get(exec, vm.propertyNames->prototype));
if (!prototype)
- prototype = globalObject()->objectPrototype();
+ prototype = globalObject(vm)->objectPrototype();
FunctionRareData* rareData = FunctionRareData::create(vm);
- rareData->initializeObjectAllocationProfile(globalObject()->vm(), prototype, inlineCapacity);
+ rareData->initializeObjectAllocationProfile(vm, prototype, inlineCapacity);
// A DFG compilation thread may be trying to read the rare data
// We want to ensure that it sees it properly allocated
@@ -163,8 +163,8 @@
VM& vm = exec->vm();
JSObject* prototype = jsDynamicCast<JSObject*>(get(exec, vm.propertyNames->prototype));
if (!prototype)
- prototype = globalObject()->objectPrototype();
- m_rareData->initializeObjectAllocationProfile(globalObject()->vm(), prototype, inlineCapacity);
+ prototype = globalObject(vm)->objectPrototype();
+ m_rareData->initializeObjectAllocationProfile(vm, prototype, inlineCapacity);
return m_rareData.get();
}
@@ -345,37 +345,37 @@
bool JSFunction::getOwnPropertySlot(JSObject* object, ExecState* exec, PropertyName propertyName, PropertySlot& slot)
{
+ VM& vm = exec->vm();
JSFunction* thisObject = jsCast<JSFunction*>(object);
if (thisObject->isHostOrBuiltinFunction()) {
- thisObject->reifyBoundNameIfNeeded(exec, propertyName);
+ thisObject->reifyBoundNameIfNeeded(vm, exec, propertyName);
return Base::getOwnPropertySlot(thisObject, exec, propertyName, slot);
}
- if (propertyName == exec->propertyNames().prototype && !thisObject->jsExecutable()->isArrowFunction()) {
- VM& vm = exec->vm();
+ if (propertyName == vm.propertyNames->prototype && !thisObject->jsExecutable()->isArrowFunction()) {
unsigned attributes;
PropertyOffset offset = thisObject->getDirectOffset(vm, propertyName, attributes);
if (!isValidOffset(offset)) {
JSObject* prototype = nullptr;
if (thisObject->jsExecutable()->parseMode() == SourceParseMode::GeneratorWrapperFunctionMode)
- prototype = constructEmptyObject(exec, thisObject->globalObject()->generatorPrototype());
+ prototype = constructEmptyObject(exec, thisObject->globalObject(vm)->generatorPrototype());
else
prototype = constructEmptyObject(exec);
- prototype->putDirect(vm, exec->propertyNames().constructor, thisObject, DontEnum);
- thisObject->putDirect(vm, exec->propertyNames().prototype, prototype, DontDelete | DontEnum);
- offset = thisObject->getDirectOffset(vm, exec->propertyNames().prototype, attributes);
+ prototype->putDirect(vm, vm.propertyNames->constructor, thisObject, DontEnum);
+ thisObject->putDirect(vm, vm.propertyNames->prototype, prototype, DontDelete | DontEnum);
+ offset = thisObject->getDirectOffset(vm, vm.propertyNames->prototype, attributes);
ASSERT(isValidOffset(offset));
}
slot.setValue(thisObject, attributes, thisObject->getDirect(offset), offset);
}
- if (propertyName == exec->propertyNames().arguments) {
+ if (propertyName == vm.propertyNames->arguments) {
if (thisObject->jsExecutable()->isStrictMode() || thisObject->jsExecutable()->isClassConstructorFunction()) {
bool result = Base::getOwnPropertySlot(thisObject, exec, propertyName, slot);
if (!result) {
- GetterSetter* errorGetterSetter = thisObject->globalObject()->throwTypeErrorArgumentsCalleeAndCallerGetterSetter();
+ GetterSetter* errorGetterSetter = thisObject->globalObject(vm)->throwTypeErrorArgumentsCalleeAndCallerGetterSetter();
thisObject->putDirectAccessor(exec, propertyName, errorGetterSetter, DontDelete | DontEnum | Accessor);
result = Base::getOwnPropertySlot(thisObject, exec, propertyName, slot);
ASSERT(result);
@@ -386,11 +386,11 @@
return true;
}
- if (propertyName == exec->propertyNames().caller) {
+ if (propertyName == vm.propertyNames->caller) {
if (thisObject->jsExecutable()->isStrictMode() || thisObject->jsExecutable()->isClassConstructorFunction()) {
bool result = Base::getOwnPropertySlot(thisObject, exec, propertyName, slot);
if (!result) {
- GetterSetter* errorGetterSetter = thisObject->globalObject()->throwTypeErrorArgumentsCalleeAndCallerGetterSetter();
+ GetterSetter* errorGetterSetter = thisObject->globalObject(vm)->throwTypeErrorArgumentsCalleeAndCallerGetterSetter();
thisObject->putDirectAccessor(exec, propertyName, errorGetterSetter, DontDelete | DontEnum | Accessor);
result = Base::getOwnPropertySlot(thisObject, exec, propertyName, slot);
ASSERT(result);
@@ -401,7 +401,7 @@
return true;
}
- thisObject->reifyLazyPropertyIfNeeded(exec, propertyName);
+ thisObject->reifyLazyPropertyIfNeeded(vm, exec, propertyName);
return Base::getOwnPropertySlot(thisObject, exec, propertyName, slot);
}
@@ -437,11 +437,11 @@
return ordinarySetSlow(exec, thisObject, propertyName, value, slot.thisValue(), slot.isStrictMode());
if (thisObject->isHostOrBuiltinFunction()) {
- thisObject->reifyBoundNameIfNeeded(exec, propertyName);
+ thisObject->reifyBoundNameIfNeeded(vm, exec, propertyName);
return Base::put(thisObject, exec, propertyName, value, slot);
}
- if (propertyName == exec->propertyNames().prototype) {
+ if (propertyName == vm.propertyNames->prototype) {
// Make sure prototype has been reified, such that it can only be overwritten
// following the rules set out in ECMA-262 8.12.9.
PropertySlot slot(thisObject, PropertySlot::InternalMethodType::VMInquiry);
@@ -453,19 +453,19 @@
scope.release();
return Base::put(thisObject, exec, propertyName, value, dontCache);
}
- if (thisObject->jsExecutable()->isStrictMode() && (propertyName == exec->propertyNames().arguments || propertyName == exec->propertyNames().caller)) {
+ if (thisObject->jsExecutable()->isStrictMode() && (propertyName == vm.propertyNames->arguments || propertyName == vm.propertyNames->caller)) {
// This will trigger the property to be reified, if this is not already the case!
bool okay = thisObject->hasProperty(exec, propertyName);
ASSERT_UNUSED(okay, okay);
scope.release();
return Base::put(thisObject, exec, propertyName, value, slot);
}
- if (propertyName == exec->propertyNames().arguments || propertyName == exec->propertyNames().caller) {
+ if (propertyName == vm.propertyNames->arguments || propertyName == vm.propertyNames->caller) {
if (slot.isStrictMode())
throwTypeError(exec, scope, StrictModeReadonlyPropertyWriteError);
return false;
}
- thisObject->reifyLazyPropertyIfNeeded(exec, propertyName);
+ thisObject->reifyLazyPropertyIfNeeded(vm, exec, propertyName);
scope.release();
return Base::put(thisObject, exec, propertyName, value, slot);
}
@@ -474,16 +474,17 @@
{
JSFunction* thisObject = jsCast<JSFunction*>(cell);
if (thisObject->isHostOrBuiltinFunction())
- thisObject->reifyBoundNameIfNeeded(exec, propertyName);
+ thisObject->reifyBoundNameIfNeeded(exec->vm(), exec, propertyName);
else if (exec->vm().deletePropertyMode() != VM::DeletePropertyMode::IgnoreConfigurable) {
// For non-host functions, don't let these properties be deleted, except by DefineOwnProperty.
+ VM& vm = exec->vm();
FunctionExecutable* executable = thisObject->jsExecutable();
- if (propertyName == exec->propertyNames().arguments
- || (propertyName == exec->propertyNames().prototype && !executable->isArrowFunction())
- || propertyName == exec->propertyNames().caller)
+ if (propertyName == vm.propertyNames->arguments
+ || (propertyName == vm.propertyNames->prototype && !executable->isArrowFunction())
+ || propertyName == vm.propertyNames->caller)
return false;
- thisObject->reifyLazyPropertyIfNeeded(exec, propertyName);
+ thisObject->reifyLazyPropertyIfNeeded(vm, exec, propertyName);
}
return Base::deleteProperty(thisObject, exec, propertyName);
@@ -496,11 +497,11 @@
JSFunction* thisObject = jsCast<JSFunction*>(object);
if (thisObject->isHostOrBuiltinFunction()) {
- thisObject->reifyBoundNameIfNeeded(exec, propertyName);
+ thisObject->reifyBoundNameIfNeeded(vm, exec, propertyName);
return Base::defineOwnProperty(object, exec, propertyName, descriptor, throwException);
}
- if (propertyName == exec->propertyNames().prototype) {
+ if (propertyName == vm.propertyNames->prototype) {
// Make sure prototype has been reified, such that it can only be overwritten
// following the rules set out in ECMA-262 8.12.9.
PropertySlot slot(thisObject, PropertySlot::InternalMethodType::VMInquiry);
@@ -511,24 +512,24 @@
}
bool valueCheck;
- if (propertyName == exec->propertyNames().arguments) {
+ if (propertyName == vm.propertyNames->arguments) {
if (thisObject->jsExecutable()->isStrictMode()) {
PropertySlot slot(thisObject, PropertySlot::InternalMethodType::VMInquiry);
if (!Base::getOwnPropertySlot(thisObject, exec, propertyName, slot))
- thisObject->putDirectAccessor(exec, propertyName, thisObject->globalObject()->throwTypeErrorArgumentsCalleeAndCallerGetterSetter(), DontDelete | DontEnum | Accessor);
+ thisObject->putDirectAccessor(exec, propertyName, thisObject->globalObject(vm)->throwTypeErrorArgumentsCalleeAndCallerGetterSetter(), DontDelete | DontEnum | Accessor);
return Base::defineOwnProperty(object, exec, propertyName, descriptor, throwException);
}
valueCheck = !descriptor.value() || sameValue(exec, descriptor.value(), retrieveArguments(exec, thisObject));
- } else if (propertyName == exec->propertyNames().caller) {
+ } else if (propertyName == vm.propertyNames->caller) {
if (thisObject->jsExecutable()->isStrictMode()) {
PropertySlot slot(thisObject, PropertySlot::InternalMethodType::VMInquiry);
if (!Base::getOwnPropertySlot(thisObject, exec, propertyName, slot))
- thisObject->putDirectAccessor(exec, propertyName, thisObject->globalObject()->throwTypeErrorArgumentsCalleeAndCallerGetterSetter(), DontDelete | DontEnum | Accessor);
+ thisObject->putDirectAccessor(exec, propertyName, thisObject->globalObject(vm)->throwTypeErrorArgumentsCalleeAndCallerGetterSetter(), DontDelete | DontEnum | Accessor);
return Base::defineOwnProperty(object, exec, propertyName, descriptor, throwException);
}
valueCheck = !descriptor.value() || sameValue(exec, descriptor.value(), retrieveCallerFunction(exec, thisObject));
} else {
- thisObject->reifyLazyPropertyIfNeeded(exec, propertyName);
+ thisObject->reifyLazyPropertyIfNeeded(vm, exec, propertyName);
return Base::defineOwnProperty(object, exec, propertyName, descriptor, throwException);
}
@@ -590,6 +591,7 @@
void JSFunction::setFunctionName(ExecState* exec, JSValue value)
{
+ VM& vm = exec->vm();
// The "name" property may have been already been defined as part of a property list in an
// object literal (and therefore reified).
if (hasReifiedName())
@@ -605,7 +607,6 @@
else
name = makeString('[', String(&uid), ']');
} else {
- VM& vm = exec->vm();
JSString* jsStr = value.toString(exec);
if (vm.exception())
return;
@@ -613,25 +614,24 @@
if (vm.exception())
return;
}
- reifyName(exec, name);
+ reifyName(vm, exec, name);
}
-void JSFunction::reifyLength(ExecState* exec)
+void JSFunction::reifyLength(VM& vm)
{
- VM& vm = exec->vm();
FunctionRareData* rareData = this->rareData(vm);
ASSERT(!hasReifiedLength());
ASSERT(!isHostFunction());
JSValue initialValue = jsNumber(jsExecutable()->parameterCount());
unsigned initialAttributes = DontEnum | ReadOnly;
- const Identifier& identifier = exec->propertyNames().length;
+ const Identifier& identifier = vm.propertyNames->length;
putDirect(vm, identifier, initialValue, initialAttributes);
rareData->setHasReifiedLength();
}
-void JSFunction::reifyName(ExecState* exec)
+void JSFunction::reifyName(VM& vm, ExecState* exec)
{
const Identifier& ecmaName = jsExecutable()->ecmaName();
String name;
@@ -642,18 +642,17 @@
name = exec->propertyNames().defaultKeyword.string();
else
name = ecmaName.string();
- reifyName(exec, name);
+ reifyName(vm, exec, name);
}
-void JSFunction::reifyName(ExecState* exec, String name)
+void JSFunction::reifyName(VM& vm, ExecState* exec, String name)
{
- VM& vm = exec->vm();
FunctionRareData* rareData = this->rareData(vm);
ASSERT(!hasReifiedName());
ASSERT(!isHostFunction());
unsigned initialAttributes = DontEnum | ReadOnly;
- const Identifier& propID = exec->propertyNames().name;
+ const Identifier& propID = vm.propertyNames->name;
if (exec->lexicalGlobalObject()->needsSiteSpecificQuirks()) {
auto illegalCharMatcher = [] (UChar ch) -> bool {
@@ -672,20 +671,20 @@
rareData->setHasReifiedName();
}
-void JSFunction::reifyLazyPropertyIfNeeded(ExecState* exec, PropertyName propertyName)
+void JSFunction::reifyLazyPropertyIfNeeded(VM& vm, ExecState* exec, PropertyName propertyName)
{
- if (propertyName == exec->propertyNames().length) {
+ if (propertyName == vm.propertyNames->length) {
if (!hasReifiedLength())
- reifyLength(exec);
- } else if (propertyName == exec->propertyNames().name) {
+ reifyLength(vm);
+ } else if (propertyName == vm.propertyNames->name) {
if (!hasReifiedName())
- reifyName(exec);
+ reifyName(vm, exec);
}
}
-void JSFunction::reifyBoundNameIfNeeded(ExecState* exec, PropertyName propertyName)
+void JSFunction::reifyBoundNameIfNeeded(VM& vm, ExecState* exec, PropertyName propertyName)
{
- const Identifier& nameIdent = exec->propertyNames().name;
+ const Identifier& nameIdent = vm.propertyNames->name;
if (propertyName != nameIdent)
return;
@@ -693,7 +692,6 @@
return;
if (this->inherits(JSBoundFunction::info())) {
- VM& vm = exec->vm();
FunctionRareData* rareData = this->rareData(vm);
String name = makeString("bound ", static_cast<NativeExecutable*>(m_executable.get())->name());
unsigned initialAttributes = DontEnum | ReadOnly;
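The JSFunction changes above apply one refactoring throughout: once a caller holds a VM&, thread it down instead of re-deriving it via exec->vm() and exec->propertyNames() at every level. A minimal sketch of the pattern, using placeholder names rather than real JSC entry points:

    // Sketch only: hoist the VM& once, then pass it to the helpers.
    static void reifySomethingSketch(VM& vm, JSObject* object)
    {
        // Identifiers come straight off the VM; no ExecState round-trip.
        const Identifier& ident = vm.propertyNames->length;
        object->putDirect(vm, ident, jsNumber(0), DontEnum | ReadOnly);
    }

Callers that only have an ExecState* fetch the VM once at the top (VM& vm = exec->vm()) and hand it to everything below, which is exactly what setFunctionName now does.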
diff --git a/Source/JavaScriptCore/runtime/JSFunction.h b/Source/JavaScriptCore/runtime/JSFunction.h
index fa946f2..cdb6f75 100644
--- a/Source/JavaScriptCore/runtime/JSFunction.h
+++ b/Source/JavaScriptCore/runtime/JSFunction.h
@@ -189,11 +189,11 @@
bool hasReifiedLength() const;
bool hasReifiedName() const;
- void reifyLength(ExecState*);
- void reifyName(ExecState*);
- void reifyBoundNameIfNeeded(ExecState*, PropertyName);
- void reifyName(ExecState*, String name);
- void reifyLazyPropertyIfNeeded(ExecState*, PropertyName propertyName);
+ void reifyLength(VM&);
+ void reifyName(VM&, ExecState*);
+ void reifyBoundNameIfNeeded(VM&, ExecState*, PropertyName);
+ void reifyName(VM&, ExecState*, String name);
+ void reifyLazyPropertyIfNeeded(VM&, ExecState*, PropertyName propertyName);
friend class LLIntOffsetsExtractor;
diff --git a/Source/JavaScriptCore/runtime/JSFunctionInlines.h b/Source/JavaScriptCore/runtime/JSFunctionInlines.h
index 8a2db18..72a23c0 100644
--- a/Source/JavaScriptCore/runtime/JSFunctionInlines.h
+++ b/Source/JavaScriptCore/runtime/JSFunctionInlines.h
@@ -35,7 +35,7 @@
VM& vm, FunctionExecutable* executable, JSScope* scope)
{
ASSERT(executable->singletonFunction()->hasBeenInvalidated());
- return createImpl(vm, executable, scope, scope->globalObject()->functionStructure());
+ return createImpl(vm, executable, scope, scope->globalObject(vm)->functionStructure());
}
inline JSFunction::JSFunction(VM& vm, FunctionExecutable* executable, JSScope* scope, Structure* structure)
@@ -47,7 +47,7 @@
#if ENABLE(WEBASSEMBLY)
inline JSFunction::JSFunction(VM& vm, WebAssemblyExecutable* executable, JSScope* scope)
- : Base(vm, scope, scope->globalObject()->functionStructure())
+ : Base(vm, scope, scope->globalObject(vm)->functionStructure())
, m_executable(vm, this, executable)
, m_rareData()
{
diff --git a/Source/JavaScriptCore/runtime/JSGenericTypedArrayViewInlines.h b/Source/JavaScriptCore/runtime/JSGenericTypedArrayViewInlines.h
index a4d212c..d803a08 100644
--- a/Source/JavaScriptCore/runtime/JSGenericTypedArrayViewInlines.h
+++ b/Source/JavaScriptCore/runtime/JSGenericTypedArrayViewInlines.h
@@ -520,25 +520,15 @@
// up. But if you do *anything* to trigger a GC watermark check, it will know
// that you *had* done those allocations and it will GC appropriately.
Heap* heap = Heap::heap(thisObject);
+ VM& vm = *heap->vm();
DeferGCForAWhile deferGC(*heap);
ASSERT(!thisObject->hasIndexingHeader());
- size_t size = thisObject->byteSize();
-
- if (thisObject->m_mode == FastTypedArray
- && !thisObject->butterfly() && size >= sizeof(IndexingHeader)) {
- ASSERT(thisObject->m_vector);
- // Reuse already allocated memory if at all possible.
- thisObject->m_butterfly.setWithoutBarrier(
- bitwise_cast<IndexingHeader*>(thisObject->vector())->butterfly());
- } else {
- RELEASE_ASSERT(!thisObject->hasIndexingHeader());
- VM& vm = *heap->vm();
- thisObject->m_butterfly.set(vm, thisObject, Butterfly::createOrGrowArrayRight(
- thisObject->butterfly(), vm, thisObject, thisObject->structure(),
- thisObject->structure()->outOfLineCapacity(), false, 0, 0));
- }
+ RELEASE_ASSERT(!thisObject->hasIndexingHeader());
+ thisObject->m_butterfly.set(vm, thisObject, Butterfly::createOrGrowArrayRight(
+ thisObject->butterfly(), vm, thisObject, thisObject->structure(),
+ thisObject->structure()->outOfLineCapacity(), false, 0, 0));
RefPtr<ArrayBuffer> buffer;
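The branch deleted above recycled a FastTypedArray's inline vector as butterfly storage by aliasing its first bytes as an IndexingHeader. With butterflies now allocated as Auxiliary cells in MarkedSpace, the GC keeps a butterfly alive by marking the cell its base points into, so an aliased pointer into a typed array's vector is no longer a safe butterfly; the slow path therefore always allocates a genuine one. The now-removed trick, for reference:

    // Removed trick (illustrative): alias the vector as an IndexingHeader and
    // fish a butterfly out of it, avoiding an allocation.
    //   thisObject->m_butterfly.setWithoutBarrier(
    //       bitwise_cast<IndexingHeader*>(thisObject->vector())->butterfly());
    // visitButterfly now does markAuxiliary(butterfly->base(...)), which
    // requires the base to be a real MarkedSpace cell, so this aliased
    // pointer would confuse the collector.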
diff --git a/Source/JavaScriptCore/runtime/JSInternalPromise.cpp b/Source/JavaScriptCore/runtime/JSInternalPromise.cpp
index f952592..90f8032 100644
--- a/Source/JavaScriptCore/runtime/JSInternalPromise.cpp
+++ b/Source/JavaScriptCore/runtime/JSInternalPromise.cpp
@@ -27,10 +27,7 @@
#include "JSInternalPromise.h"
#include "BuiltinNames.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
-#include "JSObjectInlines.h"
-#include "StructureInlines.h"
+#include "JSCInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSInternalPromiseConstructor.cpp b/Source/JavaScriptCore/runtime/JSInternalPromiseConstructor.cpp
index 9fcc86e..9a214c6 100644
--- a/Source/JavaScriptCore/runtime/JSInternalPromiseConstructor.cpp
+++ b/Source/JavaScriptCore/runtime/JSInternalPromiseConstructor.cpp
@@ -27,11 +27,9 @@
#include "JSInternalPromiseConstructor.h"
#include "JSCBuiltins.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSInternalPromise.h"
#include "JSInternalPromisePrototype.h"
-#include "StructureInlines.h"
#include "JSInternalPromiseConstructor.lut.h"
diff --git a/Source/JavaScriptCore/runtime/JSInternalPromiseDeferred.cpp b/Source/JavaScriptCore/runtime/JSInternalPromiseDeferred.cpp
index ab4a8f4..0e0f3dd 100644
--- a/Source/JavaScriptCore/runtime/JSInternalPromiseDeferred.cpp
+++ b/Source/JavaScriptCore/runtime/JSInternalPromiseDeferred.cpp
@@ -29,13 +29,9 @@
#include "BuiltinNames.h"
#include "Error.h"
#include "Exception.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSInternalPromise.h"
#include "JSInternalPromiseConstructor.h"
-#include "JSObjectInlines.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSInternalPromisePrototype.cpp b/Source/JavaScriptCore/runtime/JSInternalPromisePrototype.cpp
index ecedfaf..5db434b 100644
--- a/Source/JavaScriptCore/runtime/JSInternalPromisePrototype.cpp
+++ b/Source/JavaScriptCore/runtime/JSInternalPromisePrototype.cpp
@@ -28,12 +28,10 @@
#include "Error.h"
#include "JSCBuiltins.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSGlobalObject.h"
#include "JSInternalPromise.h"
#include "Microtask.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSJob.cpp b/Source/JavaScriptCore/runtime/JSJob.cpp
index d11ffa3..00bbfbd 100644
--- a/Source/JavaScriptCore/runtime/JSJob.cpp
+++ b/Source/JavaScriptCore/runtime/JSJob.cpp
@@ -28,12 +28,10 @@
#include "Error.h"
#include "Exception.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSGlobalObject.h"
#include "JSObjectInlines.h"
#include "Microtask.h"
-#include "SlotVisitorInlines.h"
#include "StrongInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSMapIterator.cpp b/Source/JavaScriptCore/runtime/JSMapIterator.cpp
index f13ea5f..2c388a2 100644
--- a/Source/JavaScriptCore/runtime/JSMapIterator.cpp
+++ b/Source/JavaScriptCore/runtime/JSMapIterator.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013 Apple, Inc. All rights reserved.
+ * Copyright (C) 2013, 2016 Apple, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -26,12 +26,9 @@
#include "config.h"
#include "JSMapIterator.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSMap.h"
#include "MapDataInlines.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSModuleNamespaceObject.cpp b/Source/JavaScriptCore/runtime/JSModuleNamespaceObject.cpp
index 89834c4..0423333 100644
--- a/Source/JavaScriptCore/runtime/JSModuleNamespaceObject.cpp
+++ b/Source/JavaScriptCore/runtime/JSModuleNamespaceObject.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -27,14 +27,10 @@
#include "JSModuleNamespaceObject.h"
#include "Error.h"
-#include "IdentifierInlines.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSModuleEnvironment.h"
#include "JSModuleRecord.h"
#include "JSPropertyNameIterator.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSModuleRecord.cpp b/Source/JavaScriptCore/runtime/JSModuleRecord.cpp
index 21bac79..eea8435 100644
--- a/Source/JavaScriptCore/runtime/JSModuleRecord.cpp
+++ b/Source/JavaScriptCore/runtime/JSModuleRecord.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -28,15 +28,11 @@
#include "Error.h"
#include "Executable.h"
-#include "IdentifierInlines.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "Interpreter.h"
+#include "JSCInlines.h"
#include "JSMap.h"
#include "JSModuleEnvironment.h"
#include "JSModuleNamespaceObject.h"
-#include "JSObjectInlines.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSObject.cpp b/Source/JavaScriptCore/runtime/JSObject.cpp
index 6e72e59..7bf48fe 100644
--- a/Source/JavaScriptCore/runtime/JSObject.cpp
+++ b/Source/JavaScriptCore/runtime/JSObject.cpp
@@ -88,77 +88,6 @@
}
}
-ALWAYS_INLINE void JSObject::copyButterfly(CopyVisitor& visitor, Butterfly* butterfly, size_t storageSize)
-{
- ASSERT(butterfly);
-
- Structure* structure = this->structure();
-
- size_t propertyCapacity = structure->outOfLineCapacity();
- size_t preCapacity;
- size_t indexingPayloadSizeInBytes;
- bool hasIndexingHeader = this->hasIndexingHeader();
- if (UNLIKELY(hasIndexingHeader)) {
- preCapacity = butterfly->indexingHeader()->preCapacity(structure);
- indexingPayloadSizeInBytes = butterfly->indexingHeader()->indexingPayloadSizeInBytes(structure);
- } else {
- preCapacity = 0;
- indexingPayloadSizeInBytes = 0;
- }
- size_t capacityInBytes = Butterfly::totalSize(preCapacity, propertyCapacity, hasIndexingHeader, indexingPayloadSizeInBytes);
- if (visitor.checkIfShouldCopy(butterfly->base(preCapacity, propertyCapacity))) {
- Butterfly* newButterfly = Butterfly::createUninitializedDuringCollection(visitor, preCapacity, propertyCapacity, hasIndexingHeader, indexingPayloadSizeInBytes);
-
- // Copy the properties.
- PropertyStorage currentTarget = newButterfly->propertyStorage();
- PropertyStorage currentSource = butterfly->propertyStorage();
- for (size_t count = storageSize; count--;)
- (--currentTarget)->setWithoutWriteBarrier((--currentSource)->get());
-
- if (UNLIKELY(hasIndexingHeader)) {
- *newButterfly->indexingHeader() = *butterfly->indexingHeader();
-
- // Copy the array if appropriate.
-
- WriteBarrier<Unknown>* currentTarget;
- WriteBarrier<Unknown>* currentSource;
- size_t count;
-
- switch (this->indexingType()) {
- case ALL_UNDECIDED_INDEXING_TYPES:
- case ALL_CONTIGUOUS_INDEXING_TYPES:
- case ALL_INT32_INDEXING_TYPES:
- case ALL_DOUBLE_INDEXING_TYPES: {
- currentTarget = newButterfly->contiguous().data();
- currentSource = butterfly->contiguous().data();
- RELEASE_ASSERT(newButterfly->publicLength() <= newButterfly->vectorLength());
- count = newButterfly->vectorLength();
- break;
- }
-
- case ALL_ARRAY_STORAGE_INDEXING_TYPES: {
- newButterfly->arrayStorage()->copyHeaderFromDuringGC(*butterfly->arrayStorage());
- currentTarget = newButterfly->arrayStorage()->m_vector;
- currentSource = butterfly->arrayStorage()->m_vector;
- count = newButterfly->arrayStorage()->vectorLength();
- break;
- }
-
- default:
- currentTarget = 0;
- currentSource = 0;
- count = 0;
- break;
- }
-
- memcpy(currentTarget, currentSource, count * sizeof(EncodedJSValue));
- }
-
- m_butterfly.setWithoutBarrier(newButterfly);
- visitor.didCopy(butterfly->base(preCapacity, propertyCapacity), capacityInBytes);
- }
-}
-
ALWAYS_INLINE void JSObject::visitButterfly(SlotVisitor& visitor, Butterfly* butterfly, Structure* structure)
{
ASSERT(butterfly);
@@ -166,22 +95,21 @@
size_t storageSize = structure->outOfLineSize();
size_t propertyCapacity = structure->outOfLineCapacity();
size_t preCapacity;
- size_t indexingPayloadSizeInBytes;
bool hasIndexingHeader = this->hasIndexingHeader();
- if (UNLIKELY(hasIndexingHeader)) {
+ if (UNLIKELY(hasIndexingHeader))
preCapacity = butterfly->indexingHeader()->preCapacity(structure);
- indexingPayloadSizeInBytes = butterfly->indexingHeader()->indexingPayloadSizeInBytes(structure);
- } else {
+ else
preCapacity = 0;
- indexingPayloadSizeInBytes = 0;
- }
- size_t capacityInBytes = Butterfly::totalSize(preCapacity, propertyCapacity, hasIndexingHeader, indexingPayloadSizeInBytes);
+
+ HeapCell* base = bitwise_cast<HeapCell*>(butterfly->base(preCapacity, propertyCapacity));
+
+ ASSERT(Heap::heap(base) == visitor.heap());
+ // Keep the butterfly alive.
+ visitor.markAuxiliary(base);
+
// Mark the properties.
visitor.appendValuesHidden(butterfly->propertyStorage() - storageSize, storageSize);
- visitor.copyLater(
- this, ButterflyCopyToken,
- butterfly->base(preCapacity, propertyCapacity), capacityInBytes);
// Mark the array if appropriate.
switch (this->indexingType()) {
@@ -225,19 +153,6 @@
#endif
}
-void JSObject::copyBackingStore(JSCell* cell, CopyVisitor& visitor, CopyToken token)
-{
- JSObject* thisObject = jsCast<JSObject*>(cell);
- ASSERT_GC_OBJECT_INHERITS(thisObject, info());
-
- if (token != ButterflyCopyToken)
- return;
-
- Butterfly* butterfly = thisObject->m_butterfly.get();
- if (butterfly)
- thisObject->copyButterfly(visitor, butterfly, thisObject->structure()->outOfLineSize());
-}
-
void JSObject::heapSnapshot(JSCell* cell, HeapSnapshotBuilder& builder)
{
JSObject* thisObject = jsCast<JSObject*>(cell);
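With copying gone, visitButterfly's contract shrinks from "schedule a copyLater() and reconcile in copyBackingStore()" to "pin the backing cell with a single mark". Condensed from the hunk above, with only the surrounding names trimmed:

    void visitButterflyCondensed(SlotVisitor& visitor, Butterfly* butterfly,
        size_t preCapacity, size_t propertyCapacity, size_t storageSize)
    {
        // The butterfly's base is an Auxiliary HeapCell in MarkedSpace;
        // marking it keeps the butterfly alive, and nothing ever moves.
        HeapCell* base = bitwise_cast<HeapCell*>(
            butterfly->base(preCapacity, propertyCapacity));
        visitor.markAuxiliary(base);
        // Out-of-line property values are traced exactly as before.
        visitor.appendValuesHidden(
            butterfly->propertyStorage() - storageSize, storageSize);
    }

Note that indexingPayloadSizeInBytes and capacityInBytes drop out entirely: they existed only to tell the copier how much to move.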
@@ -783,20 +698,22 @@
if (!vm.prototypeMap.isPrototype(this))
return;
- globalObject()->haveABadTime(vm);
+ globalObject(vm)->haveABadTime(vm);
}
-Butterfly* JSObject::createInitialIndexedStorage(VM& vm, unsigned length, size_t elementSize)
+Butterfly* JSObject::createInitialIndexedStorage(VM& vm, unsigned length)
{
ASSERT(length < MAX_ARRAY_INDEX);
IndexingType oldType = indexingType();
ASSERT_UNUSED(oldType, !hasIndexedProperties(oldType));
ASSERT(!structure()->needsSlowPutIndexing());
ASSERT(!indexingShouldBeSparse());
- unsigned vectorLength = std::max(length, BASE_VECTOR_LEN);
+ Structure* structure = this->structure(vm);
+ unsigned propertyCapacity = structure->outOfLineCapacity();
+ unsigned vectorLength = Butterfly::optimalContiguousVectorLength(propertyCapacity, length);
Butterfly* newButterfly = Butterfly::createOrGrowArrayRight(
- m_butterfly.get(), vm, this, structure(), structure()->outOfLineCapacity(), false, 0,
- elementSize * vectorLength);
+ m_butterfly.get(), vm, this, structure, propertyCapacity, false, 0,
+ sizeof(EncodedJSValue) * vectorLength);
newButterfly->setPublicLength(length);
newButterfly->setVectorLength(vectorLength);
return newButterfly;
@@ -805,7 +722,7 @@
Butterfly* JSObject::createInitialUndecided(VM& vm, unsigned length)
{
DeferGC deferGC(vm.heap);
- Butterfly* newButterfly = createInitialIndexedStorage(vm, length, sizeof(EncodedJSValue));
+ Butterfly* newButterfly = createInitialIndexedStorage(vm, length);
Structure* newStructure = Structure::nonPropertyTransition(vm, structure(vm), NonPropertyTransition::AllocateUndecided);
setStructureAndButterfly(vm, newStructure, newButterfly);
return newButterfly;
@@ -814,7 +731,9 @@
ContiguousJSValues JSObject::createInitialInt32(VM& vm, unsigned length)
{
DeferGC deferGC(vm.heap);
- Butterfly* newButterfly = createInitialIndexedStorage(vm, length, sizeof(EncodedJSValue));
+ Butterfly* newButterfly = createInitialIndexedStorage(vm, length);
+ for (unsigned i = newButterfly->vectorLength(); i--;)
+ newButterfly->contiguousInt32()[i].setWithoutWriteBarrier(JSValue());
Structure* newStructure = Structure::nonPropertyTransition(vm, structure(vm), NonPropertyTransition::AllocateInt32);
setStructureAndButterfly(vm, newStructure, newButterfly);
return newButterfly->contiguousInt32();
@@ -823,7 +742,7 @@
ContiguousDoubles JSObject::createInitialDouble(VM& vm, unsigned length)
{
DeferGC deferGC(vm.heap);
- Butterfly* newButterfly = createInitialIndexedStorage(vm, length, sizeof(double));
+ Butterfly* newButterfly = createInitialIndexedStorage(vm, length);
for (unsigned i = newButterfly->vectorLength(); i--;)
newButterfly->contiguousDouble()[i] = PNaN;
Structure* newStructure = Structure::nonPropertyTransition(vm, structure(vm), NonPropertyTransition::AllocateDouble);
@@ -834,7 +753,9 @@
ContiguousJSValues JSObject::createInitialContiguous(VM& vm, unsigned length)
{
DeferGC deferGC(vm.heap);
- Butterfly* newButterfly = createInitialIndexedStorage(vm, length, sizeof(EncodedJSValue));
+ Butterfly* newButterfly = createInitialIndexedStorage(vm, length);
+ for (unsigned i = newButterfly->vectorLength(); i--;)
+ newButterfly->contiguous()[i].setWithoutWriteBarrier(JSValue());
Structure* newStructure = Structure::nonPropertyTransition(vm, structure(vm), NonPropertyTransition::AllocateContiguous);
setStructureAndButterfly(vm, newStructure, newButterfly);
return newButterfly->contiguous();
@@ -857,6 +778,8 @@
result->m_sparseMap.clear();
result->m_numValuesInVector = 0;
result->m_indexBias = 0;
+ for (size_t i = vectorLength; i--;)
+ result->m_vector[i].setWithoutWriteBarrier(JSValue());
Structure* newStructure = Structure::nonPropertyTransition(vm, structure, structure->suggestedArrayStorageTransition());
setStructureAndButterfly(vm, newStructure, newButterfly);
return result;
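The setWithoutWriteBarrier(JSValue()) loops that now appear in createInitialInt32, createInitialContiguous and createArrayStorage all enforce the same new invariant: MarkedSpace returns recycled, non-zeroed memory, so every slot up to vectorLength must be cleared before a GC is allowed to scan the cell. The invariant as a standalone helper (zeroVectorTail is a hypothetical name; the patch inlines this loop at each call site):

    static void zeroVectorTail(WriteBarrier<Unknown>* vector, unsigned from, unsigned to)
    {
        // JSValue() is the empty value. No write barrier is needed because
        // the stale contents were never reachable as JSValues.
        for (unsigned i = from; i < to; ++i)
            vector[i].setWithoutWriteBarrier(JSValue());
    }

The double case is analogous but fills with PNaN, as createInitialDouble already did.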
@@ -864,12 +787,18 @@
ArrayStorage* JSObject::createInitialArrayStorage(VM& vm)
{
- return createArrayStorage(vm, 0, BASE_VECTOR_LEN);
+ return createArrayStorage(
+ vm, 0, ArrayStorage::optimalVectorLength(0, structure(vm)->outOfLineCapacity(), 0));
}
ContiguousJSValues JSObject::convertUndecidedToInt32(VM& vm)
{
ASSERT(hasUndecided(indexingType()));
+
+ Butterfly* butterfly = m_butterfly.get();
+ for (unsigned i = butterfly->vectorLength(); i--;)
+ butterfly->contiguousInt32()[i].setWithoutWriteBarrier(JSValue());
+
setStructure(vm, Structure::nonPropertyTransition(vm, structure(vm), NonPropertyTransition::AllocateInt32));
return m_butterfly.get()->contiguousInt32();
}
@@ -889,6 +818,11 @@
ContiguousJSValues JSObject::convertUndecidedToContiguous(VM& vm)
{
ASSERT(hasUndecided(indexingType()));
+
+ Butterfly* butterfly = m_butterfly.get();
+ for (unsigned i = butterfly->vectorLength(); i--;)
+ butterfly->contiguous()[i].setWithoutWriteBarrier(JSValue());
+
setStructure(vm, Structure::nonPropertyTransition(vm, structure(vm), NonPropertyTransition::AllocateContiguous));
return m_butterfly.get()->contiguous();
}
@@ -925,7 +859,9 @@
unsigned vectorLength = m_butterfly.get()->vectorLength();
ArrayStorage* storage = constructConvertedArrayStorageWithoutCopyingElements(vm, vectorLength);
- // No need to copy elements.
+
+ for (unsigned i = vectorLength; i--;)
+ storage->m_vector[i].setWithoutWriteBarrier(JSValue());
Structure* newStructure = Structure::nonPropertyTransition(vm, structure(vm), transition);
setStructureAndButterfly(vm, newStructure, storage->butterfly());
@@ -946,11 +882,12 @@
WriteBarrier<Unknown>* current = &butterfly->contiguousInt32()[i];
double* currentAsDouble = bitwise_cast<double*>(current);
JSValue v = current->get();
- if (!v) {
+ // NOTE: Since this may be used during initialization, v could be garbage. If it's garbage,
+ // that means it will be overwritten later.
+ if (!v.isInt32()) {
*currentAsDouble = PNaN;
continue;
}
- ASSERT(v.isInt32());
*currentAsDouble = v.asInt32();
}
@@ -974,13 +911,11 @@
unsigned vectorLength = m_butterfly.get()->vectorLength();
ArrayStorage* newStorage = constructConvertedArrayStorageWithoutCopyingElements(vm, vectorLength);
Butterfly* butterfly = m_butterfly.get();
- for (unsigned i = 0; i < butterfly->publicLength(); i++) {
+ for (unsigned i = 0; i < vectorLength; i++) {
JSValue v = butterfly->contiguous()[i].get();
- if (v) {
- newStorage->m_vector[i].setWithoutWriteBarrier(v);
+ newStorage->m_vector[i].setWithoutWriteBarrier(v);
+ if (v)
newStorage->m_numValuesInVector++;
- } else
- ASSERT(newStorage->m_vector[i].get().isEmpty());
}
Structure* newStructure = Structure::nonPropertyTransition(vm, structure(vm), transition);
@@ -1022,13 +957,11 @@
unsigned vectorLength = m_butterfly.get()->vectorLength();
ArrayStorage* newStorage = constructConvertedArrayStorageWithoutCopyingElements(vm, vectorLength);
Butterfly* butterfly = m_butterfly.get();
- for (unsigned i = 0; i < butterfly->publicLength(); i++) {
+ for (unsigned i = 0; i < vectorLength; i++) {
double value = butterfly->contiguousDouble()[i];
- if (value == value) {
- newStorage->m_vector[i].setWithoutWriteBarrier(JSValue(JSValue::EncodeAsDouble, value));
+ newStorage->m_vector[i].setWithoutWriteBarrier(JSValue(JSValue::EncodeAsDouble, value));
+ if (value == value)
newStorage->m_numValuesInVector++;
- } else
- ASSERT(newStorage->m_vector[i].get().isEmpty());
}
Structure* newStructure = Structure::nonPropertyTransition(vm, structure(vm), transition);
@@ -1049,13 +982,11 @@
unsigned vectorLength = m_butterfly.get()->vectorLength();
ArrayStorage* newStorage = constructConvertedArrayStorageWithoutCopyingElements(vm, vectorLength);
Butterfly* butterfly = m_butterfly.get();
- for (unsigned i = 0; i < butterfly->publicLength(); i++) {
+ for (unsigned i = 0; i < vectorLength; i++) {
JSValue v = butterfly->contiguous()[i].get();
- if (v) {
- newStorage->m_vector[i].setWithoutWriteBarrier(v);
+ newStorage->m_vector[i].setWithoutWriteBarrier(v);
+ if (v)
newStorage->m_numValuesInVector++;
- } else
- ASSERT(newStorage->m_vector[i].get().isEmpty());
}
Structure* newStructure = Structure::nonPropertyTransition(vm, structure(vm), transition);
@@ -2406,7 +2337,7 @@
}
if (structure(vm)->needsSlowPutIndexing()) {
// Convert the indexing type to the SlowPutArrayStorage and retry.
- createArrayStorage(vm, i + 1, getNewVectorLength(0, 0, i + 1));
+ createArrayStorage(vm, i + 1, getNewVectorLength(0, 0, 0, i + 1));
return putByIndex(this, exec, i, value, shouldThrow);
}
@@ -2547,7 +2478,7 @@
exec, i, value, attributes, mode, createArrayStorage(vm, 0, 0));
}
if (structure(vm)->needsSlowPutIndexing()) {
- ArrayStorage* storage = createArrayStorage(vm, i + 1, getNewVectorLength(0, 0, i + 1));
+ ArrayStorage* storage = createArrayStorage(vm, i + 1, getNewVectorLength(0, 0, 0, i + 1));
storage->m_vector[i].set(vm, this, value);
storage->m_numValuesInVector++;
return true;
@@ -2666,7 +2597,8 @@
putDirectWithoutTransition(vm, propertyName, function, attributes);
}
-ALWAYS_INLINE unsigned JSObject::getNewVectorLength(unsigned currentVectorLength, unsigned currentLength, unsigned desiredLength)
+// NOTE: This method is for ArrayStorage vectors.
+ALWAYS_INLINE unsigned JSObject::getNewVectorLength(unsigned indexBias, unsigned currentVectorLength, unsigned currentLength, unsigned desiredLength)
{
ASSERT(desiredLength <= MAX_STORAGE_VECTOR_LENGTH);
@@ -2683,25 +2615,27 @@
ASSERT(increasedLength >= desiredLength);
- lastArraySize = std::min(increasedLength, FIRST_VECTOR_GROW);
+ lastArraySize = std::min(increasedLength, FIRST_ARRAY_STORAGE_VECTOR_GROW);
- return std::min(increasedLength, MAX_STORAGE_VECTOR_LENGTH);
+ return ArrayStorage::optimalVectorLength(
+ indexBias, structure()->outOfLineCapacity(),
+ std::min(increasedLength, MAX_STORAGE_VECTOR_LENGTH));
}
ALWAYS_INLINE unsigned JSObject::getNewVectorLength(unsigned desiredLength)
{
- unsigned vectorLength;
- unsigned length;
+ unsigned indexBias = 0;
+ unsigned vectorLength = 0;
+ unsigned length = 0;
if (hasIndexedProperties(indexingType())) {
+ if (ArrayStorage* storage = arrayStorageOrNull())
+ indexBias = storage->m_indexBias;
vectorLength = m_butterfly.get()->vectorLength();
length = m_butterfly.get()->publicLength();
- } else {
- vectorLength = 0;
- length = 0;
}
- return getNewVectorLength(vectorLength, length, desiredLength);
+ return getNewVectorLength(indexBias, vectorLength, length, desiredLength);
}
template<IndexingType indexingShape>
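getNewVectorLength now takes the indexBias so that ArrayStorage::optimalVectorLength can account for the whole allocation when rounding: whatever length the geometric growth policy requests, the final vector length is bumped up until it fills the size class MarkedSpace will actually hand out, turning would-be internal fragmentation into usable elements. Roughly (sizeClassFor stands in for the real size-class query and is not JSC API):

    unsigned optimalVectorLengthSketch(size_t overheadBytes, unsigned desiredLength)
    {
        // overheadBytes covers preCapacity, out-of-line properties, indexBias
        // and the ArrayStorage header for this particular allocation.
        size_t requested = overheadBytes + desiredLength * sizeof(WriteBarrier<Unknown>);
        size_t cellSize = sizeClassFor(requested); // what the allocator returns
        return static_cast<unsigned>((cellSize - overheadBytes) / sizeof(WriteBarrier<Unknown>));
    }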
@@ -2754,19 +2688,28 @@
bool JSObject::increaseVectorLength(VM& vm, unsigned newLength)
{
+ ArrayStorage* storage = arrayStorage();
+
+ unsigned vectorLength = storage->vectorLength();
+ unsigned availableVectorLength = storage->availableVectorLength(structure(vm), vectorLength);
+ if (availableVectorLength >= newLength) {
+ // The cell was already big enough for the desired length!
+ for (unsigned i = vectorLength; i < availableVectorLength; ++i)
+ storage->m_vector[i].clear();
+ storage->setVectorLength(availableVectorLength);
+ return true;
+ }
+
// This function leaves the array in an internally inconsistent state, because it does not move any values from the sparse value map
// to the vector. Callers have to account for that, because they can do it more efficiently.
if (newLength > MAX_STORAGE_VECTOR_LENGTH)
return false;
- ArrayStorage* storage = arrayStorage();
-
if (newLength >= MIN_SPARSE_ARRAY_INDEX
&& !isDenseEnoughForVector(newLength, storage->m_numValuesInVector))
return false;
unsigned indexBias = storage->m_indexBias;
- unsigned vectorLength = storage->vectorLength();
ASSERT(newLength > vectorLength);
unsigned newVectorLength = getNewVectorLength(newLength);
@@ -2779,6 +2722,8 @@
ArrayStorage::sizeFor(vectorLength), ArrayStorage::sizeFor(newVectorLength));
if (!newButterfly)
return false;
+ for (unsigned i = vectorLength; i < newVectorLength; ++i)
+ newButterfly->arrayStorage()->m_vector[i].clear();
newButterfly->arrayStorage()->setVectorLength(newVectorLength);
setButterflyWithoutChangingStructure(vm, newButterfly);
return true;
@@ -2793,6 +2738,8 @@
newIndexBias, true, ArrayStorage::sizeFor(newVectorLength));
if (!newButterfly)
return false;
+ for (unsigned i = vectorLength; i < newVectorLength; ++i)
+ newButterfly->arrayStorage()->m_vector[i].clear();
newButterfly->arrayStorage()->setVectorLength(newVectorLength);
newButterfly->arrayStorage()->m_indexBias = newIndexBias;
setButterflyWithoutChangingStructure(vm, newButterfly);
@@ -2807,25 +2754,41 @@
ASSERT(hasContiguous(indexingType()) || hasInt32(indexingType()) || hasDouble(indexingType()) || hasUndecided(indexingType()));
ASSERT(length > butterfly->vectorLength());
- unsigned newVectorLength = std::min(
- length << 1,
- MAX_STORAGE_VECTOR_LENGTH);
unsigned oldVectorLength = butterfly->vectorLength();
- DeferGC deferGC(vm.heap);
- butterfly = butterfly->growArrayRight(
- vm, this, structure(), structure()->outOfLineCapacity(), true,
- oldVectorLength * sizeof(EncodedJSValue),
- newVectorLength * sizeof(EncodedJSValue));
- if (!butterfly)
- return false;
- m_butterfly.set(vm, this, butterfly);
+ unsigned newVectorLength;
+
+ Structure* structure = this->structure(vm);
+ unsigned propertyCapacity = structure->outOfLineCapacity();
+
+ unsigned availableOldLength =
+ Butterfly::availableContiguousVectorLength(propertyCapacity, oldVectorLength);
+ if (availableOldLength >= length) {
+ // This is the case where someone else selected a vector length that caused internal
+ // fragmentation. If we did our jobs right, this would never happen. But I bet we will mess
+ // this up, so this defense should stay.
+ newVectorLength = availableOldLength;
+ } else {
+ newVectorLength = Butterfly::optimalContiguousVectorLength(
+ propertyCapacity, std::min(length << 1, MAX_STORAGE_VECTOR_LENGTH));
+ butterfly = butterfly->growArrayRight(
+ vm, this, structure, propertyCapacity, true,
+ oldVectorLength * sizeof(EncodedJSValue),
+ newVectorLength * sizeof(EncodedJSValue));
+ if (!butterfly)
+ return false;
+ m_butterfly.set(vm, this, butterfly);
+ }
butterfly->setVectorLength(newVectorLength);
if (hasDouble(indexingType())) {
for (unsigned i = oldVectorLength; i < newVectorLength; ++i)
- butterfly->contiguousDouble().data()[i] = PNaN;
+ butterfly->contiguousDouble()[i] = PNaN;
+ } else {
+ for (unsigned i = oldVectorLength; i < newVectorLength; ++i)
+ butterfly->contiguous()[i].clear();
}
+
return true;
}
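increaseVectorLength and the contiguous grow path in ensureLengthSlow above now share a two-step grow protocol: first ask whether the current cell's size class already covers the request (availableVectorLength / availableContiguousVectorLength), and only reallocate at the size-class-optimal length when it does not; in both cases the newly exposed slots are cleared (or set to PNaN for double arrays) before anyone can observe them. Schematically, not verbatim JSC code:

    if (availableLength(currentCell) >= requestedLength) {
        clearSlots(oldVectorLength, availableLength(currentCell)); // reuse slack
        setVectorLength(availableLength(currentCell));
    } else {
        reallocateAtOptimalLength(requestedLength); // may fail -> return false
        clearSlots(oldVectorLength, newVectorLength);
        setVectorLength(newVectorLength);
    }

The "someone else selected a vector length that caused internal fragmentation" comment flags the first branch as a defense: if every allocation site rounds up correctly, it should never fire.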
diff --git a/Source/JavaScriptCore/runtime/JSObject.h b/Source/JavaScriptCore/runtime/JSObject.h
index 82f4b0f..e2b8a52 100644
--- a/Source/JavaScriptCore/runtime/JSObject.h
+++ b/Source/JavaScriptCore/runtime/JSObject.h
@@ -26,15 +26,14 @@
#include "ArgList.h"
#include "ArrayConventions.h"
#include "ArrayStorage.h"
+#include "AuxiliaryBarrier.h"
#include "Butterfly.h"
#include "CallFrame.h"
#include "ClassInfo.h"
#include "CommonIdentifiers.h"
-#include "CopyBarrier.h"
#include "CustomGetterSetter.h"
#include "DeferGC.h"
#include "Heap.h"
-#include "HeapInlines.h"
#include "IndexingHeaderInlines.h"
#include "JSCell.h"
#include "PropertySlot.h"
@@ -103,7 +102,6 @@
JS_EXPORT_PRIVATE static size_t estimatedSize(JSCell*);
JS_EXPORT_PRIVATE static void visitChildren(JSCell*, SlotVisitor&);
- JS_EXPORT_PRIVATE static void copyBackingStore(JSCell*, CopyVisitor&, CopyToken);
JS_EXPORT_PRIVATE static void heapSnapshot(JSCell*, HeapSnapshotBuilder&);
JS_EXPORT_PRIVATE static String className(const JSObject*);
@@ -420,6 +418,8 @@
initializeIndex(vm, i, v, indexingType());
}
+ // NOTE: Clients of this method may call it more than once for any index, and this is supposed
+ // to work.
void initializeIndex(VM& vm, unsigned i, JSValue v, IndexingType indexingType)
{
Butterfly* butterfly = m_butterfly.get();
@@ -692,8 +692,6 @@
void setStructure(VM&, Structure*);
void setStructureAndButterfly(VM&, Structure*, Butterfly*);
- void setStructureAndReallocateStorageIfNecessary(VM&, unsigned oldCapacity, Structure*);
- void setStructureAndReallocateStorageIfNecessary(VM&, Structure*);
JS_EXPORT_PRIVATE void convertToDictionary(VM&);
@@ -710,6 +708,13 @@
return structure()->globalObject();
}
+ JSGlobalObject* globalObject(VM& vm) const
+ {
+ ASSERT(structure(vm)->globalObject());
+ ASSERT(!isGlobalObject() || ((JSObject*)structure()->globalObject()) == this);
+ return structure(vm)->globalObject();
+ }
+
void switchToSlowPutArrayStorage(VM&);
// The receiver is the prototype in this case. The following:
@@ -803,7 +808,6 @@
JSObject(VM&, Structure*, Butterfly* = 0);
void visitButterfly(SlotVisitor&, Butterfly*, Structure*);
- void copyButterfly(CopyVisitor&, Butterfly*, size_t storageSize);
// Call this if you know that the object is in a mode where it has array
// storage. This will assert otherwise.
@@ -914,7 +918,7 @@
void isObject();
void isString();
- Butterfly* createInitialIndexedStorage(VM&, unsigned length, size_t elementSize);
+ Butterfly* createInitialIndexedStorage(VM&, unsigned length);
ArrayStorage* enterDictionaryIndexingModeWhenArrayStorageAlreadyExists(VM&, ArrayStorage*);
@@ -938,7 +942,7 @@
bool putDirectIndexBeyondVectorLengthWithArrayStorage(ExecState*, unsigned propertyName, JSValue, unsigned attributes, PutDirectIndexMode, ArrayStorage*);
JS_EXPORT_PRIVATE bool putDirectIndexBeyondVectorLength(ExecState*, unsigned propertyName, JSValue, unsigned attributes, PutDirectIndexMode);
- unsigned getNewVectorLength(unsigned currentVectorLength, unsigned currentLength, unsigned desiredLength);
+ unsigned getNewVectorLength(unsigned indexBias, unsigned currentVectorLength, unsigned currentLength, unsigned desiredLength);
unsigned getNewVectorLength(unsigned desiredLength);
ArrayStorage* constructConvertedArrayStorageWithoutCopyingElements(VM&, unsigned neededLength);
@@ -955,7 +959,7 @@
JS_EXPORT_PRIVATE ArrayStorage* ensureArrayStorageSlow(VM&);
protected:
- CopyBarrier<Butterfly> m_butterfly;
+ AuxiliaryBarrier<Butterfly*> m_butterfly;
#if USE(JSVALUE32_64)
private:
uint32_t m_padding;
@@ -1417,8 +1421,16 @@
validateOffset(offset);
ASSERT(newStructure->isValidOffset(offset));
- setStructureAndReallocateStorageIfNecessary(vm, newStructure);
-
+ DeferGC deferGC(vm.heap);
+ size_t oldCapacity = structure->outOfLineCapacity();
+ size_t newCapacity = newStructure->outOfLineCapacity();
+ ASSERT(oldCapacity <= newCapacity);
+ if (oldCapacity == newCapacity)
+ setStructure(vm, newStructure);
+ else {
+ Butterfly* newButterfly = growOutOfLineStorage(vm, oldCapacity, newCapacity);
+ setStructureAndButterfly(vm, newStructure, newButterfly);
+ }
putDirect(vm, offset, value);
slot.setNewProperty(this, offset);
if (attributes & ReadOnly)
@@ -1426,27 +1438,6 @@
return true;
}
-inline void JSObject::setStructureAndReallocateStorageIfNecessary(VM& vm, unsigned oldCapacity, Structure* newStructure)
-{
- ASSERT(oldCapacity <= newStructure->outOfLineCapacity());
-
- if (oldCapacity == newStructure->outOfLineCapacity()) {
- setStructure(vm, newStructure);
- return;
- }
-
- DeferGC deferGC(vm.heap);
- Butterfly* newButterfly = growOutOfLineStorage(
- vm, oldCapacity, newStructure->outOfLineCapacity());
- setStructureAndButterfly(vm, newStructure, newButterfly);
-}
-
-inline void JSObject::setStructureAndReallocateStorageIfNecessary(VM& vm, Structure* newStructure)
-{
- setStructureAndReallocateStorageIfNecessary(
- vm, structure(vm)->outOfLineCapacity(), newStructure);
-}
-
inline bool JSObject::putOwnDataProperty(VM& vm, PropertyName propertyName, JSValue value, PutPropertySlot& slot)
{
ASSERT(value);
diff --git a/Source/JavaScriptCore/runtime/JSObjectInlines.h b/Source/JavaScriptCore/runtime/JSObjectInlines.h
index 5ef7c59..64aa05c 100644
--- a/Source/JavaScriptCore/runtime/JSObjectInlines.h
+++ b/Source/JavaScriptCore/runtime/JSObjectInlines.h
@@ -1,7 +1,7 @@
/*
* Copyright (C) 1999-2001 Harri Porten (porten@kde.org)
* Copyright (C) 2001 Peter Kelly (pmk@post.com)
- * Copyright (C) 2003-2006, 2008, 2009, 2012-2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2003-2006, 2008, 2009, 2012-2016 Apple Inc. All rights reserved.
* Copyright (C) 2007 Eric Seidel (eric@webkit.org)
*
* This library is free software; you can redistribute it and/or
@@ -24,6 +24,7 @@
#ifndef JSObjectInlines_h
#define JSObjectInlines_h
+#include "AuxiliaryBarrierInlines.h"
#include "Error.h"
#include "JSObject.h"
#include "Lookup.h"
diff --git a/Source/JavaScriptCore/runtime/JSPromise.cpp b/Source/JavaScriptCore/runtime/JSPromise.cpp
index 0202e17..29b54ed 100644
--- a/Source/JavaScriptCore/runtime/JSPromise.cpp
+++ b/Source/JavaScriptCore/runtime/JSPromise.cpp
@@ -28,12 +28,9 @@
#include "BuiltinNames.h"
#include "Error.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSPromiseConstructor.h"
#include "Microtask.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSPromiseConstructor.cpp b/Source/JavaScriptCore/runtime/JSPromiseConstructor.cpp
index 6312e86..5f21cd6 100644
--- a/Source/JavaScriptCore/runtime/JSPromiseConstructor.cpp
+++ b/Source/JavaScriptCore/runtime/JSPromiseConstructor.cpp
@@ -32,14 +32,12 @@
#include "GetterSetter.h"
#include "IteratorOperations.h"
#include "JSCBuiltins.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSFunction.h"
#include "JSPromise.h"
#include "JSPromisePrototype.h"
#include "Lookup.h"
#include "NumberObject.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSPromiseDeferred.cpp b/Source/JavaScriptCore/runtime/JSPromiseDeferred.cpp
index 8dd2454..a34f069 100644
--- a/Source/JavaScriptCore/runtime/JSPromiseDeferred.cpp
+++ b/Source/JavaScriptCore/runtime/JSPromiseDeferred.cpp
@@ -29,13 +29,10 @@
#include "BuiltinNames.h"
#include "Error.h"
#include "Exception.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSObjectInlines.h"
#include "JSPromise.h"
#include "JSPromiseConstructor.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSPromisePrototype.cpp b/Source/JavaScriptCore/runtime/JSPromisePrototype.cpp
index f3eaa2e..d66dde9 100644
--- a/Source/JavaScriptCore/runtime/JSPromisePrototype.cpp
+++ b/Source/JavaScriptCore/runtime/JSPromisePrototype.cpp
@@ -29,13 +29,11 @@
#include "BuiltinNames.h"
#include "Error.h"
#include "JSCBuiltins.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSFunction.h"
#include "JSGlobalObject.h"
#include "JSPromise.h"
#include "Microtask.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSPropertyNameIterator.cpp b/Source/JavaScriptCore/runtime/JSPropertyNameIterator.cpp
index b7ded12..e25998a 100644
--- a/Source/JavaScriptCore/runtime/JSPropertyNameIterator.cpp
+++ b/Source/JavaScriptCore/runtime/JSPropertyNameIterator.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2015 Apple, Inc. All rights reserved.
+ * Copyright (C) 2015-2016 Apple, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -26,13 +26,9 @@
#include "config.h"
#include "JSPropertyNameIterator.h"
-#include "IdentifierInlines.h"
#include "IteratorOperations.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSPropertyNameEnumerator.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSScope.cpp b/Source/JavaScriptCore/runtime/JSScope.cpp
index b3baf87..59ac40c 100644
--- a/Source/JavaScriptCore/runtime/JSScope.cpp
+++ b/Source/JavaScriptCore/runtime/JSScope.cpp
@@ -217,6 +217,7 @@
JSObject* JSScope::resolve(ExecState* exec, JSScope* scope, const Identifier& ident)
{
+ VM& vm = exec->vm();
ScopeChainIterator end = scope->end();
ScopeChainIterator it = scope->begin();
while (1) {
@@ -225,7 +226,7 @@
// Global scope.
if (++it == end) {
- JSScope* globalScopeExtension = scope->globalObject()->globalScopeExtension();
+ JSScope* globalScopeExtension = scope->globalObject(vm)->globalScopeExtension();
if (UNLIKELY(globalScopeExtension)) {
if (object->hasProperty(exec, ident))
return object;
diff --git a/Source/JavaScriptCore/runtime/JSScope.h b/Source/JavaScriptCore/runtime/JSScope.h
index d676411..2d30918 100644
--- a/Source/JavaScriptCore/runtime/JSScope.h
+++ b/Source/JavaScriptCore/runtime/JSScope.h
@@ -69,7 +69,7 @@
JSScope* next();
JSGlobalObject* globalObject();
- VM* vm();
+ JSGlobalObject* globalObject(VM&);
JSObject* globalThis();
SymbolTable* symbolTable();
@@ -129,9 +129,9 @@
return structure()->globalObject();
}
-inline VM* JSScope::vm()
+inline JSGlobalObject* JSScope::globalObject(VM& vm)
{
- return MarkedBlock::blockFor(this)->vm();
+ return structure(vm)->globalObject();
}
inline Register& Register::operator=(JSScope* scope)
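JSScope::vm() was implemented as MarkedBlock::blockFor(this)->vm(), which bakes in the assumption that every cell lives inside an ordinary MarkedBlock, an assumption this patch is in the business of weakening. It is deleted in favor of passing VM& explicitly, and the header slot is repurposed for a globalObject(VM&) overload that resolves through structure(vm) without touching block metadata:

    // Before (deleted): derive the VM from the cell's block.
    //   VM* vm = MarkedBlock::blockFor(scope)->vm();
    // After: callers already hold a VM& and use it directly, e.g. in
    // JSScope::resolve above:
    JSGlobalObject* globalObject = scope->globalObject(vm);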
diff --git a/Source/JavaScriptCore/runtime/JSSetIterator.cpp b/Source/JavaScriptCore/runtime/JSSetIterator.cpp
index 634c2ac..2f2deb5 100644
--- a/Source/JavaScriptCore/runtime/JSSetIterator.cpp
+++ b/Source/JavaScriptCore/runtime/JSSetIterator.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013 Apple, Inc. All rights reserved.
+ * Copyright (C) 2013, 2016 Apple, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -26,12 +26,9 @@
#include "config.h"
#include "JSSetIterator.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSSet.h"
#include "MapDataInlines.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSStringIterator.cpp b/Source/JavaScriptCore/runtime/JSStringIterator.cpp
index 7bfd8a9..43b6c36 100644
--- a/Source/JavaScriptCore/runtime/JSStringIterator.cpp
+++ b/Source/JavaScriptCore/runtime/JSStringIterator.cpp
@@ -28,9 +28,7 @@
#include "JSStringIterator.h"
#include "BuiltinNames.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
-#include "StructureInlines.h"
+#include "JSCInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSTemplateRegistryKey.cpp b/Source/JavaScriptCore/runtime/JSTemplateRegistryKey.cpp
index e3f3ae9..3b9797c 100644
--- a/Source/JavaScriptCore/runtime/JSTemplateRegistryKey.cpp
+++ b/Source/JavaScriptCore/runtime/JSTemplateRegistryKey.cpp
@@ -1,5 +1,6 @@
/*
* Copyright (C) 2015 Yusuke Suzuki <utatane.tea@gmail.com>.
+ * Copyright (C) 2016 Apple Inc. All Rights Reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -26,9 +27,7 @@
#include "config.h"
#include "JSTemplateRegistryKey.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
-#include "StructureInlines.h"
+#include "JSCInlines.h"
#include "VM.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSTypedArrayViewConstructor.cpp b/Source/JavaScriptCore/runtime/JSTypedArrayViewConstructor.cpp
index 02c2fa1..bfae59e 100644
--- a/Source/JavaScriptCore/runtime/JSTypedArrayViewConstructor.cpp
+++ b/Source/JavaScriptCore/runtime/JSTypedArrayViewConstructor.cpp
@@ -30,9 +30,8 @@
#include "Error.h"
#include "GetterSetter.h"
#include "JSCBuiltins.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSGenericTypedArrayViewConstructorInlines.h"
-#include "JSObject.h"
#include "JSTypedArrayViewPrototype.h"
#include "JSTypedArrays.h"
diff --git a/Source/JavaScriptCore/runtime/JSTypedArrayViewPrototype.cpp b/Source/JavaScriptCore/runtime/JSTypedArrayViewPrototype.cpp
index 3f6aa485..a1949fe 100644
--- a/Source/JavaScriptCore/runtime/JSTypedArrayViewPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/JSTypedArrayViewPrototype.cpp
@@ -29,7 +29,7 @@
#include "BuiltinNames.h"
#include "CallFrame.h"
#include "GetterSetter.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSFunction.h"
#include "JSGenericTypedArrayViewPrototypeFunctions.h"
#include "JSObjectInlines.h"
diff --git a/Source/JavaScriptCore/runtime/JSWeakMap.cpp b/Source/JavaScriptCore/runtime/JSWeakMap.cpp
index 5f85c8f..c027ff2 100644
--- a/Source/JavaScriptCore/runtime/JSWeakMap.cpp
+++ b/Source/JavaScriptCore/runtime/JSWeakMap.cpp
@@ -26,11 +26,8 @@
#include "config.h"
#include "JSWeakMap.h"
-#include "JSCJSValueInlines.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
+#include "JSCInlines.h"
#include "WeakMapData.h"
-#include "WriteBarrierInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/JSWeakSet.cpp b/Source/JavaScriptCore/runtime/JSWeakSet.cpp
index 07bf372..3577c5d 100644
--- a/Source/JavaScriptCore/runtime/JSWeakSet.cpp
+++ b/Source/JavaScriptCore/runtime/JSWeakSet.cpp
@@ -26,11 +26,8 @@
#include "config.h"
#include "JSWeakSet.h"
-#include "JSCJSValueInlines.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
+#include "JSCInlines.h"
#include "WeakMapData.h"
-#include "WriteBarrierInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/MapConstructor.cpp b/Source/JavaScriptCore/runtime/MapConstructor.cpp
index ee3b602..5a2b69a 100644
--- a/Source/JavaScriptCore/runtime/MapConstructor.cpp
+++ b/Source/JavaScriptCore/runtime/MapConstructor.cpp
@@ -29,13 +29,11 @@
#include "Error.h"
#include "GetterSetter.h"
#include "IteratorOperations.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSGlobalObject.h"
#include "JSMap.h"
#include "JSObjectInlines.h"
#include "MapPrototype.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/MapIteratorPrototype.cpp b/Source/JavaScriptCore/runtime/MapIteratorPrototype.cpp
index 63606a3..1ec4d92 100644
--- a/Source/JavaScriptCore/runtime/MapIteratorPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/MapIteratorPrototype.cpp
@@ -27,10 +27,8 @@
#include "MapIteratorPrototype.h"
#include "IteratorOperations.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSMapIterator.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/MapPrototype.cpp b/Source/JavaScriptCore/runtime/MapPrototype.cpp
index efd4416..d50f807 100644
--- a/Source/JavaScriptCore/runtime/MapPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/MapPrototype.cpp
@@ -31,13 +31,11 @@
#include "ExceptionHelpers.h"
#include "GetterSetter.h"
#include "IteratorOperations.h"
-#include "JSCJSValueInlines.h"
-#include "JSFunctionInlines.h"
+#include "JSCInlines.h"
#include "JSMap.h"
#include "JSMapIterator.h"
#include "Lookup.h"
#include "MapDataInlines.h"
-#include "StructureInlines.h"
#include "MapPrototype.lut.h"
diff --git a/Source/JavaScriptCore/runtime/NativeErrorConstructor.cpp b/Source/JavaScriptCore/runtime/NativeErrorConstructor.cpp
index 800ec89..0974c41 100644
--- a/Source/JavaScriptCore/runtime/NativeErrorConstructor.cpp
+++ b/Source/JavaScriptCore/runtime/NativeErrorConstructor.cpp
@@ -22,6 +22,7 @@
#include "NativeErrorConstructor.h"
#include "ErrorInstance.h"
+#include "Interpreter.h"
#include "JSFunction.h"
#include "JSString.h"
#include "NativeErrorPrototype.h"
diff --git a/Source/JavaScriptCore/runtime/NativeStdFunctionCell.cpp b/Source/JavaScriptCore/runtime/NativeStdFunctionCell.cpp
index 581f31a..1c03266 100644
--- a/Source/JavaScriptCore/runtime/NativeStdFunctionCell.cpp
+++ b/Source/JavaScriptCore/runtime/NativeStdFunctionCell.cpp
@@ -26,10 +26,7 @@
#include "config.h"
#include "NativeStdFunctionCell.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
-#include "JSFunctionInlines.h"
-#include "SlotVisitorInlines.h"
+#include "JSCInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/Operations.h b/Source/JavaScriptCore/runtime/Operations.h
index 4f9bb5c..34e301f 100644
--- a/Source/JavaScriptCore/runtime/Operations.h
+++ b/Source/JavaScriptCore/runtime/Operations.h
@@ -199,6 +199,20 @@
return jsAddSlowCase(callFrame, v1, v2);
}
+inline bool scribbleFreeCells()
+{
+ return !ASSERT_DISABLED || Options::scribbleFreeCells();
+}
+
+inline void scribble(void* base, size_t size)
+{
+ for (size_t i = size / sizeof(EncodedJSValue); i--;) {
+ // Use a 16-byte aligned value to ensure that it passes the cell check.
+ static_cast<EncodedJSValue*>(base)[i] = JSValue::encode(
+ bitwise_cast<JSCell*>(static_cast<intptr_t>(0xbadbeef0)));
+ }
+}
+
} // namespace JSC
#endif // Operations_h
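scribbleFreeCells() and scribble() support the new non-zeroing allocator: since nothing clears memory for you anymore, debug builds (and release builds that opt in via the scribbleFreeCells option below) can fill dead cells with a loud pattern, so use of stale memory surfaces as 0xbadbeef0 in a crash rather than masquerading as an empty JSValue. A hypothetical call site; the real callers are in the allocator and sweeper paths:

    if (scribbleFreeCells())
        scribble(cell, cellSize); // every 8 bytes becomes the 0xbadbeef0 pattern

As the comment above notes, the value is 16-byte aligned on purpose, so that it passes the cell check.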
diff --git a/Source/JavaScriptCore/runtime/Options.cpp b/Source/JavaScriptCore/runtime/Options.cpp
index 156eb08..de8a4bb 100644
--- a/Source/JavaScriptCore/runtime/Options.cpp
+++ b/Source/JavaScriptCore/runtime/Options.cpp
@@ -370,7 +370,7 @@
Options::useOSREntryToDFG() = false;
Options::useOSREntryToFTL() = false;
}
-
+
#if PLATFORM(IOS) && !PLATFORM(IOS_SIMULATOR) && __IPHONE_OS_VERSION_MIN_REQUIRED >= 100000
// Override globally for now. Longer term we'll just make the default
// be to have this option enabled, and have platforms that don't support
diff --git a/Source/JavaScriptCore/runtime/Options.h b/Source/JavaScriptCore/runtime/Options.h
index d1cf255..567c9c2 100644
--- a/Source/JavaScriptCore/runtime/Options.h
+++ b/Source/JavaScriptCore/runtime/Options.h
@@ -182,6 +182,11 @@
v(bool, testTheFTL, false, Normal, nullptr) \
v(bool, verboseSanitizeStack, false, Normal, nullptr) \
v(bool, useGenerationalGC, true, Normal, nullptr) \
+ v(bool, scribbleFreeCells, false, Normal, nullptr) \
+ v(double, sizeClassProgression, 1.4, Normal, nullptr) \
+ v(unsigned, largeAllocationCutoff, 100000, Normal, nullptr) \
+ v(bool, dumpSizeClasses, false, Normal, nullptr) \
+ v(bool, useBumpAllocator, true, Normal, nullptr) \
v(bool, eagerlyUpdateTopCallFrame, false, Normal, nullptr) \
\
v(bool, useOSREntryToDFG, true, Normal, nullptr) \
diff --git a/Source/JavaScriptCore/runtime/PropertyTable.cpp b/Source/JavaScriptCore/runtime/PropertyTable.cpp
index c458bc5..35aac5a 100644
--- a/Source/JavaScriptCore/runtime/PropertyTable.cpp
+++ b/Source/JavaScriptCore/runtime/PropertyTable.cpp
@@ -26,10 +26,7 @@
#include "config.h"
#include "PropertyMapHashTable.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
+#include "JSCInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/ProxyConstructor.cpp b/Source/JavaScriptCore/runtime/ProxyConstructor.cpp
index 8f26294..7ec3af5 100644
--- a/Source/JavaScriptCore/runtime/ProxyConstructor.cpp
+++ b/Source/JavaScriptCore/runtime/ProxyConstructor.cpp
@@ -28,8 +28,7 @@
#include "Error.h"
#include "IdentifierInlines.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "ObjectConstructor.h"
#include "ObjectPrototype.h"
#include "ProxyObject.h"
diff --git a/Source/JavaScriptCore/runtime/ProxyObject.cpp b/Source/JavaScriptCore/runtime/ProxyObject.cpp
index d52f17f..db84cd42 100644
--- a/Source/JavaScriptCore/runtime/ProxyObject.cpp
+++ b/Source/JavaScriptCore/runtime/ProxyObject.cpp
@@ -29,8 +29,7 @@
#include "ArrayConstructor.h"
#include "Error.h"
#include "IdentifierInlines.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSObjectInlines.h"
#include "ObjectConstructor.h"
#include "SlotVisitorInlines.h"
diff --git a/Source/JavaScriptCore/runtime/ProxyRevoke.cpp b/Source/JavaScriptCore/runtime/ProxyRevoke.cpp
index 4ce423b..c792c5b 100644
--- a/Source/JavaScriptCore/runtime/ProxyRevoke.cpp
+++ b/Source/JavaScriptCore/runtime/ProxyRevoke.cpp
@@ -26,10 +26,8 @@
#include "config.h"
#include "ProxyRevoke.h"
-#include "JSCJSValueInlines.h"
+#include "JSCInlines.h"
#include "ProxyObject.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/RegExp.cpp b/Source/JavaScriptCore/runtime/RegExp.cpp
index c87751f..f6d0e9b 100644
--- a/Source/JavaScriptCore/runtime/RegExp.cpp
+++ b/Source/JavaScriptCore/runtime/RegExp.cpp
@@ -296,13 +296,13 @@
m_regExpBytecode = Yarr::byteCompile(pattern, &vm->m_regExpAllocator, &vm->m_regExpAllocatorLock);
}
-int RegExp::match(VM& vm, const String& s, unsigned startOffset, Vector<int, 32>& ovector)
+int RegExp::match(VM& vm, const String& s, unsigned startOffset, Vector<int>& ovector)
{
return matchInline(vm, s, startOffset, ovector);
}
bool RegExp::matchConcurrently(
- VM& vm, const String& s, unsigned startOffset, int& position, Vector<int, 32>& ovector)
+ VM& vm, const String& s, unsigned startOffset, int& position, Vector<int>& ovector)
{
ConcurrentJITLocker locker(m_lock);
@@ -382,7 +382,7 @@
void RegExp::matchCompareWithInterpreter(const String& s, int startOffset, int* offsetVector, int jitResult)
{
int offsetVectorSize = (m_numSubpatterns + 1) * 2;
- Vector<int, 32> interpreterOvector;
+ Vector<int> interpreterOvector;
interpreterOvector.resize(offsetVectorSize);
int* interpreterOffsetVector = interpreterOvector.data();
int interpreterResult = 0;
diff --git a/Source/JavaScriptCore/runtime/RegExp.h b/Source/JavaScriptCore/runtime/RegExp.h
index 995aae5..b6c0d77 100644
--- a/Source/JavaScriptCore/runtime/RegExp.h
+++ b/Source/JavaScriptCore/runtime/RegExp.h
@@ -64,17 +64,18 @@
bool isValid() const { return !m_constructionError && m_flags != InvalidFlags; }
const char* errorMessage() const { return m_constructionError; }
- JS_EXPORT_PRIVATE int match(VM&, const String&, unsigned startOffset, Vector<int, 32>& ovector);
+ JS_EXPORT_PRIVATE int match(VM&, const String&, unsigned startOffset, Vector<int>& ovector);
// Returns false if we couldn't run the regular expression for any reason.
- bool matchConcurrently(VM&, const String&, unsigned startOffset, int& position, Vector<int, 32>& ovector);
+ bool matchConcurrently(VM&, const String&, unsigned startOffset, int& position, Vector<int>& ovector);
JS_EXPORT_PRIVATE MatchResult match(VM&, const String&, unsigned startOffset);
bool matchConcurrently(VM&, const String&, unsigned startOffset, MatchResult&);
// Call these versions of the match functions if you're desperate for performance.
- int matchInline(VM&, const String&, unsigned startOffset, Vector<int, 32>& ovector);
+ template<typename VectorType>
+ int matchInline(VM&, const String&, unsigned startOffset, VectorType& ovector);
MatchResult matchInline(VM&, const String&, unsigned startOffset);
unsigned numSubpatterns() const { return m_numSubpatterns; }
diff --git a/Source/JavaScriptCore/runtime/RegExpConstructor.h b/Source/JavaScriptCore/runtime/RegExpConstructor.h
index 2536525..50c8b5e 100644
--- a/Source/JavaScriptCore/runtime/RegExpConstructor.h
+++ b/Source/JavaScriptCore/runtime/RegExpConstructor.h
@@ -80,7 +80,7 @@
RegExpCachedResult m_cachedResult;
bool m_multiline;
- Vector<int, 32> m_ovector;
+ Vector<int> m_ovector;
};
RegExpConstructor* asRegExpConstructor(JSValue);
diff --git a/Source/JavaScriptCore/runtime/RegExpInlines.h b/Source/JavaScriptCore/runtime/RegExpInlines.h
index 295d931..5b0db95 100644
--- a/Source/JavaScriptCore/runtime/RegExpInlines.h
+++ b/Source/JavaScriptCore/runtime/RegExpInlines.h
@@ -94,7 +94,8 @@
compile(&vm, charSize);
}
-ALWAYS_INLINE int RegExp::matchInline(VM& vm, const String& s, unsigned startOffset, Vector<int, 32>& ovector)
+template<typename VectorType>
+ALWAYS_INLINE int RegExp::matchInline(VM& vm, const String& s, unsigned startOffset, VectorType& ovector)
{
#if ENABLE(REGEXP_TRACING)
m_rtMatchCallCount++;
diff --git a/Source/JavaScriptCore/runtime/RegExpMatchesArray.h b/Source/JavaScriptCore/runtime/RegExpMatchesArray.h
index 7df8c73..a69a151 100644
--- a/Source/JavaScriptCore/runtime/RegExpMatchesArray.h
+++ b/Source/JavaScriptCore/runtime/RegExpMatchesArray.h
@@ -34,17 +34,20 @@
ALWAYS_INLINE JSArray* tryCreateUninitializedRegExpMatchesArray(VM& vm, Structure* structure, unsigned initialLength)
{
- unsigned vectorLength = std::max(BASE_VECTOR_LEN, initialLength);
+ unsigned vectorLength = initialLength;
if (vectorLength > MAX_STORAGE_VECTOR_LENGTH)
return 0;
- void* temp;
- if (!vm.heap.tryAllocateStorage(0, Butterfly::totalSize(0, structure->outOfLineCapacity(), true, vectorLength * sizeof(EncodedJSValue)), &temp))
- return 0;
+ void* temp = vm.heap.tryAllocateAuxiliary(nullptr, Butterfly::totalSize(0, structure->outOfLineCapacity(), true, vectorLength * sizeof(EncodedJSValue)));
+ if (!temp)
+ return nullptr;
Butterfly* butterfly = Butterfly::fromBase(temp, 0, structure->outOfLineCapacity());
butterfly->setVectorLength(vectorLength);
butterfly->setPublicLength(initialLength);
-
+
+ for (unsigned i = initialLength; i < vectorLength; ++i)
+ butterfly->contiguous()[i].clear();
+
return JSArray::createWithButterfly(vm, structure, butterfly);
}
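This helper shows the new allocation idiom in miniature: tryAllocateAuxiliary returns the pointer, or nullptr on failure, replacing tryAllocateStorage's CheckedBoolean-plus-out-parameter style, and the caller clears the slots between initialLength and vectorLength itself because nothing pre-zeroes auxiliary memory. The shape of the idiom:

    void* base = vm.heap.tryAllocateAuxiliary(nullptr, bytesNeeded);
    if (!base)
        return nullptr; // allocation failure is a plain null now
    // ... lay out the butterfly in base, then clear every slot that is not
    // written immediately, before the object can be scanned.

Note also that vectorLength no longer gets padded to BASE_VECTOR_LEN here; rounding up to a useful size is the allocator's job now.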
@@ -67,40 +70,54 @@
// FIXME: This should handle array allocation errors gracefully.
// https://bugs.webkit.org/show_bug.cgi?id=155144
+ auto setProperties = [&] () {
+ array->putDirect(vm, RegExpMatchesArrayIndexPropertyOffset, jsNumber(result.start));
+ array->putDirect(vm, RegExpMatchesArrayInputPropertyOffset, input);
+ };
+
+ unsigned numSubpatterns = regExp->numSubpatterns();
+
if (UNLIKELY(globalObject->isHavingABadTime())) {
- array = JSArray::tryCreateUninitialized(vm, globalObject->regExpMatchesArrayStructure(), regExp->numSubpatterns() + 1);
+ array = JSArray::tryCreateUninitialized(vm, globalObject->regExpMatchesArrayStructure(), numSubpatterns + 1);
+
+ setProperties();
+
+ array->initializeIndex(vm, 0, jsUndefined());
+
+ for (unsigned i = 1; i <= numSubpatterns; ++i)
+ array->initializeIndex(vm, i, jsUndefined());
+
+ // Now the object is safe to scan by GC.
array->initializeIndex(vm, 0, jsSubstringOfResolved(vm, input, result.start, result.end - result.start));
- if (unsigned numSubpatterns = regExp->numSubpatterns()) {
- for (unsigned i = 1; i <= numSubpatterns; ++i) {
- int start = subpatternResults[2 * i];
- if (start >= 0)
- array->initializeIndex(vm, i, JSRopeString::createSubstringOfResolved(vm, input, start, subpatternResults[2 * i + 1] - start));
- else
- array->initializeIndex(vm, i, jsUndefined());
- }
+ for (unsigned i = 1; i <= numSubpatterns; ++i) {
+ int start = subpatternResults[2 * i];
+ if (start >= 0)
+ array->initializeIndex(vm, i, JSRopeString::createSubstringOfResolved(vm, input, start, subpatternResults[2 * i + 1] - start));
}
} else {
- array = tryCreateUninitializedRegExpMatchesArray(vm, globalObject->regExpMatchesArrayStructure(), regExp->numSubpatterns() + 1);
+ array = tryCreateUninitializedRegExpMatchesArray(vm, globalObject->regExpMatchesArrayStructure(), numSubpatterns + 1);
RELEASE_ASSERT(array);
+ setProperties();
+
+ array->initializeIndex(vm, 0, jsUndefined(), ArrayWithContiguous);
+
+ for (unsigned i = 1; i <= numSubpatterns; ++i)
+ array->initializeIndex(vm, i, jsUndefined(), ArrayWithContiguous);
+
+ // Now the object is safe to scan by GC.
+
array->initializeIndex(vm, 0, jsSubstringOfResolved(vm, input, result.start, result.end - result.start), ArrayWithContiguous);
- if (unsigned numSubpatterns = regExp->numSubpatterns()) {
- for (unsigned i = 1; i <= numSubpatterns; ++i) {
- int start = subpatternResults[2 * i];
- if (start >= 0)
- array->initializeIndex(vm, i, JSRopeString::createSubstringOfResolved(vm, input, start, subpatternResults[2 * i + 1] - start), ArrayWithContiguous);
- else
- array->initializeIndex(vm, i, jsUndefined(), ArrayWithContiguous);
- }
+ for (unsigned i = 1; i <= numSubpatterns; ++i) {
+ int start = subpatternResults[2 * i];
+ if (start >= 0)
+ array->initializeIndex(vm, i, JSRopeString::createSubstringOfResolved(vm, input, start, subpatternResults[2 * i + 1] - start), ArrayWithContiguous);
}
}
- array->putDirect(vm, RegExpMatchesArrayIndexPropertyOffset, jsNumber(result.start));
- array->putDirect(vm, RegExpMatchesArrayInputPropertyOffset, input);
-
return array;
}
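
Both branches above now follow the same two-phase shape: fill every index with jsUndefined() first, and only then compute the real substrings, since jsSubstringOfResolved() can allocate and therefore trigger a GC that scans the half-built array. A sketch of the idiom with hypothetical stand-ins:

    #include <vector>

    struct Value { long bits = 0; };                         // undefined stand-in
    Value expensiveValue(unsigned i) { return { long(i) }; } // may "allocate"

    std::vector<Value> buildArray(unsigned length)
    {
        std::vector<Value> array(length);   // phase 1: every slot is scannable
        for (unsigned i = 0; i < length; ++i)
            array[i] = expensiveValue(i);   // phase 2: overwrites may GC safely
        return array;
    }
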
diff --git a/Source/JavaScriptCore/runtime/RegExpPrototype.cpp b/Source/JavaScriptCore/runtime/RegExpPrototype.cpp
index 4f9758d..919fd70 100644
--- a/Source/JavaScriptCore/runtime/RegExpPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/RegExpPrototype.cpp
@@ -503,12 +503,14 @@
unsigned& matchPosition, bool regExpIsSticky, bool regExpIsUnicode,
const ControlFunc& control, const PushFunc& push)
{
+ Vector<int> ovector;
+
while (matchPosition < inputSize) {
if (control() == AbortSplit)
return;
- Vector<int, 32> ovector;
-
+ ovector.resize(0);
+
// a. Perform ? Set(splitter, "lastIndex", q, true).
// b. Let z be ? RegExpExec(splitter, S).
int mpos = regexp->match(vm, input, matchPosition, ovector);
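
Hoisting ovector out of the loop trades the old per-iteration Vector<int, 32> (re-created on every pass) for one buffer whose capacity survives across matches; resize(0) empties it without releasing storage. The same reuse pattern in miniature:

    #include <vector>

    void processAll(const std::vector<int>& sizes)
    {
        std::vector<int> scratch;            // hoisted: allocated at most once
        for (int n : sizes) {
            scratch.resize(0);               // keeps capacity(), drops contents
            for (int i = 0; i < n; ++i)
                scratch.push_back(i);
            // ... use scratch as this iteration's ovector ...
        }
    }
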
diff --git a/Source/JavaScriptCore/runtime/RuntimeType.cpp b/Source/JavaScriptCore/runtime/RuntimeType.cpp
index 6ef504b..dfd8cc9 100644
--- a/Source/JavaScriptCore/runtime/RuntimeType.cpp
+++ b/Source/JavaScriptCore/runtime/RuntimeType.cpp
@@ -28,8 +28,7 @@
#include "config.h"
#include "RuntimeType.h"
-#include "JSCJSValue.h"
-#include "JSCJSValueInlines.h"
+#include "JSCInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/SamplingProfiler.cpp b/Source/JavaScriptCore/runtime/SamplingProfiler.cpp
index 0ba0927..81f7a7d 100644
--- a/Source/JavaScriptCore/runtime/SamplingProfiler.cpp
+++ b/Source/JavaScriptCore/runtime/SamplingProfiler.cpp
@@ -33,6 +33,7 @@
#include "Executable.h"
#include "HeapInlines.h"
#include "HeapIterationScope.h"
+#include "HeapUtil.h"
#include "InlineCallFrame.h"
#include "Interpreter.h"
#include "JSCJSValueInlines.h"
@@ -357,7 +358,6 @@
RELEASE_ASSERT(m_lock.isLocked());
TinyBloomFilter filter = m_vm.heap.objectSpace().blocks().filter();
- MarkedBlockSet& markedBlockSet = m_vm.heap.objectSpace().blocks();
for (UnprocessedStackTrace& unprocessedStackTrace : m_unprocessedStackTraces) {
m_stackTraces.append(StackTrace());
@@ -391,7 +391,7 @@
JSValue callee = JSValue::decode(encodedCallee);
StackFrame& stackFrame = stackTrace.frames.last();
bool alreadyHasExecutable = !!stackFrame.executable;
- if (!Heap::isValueGCObject(filter, markedBlockSet, callee)) {
+ if (!HeapUtil::isValueGCObject(m_vm.heap, filter, callee)) {
if (!alreadyHasExecutable)
stackFrame.frameType = FrameType::Unknown;
return;
@@ -436,7 +436,7 @@
return;
}
- RELEASE_ASSERT(Heap::isPointerGCObject(filter, markedBlockSet, executable));
+ RELEASE_ASSERT(HeapUtil::isPointerGCObjectJSCell(m_vm.heap, filter, executable));
stackFrame.frameType = FrameType::Executable;
stackFrame.executable = executable;
m_liveCellPointers.add(executable);
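
SamplingProfiler now hands HeapUtil the heap itself plus the prefetched TinyBloomFilter instead of threading a MarkedBlockSet through, which presumably lets the checks also account for cells in large allocations. The general filter-then-precise shape, assuming a Bloom-style bitset (not the real TinyBloomFilter API):

    #include <cstdint>
    #include <unordered_set>

    bool mayContain(uint64_t filterBits, const void* p)
    {
        uint64_t w = reinterpret_cast<uintptr_t>(p);
        return (filterBits & w) == w;        // cheap, conservative reject
    }

    bool isKnownCell(uint64_t filterBits, const std::unordered_set<const void*>& cells, const void* p)
    {
        if (!mayContain(filterBits, p))
            return false;                    // fast path: definitely not a cell
        return cells.count(p) != 0;          // slow path: precise membership
    }
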
diff --git a/Source/JavaScriptCore/runtime/SetConstructor.cpp b/Source/JavaScriptCore/runtime/SetConstructor.cpp
index 8c293b7..e30561c 100644
--- a/Source/JavaScriptCore/runtime/SetConstructor.cpp
+++ b/Source/JavaScriptCore/runtime/SetConstructor.cpp
@@ -29,14 +29,12 @@
#include "Error.h"
#include "GetterSetter.h"
#include "IteratorOperations.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSGlobalObject.h"
#include "JSObjectInlines.h"
#include "JSSet.h"
#include "MapData.h"
#include "SetPrototype.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/SetIteratorPrototype.cpp b/Source/JavaScriptCore/runtime/SetIteratorPrototype.cpp
index 1e92e79..5a12c68 100644
--- a/Source/JavaScriptCore/runtime/SetIteratorPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/SetIteratorPrototype.cpp
@@ -27,10 +27,8 @@
#include "SetIteratorPrototype.h"
#include "IteratorOperations.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSSetIterator.h"
-#include "StructureInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/SetPrototype.cpp b/Source/JavaScriptCore/runtime/SetPrototype.cpp
index f893ec5..80046d6 100644
--- a/Source/JavaScriptCore/runtime/SetPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/SetPrototype.cpp
@@ -31,13 +31,11 @@
#include "ExceptionHelpers.h"
#include "GetterSetter.h"
#include "IteratorOperations.h"
-#include "JSCJSValueInlines.h"
-#include "JSFunctionInlines.h"
+#include "JSCInlines.h"
#include "JSSet.h"
#include "JSSetIterator.h"
#include "Lookup.h"
#include "MapDataInlines.h"
-#include "StructureInlines.h"
#include "SetPrototype.lut.h"
diff --git a/Source/JavaScriptCore/runtime/StackFrame.cpp b/Source/JavaScriptCore/runtime/StackFrame.cpp
new file mode 100644
index 0000000..ea7ec5e
--- /dev/null
+++ b/Source/JavaScriptCore/runtime/StackFrame.cpp
@@ -0,0 +1,119 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "StackFrame.h"
+
+#include "CodeBlock.h"
+#include "DebuggerPrimitives.h"
+#include "JSCInlines.h"
+#include <wtf/text/StringBuilder.h>
+
+namespace JSC {
+
+intptr_t StackFrame::sourceID() const
+{
+ if (!codeBlock)
+ return noSourceID;
+ return codeBlock->ownerScriptExecutable()->sourceID();
+}
+
+String StackFrame::sourceURL() const
+{
+ if (!codeBlock)
+ return ASCIILiteral("[native code]");
+
+ String sourceURL = codeBlock->ownerScriptExecutable()->sourceURL();
+ if (!sourceURL.isNull())
+ return sourceURL;
+ return emptyString();
+}
+
+String StackFrame::functionName(VM& vm) const
+{
+ if (codeBlock) {
+ switch (codeBlock->codeType()) {
+ case EvalCode:
+ return ASCIILiteral("eval code");
+ case ModuleCode:
+ return ASCIILiteral("module code");
+ case FunctionCode:
+ break;
+ case GlobalCode:
+ return ASCIILiteral("global code");
+ default:
+ ASSERT_NOT_REACHED();
+ }
+ }
+ String name;
+ if (callee)
+ name = getCalculatedDisplayName(vm, callee.get()).impl();
+ return name.isNull() ? emptyString() : name;
+}
+
+void StackFrame::computeLineAndColumn(unsigned& line, unsigned& column) const
+{
+ if (!codeBlock) {
+ line = 0;
+ column = 0;
+ return;
+ }
+
+ int divot = 0;
+ int unusedStartOffset = 0;
+ int unusedEndOffset = 0;
+ codeBlock->expressionRangeForBytecodeOffset(bytecodeOffset, divot, unusedStartOffset, unusedEndOffset, line, column);
+
+ ScriptExecutable* executable = codeBlock->ownerScriptExecutable();
+ if (executable->hasOverrideLineNumber())
+ line = executable->overrideLineNumber();
+}
+
+String StackFrame::toString(VM& vm) const
+{
+ StringBuilder traceBuild;
+ String functionName = this->functionName(vm);
+ String sourceURL = this->sourceURL();
+ traceBuild.append(functionName);
+ if (!sourceURL.isEmpty()) {
+ if (!functionName.isEmpty())
+ traceBuild.append('@');
+ traceBuild.append(sourceURL);
+ if (codeBlock) {
+ unsigned line;
+ unsigned column;
+ computeLineAndColumn(line, column);
+
+ traceBuild.append(':');
+ traceBuild.appendNumber(line);
+ traceBuild.append(':');
+ traceBuild.appendNumber(column);
+ }
+ }
+ return traceBuild.toString().impl();
+}
+
+} // namespace JSC
+
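
For reference, toString() yields "functionName@sourceURL:line:column", dropping whichever pieces are unavailable. A plain-C++ sketch of the same formatting, with hypothetical inputs:

    #include <string>

    std::string formatFrame(const std::string& functionName, const std::string& sourceURL,
        bool hasCodeBlock, unsigned line, unsigned column)
    {
        std::string out = functionName;
        if (!sourceURL.empty()) {
            if (!functionName.empty())
                out += '@';
            out += sourceURL;
            if (hasCodeBlock)
                out += ':' + std::to_string(line) + ':' + std::to_string(column);
        }
        return out;                          // e.g. "foo@bar.js:10:5"
    }
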
diff --git a/Source/JavaScriptCore/runtime/StackFrame.h b/Source/JavaScriptCore/runtime/StackFrame.h
new file mode 100644
index 0000000..3c137c1
--- /dev/null
+++ b/Source/JavaScriptCore/runtime/StackFrame.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "Strong.h"
+
+namespace JSC {
+
+class CodeBlock;
+class JSObject;
+
+struct StackFrame {
+ Strong<JSObject> callee;
+ Strong<CodeBlock> codeBlock;
+ unsigned bytecodeOffset;
+
+ bool isNative() const { return !codeBlock; }
+
+ void computeLineAndColumn(unsigned& line, unsigned& column) const;
+ String functionName(VM&) const;
+ intptr_t sourceID() const;
+ String sourceURL() const;
+ String toString(VM&) const;
+};
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/runtime/StringConstructor.cpp b/Source/JavaScriptCore/runtime/StringConstructor.cpp
index 74d2305..92ad2dc 100644
--- a/Source/JavaScriptCore/runtime/StringConstructor.cpp
+++ b/Source/JavaScriptCore/runtime/StringConstructor.cpp
@@ -28,6 +28,7 @@
#include "JSGlobalObject.h"
#include "JSCInlines.h"
#include "StringPrototype.h"
+#include <wtf/text/StringBuilder.h>
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/StringIteratorPrototype.cpp b/Source/JavaScriptCore/runtime/StringIteratorPrototype.cpp
index 38423ba..f2e6edd 100644
--- a/Source/JavaScriptCore/runtime/StringIteratorPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/StringIteratorPrototype.cpp
@@ -27,12 +27,10 @@
#include "config.h"
#include "StringIteratorPrototype.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSGlobalObject.h"
#include "JSStringIterator.h"
#include "ObjectConstructor.h"
-#include "StructureInlines.h"
#include "StringIteratorPrototype.lut.h"
diff --git a/Source/JavaScriptCore/runtime/StructureInlines.h b/Source/JavaScriptCore/runtime/StructureInlines.h
index b943b7e..95d2369 100644
--- a/Source/JavaScriptCore/runtime/StructureInlines.h
+++ b/Source/JavaScriptCore/runtime/StructureInlines.h
@@ -242,7 +242,6 @@
ALWAYS_INLINE WriteBarrier<PropertyTable>& Structure::propertyTable()
{
- ASSERT(!globalObject() || (!globalObject()->vm().heap.isCollecting() || globalObject()->vm().heap.isHeapSnapshotting()));
return m_propertyTableUnsafe;
}
diff --git a/Source/JavaScriptCore/runtime/TemplateRegistry.cpp b/Source/JavaScriptCore/runtime/TemplateRegistry.cpp
index 579e1b6..925807a 100644
--- a/Source/JavaScriptCore/runtime/TemplateRegistry.cpp
+++ b/Source/JavaScriptCore/runtime/TemplateRegistry.cpp
@@ -26,10 +26,9 @@
#include "config.h"
#include "TemplateRegistry.h"
-#include "JSCJSValueInlines.h"
+#include "JSCInlines.h"
#include "JSGlobalObject.h"
#include "ObjectConstructor.h"
-#include "StructureInlines.h"
#include "WeakGCMapInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/TestRunnerUtils.cpp b/Source/JavaScriptCore/runtime/TestRunnerUtils.cpp
index 43aa456..516054d 100644
--- a/Source/JavaScriptCore/runtime/TestRunnerUtils.cpp
+++ b/Source/JavaScriptCore/runtime/TestRunnerUtils.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2014, 2016 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -27,7 +27,9 @@
#include "TestRunnerUtils.h"
#include "CodeBlock.h"
+#include "HeapStatistics.h"
#include "JSCInlines.h"
+#include "LLIntData.h"
namespace JSC {
@@ -150,5 +152,14 @@
return optimizeNextInvocation(exec->uncheckedArgument(0));
}
+// This is a hook called at the bitter end of some of our tests.
+void finalizeStatsAtEndOfTesting()
+{
+ if (Options::logHeapStatisticsAtExit())
+ HeapStatistics::reportSuccess();
+ if (Options::reportLLIntStats())
+ LLInt::Data::finalizeStats();
+}
+
} // namespace JSC
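
finalizeStatsAtEndOfTesting() is a flag-gated exit hook: each subsystem reports only when its Option is set. A self-contained sketch of that shape, with environment variables standing in for the Options flags:

    #include <cstdio>
    #include <cstdlib>

    // Hypothetical stand-in for Options::...() flags.
    static bool flagEnabled(const char* name) { return std::getenv(name) != nullptr; }

    // Called once at the end of a test run; each subsystem reports only if asked.
    void reportEndOfTestStats()
    {
        if (flagEnabled("LOG_HEAP_STATS"))
            std::puts("heap: statistics reported");
        if (flagEnabled("REPORT_LLINT_STATS"))
            std::puts("llint: statistics finalized");
    }
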
diff --git a/Source/JavaScriptCore/runtime/TestRunnerUtils.h b/Source/JavaScriptCore/runtime/TestRunnerUtils.h
index 14658d6..875fbb6 100644
--- a/Source/JavaScriptCore/runtime/TestRunnerUtils.h
+++ b/Source/JavaScriptCore/runtime/TestRunnerUtils.h
@@ -53,6 +53,8 @@
JS_EXPORT_PRIVATE unsigned numberOfStaticOSRExitFuzzChecks();
JS_EXPORT_PRIVATE unsigned numberOfOSRExitFuzzChecks();
+JS_EXPORT_PRIVATE void finalizeStatsAtEndOfTesting();
+
} // namespace JSC
#endif // TestRunnerUtils_h
diff --git a/Source/JavaScriptCore/runtime/ThrowScope.cpp b/Source/JavaScriptCore/runtime/ThrowScope.cpp
index 57fd99a..09c7c03 100644
--- a/Source/JavaScriptCore/runtime/ThrowScope.cpp
+++ b/Source/JavaScriptCore/runtime/ThrowScope.cpp
@@ -26,7 +26,7 @@
#include "config.h"
#include "ThrowScope.h"
-#include "JSCJSValueInlines.h"
+#include "JSCInlines.h"
#include "VM.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/TypeProfilerLog.cpp b/Source/JavaScriptCore/runtime/TypeProfilerLog.cpp
index 9d10aae..60d14e4 100644
--- a/Source/JavaScriptCore/runtime/TypeProfilerLog.cpp
+++ b/Source/JavaScriptCore/runtime/TypeProfilerLog.cpp
@@ -29,7 +29,7 @@
#include "config.h"
#include "TypeProfilerLog.h"
-#include "JSCJSValueInlines.h"
+#include "JSCInlines.h"
#include "TypeLocation.h"
#include <wtf/CurrentTime.h>
diff --git a/Source/JavaScriptCore/runtime/TypeSet.cpp b/Source/JavaScriptCore/runtime/TypeSet.cpp
index d923888..a6afbbd 100644
--- a/Source/JavaScriptCore/runtime/TypeSet.cpp
+++ b/Source/JavaScriptCore/runtime/TypeSet.cpp
@@ -27,8 +27,7 @@
#include "TypeSet.h"
#include "InspectorProtocolObjects.h"
-#include "JSCJSValue.h"
-#include "JSCJSValueInlines.h"
+#include "JSCInlines.h"
#include <wtf/text/CString.h>
#include <wtf/text/WTFString.h>
#include <wtf/text/StringBuilder.h>
diff --git a/Source/JavaScriptCore/runtime/VM.cpp b/Source/JavaScriptCore/runtime/VM.cpp
index c30122c..1bee77f 100644
--- a/Source/JavaScriptCore/runtime/VM.cpp
+++ b/Source/JavaScriptCore/runtime/VM.cpp
@@ -69,6 +69,7 @@
#include "JSPropertyNameEnumerator.h"
#include "JSTemplateRegistryKey.h"
#include "JSWithScope.h"
+#include "LLIntData.h"
#include "Lexer.h"
#include "Lookup.h"
#include "MapData.h"
@@ -98,6 +99,7 @@
#include "WeakMapData.h"
#include <wtf/CurrentTime.h>
#include <wtf/ProcessID.h>
+#include <wtf/SimpleStats.h>
#include <wtf/StringPrintStream.h>
#include <wtf/Threading.h>
#include <wtf/WTFThreadData.h>
@@ -106,6 +108,7 @@
#if !ENABLE(JIT)
#include "CLoopStack.h"
+#include "CLoopStackInlines.h"
#endif
#if ENABLE(DFG_JIT)
@@ -162,6 +165,7 @@
, m_atomicStringTable(vmType == Default ? wtfThreadData().atomicStringTable() : new AtomicStringTable)
, propertyNames(nullptr)
, emptyList(new MarkedArgumentBuffer)
+ , machineCodeBytesPerBytecodeWordForBaselineJIT(std::make_unique<SimpleStats>())
, customGetterSetterFunctionMap(*this)
, stringCache(*this)
, symbolImplToSymbolMap(*this)
@@ -873,4 +877,16 @@
#endif
}
+#if !ENABLE(JIT)
+bool VM::ensureStackCapacityForCLoop(Register* newTopOfStack)
+{
+ return interpreter->cloopStack().ensureCapacityFor(newTopOfStack);
+}
+
+bool VM::isSafeToRecurseSoftCLoop() const
+{
+ return interpreter->cloopStack().isSafeToRecurse();
+}
+#endif // !ENABLE(JIT)
+
} // namespace JSC
diff --git a/Source/JavaScriptCore/runtime/VM.h b/Source/JavaScriptCore/runtime/VM.h
index d6f4e80..eabc4eb 100644
--- a/Source/JavaScriptCore/runtime/VM.h
+++ b/Source/JavaScriptCore/runtime/VM.h
@@ -39,7 +39,6 @@
#include "JITThunks.h"
#include "JSCJSValue.h"
#include "JSLock.h"
-#include "LLIntData.h"
#include "MacroAssemblerCodeRef.h"
#include "Microtask.h"
#include "NumericStrings.h"
@@ -60,7 +59,6 @@
#include <wtf/Forward.h>
#include <wtf/HashMap.h>
#include <wtf/HashSet.h>
-#include <wtf/SimpleStats.h>
#include <wtf/StackBounds.h>
#include <wtf/Stopwatch.h>
#include <wtf/ThreadSafeRefCounted.h>
@@ -72,6 +70,11 @@
#include <wtf/ListHashSet.h>
#endif
+namespace WTF {
+class SimpleStats;
+} // namespace WTF
+using WTF::SimpleStats;
+
namespace JSC {
class BuiltinExecutables;
@@ -342,7 +345,7 @@
SmallStrings smallStrings;
NumericStrings numericStrings;
DateInstanceCache dateInstanceCache;
- WTF::SimpleStats machineCodeBytesPerBytecodeWordForBaselineJIT;
+ std::unique_ptr<SimpleStats> machineCodeBytesPerBytecodeWordForBaselineJIT;
WeakGCMap<std::pair<CustomGetterSetter*, int>, JSCustomGetterSetterFunction> customGetterSetterFunctionMap;
WeakGCMap<StringImpl*, JSString, PtrHash<StringImpl*>> stringCache;
Strong<JSString> lastCachedString;
@@ -642,6 +645,11 @@
m_lastException = exception;
}
+#if !ENABLE(JIT)
+ bool ensureStackCapacityForCLoop(Register* newTopOfStack);
+ bool isSafeToRecurseSoftCLoop() const;
+#endif // !ENABLE(JIT)
+
JS_EXPORT_PRIVATE void throwException(ExecState*, Exception*);
JS_EXPORT_PRIVATE JSValue throwException(ExecState*, JSValue);
JS_EXPORT_PRIVATE JSObject* throwException(ExecState*, JSObject*);
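
The VM.h churn above is a header-dependency cut: SimpleStats is forward-declared and held through std::unique_ptr, so the many files that include VM.h stop seeing wtf/SimpleStats.h (and LLIntData.h moves to VM.cpp outright). The pattern, with hypothetical names; note that the destructor must be defined out of line, where the type is complete:

    // Widget.h
    #include <memory>
    namespace Stats { class Accumulator; }   // forward declaration only

    class Widget {
    public:
        Widget();
        ~Widget(); // defined in Widget.cpp, where Accumulator is complete
    private:
        std::unique_ptr<Stats::Accumulator> m_stats;
    };
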
diff --git a/Source/JavaScriptCore/runtime/VMEntryScope.h b/Source/JavaScriptCore/runtime/VMEntryScope.h
index 2942c36..178bc3b 100644
--- a/Source/JavaScriptCore/runtime/VMEntryScope.h
+++ b/Source/JavaScriptCore/runtime/VMEntryScope.h
@@ -26,7 +26,6 @@
#ifndef VMEntryScope_h
#define VMEntryScope_h
-#include "Interpreter.h"
#include <wtf/StackBounds.h>
#include <wtf/StackStats.h>
#include <wtf/Vector.h>
diff --git a/Source/JavaScriptCore/runtime/VMInlines.h b/Source/JavaScriptCore/runtime/VMInlines.h
index d12250d..626e479 100644
--- a/Source/JavaScriptCore/runtime/VMInlines.h
+++ b/Source/JavaScriptCore/runtime/VMInlines.h
@@ -30,10 +30,6 @@
#include "VM.h"
#include "Watchdog.h"
-#if !ENABLE(JIT)
-#include "CLoopStackInlines.h"
-#endif
-
namespace JSC {
bool VM::ensureStackCapacityFor(Register* newTopOfStack)
@@ -42,7 +38,7 @@
ASSERT(wtfThreadData().stack().isGrowingDownward());
return newTopOfStack >= m_softStackLimit;
#else
- return interpreter->cloopStack().ensureCapacityFor(newTopOfStack);
+ return ensureStackCapacityForCLoop(newTopOfStack);
#endif
}
@@ -51,7 +47,7 @@
{
bool safe = isSafeToRecurse(m_softStackLimit);
#if !ENABLE(JIT)
- safe = safe && interpreter->cloopStack().isSafeToRecurse();
+ safe = safe && isSafeToRecurseSoftCLoop();
#endif
return safe;
}
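
Same dependency-cutting move here: the CLoop-only branches now forward through out-of-line VM methods (defined in VM.cpp above), so VMInlines.h no longer needs CLoopStackInlines.h at all. In miniature, with hypothetical names:

    // Engine.h: the header declares, but never defines, the stack helper,
    // so it needs no implementation includes.
    class Stack;                             // forward declaration only

    class Engine {
    public:
        bool ensureCapacityForStack(char* newTop); // defined in Engine.cpp,
    private:                                       // which includes Stack.h
        Stack* m_stack = nullptr;
    };
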
diff --git a/Source/JavaScriptCore/runtime/WeakMapConstructor.cpp b/Source/JavaScriptCore/runtime/WeakMapConstructor.cpp
index d91818a..75343b1 100644
--- a/Source/JavaScriptCore/runtime/WeakMapConstructor.cpp
+++ b/Source/JavaScriptCore/runtime/WeakMapConstructor.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013 Apple, Inc. All rights reserved.
+ * Copyright (C) 2013, 2016 Apple, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -28,12 +28,10 @@
#include "Error.h"
#include "IteratorOperations.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSGlobalObject.h"
#include "JSObjectInlines.h"
#include "JSWeakMap.h"
-#include "StructureInlines.h"
#include "WeakMapPrototype.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/WeakMapData.cpp b/Source/JavaScriptCore/runtime/WeakMapData.cpp
index 1332e8e..a6842c9 100644
--- a/Source/JavaScriptCore/runtime/WeakMapData.cpp
+++ b/Source/JavaScriptCore/runtime/WeakMapData.cpp
@@ -29,8 +29,7 @@
#include "CopiedAllocator.h"
#include "CopyVisitorInlines.h"
#include "ExceptionHelpers.h"
-#include "JSCJSValueInlines.h"
-#include "SlotVisitorInlines.h"
+#include "JSCInlines.h"
#include <wtf/MathExtras.h>
diff --git a/Source/JavaScriptCore/runtime/WeakMapPrototype.cpp b/Source/JavaScriptCore/runtime/WeakMapPrototype.cpp
index bd40d9a..0964e83 100644
--- a/Source/JavaScriptCore/runtime/WeakMapPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/WeakMapPrototype.cpp
@@ -26,9 +26,8 @@
#include "config.h"
#include "WeakMapPrototype.h"
-#include "JSCJSValueInlines.h"
+#include "JSCInlines.h"
#include "JSWeakMap.h"
-#include "StructureInlines.h"
#include "WeakMapData.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/WeakSetConstructor.cpp b/Source/JavaScriptCore/runtime/WeakSetConstructor.cpp
index 56c55ce..270e7c7 100644
--- a/Source/JavaScriptCore/runtime/WeakSetConstructor.cpp
+++ b/Source/JavaScriptCore/runtime/WeakSetConstructor.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2015 Apple, Inc. All rights reserved.
+ * Copyright (C) 2015-2016 Apple, Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -28,12 +28,10 @@
#include "Error.h"
#include "IteratorOperations.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
+#include "JSCInlines.h"
#include "JSGlobalObject.h"
#include "JSObjectInlines.h"
#include "JSWeakSet.h"
-#include "StructureInlines.h"
#include "WeakSetPrototype.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/runtime/WeakSetPrototype.cpp b/Source/JavaScriptCore/runtime/WeakSetPrototype.cpp
index b739057..dc756f1 100644
--- a/Source/JavaScriptCore/runtime/WeakSetPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/WeakSetPrototype.cpp
@@ -26,9 +26,8 @@
#include "config.h"
#include "WeakSetPrototype.h"
-#include "JSCJSValueInlines.h"
+#include "JSCInlines.h"
#include "JSWeakSet.h"
-#include "StructureInlines.h"
#include "WeakMapData.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/testRegExp.cpp b/Source/JavaScriptCore/testRegExp.cpp
index 7f29aea..6d6f280 100644
--- a/Source/JavaScriptCore/testRegExp.cpp
+++ b/Source/JavaScriptCore/testRegExp.cpp
@@ -191,7 +191,7 @@
static bool testOneRegExp(VM& vm, RegExp* regexp, RegExpTest* regExpTest, bool verbose, unsigned int lineNumber)
{
bool result = true;
- Vector<int, 32> outVector;
+ Vector<int> outVector;
outVector.resize(regExpTest->expectVector.size());
int matchResult = regexp->match(vm, regExpTest->subject, regExpTest->offset, outVector);
diff --git a/Source/JavaScriptCore/tools/JSDollarVM.cpp b/Source/JavaScriptCore/tools/JSDollarVM.cpp
index ee5c3d1..ae38973 100644
--- a/Source/JavaScriptCore/tools/JSDollarVM.cpp
+++ b/Source/JavaScriptCore/tools/JSDollarVM.cpp
@@ -26,8 +26,7 @@
#include "config.h"
#include "JSDollarVM.h"
-#include "JSCJSValueInlines.h"
-#include "StructureInlines.h"
+#include "JSCInlines.h"
namespace JSC {
diff --git a/Source/JavaScriptCore/tools/JSDollarVMPrototype.cpp b/Source/JavaScriptCore/tools/JSDollarVMPrototype.cpp
index 1e8b092..752b6ff 100644
--- a/Source/JavaScriptCore/tools/JSDollarVMPrototype.cpp
+++ b/Source/JavaScriptCore/tools/JSDollarVMPrototype.cpp
@@ -146,7 +146,13 @@
bool JSDollarVMPrototype::isInObjectSpace(Heap* heap, void* ptr)
{
MarkedBlock* candidate = MarkedBlock::blockFor(ptr);
- return heap->objectSpace().blocks().set().contains(candidate);
+ if (heap->objectSpace().blocks().set().contains(candidate))
+ return true;
+ for (LargeAllocation* allocation : heap->objectSpace().largeAllocations()) {
+ if (allocation->contains(ptr))
+ return true;
+ }
+ return false;
}
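
isInObjectSpace() now has to recognize both tiers of MarkedSpace: cells inside MarkedBlocks and cells that are their own LargeAllocation. A generic two-tier membership sketch, with an assumed 16KB block size and stand-in containers:

    #include <cstddef>
    #include <cstdint>
    #include <set>
    #include <vector>

    constexpr uintptr_t blockMask = ~uintptr_t((16 * 1024) - 1); // assumed block size

    struct Span {
        char* base;
        std::size_t size;
        bool contains(void* p) const
        {
            char* c = static_cast<char*>(p);
            return c >= base && c < base + size;
        }
    };

    bool isInHeap(const std::set<uintptr_t>& blocks, const std::vector<Span>& large, void* p)
    {
        if (blocks.count(reinterpret_cast<uintptr_t>(p) & blockMask))
            return true;                     // tier 1: inside a marked block
        for (const Span& s : large) {
            if (s.contains(p))
                return true;                 // tier 2: a large allocation
        }
        return false;
    }
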
bool JSDollarVMPrototype::isInStorageSpace(Heap* heap, void* ptr)