fourthTier: DFG should be able to run on a separate thread
https://bugs.webkit.org/show_bug.cgi?id=112839
Source/JavaScriptCore:
Reviewed by Geoffrey Garen.
This is the final bit of concurrent JITing. The idea is that there is a
single global worklist, and a single global thread, that does all
optimizing compilation. This is the DFG::Worklist. It contains a queue of
DFG::Plans, and a map from CodeBlock* (the baseline code block we're
trying to optimize) to DFG::Plan. If the DFGDriver tries to concurrently
compile something, it puts the Plan on the Worklist. The Worklist's
thread will compile that Plan eventually, and when it's done, it will
signal its completion by (1) notifying anyone waiting for the Worklist to
be done, and (2) forcing the CodeBlock::m_jitExecuteCounter to take slow
path. The next Baseline JIT cti_optimize call will then install all ready
(i.e. compiled) Plans for that VM. Note that (1) is only for the GC and
VM shutdown, which will want to ensure that there aren't any outstanding
async compilations before proceeding. They do so by simply waiting for
all of the plans for the current VM to complete. (2) is the actual way
that code typically gets installed.
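The queue-plus-map shape described above can be sketched as follows. This is a minimal, hypothetical worklist with one worker thread; `Plan::compile()` and the `int` key (standing in for the baseline `CodeBlock*`) are illustrative stand-ins, not JSC's actual `DFG::Worklist` API:

```cpp
#include <condition_variable>
#include <deque>
#include <map>
#include <memory>
#include <mutex>
#include <thread>

// Hypothetical stand-in for DFG::Plan; the real type carries far more state.
struct Plan {
    int key = 0;             // stands in for the CodeBlock* being optimized
    bool compiled = false;   // set by the worker once compilation finishes
    void compile() { compiled = true; } // real work would run the DFG pipeline
};

class Worklist {
public:
    Worklist() : m_thread([this] { runThread(); }) {}
    ~Worklist()
    {
        {
            std::lock_guard<std::mutex> lock(m_lock);
            m_shuttingDown = true;
        }
        m_condition.notify_all();
        m_thread.join();
    }

    void enqueue(std::shared_ptr<Plan> plan)
    {
        std::lock_guard<std::mutex> lock(m_lock);
        m_map[plan->key] = plan;       // map from key to plan, for state queries
        m_queue.push_back(plan);       // FIFO of plans awaiting compilation
        m_condition.notify_all();
    }

    // Analogous in spirit to waitUntilAllPlansForVMAreReady(): block until
    // the queue drains and the worker is idle.
    void waitUntilAllPlansAreReady()
    {
        std::unique_lock<std::mutex> lock(m_lock);
        m_condition.wait(lock, [this] { return m_queue.empty() && !m_busy; });
    }

    bool isCompiled(int key)
    {
        std::lock_guard<std::mutex> lock(m_lock);
        auto it = m_map.find(key);
        return it != m_map.end() && it->second->compiled;
    }

private:
    void runThread()
    {
        for (;;) {
            std::shared_ptr<Plan> plan;
            {
                std::unique_lock<std::mutex> lock(m_lock);
                m_condition.wait(lock, [this] { return m_shuttingDown || !m_queue.empty(); });
                if (m_shuttingDown)
                    return;
                plan = m_queue.front();
                m_queue.pop_front();
                m_busy = true;
            }
            plan->compile(); // compile outside the lock, like the real thread
            {
                std::lock_guard<std::mutex> lock(m_lock);
                m_busy = false;
            }
            m_condition.notify_all(); // (1): wake anyone waiting for completion
        }
    }

    std::mutex m_lock;
    std::condition_variable m_condition;
    std::deque<std::shared_ptr<Plan>> m_queue;
    std::map<int, std::shared_ptr<Plan>> m_map;
    bool m_shuttingDown = false;
    bool m_busy = false;
    std::thread m_thread; // last member: started after the state it uses exists
};
```

Step (2), forcing the execute counter, has no analogue in this sketch; here a caller polls `isCompiled()` instead.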
This is all very racy by design. For example, just as we try to force the
execute counter to take slow path, the main thread may be setting the
execute counter to some other value. The main thread must set it to
another value because (a) JIT code is constantly incrementing the counter
in a racy way, (b) the cti_optimize slow path will set it to some
large-ish negative value to ensure that cti_optimize isn't called
repeatedly, and (c) OSR exits from previously jettisoned code blocks may
still want to reset the counter values. This "race" is made benign by
ensuring that while there is an asynchronous compilation, we at worst set
the counter to optimizeAfterWarmUp and never to deferIndefinitely. Hence
if the race happens then the worst case is that we wait another ~1000
counts before installing the optimized code. Another defense is that if
any CodeBlock calls into cti_optimize, then it will check for all ready
plans for the VM - so even if a code block has to wait another ~1000
executions before it calls cti_optimize to do the installation, it may
actually end up being installed sooner because a different code block had
called cti_optimize, potentially for an unrelated reason.
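The counter discipline above can be illustrated with a small sketch. The types and constants here are hypothetical (the real logic lives in ExecutionCounter::forceSlowPathConcurrently() and setThreshold(), and the threshold values below are made up for illustration):

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical execution counter. JIT code increments it racily; the slow
// path is taken once it becomes non-negative.
struct ExecutionCounter {
    static constexpr int32_t deferIndefinitely = INT32_MIN; // illustrative
    static constexpr int32_t optimizeAfterWarmUp = -1000;   // illustrative

    std::atomic<int32_t> counter { optimizeAfterWarmUp };
    std::atomic<bool> compilationPending { false };

    // Compiler thread: make the very next increment cross the threshold.
    void forceSlowPathConcurrently()
    {
        compilationPending.store(true);
        counter.store(0); // next racy increment goes non-negative -> slow path
    }

    // Main thread: this races with the store above. The race is benign
    // because, while a compile is pending, we never park the counter at
    // deferIndefinitely -- at worst we re-arm optimizeAfterWarmUp, delaying
    // installation by ~1000 counts.
    void setNewThreshold(int32_t threshold)
    {
        if (compilationPending.load() && threshold == deferIndefinitely)
            threshold = optimizeAfterWarmUp;
        counter.store(threshold);
    }

    // JIT code path: racy increment; returns true when the slow path fires.
    bool incrementAndCheck()
    {
        return counter.fetch_add(1) + 1 >= 0;
    }
};
```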
Special care is taken to ensure that installing plans informs the GC
about the increased memory usage, but also ensures that we don't recurse
infinitely - since at start of GC we try to install outstanding plans.
This is done by introducing a new GC deferral mechanism (the DeferGC
block-scoped thingy), which will ensure that GCs don't happen in the
scope but are allowed to happen after. This still leaves the strange
corner case that cti_optimize may install outstanding plans, then GC, and
that GC may jettison the code block that was installed. This, and the
fact that the plan that we took slow path to install could have been a
failed or invalid compile, mean that we have to take special precautions
in cti_optimize.
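The deferral mechanism might look roughly like this. The Heap below is a stand-in with just a deferral depth and a pending-GC flag, not JSC::Heap; only the DeferGC shape mirrors the patch:

```cpp
#include <cassert>

// Stand-in heap: counts deferrals and remembers a GC requested in-scope.
class Heap {
public:
    void incrementDeferralDepth() { m_deferralDepth++; }
    void decrementDeferralDepthAndGCIfNeeded()
    {
        assert(m_deferralDepth > 0);
        if (!--m_deferralDepth && m_gcRequested)
            collect(); // the GC deferred inside the scope runs now
    }
    void collectIfNecessaryOrDefer()
    {
        if (m_deferralDepth) {
            m_gcRequested = true; // don't GC in scope; run it afterward
            return;
        }
        collect();
    }
    unsigned collectionCount() const { return m_collections; }

private:
    void collect() { m_gcRequested = false; m_collections++; }

    unsigned m_deferralDepth = 0;
    bool m_gcRequested = false;
    unsigned m_collections = 0;
};

// Block-scoped deferral: no GC can happen inside the scope, but one that was
// requested inside it is allowed to happen as soon as the scope ends.
class DeferGC {
public:
    explicit DeferGC(Heap& heap) : m_heap(heap) { m_heap.incrementDeferralDepth(); }
    ~DeferGC() { m_heap.decrementDeferralDepthAndGCIfNeeded(); }

    DeferGC(const DeferGC&) = delete;
    DeferGC& operator=(const DeferGC&) = delete;

private:
    Heap& m_heap;
};
```

The depth counter (rather than a flag) is what lets DeferGC scopes nest safely.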
This patch also fixes a number of small concurrency bugs that I found
when things started running. There are probably more of those bugs still
left to fix. This patch just fixes the ones I know about.
Concurrent compilation is right now only enabled on X86_64 Mac. We need
platforms that are sufficiently CAStastic so that we can do the various
memory fence and CAS tricks that make this safe. We also need a platform
that uses JSVALUE64. And we need pthread_once. So, that pretty much means
just X86_64 for now. Enabling Linux X86_64 should be a breeze, but I'll
leave that up to the Qt and GTK+ ports to do at their discretion.
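The pthread_once dependency is for lazily creating the single global worklist, roughly as follows (a sketch mirroring the shape of initializeGlobalWorklistOnce()/globalWorklist(); Worklist here is an empty stand-in type):

```cpp
#include <pthread.h>

struct Worklist { }; // stand-in for DFG::Worklist

static pthread_once_t initializeGlobalWorklistKeyOnce = PTHREAD_ONCE_INIT;
static Worklist* theGlobalWorklist;

static void initializeGlobalWorklistOnce()
{
    // Created once, never destroyed: it lives for the whole process.
    theGlobalWorklist = new Worklist();
}

Worklist* globalWorklist()
{
    // pthread_once guarantees exactly one initialization even if many
    // threads race to call globalWorklist() for the first time.
    pthread_once(&initializeGlobalWorklistKeyOnce, initializeGlobalWorklistOnce);
    return theGlobalWorklist;
}
```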
This is a solid speed-up on SunSpider (8-9%) and V8Spider (16%), our two
main compile-time benchmarks. Most peculiarly, this also appears to
reduce measurement noise, rather than increasing it as you would have
expected. I don't understand that result but I like it anyway. On the
other hand, this is a slight (1%) slow-down on V8v7. I will continue to
investigate this but I think that the results are already good enough
that we should land this as-is. So far, it appears that the slow-down is
due to this breaking the don't-compile-inlineables heuristics. See
investigation in https://bugs.webkit.org/show_bug.cgi?id=116556 and the
bug https://bugs.webkit.org/show_bug.cgi?id=116557.
* JavaScriptCore.xcodeproj/project.pbxproj:
* bytecode/CodeBlock.cpp:
(JSC):
(JSC::CodeBlock::finalizeUnconditionally):
(JSC::CodeBlock::resetStubInternal):
(JSC::CodeBlock::baselineVersion):
(JSC::CodeBlock::hasOptimizedReplacement):
(JSC::CodeBlock::optimizationThresholdScalingFactor):
(JSC::CodeBlock::checkIfOptimizationThresholdReached):
(JSC::CodeBlock::optimizeNextInvocation):
(JSC::CodeBlock::dontOptimizeAnytimeSoon):
(JSC::CodeBlock::optimizeAfterWarmUp):
(JSC::CodeBlock::optimizeAfterLongWarmUp):
(JSC::CodeBlock::optimizeSoon):
(JSC::CodeBlock::forceOptimizationSlowPathConcurrently):
(JSC::CodeBlock::setOptimizationThresholdBasedOnCompilationResult):
(JSC::CodeBlock::updateAllPredictionsAndCountLiveness):
(JSC::CodeBlock::updateAllArrayPredictions):
(JSC::CodeBlock::shouldOptimizeNow):
* bytecode/CodeBlock.h:
(CodeBlock):
(JSC::CodeBlock::jitCompile):
* bytecode/CodeBlockLock.h:
(JSC):
* bytecode/ExecutionCounter.cpp:
(JSC::ExecutionCounter::forceSlowPathConcurrently):
(JSC):
(JSC::ExecutionCounter::setThreshold):
* bytecode/ExecutionCounter.h:
(ExecutionCounter):
* debugger/Debugger.cpp:
(JSC::Debugger::recompileAllJSFunctions):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::injectLazyOperandSpeculation):
(JSC::DFG::ByteCodeParser::getArrayMode):
(JSC::DFG::ByteCodeParser::getArrayModeAndEmitChecks):
* dfg/DFGCommon.h:
(JSC::DFG::enableConcurrentJIT):
(DFG):
* dfg/DFGDriver.cpp:
(JSC::DFG::compile):
* dfg/DFGGraph.cpp:
(JSC::DFG::Graph::Graph):
* dfg/DFGGraph.h:
(Graph):
* dfg/DFGOSREntry.cpp:
(JSC::DFG::prepareOSREntry):
* dfg/DFGOperations.cpp:
* dfg/DFGPlan.cpp:
(JSC::DFG::Plan::Plan):
(JSC::DFG::Plan::compileInThread):
(JSC::DFG::Plan::key):
(DFG):
* dfg/DFGPlan.h:
(DFG):
(Plan):
* dfg/DFGWorklist.cpp: Added.
(DFG):
(JSC::DFG::Worklist::Worklist):
(JSC::DFG::Worklist::~Worklist):
(JSC::DFG::Worklist::finishCreation):
(JSC::DFG::Worklist::create):
(JSC::DFG::Worklist::enqueue):
(JSC::DFG::Worklist::compilationState):
(JSC::DFG::Worklist::waitUntilAllPlansForVMAreReady):
(JSC::DFG::Worklist::removeAllReadyPlansForVM):
(JSC::DFG::Worklist::completeAllReadyPlansForVM):
(JSC::DFG::Worklist::completeAllPlansForVM):
(JSC::DFG::Worklist::queueLength):
(JSC::DFG::Worklist::dump):
(JSC::DFG::Worklist::runThread):
(JSC::DFG::Worklist::threadFunction):
(JSC::DFG::initializeGlobalWorklistOnce):
(JSC::DFG::globalWorklist):
* dfg/DFGWorklist.h: Added.
(DFG):
(Worklist):
* heap/CopiedSpaceInlines.h:
(JSC::CopiedSpace::allocateBlock):
* heap/DeferGC.h: Added.
(JSC):
(DeferGC):
(JSC::DeferGC::DeferGC):
(JSC::DeferGC::~DeferGC):
* heap/Heap.cpp:
(JSC::Heap::Heap):
(JSC::Heap::reportExtraMemoryCostSlowCase):
(JSC::Heap::collectAllGarbage):
(JSC::Heap::collect):
(JSC::Heap::collectIfNecessaryOrDefer):
(JSC):
(JSC::Heap::incrementDeferralDepth):
(JSC::Heap::decrementDeferralDepthAndGCIfNeeded):
* heap/Heap.h:
(Heap):
(JSC::Heap::isCollecting):
(JSC):
* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::allocateSlowCase):
* jit/JIT.cpp:
(JSC::JIT::privateCompile):
* jit/JIT.h:
* jit/JITStubs.cpp:
(JSC::DEFINE_STUB_FUNCTION):
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::jitCompileAndSetHeuristics):
(JSC::LLInt::entryOSR):
(JSC::LLInt::LLINT_SLOW_PATH_DECL):
* profiler/ProfilerBytecodes.h:
* runtime/ConcurrentJITLock.h: Added.
(JSC):
* runtime/ExecutionHarness.h:
(JSC::replaceWithDeferredOptimizedCode):
* runtime/JSSegmentedVariableObject.cpp:
(JSC::JSSegmentedVariableObject::findRegisterIndex):
(JSC::JSSegmentedVariableObject::addRegisters):
* runtime/JSSegmentedVariableObject.h:
(JSSegmentedVariableObject):
* runtime/Options.h:
(JSC):
* runtime/Structure.h:
(Structure):
* runtime/StructureInlines.h:
(JSC::Structure::propertyTable):
* runtime/SymbolTable.h:
(SymbolTable):
* runtime/VM.cpp:
(JSC::VM::VM):
(JSC::VM::~VM):
(JSC::VM::prepareToDiscardCode):
(JSC):
(JSC::VM::discardAllCode):
(JSC::VM::releaseExecutableMemory):
* runtime/VM.h:
(DFG):
(VM):
Source/WTF:
Reviewed by Geoffrey Garen.
* wtf/ByteSpinLock.h:
Make it non-copyable. We previously had bugs where we used ByteSpinLock as a locker.
Clearly that's bad.
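The fix amounts to deleting the copy operations, so the lock can only be taken through a locker and can never be silently passed by value. A rough sketch (not WTF's exact ByteSpinLock; tryLock() is added here purely for illustration):

```cpp
#include <atomic>

// Byte-sized spin lock, made non-copyable so it can't be used where a
// locker was intended.
class ByteSpinLock {
public:
    ByteSpinLock() = default;
    ByteSpinLock(const ByteSpinLock&) = delete;
    ByteSpinLock& operator=(const ByteSpinLock&) = delete;

    void lock()
    {
        while (m_lock.exchange(true, std::memory_order_acquire)) { } // spin
    }
    bool tryLock() { return !m_lock.exchange(true, std::memory_order_acquire); }
    void unlock() { m_lock.store(false, std::memory_order_release); }

private:
    std::atomic<bool> m_lock { false };
};

// The locker type one should use instead of copying the lock.
class ByteSpinLocker {
public:
    explicit ByteSpinLocker(ByteSpinLock& lock) : m_lock(lock) { m_lock.lock(); }
    ~ByteSpinLocker() { m_lock.unlock(); }

    ByteSpinLocker(const ByteSpinLocker&) = delete;
    ByteSpinLocker& operator=(const ByteSpinLocker&) = delete;

private:
    ByteSpinLock& m_lock;
};
```

With the copy constructor deleted, the old bug (constructing a ByteSpinLock *from* a lock, locking a temporary copy) no longer compiles.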
* wtf/MetaAllocatorHandle.h:
Make it thread-safe ref-counted, since we may now be passing them between the
concurrent JIT thread and the main thread.
* wtf/Vector.h:
(WTF::Vector::takeLast):
I've wanted this method for ages, and now I've finally added it.
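In spirit, takeLast() moves the last element out and shrinks the vector by one. Sketched here as a free function over std::vector rather than as the member on WTF::Vector:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Move the last element out of the vector and pop it; the vector must be
// non-empty.
template<typename T>
T takeLast(std::vector<T>& vector)
{
    assert(!vector.empty());
    T result = std::move(vector.back());
    vector.pop_back();
    return result;
}
```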
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@153169 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index b7f63c4..bbf41a0 100644
--- a/Source/JavaScriptCore/ChangeLog
+++ b/Source/JavaScriptCore/ChangeLog
@@ -1,3 +1,215 @@
+2013-05-20 Filip Pizlo <fpizlo@apple.com>
+
+ fourthTier: DFG should be able to run on a separate thread
+ https://bugs.webkit.org/show_bug.cgi?id=112839
+
+ Reviewed by Geoffrey Garen.
+
+ This is the final bit of concurrent JITing. The idea is that there is a
+ single global worklist, and a single global thread, that does all
+ optimizing compilation. This is the DFG::Worklist. It contains a queue of
+ DFG::Plans, and a map from CodeBlock* (the baseline code block we're
+ trying to optimize) to DFG::Plan. If the DFGDriver tries to concurrently
+ compile something, it puts the Plan on the Worklist. The Worklist's
+ thread will compile that Plan eventually, and when it's done, it will
+ signal its completion by (1) notifying anyone waiting for the Worklist to
+ be done, and (2) forcing the CodeBlock::m_jitExecuteCounter to take slow
+ path. The next Baseline JIT cti_optimize call will then install all ready
+ (i.e. compiled) Plans for that VM. Note that (1) is only for the GC and
+ VM shutdown, which will want to ensure that there aren't any outstanding
+ async compilations before proceeding. They do so by simply waiting for
+ all of the plans for the current VM to complete. (2) is the actual way
+ that code typically gets installed.
+
+ This is all very racy by design. For example, just as we try to force the
+ execute counter to take slow path, the main thread may be setting the
+ execute counter to some other value. The main thread must set it to
+ another value because (a) JIT code is constantly incrementing the counter
+ in a racy way, (b) the cti_optimize slow path will set it to some
+ large-ish negative value to ensure that cti_optimize isn't called
+ repeatedly, and (c) OSR exits from previously jettisoned code blocks may
+ still want to reset the counter values. This "race" is made benign by
+ ensuring that while there is an asynchronous compilation, we at worst set
+ the counter to optimizeAfterWarmUp and never to deferIndefinitely. Hence
+ if the race happens then the worst case is that we wait another ~1000
+ counts before installing the optimized code. Another defense is that if
+ any CodeBlock calls into cti_optimize, then it will check for all ready
+ plans for the VM - so even if a code block has to wait another ~1000
+ executions before it calls cti_optimize to do the installation, it may
+ actually end up being installed sooner because a different code block had
+ called cti_optimize, potentially for an unrelated reason.
+
+ Special care is taken to ensure that installing plans informs the GC
+ about the increased memory usage, but also ensures that we don't recurse
+ infinitely - since at start of GC we try to install outstanding plans.
+ This is done by introducing a new GC deferral mechanism (the DeferGC
+ block-scoped thingy), which will ensure that GCs don't happen in the
+ scope but are allowed to happen after. This still leaves the strange
+ corner case that cti_optimize may install outstanding plans, then GC, and
+ that GC may jettison the code block that was installed. This, and the
+ fact that the plan that we took slow path to install could have been a
+ failed or invalid compile, mean that we have to take special precautions
+ in cti_optimize.
+
+ This patch also fixes a number of small concurrency bugs that I found
+ when things started running. There are probably more of those bugs still
+ left to fix. This patch just fixes the ones I know about.
+
+ Concurrent compilation is right now only enabled on X86_64 Mac. We need
+ platforms that are sufficiently CAStastic so that we can do the various
+ memory fence and CAS tricks that make this safe. We also need a platform
+ that uses JSVALUE64. And we need pthread_once. So, that pretty much means
+ just X86_64 for now. Enabling Linux X86_64 should be a breeze, but I'll
+ leave that up to the Qt and GTK+ ports to do at their discretion.
+
+ This is a solid speed-up on SunSpider (8-9%) and V8Spider (16%), our two
+ main compile-time benchmarks. Most peculiarly, this also appears to
+ reduce measurement noise, rather than increasing it as you would have
+ expected. I don't understand that result but I like it anyway. On the
+ other hand, this is a slight (1%) slow-down on V8v7. I will continue to
+ investigate this but I think that the results are already good enough
+ that we should land this as-is. So far, it appears that the slow-down is
+ due to this breaking the don't-compile-inlineables heuristics. See
+ investigation in https://bugs.webkit.org/show_bug.cgi?id=116556 and the
+ bug https://bugs.webkit.org/show_bug.cgi?id=116557.
+
+ * JavaScriptCore.xcodeproj/project.pbxproj:
+ * bytecode/CodeBlock.cpp:
+ (JSC):
+ (JSC::CodeBlock::finalizeUnconditionally):
+ (JSC::CodeBlock::resetStubInternal):
+ (JSC::CodeBlock::baselineVersion):
+ (JSC::CodeBlock::hasOptimizedReplacement):
+ (JSC::CodeBlock::optimizationThresholdScalingFactor):
+ (JSC::CodeBlock::checkIfOptimizationThresholdReached):
+ (JSC::CodeBlock::optimizeNextInvocation):
+ (JSC::CodeBlock::dontOptimizeAnytimeSoon):
+ (JSC::CodeBlock::optimizeAfterWarmUp):
+ (JSC::CodeBlock::optimizeAfterLongWarmUp):
+ (JSC::CodeBlock::optimizeSoon):
+ (JSC::CodeBlock::forceOptimizationSlowPathConcurrently):
+ (JSC::CodeBlock::setOptimizationThresholdBasedOnCompilationResult):
+ (JSC::CodeBlock::updateAllPredictionsAndCountLiveness):
+ (JSC::CodeBlock::updateAllArrayPredictions):
+ (JSC::CodeBlock::shouldOptimizeNow):
+ * bytecode/CodeBlock.h:
+ (CodeBlock):
+ (JSC::CodeBlock::jitCompile):
+ * bytecode/CodeBlockLock.h:
+ (JSC):
+ * bytecode/ExecutionCounter.cpp:
+ (JSC::ExecutionCounter::forceSlowPathConcurrently):
+ (JSC):
+ (JSC::ExecutionCounter::setThreshold):
+ * bytecode/ExecutionCounter.h:
+ (ExecutionCounter):
+ * debugger/Debugger.cpp:
+ (JSC::Debugger::recompileAllJSFunctions):
+ * dfg/DFGByteCodeParser.cpp:
+ (JSC::DFG::ByteCodeParser::injectLazyOperandSpeculation):
+ (JSC::DFG::ByteCodeParser::getArrayMode):
+ (JSC::DFG::ByteCodeParser::getArrayModeAndEmitChecks):
+ * dfg/DFGCommon.h:
+ (JSC::DFG::enableConcurrentJIT):
+ (DFG):
+ * dfg/DFGDriver.cpp:
+ (JSC::DFG::compile):
+ * dfg/DFGGraph.cpp:
+ (JSC::DFG::Graph::Graph):
+ * dfg/DFGGraph.h:
+ (Graph):
+ * dfg/DFGOSREntry.cpp:
+ (JSC::DFG::prepareOSREntry):
+ * dfg/DFGOperations.cpp:
+ * dfg/DFGPlan.cpp:
+ (JSC::DFG::Plan::Plan):
+ (JSC::DFG::Plan::compileInThread):
+ (JSC::DFG::Plan::key):
+ (DFG):
+ * dfg/DFGPlan.h:
+ (DFG):
+ (Plan):
+ * dfg/DFGWorklist.cpp: Added.
+ (DFG):
+ (JSC::DFG::Worklist::Worklist):
+ (JSC::DFG::Worklist::~Worklist):
+ (JSC::DFG::Worklist::finishCreation):
+ (JSC::DFG::Worklist::create):
+ (JSC::DFG::Worklist::enqueue):
+ (JSC::DFG::Worklist::compilationState):
+ (JSC::DFG::Worklist::waitUntilAllPlansForVMAreReady):
+ (JSC::DFG::Worklist::removeAllReadyPlansForVM):
+ (JSC::DFG::Worklist::completeAllReadyPlansForVM):
+ (JSC::DFG::Worklist::completeAllPlansForVM):
+ (JSC::DFG::Worklist::queueLength):
+ (JSC::DFG::Worklist::dump):
+ (JSC::DFG::Worklist::runThread):
+ (JSC::DFG::Worklist::threadFunction):
+ (JSC::DFG::initializeGlobalWorklistOnce):
+ (JSC::DFG::globalWorklist):
+ * dfg/DFGWorklist.h: Added.
+ (DFG):
+ (Worklist):
+ * heap/CopiedSpaceInlines.h:
+ (JSC::CopiedSpace::allocateBlock):
+ * heap/DeferGC.h: Added.
+ (JSC):
+ (DeferGC):
+ (JSC::DeferGC::DeferGC):
+ (JSC::DeferGC::~DeferGC):
+ * heap/Heap.cpp:
+ (JSC::Heap::Heap):
+ (JSC::Heap::reportExtraMemoryCostSlowCase):
+ (JSC::Heap::collectAllGarbage):
+ (JSC::Heap::collect):
+ (JSC::Heap::collectIfNecessaryOrDefer):
+ (JSC):
+ (JSC::Heap::incrementDeferralDepth):
+ (JSC::Heap::decrementDeferralDepthAndGCIfNeeded):
+ * heap/Heap.h:
+ (Heap):
+ (JSC::Heap::isCollecting):
+ (JSC):
+ * heap/MarkedAllocator.cpp:
+ (JSC::MarkedAllocator::allocateSlowCase):
+ * jit/JIT.cpp:
+ (JSC::JIT::privateCompile):
+ * jit/JIT.h:
+ * jit/JITStubs.cpp:
+ (JSC::DEFINE_STUB_FUNCTION):
+ * llint/LLIntSlowPaths.cpp:
+ (JSC::LLInt::jitCompileAndSetHeuristics):
+ (JSC::LLInt::entryOSR):
+ (JSC::LLInt::LLINT_SLOW_PATH_DECL):
+ * profiler/ProfilerBytecodes.h:
+ * runtime/ConcurrentJITLock.h: Added.
+ (JSC):
+ * runtime/ExecutionHarness.h:
+ (JSC::replaceWithDeferredOptimizedCode):
+ * runtime/JSSegmentedVariableObject.cpp:
+ (JSC::JSSegmentedVariableObject::findRegisterIndex):
+ (JSC::JSSegmentedVariableObject::addRegisters):
+ * runtime/JSSegmentedVariableObject.h:
+ (JSSegmentedVariableObject):
+ * runtime/Options.h:
+ (JSC):
+ * runtime/Structure.h:
+ (Structure):
+ * runtime/StructureInlines.h:
+ (JSC::Structure::propertyTable):
+ * runtime/SymbolTable.h:
+ (SymbolTable):
+ * runtime/VM.cpp:
+ (JSC::VM::VM):
+ (JSC::VM::~VM):
+ (JSC::VM::prepareToDiscardCode):
+ (JSC):
+ (JSC::VM::discardAllCode):
+ (JSC::VM::releaseExecutableMemory):
+ * runtime/VM.h:
+ (DFG):
+ (VM):
+
2013-05-17 Mark Hahnenberg <mhahnenberg@apple.com>
CheckArrays should be hoisted
diff --git a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
index f2c1150..53fbc02 100644
--- a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
+++ b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
@@ -75,6 +75,7 @@
0F0CD4C415F6B6BB0032F1C0 /* SparseArrayValueMap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F0CD4C315F6B6B50032F1C0 /* SparseArrayValueMap.cpp */; };
0F0D85B21723455400338210 /* CodeBlockLock.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0D85B11723455100338210 /* CodeBlockLock.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F0FC45A14BD15F500B81154 /* LLIntCallLinkInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0FC45814BD15F100B81154 /* LLIntCallLinkInfo.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0F136D4D174AD69E0075B354 /* DeferGC.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F136D4B174AD69B0075B354 /* DeferGC.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F13912916771C33009CCB07 /* ProfilerBytecodeSequence.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F13912416771C30009CCB07 /* ProfilerBytecodeSequence.cpp */; };
0F13912A16771C36009CCB07 /* ProfilerBytecodeSequence.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F13912516771C30009CCB07 /* ProfilerBytecodeSequence.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F13912B16771C3A009CCB07 /* ProfilerProfiledBytecodes.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F13912616771C30009CCB07 /* ProfilerProfiledBytecodes.cpp */; };
@@ -314,6 +315,9 @@
0FD82E86141F3FF100179C94 /* SpeculatedType.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD82E84141F3FDA00179C94 /* SpeculatedType.cpp */; };
0FDB2CC9173DA520007B3C1B /* FTLAbbreviatedTypes.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FDB2CC7173DA51E007B3C1B /* FTLAbbreviatedTypes.h */; settings = {ATTRIBUTES = (Private, ); }; };
0FDB2CCA173DA523007B3C1B /* FTLValueFromBlock.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FDB2CC8173DA51E007B3C1B /* FTLValueFromBlock.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0FDB2CE7174830A2007B3C1B /* DFGWorklist.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FDB2CE5174830A2007B3C1B /* DFGWorklist.cpp */; };
+ 0FDB2CE8174830A2007B3C1B /* DFGWorklist.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FDB2CE6174830A2007B3C1B /* DFGWorklist.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0FDB2CEA174896C7007B3C1B /* ConcurrentJITLock.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FDB2CE9174896C7007B3C1B /* ConcurrentJITLock.h */; settings = {ATTRIBUTES = (Private, ); }; };
0FDDBFB51666EED800C55FEF /* DFGVariableAccessDataDump.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FDDBFB21666EED500C55FEF /* DFGVariableAccessDataDump.cpp */; };
0FDDBFB61666EEDA00C55FEF /* DFGVariableAccessDataDump.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FDDBFB31666EED500C55FEF /* DFGVariableAccessDataDump.h */; settings = {ATTRIBUTES = (Private, ); }; };
0FE228ED1436AB2700196C48 /* Options.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FE228EB1436AB2300196C48 /* Options.h */; settings = {ATTRIBUTES = (Private, ); }; };
@@ -1081,6 +1085,7 @@
0F0CD4C315F6B6B50032F1C0 /* SparseArrayValueMap.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = SparseArrayValueMap.cpp; sourceTree = "<group>"; };
0F0D85B11723455100338210 /* CodeBlockLock.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = CodeBlockLock.h; sourceTree = "<group>"; };
0F0FC45814BD15F100B81154 /* LLIntCallLinkInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LLIntCallLinkInfo.h; sourceTree = "<group>"; };
+ 0F136D4B174AD69B0075B354 /* DeferGC.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = DeferGC.h; sourceTree = "<group>"; };
0F13912416771C30009CCB07 /* ProfilerBytecodeSequence.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = ProfilerBytecodeSequence.cpp; path = profiler/ProfilerBytecodeSequence.cpp; sourceTree = "<group>"; };
0F13912516771C30009CCB07 /* ProfilerBytecodeSequence.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = ProfilerBytecodeSequence.h; path = profiler/ProfilerBytecodeSequence.h; sourceTree = "<group>"; };
0F13912616771C30009CCB07 /* ProfilerProfiledBytecodes.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = ProfilerProfiledBytecodes.cpp; path = profiler/ProfilerProfiledBytecodes.cpp; sourceTree = "<group>"; };
@@ -1334,6 +1339,9 @@
0FD82E84141F3FDA00179C94 /* SpeculatedType.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = SpeculatedType.cpp; sourceTree = "<group>"; };
0FDB2CC7173DA51E007B3C1B /* FTLAbbreviatedTypes.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = FTLAbbreviatedTypes.h; path = ftl/FTLAbbreviatedTypes.h; sourceTree = "<group>"; };
0FDB2CC8173DA51E007B3C1B /* FTLValueFromBlock.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = FTLValueFromBlock.h; path = ftl/FTLValueFromBlock.h; sourceTree = "<group>"; };
+ 0FDB2CE5174830A2007B3C1B /* DFGWorklist.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGWorklist.cpp; path = dfg/DFGWorklist.cpp; sourceTree = "<group>"; };
+ 0FDB2CE6174830A2007B3C1B /* DFGWorklist.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGWorklist.h; path = dfg/DFGWorklist.h; sourceTree = "<group>"; };
+ 0FDB2CE9174896C7007B3C1B /* ConcurrentJITLock.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ConcurrentJITLock.h; sourceTree = "<group>"; };
0FDDBFB21666EED500C55FEF /* DFGVariableAccessDataDump.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGVariableAccessDataDump.cpp; path = dfg/DFGVariableAccessDataDump.cpp; sourceTree = "<group>"; };
0FDDBFB31666EED500C55FEF /* DFGVariableAccessDataDump.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGVariableAccessDataDump.h; path = dfg/DFGVariableAccessDataDump.h; sourceTree = "<group>"; };
0FE228EA1436AB2300196C48 /* Options.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = Options.cpp; sourceTree = "<group>"; };
@@ -2374,6 +2382,7 @@
C2239D1316262BDD005AC5FD /* CopyVisitor.h */,
C2239D1416262BDD005AC5FD /* CopyVisitorInlines.h */,
C218D13F1655CFD50062BB81 /* CopyWorkList.h */,
+ 0F136D4B174AD69B0075B354 /* DeferGC.h */,
0F2C556D14738F2E00121E4F /* DFGCodeBlocks.cpp */,
0F2C556E14738F2E00121E4F /* DFGCodeBlocks.h */,
BCBE2CAD14E985AA000593AD /* GCAssertions.h */,
@@ -2681,6 +2690,7 @@
0F15F15D14B7A73A005DE37D /* CommonSlowPaths.h */,
969A09220ED1E09C00F1F681 /* Completion.cpp */,
F5BB2BC5030F772101FCFE1D /* Completion.h */,
+ 0FDB2CE9174896C7007B3C1B /* ConcurrentJITLock.h */,
BCA62DFF0E2826310004F30D /* ConstructData.cpp */,
BC8F3CCF0DAF17BA00577A80 /* ConstructData.h */,
BCD203450E17135E002C7E82 /* DateConstructor.cpp */,
@@ -3079,6 +3089,8 @@
0F85A31E16AB76AE0077571E /* DFGVariadicFunction.h */,
0FFFC95314EF909500C72532 /* DFGVirtualRegisterAllocationPhase.cpp */,
0FFFC95414EF909500C72532 /* DFGVirtualRegisterAllocationPhase.h */,
+ 0FDB2CE5174830A2007B3C1B /* DFGWorklist.cpp */,
+ 0FDB2CE6174830A2007B3C1B /* DFGWorklist.h */,
);
name = dfg;
sourceTree = "<group>";
@@ -3533,6 +3545,7 @@
0F5EF91F16878F7D003E5C25 /* JITThunks.h in Headers */,
A76F54A313B28AAB00EF2BCE /* JITWriteBarrier.h in Headers */,
BC18C4160E16F5CD00B34460 /* JSActivation.h in Headers */,
+ 0FDB2CEA174896C7007B3C1B /* ConcurrentJITLock.h in Headers */,
840480131021A1D9008E7F01 /* JSAPIValueWrapper.h in Headers */,
C2CF39C216E15A8100DD69BE /* JSAPIWrapperObject.h in Headers */,
BC18C4170E16F5CD00B34460 /* JSArray.h in Headers */,
@@ -3655,6 +3668,7 @@
BC18C43F0E16F5CD00B34460 /* Nodes.h in Headers */,
BC18C4410E16F5CD00B34460 /* NumberConstructor.h in Headers */,
BC18C4420E16F5CD00B34460 /* NumberConstructor.lut.h in Headers */,
+ 0FDB2CE8174830A2007B3C1B /* DFGWorklist.h in Headers */,
BC18C4430E16F5CD00B34460 /* NumberObject.h in Headers */,
BC18C4440E16F5CD00B34460 /* NumberPrototype.h in Headers */,
142D3939103E4560007DCB52 /* NumericStrings.h in Headers */,
@@ -3804,6 +3818,7 @@
86704B8812DBA33700A9FE7B /* YarrParser.h in Headers */,
86704B8A12DBA33700A9FE7B /* YarrPattern.h in Headers */,
86704B4312DB8A8100A9FE7B /* YarrSyntaxChecker.h in Headers */,
+ 0F136D4D174AD69E0075B354 /* DeferGC.h in Headers */,
);
runOnlyForDeploymentPostprocessing = 0;
};
@@ -4433,6 +4448,7 @@
14469DE8107EC7E700650446 /* PropertySlot.cpp in Sources */,
ADE39FFF16DD144B0003CD4A /* PropertyTable.cpp in Sources */,
1474C33C16AA2D9B0062F01D /* PrototypeMap.cpp in Sources */,
+ 0FDB2CE7174830A2007B3C1B /* DFGWorklist.cpp in Sources */,
0F9332A314CA7DD70085F3C6 /* PutByIdStatus.cpp in Sources */,
0FF60AC316740F8800029779 /* ReduceWhitespace.cpp in Sources */,
14280841107EC0930013E7B2 /* RegExp.cpp in Sources */,
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.cpp b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
index 11f2746..198b7d5 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
@@ -36,6 +36,7 @@
#include "DFGCommon.h"
#include "DFGNode.h"
#include "DFGRepatch.h"
+#include "DFGWorklist.h"
#include "Debugger.h"
#include "FTLJITCode.h"
#include "Interpreter.h"
@@ -2187,12 +2188,6 @@
performTracingFixpointIteration(visitor);
}
-#if ENABLE(JIT_VERBOSE_OSR)
-static const bool verboseUnlinking = true;
-#else
-static const bool verboseUnlinking = false;
-#endif
-
void CodeBlock::finalizeUnconditionally()
{
#if ENABLE(LLINT)
@@ -2208,7 +2203,7 @@
case op_put_by_id_out_of_line:
if (!curInstruction[4].u.structure || Heap::isMarked(curInstruction[4].u.structure.get()))
break;
- if (verboseUnlinking)
+ if (Options::verboseOSR())
dataLogF("Clearing LLInt property access with structure %p.\n", curInstruction[4].u.structure.get());
curInstruction[4].u.structure.clear();
curInstruction[5].u.operand = 0;
@@ -2221,7 +2216,7 @@
&& Heap::isMarked(curInstruction[6].u.structure.get())
&& Heap::isMarked(curInstruction[7].u.structureChain.get()))
break;
- if (verboseUnlinking) {
+ if (Options::verboseOSR()) {
dataLogF("Clearing LLInt put transition with structures %p -> %p, chain %p.\n",
curInstruction[4].u.structure.get(),
curInstruction[6].u.structure.get(),
@@ -2241,7 +2236,7 @@
for (unsigned i = 0; i < m_llintCallLinkInfos.size(); ++i) {
if (m_llintCallLinkInfos[i].isLinked() && !Heap::isMarked(m_llintCallLinkInfos[i].callee.get())) {
- if (verboseUnlinking)
+ if (Options::verboseOSR())
dataLog("Clearing LLInt call from ", *this, "\n");
m_llintCallLinkInfos[i].unlink();
}
@@ -2254,7 +2249,7 @@
#if ENABLE(DFG_JIT)
// Check if we're not live. If we are, then jettison.
if (!(shouldImmediatelyAssumeLivenessDuringScan() || m_jitCode->dfgCommon()->livenessHasBeenProved)) {
- if (verboseUnlinking)
+ if (Options::verboseOSR())
dataLog(*this, " has dead weak references, jettisoning during GC.\n");
if (DFG::shouldShowDisassembly()) {
@@ -2284,7 +2279,7 @@
for (size_t size = m_putToBaseOperations.size(), i = 0; i < size; ++i) {
if (m_putToBaseOperations[i].m_structure && !Heap::isMarked(m_putToBaseOperations[i].m_structure.get())) {
- if (verboseUnlinking)
+ if (Options::verboseOSR())
dataLog("Clearing putToBase info in ", *this, "\n");
m_putToBaseOperations[i].m_structure.clear();
}
@@ -2298,7 +2293,7 @@
#endif
m_resolveOperations[i].last().m_structure.clear();
if (m_resolveOperations[i].last().m_structure && !Heap::isMarked(m_resolveOperations[i].last().m_structure.get())) {
- if (verboseUnlinking)
+ if (Options::verboseOSR())
dataLog("Clearing resolve info in ", *this, "\n");
m_resolveOperations[i].last().m_structure.clear();
}
@@ -2313,7 +2308,7 @@
if (ClosureCallStubRoutine* stub = callLinkInfo(i).stub.get()) {
if (!Heap::isMarked(stub->structure())
|| !Heap::isMarked(stub->executable())) {
- if (verboseUnlinking) {
+ if (Options::verboseOSR()) {
dataLog(
"Clearing closure call from ", *this, " to ",
stub->executable()->hashFor(callLinkInfo(i).specializationKind()),
@@ -2322,7 +2317,7 @@
callLinkInfo(i).unlink(*m_vm, repatchBuffer);
}
} else if (!Heap::isMarked(callLinkInfo(i).callee.get())) {
- if (verboseUnlinking) {
+ if (Options::verboseOSR()) {
dataLog(
"Clearing call from ", *this, " to ",
RawPointer(callLinkInfo(i).callee.get()), " (",
@@ -2363,7 +2358,7 @@
{
AccessType accessType = static_cast<AccessType>(stubInfo.accessType);
- if (verboseUnlinking)
+ if (Options::verboseOSR())
dataLog("Clearing structure cache (kind ", static_cast<int>(stubInfo.accessType), ") in ", *this, ".\n");
switch (getJITType()) {
@@ -2434,6 +2429,42 @@
#endif
}
+CodeBlock* CodeBlock::baselineVersion()
+{
+#if ENABLE(JIT)
+ // When we're initializing the original baseline code block, we won't be able
+ // to get its replacement. But we'll know that it's the original baseline code
+ // block because it won't have JIT code yet and it won't have an alternative.
+ if (getJITType() == JITCode::None && !alternative())
+ return this;
+
+ CodeBlock* result = replacement();
+ ASSERT(result);
+ while (result->alternative())
+ result = result->alternative();
+ ASSERT(result);
+ ASSERT(JITCode::isBaselineCode(result->getJITType()));
+ return result;
+#else
+ return this;
+#endif
+}
+
+#if ENABLE(JIT)
+bool CodeBlock::hasOptimizedReplacement()
+{
+ ASSERT(JITCode::isBaselineCode(getJITType()));
+ bool result = JITCode::isHigherTier(replacement()->getJITType(), getJITType());
+ if (result)
+ ASSERT(JITCode::isOptimizingJIT(replacement()->getJITType()));
+ else {
+ ASSERT(JITCode::isBaselineCode(replacement()->getJITType()));
+ ASSERT(replacement() == this);
+ }
+ return result;
+}
+#endif
+
HandlerInfo* CodeBlock::handlerForBytecodeOffset(unsigned bytecodeOffset)
{
RELEASE_ASSERT(bytecodeOffset < instructions().size());
@@ -3015,9 +3046,12 @@
ASSERT(instructionCount); // Make sure this is called only after we have an instruction stream; otherwise it'll just return the value of d, which makes no sense.
double result = d + a * sqrt(instructionCount + b) + c * instructionCount;
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog(*this, ": instruction count is ", instructionCount, ", scaling execution counter by ", result, " * ", codeTypeThresholdMultiplier(), "\n");
-#endif
+ if (Options::verboseOSR()) {
+ dataLog(
+ *this, ": instruction count is ", instructionCount,
+ ", scaling execution counter by ", result, " * ", codeTypeThresholdMultiplier(),
+ "\n");
+ }
return result * codeTypeThresholdMultiplier();
}
@@ -3058,34 +3092,90 @@
bool CodeBlock::checkIfOptimizationThresholdReached()
{
+#if ENABLE(DFG_JIT)
+ if (m_vm->worklist
+ && m_vm->worklist->compilationState(this) == DFG::Worklist::Compiled) {
+ optimizeNextInvocation();
+ return true;
+ }
+#endif
+
return m_jitExecuteCounter.checkIfThresholdCrossedAndSet(this);
}
void CodeBlock::optimizeNextInvocation()
{
+ if (Options::verboseOSR())
+ dataLog(*this, ": Optimizing next invocation.\n");
m_jitExecuteCounter.setNewThreshold(0, this);
}
void CodeBlock::dontOptimizeAnytimeSoon()
{
+ if (Options::verboseOSR())
+ dataLog(*this, ": Not optimizing anytime soon.\n");
m_jitExecuteCounter.deferIndefinitely();
}
void CodeBlock::optimizeAfterWarmUp()
{
+ if (Options::verboseOSR())
+ dataLog(*this, ": Optimizing after warm-up.\n");
m_jitExecuteCounter.setNewThreshold(counterValueForOptimizeAfterWarmUp(), this);
}
void CodeBlock::optimizeAfterLongWarmUp()
{
+ if (Options::verboseOSR())
+ dataLog(*this, ": Optimizing after long warm-up.\n");
m_jitExecuteCounter.setNewThreshold(counterValueForOptimizeAfterLongWarmUp(), this);
}
void CodeBlock::optimizeSoon()
{
+ if (Options::verboseOSR())
+ dataLog(*this, ": Optimizing soon.\n");
m_jitExecuteCounter.setNewThreshold(counterValueForOptimizeSoon(), this);
}
+void CodeBlock::forceOptimizationSlowPathConcurrently()
+{
+ if (Options::verboseOSR())
+ dataLog(*this, ": Forcing slow path concurrently.\n");
+ m_jitExecuteCounter.forceSlowPathConcurrently();
+}
+
+void CodeBlock::setOptimizationThresholdBasedOnCompilationResult(CompilationResult result)
+{
+ RELEASE_ASSERT(getJITType() == JITCode::BaselineJIT);
+ RELEASE_ASSERT((result == CompilationSuccessful) == (replacement() != this));
+ switch (result) {
+ case CompilationSuccessful:
+ RELEASE_ASSERT(JITCode::isOptimizingJIT(replacement()->getJITType()));
+ optimizeNextInvocation();
+ break;
+ case CompilationFailed:
+ dontOptimizeAnytimeSoon();
+ break;
+ case CompilationDeferred:
+ // We'd like to do dontOptimizeAnytimeSoon() but we cannot because
+ // forceOptimizationSlowPathConcurrently() is inherently racy. It won't
+ // necessarily guarantee anything. So, we make sure that even if that
+ // function ends up being a no-op, we still eventually retry and realize
+ // that we have optimized code ready.
+ optimizeAfterWarmUp();
+ break;
+ case CompilationInvalidated:
+ // Retry with exponential backoff.
+ countReoptimization();
+ optimizeAfterWarmUp();
+ break;
+ default:
+ RELEASE_ASSERT_NOT_REACHED();
+ break;
+ }
+}
+
#if ENABLE(JIT)
uint32_t CodeBlock::adjustedExitCountThreshold(uint32_t desiredThreshold)
{
@@ -3144,7 +3234,7 @@
void CodeBlock::updateAllPredictionsAndCountLiveness(
OperationInProgress operation, unsigned& numberOfLiveNonArgumentValueProfiles, unsigned& numberOfSamplesInProfiles)
{
- CodeBlockLock locker(m_lock);
+ CodeBlockLocker locker(m_lock);
numberOfLiveNonArgumentValueProfiles = 0;
numberOfSamplesInProfiles = 0; // If this divided by ValueProfile::numberOfBuckets equals numberOfValueProfiles() then value profiles are full.
@@ -3176,7 +3266,7 @@
void CodeBlock::updateAllArrayPredictions(OperationInProgress operation)
{
- CodeBlockLock locker(m_lock);
+ CodeBlockLocker locker(m_lock);
for (unsigned i = m_arrayProfiles.size(); i--;)
m_arrayProfiles[i].computeUpdatedPrediction(locker, this, operation);
@@ -3194,9 +3284,8 @@
bool CodeBlock::shouldOptimizeNow()
{
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog("Considering optimizing ", *this, "...\n");
-#endif
+ if (Options::verboseOSR())
+ dataLog("Considering optimizing ", *this, "...\n");
#if ENABLE(VERBOSE_VALUE_PROFILE)
dumpValueProfiles();
@@ -3211,9 +3300,14 @@
unsigned numberOfSamplesInProfiles;
updateAllPredictionsAndCountLiveness(NoOperation, numberOfLiveNonArgumentValueProfiles, numberOfSamplesInProfiles);
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLogF("Profile hotness: %lf (%u / %u), %lf (%u / %u)\n", (double)numberOfLiveNonArgumentValueProfiles / numberOfValueProfiles(), numberOfLiveNonArgumentValueProfiles, numberOfValueProfiles(), (double)numberOfSamplesInProfiles / ValueProfile::numberOfBuckets / numberOfValueProfiles(), numberOfSamplesInProfiles, ValueProfile::numberOfBuckets * numberOfValueProfiles());
-#endif
+ if (Options::verboseOSR()) {
+ dataLogF(
+ "Profile hotness: %lf (%u / %u), %lf (%u / %u)\n",
+ (double)numberOfLiveNonArgumentValueProfiles / numberOfValueProfiles(),
+ numberOfLiveNonArgumentValueProfiles, numberOfValueProfiles(),
+ (double)numberOfSamplesInProfiles / ValueProfile::numberOfBuckets / numberOfValueProfiles(),
+ numberOfSamplesInProfiles, ValueProfile::numberOfBuckets * numberOfValueProfiles());
+ }
if ((!numberOfValueProfiles() || (double)numberOfLiveNonArgumentValueProfiles / numberOfValueProfiles() >= Options::desiredProfileLivenessRate())
&& (!totalNumberOfValueProfiles() || (double)numberOfSamplesInProfiles / ValueProfile::numberOfBuckets / totalNumberOfValueProfiles() >= Options::desiredProfileFullnessRate())
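The counter manipulation in the hunks above is racy by design: the worklist thread zeroes the execute counter while JIT code keeps incrementing it and the main thread may concurrently reset it. A minimal sketch of why that race is benign; `MiniCounter`, its field names, and the magic numbers are illustrative stand-ins, not the real `ExecutionCounter`:

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical miniature of the execute-counter race described in the
// ChangeLog. The slow path fires once the counter reaches zero.
struct MiniCounter {
    std::atomic<int32_t> counter{-1000}; // negative: still counting up

    // Mutator (JIT code) side: racy increment; take slow path at >= 0.
    bool countAndCheck() { return ++counter >= 0; }

    // Worklist-thread side: force the next check to take the slow path,
    // mirroring forceSlowPathConcurrently().
    void forceSlowPathConcurrently() { counter.store(0); }

    // Main-thread side may concurrently reset to "after warm-up". If it
    // wins the race, the worst case is another ~1000 counts before the
    // slow path fires -- the trigger is delayed, never lost.
    void optimizeAfterWarmUp() { counter.store(-1000); }
};
```

The key property is that during an asynchronous compilation the main thread only ever resets to the warm-up threshold, never to "defer indefinitely", so the forced slow path is at worst postponed.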
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.h b/Source/JavaScriptCore/bytecode/CodeBlock.h
index 7474b37..b4ff38e 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.h
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.h
@@ -130,26 +130,8 @@
{
return specializationFromIsConstruct(m_isConstructor);
}
-
-#if ENABLE(JIT)
- CodeBlock* baselineVersion()
- {
- if (JITCode::isBaselineCode(getJITType()))
- return this;
- CodeBlock* result = replacement();
- ASSERT(result);
- while (result->alternative())
- result = result->alternative();
- ASSERT(result);
- ASSERT(JITCode::isBaselineCode(result->getJITType()));
- return result;
- }
-#else
- CodeBlock* baselineVersion()
- {
- return this;
- }
-#endif
+
+ CodeBlock* baselineVersion();
void visitAggregate(SlotVisitor&);
@@ -312,12 +294,7 @@
ASSERT(getJITType() == JITCode::BaselineJIT);
return CompilationNotNeeded;
}
-#if ENABLE(JIT)
return jitCompileImpl(exec);
-#else
- UNUSED_PARAM(exec);
- return CompilationFailed;
-#endif
}
virtual CodeBlock* replacement() = 0;
@@ -330,20 +307,7 @@
}
DFG::CapabilityLevel canCompileWithDFGState() { return m_canCompileWithDFGState; }
- bool hasOptimizedReplacement()
- {
- ASSERT(JITCode::isBaselineCode(getJITType()));
- bool result = JITCode::isHigherTier(replacement()->getJITType(), getJITType());
-#if !ASSERT_DISABLED
- if (result)
- ASSERT(JITCode::isOptimizingJIT(replacement()->getJITType()));
- else {
- ASSERT(JITCode::isBaselineCode(replacement()->getJITType()));
- ASSERT(replacement() == this);
- }
-#endif
- return result;
- }
+ bool hasOptimizedReplacement();
#else
JITCode::JITType getJITType() const { return JITCode::BaselineJIT; }
#endif
@@ -878,7 +842,11 @@
// to trigger optimization if one of those functions becomes hot
// in the baseline code.
void optimizeSoon();
-
+
+ void forceOptimizationSlowPathConcurrently();
+
+ void setOptimizationThresholdBasedOnCompilationResult(CompilationResult);
+
uint32_t osrExitCounter() const { return m_osrExitCounter; }
void countOSRExit() { m_osrExitCounter++; }
diff --git a/Source/JavaScriptCore/bytecode/CodeBlockLock.h b/Source/JavaScriptCore/bytecode/CodeBlockLock.h
index ce7fdc6..4fc2656 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlockLock.h
+++ b/Source/JavaScriptCore/bytecode/CodeBlockLock.h
@@ -26,18 +26,12 @@
#ifndef CodeBlockLock_h
#define CodeBlockLock_h
-#include <wtf/ByteSpinLock.h>
-#include <wtf/NoLock.h>
+#include "ConcurrentJITLock.h"
namespace JSC {
-#if ENABLE(CONCURRENT_JIT)
-typedef ByteSpinLock CodeBlockLock;
-typedef ByteSpinLocker CodeBlockLocker;
-#else
-typedef NoLock CodeBlockLock;
-typedef NoLockLocker CodeBlockLocker;
-#endif
+typedef ConcurrentJITLock CodeBlockLock;
+typedef ConcurrentJITLocker CodeBlockLocker;
} // namespace JSC
diff --git a/Source/JavaScriptCore/bytecode/ExecutionCounter.cpp b/Source/JavaScriptCore/bytecode/ExecutionCounter.cpp
index dca9c51..3a646a8 100644
--- a/Source/JavaScriptCore/bytecode/ExecutionCounter.cpp
+++ b/Source/JavaScriptCore/bytecode/ExecutionCounter.cpp
@@ -37,6 +37,11 @@
reset();
}
+void ExecutionCounter::forceSlowPathConcurrently()
+{
+ m_counter = 0;
+}
+
bool ExecutionCounter::checkIfThresholdCrossedAndSet(CodeBlock* codeBlock)
{
if (hasCrossedThreshold(codeBlock))
@@ -124,11 +129,11 @@
return false;
}
- ASSERT(!hasCrossedThreshold(codeBlock));
+ ASSERT(!m_activeThreshold || !hasCrossedThreshold(codeBlock));
// Compute the true total count.
double trueTotalCount = count();
-
+
// Correct the threshold for current memory usage.
double threshold = applyMemoryUsageHeuristics(m_activeThreshold, codeBlock);
diff --git a/Source/JavaScriptCore/bytecode/ExecutionCounter.h b/Source/JavaScriptCore/bytecode/ExecutionCounter.h
index c755c04..a734669 100644
--- a/Source/JavaScriptCore/bytecode/ExecutionCounter.h
+++ b/Source/JavaScriptCore/bytecode/ExecutionCounter.h
@@ -38,6 +38,7 @@
class ExecutionCounter {
public:
ExecutionCounter();
+ void forceSlowPathConcurrently(); // If you use this, checkIfThresholdCrossedAndSet() may still return false.
bool checkIfThresholdCrossedAndSet(CodeBlock*);
void setNewThreshold(int32_t threshold, CodeBlock*);
void deferIndefinitely();
@@ -74,7 +75,6 @@
void reset();
public:
-
// NB. These are intentionally public because it will be modified from machine code.
// This counter is incremented by the JIT or LLInt. It starts out negative and is
diff --git a/Source/JavaScriptCore/debugger/Debugger.cpp b/Source/JavaScriptCore/debugger/Debugger.cpp
index 2c5a126..2a75234 100644
--- a/Source/JavaScriptCore/debugger/Debugger.cpp
+++ b/Source/JavaScriptCore/debugger/Debugger.cpp
@@ -118,6 +118,8 @@
ASSERT(!vm->dynamicGlobalObject);
if (vm->dynamicGlobalObject)
return;
+
+ vm->prepareToDiscardCode();
Recompiler recompiler(this);
vm->heap.objectSpace().forEachLiveCell(recompiler);
diff --git a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
index b4f6d9c..f550b97 100644
--- a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
+++ b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
@@ -264,7 +264,7 @@
{
ASSERT(node->op() == GetLocal);
ASSERT(node->codeOrigin.bytecodeIndex == m_currentIndex);
- CodeBlockLock locker(m_inlineStackTop->m_profiledBlock->m_lock);
+ CodeBlockLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
LazyOperandValueProfileKey key(m_currentIndex, node->local());
SpeculatedType prediction = m_inlineStackTop->m_lazyOperands.prediction(locker, key);
#if DFG_ENABLE(DEBUG_VERBOSE)
@@ -839,7 +839,7 @@
ArrayMode getArrayMode(ArrayProfile* profile, Array::Action action)
{
- CodeBlockLock locker(m_inlineStackTop->m_profiledBlock->m_lock);
+ CodeBlockLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
profile->computeUpdatedPrediction(locker, m_inlineStackTop->m_profiledBlock);
return ArrayMode::fromObserved(locker, profile, action, false);
}
@@ -851,7 +851,7 @@
ArrayMode getArrayModeAndEmitChecks(ArrayProfile* profile, Array::Action action, Node* base)
{
- CodeBlockLock locker(m_inlineStackTop->m_profiledBlock->m_lock);
+ CodeBlockLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
profile->computeUpdatedPrediction(locker, m_inlineStackTop->m_profiledBlock);
diff --git a/Source/JavaScriptCore/dfg/DFGCommon.h b/Source/JavaScriptCore/dfg/DFGCommon.h
index 7aef749..5c67bfb 100644
--- a/Source/JavaScriptCore/dfg/DFGCommon.h
+++ b/Source/JavaScriptCore/dfg/DFGCommon.h
@@ -136,6 +136,15 @@
#endif
}
+inline bool enableConcurrentJIT()
+{
+#if ENABLE(CONCURRENT_JIT)
+ return Options::enableConcurrentJIT();
+#else
+ return false;
+#endif
+}
+
enum SpillRegistersMode { NeedToSpill, DontSpill };
enum NoResultTag { NoResult };
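The `enableConcurrentJIT()` helper added above gates concurrency on both a build-time switch and a runtime option. A sketch of the same two-level gate, with a stand-in macro and flag for `ENABLE(CONCURRENT_JIT)` and `Options::enableConcurrentJIT()`:

```cpp
// Stand-in for the ENABLE(CONCURRENT_JIT) build-time switch.
#define ENABLE_CONCURRENT_JIT_SKETCH 1

// Stand-in for the Options::enableConcurrentJIT() runtime option.
static bool concurrentJITOption = true;

// Both gates must be open: if the feature is compiled out, the runtime
// option is never even consulted.
inline bool enableConcurrentJITSketch()
{
#if ENABLE_CONCURRENT_JIT_SKETCH
    return concurrentJITOption;
#else
    return false;
#endif
}
```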
diff --git a/Source/JavaScriptCore/dfg/DFGDriver.cpp b/Source/JavaScriptCore/dfg/DFGDriver.cpp
index bee3623..da62081 100644
--- a/Source/JavaScriptCore/dfg/DFGDriver.cpp
+++ b/Source/JavaScriptCore/dfg/DFGDriver.cpp
@@ -35,11 +35,13 @@
#include "DFGJITCode.h"
#include "DFGPlan.h"
#include "DFGThunks.h"
+#include "DFGWorklist.h"
#include "FTLThunks.h"
#include "JITCode.h"
#include "Operations.h"
#include "Options.h"
#include "SamplingTool.h"
+#include <wtf/Atomics.h>
namespace JSC { namespace DFG {
@@ -69,19 +71,21 @@
return CompilationFailed;
if (logCompilationChanges())
- dataLog("DFG compiling ", *codeBlock, ", number of instructions = ", codeBlock->instructionCount(), "\n");
+ dataLog("DFG(Driver) compiling ", *codeBlock, ", number of instructions = ", codeBlock->instructionCount(), "\n");
+
+ VM& vm = exec->vm();
// Make sure that any stubs that the DFG is going to use are initialized. We want to
    // make sure that all JIT code generation does finalization on the main thread.
- exec->vm().getCTIStub(osrExitGenerationThunkGenerator);
- exec->vm().getCTIStub(throwExceptionFromCallSlowPathGenerator);
- exec->vm().getCTIStub(linkCallThunkGenerator);
- exec->vm().getCTIStub(linkConstructThunkGenerator);
- exec->vm().getCTIStub(linkClosureCallThunkGenerator);
- exec->vm().getCTIStub(virtualCallThunkGenerator);
- exec->vm().getCTIStub(virtualConstructThunkGenerator);
+ vm.getCTIStub(osrExitGenerationThunkGenerator);
+ vm.getCTIStub(throwExceptionFromCallSlowPathGenerator);
+ vm.getCTIStub(linkCallThunkGenerator);
+ vm.getCTIStub(linkConstructThunkGenerator);
+ vm.getCTIStub(linkClosureCallThunkGenerator);
+ vm.getCTIStub(virtualCallThunkGenerator);
+ vm.getCTIStub(virtualConstructThunkGenerator);
#if ENABLE(FTL_JIT)
- exec->vm().getCTIStub(FTL::osrExitGenerationThunkGenerator);
+ vm.getCTIStub(FTL::osrExitGenerationThunkGenerator);
#endif
// Derive our set of must-handle values. The compilation must be at least conservative
@@ -108,7 +112,16 @@
plan->mustHandleValues[i] = exec->uncheckedR(operand).jsValue();
}
- plan->compileInThread();
+ if (enableConcurrentJIT()) {
+ if (!vm.worklist)
+ vm.worklist = globalWorklist();
+ if (logCompilationChanges())
+ dataLog("Deferring DFG compilation of ", *codeBlock, " with queue length ", vm.worklist->queueLength(), ".\n");
+ vm.worklist->enqueue(plan);
+ return CompilationDeferred;
+ }
+
+ plan->compileInThread(*vm.dfgState);
return plan->finalize(jitCode, jitCodeWithArityCheck);
}
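The driver change above forks on the concurrency setting: with concurrent JITing on, the plan goes onto the worklist and the caller sees `CompilationDeferred`; otherwise the plan compiles synchronously. A sketch of that decision with stand-in types (`MiniPlan`, `PlanQueue`, and `tryCompileSketch` are illustrative, not the real DFGDriver API):

```cpp
#include <memory>
#include <vector>

enum SketchResult { SketchDeferred, SketchSuccessful };

// Stand-in for DFG::Plan: compilation just flips a flag.
struct MiniPlan {
    bool compiled = false;
    void compileInThread() { compiled = true; }
};

// Stand-in for DFG::Worklist: holds plans for a worker thread.
struct PlanQueue {
    std::vector<std::shared_ptr<MiniPlan>> queue;
    void enqueue(std::shared_ptr<MiniPlan> plan) { queue.push_back(std::move(plan)); }
};

SketchResult tryCompileSketch(bool concurrent, std::shared_ptr<MiniPlan> plan, PlanQueue& worklist)
{
    if (concurrent) {
        worklist.enqueue(plan); // the worklist thread will compile it later
        return SketchDeferred;  // caller must not expect optimized code yet
    }
    plan->compileInThread();    // synchronous fallback path
    return SketchSuccessful;
}
```

The important contract, visible in `setOptimizationThresholdBasedOnCompilationResult()`, is that a deferred result leaves the baseline code block running and relies on a later retry to pick up the finished code.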
diff --git a/Source/JavaScriptCore/dfg/DFGGraph.cpp b/Source/JavaScriptCore/dfg/DFGGraph.cpp
index 6c5fdef..6837a1d7 100644
--- a/Source/JavaScriptCore/dfg/DFGGraph.cpp
+++ b/Source/JavaScriptCore/dfg/DFGGraph.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -44,12 +44,12 @@
#undef STRINGIZE_DFG_OP_ENUM
};
-Graph::Graph(VM& vm, Plan& plan)
+Graph::Graph(VM& vm, Plan& plan, LongLivedState& longLivedState)
: m_vm(vm)
, m_plan(plan)
, m_codeBlock(m_plan.codeBlock.get())
, m_profiledBlock(m_codeBlock->alternative())
- , m_allocator(vm.m_dfgState->m_allocator)
+ , m_allocator(longLivedState.m_allocator)
, m_hasArguments(false)
, m_fixpointState(BeforeFixpoint)
, m_form(LoadStore)
diff --git a/Source/JavaScriptCore/dfg/DFGGraph.h b/Source/JavaScriptCore/dfg/DFGGraph.h
index 2ed03bb..083504c 100644
--- a/Source/JavaScriptCore/dfg/DFGGraph.h
+++ b/Source/JavaScriptCore/dfg/DFGGraph.h
@@ -90,7 +90,7 @@
// Nodes that are 'dead' remain in the vector with refCount 0.
class Graph {
public:
- Graph(VM&, Plan&);
+ Graph(VM&, Plan&, LongLivedState&);
~Graph();
void changeChild(Edge& edge, Node* newNode)
diff --git a/Source/JavaScriptCore/dfg/DFGOSREntry.cpp b/Source/JavaScriptCore/dfg/DFGOSREntry.cpp
index 6a519bb..e42c5d8 100644
--- a/Source/JavaScriptCore/dfg/DFGOSREntry.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSREntry.cpp
@@ -45,17 +45,18 @@
ASSERT(codeBlock->alternative()->getJITType() == JITCode::BaselineJIT);
ASSERT(!codeBlock->jitCodeMap());
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog("OSR in ", *codeBlock->alternative(), " -> ", *codeBlock, " from bc#", bytecodeIndex, "\n");
-#endif
+ if (Options::verboseOSR()) {
+ dataLog(
+ "OSR in ", *codeBlock->alternative(), " -> ", *codeBlock,
+ " from bc#", bytecodeIndex, "\n");
+ }
VM* vm = &exec->vm();
OSREntryData* entry = codeBlock->getJITCode()->dfg()->osrEntryDataForBytecodeIndex(bytecodeIndex);
if (!entry) {
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLogF(" OSR failed because the entrypoint was optimized out.\n");
-#endif
+ if (Options::verboseOSR())
+ dataLogF(" OSR failed because the entrypoint was optimized out.\n");
return 0;
}
@@ -87,11 +88,11 @@
for (size_t argument = 0; argument < entry->m_expectedValues.numberOfArguments(); ++argument) {
if (argument >= exec->argumentCountIncludingThis()) {
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLogF(" OSR failed because argument %zu was not passed, expected ", argument);
- entry->m_expectedValues.argument(argument).dump(WTF::dataFile());
- dataLogF(".\n");
-#endif
+ if (Options::verboseOSR()) {
+ dataLogF(" OSR failed because argument %zu was not passed, expected ", argument);
+ entry->m_expectedValues.argument(argument).dump(WTF::dataFile());
+ dataLogF(".\n");
+ }
return 0;
}
@@ -102,9 +103,11 @@
value = exec->argument(argument - 1);
if (!entry->m_expectedValues.argument(argument).validate(value)) {
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog(" OSR failed because argument ", argument, " is ", value, ", expected ", entry->m_expectedValues.argument(argument), ".\n");
-#endif
+ if (Options::verboseOSR()) {
+ dataLog(
+ " OSR failed because argument ", argument, " is ", value,
+ ", expected ", entry->m_expectedValues.argument(argument), ".\n");
+ }
return 0;
}
}
@@ -112,17 +115,22 @@
for (size_t local = 0; local < entry->m_expectedValues.numberOfLocals(); ++local) {
if (entry->m_localsForcedDouble.get(local)) {
if (!exec->registers()[local].jsValue().isNumber()) {
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog(" OSR failed because variable ", local, " is ", exec->registers()[local].jsValue(), ", expected number.\n");
-#endif
+ if (Options::verboseOSR()) {
+ dataLog(
+ " OSR failed because variable ", local, " is ",
+ exec->registers()[local].jsValue(), ", expected number.\n");
+ }
return 0;
}
continue;
}
if (!entry->m_expectedValues.local(local).validate(exec->registers()[local].jsValue())) {
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog(" OSR failed because variable ", local, " is ", exec->registers()[local].jsValue(), ", expected ", entry->m_expectedValues.local(local), ".\n");
-#endif
+ if (Options::verboseOSR()) {
+ dataLog(
+ " OSR failed because variable ", local, " is ",
+ exec->registers()[local].jsValue(), ", expected ",
+ entry->m_expectedValues.local(local), ".\n");
+ }
return 0;
}
}
@@ -135,15 +143,13 @@
// would have otherwise just kept running albeit less quickly.
if (!vm->interpreter->stack().grow(&exec->registers()[codeBlock->m_numCalleeRegisters])) {
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLogF(" OSR failed because stack growth failed.\n");
-#endif
+ if (Options::verboseOSR())
+ dataLogF(" OSR failed because stack growth failed.\n");
return 0;
}
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLogF(" OSR should succeed.\n");
-#endif
+ if (Options::verboseOSR())
+ dataLogF(" OSR should succeed.\n");
// 3) Perform data format conversions.
for (size_t local = 0; local < entry->m_expectedValues.numberOfLocals(); ++local) {
@@ -159,9 +165,8 @@
void* result = codeBlock->getJITCode()->executableAddressAtOffset(entry->m_machineCodeOffset);
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLogF(" OSR returning machine code address %p.\n", result);
-#endif
+ if (Options::verboseOSR())
+ dataLogF(" OSR returning machine code address %p.\n", result);
return result;
#else // DFG_ENABLE(OSR_ENTRY)
diff --git a/Source/JavaScriptCore/dfg/DFGOperations.cpp b/Source/JavaScriptCore/dfg/DFGOperations.cpp
index 0d0f503..27ce7b5 100644
--- a/Source/JavaScriptCore/dfg/DFGOperations.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOperations.cpp
@@ -1722,9 +1722,8 @@
extern "C" void DFG_OPERATION triggerReoptimizationNow(CodeBlock* codeBlock)
{
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog(*codeBlock, ": Entered reoptimize\n");
-#endif
+ if (Options::verboseOSR())
+ dataLog(*codeBlock, ": Entered reoptimize\n");
// We must be called with the baseline code block.
ASSERT(JITCode::isBaselineCode(codeBlock->getJITType()));
diff --git a/Source/JavaScriptCore/dfg/DFGPlan.cpp b/Source/JavaScriptCore/dfg/DFGPlan.cpp
index 842d23c..8f89ad1 100644
--- a/Source/JavaScriptCore/dfg/DFGPlan.cpp
+++ b/Source/JavaScriptCore/dfg/DFGPlan.cpp
@@ -78,6 +78,7 @@
, mustHandleValues(codeBlock->numParameters(), numVarsWithValues)
, compilation(codeBlock->vm()->m_perBytecodeProfiler ? adoptRef(new Profiler::Compilation(codeBlock->vm()->m_perBytecodeProfiler->ensureBytecodesFor(codeBlock.get()), Profiler::DFG)) : 0)
, identifiers(codeBlock.get())
+ , isCompiled(false)
{
}
@@ -85,12 +86,15 @@
{
}
-void Plan::compileInThread()
+void Plan::compileInThread(LongLivedState& longLivedState)
{
SamplingRegion samplingRegion("DFG Compilation (Plan)");
CompilationScope compilationScope;
- Graph dfg(vm, *this);
+ if (logCompilationChanges())
+ dataLog("DFG(Plan) compiling ", *codeBlock, ", number of instructions = ", codeBlock->instructionCount(), "\n");
+
+ Graph dfg(vm, *this, longLivedState);
if (!parse(dfg)) {
finalizer = adoptPtr(new FailedFinalizer(*this));
@@ -199,6 +203,11 @@
return CompilationSuccessful;
}
+CodeBlock* Plan::key()
+{
+ return codeBlock->alternative();
+}
+
} } // namespace JSC::DFG
#endif // ENABLE(DFG_JIT)
diff --git a/Source/JavaScriptCore/dfg/DFGPlan.h b/Source/JavaScriptCore/dfg/DFGPlan.h
index e4a39ca..7a71823 100644
--- a/Source/JavaScriptCore/dfg/DFGPlan.h
+++ b/Source/JavaScriptCore/dfg/DFGPlan.h
@@ -43,6 +43,8 @@
namespace DFG {
+class LongLivedState;
+
enum CompileMode { CompileFunction, CompileOther };
#if ENABLE(DFG_JIT)
@@ -53,10 +55,12 @@
unsigned osrEntryBytecodeIndex, unsigned numVarsWithValues);
~Plan();
- void compileInThread();
+ void compileInThread(LongLivedState&);
CompilationResult finalize(RefPtr<JSC::JITCode>& jitCode, MacroAssemblerCodePtr* jitCodeWithArityCheck);
+ CodeBlock* key();
+
const CompileMode compileMode;
VM& vm;
RefPtr<CodeBlock> codeBlock;
@@ -71,6 +75,8 @@
DesiredWatchpoints watchpoints;
DesiredIdentifiers identifiers;
DesiredStructureChains chains;
+
+ bool isCompiled;
private:
bool isStillValid();
diff --git a/Source/JavaScriptCore/dfg/DFGWorklist.cpp b/Source/JavaScriptCore/dfg/DFGWorklist.cpp
new file mode 100644
index 0000000..70d8e3d
--- /dev/null
+++ b/Source/JavaScriptCore/dfg/DFGWorklist.cpp
@@ -0,0 +1,258 @@
+/*
+ * Copyright (C) 2013 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "DFGWorklist.h"
+
+#if ENABLE(DFG_JIT)
+
+#include "CodeBlock.h"
+#include "DeferGC.h"
+#include "DFGLongLivedState.h"
+
+namespace JSC { namespace DFG {
+
+Worklist::Worklist()
+{
+}
+
+Worklist::~Worklist()
+{
+ m_queue.append(nullptr); // Use null plan to indicate that we want the thread to terminate.
+ waitForThreadCompletion(m_thread);
+}
+
+void Worklist::finishCreation()
+{
+ m_thread = createThread(threadFunction, this, "JSC Compilation Thread");
+}
+
+PassRefPtr<Worklist> Worklist::create()
+{
+ RefPtr<Worklist> result = adoptRef(new Worklist());
+ result->finishCreation();
+ return result;
+}
+
+void Worklist::enqueue(PassRefPtr<Plan> passedPlan)
+{
+ RefPtr<Plan> plan = passedPlan;
+ MutexLocker locker(m_lock);
+ if (Options::verboseCompilationQueue()) {
+ dump(locker, WTF::dataFile());
+ dataLog(": Enqueueing plan to optimize ", *plan->key(), "\n");
+ }
+ ASSERT(m_plans.find(plan->key()) == m_plans.end());
+ m_plans.add(plan->key(), plan);
+ m_queue.append(plan);
+ m_condition.broadcast();
+}
+
+Worklist::State Worklist::compilationState(CodeBlock* profiledBlock)
+{
+ MutexLocker locker(m_lock);
+ PlanMap::iterator iter = m_plans.find(profiledBlock);
+ if (iter == m_plans.end())
+ return NotKnown;
+ return iter->value->isCompiled ? Compiled : Compiling;
+}
+
+void Worklist::waitUntilAllPlansForVMAreReady(VM& vm)
+{
+ DeferGC deferGC(vm.heap);
+ // Wait for all of the plans for the given VM to complete. The idea here
+ // is that we want all of the caller VM's plans to be done. We don't care
+ // about any other VM's plans, and we won't attempt to wait on those.
+ // After we release this lock, we know that although other VMs may still
+ // be adding plans, our VM will not be.
+
+ MutexLocker locker(m_lock);
+
+ if (Options::verboseCompilationQueue()) {
+ dump(locker, WTF::dataFile());
+ dataLog(": Waiting for all in VM to complete.\n");
+ }
+
+ for (;;) {
+ bool allAreCompiled = true;
+ PlanMap::iterator end = m_plans.end();
+ for (PlanMap::iterator iter = m_plans.begin(); iter != end; ++iter) {
+ if (&iter->value->vm != &vm)
+ continue;
+ if (!iter->value->isCompiled) {
+ allAreCompiled = false;
+ break;
+ }
+ }
+
+ if (allAreCompiled)
+ break;
+
+ m_condition.wait(m_lock);
+ }
+}
+
+void Worklist::removeAllReadyPlansForVM(VM& vm, Vector<RefPtr<Plan>, 8>& myReadyPlans)
+{
+ DeferGC deferGC(vm.heap);
+ MutexLocker locker(m_lock);
+ for (size_t i = 0; i < m_readyPlans.size(); ++i) {
+ RefPtr<Plan> plan = m_readyPlans[i];
+ if (&plan->vm != &vm)
+ continue;
+ if (!plan->isCompiled)
+ continue;
+ myReadyPlans.append(plan);
+ m_readyPlans[i--] = m_readyPlans.takeLast();
+ m_plans.remove(plan->key());
+ }
+}
+
+void Worklist::removeAllReadyPlansForVM(VM& vm)
+{
+ Vector<RefPtr<Plan>, 8> myReadyPlans;
+ removeAllReadyPlansForVM(vm, myReadyPlans);
+}
+
+Worklist::State Worklist::completeAllReadyPlansForVM(VM& vm, CodeBlock* requestedProfiledBlock)
+{
+ DeferGC deferGC(vm.heap);
+ Vector<RefPtr<Plan>, 8> myReadyPlans;
+
+ removeAllReadyPlansForVM(vm, myReadyPlans);
+
+ State resultingState = NotKnown;
+
+ while (!myReadyPlans.isEmpty()) {
+ RefPtr<Plan> plan = myReadyPlans.takeLast();
+ CodeBlock* profiledBlock = plan->key();
+
+ if (Options::verboseCompilationQueue())
+ dataLog(*this, ": Completing ", *profiledBlock, "\n");
+
+ RELEASE_ASSERT(plan->isCompiled);
+
+ CompilationResult compilationResult =
+ profiledBlock->replaceWithDeferredOptimizedCode(plan);
+ RELEASE_ASSERT(compilationResult != CompilationDeferred);
+ profiledBlock->setOptimizationThresholdBasedOnCompilationResult(compilationResult);
+
+ if (profiledBlock == requestedProfiledBlock)
+ resultingState = Compiled;
+ }
+
+ if (requestedProfiledBlock && resultingState == NotKnown) {
+ MutexLocker locker(m_lock);
+ if (m_plans.contains(requestedProfiledBlock))
+ resultingState = Compiling;
+ }
+
+ return resultingState;
+}
+
+void Worklist::completeAllPlansForVM(VM& vm)
+{
+ DeferGC deferGC(vm.heap);
+ waitUntilAllPlansForVMAreReady(vm);
+ completeAllReadyPlansForVM(vm);
+}
+
+size_t Worklist::queueLength()
+{
+ MutexLocker locker(m_lock);
+ return m_queue.size();
+}
+
+void Worklist::dump(PrintStream& out) const
+{
+ MutexLocker locker(m_lock);
+ dump(locker, out);
+}
+
+void Worklist::dump(const MutexLocker&, PrintStream& out) const
+{
+ out.print(
+ "Worklist(", RawPointer(this), ")[Queue Length = ", m_queue.size(),
+ ", Map Size = ", m_plans.size(), "]");
+}
+
+void Worklist::runThread()
+{
+ LongLivedState longLivedState;
+
+ for (;;) {
+ RefPtr<Plan> plan;
+ {
+ MutexLocker locker(m_lock);
+ while (m_queue.isEmpty())
+ m_condition.wait(m_lock);
+ plan = m_queue.takeFirst();
+ }
+
+ if (!plan)
+ return;
+
+ plan->compileInThread(longLivedState);
+
+ {
+ MutexLocker locker(m_lock);
+ plan->key()->forceOptimizationSlowPathConcurrently();
+ plan->isCompiled = true;
+
+ if (Options::verboseCompilationQueue()) {
+ dump(locker, WTF::dataFile());
+ dataLog(": Compiled ", *plan->key(), " asynchronously\n");
+ }
+
+ m_readyPlans.append(plan);
+
+ m_condition.broadcast();
+ }
+ }
+}
+
+void Worklist::threadFunction(void* argument)
+{
+ static_cast<Worklist*>(argument)->runThread();
+}
+
+static pthread_once_t initializeGlobalWorklistKeyOnce = PTHREAD_ONCE_INIT;
+static Worklist* theGlobalWorklist;
+
+static void initializeGlobalWorklistOnce()
+{
+ theGlobalWorklist = Worklist::create().leakRef();
+}
+
+Worklist* globalWorklist()
+{
+ pthread_once(&initializeGlobalWorklistKeyOnce, initializeGlobalWorklistOnce);
+ return theGlobalWorklist;
+}
+
+} } // namespace JSC::DFG
+
+#endif // ENABLE(DFG_JIT)
+
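The thread loop in `Worklist::runThread()` above is a classic mutex/condition-variable consumer with a null plan as the shutdown sentinel. A self-contained sketch of the same shape using standard C++ threading primitives in place of WTF's (`MiniWorklist` and its members are illustrative, not the real class):

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <deque>
#include <functional>
#include <memory>
#include <mutex>
#include <thread>

// Miniature of DFG::Worklist: one worker thread, a queue of "plans",
// a condition broadcast on enqueue and on completion, and a null plan
// as the terminate signal (the trick ~Worklist() uses).
class MiniWorklist {
public:
    MiniWorklist() : m_thread([this] { runThread(); }) {}
    ~MiniWorklist()
    {
        enqueue(nullptr); // null plan tells the thread to terminate
        m_thread.join();
    }
    void enqueue(std::shared_ptr<std::function<void()>> plan)
    {
        std::lock_guard<std::mutex> locker(m_lock);
        m_queue.push_back(std::move(plan));
        m_condition.notify_all();
    }
    // Analogous to waitUntilAllPlansForVMAreReady(): block until nothing
    // is queued or in flight.
    void waitUntilEmpty()
    {
        std::unique_lock<std::mutex> locker(m_lock);
        m_condition.wait(locker, [this] { return m_queue.empty() && !m_busy; });
    }
private:
    void runThread()
    {
        for (;;) {
            std::shared_ptr<std::function<void()>> plan;
            {
                std::unique_lock<std::mutex> locker(m_lock);
                m_condition.wait(locker, [this] { return !m_queue.empty(); });
                plan = m_queue.front();
                m_queue.pop_front();
                if (!plan)
                    return;
                m_busy = true;
            }
            (*plan)(); // "compileInThread" runs outside the lock
            {
                std::lock_guard<std::mutex> locker(m_lock);
                m_busy = false;
                m_condition.notify_all(); // wake anyone waiting for completion
            }
        }
    }
    std::mutex m_lock;
    std::condition_variable m_condition;
    std::deque<std::shared_ptr<std::function<void()>>> m_queue;
    bool m_busy = false;
    std::thread m_thread; // declared last: started after the other members
};
```

As in the real code, the expensive work happens with the lock dropped; the lock only protects the queue and the per-plan bookkeeping, so the GC-driven waiters and the enqueuing mutator never block behind a compilation.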
diff --git a/Source/JavaScriptCore/dfg/DFGWorklist.h b/Source/JavaScriptCore/dfg/DFGWorklist.h
new file mode 100644
index 0000000..8aa6c8d
--- /dev/null
+++ b/Source/JavaScriptCore/dfg/DFGWorklist.h
@@ -0,0 +1,110 @@
+/*
+ * Copyright (C) 2013 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef DFGWorklist_h
+#define DFGWorklist_h
+
+#include <wtf/Platform.h>
+
+#if ENABLE(DFG_JIT)
+
+#include "DFGPlan.h"
+#include <wtf/Deque.h>
+#include <wtf/HashMap.h>
+#include <wtf/Noncopyable.h>
+#include <wtf/PassOwnPtr.h>
+#include <wtf/ThreadingPrimitives.h>
+
+namespace JSC { namespace DFG {
+
+class Worklist : public RefCounted<Worklist> {
+public:
+ enum State { NotKnown, Compiling, Compiled };
+
+ ~Worklist();
+
+ static PassRefPtr<Worklist> create();
+
+ void enqueue(PassRefPtr<Plan>);
+
+ // This is equivalent to:
+ // worklist->waitUntilAllPlansForVMAreReady(vm);
+ // worklist->completeAllReadyPlansForVM(vm);
+ void completeAllPlansForVM(VM&);
+
+ void waitUntilAllPlansForVMAreReady(VM&);
+ State completeAllReadyPlansForVM(VM&, CodeBlock* profiledBlock = 0);
+ void removeAllReadyPlansForVM(VM&);
+
+ State compilationState(CodeBlock* profiledBlock);
+
+ size_t queueLength();
+ void dump(PrintStream&) const;
+
+private:
+ Worklist();
+ void finishCreation();
+
+ void runThread();
+ static void threadFunction(void* argument);
+
+ void removeAllReadyPlansForVM(VM&, Vector<RefPtr<Plan>, 8>&);
+
+ void dump(const MutexLocker&, PrintStream&) const;
+
+ // Used to inform the thread about what work there is left to do.
+ Deque<RefPtr<Plan>, 16> m_queue;
+
+ // Used to answer questions about the current state of a code block. This
+ // is particularly great for the cti_optimize OSR slow path, which wants
+ // to know: did I get here because a better version of me just got
+ // compiled?
+ typedef HashMap<RefPtr<CodeBlock>, RefPtr<Plan> > PlanMap;
+ PlanMap m_plans;
+
+ // Used to quickly find which plans have been compiled and are ready to
+ // be completed.
+ Vector<RefPtr<Plan>, 16> m_readyPlans;
+
+ mutable Mutex m_lock;
+ // We broadcast on this condition whenever:
+ // - Something is enqueued.
+ // - Something is completed.
+ ThreadCondition m_condition;
+ ThreadIdentifier m_thread;
+};
+
+// For now we use a single global worklist. It's not clear that this is
+// the right long-term design, but it's what we do for now. This function
+// will lazily create the worklist when it's needed. Currently it is only
+// called from DFGDriver.cpp, when it actually wants to enqueue something.
+Worklist* globalWorklist();
+
+} } // namespace JSC::DFG
+
+#endif // ENABLE(DFG_JIT)
+
+#endif // DFGWorklist_h
+
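The Worklist above pairs a Deque of Plans with a map keyed by the profiled CodeBlock, all guarded by one Mutex and one ThreadCondition that is broadcast on both enqueue and completion. A minimal sketch of that shape, using std::mutex/std::condition_variable in place of WTF's primitives and an int key standing in for CodeBlock* (MiniWorklist and Plan here are illustrative names, not WebKit API):

```cpp
#include <condition_variable>
#include <deque>
#include <map>
#include <mutex>

// Illustrative stand-in for DFG::Plan, keyed by the profiled code block.
struct Plan { int key; bool compiled = false; };

class MiniWorklist {
public:
    enum State { NotKnown, Compiling, Compiled };

    void enqueue(Plan plan) {
        std::lock_guard<std::mutex> lock(m_lock);
        m_plans[plan.key] = plan;    // used to answer state queries
        m_queue.push_back(plan.key); // used to inform the thread of work
        m_condition.notify_all();    // "something is enqueued"
    }

    State stateFor(int key) {
        std::lock_guard<std::mutex> lock(m_lock);
        auto it = m_plans.find(key);
        if (it == m_plans.end())
            return NotKnown;
        return it->second.compiled ? Compiled : Compiling;
    }

    void markCompiled(int key) {
        std::lock_guard<std::mutex> lock(m_lock);
        m_plans[key].compiled = true;
        m_condition.notify_all();    // "something is completed"
    }

private:
    std::mutex m_lock;
    std::condition_variable m_condition;
    std::deque<int> m_queue;
    std::map<int, Plan> m_plans;
};
```

The same single lock/condition pair serving both the queue and the state map is what lets cti_optimize ask "did a better version of me just get compiled?" with one acquisition.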
diff --git a/Source/JavaScriptCore/heap/CopiedSpaceInlines.h b/Source/JavaScriptCore/heap/CopiedSpaceInlines.h
index 47f2414..4409261 100644
--- a/Source/JavaScriptCore/heap/CopiedSpaceInlines.h
+++ b/Source/JavaScriptCore/heap/CopiedSpaceInlines.h
@@ -135,8 +135,7 @@
inline void CopiedSpace::allocateBlock()
{
- if (m_heap->shouldCollect())
- m_heap->collect(Heap::DoNotSweep);
+ m_heap->collectIfNecessaryOrDefer();
m_allocator.resetCurrentBlock();
diff --git a/Source/JavaScriptCore/heap/DeferGC.h b/Source/JavaScriptCore/heap/DeferGC.h
new file mode 100644
index 0000000..dceed5d
--- /dev/null
+++ b/Source/JavaScriptCore/heap/DeferGC.h
@@ -0,0 +1,55 @@
+/*
+ * Copyright (C) 2013 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef DeferGC_h
+#define DeferGC_h
+
+#include "Heap.h"
+#include <wtf/Noncopyable.h>
+
+namespace JSC {
+
+class DeferGC {
+ WTF_MAKE_NONCOPYABLE(DeferGC);
+public:
+ DeferGC(Heap& heap)
+ : m_heap(heap)
+ {
+ m_heap.incrementDeferralDepth();
+ }
+
+ ~DeferGC()
+ {
+ m_heap.decrementDeferralDepthAndGCIfNeeded();
+ }
+
+private:
+ Heap& m_heap;
+};
+
+} // namespace JSC
+
+#endif // DeferGC_h
+
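DeferGC is a plain RAII counter: each live instance bumps the heap's deferral depth, collectIfNecessaryOrDefer() refuses to collect while the depth is nonzero, and the destructor retries any collection that was requested in the meantime. The scheme can be sketched independently of Heap (MiniHeap here is an illustrative stand-in, not WebKit API):

```cpp
struct MiniHeap {
    unsigned deferralDepth = 0;
    unsigned collections = 0;
    bool collectionRequested = false;

    // Mirrors Heap::collectIfNecessaryOrDefer(): defer while any DeferGC is live.
    bool collectIfNecessaryOrDefer() {
        if (deferralDepth)
            return false;
        if (!collectionRequested)
            return false;
        collectionRequested = false;
        collections++;
        return true;
    }
};

class DeferGC {
public:
    explicit DeferGC(MiniHeap& heap) : m_heap(heap) { m_heap.deferralDepth++; }
    ~DeferGC()
    {
        m_heap.deferralDepth--;
        // Run any collection that was requested while we were deferring.
        m_heap.collectIfNecessaryOrDefer();
    }
    DeferGC(const DeferGC&) = delete; // noncopyable, like WTF_MAKE_NONCOPYABLE

private:
    MiniHeap& m_heap;
};
```

This is why the allocation slow paths above were switched from `shouldCollect()`/`collect()` to `collectIfNecessaryOrDefer()`: any caller can now hold a DeferGC to make an allocation-heavy region GC-safe.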
diff --git a/Source/JavaScriptCore/heap/Heap.cpp b/Source/JavaScriptCore/heap/Heap.cpp
index 7494717..96ffbd6 100644
--- a/Source/JavaScriptCore/heap/Heap.cpp
+++ b/Source/JavaScriptCore/heap/Heap.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2011 Apple Inc. All rights reserved.
+ * Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2011, 2013 Apple Inc. All rights reserved.
* Copyright (C) 2007 Eric Seidel <eric@webkit.org>
*
* This library is free software; you can redistribute it and/or
@@ -26,6 +26,7 @@
#include "CopiedSpace.h"
#include "CopiedSpaceInlines.h"
#include "CopyVisitorInlines.h"
+#include "DFGWorklist.h"
#include "GCActivityCallback.h"
#include "HeapRootVisitor.h"
#include "HeapStatistics.h"
@@ -264,6 +265,7 @@
, m_lastCodeDiscardTime(WTF::currentTime())
, m_activityCallback(DefaultGCActivityCallback::create(this))
, m_sweeper(IncrementalSweeper::create(this))
+ , m_deferralDepth(0)
{
m_storageSpace.init();
}
@@ -306,8 +308,7 @@
// collecting more frequently as long as it stays alive.
didAllocate(cost);
- if (shouldCollect())
- collect(DoNotSweep);
+ collectIfNecessaryOrDefer();
}
void Heap::reportAbandonedObjectGraph()
@@ -693,7 +694,7 @@
{
if (!m_isSafeToCollect)
return;
-
+
collect(DoSweep);
}
@@ -703,12 +704,18 @@
{
SamplingRegion samplingRegion("Garbage Collection");
+ RELEASE_ASSERT(!m_deferralDepth);
GCPHASE(Collect);
ASSERT(vm()->apiLock().currentThreadIsHoldingLock());
RELEASE_ASSERT(vm()->identifierTable == wtfThreadData().currentIdentifierTable());
ASSERT(m_isSafeToCollect);
JAVASCRIPTCORE_GC_BEGIN();
RELEASE_ASSERT(m_operationInProgress == NoOperation);
+
+ m_deferralDepth++; // Make sure that we don't GC in this call.
+ m_vm->prepareToDiscardCode();
+    m_deferralDepth--; // Decrement the deferral depth manually, since we must not trigger a GC here: we are already GCing!
+
m_operationInProgress = Collection;
m_activityCallback->willCollect();
@@ -809,6 +816,18 @@
HeapStatistics::showObjectStatistics(this);
}
+bool Heap::collectIfNecessaryOrDefer()
+{
+ if (m_deferralDepth)
+ return false;
+
+ if (!shouldCollect())
+ return false;
+
+ collect(DoNotSweep);
+ return true;
+}
+
void Heap::markDeadObjects()
{
m_objectSpace.forEachDeadCell<MarkObject>();
@@ -893,4 +912,20 @@
m_objectSpace.forEachDeadCell<Zombify>();
}
+void Heap::incrementDeferralDepth()
+{
+ RELEASE_ASSERT(m_deferralDepth < 100); // Sanity check to make sure this doesn't get ridiculous.
+
+ m_deferralDepth++;
+}
+
+void Heap::decrementDeferralDepthAndGCIfNeeded()
+{
+ RELEASE_ASSERT(m_deferralDepth >= 1);
+
+ m_deferralDepth--;
+
+ collectIfNecessaryOrDefer();
+}
+
} // namespace JSC
diff --git a/Source/JavaScriptCore/heap/Heap.h b/Source/JavaScriptCore/heap/Heap.h
index b647738..6df105e 100644
--- a/Source/JavaScriptCore/heap/Heap.h
+++ b/Source/JavaScriptCore/heap/Heap.h
@@ -1,7 +1,7 @@
/*
* Copyright (C) 1999-2000 Harri Porten (porten@kde.org)
* Copyright (C) 2001 Peter Kelly (pmk@post.com)
- * Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009 Apple Inc. All rights reserved.
+ * Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2013 Apple Inc. All rights reserved.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -110,6 +110,8 @@
JS_EXPORT_PRIVATE IncrementalSweeper* sweeper();
+ // true if collection is in progress
+ inline bool isCollecting();
// true if an allocation or collection is in progress
inline bool isBusy();
@@ -131,6 +133,7 @@
enum SweepToggle { DoNotSweep, DoSweep };
bool shouldCollect();
void collect(SweepToggle);
+ bool collectIfNecessaryOrDefer(); // Returns true if it did collect.
void reportExtraMemoryCost(size_t cost);
JS_EXPORT_PRIVATE void reportAbandonedObjectGraph();
@@ -179,6 +182,7 @@
private:
friend class CodeBlock;
friend class CopiedBlock;
+ friend class DeferGC;
friend class GCAwareJITStubRoutine;
friend class HandleSet;
friend class JITStubRoutine;
@@ -222,6 +226,9 @@
JSStack& stack();
BlockAllocator& blockAllocator();
+
+ void incrementDeferralDepth();
+ void decrementDeferralDepthAndGCIfNeeded();
const HeapType m_heapType;
const size_t m_ramSize;
@@ -268,6 +275,8 @@
OwnPtr<GCActivityCallback> m_activityCallback;
OwnPtr<IncrementalSweeper> m_sweeper;
Vector<MarkedBlock*> m_blockSnapshot;
+
+ unsigned m_deferralDepth;
};
struct MarkedBlockSnapshotFunctor : public MarkedBlock::VoidFunctor {
@@ -295,6 +304,11 @@
return m_operationInProgress != NoOperation;
}
+ bool Heap::isCollecting()
+ {
+ return m_operationInProgress == Collection;
+ }
+
inline Heap* Heap::heap(const JSCell* cell)
{
return MarkedBlock::blockFor(cell)->heap();
diff --git a/Source/JavaScriptCore/heap/MarkedAllocator.cpp b/Source/JavaScriptCore/heap/MarkedAllocator.cpp
index cbdbfd5..e2e2a2c 100644
--- a/Source/JavaScriptCore/heap/MarkedAllocator.cpp
+++ b/Source/JavaScriptCore/heap/MarkedAllocator.cpp
@@ -84,9 +84,7 @@
if (LIKELY(result != 0))
return result;
- if (m_heap->shouldCollect()) {
- m_heap->collect(Heap::DoNotSweep);
-
+ if (m_heap->collectIfNecessaryOrDefer()) {
result = tryAllocate(bytes);
if (result)
return result;
diff --git a/Source/JavaScriptCore/jit/JIT.cpp b/Source/JavaScriptCore/jit/JIT.cpp
index 84c9eec..1e1b522 100644
--- a/Source/JavaScriptCore/jit/JIT.cpp
+++ b/Source/JavaScriptCore/jit/JIT.cpp
@@ -564,10 +564,6 @@
PassRefPtr<JITCode> JIT::privateCompile(CodePtr* functionEntryArityCheck, JITCompilationEffort effort)
{
-#if ENABLE(JIT_VERBOSE_OSR)
- printf("Compiling JIT code!\n");
-#endif
-
#if ENABLE(VALUE_PROFILER)
DFG::CapabilityLevel level = m_codeBlock->canCompileWithDFG();
switch (level) {
diff --git a/Source/JavaScriptCore/jit/JIT.h b/Source/JavaScriptCore/jit/JIT.h
index 9e363fc..228cbb3 100644
--- a/Source/JavaScriptCore/jit/JIT.h
+++ b/Source/JavaScriptCore/jit/JIT.h
@@ -28,11 +28,6 @@
#if ENABLE(JIT)
-// Verbose logging of code generation
-#define ENABLE_JIT_VERBOSE 0
-// Verbose logging for OSR-related code.
-#define ENABLE_JIT_VERBOSE_OSR 0
-
// We've run into some problems where changing the size of the class JIT leads to
// performance fluctuations. Try forcing alignment in an attempt to stabilize this.
#if COMPILER(GCC)
diff --git a/Source/JavaScriptCore/jit/JITStubs.cpp b/Source/JavaScriptCore/jit/JITStubs.cpp
index ab5e3fe..75d1ddb 100644
--- a/Source/JavaScriptCore/jit/JITStubs.cpp
+++ b/Source/JavaScriptCore/jit/JITStubs.cpp
@@ -40,6 +40,7 @@
#include "CodeProfiling.h"
#include "CommonSlowPaths.h"
#include "DFGOSREntry.h"
+#include "DFGWorklist.h"
#include "Debugger.h"
#include "ExceptionHelpers.h"
#include "GetterSetter.h"
@@ -961,31 +962,83 @@
CodeBlock* codeBlock = callFrame->codeBlock();
unsigned bytecodeIndex = stackFrame.args[0].int32();
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog(
- *codeBlock, ": Entered optimize with bytecodeIndex = ", bytecodeIndex,
- ", executeCounter = ", codeBlock->jitExecuteCounter(),
- ", optimizationDelayCounter = ", codeBlock->reoptimizationRetryCounter(),
- ", exitCounter = ");
- if (codeBlock->hasOptimizedReplacement())
- dataLog(codeBlock->replacement()->osrExitCounter());
- else
- dataLog("N/A");
- dataLog("\n");
-#endif
+ if (Options::verboseOSR()) {
+ dataLog(
+ *codeBlock, ": Entered optimize with bytecodeIndex = ", bytecodeIndex,
+ ", executeCounter = ", codeBlock->jitExecuteCounter(),
+ ", optimizationDelayCounter = ", codeBlock->reoptimizationRetryCounter(),
+ ", exitCounter = ");
+ if (codeBlock->hasOptimizedReplacement())
+ dataLog(codeBlock->replacement()->osrExitCounter());
+ else
+ dataLog("N/A");
+ dataLog("\n");
+ }
if (!codeBlock->checkIfOptimizationThresholdReached()) {
codeBlock->updateAllPredictions();
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog("Choosing not to optimize ", *codeBlock, " yet.\n");
-#endif
+ if (Options::verboseOSR())
+ dataLog("Choosing not to optimize ", *codeBlock, " yet.\n");
return;
}
-
- if (codeBlock->hasOptimizedReplacement()) {
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog("Considering OSR ", *codeBlock, " -> ", *codeBlock->replacement(), ".\n");
-#endif
+
+ // We cannot be in the process of asynchronous compilation and also have an optimized
+ // replacement.
+ ASSERT(
+ !stackFrame.vm->worklist
+ || !(stackFrame.vm->worklist->compilationState(codeBlock) != DFG::Worklist::NotKnown
+ && codeBlock->hasOptimizedReplacement()));
+
+ DFG::Worklist::State worklistState;
+ if (stackFrame.vm->worklist) {
+ // The call to DFG::Worklist::completeAllReadyPlansForVM() will complete all ready
+ // (i.e. compiled) code blocks. But if it completes ours, we also need to know
+ // what the result was so that we don't plow ahead and attempt OSR or immediate
+ // reoptimization. This will have already also set the appropriate JIT execution
+ // count threshold depending on what happened, so if the compilation was anything
+ // but successful we just want to return early. See the case for worklistState ==
+ // DFG::Worklist::Compiled, below.
+
+ // Note that we could have alternatively just called Worklist::compilationState()
+ // here, and if it returned Compiled, we could have then called
+ // completeAndScheduleOSR() below. But that would have meant that it could take
+ // longer for code blocks to be completed: they would only complete when *their*
+ // execution count trigger fired; but that could take a while since the firing is
+ // racy. It could also mean that code blocks that never run again after being
+    // compiled would sit on the worklist until the next GC. That's fine, but it's
+ // probably a waste of memory. Our goal here is to complete code blocks as soon as
+ // possible in order to minimize the chances of us executing baseline code after
+ // optimized code is already available.
+
+ worklistState =
+ stackFrame.vm->worklist->completeAllReadyPlansForVM(*stackFrame.vm, codeBlock);
+ } else
+ worklistState = DFG::Worklist::NotKnown;
+
+ if (worklistState == DFG::Worklist::Compiling) {
+ // We cannot be in the process of asynchronous compilation and also have an optimized
+ // replacement.
+ RELEASE_ASSERT(!codeBlock->hasOptimizedReplacement());
+ codeBlock->setOptimizationThresholdBasedOnCompilationResult(CompilationDeferred);
+ return;
+ }
+
+ if (worklistState == DFG::Worklist::Compiled) {
+ // If we don't have an optimized replacement but we did just get compiled, then
+ // either of the following happened:
+ // - The compilation failed or was invalidated, in which case the execution count
+ // thresholds have already been set appropriately by
+ // CodeBlock::setOptimizationThresholdBasedOnCompilationResult() and we have
+ // nothing left to do.
+ // - GC ran after DFG::Worklist::completeAllReadyPlansForVM() and jettisoned our
+ // code block. Obviously that's unfortunate and we'd rather not have that
+ // happen, but it can happen, and if it did then the jettisoning logic will
+ // have set our threshold appropriately and we have nothing left to do.
+ if (!codeBlock->hasOptimizedReplacement())
+ return;
+ } else if (codeBlock->hasOptimizedReplacement()) {
+ if (Options::verboseOSR())
+ dataLog("Considering OSR ", *codeBlock, " -> ", *codeBlock->replacement(), ".\n");
// If we have an optimized replacement, then it must be the case that we entered
    // cti_optimize from a loop. That's because if there's an optimized replacement,
// then all calls to this function will be relinked to the replacement and so
@@ -1000,60 +1053,39 @@
// shouldReoptimizeFromLoopNow() to always return true. But we make it do some
// additional checking anyway, to reduce the amount of recompilation thrashing.
if (codeBlock->replacement()->shouldReoptimizeFromLoopNow()) {
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog("Triggering reoptimization of ", *codeBlock, "(", *codeBlock->replacement(), ") (in loop).\n");
-#endif
+ if (Options::verboseOSR()) {
+ dataLog(
+ "Triggering reoptimization of ", *codeBlock,
+ "(", *codeBlock->replacement(), ") (in loop).\n");
+ }
codeBlock->reoptimize();
return;
}
} else {
if (!codeBlock->shouldOptimizeNow()) {
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog("Delaying optimization for ", *codeBlock, " (in loop) because of insufficient profiling.\n");
-#endif
+ if (Options::verboseOSR()) {
+ dataLog(
+ "Delaying optimization for ", *codeBlock,
+ " (in loop) because of insufficient profiling.\n");
+ }
return;
}
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog("Triggering optimized compilation of ", *codeBlock, "\n");
-#endif
+ if (Options::verboseOSR())
+ dataLog("Triggering optimized compilation of ", *codeBlock, "\n");
JSScope* scope = callFrame->scope();
CompilationResult result;
JSObject* error = codeBlock->compileOptimized(callFrame, scope, result, bytecodeIndex);
- if (Options::verboseCompilation()
- || Options::showDisassembly()
- || Options::showDFGDisassembly())
+ if (Options::verboseOSR()) {
dataLog("Optimizing compilation of ", *codeBlock, " result: ", result, "\n");
-#if ENABLE(JIT_VERBOSE_OSR)
- if (error)
- dataLog("WARNING: optimized compilation failed with a JS error.\n");
-#else
- UNUSED_PARAM(error);
-#endif
-
- RELEASE_ASSERT((result == CompilationSuccessful) == (codeBlock->replacement() != codeBlock));
- switch (result) {
- case CompilationSuccessful:
- break;
- case CompilationFailed:
- case CompilationDeferred:
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog("Optimizing ", *codeBlock, " failed.\n");
-#endif
- ASSERT(codeBlock->getJITType() == JITCode::BaselineJIT);
- codeBlock->dontOptimizeAnytimeSoon();
- return;
- case CompilationInvalidated:
- ASSERT(codeBlock->getJITType() == JITCode::BaselineJIT);
- // Retry with exponential backoff.
- codeBlock->countReoptimization();
- codeBlock->optimizeAfterWarmUp();
- return;
- default:
- RELEASE_ASSERT_NOT_REACHED();
- return;
+ if (error)
+ dataLog("WARNING: optimized compilation failed with a JS error.\n");
}
+
+ codeBlock->setOptimizationThresholdBasedOnCompilationResult(result);
+ if (result != CompilationSuccessful)
+ return;
}
CodeBlock* optimizedCodeBlock = codeBlock->replacement();
@@ -1069,32 +1101,28 @@
}
if (void* address = DFG::prepareOSREntry(callFrame, optimizedCodeBlock, bytecodeIndex)) {
- if (Options::showDFGDisassembly()) {
+ if (Options::verboseOSR()) {
dataLog(
"Performing OSR ", *codeBlock, " -> ", *optimizedCodeBlock, ", address ",
RawPointer((STUB_RETURN_ADDRESS).value()), " -> ", RawPointer(address), ".\n");
}
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog("Optimizing ", *codeBlock, " succeeded, performing OSR after a delay of ", codeBlock->optimizationDelayCounter(), ".\n");
-#endif
codeBlock->optimizeSoon();
STUB_SET_RETURN_ADDRESS(address);
return;
}
-
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog("Optimizing ", *codeBlock, " succeeded, OSR failed, after a delay of ", codeBlock->optimizationDelayCounter(), ".\n");
-#endif
+
+ if (Options::verboseOSR()) {
+ dataLog(
+ "Optimizing ", *codeBlock, " -> ", *codeBlock->replacement(),
+ " succeeded, OSR failed, after a delay of ",
+ codeBlock->optimizationDelayCounter(), ".\n");
+ }
// Count the OSR failure as a speculation failure. If this happens a lot, then
// reoptimize.
optimizedCodeBlock->countOSRExit();
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog("Encountered OSR failure ", *codeBlock, " -> ", *codeBlock->replacement(), ".\n");
-#endif
-
// We are a lot more conservative about triggering reoptimization after OSR failure than
// before it. If we enter the optimize_from_loop trigger with a bucket full of fail
// already, then we really would like to reoptimize immediately. But this case covers
@@ -1104,9 +1132,11 @@
// right now. So, we only trigger reoptimization only upon the more conservative (non-loop)
// reoptimization trigger.
if (optimizedCodeBlock->shouldReoptimizeNow()) {
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog("Triggering reoptimization of ", *codeBlock, " -> ", *codeBlock->replacement(), " (after OSR fail).\n");
-#endif
+ if (Options::verboseOSR()) {
+ dataLog(
+ "Triggering reoptimization of ", *codeBlock, " -> ",
+ *codeBlock->replacement(), " (after OSR fail).\n");
+ }
codeBlock->reoptimize();
return;
}
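After draining the ready plans, the new cti_optimize logic branches three ways on the worklist state: still compiling means back off (thresholds were already set via CompilationDeferred); compiled with no replacement means the compile failed or GC jettisoned the result, so there is nothing left to do; otherwise proceed to OSR if a replacement exists, or kick off a compile. That decision structure can be condensed into one pure function (the enum and Action names here are illustrative, not WebKit API):

```cpp
enum WorklistState { NotKnown, Compiling, Compiled };
enum Action { WaitForCompile, Done, ConsiderOSR, StartCompile };

// Mirrors the branch structure in cti_optimize after completing ready plans.
Action nextAction(WorklistState state, bool hasOptimizedReplacement) {
    if (state == Compiling)
        return WaitForCompile; // counters already set to retry later
    if (state == Compiled && !hasOptimizedReplacement)
        return Done;           // compile failed, or GC jettisoned the result
    if (hasOptimizedReplacement)
        return ConsiderOSR;    // try to enter the optimized code from this loop
    return StartCompile;       // no replacement and nothing in flight
}
```

Note the invariant asserted in the patch: a code block can never be both Compiling and have an optimized replacement, so the first two branches cannot both fire for the same block.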
diff --git a/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp b/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
index 6114aa8..d58e8e4 100644
--- a/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
+++ b/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
@@ -282,30 +282,26 @@
codeBlock->updateAllValueProfilePredictions();
if (!codeBlock->checkIfJITThresholdReached()) {
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLogF(" JIT threshold should be lifted.\n");
-#endif
+ if (Options::verboseOSR())
+ dataLogF(" JIT threshold should be lifted.\n");
return false;
}
CompilationResult result = codeBlock->jitCompile(exec);
switch (result) {
case CompilationNotNeeded:
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLogF(" Code was already compiled.\n");
-#endif
+ if (Options::verboseOSR())
+ dataLogF(" Code was already compiled.\n");
codeBlock->jitSoon();
return true;
case CompilationFailed:
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLogF(" JIT compilation failed.\n");
-#endif
+ if (Options::verboseOSR())
+ dataLogF(" JIT compilation failed.\n");
codeBlock->dontJITAnytimeSoon();
return false;
case CompilationSuccessful:
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLogF(" JIT compilation successful.\n");
-#endif
+ if (Options::verboseOSR())
+ dataLogF(" JIT compilation successful.\n");
codeBlock->jitSoon();
return true;
default:
@@ -317,11 +313,11 @@
enum EntryKind { Prologue, ArityCheck };
static SlowPathReturnType entryOSR(ExecState* exec, Instruction*, CodeBlock* codeBlock, const char *name, EntryKind kind)
{
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog(*codeBlock, ": Entered ", name, " with executeCounter = ", codeBlock->llintExecuteCounter(), "\n");
-#else
- UNUSED_PARAM(name);
-#endif
+ if (Options::verboseOSR()) {
+ dataLog(
+ *codeBlock, ": Entered ", name, " with executeCounter = ",
+ codeBlock->llintExecuteCounter(), "\n");
+ }
if (!shouldJIT(exec)) {
codeBlock->dontJITAnytimeSoon();
@@ -364,10 +360,12 @@
LLINT_SLOW_PATH_DECL(loop_osr)
{
CodeBlock* codeBlock = exec->codeBlock();
-
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog(*codeBlock, ": Entered loop_osr with executeCounter = ", codeBlock->llintExecuteCounter(), "\n");
-#endif
+
+ if (Options::verboseOSR()) {
+ dataLog(
+ *codeBlock, ": Entered loop_osr with executeCounter = ",
+ codeBlock->llintExecuteCounter(), "\n");
+ }
if (!shouldJIT(exec)) {
codeBlock->dontJITAnytimeSoon();
@@ -394,10 +392,12 @@
LLINT_SLOW_PATH_DECL(replace)
{
CodeBlock* codeBlock = exec->codeBlock();
-
-#if ENABLE(JIT_VERBOSE_OSR)
- dataLog(*codeBlock, ": Entered replace with executeCounter = ", codeBlock->llintExecuteCounter(), "\n");
-#endif
+
+ if (Options::verboseOSR()) {
+ dataLog(
+ *codeBlock, ": Entered replace with executeCounter = ",
+ codeBlock->llintExecuteCounter(), "\n");
+ }
if (shouldJIT(exec))
jitCompileAndSetHeuristics(codeBlock, exec);
diff --git a/Source/JavaScriptCore/profiler/ProfilerBytecodes.h b/Source/JavaScriptCore/profiler/ProfilerBytecodes.h
index a538558..e445980 100644
--- a/Source/JavaScriptCore/profiler/ProfilerBytecodes.h
+++ b/Source/JavaScriptCore/profiler/ProfilerBytecodes.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -29,7 +29,6 @@
#include "CodeBlockHash.h"
#include "JSCJSValue.h"
#include "ProfilerBytecodeSequence.h"
-#include <wtf/ByteSpinLock.h>
#include <wtf/PrintStream.h>
#include <wtf/text/WTFString.h>
diff --git a/Source/JavaScriptCore/runtime/ConcurrentJITLock.h b/Source/JavaScriptCore/runtime/ConcurrentJITLock.h
new file mode 100644
index 0000000..112dd35
--- /dev/null
+++ b/Source/JavaScriptCore/runtime/ConcurrentJITLock.h
@@ -0,0 +1,45 @@
+/*
+ * Copyright (C) 2013 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef ConcurrentJITLock_h
+#define ConcurrentJITLock_h
+
+#include <wtf/ByteSpinLock.h>
+#include <wtf/NoLock.h>
+
+namespace JSC {
+
+#if ENABLE(CONCURRENT_JIT)
+typedef ByteSpinLock ConcurrentJITLock;
+typedef ByteSpinLocker ConcurrentJITLocker;
+#else
+typedef NoLock ConcurrentJITLock;
+typedef NoLockLocker ConcurrentJITLocker;
+#endif
+
+} // namespace JSC
+
+#endif // ConcurrentJITLock_h
+
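The header above selects a real spinlock when CONCURRENT_JIT is compiled in and a do-nothing stand-in otherwise, so call sites like Structure and SymbolTable can take a Locker unconditionally. A sketch of how that typedef switch works (NoLock's members here are a guess at the shape of WTF's type, not its actual definition; for observability this sketch counts acquisitions, whereas the real NoLock does nothing at all):

```cpp
// In non-concurrent builds, "locking" should compile away to nothing.
struct NoLock {
    int lockCount = 0; // illustrative only; a real no-op lock holds no state
    void lock() { ++lockCount; }
    void unlock() { --lockCount; }
};

// Generic RAII locker, in the spirit of WTF's Locker template.
template<typename LockType>
class Locker {
public:
    explicit Locker(LockType& lock) : m_lock(lock) { m_lock.lock(); }
    ~Locker() { m_lock.unlock(); }
private:
    LockType& m_lock;
};

#if defined(ENABLE_CONCURRENT_JIT_SKETCH)
typedef ByteSpinLock ConcurrentJITLock; // a real lock in concurrent builds
#else
typedef NoLock ConcurrentJITLock;       // zero-cost in single-threaded builds
#endif
typedef Locker<ConcurrentJITLock> ConcurrentJITLocker;
```

Because the lock type is chosen at compile time, single-threaded builds pay no space or time cost for the locking that the concurrent compilation thread requires.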
diff --git a/Source/JavaScriptCore/runtime/ExecutionHarness.h b/Source/JavaScriptCore/runtime/ExecutionHarness.h
index 3fb3076..78e7330 100644
--- a/Source/JavaScriptCore/runtime/ExecutionHarness.h
+++ b/Source/JavaScriptCore/runtime/ExecutionHarness.h
@@ -141,12 +141,20 @@
template<typename CodeBlockType>
inline CompilationResult replaceWithDeferredOptimizedCode(
- PassRefPtr<DFG::Plan> plan, RefPtr<CodeBlockType>& sink, RefPtr<JITCode>& jitCode,
+ PassRefPtr<DFG::Plan> passedPlan, RefPtr<CodeBlockType>& sink, RefPtr<JITCode>& jitCode,
MacroAssemblerCodePtr* jitCodeWithArityCheck, int* numParameters)
{
+ RefPtr<DFG::Plan> plan = passedPlan;
+ CompilationResult result = DFG::tryFinalizePlan(plan, jitCode, jitCodeWithArityCheck);
+ if (Options::verboseOSR()) {
+ dataLog(
+ "Deferred optimizing compilation ", *plan->key(), " -> ", *plan->codeBlock,
+ " result: ", result, "\n");
+ }
+ if (result == CompilationSuccessful)
+ plan->codeBlock->alternative()->unlinkIncomingCalls();
return installOptimizedCode(
- DFG::tryFinalizePlan(plan, jitCode, jitCodeWithArityCheck),
- sink, static_cast<CodeBlockType*>(plan->codeBlock.get()), jitCode,
+ result, sink, static_cast<CodeBlockType*>(plan->codeBlock.get()), jitCode,
jitCodeWithArityCheck ? *jitCodeWithArityCheck : MacroAssemblerCodePtr(),
numParameters);
}
diff --git a/Source/JavaScriptCore/runtime/JSSegmentedVariableObject.cpp b/Source/JavaScriptCore/runtime/JSSegmentedVariableObject.cpp
index 01c0704..3f8c402 100644
--- a/Source/JavaScriptCore/runtime/JSSegmentedVariableObject.cpp
+++ b/Source/JavaScriptCore/runtime/JSSegmentedVariableObject.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -35,6 +35,8 @@
int JSSegmentedVariableObject::findRegisterIndex(void* registerAddress)
{
+ Locker locker(m_lock);
+
for (int i = m_registers.size(); i--;) {
if (&m_registers[i] != registerAddress)
continue;
@@ -46,6 +48,8 @@
int JSSegmentedVariableObject::addRegisters(int numberOfRegistersToAdd)
{
+ Locker locker(m_lock);
+
ASSERT(numberOfRegistersToAdd >= 0);
size_t oldSize = m_registers.size();
diff --git a/Source/JavaScriptCore/runtime/JSSegmentedVariableObject.h b/Source/JavaScriptCore/runtime/JSSegmentedVariableObject.h
index 3a6f625..1cc7988 100644
--- a/Source/JavaScriptCore/runtime/JSSegmentedVariableObject.h
+++ b/Source/JavaScriptCore/runtime/JSSegmentedVariableObject.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -29,6 +29,7 @@
#ifndef JSSegmentedVariableObject_h
#define JSSegmentedVariableObject_h
+#include "ConcurrentJITLock.h"
#include "JSObject.h"
#include "JSSymbolTableObject.h"
#include "Register.h"
@@ -90,8 +91,12 @@
{
Base::finishCreation(vm);
}
+
+ typedef ConcurrentJITLock Lock;
+ typedef ConcurrentJITLocker Locker;
SegmentedVector<WriteBarrier<Unknown>, 16> m_registers;
+ Lock m_lock;
};
} // namespace JSC
diff --git a/Source/JavaScriptCore/runtime/Options.h b/Source/JavaScriptCore/runtime/Options.h
index 1ce9455..21148c7 100644
--- a/Source/JavaScriptCore/runtime/Options.h
+++ b/Source/JavaScriptCore/runtime/Options.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2012, 2013 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -109,6 +109,8 @@
v(bool, printEachOSRExit, false) \
v(bool, validateGraph, false) \
v(bool, validateGraphAtEachPhase, false) \
+ v(bool, verboseOSR, false) \
+ v(bool, verboseCompilationQueue, false) \
\
v(bool, useExperimentalFTL, false) \
v(bool, useFTLTBAA, true) \
@@ -116,6 +118,8 @@
v(bool, enableLLVMFastISel, true) \
v(unsigned, llvmOptimizationLevel, 2) \
\
+ v(bool, enableConcurrentJIT, true) \
+ \
v(bool, enableProfiler, false) \
\
v(unsigned, maximumOptimizationCandidateInstructionCount, 10000) \
diff --git a/Source/JavaScriptCore/runtime/Structure.h b/Source/JavaScriptCore/runtime/Structure.h
index 51a119c..4c4b05c 100644
--- a/Source/JavaScriptCore/runtime/Structure.h
+++ b/Source/JavaScriptCore/runtime/Structure.h
@@ -27,6 +27,7 @@
#define Structure_h
#include "ClassInfo.h"
+#include "ConcurrentJITLock.h"
#include "IndexingType.h"
#include "JSCJSValue.h"
#include "JSCell.h"
@@ -40,7 +41,6 @@
#include "JSTypeInfo.h"
#include "Watchpoint.h"
#include "Weak.h"
-#include <wtf/ByteSpinLock.h>
#include <wtf/CompilationThread.h>
#include <wtf/PassRefPtr.h>
#include <wtf/RefCounted.h>
@@ -72,8 +72,8 @@
typedef JSCell Base;
- typedef ByteSpinLock Lock;
- typedef ByteSpinLocker Locker;
+ typedef ConcurrentJITLock Lock;
+ typedef ConcurrentJITLocker Locker;
static Structure* create(VM&, JSGlobalObject*, JSValue prototype, const TypeInfo&, const ClassInfo*, IndexingType = NonArray, unsigned inlineCapacity = 0);
diff --git a/Source/JavaScriptCore/runtime/StructureInlines.h b/Source/JavaScriptCore/runtime/StructureInlines.h
index 3a905654..a1d1d77 100644
--- a/Source/JavaScriptCore/runtime/StructureInlines.h
+++ b/Source/JavaScriptCore/runtime/StructureInlines.h
@@ -226,7 +226,7 @@
ALWAYS_INLINE WriteBarrier<PropertyTable>& Structure::propertyTable()
{
- ASSERT(!globalObject() || !globalObject()->vm().heap.isBusy());
+ ASSERT(!globalObject() || !globalObject()->vm().heap.isCollecting());
return m_propertyTableUnsafe;
}
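The assertion in Structure::propertyTable() is relaxed from isBusy() to the new, strictly narrower isCollecting(), presumably because with concurrent compilation a DFG thread may legitimately consult a Structure while the mutator is mid-allocation, but still must never do so during a collection. The two predicates differ only for the allocation state, which can be seen in miniature (HeapOperation here is an illustrative stand-in for Heap's m_operationInProgress):

```cpp
// Illustrative mirror of Heap's operation-in-progress states.
enum HeapOperation { NoOperation, Allocation, Collection };

// isBusy(): any operation at all is underway.
bool isBusy(HeapOperation op) { return op != NoOperation; }

// isCollecting(): only a full collection counts.
bool isCollecting(HeapOperation op) { return op == Collection; }
```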
diff --git a/Source/JavaScriptCore/runtime/SymbolTable.h b/Source/JavaScriptCore/runtime/SymbolTable.h
index 422a633..d0ea334 100644
--- a/Source/JavaScriptCore/runtime/SymbolTable.h
+++ b/Source/JavaScriptCore/runtime/SymbolTable.h
@@ -29,6 +29,7 @@
#ifndef SymbolTable_h
#define SymbolTable_h
+#include "ConcurrentJITLock.h"
#include "JSObject.h"
#include "Watchpoint.h"
#include <wtf/HashTraits.h>
@@ -342,8 +343,8 @@
class SymbolTable {
public:
typedef HashMap<RefPtr<StringImpl>, SymbolTableEntry, IdentifierRepHash, HashTraits<RefPtr<StringImpl> >, SymbolTableIndexHashTraits> Map;
- typedef ByteSpinLock Lock;
- typedef ByteSpinLocker Locker;
+ typedef ConcurrentJITLock Lock;
+ typedef ConcurrentJITLocker Locker;
JS_EXPORT_PRIVATE SymbolTable();
JS_EXPORT_PRIVATE ~SymbolTable();
@@ -433,7 +434,7 @@
private:
Map m_map;
public:
- mutable ByteSpinLock m_lock;
+ mutable ConcurrentJITLock m_lock;
};
diff --git a/Source/JavaScriptCore/runtime/VM.cpp b/Source/JavaScriptCore/runtime/VM.cpp
index 3ca134b..bcf959a 100644
--- a/Source/JavaScriptCore/runtime/VM.cpp
+++ b/Source/JavaScriptCore/runtime/VM.cpp
@@ -33,6 +33,7 @@
#include "CodeCache.h"
#include "CommonIdentifiers.h"
#include "DFGLongLivedState.h"
+#include "DFGWorklist.h"
#include "DebuggerActivation.h"
#include "FunctionConstructor.h"
#include "GCActivityCallback.h"
@@ -259,12 +260,19 @@
#if ENABLE(DFG_JIT)
if (canUseJIT())
- m_dfgState = adoptPtr(new DFG::LongLivedState());
+ dfgState = adoptPtr(new DFG::LongLivedState());
#endif
}
VM::~VM()
{
+ // Make sure concurrent compilations are done, but don't install them, since installing
+ // them might cause a GC. We don't want to GC right now.
+ if (worklist) {
+ worklist->waitUntilAllPlansForVMAreReady(*this);
+ worklist->removeAllReadyPlansForVM(*this);
+ }
+
// Clear this first to ensure that nobody tries to remove themselves from it.
m_perBytecodeProfiler.clear();
@@ -440,8 +448,19 @@
interpreter->stopSampling();
}
+void VM::prepareToDiscardCode()
+{
+#if ENABLE(DFG_JIT)
+ if (!worklist)
+ return;
+
+ worklist->completeAllPlansForVM(*this);
+#endif
+}
+
void VM::discardAllCode()
{
+ prepareToDiscardCode();
m_codeCache->clear();
heap.deleteAllCompiledCode();
heap.reportAbandonedObjectGraph();
@@ -483,6 +502,8 @@
void VM::releaseExecutableMemory()
{
+ prepareToDiscardCode();
+
if (dynamicGlobalObject) {
StackPreservingRecompiler recompiler;
HashSet<JSCell*> roots;
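The VM destructor and prepareToDiscardCode hunks above both depend on being able to block until the concurrent compiler has finished every plan belonging to this VM. A minimal standalone sketch of that wait-for-completion pattern, under the assumption that the worklist signals a condition variable each time a plan finishes (MiniWorklist, Plan, and markReady are hypothetical names, not WebKit's API):

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>

// Hypothetical sketch of the pattern behind waitUntilAllPlansForVMAreReady:
// a shared queue of plans tagged with their owning VM, and a condition
// variable that waiters use to sleep until every plan for their VM is done.
struct Plan {
    const void* vm;          // stand-in for the owning VM*
    bool ready { false };    // set by the compiler thread when compilation finishes
};

class MiniWorklist {
public:
    void add(Plan* plan)
    {
        std::lock_guard<std::mutex> locker(m_mutex);
        m_plans.push_back(plan);
    }
    void markReady(Plan* plan)
    {
        std::lock_guard<std::mutex> locker(m_mutex);
        plan->ready = true;
        m_planReady.notify_all(); // wake any thread waiting on this VM's plans
    }
    void waitUntilAllPlansForVMAreReady(const void* vm)
    {
        std::unique_lock<std::mutex> locker(m_mutex);
        m_planReady.wait(locker, [&] {
            for (Plan* plan : m_plans) {
                if (plan->vm == vm && !plan->ready)
                    return false; // still an outstanding plan for this VM
            }
            return true;
        });
    }
private:
    std::mutex m_mutex;
    std::condition_variable m_planReady;
    std::deque<Plan*> m_plans;
};
```

This is why the destructor can safely wait and then drop the ready plans without installing them: the wait only requires the compiler thread to finish, not to run any code that could allocate and trigger a GC.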
diff --git a/Source/JavaScriptCore/runtime/VM.h b/Source/JavaScriptCore/runtime/VM.h
index dcd5251..0a2aadb 100644
--- a/Source/JavaScriptCore/runtime/VM.h
+++ b/Source/JavaScriptCore/runtime/VM.h
@@ -96,6 +96,7 @@
#if ENABLE(DFG_JIT)
namespace DFG {
class LongLivedState;
+ class Worklist;
}
#endif // ENABLE(DFG_JIT)
@@ -208,7 +209,8 @@
Heap heap;
#if ENABLE(DFG_JIT)
- OwnPtr<DFG::LongLivedState> m_dfgState;
+ OwnPtr<DFG::LongLivedState> dfgState;
+ RefPtr<DFG::Worklist> worklist;
#endif // ENABLE(DFG_JIT)
VMType vmType;
@@ -472,6 +474,8 @@
JSLock& apiLock() { return *m_apiLock; }
CodeCache* codeCache() { return m_codeCache.get(); }
+ void prepareToDiscardCode();
+
JS_EXPORT_PRIVATE void discardAllCode();
private: