FTL should be able to do polymorphic call inlining
https://bugs.webkit.org/show_bug.cgi?id=135145
Reviewed by Geoffrey Garen.
Source/JavaScriptCore:
Added a log-based high-fidelity call edge profiler that runs in DFG JIT (and optionally
baseline JIT) code. Used it to do precise polymorphic inlining in the FTL. Potential
inlining sites use the call edge profile if it is available, but they will still fall back
on the call inline cache and rare case counts if it's not. Polymorphic inlining means that
multiple possible callees can be inlined with a switch to guard them. The slow path may
either be an OSR exit or a virtual call.
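As an illustration, here is a minimal sketch of the kind of call site this targets, in the
spirit of the js/regress/simple-poly-call test added below (the names here are hypothetical):

    // This call site sees two different callees. With call edge profiling, the FTL
    // can inline both bodies, guarded by a switch on the callee; the slow path is
    // either an OSR exit or a virtual call.
    function fooPlus(x) { return x + 1; }
    function fooMinus(x) { return x - 1; }

    function dispatch(f, x) {
        return f(x); // polymorphic: sees both fooPlus and fooMinus
    }

    var result = 0;
    for (var i = 0; i < 1000000; ++i)
        result += dispatch(i & 1 ? fooPlus : fooMinus, i);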
The call edge profiling added in this patch is very precise - it will tell you about every
call that has ever happened. It took some effort to reduce the overhead of this profiling.
This mostly involved ensuring that we don't do it unnecessarily. For example, we avoid it
in the baseline JIT (you can conditionally enable it but it's off by default) and we only do
it in the DFG JIT if we know that the regular inline cache profiling wasn't precise enough.
I also experimented with reducing the precision of the profiling. This led to a significant
reduction in the speed-up, so I avoided this approach. I also explored making log processing
concurrent, but that didn't help. Measuring where the profiling time goes showed that most
of the overhead is in appending entries to the log rather than in processing it - the
processing part appears to be surprisingly cheap.
Polymorphic inlining could be enabled in the DFG if we enabled baseline call edge profiling,
and if we guarded such inlining sites with some profiling mechanism to detect
polyvariant monomorphisation opportunities (where inlining a callsite into a particular
caller reveals that it's actually monomorphic).
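For example (a hypothetical sketch), a callsite that looks polymorphic when profiled across
the whole program can turn out to be monomorphic in each caller it is inlined into:

    function double(x) { return x * 2; }
    function square(x) { return x * x; }

    // helper's call to f sees both double and square when profiled globally...
    function helper(f, x) { return f(x); }

    // ...but once helper is inlined into callerA or callerB, each inlined copy
    // of the callsite only ever sees one callee.
    function callerA(x) { return helper(double, x); }
    function callerB(x) { return helper(square, x); }

    var sum = 0;
    for (var i = 0; i < 100000; ++i)
        sum += callerA(i) + callerB(i);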
This is a ~28% speed-up on deltablue and a ~7% speed-up on richards, with small speed-ups on
other programs as well. It's about a 2% speed-up on Octane version 2, and never a regression
on anything we care about. Some aggregates, like V8Spider, see a regression. This
highlights the increase in profiling overhead. But since this doesn't show up on any major
score (code-load or SunSpider), it's probably not relevant.
Relanding after fixing debug assertions in fast/storage/serialized-script-value.html.
* CMakeLists.txt:
* JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
* JavaScriptCore.xcodeproj/project.pbxproj:
* bytecode/CallEdge.cpp: Added.
(JSC::CallEdge::dump):
* bytecode/CallEdge.h: Added.
(JSC::CallEdge::operator!):
(JSC::CallEdge::callee):
(JSC::CallEdge::count):
(JSC::CallEdge::despecifiedClosure):
(JSC::CallEdge::CallEdge):
* bytecode/CallEdgeProfile.cpp: Added.
(JSC::CallEdgeProfile::callEdges):
(JSC::CallEdgeProfile::numCallsToKnownCells):
(JSC::worthDespecifying):
(JSC::CallEdgeProfile::worthDespecifying):
(JSC::CallEdgeProfile::visitWeak):
(JSC::CallEdgeProfile::addSlow):
(JSC::CallEdgeProfile::mergeBack):
(JSC::CallEdgeProfile::fadeByHalf):
(JSC::CallEdgeLog::CallEdgeLog):
(JSC::CallEdgeLog::~CallEdgeLog):
(JSC::CallEdgeLog::isEnabled):
(JSC::operationProcessCallEdgeLog):
(JSC::CallEdgeLog::emitLogCode):
(JSC::CallEdgeLog::processLog):
* bytecode/CallEdgeProfile.h: Added.
(JSC::CallEdgeProfile::numCallsToNotCell):
(JSC::CallEdgeProfile::numCallsToUnknownCell):
(JSC::CallEdgeProfile::totalCalls):
* bytecode/CallEdgeProfileInlines.h: Added.
(JSC::CallEdgeProfile::CallEdgeProfile):
(JSC::CallEdgeProfile::add):
* bytecode/CallLinkInfo.cpp:
(JSC::CallLinkInfo::visitWeak):
* bytecode/CallLinkInfo.h:
* bytecode/CallLinkStatus.cpp:
(JSC::CallLinkStatus::CallLinkStatus):
(JSC::CallLinkStatus::computeFromLLInt):
(JSC::CallLinkStatus::computeFor):
(JSC::CallLinkStatus::computeExitSiteData):
(JSC::CallLinkStatus::computeFromCallLinkInfo):
(JSC::CallLinkStatus::computeFromCallEdgeProfile):
(JSC::CallLinkStatus::computeDFGStatuses):
(JSC::CallLinkStatus::isClosureCall):
(JSC::CallLinkStatus::makeClosureCall):
(JSC::CallLinkStatus::dump):
(JSC::CallLinkStatus::function): Deleted.
(JSC::CallLinkStatus::internalFunction): Deleted.
(JSC::CallLinkStatus::intrinsicFor): Deleted.
* bytecode/CallLinkStatus.h:
(JSC::CallLinkStatus::CallLinkStatus):
(JSC::CallLinkStatus::isSet):
(JSC::CallLinkStatus::couldTakeSlowPath):
(JSC::CallLinkStatus::edges):
(JSC::CallLinkStatus::size):
(JSC::CallLinkStatus::at):
(JSC::CallLinkStatus::operator[]):
(JSC::CallLinkStatus::canOptimize):
(JSC::CallLinkStatus::canTrustCounts):
(JSC::CallLinkStatus::isClosureCall): Deleted.
(JSC::CallLinkStatus::callTarget): Deleted.
(JSC::CallLinkStatus::executable): Deleted.
(JSC::CallLinkStatus::makeClosureCall): Deleted.
* bytecode/CallVariant.cpp: Added.
(JSC::CallVariant::dump):
* bytecode/CallVariant.h: Added.
(JSC::CallVariant::CallVariant):
(JSC::CallVariant::operator!):
(JSC::CallVariant::despecifiedClosure):
(JSC::CallVariant::rawCalleeCell):
(JSC::CallVariant::internalFunction):
(JSC::CallVariant::function):
(JSC::CallVariant::isClosureCall):
(JSC::CallVariant::executable):
(JSC::CallVariant::nonExecutableCallee):
(JSC::CallVariant::intrinsicFor):
(JSC::CallVariant::functionExecutable):
(JSC::CallVariant::isHashTableDeletedValue):
(JSC::CallVariant::operator==):
(JSC::CallVariant::operator!=):
(JSC::CallVariant::operator<):
(JSC::CallVariant::operator>):
(JSC::CallVariant::operator<=):
(JSC::CallVariant::operator>=):
(JSC::CallVariant::hash):
(JSC::CallVariant::deletedToken):
(JSC::CallVariantHash::hash):
(JSC::CallVariantHash::equal):
* bytecode/CodeOrigin.h:
(JSC::InlineCallFrame::isNormalCall):
* bytecode/ExitKind.cpp:
(JSC::exitKindToString):
* bytecode/ExitKind.h:
* bytecode/GetByIdStatus.cpp:
(JSC::GetByIdStatus::computeForStubInfo):
* bytecode/PutByIdStatus.cpp:
(JSC::PutByIdStatus::computeForStubInfo):
* dfg/DFGAbstractInterpreterInlines.h:
(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
* dfg/DFGBackwardsPropagationPhase.cpp:
(JSC::DFG::BackwardsPropagationPhase::propagate):
* dfg/DFGBasicBlock.cpp:
(JSC::DFG::BasicBlock::~BasicBlock):
* dfg/DFGBasicBlock.h:
(JSC::DFG::BasicBlock::takeLast):
(JSC::DFG::BasicBlock::didLink):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::processSetLocalQueue):
(JSC::DFG::ByteCodeParser::removeLastNodeFromGraph):
(JSC::DFG::ByteCodeParser::addCallWithoutSettingResult):
(JSC::DFG::ByteCodeParser::addCall):
(JSC::DFG::ByteCodeParser::handleCall):
(JSC::DFG::ByteCodeParser::emitFunctionChecks):
(JSC::DFG::ByteCodeParser::undoFunctionChecks):
(JSC::DFG::ByteCodeParser::inliningCost):
(JSC::DFG::ByteCodeParser::inlineCall):
(JSC::DFG::ByteCodeParser::cancelLinkingForBlock):
(JSC::DFG::ByteCodeParser::attemptToInlineCall):
(JSC::DFG::ByteCodeParser::handleInlining):
(JSC::DFG::ByteCodeParser::handleConstantInternalFunction):
(JSC::DFG::ByteCodeParser::prepareToParseBlock):
(JSC::DFG::ByteCodeParser::clearCaches):
(JSC::DFG::ByteCodeParser::parseBlock):
(JSC::DFG::ByteCodeParser::linkBlock):
(JSC::DFG::ByteCodeParser::linkBlocks):
(JSC::DFG::ByteCodeParser::parseCodeBlock):
* dfg/DFGCPSRethreadingPhase.cpp:
(JSC::DFG::CPSRethreadingPhase::freeUnnecessaryNodes):
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGCommon.h:
* dfg/DFGConstantFoldingPhase.cpp:
(JSC::DFG::ConstantFoldingPhase::foldConstants):
* dfg/DFGDoesGC.cpp:
(JSC::DFG::doesGC):
* dfg/DFGDriver.cpp:
(JSC::DFG::compileImpl):
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::fixupNode):
* dfg/DFGGraph.cpp:
(JSC::DFG::Graph::dump):
(JSC::DFG::Graph::getBlocksInPreOrder):
(JSC::DFG::Graph::visitChildren):
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::link):
* dfg/DFGLazyJSValue.cpp:
(JSC::DFG::LazyJSValue::switchLookupValue):
* dfg/DFGLazyJSValue.h:
(JSC::DFG::LazyJSValue::switchLookupValue): Deleted.
* dfg/DFGNode.cpp:
(WTF::printInternal):
* dfg/DFGNode.h:
(JSC::DFG::OpInfo::OpInfo):
(JSC::DFG::Node::hasHeapPrediction):
(JSC::DFG::Node::hasCellOperand):
(JSC::DFG::Node::cellOperand):
(JSC::DFG::Node::setCellOperand):
(JSC::DFG::Node::canBeKnownFunction): Deleted.
(JSC::DFG::Node::hasKnownFunction): Deleted.
(JSC::DFG::Node::knownFunction): Deleted.
(JSC::DFG::Node::giveKnownFunction): Deleted.
(JSC::DFG::Node::hasFunction): Deleted.
(JSC::DFG::Node::function): Deleted.
(JSC::DFG::Node::hasExecutable): Deleted.
(JSC::DFG::Node::executable): Deleted.
* dfg/DFGNodeType.h:
* dfg/DFGPhantomCanonicalizationPhase.cpp:
(JSC::DFG::PhantomCanonicalizationPhase::run):
* dfg/DFGPhantomRemovalPhase.cpp:
(JSC::DFG::PhantomRemovalPhase::run):
* dfg/DFGPredictionPropagationPhase.cpp:
(JSC::DFG::PredictionPropagationPhase::propagate):
* dfg/DFGSafeToExecute.h:
(JSC::DFG::safeToExecute):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::emitSwitch):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGStructureRegistrationPhase.cpp:
(JSC::DFG::StructureRegistrationPhase::run):
* dfg/DFGTierUpCheckInjectionPhase.cpp:
(JSC::DFG::TierUpCheckInjectionPhase::run):
(JSC::DFG::TierUpCheckInjectionPhase::removeFTLProfiling):
* dfg/DFGValidate.cpp:
(JSC::DFG::Validate::validate):
* dfg/DFGWatchpointCollectionPhase.cpp:
(JSC::DFG::WatchpointCollectionPhase::handle):
* ftl/FTLCapabilities.cpp:
(JSC::FTL::canCompile):
* ftl/FTLLowerDFGToLLVM.cpp:
(JSC::FTL::ftlUnreachable):
(JSC::FTL::LowerDFGToLLVM::lower):
(JSC::FTL::LowerDFGToLLVM::compileNode):
(JSC::FTL::LowerDFGToLLVM::compileCheckCell):
(JSC::FTL::LowerDFGToLLVM::compileCheckBadCell):
(JSC::FTL::LowerDFGToLLVM::compileGetExecutable):
(JSC::FTL::LowerDFGToLLVM::compileNativeCallOrConstruct):
(JSC::FTL::LowerDFGToLLVM::compileSwitch):
(JSC::FTL::LowerDFGToLLVM::buildSwitch):
(JSC::FTL::LowerDFGToLLVM::compileCheckFunction): Deleted.
(JSC::FTL::LowerDFGToLLVM::compileCheckExecutable): Deleted.
* heap/Heap.cpp:
(JSC::Heap::collect):
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::storeValue):
(JSC::AssemblyHelpers::loadValue):
* jit/CCallHelpers.h:
(JSC::CCallHelpers::setupArguments):
* jit/GPRInfo.h:
(JSC::JSValueRegs::uses):
* jit/JITCall.cpp:
(JSC::JIT::compileOpCall):
* jit/JITCall32_64.cpp:
(JSC::JIT::compileOpCall):
* runtime/Options.h:
* runtime/VM.cpp:
(JSC::VM::ensureCallEdgeLog):
* runtime/VM.h:
* tests/stress/fold-profiled-call-to-call.js: Added. This test pinpoints the problem we saw in fast/storage/serialized-script-value.html.
* tests/stress/new-array-then-exit.js: Added.
* tests/stress/poly-call-exit-this.js: Added.
* tests/stress/poly-call-exit.js: Added.
Source/WTF:
Add some power that I need for call edge profiling.
* wtf/OwnPtr.h:
(WTF::OwnPtr<T>::createTransactionally):
* wtf/Spectrum.h:
(WTF::Spectrum::add):
(WTF::Spectrum::addAll):
(WTF::Spectrum::get):
(WTF::Spectrum::size):
(WTF::Spectrum::KeyAndCount::KeyAndCount):
(WTF::Spectrum::clear):
(WTF::Spectrum::removeIf):
LayoutTests:
* js/regress/script-tests/simple-poly-call-nested.js: Added.
* js/regress/script-tests/simple-poly-call.js: Added.
* js/regress/simple-poly-call-expected.txt: Added.
* js/regress/simple-poly-call-nested-expected.txt: Added.
* js/regress/simple-poly-call-nested.html: Added.
* js/regress/simple-poly-call.html: Added.
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@173069 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h b/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
index 2cd8f68..22f8d0e 100644
--- a/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
+++ b/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
@@ -1459,14 +1459,6 @@
forNode(node).setType(SpecInt32);
break;
- case CheckExecutable: {
- // FIXME: We could track executables in AbstractValue, which would allow us to get rid of these checks
- // more thoroughly. https://bugs.webkit.org/show_bug.cgi?id=106200
- // FIXME: We could eliminate these entirely if we know the exact value that flows into this.
- // https://bugs.webkit.org/show_bug.cgi?id=106201
- break;
- }
-
case CheckStructure: {
// FIXME: We should be able to propagate the structure sets of constants (i.e. prototypes).
AbstractValue& value = forNode(node->child1());
@@ -1726,16 +1718,29 @@
m_state.setIsValid(false);
break;
}
-
- case CheckFunction: {
+
+ case GetExecutable: {
JSValue value = forNode(node->child1()).value();
- if (value == node->function()->value()) {
+ if (value) {
+ JSFunction* function = jsDynamicCast<JSFunction*>(value);
+ if (function) {
+ setConstant(node, *m_graph.freeze(function->executable()));
+ break;
+ }
+ }
+ forNode(node).setType(SpecCellOther);
+ break;
+ }
+
+ case CheckCell: {
+ JSValue value = forNode(node->child1()).value();
+ if (value == node->cellOperand()->value()) {
m_state.setFoundConstants(true);
ASSERT(value);
break;
}
- filterByValue(node->child1(), *node->function());
+ filterByValue(node->child1(), *node->cellOperand());
break;
}
@@ -1859,8 +1864,6 @@
case VariableWatchpoint:
case VarInjectionWatchpoint:
- break;
-
case PutGlobalVar:
case NotifyWrite:
break;
@@ -1900,7 +1903,16 @@
forNode(node).makeHeapTop();
break;
+ case ProfiledCall:
+ case ProfiledConstruct:
+ if (forNode(m_graph.varArgChild(node, 0)).m_value)
+ m_state.setFoundConstants(true);
+ clobberWorld(node->origin.semantic, clobberLimit);
+ forNode(node).makeHeapTop();
+ break;
+
case ForceOSRExit:
+ case CheckBadCell:
m_state.setIsValid(false);
break;
@@ -1955,7 +1967,8 @@
case LastNodeType:
case ArithIMul:
case FiatInt52:
- RELEASE_ASSERT_NOT_REACHED();
+ case BottomValue:
+ DFG_CRASH(m_graph, node, "Unexpected node type");
break;
}
diff --git a/Source/JavaScriptCore/dfg/DFGBackwardsPropagationPhase.cpp b/Source/JavaScriptCore/dfg/DFGBackwardsPropagationPhase.cpp
index e2d3a83..85859d2 100644
--- a/Source/JavaScriptCore/dfg/DFGBackwardsPropagationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGBackwardsPropagationPhase.cpp
@@ -389,6 +389,11 @@
// then -0 and 0 are treated the same.
node->child1()->mergeFlags(NodeBytecodeUsesAsNumber | NodeBytecodeUsesAsOther);
break;
+ case SwitchCell:
+ // There is currently no point to being clever here since this is used for switching
+ // on objects.
+ mergeDefaultFlags(node);
+ break;
}
break;
}
diff --git a/Source/JavaScriptCore/dfg/DFGBasicBlock.cpp b/Source/JavaScriptCore/dfg/DFGBasicBlock.cpp
index 733f8c2..14a2153 100644
--- a/Source/JavaScriptCore/dfg/DFGBasicBlock.cpp
+++ b/Source/JavaScriptCore/dfg/DFGBasicBlock.cpp
@@ -58,7 +58,9 @@
{
}
-BasicBlock::~BasicBlock() { }
+BasicBlock::~BasicBlock()
+{
+}
void BasicBlock::ensureLocals(unsigned newNumLocals)
{
diff --git a/Source/JavaScriptCore/dfg/DFGBasicBlock.h b/Source/JavaScriptCore/dfg/DFGBasicBlock.h
index 8099407..dfbd880 100644
--- a/Source/JavaScriptCore/dfg/DFGBasicBlock.h
+++ b/Source/JavaScriptCore/dfg/DFGBasicBlock.h
@@ -62,6 +62,7 @@
Node*& operator[](size_t i) { return at(i); }
Node* operator[](size_t i) const { return at(i); }
Node* last() const { return at(size() - 1); }
+ Node* takeLast() { return m_nodes.takeLast(); }
void resize(size_t size) { m_nodes.resize(size); }
void grow(size_t size) { m_nodes.grow(size); }
@@ -106,6 +107,13 @@
void dump(PrintStream& out) const;
+ void didLink()
+ {
+#if !ASSERT_DISABLED
+ isLinked = true;
+#endif
+ }
+
// This value is used internally for block linking and OSR entry. It is mostly meaningless
// for other purposes due to inlining.
unsigned bytecodeBegin;
diff --git a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
index 3de9e0b..a8ba5b9 100644
--- a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
+++ b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
@@ -50,6 +50,8 @@
namespace JSC { namespace DFG {
+static const bool verbose = false;
+
class ConstantBufferKey {
public:
ConstantBufferKey()
@@ -178,14 +180,20 @@
Node* callTarget, int argCount, int registerOffset, CallLinkStatus);
void handleCall(int result, NodeType op, CodeSpecializationKind, unsigned instructionSize, int callee, int argCount, int registerOffset);
void handleCall(Instruction* pc, NodeType op, CodeSpecializationKind);
- void emitFunctionChecks(const CallLinkStatus&, Node* callTarget, int registerOffset, CodeSpecializationKind);
+ void emitFunctionChecks(CallVariant, Node* callTarget, int registerOffset, CodeSpecializationKind);
+ void undoFunctionChecks(CallVariant);
void emitArgumentPhantoms(int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind);
+ unsigned inliningCost(CallVariant, int argumentCountIncludingThis, CodeSpecializationKind); // Return UINT_MAX if it's not an inlining candidate. By convention, intrinsics have a cost of 1.
// Handle inlining. Return true if it succeeded, false if we need to plant a call.
- bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind);
+ bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind, SpeculatedType prediction);
+ enum CallerLinkability { CallerDoesNormalLinking, CallerLinksManually };
+ bool attemptToInlineCall(Node* callTargetNode, int resultOperand, CallVariant, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind, CallerLinkability, SpeculatedType prediction, unsigned& inliningBalance);
+ void inlineCall(Node* callTargetNode, int resultOperand, CallVariant, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind, CallerLinkability);
+ void cancelLinkingForBlock(InlineStackEntry*, BasicBlock*); // Only works when the given block is the last one to have been added for that inline stack entry.
// Handle intrinsic functions. Return true if it succeeded, false if we need to plant a call.
bool handleIntrinsic(int resultOperand, Intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction);
bool handleTypedArrayConstructor(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, TypedArrayType);
- bool handleConstantInternalFunction(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, CodeSpecializationKind);
+ bool handleConstantInternalFunction(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind);
Node* handlePutByOffset(Node* base, unsigned identifier, PropertyOffset, Node* value);
Node* handleGetByOffset(SpeculatedType, Node* base, const StructureSet&, unsigned identifierNumber, PropertyOffset, NodeType op = GetByOffset);
void handleGetById(
@@ -200,8 +208,9 @@
Node* getScope(unsigned skipCount);
- // Prepare to parse a block.
void prepareToParseBlock();
+ void clearCaches();
+
// Parse a single basic block of bytecode instructions.
bool parseBlock(unsigned limit);
// Link block successors.
@@ -296,6 +305,13 @@
return delayed.execute(this, setMode);
}
+
+ void processSetLocalQueue()
+ {
+ for (unsigned i = 0; i < m_setLocalQueue.size(); ++i)
+ m_setLocalQueue[i].execute(this);
+ m_setLocalQueue.resize(0);
+ }
Node* set(VirtualRegister operand, Node* value, SetMode setMode = NormalSet)
{
@@ -637,6 +653,13 @@
return result;
}
+
+ void removeLastNodeFromGraph(NodeType expectedNodeType)
+ {
+ Node* node = m_currentBlock->takeLast();
+ RELEASE_ASSERT(node->op() == expectedNodeType);
+ m_graph.m_allocator.free(node);
+ }
void addVarArgChild(Node* child)
{
@@ -645,7 +668,7 @@
}
Node* addCallWithoutSettingResult(
- NodeType op, Node* callee, int argCount, int registerOffset,
+ NodeType op, OpInfo opInfo, Node* callee, int argCount, int registerOffset,
SpeculatedType prediction)
{
addVarArgChild(callee);
@@ -653,19 +676,19 @@
if (parameterSlots > m_parameterSlots)
m_parameterSlots = parameterSlots;
- int dummyThisArgument = op == Call || op == NativeCall ? 0 : 1;
+ int dummyThisArgument = op == Call || op == NativeCall || op == ProfiledCall ? 0 : 1;
for (int i = 0 + dummyThisArgument; i < argCount; ++i)
addVarArgChild(get(virtualRegisterForArgument(i, registerOffset)));
- return addToGraph(Node::VarArg, op, OpInfo(0), OpInfo(prediction));
+ return addToGraph(Node::VarArg, op, opInfo, OpInfo(prediction));
}
Node* addCall(
- int result, NodeType op, Node* callee, int argCount, int registerOffset,
+ int result, NodeType op, OpInfo opInfo, Node* callee, int argCount, int registerOffset,
SpeculatedType prediction)
{
Node* call = addCallWithoutSettingResult(
- op, callee, argCount, registerOffset, prediction);
+ op, opInfo, callee, argCount, registerOffset, prediction);
VirtualRegister resultReg(result);
if (resultReg.isValid())
set(VirtualRegister(result), call);
@@ -871,8 +894,7 @@
Vector<UnlinkedBlock> m_unlinkedBlocks;
// Potential block linking targets. Must be sorted by bytecodeBegin, and
- // cannot have two blocks that have the same bytecodeBegin. For this very
- // reason, this is not equivalent to
+ // cannot have two blocks that have the same bytecodeBegin.
Vector<BasicBlock*> m_blockLinkingTargets;
// If the callsite's basic block was split into two, then this will be
@@ -1019,77 +1041,63 @@
CallLinkStatus callLinkStatus, SpeculatedType prediction)
{
ASSERT(registerOffset <= 0);
- CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
if (callTarget->hasConstant())
callLinkStatus = CallLinkStatus(callTarget->asJSValue()).setIsProved(true);
+ if ((!callLinkStatus.canOptimize() || callLinkStatus.size() != 1)
+ && !isFTL(m_graph.m_plan.mode) && Options::useFTLJIT()
+ && InlineCallFrame::isNormalCall(kind)
+ && CallEdgeLog::isEnabled()
+ && Options::dfgDoesCallEdgeProfiling()) {
+ ASSERT(op == Call || op == Construct);
+ if (op == Call)
+ op = ProfiledCall;
+ else
+ op = ProfiledConstruct;
+ }
+
if (!callLinkStatus.canOptimize()) {
// Oddly, this conflates calls that haven't executed with calls that behaved sufficiently polymorphically
// that we cannot optimize them.
- addCall(result, op, callTarget, argumentCountIncludingThis, registerOffset, prediction);
+ addCall(result, op, OpInfo(), callTarget, argumentCountIncludingThis, registerOffset, prediction);
return;
}
unsigned nextOffset = m_currentIndex + instructionSize;
-
- if (InternalFunction* function = callLinkStatus.internalFunction()) {
- if (handleConstantInternalFunction(result, function, registerOffset, argumentCountIncludingThis, prediction, specializationKind)) {
- // This phantoming has to be *after* the code for the intrinsic, to signify that
- // the inputs must be kept alive whatever exits the intrinsic may do.
- addToGraph(Phantom, callTarget);
- emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
- return;
- }
-
- // Can only handle this using the generic call handler.
- addCall(result, op, callTarget, argumentCountIncludingThis, registerOffset, prediction);
- return;
- }
-
- Intrinsic intrinsic = callLinkStatus.intrinsicFor(specializationKind);
-
- JSFunction* knownFunction = nullptr;
- if (intrinsic != NoIntrinsic) {
- emitFunctionChecks(callLinkStatus, callTarget, registerOffset, specializationKind);
-
- if (handleIntrinsic(result, intrinsic, registerOffset, argumentCountIncludingThis, prediction)) {
- // This phantoming has to be *after* the code for the intrinsic, to signify that
- // the inputs must be kept alive whatever exits the intrinsic may do.
- addToGraph(Phantom, callTarget);
- emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
- if (m_graph.compilation())
- m_graph.compilation()->noticeInlinedCall();
- return;
- }
- } else if (handleInlining(callTarget, result, callLinkStatus, registerOffset, argumentCountIncludingThis, nextOffset, kind)) {
+
+ OpInfo callOpInfo;
+
+ if (handleInlining(callTarget, result, callLinkStatus, registerOffset, argumentCountIncludingThis, nextOffset, op, kind, prediction)) {
if (m_graph.compilation())
m_graph.compilation()->noticeInlinedCall();
return;
+ }
+
#if ENABLE(FTL_NATIVE_CALL_INLINING)
- } else if (isFTL(m_graph.m_plan.mode) && Options::optimizeNativeCalls()) {
- JSFunction* function = callLinkStatus.function();
+ if (isFTL(m_graph.m_plan.mode) && Options::optimizeNativeCalls() && callLinkStatus.size() == 1 && !callLinkStatus.couldTakeSlowPath()) {
+ CallVariant callee = callLinkStatus[0].callee();
+ JSFunction* function = callee.function();
+ CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
if (function && function->isHostFunction()) {
- emitFunctionChecks(callLinkStatus, callTarget, registerOffset, specializationKind);
- knownFunction = function;
+ emitFunctionChecks(callee, callTarget, registerOffset, specializationKind);
+ callOpInfo = OpInfo(m_graph.freeze(function));
- if (op == Call)
+ if (op == Call || op == ProfiledCall)
op = NativeCall;
else {
- ASSERT(op == Construct);
+ ASSERT(op == Construct || op == ProfiledConstruct);
op = NativeConstruct;
}
}
-#endif
}
- Node* call = addCall(result, op, callTarget, argumentCountIncludingThis, registerOffset, prediction);
-
- if (knownFunction)
- call->giveKnownFunction(knownFunction);
+#endif
+
+ addCall(result, op, callOpInfo, callTarget, argumentCountIncludingThis, registerOffset, prediction);
}
-void ByteCodeParser::emitFunctionChecks(const CallLinkStatus& callLinkStatus, Node* callTarget, int registerOffset, CodeSpecializationKind kind)
+void ByteCodeParser::emitFunctionChecks(CallVariant callee, Node* callTarget, int registerOffset, CodeSpecializationKind kind)
{
Node* thisArgument;
if (kind == CodeForCall)
@@ -1097,20 +1105,25 @@
else
thisArgument = 0;
- if (callLinkStatus.isProved()) {
- addToGraph(Phantom, callTarget, thisArgument);
- return;
+ JSCell* calleeCell;
+ Node* callTargetForCheck;
+ if (callee.isClosureCall()) {
+ calleeCell = callee.executable();
+ callTargetForCheck = addToGraph(GetExecutable, callTarget);
+ } else {
+ calleeCell = callee.nonExecutableCallee();
+ callTargetForCheck = callTarget;
}
- ASSERT(callLinkStatus.canOptimize());
-
- if (JSFunction* function = callLinkStatus.function())
- addToGraph(CheckFunction, OpInfo(m_graph.freeze(function)), callTarget, thisArgument);
- else {
- ASSERT(callLinkStatus.executable());
-
- addToGraph(CheckExecutable, OpInfo(callLinkStatus.executable()), callTarget, thisArgument);
- }
+ ASSERT(calleeCell);
+ addToGraph(CheckCell, OpInfo(m_graph.freeze(calleeCell)), callTargetForCheck, thisArgument);
+}
+
+void ByteCodeParser::undoFunctionChecks(CallVariant callee)
+{
+ removeLastNodeFromGraph(CheckCell);
+ if (callee.isClosureCall())
+ removeLastNodeFromGraph(GetExecutable);
}
void ByteCodeParser::emitArgumentPhantoms(int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind kind)
@@ -1119,28 +1132,17 @@
addToGraph(Phantom, get(virtualRegisterForArgument(i, registerOffset)));
}
-bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind)
+unsigned ByteCodeParser::inliningCost(CallVariant callee, int argumentCountIncludingThis, CodeSpecializationKind kind)
{
- static const bool verbose = false;
-
- CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
-
if (verbose)
- dataLog("Considering inlining ", callLinkStatus, " into ", currentCodeOrigin(), "\n");
+ dataLog("Considering inlining ", callee, " into ", currentCodeOrigin(), "\n");
- // First, the really simple checks: do we have an actual JS function?
- if (!callLinkStatus.executable()) {
+ FunctionExecutable* executable = callee.functionExecutable();
+ if (!executable) {
if (verbose)
- dataLog(" Failing because there is no executable.\n");
- return false;
+ dataLog(" Failing because there is no function executable.");
+ return UINT_MAX;
}
- if (callLinkStatus.executable()->isHostFunction()) {
- if (verbose)
- dataLog(" Failing because it's a host function.\n");
- return false;
- }
-
- FunctionExecutable* executable = jsCast<FunctionExecutable*>(callLinkStatus.executable());
// Does the number of arguments we're passing match the arity of the target? We currently
// inline only if the number of arguments passed is greater than or equal to the number
@@ -1148,7 +1150,7 @@
if (static_cast<int>(executable->parameterCount()) + 1 > argumentCountIncludingThis) {
if (verbose)
dataLog(" Failing because of arity mismatch.\n");
- return false;
+ return UINT_MAX;
}
// Do we have a code block, and does the code block's size match the heuristics/requirements for
@@ -1157,18 +1159,18 @@
// if we had a static proof of what was being called; this might happen for example if you call a
// global function, where watchpointing gives us static information. Overall, it's a rare case
// because we expect that any hot callees would have already been compiled.
- CodeBlock* codeBlock = executable->baselineCodeBlockFor(specializationKind);
+ CodeBlock* codeBlock = executable->baselineCodeBlockFor(kind);
if (!codeBlock) {
if (verbose)
dataLog(" Failing because no code block available.\n");
- return false;
+ return UINT_MAX;
}
CapabilityLevel capabilityLevel = inlineFunctionForCapabilityLevel(
- codeBlock, specializationKind, callLinkStatus.isClosureCall());
+ codeBlock, kind, callee.isClosureCall());
if (!canInline(capabilityLevel)) {
if (verbose)
dataLog(" Failing because the function is not inlineable.\n");
- return false;
+ return UINT_MAX;
}
// Check if the caller is already too large. We do this check here because that's just
@@ -1178,7 +1180,7 @@
codeBlock->m_shouldAlwaysBeInlined = false;
if (verbose)
dataLog(" Failing because the caller is too large.\n");
- return false;
+ return UINT_MAX;
}
// FIXME: this should be better at predicting how much bloat we will introduce by inlining
@@ -1197,7 +1199,7 @@
if (depth >= Options::maximumInliningDepth()) {
if (verbose)
dataLog(" Failing because depth exceeded.\n");
- return false;
+ return UINT_MAX;
}
if (entry->executable() == executable) {
@@ -1205,19 +1207,26 @@
if (recursion >= Options::maximumInliningRecursion()) {
if (verbose)
dataLog(" Failing because recursion detected.\n");
- return false;
+ return UINT_MAX;
}
}
}
if (verbose)
- dataLog(" Committing to inlining.\n");
+ dataLog(" Inlining should be possible.\n");
- // Now we know without a doubt that we are committed to inlining. So begin the process
- // by checking the callee (if necessary) and making sure that arguments and the callee
- // are flushed.
- emitFunctionChecks(callLinkStatus, callTargetNode, registerOffset, specializationKind);
+ // It might be possible to inline.
+ return codeBlock->instructionCount();
+}
+
+void ByteCodeParser::inlineCall(Node* callTargetNode, int resultOperand, CallVariant callee, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind, CallerLinkability callerLinkability)
+{
+ CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
+ ASSERT(inliningCost(callee, argumentCountIncludingThis, specializationKind) != UINT_MAX);
+
+ CodeBlock* codeBlock = callee.functionExecutable()->baselineCodeBlockFor(specializationKind);
+
// FIXME: Don't flush constants!
int inlineCallFrameStart = m_inlineStackTop->remapOperand(VirtualRegister(registerOffset)).offset() + JSStack::CallFrameHeaderSize;
@@ -1233,7 +1242,7 @@
resultReg = m_inlineStackTop->remapOperand(resultReg);
InlineStackEntry inlineStackEntry(
- this, codeBlock, codeBlock, m_graph.lastBlock(), callLinkStatus.function(), resultReg,
+ this, codeBlock, codeBlock, m_graph.lastBlock(), callee.function(), resultReg,
(VirtualRegister)inlineCallFrameStart, argumentCountIncludingThis, kind);
// This is where the actual inlining really happens.
@@ -1247,8 +1256,8 @@
RELEASE_ASSERT(
m_inlineStackTop->m_inlineCallFrame->isClosureCall
- == callLinkStatus.isClosureCall());
- if (callLinkStatus.isClosureCall()) {
+ == callee.isClosureCall());
+ if (callee.isClosureCall()) {
VariableAccessData* calleeVariable =
set(VirtualRegister(JSStack::Callee), callTargetNode, ImmediateNakedSet)->variableAccessData();
VariableAccessData* scopeVariable =
@@ -1263,7 +1272,7 @@
m_graph.m_inlineVariableData.append(inlineVariableData);
parseCodeBlock();
- prepareToParseBlock(); // Reset our state now that we're back to the outer code.
+ clearCaches(); // Reset our state now that we're back to the outer code.
m_currentIndex = oldIndex;
@@ -1276,20 +1285,8 @@
else
ASSERT(inlineStackEntry.m_callsiteBlockHead->isLinked);
- // It's possible that the callsite block head is not owned by the caller.
- if (!inlineStackEntry.m_caller->m_unlinkedBlocks.isEmpty()) {
- // It's definitely owned by the caller, because the caller created new blocks.
- // Assert that this all adds up.
- ASSERT(inlineStackEntry.m_caller->m_unlinkedBlocks.last().m_block == inlineStackEntry.m_callsiteBlockHead);
- ASSERT(inlineStackEntry.m_caller->m_unlinkedBlocks.last().m_needsNormalLinking);
- inlineStackEntry.m_caller->m_unlinkedBlocks.last().m_needsNormalLinking = false;
- } else {
- // It's definitely not owned by the caller. Tell the caller that he does not
- // need to link his callsite block head, because we did it for him.
- ASSERT(inlineStackEntry.m_caller->m_callsiteBlockHeadNeedsLinking);
- ASSERT(inlineStackEntry.m_caller->m_callsiteBlockHead == inlineStackEntry.m_callsiteBlockHead);
- inlineStackEntry.m_caller->m_callsiteBlockHeadNeedsLinking = false;
- }
+ if (callerLinkability == CallerDoesNormalLinking)
+ cancelLinkingForBlock(inlineStackEntry.m_caller, inlineStackEntry.m_callsiteBlockHead);
linkBlocks(inlineStackEntry.m_unlinkedBlocks, inlineStackEntry.m_blockLinkingTargets);
} else
@@ -1308,16 +1305,19 @@
// for release builds because this block will never serve as a potential target
// in the linker's binary search.
lastBlock->bytecodeBegin = m_currentIndex;
- m_inlineStackTop->m_caller->m_unlinkedBlocks.append(UnlinkedBlock(m_graph.lastBlock()));
+ if (callerLinkability == CallerDoesNormalLinking) {
+ if (verbose)
+ dataLog("Adding unlinked block ", RawPointer(m_graph.lastBlock()), " (one return)\n");
+ m_inlineStackTop->m_caller->m_unlinkedBlocks.append(UnlinkedBlock(m_graph.lastBlock()));
+ }
}
m_currentBlock = m_graph.lastBlock();
- return true;
+ return;
}
// If we get to this point then all blocks must end in some sort of terminals.
ASSERT(lastBlock->last()->isTerminal());
-
// Need to create a new basic block for the continuation at the caller.
RefPtr<BasicBlock> block = adoptRef(new BasicBlock(nextOffset, m_numArguments, m_numLocals, PNaN));
@@ -1333,20 +1333,293 @@
ASSERT(!node->targetBlock());
node->targetBlock() = block.get();
inlineStackEntry.m_unlinkedBlocks[i].m_needsEarlyReturnLinking = false;
-#if !ASSERT_DISABLED
- blockToLink->isLinked = true;
-#endif
+ if (verbose)
+ dataLog("Marking ", RawPointer(blockToLink), " as linked (jumps to return)\n");
+ blockToLink->didLink();
}
m_currentBlock = block.get();
ASSERT(m_inlineStackTop->m_caller->m_blockLinkingTargets.isEmpty() || m_inlineStackTop->m_caller->m_blockLinkingTargets.last()->bytecodeBegin < nextOffset);
- m_inlineStackTop->m_caller->m_unlinkedBlocks.append(UnlinkedBlock(block.get()));
- m_inlineStackTop->m_caller->m_blockLinkingTargets.append(block.get());
+ if (verbose)
+ dataLog("Adding unlinked block ", RawPointer(block.get()), " (many returns)\n");
+ if (callerLinkability == CallerDoesNormalLinking) {
+ m_inlineStackTop->m_caller->m_unlinkedBlocks.append(UnlinkedBlock(block.get()));
+ m_inlineStackTop->m_caller->m_blockLinkingTargets.append(block.get());
+ }
m_graph.appendBlock(block);
prepareToParseBlock();
+}
+
+void ByteCodeParser::cancelLinkingForBlock(InlineStackEntry* inlineStackEntry, BasicBlock* block)
+{
+ // It's possible that the callsite block head is not owned by the caller.
+ if (!inlineStackEntry->m_unlinkedBlocks.isEmpty()) {
+ // It's definitely owned by the caller, because the caller created new blocks.
+ // Assert that this all adds up.
+ ASSERT_UNUSED(block, inlineStackEntry->m_unlinkedBlocks.last().m_block == block);
+ ASSERT(inlineStackEntry->m_unlinkedBlocks.last().m_needsNormalLinking);
+ inlineStackEntry->m_unlinkedBlocks.last().m_needsNormalLinking = false;
+ } else {
+ // It's definitely not owned by the caller. Tell the caller that he does not
+ // need to link his callsite block head, because we did it for him.
+ ASSERT(inlineStackEntry->m_callsiteBlockHeadNeedsLinking);
+ ASSERT_UNUSED(block, inlineStackEntry->m_callsiteBlockHead == block);
+ inlineStackEntry->m_callsiteBlockHeadNeedsLinking = false;
+ }
+}
+
+bool ByteCodeParser::attemptToInlineCall(Node* callTargetNode, int resultOperand, CallVariant callee, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind, CallerLinkability callerLinkability, SpeculatedType prediction, unsigned& inliningBalance)
+{
+ CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
- // At this point we return and continue to generate code for the caller, but
- // in the new basic block.
+ if (!inliningBalance)
+ return false;
+
+ if (InternalFunction* function = callee.internalFunction()) {
+ if (handleConstantInternalFunction(resultOperand, function, registerOffset, argumentCountIncludingThis, specializationKind)) {
+ addToGraph(Phantom, callTargetNode);
+ emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
+ inliningBalance--;
+ return true;
+ }
+ return false;
+ }
+
+ Intrinsic intrinsic = callee.intrinsicFor(specializationKind);
+ if (intrinsic != NoIntrinsic) {
+ if (handleIntrinsic(resultOperand, intrinsic, registerOffset, argumentCountIncludingThis, prediction)) {
+ addToGraph(Phantom, callTargetNode);
+ emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
+ inliningBalance--;
+ return true;
+ }
+ return false;
+ }
+
+ unsigned myInliningCost = inliningCost(callee, argumentCountIncludingThis, specializationKind);
+ if (myInliningCost > inliningBalance)
+ return false;
+
+ inlineCall(callTargetNode, resultOperand, callee, registerOffset, argumentCountIncludingThis, nextOffset, kind, callerLinkability);
+ inliningBalance -= myInliningCost;
+ return true;
+}
+
+bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind kind, SpeculatedType prediction)
+{
+ if (verbose) {
+ dataLog("Handling inlining...\n");
+ dataLog("Stack: ", currentCodeOrigin(), "\n");
+ }
+ CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
+
+ if (!callLinkStatus.size()) {
+ if (verbose)
+ dataLog("Bailing inlining.\n");
+ return false;
+ }
+
+ unsigned inliningBalance = Options::maximumFunctionForCallInlineCandidateInstructionCount();
+ if (specializationKind == CodeForConstruct)
+ inliningBalance = std::min(inliningBalance, Options::maximumFunctionForConstructInlineCandidateInstructionCount());
+ if (callLinkStatus.isClosureCall())
+ inliningBalance = std::min(inliningBalance, Options::maximumFunctionForClosureCallInlineCandidateInstructionCount());
+
+ // First check if we can avoid creating control flow. Our inliner does some CFG
+ // simplification on the fly and this helps reduce compile times, but we can only leverage
+ // this in cases where we don't need control flow diamonds to check the callee.
+ if (!callLinkStatus.couldTakeSlowPath() && callLinkStatus.size() == 1) {
+ emitFunctionChecks(
+ callLinkStatus[0].callee(), callTargetNode, registerOffset, specializationKind);
+ bool result = attemptToInlineCall(
+ callTargetNode, resultOperand, callLinkStatus[0].callee(), registerOffset,
+ argumentCountIncludingThis, nextOffset, kind, CallerDoesNormalLinking, prediction,
+ inliningBalance);
+ if (!result && !callLinkStatus.isProved())
+ undoFunctionChecks(callLinkStatus[0].callee());
+ if (verbose) {
+ dataLog("Done inlining (simple).\n");
+ dataLog("Stack: ", currentCodeOrigin(), "\n");
+ }
+ return result;
+ }
+
+ // We need to create some kind of switch over callee. For now we only do this if we believe that
+ // we're in the top tier. We have two reasons for this: first, it provides us an opportunity to
+ // do more detailed polyvariant/polymorphic profiling; and second, it reduces compile times in
+ // the DFG. And by polyvariant profiling we mean polyvariant profiling of *this* call. Note that
+ // we could improve on this by doing polymorphic inlining while still keeping the profiling
+ // in place. Currently we opt against this, but it could be interesting. That would require having a
+ // separate node for call edge profiling.
+ // FIXME: Introduce the notion of a separate call edge profiling node.
+ // https://bugs.webkit.org/show_bug.cgi?id=136033
+ if (!isFTL(m_graph.m_plan.mode) || !Options::enablePolymorphicCallInlining()) {
+ if (verbose) {
+ dataLog("Bailing inlining (hard).\n");
+ dataLog("Stack: ", currentCodeOrigin(), "\n");
+ }
+ return false;
+ }
+
+ unsigned oldOffset = m_currentIndex;
+
+ bool allAreClosureCalls = true;
+ bool allAreDirectCalls = true;
+ for (unsigned i = callLinkStatus.size(); i--;) {
+ if (callLinkStatus[i].callee().isClosureCall())
+ allAreDirectCalls = false;
+ else
+ allAreClosureCalls = false;
+ }
+
+ Node* thingToSwitchOn;
+ if (allAreDirectCalls)
+ thingToSwitchOn = callTargetNode;
+ else if (allAreClosureCalls)
+ thingToSwitchOn = addToGraph(GetExecutable, callTargetNode);
+ else {
+ // FIXME: We should be able to handle this case, but it's tricky and we don't know of cases
+ // where it would be beneficial. Also, CallLinkStatus would make all callees appear like
+ // closure calls if any calls were closure calls - except for calls to internal functions.
+ // So this will only arise if some callees are internal functions and others are closures.
+ // https://bugs.webkit.org/show_bug.cgi?id=136020
+ if (verbose) {
+ dataLog("Bailing inlining (mix).\n");
+ dataLog("Stack: ", currentCodeOrigin(), "\n");
+ }
+ return false;
+ }
+
+ if (verbose) {
+ dataLog("Doing hard inlining...\n");
+ dataLog("Stack: ", currentCodeOrigin(), "\n");
+ }
+
+ // This makes me wish that we were in SSA all the time. We need to pick a variable into which to
+ // store the callee so that it will be accessible to all of the blocks we're about to create. We
+ // get away with doing an immediate-set here because we wouldn't have performed any side effects
+ // yet.
+ if (verbose)
+ dataLog("Register offset: ", registerOffset);
+ VirtualRegister calleeReg(registerOffset + JSStack::Callee);
+ calleeReg = m_inlineStackTop->remapOperand(calleeReg);
+ if (verbose)
+ dataLog("Callee is going to be ", calleeReg, "\n");
+ setDirect(calleeReg, callTargetNode, ImmediateSetWithFlush);
+
+ SwitchData& data = *m_graph.m_switchData.add();
+ data.kind = SwitchCell;
+ addToGraph(Switch, OpInfo(&data), thingToSwitchOn);
+
+ BasicBlock* originBlock = m_currentBlock;
+ if (verbose)
+ dataLog("Marking ", RawPointer(originBlock), " as linked (origin of poly inline)\n");
+ originBlock->didLink();
+ cancelLinkingForBlock(m_inlineStackTop, originBlock);
+
+ // Each inlined callee will have a landing block that it returns at. They should all have jumps
+ // to the continuation block, which we create last.
+ Vector<BasicBlock*> landingBlocks;
+
+ // We force this to true if we give up on inlining any of the edges.
+ bool couldTakeSlowPath = callLinkStatus.couldTakeSlowPath();
+
+ if (verbose)
+ dataLog("About to loop over functions at ", currentCodeOrigin(), ".\n");
+
+ for (unsigned i = 0; i < callLinkStatus.size(); ++i) {
+ m_currentIndex = oldOffset;
+ RefPtr<BasicBlock> block = adoptRef(new BasicBlock(UINT_MAX, m_numArguments, m_numLocals, PNaN));
+ m_currentBlock = block.get();
+ m_graph.appendBlock(block);
+ prepareToParseBlock();
+
+ Node* myCallTargetNode = getDirect(calleeReg);
+
+ bool inliningResult = attemptToInlineCall(
+ myCallTargetNode, resultOperand, callLinkStatus[i].callee(), registerOffset,
+ argumentCountIncludingThis, nextOffset, kind, CallerLinksManually, prediction,
+ inliningBalance);
+
+ if (!inliningResult) {
+ // That failed so we let the block die. Nothing interesting should have been added to
+ // the block. We also give up on inlining any of the (less frequent) callees.
+ ASSERT(m_currentBlock == block.get());
+ ASSERT(m_graph.m_blocks.last() == block);
+ m_graph.killBlockAndItsContents(block.get());
+ m_graph.m_blocks.removeLast();
+
+ // The fact that inlining failed means we need a slow path.
+ couldTakeSlowPath = true;
+ break;
+ }
+
+ JSCell* thingToCaseOn;
+ if (allAreDirectCalls)
+ thingToCaseOn = callLinkStatus[i].callee().nonExecutableCallee();
+ else {
+ ASSERT(allAreClosureCalls);
+ thingToCaseOn = callLinkStatus[i].callee().executable();
+ }
+ data.cases.append(SwitchCase(m_graph.freeze(thingToCaseOn), block.get()));
+ m_currentIndex = nextOffset;
+ processSetLocalQueue(); // This only comes into play for intrinsics, since normal inlined code will leave an empty queue.
+ addToGraph(Jump);
+ if (verbose)
+ dataLog("Marking ", RawPointer(m_currentBlock), " as linked (tail of poly inlinee)\n");
+ m_currentBlock->didLink();
+ landingBlocks.append(m_currentBlock);
+
+ if (verbose)
+ dataLog("Finished inlining ", callLinkStatus[i].callee(), " at ", currentCodeOrigin(), ".\n");
+ }
+
+ RefPtr<BasicBlock> slowPathBlock = adoptRef(
+ new BasicBlock(UINT_MAX, m_numArguments, m_numLocals, PNaN));
+ m_currentIndex = oldOffset;
+ data.fallThrough = BranchTarget(slowPathBlock.get());
+ m_graph.appendBlock(slowPathBlock);
+ if (verbose)
+ dataLog("Marking ", RawPointer(slowPathBlock.get()), " as linked (slow path block)\n");
+ slowPathBlock->didLink();
+ prepareToParseBlock();
+ m_currentBlock = slowPathBlock.get();
+ Node* myCallTargetNode = getDirect(calleeReg);
+ if (couldTakeSlowPath) {
+ addCall(
+ resultOperand, callOp, OpInfo(), myCallTargetNode, argumentCountIncludingThis,
+ registerOffset, prediction);
+ } else {
+ addToGraph(CheckBadCell);
+ addToGraph(Phantom, myCallTargetNode);
+ emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
+
+ set(VirtualRegister(resultOperand), addToGraph(BottomValue));
+ }
+
+ m_currentIndex = nextOffset;
+ processSetLocalQueue();
+ addToGraph(Jump);
+ landingBlocks.append(m_currentBlock);
+
+ RefPtr<BasicBlock> continuationBlock = adoptRef(
+ new BasicBlock(UINT_MAX, m_numArguments, m_numLocals, PNaN));
+ m_graph.appendBlock(continuationBlock);
+ if (verbose)
+ dataLog("Adding unlinked block ", RawPointer(continuationBlock.get()), " (continuation)\n");
+ m_inlineStackTop->m_unlinkedBlocks.append(UnlinkedBlock(continuationBlock.get()));
+ prepareToParseBlock();
+ m_currentBlock = continuationBlock.get();
+
+ for (unsigned i = landingBlocks.size(); i--;)
+ landingBlocks[i]->last()->targetBlock() = continuationBlock.get();
+
+ m_currentIndex = oldOffset;
+
+ if (verbose) {
+ dataLog("Done inlining (hard).\n");
+ dataLog("Stack: ", currentCodeOrigin(), "\n");
+ }
return true;
}
@@ -1645,7 +1918,7 @@
bool ByteCodeParser::handleConstantInternalFunction(
int resultOperand, InternalFunction* function, int registerOffset,
- int argumentCountIncludingThis, SpeculatedType prediction, CodeSpecializationKind kind)
+ int argumentCountIncludingThis, CodeSpecializationKind kind)
{
// If we ever find that we have a lot of internal functions that we specialize for,
// then we should probably have some sort of hashtable dispatch, or maybe even
@@ -1654,8 +1927,6 @@
// we know about is small enough, that having just a linear cascade of if statements
// is good enough.
- UNUSED_PARAM(prediction); // Remove this once we do more things.
-
if (function->classInfo() == ArrayConstructor::info()) {
if (function->globalObject() != m_inlineStackTop->m_codeBlock->globalObject())
return false;
@@ -2020,6 +2291,12 @@
void ByteCodeParser::prepareToParseBlock()
{
+ clearCaches();
+ ASSERT(m_setLocalQueue.isEmpty());
+}
+
+void ByteCodeParser::clearCaches()
+{
m_constants.resize(0);
}
@@ -2059,9 +2336,7 @@
}
while (true) {
- for (unsigned i = 0; i < m_setLocalQueue.size(); ++i)
- m_setLocalQueue[i].execute(this);
- m_setLocalQueue.resize(0);
+ processSetLocalQueue();
// Don't extend over jump destinations.
if (m_currentIndex == limit) {
@@ -2205,13 +2480,13 @@
JSCell* cachedFunction = currentInstruction[2].u.jsCell.get();
if (!cachedFunction
|| m_inlineStackTop->m_profiledBlock->couldTakeSlowCase(m_currentIndex)
- || m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadFunction)) {
+ || m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadCell)) {
set(VirtualRegister(currentInstruction[1].u.operand), get(VirtualRegister(JSStack::Callee)));
} else {
FrozenValue* frozen = m_graph.freeze(cachedFunction);
ASSERT(cachedFunction->inherits(JSFunction::info()));
Node* actualCallee = get(VirtualRegister(JSStack::Callee));
- addToGraph(CheckFunction, OpInfo(frozen), actualCallee);
+ addToGraph(CheckCell, OpInfo(frozen), actualCallee);
set(VirtualRegister(currentInstruction[1].u.operand), addToGraph(JSConstant, OpInfo(frozen)));
}
NEXT_OPCODE(op_get_callee);
@@ -2893,7 +3168,7 @@
// already gnarly enough as it is.
ASSERT(pointerIsFunction(currentInstruction[2].u.specialPointer));
addToGraph(
- CheckFunction,
+ CheckCell,
OpInfo(m_graph.freeze(static_cast<JSCell*>(actualPointerFor(
m_inlineStackTop->m_codeBlock, currentInstruction[2].u.specialPointer)))),
get(VirtualRegister(currentInstruction[1].u.operand)));
@@ -3317,15 +3592,19 @@
break;
}
-#if !ASSERT_DISABLED
- block->isLinked = true;
-#endif
+ if (verbose)
+ dataLog("Marking ", RawPointer(block), " as linked (actually did linking)\n");
+ block->didLink();
}
void ByteCodeParser::linkBlocks(Vector<UnlinkedBlock>& unlinkedBlocks, Vector<BasicBlock*>& possibleTargets)
{
for (size_t i = 0; i < unlinkedBlocks.size(); ++i) {
+ if (verbose)
+ dataLog("Attempting to link ", RawPointer(unlinkedBlocks[i].m_block), "\n");
if (unlinkedBlocks[i].m_needsNormalLinking) {
+ if (verbose)
+ dataLog(" Does need normal linking.\n");
linkBlock(unlinkedBlocks[i].m_block, possibleTargets);
unlinkedBlocks[i].m_needsNormalLinking = false;
}
@@ -3492,7 +3771,7 @@
void ByteCodeParser::parseCodeBlock()
{
- prepareToParseBlock();
+ clearCaches();
CodeBlock* codeBlock = m_inlineStackTop->m_codeBlock;
@@ -3558,7 +3837,12 @@
// 2) If the bytecodeBegin is equal to the currentIndex, then we failed to do
// a peephole coalescing of this block in the if statement above. So, we're
// generating suboptimal code and leaving more work for the CFG simplifier.
- ASSERT(m_inlineStackTop->m_unlinkedBlocks.isEmpty() || m_inlineStackTop->m_unlinkedBlocks.last().m_block->bytecodeBegin < m_currentIndex);
+ if (!m_inlineStackTop->m_unlinkedBlocks.isEmpty()) {
+ unsigned lastBegin =
+ m_inlineStackTop->m_unlinkedBlocks.last().m_block->bytecodeBegin;
+ ASSERT_UNUSED(
+ lastBegin, lastBegin == UINT_MAX || lastBegin < m_currentIndex);
+ }
m_inlineStackTop->m_unlinkedBlocks.append(UnlinkedBlock(block.get()));
m_inlineStackTop->m_blockLinkingTargets.append(block.get());
// The first block is definitely an OSR target.
diff --git a/Source/JavaScriptCore/dfg/DFGCPSRethreadingPhase.cpp b/Source/JavaScriptCore/dfg/DFGCPSRethreadingPhase.cpp
index 80d49c1..b7ef0d0 100644
--- a/Source/JavaScriptCore/dfg/DFGCPSRethreadingPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGCPSRethreadingPhase.cpp
@@ -90,8 +90,10 @@
node->children.setChild1(Edge());
break;
case Phantom:
- if (!node->child1())
+ if (!node->child1()) {
+ m_graph.m_allocator.free(node);
continue;
+ }
switch (node->child1()->op()) {
case Phi:
case SetArgument:
diff --git a/Source/JavaScriptCore/dfg/DFGClobberize.h b/Source/JavaScriptCore/dfg/DFGClobberize.h
index fb723d3..9a37e73 100644
--- a/Source/JavaScriptCore/dfg/DFGClobberize.h
+++ b/Source/JavaScriptCore/dfg/DFGClobberize.h
@@ -144,6 +144,8 @@
case FiatInt52:
case MakeRope:
case ValueToInt32:
+ case GetExecutable:
+ case BottomValue:
def(PureValue(node));
return;
@@ -239,12 +241,8 @@
def(PureValue(node, node->arithMode()));
return;
- case CheckFunction:
- def(PureValue(CheckFunction, AdjacencyList(AdjacencyList::Fixed, node->child1()), node->function()));
- return;
-
- case CheckExecutable:
- def(PureValue(node, node->executable()));
+ case CheckCell:
+ def(PureValue(CheckCell, AdjacencyList(AdjacencyList::Fixed, node->child1()), node->cellOperand()));
return;
case ConstantStoragePointer:
@@ -263,6 +261,7 @@
case Switch:
case Throw:
case ForceOSRExit:
+ case CheckBadCell:
case Return:
case Unreachable:
case CheckTierUpInLoop:
@@ -358,6 +357,8 @@
case ArrayPop:
case Call:
case Construct:
+ case ProfiledCall:
+ case ProfiledConstruct:
case NativeCall:
case NativeConstruct:
case ToPrimitive:
diff --git a/Source/JavaScriptCore/dfg/DFGCommon.h b/Source/JavaScriptCore/dfg/DFGCommon.h
index 417d7ab..68e7a41 100644
--- a/Source/JavaScriptCore/dfg/DFGCommon.h
+++ b/Source/JavaScriptCore/dfg/DFGCommon.h
@@ -63,6 +63,13 @@
DontRefNode
};
+enum SwitchKind {
+ SwitchImm,
+ SwitchChar,
+ SwitchString,
+ SwitchCell
+};
+
inline bool verboseCompilationEnabled(CompilationMode mode = DFGMode)
{
return Options::verboseCompilation() || Options::dumpGraphAtEachPhase() || (isFTL(mode) && Options::verboseFTLCompilation());
diff --git a/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp b/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp
index ddbeca0..1544644 100644
--- a/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp
@@ -142,8 +142,8 @@
break;
}
- case CheckFunction: {
- if (m_state.forNode(node->child1()).value() != node->function()->value())
+ case CheckCell: {
+ if (m_state.forNode(node->child1()).value() != node->cellOperand()->value())
break;
node->convertToPhantom();
eliminated = true;
@@ -384,6 +384,20 @@
}
break;
}
+
+ case ProfiledCall:
+ case ProfiledConstruct: {
+ if (!m_state.forNode(m_graph.varArgChild(node, 0)).m_value)
+ break;
+
+ // If we were able to prove that the callee is a constant then the normal call
+ // inline cache will record this callee. This means that there is no need to do any
+ // additional profiling.
+ m_interpreter.execute(indexInBlock);
+ node->setOp(node->op() == ProfiledCall ? Call : Construct);
+ eliminated = true;
+ break;
+ }
default:
break;
diff --git a/Source/JavaScriptCore/dfg/DFGDoesGC.cpp b/Source/JavaScriptCore/dfg/DFGDoesGC.cpp
index 3114f6a..9d6d4e0 100644
--- a/Source/JavaScriptCore/dfg/DFGDoesGC.cpp
+++ b/Source/JavaScriptCore/dfg/DFGDoesGC.cpp
@@ -91,7 +91,7 @@
case PutByIdFlush:
case PutByIdDirect:
case CheckStructure:
- case CheckExecutable:
+ case GetExecutable:
case GetButterfly:
case CheckArray:
case GetScope:
@@ -104,7 +104,7 @@
case PutGlobalVar:
case VariableWatchpoint:
case VarInjectionWatchpoint:
- case CheckFunction:
+ case CheckCell:
case AllocationProfileWatchpoint:
case RegExpExec:
case RegExpTest:
@@ -119,6 +119,8 @@
case Construct:
case NativeCall:
case NativeConstruct:
+ case ProfiledCall:
+ case ProfiledConstruct:
case Breakpoint:
case ProfileWillCall:
case ProfileDidCall:
@@ -195,6 +197,8 @@
case GetDirectPname:
case FiatInt52:
case BooleanToNumber:
+ case CheckBadCell:
+ case BottomValue:
return false;
case CreateActivation:
diff --git a/Source/JavaScriptCore/dfg/DFGDriver.cpp b/Source/JavaScriptCore/dfg/DFGDriver.cpp
index 30b4f38..2601735 100644
--- a/Source/JavaScriptCore/dfg/DFGDriver.cpp
+++ b/Source/JavaScriptCore/dfg/DFGDriver.cpp
@@ -89,6 +89,9 @@
vm.getCTIStub(virtualConstructThatPreservesRegsThunkGenerator);
}
+ if (CallEdgeLog::isEnabled())
+ vm.ensureCallEdgeLog().processLog();
+
RefPtr<Plan> plan = adoptRef(
new Plan(codeBlock, profiledDFGCodeBlock, mode, osrEntryBytecodeIndex, mustHandleValues));
diff --git a/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp b/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
index 6bb35e6..78b08d7 100644
--- a/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
@@ -736,6 +736,12 @@
else if (node->child1()->shouldSpeculateString())
fixEdge<StringUse>(node->child1());
break;
+ case SwitchCell:
+ if (node->child1()->shouldSpeculateCell())
+ fixEdge<CellUse>(node->child1());
+ // else it's fine for this to have UntypedUse; we will handle this by just making
+ // non-cells take the default case.
+ break;
}
break;
}
@@ -897,13 +903,13 @@
break;
}
- case CheckExecutable: {
+ case GetExecutable: {
fixEdge<FunctionUse>(node->child1());
break;
}
case CheckStructure:
- case CheckFunction:
+ case CheckCell:
case CheckHasInstance:
case CreateThis:
case GetButterfly: {
@@ -1120,6 +1126,8 @@
case AllocationProfileWatchpoint:
case Call:
case Construct:
+ case ProfiledCall:
+ case ProfiledConstruct:
case NativeCall:
case NativeConstruct:
case NewObject:
@@ -1149,6 +1157,7 @@
case ThrowReferenceError:
case CountExecution:
case ForceOSRExit:
+ case CheckBadCell:
case CheckWatchdogTimer:
case Unreachable:
case ExtractOSREntryLocal:
@@ -1159,6 +1168,7 @@
case TypedArrayWatchpoint:
case MovHint:
case ZombieHint:
+ case BottomValue:
break;
#else
default:
diff --git a/Source/JavaScriptCore/dfg/DFGGraph.cpp b/Source/JavaScriptCore/dfg/DFGGraph.cpp
index 1fedfbe..d9a5f4b 100644
--- a/Source/JavaScriptCore/dfg/DFGGraph.cpp
+++ b/Source/JavaScriptCore/dfg/DFGGraph.cpp
@@ -222,24 +222,23 @@
out.print(comma, inContext(*node->structure(), context));
if (node->hasTransition())
out.print(comma, pointerDumpInContext(node->transition(), context));
- if (node->hasFunction()) {
- out.print(comma, "function(", pointerDump(node->function()), ", ");
- if (node->function()->value().isCell()
- && node->function()->value().asCell()->inherits(JSFunction::info())) {
- JSFunction* function = jsCast<JSFunction*>(node->function()->value().asCell());
- if (function->isHostFunction())
- out.print("<host function>");
- else
- out.print(FunctionExecutableDump(function->jsExecutable()));
- } else
- out.print("<not JSFunction>");
- out.print(")");
- }
- if (node->hasExecutable()) {
- if (node->executable()->inherits(FunctionExecutable::info()))
- out.print(comma, "executable(", FunctionExecutableDump(jsCast<FunctionExecutable*>(node->executable())), ")");
- else
- out.print(comma, "executable(not function: ", RawPointer(node->executable()), ")");
+ if (node->hasCellOperand()) {
+ if (!node->cellOperand()->value() || !node->cellOperand()->value().isCell())
+ out.print(comma, "invalid cell operand: ", node->cellOperand()->value());
+ else {
+ out.print(comma, pointerDump(node->cellOperand()->value().asCell()));
+ if (node->cellOperand()->value().isCell()) {
+ CallVariant variant(node->cellOperand()->value().asCell());
+ if (ExecutableBase* executable = variant.executable()) {
+ if (executable->isHostFunction())
+ out.print(comma, "<host function>");
+ else if (FunctionExecutable* functionExecutable = jsDynamicCast<FunctionExecutable*>(executable))
+ out.print(comma, FunctionExecutableDump(functionExecutable));
+ else
+ out.print(comma, "<non-function executable>");
+ }
+ }
+ }
}
if (node->hasFunctionDeclIndex()) {
FunctionExecutable* executable = m_codeBlock->functionDecl(node->functionDeclIndex());
@@ -985,10 +984,6 @@
Node* node = block->at(nodeIndex);
switch (node->op()) {
- case CheckExecutable:
- visitor.appendUnbarrieredReadOnlyPointer(node->executable());
- break;
-
case CheckStructure:
for (unsigned i = node->structureSet().size(); i--;)
visitor.appendUnbarrieredReadOnlyPointer(node->structureSet()[i]);
diff --git a/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp b/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
index 189ce48..cd27c7f 100644
--- a/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
+++ b/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
@@ -188,7 +188,7 @@
table.ctiOffsets[j] = table.ctiDefault;
for (unsigned j = data.cases.size(); j--;) {
SwitchCase& myCase = data.cases[j];
- table.ctiOffsets[myCase.value.switchLookupValue() - table.min] =
+ table.ctiOffsets[myCase.value.switchLookupValue(data.kind) - table.min] =
linkBuffer.locationOf(m_blockHeads[myCase.target.block->index]);
}
}
diff --git a/Source/JavaScriptCore/dfg/DFGLazyJSValue.cpp b/Source/JavaScriptCore/dfg/DFGLazyJSValue.cpp
index 244c7ed..6011490 100644
--- a/Source/JavaScriptCore/dfg/DFGLazyJSValue.cpp
+++ b/Source/JavaScriptCore/dfg/DFGLazyJSValue.cpp
@@ -113,6 +113,36 @@
return FalseTriState;
}
+uintptr_t LazyJSValue::switchLookupValue(SwitchKind kind) const
+{
+ // NB. Not every kind of JSValue can produce a switch lookup value, and this
+ // method will assert, or do bad things, if you use it for a kind of value
+ // that can't.
+ switch (m_kind) {
+ case KnownValue:
+ switch (kind) {
+ case SwitchImm:
+ return value()->value().asInt32();
+ case SwitchCell:
+ return bitwise_cast<uintptr_t>(value()->value().asCell());
+ default:
+ RELEASE_ASSERT_NOT_REACHED();
+ return 0;
+ }
+ case SingleCharacterString:
+ switch (kind) {
+ case SwitchChar:
+ return character();
+ default:
+ RELEASE_ASSERT_NOT_REACHED();
+ return 0;
+ }
+ default:
+ RELEASE_ASSERT_NOT_REACHED();
+ return 0;
+ }
+}
+
void LazyJSValue::dumpInContext(PrintStream& out, DumpContext* context) const
{
switch (m_kind) {
diff --git a/Source/JavaScriptCore/dfg/DFGLazyJSValue.h b/Source/JavaScriptCore/dfg/DFGLazyJSValue.h
index 0b8187b..a1231db 100644
--- a/Source/JavaScriptCore/dfg/DFGLazyJSValue.h
+++ b/Source/JavaScriptCore/dfg/DFGLazyJSValue.h
@@ -28,6 +28,7 @@
#if ENABLE(DFG_JIT)
+#include "DFGCommon.h"
#include "DFGFrozenValue.h"
#include <wtf/text/StringImpl.h>
@@ -95,21 +96,7 @@
TriState strictEqual(const LazyJSValue& other) const;
- unsigned switchLookupValue() const
- {
- // NB. Not every kind of JSValue will be able to give you a switch lookup
- // value, and this method will assert, or do bad things, if you use it
- // for a kind of value that can't.
- switch (m_kind) {
- case KnownValue:
- return value()->value().asInt32();
- case SingleCharacterString:
- return character();
- default:
- RELEASE_ASSERT_NOT_REACHED();
- return 0;
- }
- }
+ uintptr_t switchLookupValue(SwitchKind) const;
void dump(PrintStream&) const;
void dumpInContext(PrintStream&, DumpContext*) const;
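
The reason switchLookupValue now takes the SwitchKind and returns uintptr_t: a jump table wants one integer key per case, and while an immediate or a single character fits in unsigned, a SwitchCell case is keyed by the cell pointer itself, which does not fit on a 64-bit target. A toy table illustrating the pointer-width requirement (invented types, not JSC code):

    #include <stdint.h>
    #include <map>

    // A jump table keyed by switchLookupValue(kind). SwitchImm keys are int32
    // payloads and SwitchChar keys are characters; SwitchCell keys are raw
    // JSCell pointers, hence the pointer-sized key type.
    struct ToySwitchTable {
        std::map<uintptr_t, int> cases; // key -> branch target index
        int fallThrough = -1;

        int lookup(uintptr_t key) const
        {
            auto it = cases.find(key);
            return it == cases.end() ? fallThrough : it->second;
        }
    };
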
diff --git a/Source/JavaScriptCore/dfg/DFGNode.cpp b/Source/JavaScriptCore/dfg/DFGNode.cpp
index c277291..e6bb969 100644
--- a/Source/JavaScriptCore/dfg/DFGNode.cpp
+++ b/Source/JavaScriptCore/dfg/DFGNode.cpp
@@ -113,6 +113,9 @@
case SwitchString:
out.print("SwitchString");
return;
+ case SwitchCell:
+ out.print("SwitchCell");
+ return;
}
RELEASE_ASSERT_NOT_REACHED();
}
diff --git a/Source/JavaScriptCore/dfg/DFGNode.h b/Source/JavaScriptCore/dfg/DFGNode.h
index 1423394..e4b2bef 100644
--- a/Source/JavaScriptCore/dfg/DFGNode.h
+++ b/Source/JavaScriptCore/dfg/DFGNode.h
@@ -157,12 +157,6 @@
BranchTarget target;
};
-enum SwitchKind {
- SwitchImm,
- SwitchChar,
- SwitchString
-};
-
struct SwitchData {
// Initializes most fields to obviously invalid values. Anyone
// constructing this should make sure to initialize everything they
@@ -185,6 +179,7 @@
// distinguishes an immediate value (typically an index into a CodeBlock data structure -
// a constant index, argument, or identifier) from a Node*.
struct OpInfo {
+ OpInfo() : m_value(0) { }
explicit OpInfo(int32_t value) : m_value(static_cast<uintptr_t>(value)) { }
explicit OpInfo(uint32_t value) : m_value(static_cast<uintptr_t>(value)) { }
#if OS(DARWIN) || USE(JSVALUE64)
@@ -1009,6 +1004,8 @@
case GetMyArgumentByValSafe:
case Call:
case Construct:
+ case ProfiledCall:
+ case ProfiledConstruct:
case NativeCall:
case NativeConstruct:
case GetByOffset:
@@ -1044,65 +1041,29 @@
m_opInfo2 = prediction;
}
- bool canBeKnownFunction()
+ bool hasCellOperand()
{
switch (op()) {
- case NativeConstruct:
- case NativeCall:
- return true;
- default:
- return false;
- }
- }
-
- bool hasKnownFunction()
- {
- switch (op()) {
- case NativeConstruct:
- case NativeCall:
- return (bool)m_opInfo;
- default:
- return false;
- }
- }
-
- JSFunction* knownFunction()
- {
- ASSERT(canBeKnownFunction());
- return bitwise_cast<JSFunction*>(m_opInfo);
- }
-
- void giveKnownFunction(JSFunction* callData)
- {
- ASSERT(canBeKnownFunction());
- m_opInfo = bitwise_cast<uintptr_t>(callData);
- }
-
- bool hasFunction()
- {
- switch (op()) {
- case CheckFunction:
case AllocationProfileWatchpoint:
+ case CheckCell:
+ case NativeConstruct:
+ case NativeCall:
return true;
default:
return false;
}
}
- FrozenValue* function()
+ FrozenValue* cellOperand()
{
- ASSERT(hasFunction());
+ ASSERT(hasCellOperand());
return reinterpret_cast<FrozenValue*>(m_opInfo);
}
- bool hasExecutable()
+ void setCellOperand(FrozenValue* value)
{
- return op() == CheckExecutable;
- }
-
- ExecutableBase* executable()
- {
- return jsCast<ExecutableBase*>(reinterpret_cast<JSCell*>(m_opInfo));
+ ASSERT(hasCellOperand());
+ m_opInfo = bitwise_cast<uintptr_t>(value);
}
bool hasVariableWatchpointSet()
diff --git a/Source/JavaScriptCore/dfg/DFGNodeType.h b/Source/JavaScriptCore/dfg/DFGNodeType.h
index fede8b5..764a6b0 100644
--- a/Source/JavaScriptCore/dfg/DFGNodeType.h
+++ b/Source/JavaScriptCore/dfg/DFGNodeType.h
@@ -153,7 +153,7 @@
macro(PutByIdFlush, NodeMustGenerate | NodeClobbersWorld) \
macro(PutByIdDirect, NodeMustGenerate | NodeClobbersWorld) \
macro(CheckStructure, NodeMustGenerate) \
- macro(CheckExecutable, NodeMustGenerate) \
+ macro(GetExecutable, NodeResultJS) \
macro(PutStructure, NodeMustGenerate) \
macro(AllocatePropertyStorage, NodeMustGenerate | NodeResultStorage) \
macro(ReallocatePropertyStorage, NodeMustGenerate | NodeResultStorage) \
@@ -185,7 +185,8 @@
macro(VariableWatchpoint, NodeMustGenerate) \
macro(VarInjectionWatchpoint, NodeMustGenerate) \
macro(FunctionReentryWatchpoint, NodeMustGenerate) \
- macro(CheckFunction, NodeMustGenerate) \
+ macro(CheckCell, NodeMustGenerate) \
+ macro(CheckBadCell, NodeMustGenerate) \
macro(AllocationProfileWatchpoint, NodeMustGenerate) \
macro(CheckInBounds, NodeMustGenerate) \
\
@@ -214,6 +215,8 @@
/* Calls. */\
macro(Call, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \
macro(Construct, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \
+ macro(ProfiledCall, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \
+ macro(ProfiledConstruct, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \
macro(NativeCall, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \
macro(NativeConstruct, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \
\
@@ -286,6 +289,11 @@
/* different compiler. */\
macro(ForceOSRExit, NodeMustGenerate) \
\
+ /* Vends a bottom JS value. It is invalid to ever execute this. Useful for cases */\
+ /* where we know that we would have exited but we'd like to still track the control */\
+ /* flow. */\
+ macro(BottomValue, NodeResultJS) \
+ \
/* Checks the watchdog timer. If the timer has fired, we OSR exit to the */ \
/* baseline JIT to redo the watchdog timer check, and service the timer. */ \
macro(CheckWatchdogTimer, NodeMustGenerate) \
diff --git a/Source/JavaScriptCore/dfg/DFGPhantomCanonicalizationPhase.cpp b/Source/JavaScriptCore/dfg/DFGPhantomCanonicalizationPhase.cpp
index 5850582..e5d9c10 100644
--- a/Source/JavaScriptCore/dfg/DFGPhantomCanonicalizationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGPhantomCanonicalizationPhase.cpp
@@ -92,8 +92,10 @@
node->children.removeEdge(i--);
}
- if (node->children.isEmpty())
+ if (node->children.isEmpty()) {
+ m_graph.m_allocator.free(node);
continue;
+ }
node->convertToCheck();
}
diff --git a/Source/JavaScriptCore/dfg/DFGPhantomRemovalPhase.cpp b/Source/JavaScriptCore/dfg/DFGPhantomRemovalPhase.cpp
index 73d62e8..a2c6df4 100644
--- a/Source/JavaScriptCore/dfg/DFGPhantomRemovalPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGPhantomRemovalPhase.cpp
@@ -125,6 +125,7 @@
}
if (node->children.isEmpty()) {
+ m_graph.m_allocator.free(node);
changed = true;
continue;
}
@@ -142,6 +143,7 @@
changed = true;
}
if (node->children.isEmpty()) {
+ m_graph.m_allocator.free(node);
changed = true;
continue;
}
@@ -149,8 +151,10 @@
}
case HardPhantom: {
- if (node->children.isEmpty())
+ if (node->children.isEmpty()) {
+ m_graph.m_allocator.free(node);
continue;
+ }
break;
}
diff --git a/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp b/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
index 19e1733..f9c6dcd 100644
--- a/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
@@ -188,6 +188,8 @@
case GetDirectPname:
case Call:
case Construct:
+ case ProfiledCall:
+ case ProfiledConstruct:
case NativeCall:
case NativeConstruct:
case GetGlobalVar:
@@ -196,7 +198,8 @@
break;
}
- case GetGetterSetterByOffset: {
+ case GetGetterSetterByOffset:
+ case GetExecutable: {
changed |= setPrediction(SpecCellOther);
break;
}
@@ -642,8 +645,8 @@
case ForceOSRExit:
case SetArgument:
case CheckStructure:
- case CheckExecutable:
- case CheckFunction:
+ case CheckCell:
+ case CheckBadCell:
case PutStructure:
case TearOffActivation:
case TearOffArguments:
@@ -665,6 +668,10 @@
case ZombieHint:
break;
+ // This gets ignored because it only pretends to produce a value.
+ case BottomValue:
+ break;
+
// This gets ignored because it already has a prediction.
case ExtractOSREntryLocal:
break;
diff --git a/Source/JavaScriptCore/dfg/DFGSafeToExecute.h b/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
index c9f0e1dd..de97c89 100644
--- a/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
+++ b/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
@@ -159,7 +159,7 @@
case PutByIdFlush:
case PutByIdDirect:
case CheckStructure:
- case CheckExecutable:
+ case GetExecutable:
case GetButterfly:
case CheckArray:
case Arrayify:
@@ -174,7 +174,8 @@
case PutGlobalVar:
case VariableWatchpoint:
case VarInjectionWatchpoint:
- case CheckFunction:
+ case CheckCell:
+ case CheckBadCell:
case AllocationProfileWatchpoint:
case RegExpExec:
case RegExpTest:
@@ -187,6 +188,8 @@
case CompareStrictEq:
case Call:
case Construct:
+ case ProfiledCall:
+ case ProfiledConstruct:
case NewObject:
case NewArray:
case NewArrayWithSize:
@@ -273,6 +276,11 @@
case NativeConstruct:
return false; // TODO: add a check for already checked. https://bugs.webkit.org/show_bug.cgi?id=133769
+ case BottomValue:
+ // If in doubt, assume that this isn't safe to execute, just because we have no way of
+ // compiling this node.
+ return false;
+
case GetByVal:
case GetIndexedPropertyStorage:
case GetArrayLength:
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
index 01b313b..0729ce8 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
@@ -5354,6 +5354,10 @@
case SwitchString: {
emitSwitchString(node, data);
return;
+ }
+ case SwitchCell: {
+ DFG_CRASH(m_jit.graph(), node, "Bad switch kind");
+ return;
} }
RELEASE_ASSERT_NOT_REACHED();
}
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
index 4276097..0d0c7a0 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
@@ -638,9 +638,9 @@
void SpeculativeJIT::emitCall(Node* node)
{
- bool isCall = node->op() == Call;
+ bool isCall = node->op() == Call || node->op() == ProfiledCall;
if (!isCall)
- ASSERT(node->op() == Construct);
+ ASSERT(node->op() == Construct || node->op() == ProfiledConstruct);
// For constructors, the this argument is not passed but we have to make space
// for it.
@@ -687,6 +687,13 @@
m_jit.emitStoreCodeOrigin(node->origin.semantic);
+ CallLinkInfo* info = m_jit.codeBlock()->addCallLinkInfo();
+
+ if (node->op() == ProfiledCall || node->op() == ProfiledConstruct) {
+ m_jit.vm()->callEdgeLog->emitLogCode(
+ m_jit, info->callEdgeProfile, callee.jsValueRegs());
+ }
+
slowPath.append(branchNotCell(callee.jsValueRegs()));
slowPath.append(m_jit.branchPtrWithPatch(MacroAssembler::NotEqual, calleePayloadGPR, targetToCheck));
m_jit.loadPtr(MacroAssembler::Address(calleePayloadGPR, OBJECT_OFFSETOF(JSFunction, m_scope)), resultPayloadGPR);
@@ -711,7 +718,6 @@
m_jit.move(calleePayloadGPR, GPRInfo::regT0);
m_jit.move(calleeTagGPR, GPRInfo::regT1);
}
- CallLinkInfo* info = m_jit.codeBlock()->addCallLinkInfo();
m_jit.move(MacroAssembler::TrustedImmPtr(info), GPRInfo::regT2);
JITCompiler::Call slowCall = m_jit.nearCall();
@@ -3669,18 +3675,21 @@
compileGetArrayLength(node);
break;
- case CheckFunction: {
- SpeculateCellOperand function(this, node->child1());
- speculationCheck(BadFunction, JSValueSource::unboxedCell(function.gpr()), node->child1(), m_jit.branchWeakPtr(JITCompiler::NotEqual, function.gpr(), node->function()->value().asCell()));
+ case CheckCell: {
+ SpeculateCellOperand cell(this, node->child1());
+ speculationCheck(BadCell, JSValueSource::unboxedCell(cell.gpr()), node->child1(), m_jit.branchWeakPtr(JITCompiler::NotEqual, cell.gpr(), node->cellOperand()->value().asCell()));
noResult(node);
break;
}
- case CheckExecutable: {
+ case GetExecutable: {
SpeculateCellOperand function(this, node->child1());
- speculateCellType(node->child1(), function.gpr(), SpecFunction, JSFunctionType);
- speculationCheck(BadExecutable, JSValueSource::unboxedCell(function.gpr()), node->child1(), m_jit.branchWeakPtr(JITCompiler::NotEqual, JITCompiler::Address(function.gpr(), JSFunction::offsetOfExecutable()), node->executable()));
- noResult(node);
+ GPRTemporary result(this, Reuse, function);
+ GPRReg functionGPR = function.gpr();
+ GPRReg resultGPR = result.gpr();
+ speculateCellType(node->child1(), functionGPR, SpecFunction, JSFunctionType);
+ m_jit.loadPtr(JITCompiler::Address(functionGPR, JSFunction::offsetOfExecutable()), resultGPR);
+ cellResult(resultGPR, node);
break;
}
@@ -4150,6 +4159,8 @@
case Call:
case Construct:
+ case ProfiledCall:
+ case ProfiledConstruct:
emitCall(node);
break;
@@ -4895,6 +4906,8 @@
case MultiPutByOffset:
case NativeCall:
case NativeConstruct:
+ case CheckBadCell:
+ case BottomValue:
RELEASE_ASSERT_NOT_REACHED();
break;
}
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
index 7158385..ea8f031 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
@@ -626,10 +626,9 @@
void SpeculativeJIT::emitCall(Node* node)
{
-
- bool isCall = node->op() == Call;
+ bool isCall = node->op() == Call || node->op() == ProfiledCall;
if (!isCall)
- DFG_ASSERT(m_jit.graph(), node, node->op() == Construct);
+ DFG_ASSERT(m_jit.graph(), node, node->op() == Construct || node->op() == ProfiledConstruct);
// For constructors, the this argument is not passed but we have to make space
// for it.
@@ -670,6 +669,13 @@
m_jit.emitStoreCodeOrigin(node->origin.semantic);
+ CallLinkInfo* callLinkInfo = m_jit.codeBlock()->addCallLinkInfo();
+
+ if (node->op() == ProfiledCall || node->op() == ProfiledConstruct) {
+ m_jit.vm()->callEdgeLog->emitLogCode(
+ m_jit, callLinkInfo->callEdgeProfile, JSValueRegs(calleeGPR));
+ }
+
slowPath = m_jit.branchPtrWithPatch(MacroAssembler::NotEqual, calleeGPR, targetToCheck, MacroAssembler::TrustedImmPtr(0));
m_jit.loadPtr(MacroAssembler::Address(calleeGPR, OBJECT_OFFSETOF(JSFunction, m_scope)), resultGPR);
@@ -682,7 +688,6 @@
slowPath.link(&m_jit);
m_jit.move(calleeGPR, GPRInfo::regT0); // Callee needs to be in regT0
- CallLinkInfo* callLinkInfo = m_jit.codeBlock()->addCallLinkInfo();
m_jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::regT2); // Link info needs to be in regT2
JITCompiler::Call slowCall = m_jit.nearCall();
@@ -3768,18 +3773,21 @@
compileGetArrayLength(node);
break;
- case CheckFunction: {
- SpeculateCellOperand function(this, node->child1());
- speculationCheck(BadFunction, JSValueSource::unboxedCell(function.gpr()), node->child1(), m_jit.branchWeakPtr(JITCompiler::NotEqual, function.gpr(), node->function()->value().asCell()));
+ case CheckCell: {
+ SpeculateCellOperand cell(this, node->child1());
+ speculationCheck(BadCell, JSValueSource::unboxedCell(cell.gpr()), node->child1(), m_jit.branchWeakPtr(JITCompiler::NotEqual, cell.gpr(), node->cellOperand()->value().asCell()));
noResult(node);
break;
}
- case CheckExecutable: {
+ case GetExecutable: {
SpeculateCellOperand function(this, node->child1());
- speculateCellType(node->child1(), function.gpr(), SpecFunction, JSFunctionType);
- speculationCheck(BadExecutable, JSValueSource::unboxedCell(function.gpr()), node->child1(), m_jit.branchWeakPtr(JITCompiler::NotEqual, JITCompiler::Address(function.gpr(), JSFunction::offsetOfExecutable()), node->executable()));
- noResult(node);
+ GPRTemporary result(this, Reuse, function);
+ GPRReg functionGPR = function.gpr();
+ GPRReg resultGPR = result.gpr();
+ speculateCellType(node->child1(), functionGPR, SpecFunction, JSFunctionType);
+ m_jit.loadPtr(JITCompiler::Address(functionGPR, JSFunction::offsetOfExecutable()), resultGPR);
+ cellResult(resultGPR, node);
break;
}
@@ -4219,9 +4227,11 @@
case Call:
case Construct:
+ case ProfiledCall:
+ case ProfiledConstruct:
emitCall(node);
break;
-
+
case CreateActivation: {
DFG_ASSERT(m_jit.graph(), node, !node->origin.semantic.inlineCallFrame);
@@ -4970,7 +4980,9 @@
case MultiGetByOffset:
case MultiPutByOffset:
case FiatInt52:
- DFG_CRASH(m_jit.graph(), node, "Unexpected FTL node");
+ case CheckBadCell:
+ case BottomValue:
+ DFG_CRASH(m_jit.graph(), node, "Unexpected node");
break;
}
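
Both code generators follow the same pattern: the CallLinkInfo allocation moves up so that its callEdgeProfile can be wired into the emitted log code, and the logging happens before the cell check so that non-cell callees are observed too. The code that emitLogCode generates is not shown in this hunk; the following toy, whose entry layout is an assumption rather than JSC's actual format, conveys roughly what such log code amounts to:

    #include <stddef.h>
    #include <stdint.h>

    struct ToyEdgeProfile; // per-callsite aggregate, as in the processLog sketch above

    // Assumed layout (not verified against JSC): one entry pairs the raw
    // callee value with the profile that observed it.
    struct ToyLogEntry {
        uint64_t calleeBits;     // raw JSValue bits; non-cells are logged too
        ToyEdgeProfile* profile; // which callsite saw this callee
    };

    struct ToyLog {
        static const size_t capacity = 1024;
        ToyLogEntry entries[capacity];
        size_t index = 0;

        // Out of line in real code (operationProcessCallEdgeLog plays this
        // role); it would walk the entries, bump each profile, then reset.
        void flush() { index = 0; }

        // The emitted fast path is morally this: two stores, a bump, a bounds
        // check, and a rare out-of-line call when the buffer fills.
        void record(uint64_t calleeBits, ToyEdgeProfile* profile)
        {
            entries[index++] = { calleeBits, profile };
            if (index == capacity)
                flush();
        }
    };
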
diff --git a/Source/JavaScriptCore/dfg/DFGStructureRegistrationPhase.cpp b/Source/JavaScriptCore/dfg/DFGStructureRegistrationPhase.cpp
index 408ee4c..0f2e146 100644
--- a/Source/JavaScriptCore/dfg/DFGStructureRegistrationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGStructureRegistrationPhase.cpp
@@ -62,10 +62,6 @@
Node* node = block->at(nodeIndex);
switch (node->op()) {
- case CheckExecutable:
- registerStructure(node->executable()->structure());
- break;
-
case CheckStructure:
registerStructures(node->structureSet());
break;
diff --git a/Source/JavaScriptCore/dfg/DFGTierUpCheckInjectionPhase.cpp b/Source/JavaScriptCore/dfg/DFGTierUpCheckInjectionPhase.cpp
index 09c22b8..d1b6e7a 100644
--- a/Source/JavaScriptCore/dfg/DFGTierUpCheckInjectionPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGTierUpCheckInjectionPhase.cpp
@@ -50,13 +50,17 @@
if (!Options::useFTLJIT())
return false;
- if (m_graph.m_profiledBlock->m_didFailFTLCompilation)
+ if (m_graph.m_profiledBlock->m_didFailFTLCompilation) {
+ removeFTLProfiling();
return false;
+ }
#if ENABLE(FTL_JIT)
FTL::CapabilityLevel level = FTL::canCompile(m_graph);
- if (level == FTL::CannotCompile)
+ if (level == FTL::CannotCompile) {
+ removeFTLProfiling();
return false;
+ }
if (!Options::enableOSREntryToFTL())
level = FTL::CanCompile;
@@ -118,6 +122,32 @@
return false;
#endif // ENABLE(FTL_JIT)
}
+
+private:
+ void removeFTLProfiling()
+ {
+ for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) {
+ BasicBlock* block = m_graph.block(blockIndex);
+ if (!block)
+ continue;
+
+ for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
+ Node* node = block->at(nodeIndex);
+ switch (node->op()) {
+ case ProfiledCall:
+ node->setOp(Call);
+ break;
+
+ case ProfiledConstruct:
+ node->setOp(Construct);
+ break;
+
+ default:
+ break;
+ }
+ }
+ }
+ }
};
bool performTierUpCheckInjection(Graph& graph)
diff --git a/Source/JavaScriptCore/dfg/DFGValidate.cpp b/Source/JavaScriptCore/dfg/DFGValidate.cpp
index 35d1ebb..8e90f69 100644
--- a/Source/JavaScriptCore/dfg/DFGValidate.cpp
+++ b/Source/JavaScriptCore/dfg/DFGValidate.cpp
@@ -200,7 +200,8 @@
VALIDATE((node), !mayExit(m_graph, node) || node->origin.forExit.isSet());
VALIDATE((node), !node->hasStructure() || !!node->structure());
- VALIDATE((node), !node->hasFunction() || node->function()->value().isFunction());
+ VALIDATE((node), !node->hasCellOperand() || node->cellOperand()->value().isCell());
+ VALIDATE((node), !node->hasCellOperand() || !!node->cellOperand()->value());
if (!(node->flags() & NodeHasVarArgs)) {
if (!node->child2())
diff --git a/Source/JavaScriptCore/dfg/DFGWatchpointCollectionPhase.cpp b/Source/JavaScriptCore/dfg/DFGWatchpointCollectionPhase.cpp
index 0c48489..39aea40 100644
--- a/Source/JavaScriptCore/dfg/DFGWatchpointCollectionPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGWatchpointCollectionPhase.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -114,7 +114,7 @@
break;
case AllocationProfileWatchpoint:
- addLazily(jsCast<JSFunction*>(m_node->function()->value())->allocationProfileWatchpointSet());
+ addLazily(jsCast<JSFunction*>(m_node->cellOperand()->value())->allocationProfileWatchpointSet());
break;
case VariableWatchpoint: