DFG should really support varargs
https://bugs.webkit.org/show_bug.cgi?id=141332
Reviewed by Oliver Hunt.
Source/JavaScriptCore:
This adds comprehensive vararg call support to the DFG and FTL compilers. Previously, if a
function had a varargs call, then it could only be compiled if that varargs call was just
forwarding arguments and we were inlining the function rather than compiling it directly. Also,
only varargs calls were dealt with; varargs constructs were not.
This lifts all of those restrictions. Every varargs call or construct can now be compiled by both
the DFG and the FTL. Those calls can be inlined, too - provided that profiling gives us a
sensible bound on arguments list length. When we inline a varargs call, the act of loading the
varargs is now made explicit in IR. I believe that we have enough IR machinery in place that we
would be able to do the arguments forwarding optimization as an IR transformation. This patch
doesn't implement that yet, and keeps the old bytecode-based varargs argument forwarding
optimization for now.
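As a concrete (hypothetical) illustration of the newly supported case - a direct, non-forwarding varargs call, which previously disqualified the caller from DFG compilation entirely - consider something like:

```javascript
// Hypothetical example, not from the patch: a varargs call that neither
// forwards `arguments` nor sits inside an inlined callee. Before this
// patch, the DFG could not compile sum() at all.
function add3(a, b, c) {
    return a + b + c;
}

function sum(array) {
    // Emits op_call_varargs -> CallVarargs in DFG IR.
    return add3.apply(null, array);
}

// Warm up so the function tiers up into the optimizing JITs.
for (var i = 0; i < 10000; ++i)
    sum([1, 2, 3]);
```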
There are three major IR features introduced in this patch:
CallVarargs/ConstructVarargs: these are like Call/Construct except that they take an arguments
array rather than a list of arguments. Currently, they splat this arguments array onto the stack
using the same basic technique as the baseline JIT has always done, except that these nodes
indicate that we are not interested in doing the non-escaping "arguments" optimization.
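A hedged sketch of the construct side, assuming spread-in-new as the source form (the stress tests added here exercise the same bytecode, though their exact shape may differ):

```javascript
// Hypothetical example: constructing with an argument list whose length
// and contents are only known at runtime. This emits
// op_construct_varargs -> ConstructVarargs in DFG IR.
function Point(x, y) {
    this.x = x;
    this.y = y;
}

function makePoint(coords) {
    return new Point(...coords);
}

var p = makePoint([3, 4]);
```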
CallForwardVarargs: this is a form of CallVarargs that just does the non-escaping "arguments"
optimization, aka forwarding arguments. It's somewhat lazy that this doesn't include
ConstructForwardVarargs, but the reason is that once we eliminate the lazy tear-off for
arguments, this whole thing will have to be tweaked - and for now forwarding on construct is just
not important in benchmarks. ConstructVarargs will still do forwarding, just not inlined.
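The forwarding case CallForwardVarargs targets looks like this (a minimal sketch, not code from the patch): `arguments` flows straight into .apply() without escaping, so no arguments object needs to be materialized.

```javascript
// Hypothetical example of non-escaping "arguments" forwarding. Because
// `arguments` is only ever passed through to .apply(), the engine can
// forward the caller's argument values directly (CallForwardVarargs)
// instead of allocating an arguments object.
function sum3(a, b, c) {
    return a + b + c;
}

function forward() {
    return sum3.apply(this, arguments);
}
```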
LoadVarargs: loads all elements out of an array onto the stack in a manner suitable for a varargs
call. This is used only when a varargs call (or construct) was inlined. The bytecode parser will
make room on the stack for the arguments, and will use LoadVarargs to put those arguments into
place.
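Conceptually (this is a plain-JS sketch of the semantics, not engine code, and all names in it are made up), LoadVarargs checks the profiled length bound, copies the array's elements into the reserved stack slots, and pads up to the callee's mandatory minimum:

```javascript
// Conceptual model of LoadVarargs. reservedSlots stands in for the stack
// space the bytecode parser reserved at the inlined call site.
function loadVarargs(array, reservedSlots, mandatoryMinimum) {
    if (array.length > reservedSlots.length)
        throw new Error("OSR exit: profiled bound exceeded"); // hypothetical
    var i;
    for (i = 0; i < array.length; ++i)
        reservedSlots[i] = array[i];        // splat elements onto the "stack"
    for (; i < mandatoryMinimum; ++i)
        reservedSlots[i] = undefined;       // pad missing arguments
    return array.length;                    // argument count for the callee
}
```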
In the future, we can consider adding strength reductions like:
- If CallVarargs/ConstructVarargs see an array of known size with known elements, turn them into
Call/Construct.
- If CallVarargs/ConstructVarargs are passed an unmodified, unescaped Arguments object, then
turn them into CallForwardVarargs/ConstructForwardVarargs.
- If LoadVarargs sees an array of known size, then turn it into a sequence of GetByVals and
PutLocals.
- If LoadVarargs sees an unmodified, unescaped Arguments object, then turn it into something like
LoadForwardVarargs.
- If CallVarargs/ConstructVarargs/LoadVarargs see the result of a splice (or other Array
prototype function), then do the splice and varargs loading in one go (maybe via a new node
type).
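For instance, the first of these reductions would apply to a (hypothetical) call site like the following, where the arguments array is a literal of known size and contents:

```javascript
// Hypothetical example: CallVarargs with a fully known arguments array.
// A strength reduction could rewrite the .apply() call into the plain
// two-argument call max2(3, 9).
function max2(a, b) {
    return a < b ? b : a;
}

function callKnown() {
    return max2.apply(null, [3, 9]);
}
```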
* CMakeLists.txt:
* JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
* JavaScriptCore.xcodeproj/project.pbxproj:
* assembler/MacroAssembler.h:
(JSC::MacroAssembler::rshiftPtr):
(JSC::MacroAssembler::urshiftPtr):
* assembler/MacroAssemblerARM64.h:
(JSC::MacroAssemblerARM64::urshift64):
* assembler/MacroAssemblerX86_64.h:
(JSC::MacroAssemblerX86_64::urshift64):
* assembler/X86Assembler.h:
(JSC::X86Assembler::shrq_i8r):
* bytecode/CallLinkInfo.h:
(JSC::CallLinkInfo::CallLinkInfo):
* bytecode/CallLinkStatus.cpp:
(JSC::CallLinkStatus::computeFor):
(JSC::CallLinkStatus::setProvenConstantCallee):
(JSC::CallLinkStatus::dump):
* bytecode/CallLinkStatus.h:
(JSC::CallLinkStatus::maxNumArguments):
(JSC::CallLinkStatus::setIsProved): Deleted.
* bytecode/CodeOrigin.cpp:
(WTF::printInternal):
* bytecode/CodeOrigin.h:
(JSC::InlineCallFrame::varargsKindFor):
(JSC::InlineCallFrame::specializationKindFor):
(JSC::InlineCallFrame::isVarargs):
(JSC::InlineCallFrame::isNormalCall): Deleted.
* bytecode/ExitKind.cpp:
(JSC::exitKindToString):
* bytecode/ExitKind.h:
* bytecode/ValueRecovery.cpp:
(JSC::ValueRecovery::dumpInContext):
* dfg/DFGAbstractInterpreterInlines.h:
(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
* dfg/DFGArgumentsSimplificationPhase.cpp:
(JSC::DFG::ArgumentsSimplificationPhase::run):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::flush):
(JSC::DFG::ByteCodeParser::addCall):
(JSC::DFG::ByteCodeParser::handleCall):
(JSC::DFG::ByteCodeParser::handleVarargsCall):
(JSC::DFG::ByteCodeParser::emitFunctionChecks):
(JSC::DFG::ByteCodeParser::inliningCost):
(JSC::DFG::ByteCodeParser::inlineCall):
(JSC::DFG::ByteCodeParser::attemptToInlineCall):
(JSC::DFG::ByteCodeParser::handleInlining):
(JSC::DFG::ByteCodeParser::handleMinMax):
(JSC::DFG::ByteCodeParser::handleIntrinsic):
(JSC::DFG::ByteCodeParser::handleTypedArrayConstructor):
(JSC::DFG::ByteCodeParser::handleConstantInternalFunction):
(JSC::DFG::ByteCodeParser::parseBlock):
(JSC::DFG::ByteCodeParser::removeLastNodeFromGraph): Deleted.
(JSC::DFG::ByteCodeParser::undoFunctionChecks): Deleted.
* dfg/DFGCapabilities.cpp:
(JSC::DFG::capabilityLevel):
* dfg/DFGCapabilities.h:
(JSC::DFG::functionCapabilityLevel):
(JSC::DFG::mightCompileFunctionFor):
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGCommon.cpp:
(WTF::printInternal):
* dfg/DFGCommon.h:
(JSC::DFG::canInline):
(JSC::DFG::leastUpperBound):
* dfg/DFGDoesGC.cpp:
(JSC::DFG::doesGC):
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::fixupNode):
* dfg/DFGGraph.cpp:
(JSC::DFG::Graph::dump):
(JSC::DFG::Graph::dumpBlockHeader):
(JSC::DFG::Graph::isLiveInBytecode):
(JSC::DFG::Graph::valueProfileFor):
(JSC::DFG::Graph::methodOfGettingAValueProfileFor):
* dfg/DFGGraph.h:
(JSC::DFG::Graph::valueProfileFor): Deleted.
(JSC::DFG::Graph::methodOfGettingAValueProfileFor): Deleted.
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::compileExceptionHandlers):
(JSC::DFG::JITCompiler::link):
* dfg/DFGMayExit.cpp:
(JSC::DFG::mayExit):
* dfg/DFGNode.h:
(JSC::DFG::Node::hasCallVarargsData):
(JSC::DFG::Node::callVarargsData):
(JSC::DFG::Node::hasLoadVarargsData):
(JSC::DFG::Node::loadVarargsData):
(JSC::DFG::Node::hasHeapPrediction):
* dfg/DFGNodeType.h:
* dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
(JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
* dfg/DFGOSRExitCompilerCommon.cpp:
(JSC::DFG::reifyInlinedCallFrames):
* dfg/DFGOperations.cpp:
* dfg/DFGOperations.h:
* dfg/DFGPlan.cpp:
(JSC::DFG::dumpAndVerifyGraph):
(JSC::DFG::Plan::compileInThreadImpl):
* dfg/DFGPreciseLocalClobberize.h:
(JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
(JSC::DFG::PreciseLocalClobberizeAdaptor::writeTop):
* dfg/DFGPredictionPropagationPhase.cpp:
(JSC::DFG::PredictionPropagationPhase::propagate):
* dfg/DFGSSAConversionPhase.cpp:
* dfg/DFGSafeToExecute.h:
(JSC::DFG::safeToExecute):
* dfg/DFGSpeculativeJIT.h:
(JSC::DFG::SpeculativeJIT::isFlushed):
(JSC::DFG::SpeculativeJIT::callOperation):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGStackLayoutPhase.cpp:
(JSC::DFG::StackLayoutPhase::run):
(JSC::DFG::StackLayoutPhase::assign):
* dfg/DFGStrengthReductionPhase.cpp:
(JSC::DFG::StrengthReductionPhase::handleNode):
* dfg/DFGTypeCheckHoistingPhase.cpp:
(JSC::DFG::TypeCheckHoistingPhase::run):
* dfg/DFGValidate.cpp:
(JSC::DFG::Validate::validateCPS):
* ftl/FTLAbbreviations.h:
(JSC::FTL::functionType):
(JSC::FTL::buildCall):
* ftl/FTLCapabilities.cpp:
(JSC::FTL::canCompile):
* ftl/FTLCompile.cpp:
(JSC::FTL::mmAllocateDataSection):
* ftl/FTLInlineCacheSize.cpp:
(JSC::FTL::sizeOfCall):
(JSC::FTL::sizeOfCallVarargs):
(JSC::FTL::sizeOfCallForwardVarargs):
(JSC::FTL::sizeOfConstructVarargs):
(JSC::FTL::sizeOfIn):
(JSC::FTL::sizeOfICFor):
(JSC::FTL::sizeOfCheckIn): Deleted.
* ftl/FTLInlineCacheSize.h:
* ftl/FTLIntrinsicRepository.h:
* ftl/FTLJSCall.cpp:
(JSC::FTL::JSCall::JSCall):
* ftl/FTLJSCallBase.cpp:
* ftl/FTLJSCallBase.h:
* ftl/FTLJSCallVarargs.cpp: Added.
(JSC::FTL::JSCallVarargs::JSCallVarargs):
(JSC::FTL::JSCallVarargs::numSpillSlotsNeeded):
(JSC::FTL::JSCallVarargs::emit):
(JSC::FTL::JSCallVarargs::link):
* ftl/FTLJSCallVarargs.h: Added.
(JSC::FTL::JSCallVarargs::node):
(JSC::FTL::JSCallVarargs::stackmapID):
(JSC::FTL::JSCallVarargs::operator<):
* ftl/FTLLowerDFGToLLVM.cpp:
(JSC::FTL::LowerDFGToLLVM::lower):
(JSC::FTL::LowerDFGToLLVM::compileNode):
(JSC::FTL::LowerDFGToLLVM::compileGetMyArgumentsLength):
(JSC::FTL::LowerDFGToLLVM::compileGetMyArgumentByVal):
(JSC::FTL::LowerDFGToLLVM::compileCallOrConstructVarargs):
(JSC::FTL::LowerDFGToLLVM::compileLoadVarargs):
(JSC::FTL::LowerDFGToLLVM::compileIn):
(JSC::FTL::LowerDFGToLLVM::emitStoreBarrier):
(JSC::FTL::LowerDFGToLLVM::vmCall):
(JSC::FTL::LowerDFGToLLVM::vmCallNoExceptions):
(JSC::FTL::LowerDFGToLLVM::callCheck):
* ftl/FTLOutput.h:
(JSC::FTL::Output::call):
* ftl/FTLState.cpp:
(JSC::FTL::State::State):
* ftl/FTLState.h:
* interpreter/Interpreter.cpp:
(JSC::sizeOfVarargs):
(JSC::sizeFrameForVarargs):
* interpreter/Interpreter.h:
* interpreter/StackVisitor.cpp:
(JSC::StackVisitor::readInlinedFrame):
* jit/AssemblyHelpers.cpp:
(JSC::AssemblyHelpers::emitExceptionCheck):
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::addressFor):
(JSC::AssemblyHelpers::calleeFrameSlot):
(JSC::AssemblyHelpers::calleeArgumentSlot):
(JSC::AssemblyHelpers::calleeFrameTagSlot):
(JSC::AssemblyHelpers::calleeFramePayloadSlot):
(JSC::AssemblyHelpers::calleeArgumentTagSlot):
(JSC::AssemblyHelpers::calleeArgumentPayloadSlot):
(JSC::AssemblyHelpers::calleeFrameCallerFrame):
(JSC::AssemblyHelpers::selectScratchGPR):
* jit/CCallHelpers.h:
(JSC::CCallHelpers::setupArgumentsWithExecState):
* jit/GPRInfo.h:
* jit/JIT.cpp:
(JSC::JIT::privateCompile):
* jit/JIT.h:
* jit/JITCall.cpp:
(JSC::JIT::compileSetupVarargsFrame):
(JSC::JIT::compileOpCall):
* jit/JITCall32_64.cpp:
(JSC::JIT::compileSetupVarargsFrame):
(JSC::JIT::compileOpCall):
* jit/JITOperations.h:
* jit/SetupVarargsFrame.cpp:
(JSC::emitSetupVarargsFrameFastCase):
* jit/SetupVarargsFrame.h:
* runtime/Arguments.h:
(JSC::Arguments::create):
(JSC::Arguments::registerArraySizeInBytes):
(JSC::Arguments::finishCreation):
* runtime/Options.h:
* tests/stress/construct-varargs-inline-smaller-Foo.js: Added.
(Foo):
(bar):
(checkEqual):
(test):
* tests/stress/construct-varargs-inline.js: Added.
(Foo):
(bar):
(checkEqual):
(test):
* tests/stress/construct-varargs-no-inline.js: Added.
(Foo):
(bar):
(checkEqual):
(test):
* tests/stress/get-argument-by-val-in-inlined-varargs-call-out-of-bounds.js: Added.
(foo):
(bar):
* tests/stress/get-argument-by-val-safe-in-inlined-varargs-call-out-of-bounds.js: Added.
(foo):
(bar):
* tests/stress/get-my-argument-by-val-creates-arguments.js: Added.
(blah):
(foo):
(bar):
(checkEqual):
(test):
* tests/stress/load-varargs-then-inlined-call-exit-in-foo.js: Added.
(foo):
(bar):
(checkEqual):
* tests/stress/load-varargs-then-inlined-call-inlined.js: Added.
(foo):
(bar):
(baz):
(checkEqual):
(test):
* tests/stress/load-varargs-then-inlined-call.js: Added.
(foo):
(bar):
(checkEqual):
(test):
LayoutTests:
Adds a version of deltablue that uses rest arguments profusely. This speeds up by 20% with this
patch. I believe that the machinery that this patch puts in place will allow us to ultimately
run deltablue-varargs at the same steady-state performance as normal deltablue.
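The rest-arguments idiom the benchmark leans on looks roughly like this (a hedged sketch; the helper names here are made up and the benchmark's actual `args()` helper may differ):

```javascript
// Hypothetical illustration of ES5-style rest arguments via varargs:
// slice off the named parameters, then forward everything through .apply().
function restOf() {
    return Array.prototype.slice.call(arguments, 1);
}

function logPrefixed(prefix) {
    var rest = restOf.apply(null, arguments);
    return prefix + ":" + rest.join(",");
}
```

Every such call site turns into a CallVarargs (or, when profiling cooperates, an inlined LoadVarargs), which is why this patch's machinery dominates the benchmark's profile.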
* js/regress/deltablue-varargs-expected.txt: Added.
* js/regress/deltablue-varargs.html: Added.
* js/regress/script-tests/deltablue-varargs.js: Added.
(args):
(Object.prototype.inheritsFrom):
(OrderedCollection):
(OrderedCollection.prototype.add):
(OrderedCollection.prototype.at):
(OrderedCollection.prototype.size):
(OrderedCollection.prototype.removeFirst):
(OrderedCollection.prototype.remove):
(Strength):
(Strength.stronger):
(Strength.weaker):
(Strength.weakestOf):
(Strength.strongest):
(Strength.prototype.nextWeaker):
(Constraint):
(Constraint.prototype.addConstraint):
(Constraint.prototype.satisfy):
(Constraint.prototype.destroyConstraint):
(Constraint.prototype.isInput):
(UnaryConstraint):
(UnaryConstraint.prototype.addToGraph):
(UnaryConstraint.prototype.chooseMethod):
(UnaryConstraint.prototype.isSatisfied):
(UnaryConstraint.prototype.markInputs):
(UnaryConstraint.prototype.output):
(UnaryConstraint.prototype.recalculate):
(UnaryConstraint.prototype.markUnsatisfied):
(UnaryConstraint.prototype.inputsKnown):
(UnaryConstraint.prototype.removeFromGraph):
(StayConstraint):
(StayConstraint.prototype.execute):
(EditConstraint.prototype.isInput):
(EditConstraint.prototype.execute):
(BinaryConstraint):
(BinaryConstraint.prototype.chooseMethod):
(BinaryConstraint.prototype.addToGraph):
(BinaryConstraint.prototype.isSatisfied):
(BinaryConstraint.prototype.markInputs):
(BinaryConstraint.prototype.input):
(BinaryConstraint.prototype.output):
(BinaryConstraint.prototype.recalculate):
(BinaryConstraint.prototype.markUnsatisfied):
(BinaryConstraint.prototype.inputsKnown):
(BinaryConstraint.prototype.removeFromGraph):
(ScaleConstraint):
(ScaleConstraint.prototype.addToGraph):
(ScaleConstraint.prototype.removeFromGraph):
(ScaleConstraint.prototype.markInputs):
(ScaleConstraint.prototype.execute):
(ScaleConstraint.prototype.recalculate):
(EqualityConstraint):
(EqualityConstraint.prototype.execute):
(Variable):
(Variable.prototype.addConstraint):
(Variable.prototype.removeConstraint):
(Planner):
(Planner.prototype.incrementalAdd):
(Planner.prototype.incrementalRemove):
(Planner.prototype.newMark):
(Planner.prototype.makePlan):
(Planner.prototype.extractPlanFromConstraints):
(Planner.prototype.addPropagate):
(Planner.prototype.removePropagateFrom):
(Planner.prototype.addConstraintsConsumingTo):
(Plan):
(Plan.prototype.addConstraint):
(Plan.prototype.size):
(Plan.prototype.constraintAt):
(Plan.prototype.execute):
(chainTest):
(projectionTest):
(change):
(deltaBlue):
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@180279 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/Source/JavaScriptCore/CMakeLists.txt b/Source/JavaScriptCore/CMakeLists.txt
index bd18c1e..fda72c9 100644
--- a/Source/JavaScriptCore/CMakeLists.txt
+++ b/Source/JavaScriptCore/CMakeLists.txt
@@ -845,6 +845,7 @@
ftl/FTLJITFinalizer.cpp
ftl/FTLJSCall.cpp
ftl/FTLJSCallBase.cpp
+ ftl/FTLJSCallVarargs.cpp
ftl/FTLLink.cpp
ftl/FTLLocation.cpp
ftl/FTLLowerDFGToLLVM.cpp
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index fd30ebd0..0a47b84 100644
--- a/Source/JavaScriptCore/ChangeLog
+++ b/Source/JavaScriptCore/ChangeLog
@@ -1,3 +1,314 @@
+2015-02-18 Filip Pizlo <fpizlo@apple.com>
+
+ DFG should really support varargs
+ https://bugs.webkit.org/show_bug.cgi?id=141332
+
+ Reviewed by Oliver Hunt.
+
+ This adds comprehensive vararg call support to the DFG and FTL compilers. Previously, if a
+ function had a varargs call, then it could only be compiled if that varargs call was just
+ forwarding arguments and we were inlining the function rather than compiling it directly. Also,
+ only varargs calls were dealt with; varargs constructs were not.
+
+ This lifts all of those restrictions. Every varargs call or construct can now be compiled by both
+ the DFG and the FTL. Those calls can be inlined, too - provided that profiling gives us a
+ sensible bound on arguments list length. When we inline a varargs call, the act of loading the
+ varargs is now made explicit in IR. I believe that we have enough IR machinery in place that we
+ would be able to do the arguments forwarding optimization as an IR transformation. This patch
+ doesn't implement that yet, and keeps the old bytecode-based varargs argument forwarding
+ optimization for now.
+
+ There are three major IR features introduced in this patch:
+
+ CallVarargs/ConstructVarargs: these are like Call/Construct except that they take an arguments
+ array rather than a list of arguments. Currently, they splat this arguments array onto the stack
+ using the same basic technique as the baseline JIT has always done, except that these nodes
+ indicate that we are not interested in doing the non-escaping "arguments" optimization.
+
+ CallForwardVarargs: this is a form of CallVarargs that just does the non-escaping "arguments"
+ optimization, aka forwarding arguments. It's somewhat lazy that this doesn't include
+ ConstructForwardVarargs, but the reason is that once we eliminate the lazy tear-off for
+ arguments, this whole thing will have to be tweaked - and for now forwarding on construct is just
+ not important in benchmarks. ConstructVarargs will still do forwarding, just not inlined.
+
+ LoadVarargs: loads all elements out of an array onto the stack in a manner suitable for a varargs
+ call. This is used only when a varargs call (or construct) was inlined. The bytecode parser will
+ make room on the stack for the arguments, and will use LoadVarargs to put those arguments into
+ place.
+
+ In the future, we can consider adding strength reductions like:
+
+ - If CallVarargs/ConstructVarargs see an array of known size with known elements, turn them into
+ Call/Construct.
+
+ - If CallVarargs/ConstructVarargs are passed an unmodified, unescaped Arguments object, then
+ turn them into CallForwardVarargs/ConstructForwardVarargs.
+
+ - If LoadVarargs sees an array of known size, then turn it into a sequence of GetByVals and
+ PutLocals.
+
+ - If LoadVarargs sees an unmodified, unescaped Arguments object, then turn it into something like
+ LoadForwardVarargs.
+
+ - If CallVarargs/ConstructVarargs/LoadVarargs see the result of a splice (or other Array
+ prototype function), then do the splice and varargs loading in one go (maybe via a new node
+ type).
+
+ * CMakeLists.txt:
+ * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
+ * JavaScriptCore.xcodeproj/project.pbxproj:
+ * assembler/MacroAssembler.h:
+ (JSC::MacroAssembler::rshiftPtr):
+ (JSC::MacroAssembler::urshiftPtr):
+ * assembler/MacroAssemblerARM64.h:
+ (JSC::MacroAssemblerARM64::urshift64):
+ * assembler/MacroAssemblerX86_64.h:
+ (JSC::MacroAssemblerX86_64::urshift64):
+ * assembler/X86Assembler.h:
+ (JSC::X86Assembler::shrq_i8r):
+ * bytecode/CallLinkInfo.h:
+ (JSC::CallLinkInfo::CallLinkInfo):
+ * bytecode/CallLinkStatus.cpp:
+ (JSC::CallLinkStatus::computeFor):
+ (JSC::CallLinkStatus::setProvenConstantCallee):
+ (JSC::CallLinkStatus::dump):
+ * bytecode/CallLinkStatus.h:
+ (JSC::CallLinkStatus::maxNumArguments):
+ (JSC::CallLinkStatus::setIsProved): Deleted.
+ * bytecode/CodeOrigin.cpp:
+ (WTF::printInternal):
+ * bytecode/CodeOrigin.h:
+ (JSC::InlineCallFrame::varargsKindFor):
+ (JSC::InlineCallFrame::specializationKindFor):
+ (JSC::InlineCallFrame::isVarargs):
+ (JSC::InlineCallFrame::isNormalCall): Deleted.
+ * bytecode/ExitKind.cpp:
+ (JSC::exitKindToString):
+ * bytecode/ExitKind.h:
+ * bytecode/ValueRecovery.cpp:
+ (JSC::ValueRecovery::dumpInContext):
+ * dfg/DFGAbstractInterpreterInlines.h:
+ (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
+ * dfg/DFGArgumentsSimplificationPhase.cpp:
+ (JSC::DFG::ArgumentsSimplificationPhase::run):
+ * dfg/DFGByteCodeParser.cpp:
+ (JSC::DFG::ByteCodeParser::flush):
+ (JSC::DFG::ByteCodeParser::addCall):
+ (JSC::DFG::ByteCodeParser::handleCall):
+ (JSC::DFG::ByteCodeParser::handleVarargsCall):
+ (JSC::DFG::ByteCodeParser::emitFunctionChecks):
+ (JSC::DFG::ByteCodeParser::inliningCost):
+ (JSC::DFG::ByteCodeParser::inlineCall):
+ (JSC::DFG::ByteCodeParser::attemptToInlineCall):
+ (JSC::DFG::ByteCodeParser::handleInlining):
+ (JSC::DFG::ByteCodeParser::handleMinMax):
+ (JSC::DFG::ByteCodeParser::handleIntrinsic):
+ (JSC::DFG::ByteCodeParser::handleTypedArrayConstructor):
+ (JSC::DFG::ByteCodeParser::handleConstantInternalFunction):
+ (JSC::DFG::ByteCodeParser::parseBlock):
+ (JSC::DFG::ByteCodeParser::removeLastNodeFromGraph): Deleted.
+ (JSC::DFG::ByteCodeParser::undoFunctionChecks): Deleted.
+ * dfg/DFGCapabilities.cpp:
+ (JSC::DFG::capabilityLevel):
+ * dfg/DFGCapabilities.h:
+ (JSC::DFG::functionCapabilityLevel):
+ (JSC::DFG::mightCompileFunctionFor):
+ * dfg/DFGClobberize.h:
+ (JSC::DFG::clobberize):
+ * dfg/DFGCommon.cpp:
+ (WTF::printInternal):
+ * dfg/DFGCommon.h:
+ (JSC::DFG::canInline):
+ (JSC::DFG::leastUpperBound):
+ * dfg/DFGDoesGC.cpp:
+ (JSC::DFG::doesGC):
+ * dfg/DFGFixupPhase.cpp:
+ (JSC::DFG::FixupPhase::fixupNode):
+ * dfg/DFGGraph.cpp:
+ (JSC::DFG::Graph::dump):
+ (JSC::DFG::Graph::dumpBlockHeader):
+ (JSC::DFG::Graph::isLiveInBytecode):
+ (JSC::DFG::Graph::valueProfileFor):
+ (JSC::DFG::Graph::methodOfGettingAValueProfileFor):
+ * dfg/DFGGraph.h:
+ (JSC::DFG::Graph::valueProfileFor): Deleted.
+ (JSC::DFG::Graph::methodOfGettingAValueProfileFor): Deleted.
+ * dfg/DFGJITCompiler.cpp:
+ (JSC::DFG::JITCompiler::compileExceptionHandlers):
+ (JSC::DFG::JITCompiler::link):
+ * dfg/DFGMayExit.cpp:
+ (JSC::DFG::mayExit):
+ * dfg/DFGNode.h:
+ (JSC::DFG::Node::hasCallVarargsData):
+ (JSC::DFG::Node::callVarargsData):
+ (JSC::DFG::Node::hasLoadVarargsData):
+ (JSC::DFG::Node::loadVarargsData):
+ (JSC::DFG::Node::hasHeapPrediction):
+ * dfg/DFGNodeType.h:
+ * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
+ (JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
+ * dfg/DFGOSRExitCompilerCommon.cpp:
+ (JSC::DFG::reifyInlinedCallFrames):
+ * dfg/DFGOperations.cpp:
+ * dfg/DFGOperations.h:
+ * dfg/DFGPlan.cpp:
+ (JSC::DFG::dumpAndVerifyGraph):
+ (JSC::DFG::Plan::compileInThreadImpl):
+ * dfg/DFGPreciseLocalClobberize.h:
+ (JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
+ (JSC::DFG::PreciseLocalClobberizeAdaptor::writeTop):
+ * dfg/DFGPredictionPropagationPhase.cpp:
+ (JSC::DFG::PredictionPropagationPhase::propagate):
+ * dfg/DFGSSAConversionPhase.cpp:
+ * dfg/DFGSafeToExecute.h:
+ (JSC::DFG::safeToExecute):
+ * dfg/DFGSpeculativeJIT.h:
+ (JSC::DFG::SpeculativeJIT::isFlushed):
+ (JSC::DFG::SpeculativeJIT::callOperation):
+ * dfg/DFGSpeculativeJIT32_64.cpp:
+ (JSC::DFG::SpeculativeJIT::emitCall):
+ (JSC::DFG::SpeculativeJIT::compile):
+ * dfg/DFGSpeculativeJIT64.cpp:
+ (JSC::DFG::SpeculativeJIT::emitCall):
+ (JSC::DFG::SpeculativeJIT::compile):
+ * dfg/DFGStackLayoutPhase.cpp:
+ (JSC::DFG::StackLayoutPhase::run):
+ (JSC::DFG::StackLayoutPhase::assign):
+ * dfg/DFGStrengthReductionPhase.cpp:
+ (JSC::DFG::StrengthReductionPhase::handleNode):
+ * dfg/DFGTypeCheckHoistingPhase.cpp:
+ (JSC::DFG::TypeCheckHoistingPhase::run):
+ * dfg/DFGValidate.cpp:
+ (JSC::DFG::Validate::validateCPS):
+ * ftl/FTLAbbreviations.h:
+ (JSC::FTL::functionType):
+ (JSC::FTL::buildCall):
+ * ftl/FTLCapabilities.cpp:
+ (JSC::FTL::canCompile):
+ * ftl/FTLCompile.cpp:
+ (JSC::FTL::mmAllocateDataSection):
+ * ftl/FTLInlineCacheSize.cpp:
+ (JSC::FTL::sizeOfCall):
+ (JSC::FTL::sizeOfCallVarargs):
+ (JSC::FTL::sizeOfCallForwardVarargs):
+ (JSC::FTL::sizeOfConstructVarargs):
+ (JSC::FTL::sizeOfIn):
+ (JSC::FTL::sizeOfICFor):
+ (JSC::FTL::sizeOfCheckIn): Deleted.
+ * ftl/FTLInlineCacheSize.h:
+ * ftl/FTLIntrinsicRepository.h:
+ * ftl/FTLJSCall.cpp:
+ (JSC::FTL::JSCall::JSCall):
+ * ftl/FTLJSCallBase.cpp:
+ * ftl/FTLJSCallBase.h:
+ * ftl/FTLJSCallVarargs.cpp: Added.
+ (JSC::FTL::JSCallVarargs::JSCallVarargs):
+ (JSC::FTL::JSCallVarargs::numSpillSlotsNeeded):
+ (JSC::FTL::JSCallVarargs::emit):
+ (JSC::FTL::JSCallVarargs::link):
+ * ftl/FTLJSCallVarargs.h: Added.
+ (JSC::FTL::JSCallVarargs::node):
+ (JSC::FTL::JSCallVarargs::stackmapID):
+ (JSC::FTL::JSCallVarargs::operator<):
+ * ftl/FTLLowerDFGToLLVM.cpp:
+ (JSC::FTL::LowerDFGToLLVM::lower):
+ (JSC::FTL::LowerDFGToLLVM::compileNode):
+ (JSC::FTL::LowerDFGToLLVM::compileGetMyArgumentsLength):
+ (JSC::FTL::LowerDFGToLLVM::compileGetMyArgumentByVal):
+ (JSC::FTL::LowerDFGToLLVM::compileCallOrConstructVarargs):
+ (JSC::FTL::LowerDFGToLLVM::compileLoadVarargs):
+ (JSC::FTL::LowerDFGToLLVM::compileIn):
+ (JSC::FTL::LowerDFGToLLVM::emitStoreBarrier):
+ (JSC::FTL::LowerDFGToLLVM::vmCall):
+ (JSC::FTL::LowerDFGToLLVM::vmCallNoExceptions):
+ (JSC::FTL::LowerDFGToLLVM::callCheck):
+ * ftl/FTLOutput.h:
+ (JSC::FTL::Output::call):
+ * ftl/FTLState.cpp:
+ (JSC::FTL::State::State):
+ * ftl/FTLState.h:
+ * interpreter/Interpreter.cpp:
+ (JSC::sizeOfVarargs):
+ (JSC::sizeFrameForVarargs):
+ * interpreter/Interpreter.h:
+ * interpreter/StackVisitor.cpp:
+ (JSC::StackVisitor::readInlinedFrame):
+ * jit/AssemblyHelpers.cpp:
+ (JSC::AssemblyHelpers::emitExceptionCheck):
+ * jit/AssemblyHelpers.h:
+ (JSC::AssemblyHelpers::addressFor):
+ (JSC::AssemblyHelpers::calleeFrameSlot):
+ (JSC::AssemblyHelpers::calleeArgumentSlot):
+ (JSC::AssemblyHelpers::calleeFrameTagSlot):
+ (JSC::AssemblyHelpers::calleeFramePayloadSlot):
+ (JSC::AssemblyHelpers::calleeArgumentTagSlot):
+ (JSC::AssemblyHelpers::calleeArgumentPayloadSlot):
+ (JSC::AssemblyHelpers::calleeFrameCallerFrame):
+ (JSC::AssemblyHelpers::selectScratchGPR):
+ * jit/CCallHelpers.h:
+ (JSC::CCallHelpers::setupArgumentsWithExecState):
+ * jit/GPRInfo.h:
+ * jit/JIT.cpp:
+ (JSC::JIT::privateCompile):
+ * jit/JIT.h:
+ * jit/JITCall.cpp:
+ (JSC::JIT::compileSetupVarargsFrame):
+ (JSC::JIT::compileOpCall):
+ * jit/JITCall32_64.cpp:
+ (JSC::JIT::compileSetupVarargsFrame):
+ (JSC::JIT::compileOpCall):
+ * jit/JITOperations.h:
+ * jit/SetupVarargsFrame.cpp:
+ (JSC::emitSetupVarargsFrameFastCase):
+ * jit/SetupVarargsFrame.h:
+ * runtime/Arguments.h:
+ (JSC::Arguments::create):
+ (JSC::Arguments::registerArraySizeInBytes):
+ (JSC::Arguments::finishCreation):
+ * runtime/Options.h:
+ * tests/stress/construct-varargs-inline-smaller-Foo.js: Added.
+ (Foo):
+ (bar):
+ (checkEqual):
+ (test):
+ * tests/stress/construct-varargs-inline.js: Added.
+ (Foo):
+ (bar):
+ (checkEqual):
+ (test):
+ * tests/stress/construct-varargs-no-inline.js: Added.
+ (Foo):
+ (bar):
+ (checkEqual):
+ (test):
+ * tests/stress/get-argument-by-val-in-inlined-varargs-call-out-of-bounds.js: Added.
+ (foo):
+ (bar):
+ * tests/stress/get-argument-by-val-safe-in-inlined-varargs-call-out-of-bounds.js: Added.
+ (foo):
+ (bar):
+ * tests/stress/get-my-argument-by-val-creates-arguments.js: Added.
+ (blah):
+ (foo):
+ (bar):
+ (checkEqual):
+ (test):
+ * tests/stress/load-varargs-then-inlined-call-exit-in-foo.js: Added.
+ (foo):
+ (bar):
+ (checkEqual):
+ * tests/stress/load-varargs-then-inlined-call-inlined.js: Added.
+ (foo):
+ (bar):
+ (baz):
+ (checkEqual):
+ (test):
+ * tests/stress/load-varargs-then-inlined-call.js: Added.
+ (foo):
+ (bar):
+ (checkEqual):
+ (test):
+
2015-02-17 Michael Saboff <msaboff@apple.com>
Unreviewed, Restoring the C LOOP insta-crash fix in r180184.
diff --git a/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj b/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj
index dd4183f..2a6795b 100644
--- a/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj
+++ b/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj
@@ -511,6 +511,7 @@
<ClCompile Include="..\ftl\FTLJITFinalizer.cpp" />
<ClCompile Include="..\ftl\FTLJSCall.cpp" />
<ClCompile Include="..\ftl\FTLJSCallBase.cpp" />
+ <ClCompile Include="..\ftl\FTLJSCallVarargs.cpp" />
<ClCompile Include="..\ftl\FTLLink.cpp" />
<ClCompile Include="..\ftl\FTLLocation.cpp" />
<ClCompile Include="..\ftl\FTLLowerDFGToLLVM.cpp" />
@@ -1195,6 +1196,7 @@
<ClInclude Include="..\ftl\FTLJITFinalizer.h" />
<ClInclude Include="..\ftl\FTLJSCall.h" />
<ClInclude Include="..\ftl\FTLJSCallBase.h" />
+ <ClInclude Include="..\ftl\FTLJSCallVarargs.h" />
<ClInclude Include="..\ftl\FTLLink.h" />
<ClInclude Include="..\ftl\FTLLocation.h" />
<ClInclude Include="..\ftl\FTLLowerDFGToLLVM.h" />
diff --git a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
index 4228724..439012a 100644
--- a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
+++ b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
@@ -564,6 +564,8 @@
0FCEFAE0180738C000472CE4 /* FTLLocation.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FCEFADE180738C000472CE4 /* FTLLocation.h */; settings = {ATTRIBUTES = (Private, ); }; };
0FD1202F1A8AED12000F5280 /* FTLJSCallBase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD1202D1A8AED12000F5280 /* FTLJSCallBase.cpp */; };
0FD120301A8AED12000F5280 /* FTLJSCallBase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD1202E1A8AED12000F5280 /* FTLJSCallBase.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0FD120331A8C85BD000F5280 /* FTLJSCallVarargs.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD120311A8C85BD000F5280 /* FTLJSCallVarargs.cpp */; };
+ 0FD120341A8C85BD000F5280 /* FTLJSCallVarargs.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD120321A8C85BD000F5280 /* FTLJSCallVarargs.h */; settings = {ATTRIBUTES = (Private, ); }; };
0FD2C92416D01EE900C7803F /* StructureInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD2C92316D01EE900C7803F /* StructureInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
0FD3C82614115D4000FD81CB /* DFGDriver.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD3C82014115CF800FD81CB /* DFGDriver.cpp */; };
0FD3C82814115D4F00FD81CB /* DFGDriver.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD3C82214115D0E00FD81CB /* DFGDriver.h */; };
@@ -2249,6 +2251,8 @@
0FCEFADE180738C000472CE4 /* FTLLocation.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLLocation.h; path = ftl/FTLLocation.h; sourceTree = "<group>"; };
0FD1202D1A8AED12000F5280 /* FTLJSCallBase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLJSCallBase.cpp; path = ftl/FTLJSCallBase.cpp; sourceTree = "<group>"; };
0FD1202E1A8AED12000F5280 /* FTLJSCallBase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLJSCallBase.h; path = ftl/FTLJSCallBase.h; sourceTree = "<group>"; };
+ 0FD120311A8C85BD000F5280 /* FTLJSCallVarargs.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLJSCallVarargs.cpp; path = ftl/FTLJSCallVarargs.cpp; sourceTree = "<group>"; };
+ 0FD120321A8C85BD000F5280 /* FTLJSCallVarargs.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLJSCallVarargs.h; path = ftl/FTLJSCallVarargs.h; sourceTree = "<group>"; };
0FD2C92316D01EE900C7803F /* StructureInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StructureInlines.h; sourceTree = "<group>"; };
0FD3C82014115CF800FD81CB /* DFGDriver.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGDriver.cpp; path = dfg/DFGDriver.cpp; sourceTree = "<group>"; };
0FD3C82214115D0E00FD81CB /* DFGDriver.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGDriver.h; path = dfg/DFGDriver.h; sourceTree = "<group>"; };
@@ -3610,6 +3614,8 @@
0F6B1CB4185FC9E900845D97 /* FTLJSCall.h */,
0FD1202D1A8AED12000F5280 /* FTLJSCallBase.cpp */,
0FD1202E1A8AED12000F5280 /* FTLJSCallBase.h */,
+ 0FD120311A8C85BD000F5280 /* FTLJSCallVarargs.cpp */,
+ 0FD120321A8C85BD000F5280 /* FTLJSCallVarargs.h */,
0F8F2B93172E049E007DBDA5 /* FTLLink.cpp */,
0F8F2B94172E049E007DBDA5 /* FTLLink.h */,
0FCEFADD180738C000472CE4 /* FTLLocation.cpp */,
@@ -5455,6 +5461,7 @@
6514F21918B3E1670098FF8B /* Bytecodes.h in Headers */,
65C0285D1717966800351E35 /* ARMv7DOpcode.h in Headers */,
0F8335B81639C1EA001443B5 /* ArrayAllocationProfile.h in Headers */,
+ 0FD120341A8C85BD000F5280 /* FTLJSCallVarargs.h in Headers */,
A7A8AF3517ADB5F3005AB174 /* ArrayBuffer.h in Headers */,
0FFC99D5184EE318009C10AB /* ArrayBufferNeuteringWatchpoint.h in Headers */,
A7A8AF3717ADB5F3005AB174 /* ArrayBufferView.h in Headers */,
@@ -6872,6 +6879,7 @@
A78A9774179738B8009DF744 /* DFGFailedFinalizer.cpp in Sources */,
A78A9776179738B8009DF744 /* DFGFinalizer.cpp in Sources */,
0F2BDC15151C5D4D00CD8910 /* DFGFixupPhase.cpp in Sources */,
+ 0FD120331A8C85BD000F5280 /* FTLJSCallVarargs.cpp in Sources */,
0F9D339617FFC4E60073C2BC /* DFGFlushedAt.cpp in Sources */,
A7D89CF717A0B8CC00773AD8 /* DFGFlushFormat.cpp in Sources */,
86EC9DC71328DF82002B2AD7 /* DFGGraph.cpp in Sources */,
diff --git a/Source/JavaScriptCore/assembler/MacroAssembler.h b/Source/JavaScriptCore/assembler/MacroAssembler.h
index c70f2b7..fd4c5bb 100644
--- a/Source/JavaScriptCore/assembler/MacroAssembler.h
+++ b/Source/JavaScriptCore/assembler/MacroAssembler.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2008, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2008, 2012-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -471,6 +471,16 @@
{
lshift32(trustedImm32ForShift(imm), srcDest);
}
+
+ void rshiftPtr(Imm32 imm, RegisterID srcDest)
+ {
+ rshift32(trustedImm32ForShift(imm), srcDest);
+ }
+
+ void urshiftPtr(Imm32 imm, RegisterID srcDest)
+ {
+ urshift32(trustedImm32ForShift(imm), srcDest);
+ }
void negPtr(RegisterID dest)
{
@@ -750,6 +760,16 @@
lshift64(trustedImm32ForShift(imm), srcDest);
}
+ void rshiftPtr(Imm32 imm, RegisterID srcDest)
+ {
+ rshift64(trustedImm32ForShift(imm), srcDest);
+ }
+
+ void urshiftPtr(Imm32 imm, RegisterID srcDest)
+ {
+ urshift64(trustedImm32ForShift(imm), srcDest);
+ }
+
void negPtr(RegisterID dest)
{
neg64(dest);
diff --git a/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h b/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
index 0a6dcea..86d34bb 100644
--- a/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
+++ b/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
@@ -689,6 +689,26 @@
urshift32(dest, imm, dest);
}
+ void urshift64(RegisterID src, RegisterID shiftAmount, RegisterID dest)
+ {
+ m_assembler.lsr<64>(dest, src, shiftAmount);
+ }
+
+ void urshift64(RegisterID src, TrustedImm32 imm, RegisterID dest)
+ {
+ m_assembler.lsr<64>(dest, src, imm.m_value & 0x3f); // 64-bit shift amounts occupy 6 bits
+ }
+
+ void urshift64(RegisterID shiftAmount, RegisterID dest)
+ {
+ urshift64(dest, shiftAmount, dest);
+ }
+
+ void urshift64(TrustedImm32 imm, RegisterID dest)
+ {
+ urshift64(dest, imm, dest);
+ }
+
void xor32(RegisterID src, RegisterID dest)
{
xor32(dest, src, dest);
diff --git a/Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h b/Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h
index a8243e2..920de74 100644
--- a/Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h
+++ b/Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2008, 2012, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2008, 2012, 2014, 2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -334,6 +334,11 @@
m_assembler.sarq_i8r(imm.m_value, dest);
}
+ void urshift64(TrustedImm32 imm, RegisterID dest)
+ {
+ m_assembler.shrq_i8r(imm.m_value, dest);
+ }
+
void mul64(RegisterID src, RegisterID dest)
{
m_assembler.imulq_rr(src, dest);
diff --git a/Source/JavaScriptCore/assembler/X86Assembler.h b/Source/JavaScriptCore/assembler/X86Assembler.h
index 7877eb7..e9ba4c5 100644
--- a/Source/JavaScriptCore/assembler/X86Assembler.h
+++ b/Source/JavaScriptCore/assembler/X86Assembler.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2008, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2008, 2012-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -881,6 +881,16 @@
}
}
+ void shrq_i8r(int imm, RegisterID dst)
+ {
+ if (imm == 1)
+ m_formatter.oneByteOp64(OP_GROUP2_Ev1, GROUP2_OP_SHR, dst);
+ else {
+ m_formatter.oneByteOp64(OP_GROUP2_EvIb, GROUP2_OP_SHR, dst);
+ m_formatter.immediate8(imm);
+ }
+ }
+
void shlq_i8r(int imm, RegisterID dst)
{
if (imm == 1)
diff --git a/Source/JavaScriptCore/bytecode/CallLinkInfo.h b/Source/JavaScriptCore/bytecode/CallLinkInfo.h
index 040cdb6..50c0746 100644
--- a/Source/JavaScriptCore/bytecode/CallLinkInfo.h
+++ b/Source/JavaScriptCore/bytecode/CallLinkInfo.h
@@ -61,6 +61,7 @@
, hasSeenShouldRepatch(false)
, hasSeenClosure(false)
, callType(None)
+ , maxNumArguments(0)
, slowPathCount(0)
{
}
@@ -91,6 +92,7 @@
bool hasSeenClosure : 1;
unsigned callType : 5; // CallType
unsigned calleeGPR : 8;
+ uint8_t maxNumArguments; // Only used for varargs calls.
uint32_t slowPathCount;
CodeOrigin codeOrigin;
diff --git a/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp b/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
index c8271e0..bf4618b 100644
--- a/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
+++ b/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
@@ -129,7 +129,9 @@
// We don't really need this, but anytime we have to debug this code, it becomes indispensable.
UNUSED_PARAM(profiledBlock);
- return computeFromCallLinkInfo(locker, callLinkInfo);
+ CallLinkStatus result = computeFromCallLinkInfo(locker, callLinkInfo);
+ result.m_maxNumArguments = callLinkInfo.maxNumArguments;
+ return result;
}
CallLinkStatus CallLinkStatus::computeFromCallLinkInfo(
@@ -291,6 +293,13 @@
return computeFor(profiledBlock, codeOrigin.bytecodeIndex, baselineMap);
}
+void CallLinkStatus::setProvenConstantCallee(CallVariant variant)
+{
+ m_variants = CallVariantList{ variant };
+ m_couldTakeSlowPath = false;
+ m_isProved = true;
+}
+
bool CallLinkStatus::isClosureCall() const
{
for (unsigned i = m_variants.size(); i--;) {
@@ -322,6 +331,9 @@
if (!m_variants.isEmpty())
out.print(comma, listDump(m_variants));
+
+ if (m_maxNumArguments)
+ out.print(comma, "maxNumArguments = ", m_maxNumArguments);
}
} // namespace JSC
diff --git a/Source/JavaScriptCore/bytecode/CallLinkStatus.h b/Source/JavaScriptCore/bytecode/CallLinkStatus.h
index 545c1bc..3ae2316 100644
--- a/Source/JavaScriptCore/bytecode/CallLinkStatus.h
+++ b/Source/JavaScriptCore/bytecode/CallLinkStatus.h
@@ -68,12 +68,6 @@
{
}
- CallLinkStatus& setIsProved(bool isProved)
- {
- m_isProved = isProved;
- return *this;
- }
-
static CallLinkStatus computeFor(
CodeBlock*, unsigned bytecodeIndex, const CallLinkInfoMap&);
@@ -108,6 +102,8 @@
static CallLinkStatus computeFor(
CodeBlock*, CodeOrigin, const CallLinkInfoMap&, const ContextMap&);
+ void setProvenConstantCallee(CallVariant);
+
bool isSet() const { return !m_variants.isEmpty() || m_couldTakeSlowPath; }
bool operator!() const { return !isSet(); }
@@ -123,6 +119,8 @@
bool isClosureCall() const; // Returns true if any callee is a closure call.
+ unsigned maxNumArguments() const { return m_maxNumArguments; }
+
void dump(PrintStream&) const;
private:
@@ -137,6 +135,7 @@
CallVariantList m_variants;
bool m_couldTakeSlowPath;
bool m_isProved;
+ unsigned m_maxNumArguments;
};
} // namespace JSC
diff --git a/Source/JavaScriptCore/bytecode/CodeOrigin.cpp b/Source/JavaScriptCore/bytecode/CodeOrigin.cpp
index 81b1e6a..6e1dd7d 100644
--- a/Source/JavaScriptCore/bytecode/CodeOrigin.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeOrigin.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -206,6 +206,12 @@
case JSC::InlineCallFrame::Construct:
out.print("Construct");
return;
+ case JSC::InlineCallFrame::CallVarargs:
+ out.print("CallVarargs");
+ return;
+ case JSC::InlineCallFrame::ConstructVarargs:
+ out.print("ConstructVarargs");
+ return;
case JSC::InlineCallFrame::GetterCall:
out.print("GetterCall");
return;
diff --git a/Source/JavaScriptCore/bytecode/CodeOrigin.h b/Source/JavaScriptCore/bytecode/CodeOrigin.h
index 03dd781..3a96d67 100644
--- a/Source/JavaScriptCore/bytecode/CodeOrigin.h
+++ b/Source/JavaScriptCore/bytecode/CodeOrigin.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -121,6 +121,8 @@
enum Kind {
Call,
Construct,
+ CallVarargs,
+ ConstructVarargs,
// For these, the stackOffset incorporates the argument count plus the true return PC
// slot.
@@ -140,30 +142,48 @@
return Call;
}
+ static Kind varargsKindFor(CodeSpecializationKind kind)
+ {
+ switch (kind) {
+ case CodeForCall:
+ return CallVarargs;
+ case CodeForConstruct:
+ return ConstructVarargs;
+ }
+ RELEASE_ASSERT_NOT_REACHED();
+ return Call;
+ }
+
static CodeSpecializationKind specializationKindFor(Kind kind)
{
switch (kind) {
case Call:
+ case CallVarargs:
case GetterCall:
case SetterCall:
return CodeForCall;
case Construct:
+ case ConstructVarargs:
return CodeForConstruct;
}
RELEASE_ASSERT_NOT_REACHED();
return CodeForCall;
}
- static bool isNormalCall(Kind kind)
+ static bool isVarargs(Kind kind)
{
switch (kind) {
- case Call:
- case Construct:
+ case CallVarargs:
+ case ConstructVarargs:
return true;
default:
return false;
}
}
+ bool isVarargs() const
+ {
+ return isVarargs(static_cast<Kind>(kind));
+ }
Vector<ValueRecovery> arguments; // Includes 'this'.
WriteBarrier<ScriptExecutable> executable;
@@ -171,10 +191,11 @@
CodeOrigin caller;
BitVector capturedVars; // Indexed by the machine call frame's variable numbering.
- signed stackOffset : 29;
- unsigned kind : 2; // real type is Kind
+ signed stackOffset : 28;
+ unsigned kind : 3; // real type is Kind
bool isClosureCall : 1; // If false then we know that callee/scope are constants and the DFG won't treat them as variables, i.e. they have to be recovered manually.
VirtualRegister argumentsRegister; // This is only set if the code uses arguments. The unmodified arguments register follows the unmodifiedArgumentsRegister() convention (see CodeBlock.h).
+ VirtualRegister argumentCountRegister; // Only set when we inline a varargs call.
// There is really no good notion of a "default" set of values for
// InlineCallFrame's fields. This constructor is here just to reduce confusion if
diff --git a/Source/JavaScriptCore/bytecode/ExitKind.cpp b/Source/JavaScriptCore/bytecode/ExitKind.cpp
index 87ad2ed..a3f8150 100644
--- a/Source/JavaScriptCore/bytecode/ExitKind.cpp
+++ b/Source/JavaScriptCore/bytecode/ExitKind.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -66,6 +66,8 @@
return "ArgumentsEscaped";
case NotStringObject:
return "NotStringObject";
+ case VarargsOverflow:
+ return "VarargsOverflow";
case Uncountable:
return "Uncountable";
case UncountableInvalidation:
diff --git a/Source/JavaScriptCore/bytecode/ExitKind.h b/Source/JavaScriptCore/bytecode/ExitKind.h
index 150135d..855a867 100644
--- a/Source/JavaScriptCore/bytecode/ExitKind.h
+++ b/Source/JavaScriptCore/bytecode/ExitKind.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -45,6 +45,7 @@
InadequateCoverage, // We exited because we ended up in code that didn't have profiling coverage.
ArgumentsEscaped, // We exited because arguments escaped but we didn't expect them to.
NotStringObject, // We exited because we shouldn't have attempted to optimize string object access.
+ VarargsOverflow, // We exited because a varargs call passed more arguments than we expected.
Uncountable, // We exited for none of the above reasons, and we should not count it. Most uses of this should be viewed as a FIXME.
UncountableInvalidation, // We exited because the code block was invalidated; this means that we've already counted the reasons why the code block was invalidated.
WatchdogTimerFired, // We exited because we need to service the watchdog timer.
diff --git a/Source/JavaScriptCore/bytecode/ValueRecovery.cpp b/Source/JavaScriptCore/bytecode/ValueRecovery.cpp
index b7de34b..29aa56f 100644
--- a/Source/JavaScriptCore/bytecode/ValueRecovery.cpp
+++ b/Source/JavaScriptCore/bytecode/ValueRecovery.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013, 2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -92,25 +92,25 @@
return;
#endif
case DisplacedInJSStack:
- out.printf("*%d", virtualRegister().offset());
+ out.print("*", virtualRegister());
return;
case Int32DisplacedInJSStack:
- out.printf("*int32(%d)", virtualRegister().offset());
+ out.print("*int32(", virtualRegister(), ")");
return;
case Int52DisplacedInJSStack:
- out.printf("*int52(%d)", virtualRegister().offset());
+ out.print("*int52(", virtualRegister(), ")");
return;
case StrictInt52DisplacedInJSStack:
- out.printf("*strictInt52(%d)", virtualRegister().offset());
+ out.print("*strictInt52(", virtualRegister(), ")");
return;
case DoubleDisplacedInJSStack:
- out.printf("*double(%d)", virtualRegister().offset());
+ out.print("*double(", virtualRegister(), ")");
return;
case CellDisplacedInJSStack:
- out.printf("*cell(%d)", virtualRegister().offset());
+ out.print("*cell(", virtualRegister(), ")");
return;
case BooleanDisplacedInJSStack:
- out.printf("*bool(%d)", virtualRegister().offset());
+ out.print("*bool(", virtualRegister(), ")");
return;
case ArgumentsThatWereNotCreated:
out.printf("arguments");
diff --git a/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h b/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
index 267d605..402b5cd 100644
--- a/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
+++ b/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -195,9 +195,20 @@
}
case SetArgument:
- // Assert that the state of arguments has been set.
- ASSERT(!m_state.block()->valuesAtHead.operand(node->local()).isClear());
+ // Assert that the state of arguments has been set. SetArgument means that someone set
+ // the argument values out-of-band, and currently this always means setting to a
+ // non-clear value.
+ ASSERT(!m_state.variables().operand(node->local()).isClear());
break;
+
+ case LoadVarargs: {
+ clobberWorld(node->origin.semantic, clobberLimit);
+ LoadVarargsData* data = node->loadVarargsData();
+ m_state.variables().operand(data->count).setType(SpecInt32);
+ for (unsigned i = data->limit - 1; i--;)
+ m_state.variables().operand(data->start.offset() + i).makeHeapTop();
+ break;
+ }
case BitAnd:
case BitOr:
@@ -1325,7 +1336,8 @@
// the arguments a bit. Note that this is not sufficient to force constant folding
// of GetMyArgumentsLength, because GetMyArgumentsLength is a clobbering operation.
// We perform further optimizations on this later on.
- if (node->origin.semantic.inlineCallFrame) {
+ if (node->origin.semantic.inlineCallFrame
+ && !node->origin.semantic.inlineCallFrame->isVarargs()) {
setConstant(
node, jsNumber(node->origin.semantic.inlineCallFrame->arguments.size() - 1));
m_state.setDidClobber(true); // Pretend that we clobbered to prevent constant folding.
@@ -1974,6 +1986,9 @@
case Construct:
case NativeCall:
case NativeConstruct:
+ case CallVarargs:
+ case CallForwardVarargs:
+ case ConstructVarargs:
clobberWorld(node->origin.semantic, clobberLimit);
forNode(node).makeHeapTop();
break;
diff --git a/Source/JavaScriptCore/dfg/DFGArgumentsSimplificationPhase.cpp b/Source/JavaScriptCore/dfg/DFGArgumentsSimplificationPhase.cpp
index 920c466..c98fad5 100644
--- a/Source/JavaScriptCore/dfg/DFGArgumentsSimplificationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGArgumentsSimplificationPhase.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2013, 2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -503,6 +503,8 @@
NodeOrigin origin = node->origin;
if (!origin.semantic.inlineCallFrame)
break;
+ if (origin.semantic.inlineCallFrame->isVarargs())
+ break;
// We know exactly what this will return. But only after we have checked
// that nobody has escaped our arguments.
diff --git a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
index 85e2a1c..7ef4e7f 100644
--- a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
+++ b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
@@ -170,7 +170,8 @@
}
// Helper for min and max.
- bool handleMinMax(int resultOperand, NodeType op, int registerOffset, int argumentCountIncludingThis);
+ template<typename ChecksFunctor>
+ bool handleMinMax(int resultOperand, NodeType op, int registerOffset, int argumentCountIncludingThis, const ChecksFunctor& insertChecks);
// Handle calls. This resolves issues surrounding inlining and intrinsics.
void handleCall(
@@ -182,20 +183,25 @@
Node* callTarget, int argCount, int registerOffset, CallLinkStatus);
void handleCall(int result, NodeType op, CodeSpecializationKind, unsigned instructionSize, int callee, int argCount, int registerOffset);
void handleCall(Instruction* pc, NodeType op, CodeSpecializationKind);
- void emitFunctionChecks(CallVariant, Node* callTarget, int registerOffset, CodeSpecializationKind);
- void undoFunctionChecks(CallVariant);
+ void handleVarargsCall(Instruction* pc, NodeType op, CodeSpecializationKind);
+ void emitFunctionChecks(CallVariant, Node* callTarget, VirtualRegister thisArgumnt);
void emitArgumentPhantoms(int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind);
unsigned inliningCost(CallVariant, int argumentCountIncludingThis, CodeSpecializationKind); // Return UINT_MAX if it's not an inlining candidate. By convention, intrinsics have a cost of 1.
// Handle inlining. Return true if it succeeded, false if we need to plant a call.
- bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind, SpeculatedType prediction);
+ bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, VirtualRegister thisArgument, VirtualRegister argumentsArgument, unsigned argumentsOffset, int argumentCountIncludingThis, unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind, SpeculatedType prediction);
enum CallerLinkability { CallerDoesNormalLinking, CallerLinksManually };
- bool attemptToInlineCall(Node* callTargetNode, int resultOperand, CallVariant, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind, CallerLinkability, SpeculatedType prediction, unsigned& inliningBalance);
- void inlineCall(Node* callTargetNode, int resultOperand, CallVariant, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind, CallerLinkability);
+ template<typename ChecksFunctor>
+ bool attemptToInlineCall(Node* callTargetNode, int resultOperand, CallVariant, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind, CallerLinkability, SpeculatedType prediction, unsigned& inliningBalance, const ChecksFunctor& insertChecks);
+ template<typename ChecksFunctor>
+ void inlineCall(Node* callTargetNode, int resultOperand, CallVariant, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind, CallerLinkability, const ChecksFunctor& insertChecks);
void cancelLinkingForBlock(InlineStackEntry*, BasicBlock*); // Only works when the given block is the last one to have been added for that inline stack entry.
// Handle intrinsic functions. Return true if it succeeded, false if we need to plant a call.
- bool handleIntrinsic(int resultOperand, Intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction);
- bool handleTypedArrayConstructor(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, TypedArrayType);
- bool handleConstantInternalFunction(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind);
+ template<typename ChecksFunctor>
+ bool handleIntrinsic(int resultOperand, Intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, const ChecksFunctor& insertChecks);
+ template<typename ChecksFunctor>
+ bool handleTypedArrayConstructor(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, TypedArrayType, const ChecksFunctor& insertChecks);
+ template<typename ChecksFunctor>
+ bool handleConstantInternalFunction(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind, const ChecksFunctor& insertChecks);
Node* handlePutByOffset(Node* base, unsigned identifier, PropertyOffset, Node* value);
Node* handleGetByOffset(SpeculatedType, Node* base, const StructureSet&, unsigned identifierNumber, PropertyOffset, NodeType op = GetByOffset);
void handleGetById(
@@ -528,6 +534,8 @@
numArguments = inlineCallFrame->arguments.size();
if (inlineCallFrame->isClosureCall)
flushDirect(inlineStackEntry->remapOperand(VirtualRegister(JSStack::Callee)));
+ if (inlineCallFrame->isVarargs())
+ flushDirect(inlineStackEntry->remapOperand(VirtualRegister(JSStack::ArgumentCount)));
} else
numArguments = inlineStackEntry->m_codeBlock->numParameters();
for (unsigned argument = numArguments; argument-- > 1;)
@@ -654,13 +662,6 @@
return result;
}
- void removeLastNodeFromGraph(NodeType expectedNodeType)
- {
- Node* node = m_currentBlock->takeLast();
- RELEASE_ASSERT(node->op() == expectedNodeType);
- m_graph.m_allocator.free(node);
- }
-
void addVarArgChild(Node* child)
{
m_graph.m_varArgChildren.append(Edge(child));
@@ -691,7 +692,7 @@
op, opInfo, callee, argCount, registerOffset, prediction);
VirtualRegister resultReg(result);
if (resultReg.isValid())
- set(VirtualRegister(result), call);
+ set(resultReg, call);
return call;
}
@@ -1042,8 +1043,8 @@
{
ASSERT(registerOffset <= 0);
- if (callTarget->hasConstant())
- callLinkStatus = CallLinkStatus(callTarget->asJSValue()).setIsProved(true);
+ if (callTarget->isCellConstant())
+ callLinkStatus.setProvenConstantCallee(CallVariant(callTarget->asCell()));
if (Options::verboseDFGByteCodeParsing())
dataLog(" Handling call at ", currentCodeOrigin(), ": ", callLinkStatus, "\n");
@@ -1060,7 +1061,7 @@
OpInfo callOpInfo;
- if (handleInlining(callTarget, result, callLinkStatus, registerOffset, argumentCountIncludingThis, nextOffset, op, kind, prediction)) {
+ if (handleInlining(callTarget, result, callLinkStatus, registerOffset, virtualRegisterForArgument(0, registerOffset), VirtualRegister(), 0, argumentCountIncludingThis, nextOffset, op, kind, prediction)) {
if (m_graph.compilation())
m_graph.compilation()->noticeInlinedCall();
return;
@@ -1072,7 +1073,7 @@
JSFunction* function = callee.function();
CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
if (function && function->isHostFunction()) {
- emitFunctionChecks(callee, callTarget, registerOffset, specializationKind);
+ emitFunctionChecks(callee, callTarget, virtualRegisterForArgument(0, registerOffset));
callOpInfo = OpInfo(m_graph.freeze(function));
if (op == Call)
@@ -1088,11 +1089,54 @@
addCall(result, op, callOpInfo, callTarget, argumentCountIncludingThis, registerOffset, prediction);
}
-void ByteCodeParser::emitFunctionChecks(CallVariant callee, Node* callTarget, int registerOffset, CodeSpecializationKind kind)
+void ByteCodeParser::handleVarargsCall(Instruction* pc, NodeType op, CodeSpecializationKind kind)
+{
+ ASSERT(OPCODE_LENGTH(op_call_varargs) == OPCODE_LENGTH(op_construct_varargs));
+
+ int result = pc[1].u.operand;
+ int callee = pc[2].u.operand;
+ int thisReg = pc[3].u.operand;
+ int arguments = pc[4].u.operand;
+ int firstFreeReg = pc[5].u.operand;
+ int firstVarArgOffset = pc[6].u.operand;
+
+ SpeculatedType prediction = getPrediction();
+
+ Node* callTarget = get(VirtualRegister(callee));
+
+ CallLinkStatus callLinkStatus = CallLinkStatus::computeFor(
+ m_inlineStackTop->m_profiledBlock, currentCodeOrigin(),
+ m_inlineStackTop->m_callLinkInfos, m_callContextMap);
+ if (callTarget->isCellConstant())
+ callLinkStatus.setProvenConstantCallee(CallVariant(callTarget->asCell()));
+
+ if (callLinkStatus.canOptimize()
+ && handleInlining(callTarget, result, callLinkStatus, firstFreeReg, VirtualRegister(thisReg), VirtualRegister(arguments), firstVarArgOffset, 0, m_currentIndex + OPCODE_LENGTH(op_call_varargs), op, InlineCallFrame::varargsKindFor(kind), prediction)) {
+ if (m_graph.compilation())
+ m_graph.compilation()->noticeInlinedCall();
+ return;
+ }
+
+ CallVarargsData* data = m_graph.m_callVarargsData.add();
+ data->firstVarArgOffset = firstVarArgOffset;
+
+ Node* thisChild;
+ if (kind == CodeForCall)
+ thisChild = get(VirtualRegister(thisReg));
+ else
+ thisChild = nullptr;
+
+ Node* call = addToGraph(op, OpInfo(data), OpInfo(prediction), callTarget, get(VirtualRegister(arguments)), thisChild);
+ VirtualRegister resultReg(result);
+ if (resultReg.isValid())
+ set(resultReg, call);
+}
+
+void ByteCodeParser::emitFunctionChecks(CallVariant callee, Node* callTarget, VirtualRegister thisArgumentReg)
{
Node* thisArgument;
- if (kind == CodeForCall)
- thisArgument = get(virtualRegisterForArgument(0, registerOffset));
+ if (thisArgumentReg.isValid())
+ thisArgument = get(thisArgumentReg);
else
thisArgument = 0;
@@ -1110,13 +1154,6 @@
addToGraph(CheckCell, OpInfo(m_graph.freeze(calleeCell)), callTargetForCheck, thisArgument);
}
-void ByteCodeParser::undoFunctionChecks(CallVariant callee)
-{
- removeLastNodeFromGraph(CheckCell);
- if (callee.isClosureCall())
- removeLastNodeFromGraph(GetExecutable);
-}
-
void ByteCodeParser::emitArgumentPhantoms(int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind kind)
{
for (int i = kind == CodeForCall ? 0 : 1; i < argumentCountIncludingThis; ++i)
@@ -1131,7 +1168,7 @@
FunctionExecutable* executable = callee.functionExecutable();
if (!executable) {
if (verbose)
- dataLog(" Failing because there is no function executable.");
+ dataLog(" Failing because there is no function executable.\n");
return UINT_MAX;
}
@@ -1158,6 +1195,16 @@
}
CapabilityLevel capabilityLevel = inlineFunctionForCapabilityLevel(
codeBlock, kind, callee.isClosureCall());
+ if (verbose) {
+ dataLog(" Kind: ", kind, "\n");
+ dataLog(" Is closure call: ", callee.isClosureCall(), "\n");
+ dataLog(" Capability level: ", capabilityLevel, "\n");
+ dataLog(" Might inline function: ", mightInlineFunctionFor(codeBlock, kind), "\n");
+ dataLog(" Might compile function: ", mightCompileFunctionFor(codeBlock, kind), "\n");
+ dataLog(" Is supported for inlining: ", isSupportedForInlining(codeBlock), "\n");
+ dataLog(" Needs activation: ", codeBlock->ownerExecutable()->needsActivation(), "\n");
+ dataLog(" Is inlining candidate: ", codeBlock->ownerExecutable()->isInliningCandidate(), "\n");
+ }
if (!canInline(capabilityLevel)) {
if (verbose)
dataLog(" Failing because the function is not inlineable.\n");
@@ -1210,13 +1257,15 @@
return codeBlock->instructionCount();
}
-void ByteCodeParser::inlineCall(Node* callTargetNode, int resultOperand, CallVariant callee, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind, CallerLinkability callerLinkability)
+template<typename ChecksFunctor>
+void ByteCodeParser::inlineCall(Node* callTargetNode, int resultOperand, CallVariant callee, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind, CallerLinkability callerLinkability, const ChecksFunctor& insertChecks)
{
CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
ASSERT(inliningCost(callee, argumentCountIncludingThis, specializationKind) != UINT_MAX);
CodeBlock* codeBlock = callee.functionExecutable()->baselineCodeBlockFor(specializationKind);
+ insertChecks(codeBlock);
// FIXME: Don't flush constants!
@@ -1356,44 +1405,62 @@
}
}
-bool ByteCodeParser::attemptToInlineCall(Node* callTargetNode, int resultOperand, CallVariant callee, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind, CallerLinkability callerLinkability, SpeculatedType prediction, unsigned& inliningBalance)
+template<typename ChecksFunctor>
+bool ByteCodeParser::attemptToInlineCall(Node* callTargetNode, int resultOperand, CallVariant callee, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind, CallerLinkability callerLinkability, SpeculatedType prediction, unsigned& inliningBalance, const ChecksFunctor& insertChecks)
{
CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
if (!inliningBalance)
return false;
+ bool didInsertChecks = false;
+ auto insertChecksWithAccounting = [&] () {
+ insertChecks(nullptr);
+ didInsertChecks = true;
+ };
+
+ if (verbose)
+ dataLog(" Considering callee ", callee, "\n");
+
if (InternalFunction* function = callee.internalFunction()) {
- if (handleConstantInternalFunction(resultOperand, function, registerOffset, argumentCountIncludingThis, specializationKind)) {
+ if (handleConstantInternalFunction(resultOperand, function, registerOffset, argumentCountIncludingThis, specializationKind, insertChecksWithAccounting)) {
+ RELEASE_ASSERT(didInsertChecks);
addToGraph(Phantom, callTargetNode);
emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
inliningBalance--;
return true;
}
+ RELEASE_ASSERT(!didInsertChecks);
return false;
}
Intrinsic intrinsic = callee.intrinsicFor(specializationKind);
if (intrinsic != NoIntrinsic) {
- if (handleIntrinsic(resultOperand, intrinsic, registerOffset, argumentCountIncludingThis, prediction)) {
+ if (handleIntrinsic(resultOperand, intrinsic, registerOffset, argumentCountIncludingThis, prediction, insertChecksWithAccounting)) {
+ RELEASE_ASSERT(didInsertChecks);
addToGraph(Phantom, callTargetNode);
emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
inliningBalance--;
return true;
}
+ RELEASE_ASSERT(!didInsertChecks);
return false;
}
unsigned myInliningCost = inliningCost(callee, argumentCountIncludingThis, specializationKind);
if (myInliningCost > inliningBalance)
return false;
-
- inlineCall(callTargetNode, resultOperand, callee, registerOffset, argumentCountIncludingThis, nextOffset, kind, callerLinkability);
+
+ inlineCall(callTargetNode, resultOperand, callee, registerOffset, argumentCountIncludingThis, nextOffset, kind, callerLinkability, insertChecks);
inliningBalance -= myInliningCost;
return true;
}
-bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind kind, SpeculatedType prediction)
+bool ByteCodeParser::handleInlining(
+ Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus,
+ int registerOffsetOrFirstFreeReg, VirtualRegister thisArgument,
+ VirtualRegister argumentsArgument, unsigned argumentsOffset, int argumentCountIncludingThis,
+ unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind kind, SpeculatedType prediction)
{
if (verbose) {
dataLog("Handling inlining...\n");
@@ -1407,6 +1474,13 @@
return false;
}
+ if (InlineCallFrame::isVarargs(kind)
+ && callLinkStatus.maxNumArguments() > Options::maximumVarargsForInlining()) {
+ if (verbose)
+ dataLog("Bailing inlining because of varargs.\n");
+ return false;
+ }
+
unsigned inliningBalance = Options::maximumFunctionForCallInlineCandidateInstructionCount();
if (specializationKind == CodeForConstruct)
inliningBalance = std::min(inliningBalance, Options::maximumFunctionForConstructInlineCandidateInstructionCount());
@@ -1417,17 +1491,105 @@
// simplification on the fly and this helps reduce compile times, but we can only leverage
// this in cases where we don't need control flow diamonds to check the callee.
if (!callLinkStatus.couldTakeSlowPath() && callLinkStatus.size() == 1) {
- emitFunctionChecks(
- callLinkStatus[0], callTargetNode, registerOffset, specializationKind);
+ int registerOffset;
+
+ // Only used for varargs calls.
+ unsigned mandatoryMinimum = 0;
+ unsigned maxNumArguments = 0;
+
+ if (InlineCallFrame::isVarargs(kind)) {
+ if (FunctionExecutable* functionExecutable = callLinkStatus[0].functionExecutable())
+ mandatoryMinimum = functionExecutable->parameterCount();
+ else
+ mandatoryMinimum = 0;
+
+ // includes "this"
+ maxNumArguments = std::max(
+ callLinkStatus.maxNumArguments(),
+ mandatoryMinimum + 1);
+
+ // We sort of pretend that this *is* the number of arguments that were passed.
+ argumentCountIncludingThis = maxNumArguments;
+
+ registerOffset = registerOffsetOrFirstFreeReg + 1;
+ registerOffset -= maxNumArguments; // includes "this"
+ registerOffset -= JSStack::CallFrameHeaderSize;
+ registerOffset = -WTF::roundUpToMultipleOf(
+ stackAlignmentRegisters(),
+ -registerOffset);
+ } else
+ registerOffset = registerOffsetOrFirstFreeReg;
+
bool result = attemptToInlineCall(
callTargetNode, resultOperand, callLinkStatus[0], registerOffset,
argumentCountIncludingThis, nextOffset, kind, CallerDoesNormalLinking, prediction,
- inliningBalance);
- if (!result && !callLinkStatus.isProved())
- undoFunctionChecks(callLinkStatus[0]);
+ inliningBalance, [&] (CodeBlock* codeBlock) {
+ emitFunctionChecks(callLinkStatus[0], callTargetNode, specializationKind == CodeForCall ? thisArgument : VirtualRegister());
+
+ // If we have a varargs call, we want to extract the arguments right now.
+ if (InlineCallFrame::isVarargs(kind)) {
+ int remappedRegisterOffset =
+ m_inlineStackTop->remapOperand(VirtualRegister(registerOffset)).offset();
+
+ int argumentStart = registerOffset + JSStack::CallFrameHeaderSize;
+ int remappedArgumentStart =
+ m_inlineStackTop->remapOperand(VirtualRegister(argumentStart)).offset();
+
+ LoadVarargsData* data = m_graph.m_loadVarargsData.add();
+ data->start = VirtualRegister(remappedArgumentStart + 1);
+ data->count = VirtualRegister(remappedRegisterOffset + JSStack::ArgumentCount);
+ data->offset = argumentsOffset;
+ data->limit = maxNumArguments;
+ data->mandatoryMinimum = mandatoryMinimum;
+
+ addToGraph(LoadVarargs, OpInfo(data), get(argumentsArgument));
+
+ // In DFG IR before SSA, we cannot insert control flow between the
+ // LoadVarargs and the last SetArgument. This isn't a problem once we get to DFG
+ // SSA. Fortunately, we also have other reasons for not inserting control flow
+ // before SSA.
+
+ VariableAccessData* countVariable = newVariableAccessData(
+ VirtualRegister(remappedRegisterOffset + JSStack::ArgumentCount), false);
+ // This is pretty lame, but it will force the count to be flushed as an int. This doesn't
+ // matter very much, since our use of a SetArgument and Flushes for this local slot is
+ // mostly just a formality.
+ countVariable->predict(SpecInt32);
+ countVariable->mergeIsProfitableToUnbox(true);
+ Node* setArgumentCount = addToGraph(SetArgument, OpInfo(countVariable));
+ m_currentBlock->variablesAtTail.setOperand(countVariable->local(), setArgumentCount);
+
+ if (specializationKind == CodeForCall)
+ set(VirtualRegister(argumentStart), get(thisArgument), ImmediateNakedSet);
+ for (unsigned argument = 1; argument < maxNumArguments; ++argument) {
+ VariableAccessData* variable = newVariableAccessData(
+ VirtualRegister(remappedArgumentStart + argument), false);
+ variable->mergeShouldNeverUnbox(true); // We currently have nowhere to put the type check on the LoadVarargs. LoadVarargs is effectful, so after it finishes, we cannot exit.
+
+ // For a while it had been my intention to do things like this inside the
+ // prediction injection phase. But in this case it's really best to do it here,
+ // because it's here that we have access to the variable access datas for the
+ // inlining we're about to do.
+ //
+ // Something else that's interesting here is that we'd really love to get
+ // predictions from the arguments loaded at the callsite, rather than the
+ // arguments received inside the callee. But that probably won't matter for most
+ // calls.
+ if (codeBlock && argument < static_cast<unsigned>(codeBlock->numParameters())) {
+ ConcurrentJITLocker locker(codeBlock->m_lock);
+ if (ValueProfile* profile = codeBlock->valueProfileForArgument(argument))
+ variable->predict(profile->computeUpdatedPrediction(locker));
+ }
+
+ Node* setArgument = addToGraph(SetArgument, OpInfo(variable));
+ m_currentBlock->variablesAtTail.setOperand(variable->local(), setArgument);
+ }
+ }
+ });
if (verbose) {
dataLog("Done inlining (simple).\n");
dataLog("Stack: ", currentCodeOrigin(), "\n");
+ dataLog("Result: ", result, "\n");
}
return result;
}
@@ -1437,11 +1599,9 @@
// do more detailed polyvariant/polymorphic profiling; and second, it reduces compile times in
// the DFG. And by polyvariant profiling we mean polyvariant profiling of *this* call. Note that
// we could improve that aspect of this by doing polymorphic inlining but having the profiling
- // also. Currently we opt against this, but it could be interesting. That would require having a
- // separate node for call edge profiling.
- // FIXME: Introduce the notion of a separate call edge profiling node.
- // https://bugs.webkit.org/show_bug.cgi?id=136033
- if (!isFTL(m_graph.m_plan.mode) || !Options::enablePolymorphicCallInlining()) {
+ // also.
+ if (!isFTL(m_graph.m_plan.mode) || !Options::enablePolymorphicCallInlining()
+ || InlineCallFrame::isVarargs(kind)) {
if (verbose) {
dataLog("Bailing inlining (hard).\n");
dataLog("Stack: ", currentCodeOrigin(), "\n");
@@ -1482,6 +1642,8 @@
dataLog("Stack: ", currentCodeOrigin(), "\n");
}
+ int registerOffset = registerOffsetOrFirstFreeReg;
+
// This makes me wish that we were in SSA all the time. We need to pick a variable into which to
// store the callee so that it will be accessible to all of the blocks we're about to create. We
// get away with doing an immediate-set here because we wouldn't have performed any side effects
@@ -1526,7 +1688,7 @@
bool inliningResult = attemptToInlineCall(
myCallTargetNode, resultOperand, callLinkStatus[i], registerOffset,
argumentCountIncludingThis, nextOffset, kind, CallerLinksManually, prediction,
- inliningBalance);
+ inliningBalance, [&] (CodeBlock*) { });
if (!inliningResult) {
// That failed so we let the block die. Nothing interesting should have been added to
@@ -1610,14 +1772,17 @@
return true;
}
-bool ByteCodeParser::handleMinMax(int resultOperand, NodeType op, int registerOffset, int argumentCountIncludingThis)
+template<typename ChecksFunctor>
+bool ByteCodeParser::handleMinMax(int resultOperand, NodeType op, int registerOffset, int argumentCountIncludingThis, const ChecksFunctor& insertChecks)
{
if (argumentCountIncludingThis == 1) { // Math.min()
+ insertChecks();
set(VirtualRegister(resultOperand), addToGraph(JSConstant, OpInfo(m_constantNaN)));
return true;
}
if (argumentCountIncludingThis == 2) { // Math.min(x)
+ insertChecks();
Node* result = get(VirtualRegister(virtualRegisterForArgument(1, registerOffset)));
addToGraph(Phantom, Edge(result, NumberUse));
set(VirtualRegister(resultOperand), result);
@@ -1625,6 +1790,7 @@
}
if (argumentCountIncludingThis == 3) { // Math.min(x, y)
+ insertChecks();
set(VirtualRegister(resultOperand), addToGraph(op, get(virtualRegisterForArgument(1, registerOffset)), get(virtualRegisterForArgument(2, registerOffset))));
return true;
}
@@ -1633,11 +1799,13 @@
return false;
}
-bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction)
+template<typename ChecksFunctor>
+bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, const ChecksFunctor& insertChecks)
{
switch (intrinsic) {
case AbsIntrinsic: {
if (argumentCountIncludingThis == 1) { // Math.abs()
+ insertChecks();
set(VirtualRegister(resultOperand), addToGraph(JSConstant, OpInfo(m_constantNaN)));
return true;
}
@@ -1645,6 +1813,7 @@
if (!MacroAssembler::supportsFloatingPointAbs())
return false;
+ insertChecks();
Node* node = addToGraph(ArithAbs, get(virtualRegisterForArgument(1, registerOffset)));
if (m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, Overflow))
node->mergeFlags(NodeMayOverflowInDFG);
@@ -1653,29 +1822,33 @@
}
case MinIntrinsic:
- return handleMinMax(resultOperand, ArithMin, registerOffset, argumentCountIncludingThis);
+ return handleMinMax(resultOperand, ArithMin, registerOffset, argumentCountIncludingThis, insertChecks);
case MaxIntrinsic:
- return handleMinMax(resultOperand, ArithMax, registerOffset, argumentCountIncludingThis);
+ return handleMinMax(resultOperand, ArithMax, registerOffset, argumentCountIncludingThis, insertChecks);
case SqrtIntrinsic:
case CosIntrinsic:
case SinIntrinsic: {
if (argumentCountIncludingThis == 1) {
+ insertChecks();
set(VirtualRegister(resultOperand), addToGraph(JSConstant, OpInfo(m_constantNaN)));
return true;
}
switch (intrinsic) {
case SqrtIntrinsic:
+ insertChecks();
set(VirtualRegister(resultOperand), addToGraph(ArithSqrt, get(virtualRegisterForArgument(1, registerOffset))));
return true;
case CosIntrinsic:
+ insertChecks();
set(VirtualRegister(resultOperand), addToGraph(ArithCos, get(virtualRegisterForArgument(1, registerOffset))));
return true;
case SinIntrinsic:
+ insertChecks();
set(VirtualRegister(resultOperand), addToGraph(ArithSin, get(virtualRegisterForArgument(1, registerOffset))));
return true;
@@ -1688,9 +1861,11 @@
case PowIntrinsic: {
if (argumentCountIncludingThis < 3) {
// Math.pow() and Math.pow(x) return NaN.
+ insertChecks();
set(VirtualRegister(resultOperand), addToGraph(JSConstant, OpInfo(m_constantNaN)));
return true;
}
+ insertChecks();
VirtualRegister xOperand = virtualRegisterForArgument(1, registerOffset);
VirtualRegister yOperand = virtualRegisterForArgument(2, registerOffset);
set(VirtualRegister(resultOperand), addToGraph(ArithPow, get(xOperand), get(yOperand)));
@@ -1710,6 +1885,7 @@
case Array::Double:
case Array::Contiguous:
case Array::ArrayStorage: {
+ insertChecks();
Node* arrayPush = addToGraph(ArrayPush, OpInfo(arrayMode.asWord()), OpInfo(prediction), get(virtualRegisterForArgument(0, registerOffset)), get(virtualRegisterForArgument(1, registerOffset)));
set(VirtualRegister(resultOperand), arrayPush);
@@ -1733,6 +1909,7 @@
case Array::Double:
case Array::Contiguous:
case Array::ArrayStorage: {
+ insertChecks();
Node* arrayPop = addToGraph(ArrayPop, OpInfo(arrayMode.asWord()), OpInfo(prediction), get(virtualRegisterForArgument(0, registerOffset)));
set(VirtualRegister(resultOperand), arrayPop);
return true;
@@ -1747,6 +1924,7 @@
if (argumentCountIncludingThis != 2)
return false;
+ insertChecks();
VirtualRegister thisOperand = virtualRegisterForArgument(0, registerOffset);
VirtualRegister indexOperand = virtualRegisterForArgument(1, registerOffset);
Node* charCode = addToGraph(StringCharCodeAt, OpInfo(ArrayMode(Array::String).asWord()), get(thisOperand), get(indexOperand));
@@ -1759,6 +1937,7 @@
if (argumentCountIncludingThis != 2)
return false;
+ insertChecks();
VirtualRegister thisOperand = virtualRegisterForArgument(0, registerOffset);
VirtualRegister indexOperand = virtualRegisterForArgument(1, registerOffset);
Node* charCode = addToGraph(StringCharAt, OpInfo(ArrayMode(Array::String).asWord()), get(thisOperand), get(indexOperand));
@@ -1770,6 +1949,7 @@
if (argumentCountIncludingThis != 2)
return false;
+ insertChecks();
VirtualRegister indexOperand = virtualRegisterForArgument(1, registerOffset);
Node* charCode = addToGraph(StringFromCharCode, get(indexOperand));
@@ -1782,6 +1962,7 @@
if (argumentCountIncludingThis != 2)
return false;
+ insertChecks();
Node* regExpExec = addToGraph(RegExpExec, OpInfo(0), OpInfo(prediction), get(virtualRegisterForArgument(0, registerOffset)), get(virtualRegisterForArgument(1, registerOffset)));
set(VirtualRegister(resultOperand), regExpExec);
@@ -1792,6 +1973,7 @@
if (argumentCountIncludingThis != 2)
return false;
+ insertChecks();
Node* regExpExec = addToGraph(RegExpTest, OpInfo(0), OpInfo(prediction), get(virtualRegisterForArgument(0, registerOffset)), get(virtualRegisterForArgument(1, registerOffset)));
set(VirtualRegister(resultOperand), regExpExec);
@@ -1801,6 +1983,7 @@
case IMulIntrinsic: {
if (argumentCountIncludingThis != 3)
return false;
+ insertChecks();
VirtualRegister leftOperand = virtualRegisterForArgument(1, registerOffset);
VirtualRegister rightOperand = virtualRegisterForArgument(2, registerOffset);
Node* left = get(leftOperand);
@@ -1812,29 +1995,34 @@
case FRoundIntrinsic: {
if (argumentCountIncludingThis != 2)
return false;
+ insertChecks();
VirtualRegister operand = virtualRegisterForArgument(1, registerOffset);
set(VirtualRegister(resultOperand), addToGraph(ArithFRound, get(operand)));
return true;
}
case DFGTrueIntrinsic: {
+ insertChecks();
set(VirtualRegister(resultOperand), jsConstant(jsBoolean(true)));
return true;
}
case OSRExitIntrinsic: {
+ insertChecks();
addToGraph(ForceOSRExit);
set(VirtualRegister(resultOperand), addToGraph(JSConstant, OpInfo(m_constantUndefined)));
return true;
}
case IsFinalTierIntrinsic: {
+ insertChecks();
set(VirtualRegister(resultOperand),
jsConstant(jsBoolean(Options::useFTLJIT() ? isFTL(m_graph.m_plan.mode) : true)));
return true;
}
case SetInt32HeapPredictionIntrinsic: {
+ insertChecks();
for (int i = 1; i < argumentCountIncludingThis; ++i) {
Node* node = get(virtualRegisterForArgument(i, registerOffset));
if (node->hasHeapPrediction())
@@ -1847,6 +2035,7 @@
case FiatInt52Intrinsic: {
if (argumentCountIncludingThis != 2)
return false;
+ insertChecks();
VirtualRegister operand = virtualRegisterForArgument(1, registerOffset);
if (enableInt52())
set(VirtualRegister(resultOperand), addToGraph(FiatInt52, get(operand)));
@@ -1860,9 +2049,10 @@
}
}
+template<typename ChecksFunctor>
bool ByteCodeParser::handleTypedArrayConstructor(
int resultOperand, InternalFunction* function, int registerOffset,
- int argumentCountIncludingThis, TypedArrayType type)
+ int argumentCountIncludingThis, TypedArrayType type, const ChecksFunctor& insertChecks)
{
if (!isTypedView(type))
return false;
@@ -1906,16 +2096,21 @@
if (argumentCountIncludingThis != 2)
return false;
-
+
+ insertChecks();
set(VirtualRegister(resultOperand),
addToGraph(NewTypedArray, OpInfo(type), get(virtualRegisterForArgument(1, registerOffset))));
return true;
}
+template<typename ChecksFunctor>
bool ByteCodeParser::handleConstantInternalFunction(
int resultOperand, InternalFunction* function, int registerOffset,
- int argumentCountIncludingThis, CodeSpecializationKind kind)
+ int argumentCountIncludingThis, CodeSpecializationKind kind, const ChecksFunctor& insertChecks)
{
+ if (verbose)
+ dataLog(" Handling constant internal function ", JSValue(function), "\n");
+
// If we ever find that we have a lot of internal functions that we specialize for,
// then we should probably have some sort of hashtable dispatch, or maybe even
// dispatch straight through the MethodTable of the InternalFunction. But for now,
@@ -1927,6 +2122,7 @@
if (function->globalObject() != m_inlineStackTop->m_codeBlock->globalObject())
return false;
+ insertChecks();
if (argumentCountIncludingThis == 2) {
set(VirtualRegister(resultOperand),
addToGraph(NewArrayWithSize, OpInfo(ArrayWithUndecided), get(virtualRegisterForArgument(1, registerOffset))));
@@ -1941,6 +2137,8 @@
}
if (function->classInfo() == StringConstructor::info()) {
+ insertChecks();
+
Node* result;
if (argumentCountIncludingThis <= 1)
@@ -1958,7 +2156,7 @@
for (unsigned typeIndex = 0; typeIndex < NUMBER_OF_TYPED_ARRAY_TYPES; ++typeIndex) {
bool result = handleTypedArrayConstructor(
resultOperand, function, registerOffset, argumentCountIncludingThis,
- indexToTypedArrayType(typeIndex));
+ indexToTypedArrayType(typeIndex), insertChecks);
if (result)
return true;
}
@@ -3122,43 +3320,70 @@
int thisReg = currentInstruction[3].u.operand;
int arguments = currentInstruction[4].u.operand;
int firstFreeReg = currentInstruction[5].u.operand;
+ int firstVarArgOffset = currentInstruction[6].u.operand;
- ASSERT(inlineCallFrame());
- ASSERT_UNUSED(arguments, arguments == m_inlineStackTop->m_codeBlock->argumentsRegister().offset());
- ASSERT(!m_inlineStackTop->m_codeBlock->symbolTable()->slowArguments());
+ if (arguments == m_inlineStackTop->m_codeBlock->uncheckedArgumentsRegister().offset()
+ && !m_inlineStackTop->m_codeBlock->symbolTable()->slowArguments()) {
+ if (inlineCallFrame()
+ && !inlineCallFrame()->isVarargs()
+ && !firstVarArgOffset) {
+ addToGraph(CheckArgumentsNotCreated);
- addToGraph(CheckArgumentsNotCreated);
+ unsigned argCount = inlineCallFrame()->arguments.size();
+
+ // Let's compute the register offset. We start with the last used register, and
+ // then adjust for the things we want in the call frame.
+ int registerOffset = firstFreeReg + 1;
+ registerOffset -= argCount; // We will be passing some arguments.
+ registerOffset -= JSStack::CallFrameHeaderSize; // We will pretend to have a call frame header.
+
+ // Get the alignment right.
+ registerOffset = -WTF::roundUpToMultipleOf(
+ stackAlignmentRegisters(),
+ -registerOffset);
- unsigned argCount = inlineCallFrame()->arguments.size();
+ ensureLocals(
+ m_inlineStackTop->remapOperand(
+ VirtualRegister(registerOffset)).toLocal());
- // Let's compute the register offset. We start with the last used register, and
- // then adjust for the things we want in the call frame.
- int registerOffset = firstFreeReg + 1;
- registerOffset -= argCount; // We will be passing some arguments.
- registerOffset -= JSStack::CallFrameHeaderSize; // We will pretend to have a call frame header.
+ // The bytecode wouldn't have set up the arguments. But we'll do it and make it
+ // look like the bytecode had done it.
+ int nextRegister = registerOffset + JSStack::CallFrameHeaderSize;
+ set(VirtualRegister(nextRegister++), get(VirtualRegister(thisReg)), ImmediateNakedSet);
+ for (unsigned argument = 1; argument < argCount; ++argument)
+ set(VirtualRegister(nextRegister++), get(virtualRegisterForArgument(argument)), ImmediateNakedSet);
- // Get the alignment right.
- registerOffset = -WTF::roundUpToMultipleOf(
- stackAlignmentRegisters(),
- -registerOffset);
-
- ensureLocals(
- m_inlineStackTop->remapOperand(
- VirtualRegister(registerOffset)).toLocal());
+ handleCall(
+ result, Call, CodeForCall, OPCODE_LENGTH(op_call_varargs),
+ callee, argCount, registerOffset);
+ NEXT_OPCODE(op_call_varargs);
+ }
+
+ // Emit CallForwardVarargs
+ // FIXME: This means we cannot inline forwarded varargs calls inside a varargs
+ // call frame. We will probably fix that once we finally get rid of the
+ // arguments object special-casing.
+ CallVarargsData* data = m_graph.m_callVarargsData.add();
+ data->firstVarArgOffset = firstVarArgOffset;
+
+ Node* call = addToGraph(
+ CallForwardVarargs, OpInfo(data), OpInfo(getPrediction()),
+ get(VirtualRegister(callee)), get(VirtualRegister(thisReg)));
+ VirtualRegister resultReg(result);
+ if (resultReg.isValid())
+ set(resultReg, call);
+ NEXT_OPCODE(op_call_varargs);
+ }
- // The bytecode wouldn't have set up the arguments. But we'll do it and make it
- // look like the bytecode had done it.
- int nextRegister = registerOffset + JSStack::CallFrameHeaderSize;
- set(VirtualRegister(nextRegister++), get(VirtualRegister(thisReg)), ImmediateNakedSet);
- for (unsigned argument = 1; argument < argCount; ++argument)
- set(VirtualRegister(nextRegister++), get(virtualRegisterForArgument(argument)), ImmediateNakedSet);
-
- handleCall(
- result, Call, CodeForCall, OPCODE_LENGTH(op_call_varargs),
- callee, argCount, registerOffset);
+ handleVarargsCall(currentInstruction, CallVarargs, CodeForCall);
NEXT_OPCODE(op_call_varargs);
}
+ case op_construct_varargs: {
+ handleVarargsCall(currentInstruction, ConstructVarargs, CodeForConstruct);
+ NEXT_OPCODE(op_construct_varargs);
+ }
+
case op_jneq_ptr:
// Statically speculate for now. It makes sense to let speculate-only jneq_ptr
// support simmer for a while before making it more general, since it's
diff --git a/Source/JavaScriptCore/dfg/DFGCapabilities.cpp b/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
index 37b38e2..ee4d4d1 100644
--- a/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
+++ b/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -96,6 +96,8 @@
CapabilityLevel capabilityLevel(OpcodeID opcodeID, CodeBlock* codeBlock, Instruction* pc)
{
+ UNUSED_PARAM(codeBlock); // This function does some bytecode parsing. Ordinarily bytecode parsing requires the owning CodeBlock. It's sort of strange that we don't use it here right now.
+
switch (opcodeID) {
case op_enter:
case op_touch_entry:
@@ -182,6 +184,8 @@
case op_throw_static_error:
case op_call:
case op_construct:
+ case op_call_varargs:
+ case op_construct_varargs:
case op_init_lazy_reg:
case op_create_arguments:
case op_tear_off_arguments:
@@ -223,14 +227,6 @@
return CanCompileAndInline;
}
- case op_call_varargs:
- if (codeBlock->usesArguments() && pc[4].u.operand == codeBlock->argumentsRegister().offset()
- && !pc[6].u.operand)
- return CanInline;
- // FIXME: We should handle this.
- // https://bugs.webkit.org/show_bug.cgi?id=127626
- return CannotCompile;
-
case op_new_regexp:
case op_create_lexical_environment:
case op_new_func:
diff --git a/Source/JavaScriptCore/dfg/DFGCapabilities.h b/Source/JavaScriptCore/dfg/DFGCapabilities.h
index da0390d..3047ad3 100644
--- a/Source/JavaScriptCore/dfg/DFGCapabilities.h
+++ b/Source/JavaScriptCore/dfg/DFGCapabilities.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -86,9 +86,7 @@
return leastUpperBound(CanCompileAndInline, computedCapabilityLevel);
if (mightCompile && !mightInline)
return leastUpperBound(CanCompile, computedCapabilityLevel);
- if (!mightCompile && mightInline)
- return leastUpperBound(CanInline, computedCapabilityLevel);
- if (!mightCompile && !mightInline)
+ if (!mightCompile)
return CannotCompile;
RELEASE_ASSERT_NOT_REACHED();
return CannotCompile;
@@ -142,6 +140,14 @@
return mightInlineFunctionForConstruct(codeBlock);
}
+inline bool mightCompileFunctionFor(CodeBlock* codeBlock, CodeSpecializationKind kind)
+{
+ if (kind == CodeForCall)
+ return mightCompileFunctionForCall(codeBlock);
+ ASSERT(kind == CodeForConstruct);
+ return mightCompileFunctionForConstruct(codeBlock);
+}
+
inline bool mightInlineFunction(CodeBlock* codeBlock)
{
return mightInlineFunctionFor(codeBlock, codeBlock->specializationKind());
diff --git a/Source/JavaScriptCore/dfg/DFGClobberize.h b/Source/JavaScriptCore/dfg/DFGClobberize.h
index 2c35f5c..723edaa 100644
--- a/Source/JavaScriptCore/dfg/DFGClobberize.h
+++ b/Source/JavaScriptCore/dfg/DFGClobberize.h
@@ -366,6 +366,9 @@
case Construct:
case NativeCall:
case NativeConstruct:
+ case CallVarargs:
+ case CallForwardVarargs:
+ case ConstructVarargs:
case ToPrimitive:
case In:
case GetMyArgumentsLengthSafe:
@@ -401,6 +404,13 @@
def(HeapLocation(VariableLoc, AbstractHeap(Variables, node->local())), node->child1().node());
return;
+ case LoadVarargs:
+ // This actually writes to local variables as well. But when it reads the array, it does
+ // so in a way that may trigger getters or various traps.
+ read(World);
+ write(World);
+ return;
+
case GetLocalUnlinked:
read(AbstractHeap(Variables, node->unlinkedLocal()));
def(HeapLocation(VariableLoc, AbstractHeap(Variables, node->unlinkedLocal())), node);
@@ -881,7 +891,7 @@
return;
}
- RELEASE_ASSERT_NOT_REACHED();
+ DFG_CRASH(graph, node, toCString("Unrecognized node type: ", Graph::opName(node->op())).data());
}
class NoOpClobberize {
diff --git a/Source/JavaScriptCore/dfg/DFGCommon.cpp b/Source/JavaScriptCore/dfg/DFGCommon.cpp
index a11d7b8..69ce603 100644
--- a/Source/JavaScriptCore/dfg/DFGCommon.cpp
+++ b/Source/JavaScriptCore/dfg/DFGCommon.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -26,10 +26,11 @@
#include "config.h"
#include "DFGCommon.h"
-#if ENABLE(DFG_JIT)
-
#include "DFGNode.h"
#include "JSCInlines.h"
+#include <wtf/PrintStream.h>
+
+#if ENABLE(DFG_JIT)
namespace JSC { namespace DFG {
@@ -131,3 +132,28 @@
#endif // ENABLE(DFG_JIT)
+namespace WTF {
+
+using namespace JSC::DFG;
+
+void printInternal(PrintStream& out, CapabilityLevel capabilityLevel)
+{
+ switch (capabilityLevel) {
+ case CannotCompile:
+ out.print("CannotCompile");
+ return;
+ case CanCompile:
+ out.print("CanCompile");
+ return;
+ case CanCompileAndInline:
+ out.print("CanCompileAndInline");
+ return;
+ case CapabilityLevelNotSet:
+ out.print("CapabilityLevelNotSet");
+ return;
+ }
+ RELEASE_ASSERT_NOT_REACHED();
+}
+
+} // namespace WTF
+
diff --git a/Source/JavaScriptCore/dfg/DFGCommon.h b/Source/JavaScriptCore/dfg/DFGCommon.h
index 68e7a41..e91274d 100644
--- a/Source/JavaScriptCore/dfg/DFGCommon.h
+++ b/Source/JavaScriptCore/dfg/DFGCommon.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011-2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -292,7 +292,6 @@
enum CapabilityLevel {
CannotCompile,
- CanInline,
CanCompile,
CanCompileAndInline,
CapabilityLevelNotSet
@@ -312,7 +311,6 @@
inline bool canInline(CapabilityLevel level)
{
switch (level) {
- case CanInline:
case CanCompileAndInline:
return true;
default:
@@ -325,14 +323,6 @@
switch (a) {
case CannotCompile:
return CannotCompile;
- case CanInline:
- switch (b) {
- case CanInline:
- case CanCompileAndInline:
- return CanInline;
- default:
- return CannotCompile;
- }
case CanCompile:
switch (b) {
case CanCompile:
@@ -364,5 +354,11 @@
} } // namespace JSC::DFG
+namespace WTF {
+
+void printInternal(PrintStream&, JSC::DFG::CapabilityLevel);
+
+} // namespace WTF
+
#endif // DFGCommon_h
diff --git a/Source/JavaScriptCore/dfg/DFGDoesGC.cpp b/Source/JavaScriptCore/dfg/DFGDoesGC.cpp
index 61aa9cb..b0b7896 100644
--- a/Source/JavaScriptCore/dfg/DFGDoesGC.cpp
+++ b/Source/JavaScriptCore/dfg/DFGDoesGC.cpp
@@ -117,6 +117,10 @@
case CompareStrictEq:
case Call:
case Construct:
+ case CallVarargs:
+ case ConstructVarargs:
+ case LoadVarargs:
+ case CallForwardVarargs:
case NativeCall:
case NativeConstruct:
case Breakpoint:
diff --git a/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp b/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
index 0dc3bf2..0c40677 100644
--- a/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
@@ -1214,6 +1214,10 @@
case AllocationProfileWatchpoint:
case Call:
case Construct:
+ case CallVarargs:
+ case ConstructVarargs:
+ case CallForwardVarargs:
+ case LoadVarargs:
case ProfileControlFlow:
case NativeCall:
case NativeConstruct:
diff --git a/Source/JavaScriptCore/dfg/DFGGraph.cpp b/Source/JavaScriptCore/dfg/DFGGraph.cpp
index 8b70cc9..7c9f804 100644
--- a/Source/JavaScriptCore/dfg/DFGGraph.cpp
+++ b/Source/JavaScriptCore/dfg/DFGGraph.cpp
@@ -313,6 +313,18 @@
out.print(comma, RawPointer(node->storagePointer()));
if (node->hasObjectMaterializationData())
out.print(comma, node->objectMaterializationData());
+ if (node->hasCallVarargsData())
+ out.print(comma, "firstVarArgOffset = ", node->callVarargsData()->firstVarArgOffset);
+ if (node->hasLoadVarargsData()) {
+ LoadVarargsData* data = node->loadVarargsData();
+ out.print(comma, "start = ", data->start, ", count = ", data->count);
+ if (data->machineStart.isValid())
+ out.print(", machineStart = ", data->machineStart);
+ if (data->machineCount.isValid())
+ out.print(", machineCount = ", data->machineCount);
+ out.print(", offset = ", data->offset, ", mandatoryMinimum = ", data->mandatoryMinimum);
+ out.print(", limit = ", data->limit);
+ }
if (node->isConstant())
out.print(comma, pointerDumpInContext(node->constant(), context));
if (node->isJump())
@@ -400,7 +412,7 @@
Node* phiNode = block->phis[i];
if (!phiNode->shouldGenerate() && phiNodeDumpMode == DumpLivePhisOnly)
continue;
- out.print(" @", phiNode->index(), "<", phiNode->refCount(), ">->(");
+ out.print(" @", phiNode->index(), "<", phiNode->local(), ",", phiNode->refCount(), ">->(");
if (phiNode->child1()) {
out.print("@", phiNode->child1()->index());
if (phiNode->child2()) {
@@ -869,10 +881,12 @@
if (reg.isArgument()) {
RELEASE_ASSERT(reg.offset() < JSStack::CallFrameHeaderSize);
- if (!codeOrigin.inlineCallFrame->isClosureCall)
- return false;
+ if (codeOrigin.inlineCallFrame->isClosureCall
+ && reg.offset() == JSStack::Callee)
+ return true;
- if (reg.offset() == JSStack::Callee)
+ if (codeOrigin.inlineCallFrame->isVarargs()
+ && reg.offset() == JSStack::ArgumentCount)
return true;
return false;
@@ -1235,6 +1249,51 @@
crash(*this, toCString("While handling block ", pointerDump(block), "\n\n"), file, line, function, assertion);
}
+ValueProfile* Graph::valueProfileFor(Node* node)
+{
+ if (!node)
+ return nullptr;
+
+ CodeBlock* profiledBlock = baselineCodeBlockFor(node->origin.semantic);
+
+ if (node->hasLocal(*this)) {
+ if (!node->local().isArgument())
+ return nullptr;
+ int argument = node->local().toArgument();
+ Node* argumentNode = m_arguments[argument];
+ if (!argumentNode)
+ return nullptr;
+ if (node->variableAccessData() != argumentNode->variableAccessData())
+ return nullptr;
+ return profiledBlock->valueProfileForArgument(argument);
+ }
+
+ if (node->hasHeapPrediction())
+ return profiledBlock->valueProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
+
+ return nullptr;
+}
+
+MethodOfGettingAValueProfile Graph::methodOfGettingAValueProfileFor(Node* node)
+{
+ if (!node)
+ return MethodOfGettingAValueProfile();
+
+ if (ValueProfile* valueProfile = valueProfileFor(node))
+ return MethodOfGettingAValueProfile(valueProfile);
+
+ if (node->op() == GetLocal) {
+ CodeBlock* profiledBlock = baselineCodeBlockFor(node->origin.semantic);
+
+ return MethodOfGettingAValueProfile::fromLazyOperand(
+ profiledBlock,
+ LazyOperandValueProfileKey(
+ node->origin.semantic.bytecodeIndex, node->local()));
+ }
+
+ return MethodOfGettingAValueProfile();
+}
+
} } // namespace JSC::DFG
#endif // ENABLE(DFG_JIT)
diff --git a/Source/JavaScriptCore/dfg/DFGGraph.h b/Source/JavaScriptCore/dfg/DFGGraph.h
index 5898b80..ba0934c 100644
--- a/Source/JavaScriptCore/dfg/DFGGraph.h
+++ b/Source/JavaScriptCore/dfg/DFGGraph.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -480,50 +480,8 @@
return m_profiledBlock->uncheckedActivationRegister();
}
- ValueProfile* valueProfileFor(Node* node)
- {
- if (!node)
- return nullptr;
-
- CodeBlock* profiledBlock = baselineCodeBlockFor(node->origin.semantic);
-
- if (node->hasLocal(*this)) {
- if (!node->local().isArgument())
- return 0;
- int argument = node->local().toArgument();
- Node* argumentNode = m_arguments[argument];
- if (!argumentNode)
- return nullptr;
- if (node->variableAccessData() != argumentNode->variableAccessData())
- return nullptr;
- return profiledBlock->valueProfileForArgument(argument);
- }
-
- if (node->hasHeapPrediction())
- return profiledBlock->valueProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
-
- return 0;
- }
-
- MethodOfGettingAValueProfile methodOfGettingAValueProfileFor(Node* node)
- {
- if (!node)
- return MethodOfGettingAValueProfile();
-
- if (ValueProfile* valueProfile = valueProfileFor(node))
- return MethodOfGettingAValueProfile(valueProfile);
-
- if (node->op() == GetLocal) {
- CodeBlock* profiledBlock = baselineCodeBlockFor(node->origin.semantic);
-
- return MethodOfGettingAValueProfile::fromLazyOperand(
- profiledBlock,
- LazyOperandValueProfileKey(
- node->origin.semantic.bytecodeIndex, node->local()));
- }
-
- return MethodOfGettingAValueProfile();
- }
+ ValueProfile* valueProfileFor(Node*);
+ MethodOfGettingAValueProfile methodOfGettingAValueProfileFor(Node*);
bool usesArguments() const
{
@@ -861,6 +819,8 @@
Bag<MultiGetByOffsetData> m_multiGetByOffsetData;
Bag<MultiPutByOffsetData> m_multiPutByOffsetData;
Bag<ObjectMaterializationData> m_objectMaterializationData;
+ Bag<CallVarargsData> m_callVarargsData;
+ Bag<LoadVarargsData> m_loadVarargsData;
Vector<InlineVariableData, 4> m_inlineVariableData;
HashMap<CodeBlock*, std::unique_ptr<FullBytecodeLiveness>> m_bytecodeLiveness;
bool m_hasArguments;
diff --git a/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp b/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
index 1379e51..fb1cdee 100644
--- a/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
+++ b/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -122,6 +122,7 @@
// lookupExceptionHandlerFromCallerFrame is passed two arguments, the VM and the exec (the CallFrame*).
move(TrustedImmPtr(vm()), GPRInfo::argumentGPR0);
move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
+ addPtr(TrustedImm32(m_graph.stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, stackPointerRegister);
#if CPU(X86)
// FIXME: should use the call abstraction, but this is currently in the SpeculativeJIT layer!
@@ -247,7 +248,7 @@
JSCallRecord& record = m_jsCalls[i];
CallLinkInfo& info = *record.m_info;
ThunkGenerator generator = linkThunkGeneratorFor(
- info.callType == CallLinkInfo::Construct ? CodeForConstruct : CodeForCall,
+ info.specializationKind(),
RegisterPreservationNotRequired);
linkBuffer.link(record.m_slowCall, FunctionPtr(m_vm->getCTIStub(generator).code().executableAddress()));
info.callReturnLocation = linkBuffer.locationOfNearCall(record.m_slowCall);
diff --git a/Source/JavaScriptCore/dfg/DFGMayExit.cpp b/Source/JavaScriptCore/dfg/DFGMayExit.cpp
index 559fff2..e77b7c3 100644
--- a/Source/JavaScriptCore/dfg/DFGMayExit.cpp
+++ b/Source/JavaScriptCore/dfg/DFGMayExit.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -84,6 +84,7 @@
case GetCallee:
case GetScope:
case PhantomLocal:
+ case CountExecution:
break;
default:
diff --git a/Source/JavaScriptCore/dfg/DFGNode.h b/Source/JavaScriptCore/dfg/DFGNode.h
index 2718b7b..7d083ce 100644
--- a/Source/JavaScriptCore/dfg/DFGNode.h
+++ b/Source/JavaScriptCore/dfg/DFGNode.h
@@ -183,6 +183,20 @@
bool didUseJumpTable;
};
+struct CallVarargsData {
+ int firstVarArgOffset;
+};
+
+struct LoadVarargsData {
+ VirtualRegister start; // Local for the first element.
+ VirtualRegister count; // Local for the count.
+ VirtualRegister machineStart;
+ VirtualRegister machineCount;
+ unsigned offset; // Which array element to start with. Usually this is 0.
+ unsigned mandatoryMinimum; // The number of elements on the stack that must be initialized; if the array is too short then the missing elements must get undefined. Does not include "this".
+ unsigned limit; // Maximum number of elements to load. Includes "this".
+};
+
// This type used in passing an immediate argument to Node constructor;
// distinguishes an immediate value (typically an index into a CodeBlock data structure -
// a constant index, argument, or identifier) from a Node*.
@@ -895,6 +909,35 @@
return bitwise_cast<WriteBarrier<Unknown>*>(m_opInfo);
}
+ bool hasCallVarargsData()
+ {
+ switch (op()) {
+ case CallVarargs:
+ case CallForwardVarargs:
+ case ConstructVarargs:
+ return true;
+ default:
+ return false;
+ }
+ }
+
+ CallVarargsData* callVarargsData()
+ {
+ ASSERT(hasCallVarargsData());
+ return bitwise_cast<CallVarargsData*>(m_opInfo);
+ }
+
+ bool hasLoadVarargsData()
+ {
+ return op() == LoadVarargs;
+ }
+
+ LoadVarargsData* loadVarargsData()
+ {
+ ASSERT(hasLoadVarargsData());
+ return bitwise_cast<LoadVarargsData*>(m_opInfo);
+ }
+
bool hasResult()
{
return !!result();
@@ -1049,6 +1092,9 @@
case GetMyArgumentByValSafe:
case Call:
case Construct:
+ case CallVarargs:
+ case ConstructVarargs:
+ case CallForwardVarargs:
case NativeCall:
case NativeConstruct:
case GetByOffset:
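The LoadVarargsData struct introduced above encodes a load-then-pad contract: copy the arguments array elements (skipping `offset` of them) into consecutive stack slots, fill any slot below `mandatoryMinimum` with undefined, and never exceed `limit` slots including "this". A minimal sketch of that contract in plain C++ (JSValueSim and the frame vector are hypothetical stand-ins for JSC's Register machinery):

```cpp
#include <cassert>
#include <optional>
#include <vector>

// Hypothetical stand-in for a JSValue slot; nullopt plays the role of undefined.
using JSValueSim = std::optional<int>;
static const JSValueSim undefinedValue = std::nullopt;

// Sketch of LoadVarargs semantics: copy the arguments (skipping the first
// `offset` of them) into frame slots, then fill any slot below
// `mandatoryMinimum` with undefined. Returns the count including "this",
// which is what gets stored at LoadVarargsData::count.
unsigned loadVarargsSketch(
    std::vector<JSValueSim>& frame,       // destination; frame[0] corresponds to data->start
    const std::vector<int>& args,
    unsigned offset, unsigned mandatoryMinimum, unsigned limit)
{
    unsigned length = args.size() > offset ? unsigned(args.size()) - offset : 0;
    unsigned countIncludingThis = 1 + length;
    assert(countIncludingThis <= limit);  // the JIT speculates VarargsOverflow here

    for (unsigned i = 0; i < length; ++i)
        frame[i] = args[offset + i];
    for (unsigned i = length; i < mandatoryMinimum; ++i)
        frame[i] = undefinedValue;        // missing mandatory slots become undefined
    return countIncludingThis;
}
```

This mirrors the slow path in operationLoadVarargs below, which performs the same undefined-padding loop after calling loadVarargs().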
diff --git a/Source/JavaScriptCore/dfg/DFGNodeType.h b/Source/JavaScriptCore/dfg/DFGNodeType.h
index 4e8e5e6..e003137 100644
--- a/Source/JavaScriptCore/dfg/DFGNodeType.h
+++ b/Source/JavaScriptCore/dfg/DFGNodeType.h
@@ -147,6 +147,7 @@
/* this must be the directly subsequent property put. Note that PutByVal */\
/* opcodes use VarArgs because they may have up to 4 children. */\
macro(GetByVal, NodeResultJS | NodeMustGenerate) \
+ macro(LoadVarargs, NodeMustGenerate) \
macro(PutByValDirect, NodeMustGenerate | NodeHasVarArgs) \
macro(PutByVal, NodeMustGenerate | NodeHasVarArgs) \
macro(PutByValAlias, NodeMustGenerate | NodeHasVarArgs) \
@@ -217,6 +218,9 @@
/* Calls. */\
macro(Call, NodeResultJS | NodeMustGenerate | NodeHasVarArgs) \
macro(Construct, NodeResultJS | NodeMustGenerate | NodeHasVarArgs) \
+ macro(CallVarargs, NodeResultJS | NodeMustGenerate) \
+ macro(CallForwardVarargs, NodeResultJS | NodeMustGenerate) \
+ macro(ConstructVarargs, NodeResultJS | NodeMustGenerate) \
macro(NativeCall, NodeResultJS | NodeMustGenerate | NodeHasVarArgs) \
macro(NativeConstruct, NodeResultJS | NodeMustGenerate | NodeHasVarArgs) \
\
diff --git a/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp b/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
index 85b6f3f..41c6d24 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -156,6 +156,17 @@
break;
}
+ case LoadVarargs: {
+ LoadVarargsData* data = node->loadVarargsData();
+ m_availability.m_locals.operand(data->count) =
+ Availability(FlushedAt(FlushedInt32, data->machineCount));
+ for (unsigned i = data->limit; i--;) {
+ m_availability.m_locals.operand(VirtualRegister(data->start.offset() + i)) =
+ Availability(FlushedAt(FlushedJSValue, VirtualRegister(data->machineStart.offset() + i)));
+ }
+ break;
+ }
+
default:
break;
}
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp b/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
index be3e2c9..2d690bf 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -152,7 +152,9 @@
switch (inlineCallFrame->kind) {
case InlineCallFrame::Call:
- case InlineCallFrame::Construct: {
+ case InlineCallFrame::Construct:
+ case InlineCallFrame::CallVarargs:
+ case InlineCallFrame::ConstructVarargs: {
CallLinkInfo* callLinkInfo =
baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
RELEASE_ASSERT(callLinkInfo);
@@ -195,12 +197,13 @@
if (trueReturnPC)
jit.storePtr(AssemblyHelpers::TrustedImmPtr(trueReturnPC), AssemblyHelpers::addressFor(inlineCallFrame->stackOffset + virtualRegisterForArgument(inlineCallFrame->arguments.size()).offset()));
-#if USE(JSVALUE64)
jit.storePtr(AssemblyHelpers::TrustedImmPtr(baselineCodeBlock), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::CodeBlock)));
+ if (!inlineCallFrame->isVarargs())
+ jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->arguments.size()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
+#if USE(JSVALUE64)
jit.store64(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
uint32_t locationBits = CallFrame::Location::encodeAsBytecodeOffset(codeOrigin.bytecodeIndex);
jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
- jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->arguments.size()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
if (!inlineCallFrame->isClosureCall)
jit.store64(AssemblyHelpers::TrustedImm64(JSValue::encode(JSValue(inlineCallFrame->calleeConstant()))), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::Callee)));
@@ -208,12 +211,10 @@
if (baselineCodeBlock->usesArguments())
jit.loadPtr(AssemblyHelpers::addressFor(VirtualRegister(inlineCallFrame->stackOffset + unmodifiedArgumentsRegister(baselineCodeBlock->argumentsRegister()).offset())), GPRInfo::regT3);
#else // USE(JSVALUE64) // so this is the 32-bit part
- jit.storePtr(AssemblyHelpers::TrustedImmPtr(baselineCodeBlock), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::CodeBlock)));
jit.storePtr(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
Instruction* instruction = baselineCodeBlock->instructions().begin() + codeOrigin.bytecodeIndex;
uint32_t locationBits = CallFrame::Location::encodeAsBytecodeInstruction(instruction);
jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
- jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->arguments.size()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
jit.store32(AssemblyHelpers::TrustedImm32(JSValue::CellTag), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::Callee)));
if (!inlineCallFrame->isClosureCall)
jit.storePtr(AssemblyHelpers::TrustedImmPtr(inlineCallFrame->calleeConstant()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::Callee)));
diff --git a/Source/JavaScriptCore/dfg/DFGOperations.cpp b/Source/JavaScriptCore/dfg/DFGOperations.cpp
index 33641e9..9c46fc9 100644
--- a/Source/JavaScriptCore/dfg/DFGOperations.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOperations.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -1018,6 +1018,27 @@
set->notifyWrite(vm, value, "Executed NotifyWrite");
}
+int32_t JIT_OPERATION operationSizeOfVarargs(ExecState* exec, EncodedJSValue encodedArguments, int32_t firstVarArgOffset)
+{
+ VM& vm = exec->vm();
+ NativeCallFrameTracer tracer(&vm, exec);
+ JSValue arguments = JSValue::decode(encodedArguments);
+
+ return sizeOfVarargs(exec, arguments, firstVarArgOffset);
+}
+
+void JIT_OPERATION operationLoadVarargs(ExecState* exec, int32_t firstElementDest, EncodedJSValue encodedArguments, int32_t offset, int32_t length, int32_t mandatoryMinimum)
+{
+ VM& vm = exec->vm();
+ NativeCallFrameTracer tracer(&vm, exec);
+ JSValue arguments = JSValue::decode(encodedArguments);
+
+ loadVarargs(exec, VirtualRegister(firstElementDest), arguments, offset, length);
+
+ for (int32_t i = length; i < mandatoryMinimum; ++i)
+ exec->r(firstElementDest + i) = jsUndefined();
+}
+
double JIT_OPERATION operationFModOnInts(int32_t a, int32_t b)
{
return fmod(a, b);
diff --git a/Source/JavaScriptCore/dfg/DFGOperations.h b/Source/JavaScriptCore/dfg/DFGOperations.h
index 2ae4687..78574e1c 100644
--- a/Source/JavaScriptCore/dfg/DFGOperations.h
+++ b/Source/JavaScriptCore/dfg/DFGOperations.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -125,6 +125,8 @@
char* JIT_OPERATION operationFindSwitchImmTargetForDouble(ExecState*, EncodedJSValue, size_t tableIndex);
char* JIT_OPERATION operationSwitchString(ExecState*, size_t tableIndex, JSString*);
void JIT_OPERATION operationNotifyWrite(ExecState*, VariableWatchpointSet*, EncodedJSValue);
+int32_t JIT_OPERATION operationSizeOfVarargs(ExecState*, EncodedJSValue arguments, int32_t firstVarArgOffset);
+void JIT_OPERATION operationLoadVarargs(ExecState*, int32_t firstElementDest, EncodedJSValue arguments, int32_t offset, int32_t length, int32_t mandatoryMinimum);
int64_t JIT_OPERATION operationConvertBoxedDoubleToInt52(EncodedJSValue);
int64_t JIT_OPERATION operationConvertDoubleToInt52(double);
diff --git a/Source/JavaScriptCore/dfg/DFGPlan.cpp b/Source/JavaScriptCore/dfg/DFGPlan.cpp
index 32deaa1..383128a 100644
--- a/Source/JavaScriptCore/dfg/DFGPlan.cpp
+++ b/Source/JavaScriptCore/dfg/DFGPlan.cpp
@@ -87,10 +87,10 @@
namespace JSC { namespace DFG {
-static void dumpAndVerifyGraph(Graph& graph, const char* text)
+static void dumpAndVerifyGraph(Graph& graph, const char* text, bool forceDump = false)
{
GraphDumpMode modeForFinalValidate = DumpGraph;
- if (verboseCompilationEnabled(graph.m_plan.mode)) {
+ if (verboseCompilationEnabled(graph.m_plan.mode) || forceDump) {
dataLog(text, "\n");
graph.dump();
modeForFinalValidate = DontDumpGraph;
@@ -369,7 +369,7 @@
return FailPath;
}
- dumpAndVerifyGraph(dfg, "Graph just before FTL lowering:");
+ dumpAndVerifyGraph(dfg, "Graph just before FTL lowering:", shouldShowDisassembly(mode));
bool haveLLVM;
Safepoint::Result safepointResult;
diff --git a/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h b/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
index 15f86dc..cbda2c4 100644
--- a/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
+++ b/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -133,11 +133,25 @@
m_read(VirtualRegister(inlineCallFrame->stackOffset + virtualRegisterForArgument(i).offset()));
if (inlineCallFrame->isClosureCall)
m_read(VirtualRegister(inlineCallFrame->stackOffset + JSStack::Callee));
+ if (inlineCallFrame->isVarargs())
+ m_read(VirtualRegister(inlineCallFrame->stackOffset + JSStack::ArgumentCount));
}
}
void writeTop()
{
+ if (m_node->op() == LoadVarargs) {
+ // Make sure we note the writes to the locals that will store the array elements and
+ // count.
+ LoadVarargsData* data = m_node->loadVarargsData();
+ m_write(data->count);
+ for (unsigned i = data->limit; i--;)
+ m_write(VirtualRegister(data->start.offset() + i));
+ }
+
+ // Note that we don't need to do anything special for CallForwardVarargs, since it reads
+ // our arguments the same way that any effectful thing might.
+
if (m_graph.m_codeBlock->usesArguments()) {
for (unsigned i = m_graph.m_codeBlock->numParameters(); i-- > 1;)
m_write(virtualRegisterForArgument(i));
diff --git a/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp b/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
index 5126299..5a5cec5 100644
--- a/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -188,6 +188,9 @@
case GetDirectPname:
case Call:
case Construct:
+ case CallVarargs:
+ case ConstructVarargs:
+ case CallForwardVarargs:
case NativeCall:
case NativeConstruct:
case GetGlobalVar:
@@ -635,6 +638,7 @@
case ConstantStoragePointer:
case MovHint:
case ZombieHint:
+ case LoadVarargs:
break;
// This gets ignored because it only pretends to produce a value.
diff --git a/Source/JavaScriptCore/dfg/DFGSSAConversionPhase.cpp b/Source/JavaScriptCore/dfg/DFGSSAConversionPhase.cpp
index dd5835f..cc0a06b 100644
--- a/Source/JavaScriptCore/dfg/DFGSSAConversionPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSSAConversionPhase.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
diff --git a/Source/JavaScriptCore/dfg/DFGSafeToExecute.h b/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
index a8c85f0..9199e54 100644
--- a/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
+++ b/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -189,6 +189,10 @@
case CompareStrictEq:
case Call:
case Construct:
+ case CallVarargs:
+ case ConstructVarargs:
+ case LoadVarargs:
+ case CallForwardVarargs:
case NewObject:
case NewArray:
case NewArrayWithSize:
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
index 33c777e4..86c836d 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -578,7 +578,6 @@
}
}
-#ifndef NDEBUG
// Used to ASSERT flushRegisters() has been called prior to
// calling out from JIT code to a C helper function.
bool isFlushed()
@@ -593,7 +592,6 @@
}
return true;
}
-#endif
#if USE(JSVALUE64)
static MacroAssembler::Imm64 valueOfJSConstantAsImm64(Node* node)
@@ -1454,6 +1452,26 @@
return appendCallWithExceptionCheckSetResult(operation, result);
}
+ JITCompiler::Call callOperation(Z_JITOperation_EJZZ operation, GPRReg result, GPRReg arg1, unsigned arg2, unsigned arg3)
+ {
+ m_jit.setupArgumentsWithExecState(arg1, TrustedImm32(arg2), TrustedImm32(arg3));
+ return appendCallWithExceptionCheckSetResult(operation, result);
+ }
+ JITCompiler::Call callOperation(F_JITOperation_EFJZZ operation, GPRReg result, GPRReg arg1, GPRReg arg2, unsigned arg3, GPRReg arg4)
+ {
+ m_jit.setupArgumentsWithExecState(arg1, arg2, TrustedImm32(arg3), arg4);
+ return appendCallWithExceptionCheckSetResult(operation, result);
+ }
+ JITCompiler::Call callOperation(Z_JITOperation_EJZ operation, GPRReg result, GPRReg arg1, unsigned arg2)
+ {
+ m_jit.setupArgumentsWithExecState(arg1, TrustedImm32(arg2));
+ return appendCallWithExceptionCheckSetResult(operation, result);
+ }
+ JITCompiler::Call callOperation(V_JITOperation_EZJZZZ operation, unsigned arg1, GPRReg arg2, unsigned arg3, GPRReg arg4, unsigned arg5)
+ {
+ m_jit.setupArgumentsWithExecState(TrustedImm32(arg1), arg2, TrustedImm32(arg3), arg4, TrustedImm32(arg5));
+ return appendCallWithExceptionCheck(operation);
+ }
#else // USE(JSVALUE32_64)
// EncodedJSValue in JSVALUE32_64 is a 64-bit integer. When being compiled in ARM EABI, it must be aligned even-numbered register (r0, r2 or [sp]).
@@ -1750,6 +1768,26 @@
return appendCallWithExceptionCheckSetResult(operation, result);
}
+ JITCompiler::Call callOperation(Z_JITOperation_EJZZ operation, GPRReg result, GPRReg arg1Tag, GPRReg arg1Payload, unsigned arg2, unsigned arg3)
+ {
+ m_jit.setupArgumentsWithExecState(arg1Payload, arg1Tag, TrustedImm32(arg2), TrustedImm32(arg3));
+ return appendCallWithExceptionCheckSetResult(operation, result);
+ }
+ JITCompiler::Call callOperation(F_JITOperation_EFJZZ operation, GPRReg result, GPRReg arg1, GPRReg arg2Tag, GPRReg arg2Payload, unsigned arg3, GPRReg arg4)
+ {
+ m_jit.setupArgumentsWithExecState(arg1, arg2Payload, arg2Tag, TrustedImm32(arg3), arg4);
+ return appendCallWithExceptionCheckSetResult(operation, result);
+ }
+ JITCompiler::Call callOperation(Z_JITOperation_EJZ operation, GPRReg result, GPRReg arg1Tag, GPRReg arg1Payload, unsigned arg2)
+ {
+ m_jit.setupArgumentsWithExecState(arg1Payload, arg1Tag, TrustedImm32(arg2));
+ return appendCallWithExceptionCheckSetResult(operation, result);
+ }
+ JITCompiler::Call callOperation(V_JITOperation_EZJZZZ operation, unsigned arg1, GPRReg arg2Tag, GPRReg arg2Payload, unsigned arg3, GPRReg arg4, unsigned arg5)
+ {
+ m_jit.setupArgumentsWithExecState(TrustedImm32(arg1), arg2Payload, arg2Tag, TrustedImm32(arg3), arg4, TrustedImm32(arg5));
+ return appendCallWithExceptionCheck(operation);
+ }
#undef EABI_32BIT_DUMMY_ARG
#undef SH4_32BIT_DUMMY_ARG
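The callOperation overloads added above follow JSC's operation-signature naming scheme: the return type comes first, then one letter per argument. Assuming the usual convention (E = ExecState*, J = EncodedJSValue, Z = int32_t, F = frame pointer, V = void), the new names decode as in this hypothetical sketch (the real typedefs live in JITOperations.h):

```cpp
#include <cassert>
#include <cstdint>

// Forward declarations standing in for the real JSC types.
struct ExecState;
struct CallFrame;
using EncodedJSValue = int64_t;

// Decoding of the four signatures used by the varargs paths:
using Z_JITOperation_EJZ    = int32_t (*)(ExecState*, EncodedJSValue, int32_t);
using Z_JITOperation_EJZZ   = int32_t (*)(ExecState*, EncodedJSValue, int32_t, int32_t);
using F_JITOperation_EFJZZ  = CallFrame* (*)(ExecState*, CallFrame*, EncodedJSValue, int32_t, int32_t);
using V_JITOperation_EZJZZZ = void (*)(ExecState*, int32_t, EncodedJSValue, int32_t, int32_t, int32_t);

// Hypothetical operation matching the Z_JITOperation_EJZ shape, as
// operationSizeOfVarargs(exec, arguments, firstVarArgOffset) does.
static int32_t fakeSizeOfVarargs(ExecState*, EncodedJSValue, int32_t firstVarArgOffset)
{
    return 3 - firstVarArgOffset; // pretend the array has 3 elements
}
```

So, for example, operationSizeOfVarargs is dispatched through the Z_JITOperation_EJZ overload and operationLoadVarargs through V_JITOperation_EZJZZZ.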
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
index 1e720ec..62b1836 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
* Copyright (C) 2011 Intel Corporation. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@@ -40,6 +40,7 @@
#include "JSPropertyNameEnumerator.h"
#include "ObjectPrototype.h"
#include "JSCInlines.h"
+#include "SetupVarargsFrame.h"
#include "TypeProfilerLog.h"
namespace JSC { namespace DFG {
@@ -638,35 +639,171 @@
void SpeculativeJIT::emitCall(Node* node)
{
- bool isCall = node->op() == Call;
- if (!isCall)
- ASSERT(node->op() == Construct);
+ CallLinkInfo::CallType callType;
+ bool isCall;
+ bool isVarargs;
+ switch (node->op()) {
+ case Call:
+ callType = CallLinkInfo::Call;
+ isCall = true;
+ isVarargs = false;
+ break;
+ case Construct:
+ callType = CallLinkInfo::Construct;
+ isCall = false;
+ isVarargs = false;
+ break;
+ case CallVarargs:
+ case CallForwardVarargs:
+ callType = CallLinkInfo::CallVarargs;
+ isCall = true;
+ isVarargs = true;
+ break;
+ case ConstructVarargs:
+ callType = CallLinkInfo::ConstructVarargs;
+ isCall = false;
+ isVarargs = true;
+ break;
+ default:
+ DFG_CRASH(m_jit.graph(), node, "bad node type");
+ break;
+ }
- // For constructors, the this argument is not passed but we have to make space
- // for it.
- int dummyThisArgument = isCall ? 0 : 1;
-
- CallLinkInfo::CallType callType = isCall ? CallLinkInfo::Call : CallLinkInfo::Construct;
-
- Edge calleeEdge = m_jit.graph().m_varArgChildren[node->firstChild()];
-
- // The call instruction's first child is either the function (normal call) or the
- // receiver (method call). subsequent children are the arguments.
- int numPassedArgs = node->numChildren() - 1;
+ Edge calleeEdge = m_jit.graph().child(node, 0);
- int numArgs = numPassedArgs + dummyThisArgument;
+ // Gotta load the arguments somehow. Varargs is trickier.
+ if (isVarargs) {
+ CallVarargsData* data = node->callVarargsData();
- m_jit.store32(MacroAssembler::TrustedImm32(numArgs), m_jit.calleeFramePayloadSlot(JSStack::ArgumentCount));
+ GPRReg argumentsPayloadGPR;
+ GPRReg argumentsTagGPR;
+ GPRReg scratchGPR1;
+ GPRReg scratchGPR2;
+ GPRReg scratchGPR3;
+
+ if (node->op() == CallForwardVarargs) {
+ // We avoid calling flushRegisters() inside the control flow of CallForwardVarargs.
+ flushRegisters();
+ }
+
+ auto loadArgumentsGPR = [&] (GPRReg reservedGPR) {
+ if (node->op() == CallForwardVarargs) {
+ argumentsTagGPR = JITCompiler::selectScratchGPR(reservedGPR);
+ argumentsPayloadGPR = JITCompiler::selectScratchGPR(reservedGPR, argumentsTagGPR);
+ m_jit.load32(
+ JITCompiler::tagFor(
+ m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic)),
+ argumentsTagGPR);
+ m_jit.load32(
+ JITCompiler::payloadFor(
+ m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic)),
+ argumentsPayloadGPR);
+ } else {
+ if (reservedGPR != InvalidGPRReg)
+ lock(reservedGPR);
+ JSValueOperand arguments(this, node->child2());
+ argumentsTagGPR = arguments.tagGPR();
+ argumentsPayloadGPR = arguments.payloadGPR();
+ if (reservedGPR != InvalidGPRReg)
+ unlock(reservedGPR);
+ flushRegisters();
+ }
+
+ scratchGPR1 = JITCompiler::selectScratchGPR(argumentsPayloadGPR, argumentsTagGPR, reservedGPR);
+ scratchGPR2 = JITCompiler::selectScratchGPR(argumentsPayloadGPR, argumentsTagGPR, scratchGPR1, reservedGPR);
+ scratchGPR3 = JITCompiler::selectScratchGPR(argumentsPayloadGPR, argumentsTagGPR, scratchGPR1, scratchGPR2, reservedGPR);
+ };
+
+ loadArgumentsGPR(InvalidGPRReg);
+
+ // At this point we have the whole register file to ourselves, and argumentsGPR has the
+ // arguments register. Select some scratch registers.
+
+ // We will use scratchGPR2 to point to our stack frame.
+
+ unsigned numUsedStackSlots = m_jit.graph().m_nextMachineLocal;
+
+ JITCompiler::Jump haveArguments;
+ GPRReg resultGPR = GPRInfo::regT0;
+ if (node->op() == CallForwardVarargs) {
+ // Do the horrific foo.apply(this, arguments) optimization.
+ // FIXME: do this optimization at the IR level instead of dynamically by testing the
+ // arguments register. This will happen once we get rid of the arguments lazy creation and
+ // lazy tear-off.
+
+ JITCompiler::JumpList slowCase;
+ slowCase.append(
+ m_jit.branch32(
+ JITCompiler::NotEqual,
+ argumentsTagGPR, TrustedImm32(JSValue::EmptyValueTag)));
+
+ m_jit.move(TrustedImm32(numUsedStackSlots), scratchGPR2);
+ emitSetupVarargsFrameFastCase(m_jit, scratchGPR2, scratchGPR1, scratchGPR2, scratchGPR3, node->origin.semantic.inlineCallFrame, data->firstVarArgOffset, slowCase);
+ resultGPR = scratchGPR2;
+
+ haveArguments = m_jit.jump();
+ slowCase.link(&m_jit);
+ }
- for (int i = 0; i < numPassedArgs; i++) {
- Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
- JSValueOperand arg(this, argEdge);
- GPRReg argTagGPR = arg.tagGPR();
- GPRReg argPayloadGPR = arg.payloadGPR();
- use(argEdge);
-
- m_jit.store32(argTagGPR, m_jit.calleeArgumentTagSlot(i + dummyThisArgument));
- m_jit.store32(argPayloadGPR, m_jit.calleeArgumentPayloadSlot(i + dummyThisArgument));
+ DFG_ASSERT(m_jit.graph(), node, isFlushed());
+
+ // Right now, arguments is in argumentsTagGPR/argumentsPayloadGPR and the register file is
+ // flushed.
+ callOperation(operationSizeFrameForVarargs, GPRInfo::returnValueGPR, argumentsTagGPR, argumentsPayloadGPR, numUsedStackSlots, data->firstVarArgOffset);
+
+ // Now we have the argument count of the callee frame, but we've lost the arguments operand.
+ // Reconstruct the arguments operand while preserving the callee frame.
+ loadArgumentsGPR(GPRInfo::returnValueGPR);
+ m_jit.move(TrustedImm32(numUsedStackSlots), scratchGPR1);
+ emitSetVarargsFrame(m_jit, GPRInfo::returnValueGPR, false, scratchGPR1, scratchGPR1);
+ m_jit.addPtr(TrustedImm32(-(sizeof(CallerFrameAndPC) + WTF::roundUpToMultipleOf(stackAlignmentBytes(), 6 * sizeof(void*)))), scratchGPR1, JITCompiler::stackPointerRegister);
+
+ callOperation(operationSetupVarargsFrame, GPRInfo::returnValueGPR, scratchGPR1, argumentsTagGPR, argumentsPayloadGPR, data->firstVarArgOffset, GPRInfo::returnValueGPR);
+ m_jit.move(GPRInfo::returnValueGPR, resultGPR);
+
+ if (node->op() == CallForwardVarargs)
+ haveArguments.link(&m_jit);
+
+ m_jit.addPtr(TrustedImm32(sizeof(CallerFrameAndPC)), resultGPR, JITCompiler::stackPointerRegister);
+
+ DFG_ASSERT(m_jit.graph(), node, isFlushed());
+
+ if (node->op() != CallForwardVarargs)
+ use(node->child2());
+
+ if (isCall) {
+ // Now set up the "this" argument.
+ JSValueOperand thisArgument(this, node->op() == CallForwardVarargs ? node->child2() : node->child3());
+ GPRReg thisArgumentTagGPR = thisArgument.tagGPR();
+ GPRReg thisArgumentPayloadGPR = thisArgument.payloadGPR();
+ thisArgument.use();
+
+ m_jit.store32(thisArgumentTagGPR, JITCompiler::calleeArgumentTagSlot(0));
+ m_jit.store32(thisArgumentPayloadGPR, JITCompiler::calleeArgumentPayloadSlot(0));
+ }
+ } else {
+ // For constructors, the this argument is not passed but we have to make space
+ // for it.
+ int dummyThisArgument = isCall ? 0 : 1;
+
+ // The call instruction's first child is either the function (normal call) or the
+        // receiver (method call). Subsequent children are the arguments.
+ int numPassedArgs = node->numChildren() - 1;
+
+ int numArgs = numPassedArgs + dummyThisArgument;
+
+ m_jit.store32(MacroAssembler::TrustedImm32(numArgs), m_jit.calleeFramePayloadSlot(JSStack::ArgumentCount));
+
+ for (int i = 0; i < numPassedArgs; i++) {
+ Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
+ JSValueOperand arg(this, argEdge);
+ GPRReg argTagGPR = arg.tagGPR();
+ GPRReg argPayloadGPR = arg.payloadGPR();
+ use(argEdge);
+
+ m_jit.store32(argTagGPR, m_jit.calleeArgumentTagSlot(i + dummyThisArgument));
+ m_jit.store32(argPayloadGPR, m_jit.calleeArgumentPayloadSlot(i + dummyThisArgument));
+ }
}
JSValueOperand callee(this, calleeEdge);
@@ -724,6 +861,10 @@
info->codeOrigin = node->origin.semantic;
info->calleeGPR = calleePayloadGPR;
m_jit.addJSCall(fastCall, slowCall, targetToCheck, info);
+
+ // If we were varargs, then after the calls are done, we need to reestablish our stack pointer.
+ if (isVarargs)
+ m_jit.addPtr(TrustedImm32(m_jit.graph().stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, JITCompiler::stackPointerRegister);
}
template<bool strict>
@@ -4156,9 +4297,59 @@
case Call:
case Construct:
+ case CallVarargs:
+ case CallForwardVarargs:
+ case ConstructVarargs:
emitCall(node);
break;
+ case LoadVarargs: {
+ LoadVarargsData* data = node->loadVarargsData();
+
+ GPRReg argumentsTagGPR;
+ GPRReg argumentsPayloadGPR;
+ {
+ JSValueOperand arguments(this, node->child1());
+ argumentsTagGPR = arguments.tagGPR();
+ argumentsPayloadGPR = arguments.payloadGPR();
+ flushRegisters();
+ }
+
+ callOperation(operationSizeOfVarargs, GPRInfo::returnValueGPR, argumentsTagGPR, argumentsPayloadGPR, data->offset);
+
+ lock(GPRInfo::returnValueGPR);
+ {
+ JSValueOperand arguments(this, node->child1());
+ argumentsTagGPR = arguments.tagGPR();
+ argumentsPayloadGPR = arguments.payloadGPR();
+ flushRegisters();
+ }
+ unlock(GPRInfo::returnValueGPR);
+
+ // FIXME: There is a chance that we will call an effectful length property twice. This is safe
+ // from the standpoint of the VM's integrity, but it's subtly wrong from a spec compliance
+ // standpoint. The best solution would be one where we can exit *into* the op_call_varargs right
+ // past the sizing.
+ // https://bugs.webkit.org/show_bug.cgi?id=141448
+
+ GPRReg argCountIncludingThisGPR =
+ JITCompiler::selectScratchGPR(GPRInfo::returnValueGPR, argumentsTagGPR, argumentsPayloadGPR);
+
+ m_jit.add32(TrustedImm32(1), GPRInfo::returnValueGPR, argCountIncludingThisGPR);
+ speculationCheck(
+ VarargsOverflow, JSValueSource(), Edge(), m_jit.branch32(
+ MacroAssembler::Above,
+ argCountIncludingThisGPR,
+ TrustedImm32(data->limit)));
+
+ m_jit.store32(argCountIncludingThisGPR, JITCompiler::payloadFor(data->machineCount));
+
+ callOperation(operationLoadVarargs, data->machineStart.offset(), argumentsTagGPR, argumentsPayloadGPR, data->offset, GPRInfo::returnValueGPR, data->mandatoryMinimum);
+
+ noResult(node);
+ break;
+ }
+
case CreateActivation: {
GPRTemporary result(this);
GPRReg resultGPR = result.gpr();
@@ -4278,9 +4469,20 @@
TrustedImm32(JSValue::EmptyValueTag)));
}
- ASSERT(!node->origin.semantic.inlineCallFrame);
- m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultGPR);
- m_jit.sub32(TrustedImm32(1), resultGPR);
+ if (node->origin.semantic.inlineCallFrame
+ && !node->origin.semantic.inlineCallFrame->isVarargs()) {
+ m_jit.move(
+ TrustedImm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1),
+ resultGPR);
+ } else {
+ VirtualRegister argumentCountRegister;
+ if (!node->origin.semantic.inlineCallFrame)
+ argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+ else
+ argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+ m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultGPR);
+ m_jit.sub32(TrustedImm32(1), resultGPR);
+ }
int32Result(resultGPR, node);
break;
}
@@ -4296,14 +4498,21 @@
JITCompiler::tagFor(m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic)),
TrustedImm32(JSValue::EmptyValueTag));
- if (node->origin.semantic.inlineCallFrame) {
+ if (node->origin.semantic.inlineCallFrame
+ && !node->origin.semantic.inlineCallFrame->isVarargs()) {
m_jit.move(
- Imm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1),
+ TrustedImm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1),
resultPayloadGPR);
} else {
- m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultPayloadGPR);
+ VirtualRegister argumentCountRegister;
+ if (!node->origin.semantic.inlineCallFrame)
+ argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+ else
+ argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+ m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultPayloadGPR);
m_jit.sub32(TrustedImm32(1), resultPayloadGPR);
}
+
m_jit.move(TrustedImm32(JSValue::Int32Tag), resultTagGPR);
// FIXME: the slow path generator should perform a forward speculation that the
@@ -4339,7 +4548,8 @@
TrustedImm32(JSValue::EmptyValueTag)));
}
- if (node->origin.semantic.inlineCallFrame) {
+ if (node->origin.semantic.inlineCallFrame
+ && !node->origin.semantic.inlineCallFrame->isVarargs()) {
speculationCheck(
Uncountable, JSValueRegs(), 0,
m_jit.branch32(
@@ -4347,7 +4557,12 @@
indexGPR,
Imm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1)));
} else {
- m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultPayloadGPR);
+ VirtualRegister argumentCountRegister;
+ if (!node->origin.semantic.inlineCallFrame)
+ argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+ else
+ argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+ m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultPayloadGPR);
m_jit.sub32(TrustedImm32(1), resultPayloadGPR);
speculationCheck(
Uncountable, JSValueRegs(), 0,
@@ -4416,14 +4631,20 @@
JITCompiler::tagFor(m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic)),
TrustedImm32(JSValue::EmptyValueTag)));
- if (node->origin.semantic.inlineCallFrame) {
+ if (node->origin.semantic.inlineCallFrame
+ && !node->origin.semantic.inlineCallFrame->isVarargs()) {
slowPath.append(
m_jit.branch32(
JITCompiler::AboveOrEqual,
indexGPR,
Imm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1)));
} else {
- m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultPayloadGPR);
+ VirtualRegister argumentCountRegister;
+ if (!node->origin.semantic.inlineCallFrame)
+ argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+ else
+ argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+ m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultPayloadGPR);
m_jit.sub32(TrustedImm32(1), resultPayloadGPR);
slowPath.append(
m_jit.branch32(JITCompiler::AboveOrEqual, indexGPR, resultPayloadGPR));
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
index 33c314c..3f18556 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -39,6 +39,7 @@
#include "JSCInlines.h"
#include "JSPropertyNameEnumerator.h"
#include "ObjectPrototype.h"
+#include "SetupVarargsFrame.h"
#include "SpillRegistersMode.h"
#include "TypeProfilerLog.h"
@@ -624,38 +625,163 @@
void SpeculativeJIT::emitCall(Node* node)
{
- bool isCall = node->op() == Call;
- if (!isCall)
- DFG_ASSERT(m_jit.graph(), node, node->op() == Construct);
+ CallLinkInfo::CallType callType;
+ bool isCall;
+ bool isVarargs;
+ switch (node->op()) {
+ case Call:
+ callType = CallLinkInfo::Call;
+ isCall = true;
+ isVarargs = false;
+ break;
+ case Construct:
+ callType = CallLinkInfo::Construct;
+ isCall = false;
+ isVarargs = false;
+ break;
+ case CallVarargs:
+ case CallForwardVarargs:
+ callType = CallLinkInfo::CallVarargs;
+ isCall = true;
+ isVarargs = true;
+ break;
+ case ConstructVarargs:
+ callType = CallLinkInfo::ConstructVarargs;
+ isCall = false;
+ isVarargs = true;
+ break;
+ default:
+ DFG_CRASH(m_jit.graph(), node, "bad node type");
+ break;
+ }
+
+ Edge calleeEdge = m_jit.graph().child(node, 0);
- // For constructors, the this argument is not passed but we have to make space
- // for it.
- int dummyThisArgument = isCall ? 0 : 1;
-
- CallLinkInfo::CallType callType = isCall ? CallLinkInfo::Call : CallLinkInfo::Construct;
-
- Edge calleeEdge = m_jit.graph().m_varArgChildren[node->firstChild()];
- // The call instruction's first child is the function; the subsequent children are the
- // arguments.
- int numPassedArgs = node->numChildren() - 1;
-
- int numArgs = numPassedArgs + dummyThisArgument;
-
- m_jit.store32(MacroAssembler::TrustedImm32(numArgs), m_jit.calleeFramePayloadSlot(JSStack::ArgumentCount));
-
- for (int i = 0; i < numPassedArgs; i++) {
- Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
- JSValueOperand arg(this, argEdge);
- GPRReg argGPR = arg.gpr();
- use(argEdge);
+ // Gotta load the arguments somehow. Varargs is trickier.
+ if (isVarargs) {
+ CallVarargsData* data = node->callVarargsData();
+
+ GPRReg argumentsGPR;
+ GPRReg scratchGPR1;
+ GPRReg scratchGPR2;
+ GPRReg scratchGPR3;
- m_jit.store64(argGPR, m_jit.calleeArgumentSlot(i + dummyThisArgument));
+ if (node->op() == CallForwardVarargs) {
+ // Flush eagerly, so that we don't have to call flushRegisters() inside the control flow below.
+ flushRegisters();
+ }
+
+ auto loadArgumentsGPR = [&] (GPRReg reservedGPR) {
+ if (node->op() == CallForwardVarargs) {
+ argumentsGPR = JITCompiler::selectScratchGPR(reservedGPR);
+ m_jit.load64(
+ JITCompiler::addressFor(
+ m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic)),
+ argumentsGPR);
+ } else {
+ if (reservedGPR != InvalidGPRReg)
+ lock(reservedGPR);
+ JSValueOperand arguments(this, node->child2());
+ argumentsGPR = arguments.gpr();
+ if (reservedGPR != InvalidGPRReg)
+ unlock(reservedGPR);
+ flushRegisters();
+ }
+
+ scratchGPR1 = JITCompiler::selectScratchGPR(argumentsGPR, reservedGPR);
+ scratchGPR2 = JITCompiler::selectScratchGPR(argumentsGPR, scratchGPR1, reservedGPR);
+ scratchGPR3 = JITCompiler::selectScratchGPR(argumentsGPR, scratchGPR1, scratchGPR2, reservedGPR);
+ };
+
+ loadArgumentsGPR(InvalidGPRReg);
+
+ // At this point we have the whole register file to ourselves, and argumentsGPR holds the
+ // arguments. Select some scratch registers.
+
+ // We will use scratchGPR2 to point to our stack frame.
+
+ unsigned numUsedStackSlots = m_jit.graph().m_nextMachineLocal;
+
+ JITCompiler::Jump haveArguments;
+ GPRReg resultGPR = GPRInfo::regT0;
+ if (node->op() == CallForwardVarargs) {
+ // Do the horrific foo.apply(this, arguments) optimization.
+ // FIXME: do this optimization at the IR level instead of dynamically by testing the
+ // arguments register. This will happen once we get rid of the arguments lazy creation and
+ // lazy tear-off.
+
+ JITCompiler::JumpList slowCase;
+ slowCase.append(m_jit.branchTest64(JITCompiler::NonZero, argumentsGPR));
+
+ m_jit.move(TrustedImm32(numUsedStackSlots), scratchGPR2);
+ emitSetupVarargsFrameFastCase(m_jit, scratchGPR2, scratchGPR1, scratchGPR2, scratchGPR3, node->origin.semantic.inlineCallFrame, data->firstVarArgOffset, slowCase);
+ resultGPR = scratchGPR2;
+
+ haveArguments = m_jit.jump();
+ slowCase.link(&m_jit);
+ }
+
+ DFG_ASSERT(m_jit.graph(), node, isFlushed());
+
+ // Right now, arguments is in argumentsGPR and the register file is flushed.
+ callOperation(operationSizeFrameForVarargs, GPRInfo::returnValueGPR, argumentsGPR, numUsedStackSlots, data->firstVarArgOffset);
+
+ // Now we have the argument count of the callee frame, but we've lost the arguments operand.
+ // Reconstruct the arguments operand while preserving the callee frame.
+ loadArgumentsGPR(GPRInfo::returnValueGPR);
+ m_jit.move(TrustedImm32(numUsedStackSlots), scratchGPR1);
+ emitSetVarargsFrame(m_jit, GPRInfo::returnValueGPR, false, scratchGPR1, scratchGPR1);
+ m_jit.addPtr(TrustedImm32(-(sizeof(CallerFrameAndPC) + WTF::roundUpToMultipleOf(stackAlignmentBytes(), 5 * sizeof(void*)))), scratchGPR1, JITCompiler::stackPointerRegister);
+
+ callOperation(operationSetupVarargsFrame, GPRInfo::returnValueGPR, scratchGPR1, argumentsGPR, data->firstVarArgOffset, GPRInfo::returnValueGPR);
+ m_jit.move(GPRInfo::returnValueGPR, resultGPR);
+
+ if (node->op() == CallForwardVarargs)
+ haveArguments.link(&m_jit);
+
+ m_jit.addPtr(TrustedImm32(sizeof(CallerFrameAndPC)), resultGPR, JITCompiler::stackPointerRegister);
+
+ DFG_ASSERT(m_jit.graph(), node, isFlushed());
+
+ // We don't need the arguments array anymore.
+ if (node->op() != CallForwardVarargs)
+ use(node->child2());
+
+ if (isCall) {
+ // Now set up the "this" argument.
+ JSValueOperand thisArgument(this, node->op() == CallForwardVarargs ? node->child2() : node->child3());
+ GPRReg thisArgumentGPR = thisArgument.gpr();
+ thisArgument.use();
+
+ m_jit.store64(thisArgumentGPR, JITCompiler::calleeArgumentSlot(0));
+ }
+ } else {
+ // For constructors, the this argument is not passed, but we have to make space
+ // for it.
+ int dummyThisArgument = isCall ? 0 : 1;
+
+ // The call instruction's first child is the function; the subsequent children are the
+ // arguments.
+ int numPassedArgs = node->numChildren() - 1;
+
+ int numArgs = numPassedArgs + dummyThisArgument;
+
+ m_jit.store32(MacroAssembler::TrustedImm32(numArgs), JITCompiler::calleeFramePayloadSlot(JSStack::ArgumentCount));
+
+ for (int i = 0; i < numPassedArgs; i++) {
+ Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
+ JSValueOperand arg(this, argEdge);
+ GPRReg argGPR = arg.gpr();
+ use(argEdge);
+
+ m_jit.store64(argGPR, JITCompiler::calleeArgumentSlot(i + dummyThisArgument));
+ }
}
JSValueOperand callee(this, calleeEdge);
GPRReg calleeGPR = callee.gpr();
- use(calleeEdge);
- m_jit.store64(calleeGPR, m_jit.calleeFrameSlot(JSStack::Callee));
+ callee.use();
+ m_jit.store64(calleeGPR, JITCompiler::calleeFrameSlot(JSStack::Callee));
flushRegisters();
@@ -692,6 +818,10 @@
callLinkInfo->calleeGPR = calleeGPR;
m_jit.addJSCall(fastCall, slowCall, targetToCheck, callLinkInfo);
+
+ // If we were varargs, then after the calls are done, we need to reestablish our stack pointer.
+ if (isVarargs)
+ m_jit.addPtr(TrustedImm32(m_jit.graph().stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, JITCompiler::stackPointerRegister);
}
// Clang should allow unreachable [[clang::fallthrough]] in template functions if any template expansion uses it
@@ -4221,9 +4351,56 @@
case Call:
case Construct:
+ case CallVarargs:
+ case CallForwardVarargs:
+ case ConstructVarargs:
emitCall(node);
break;
+ case LoadVarargs: {
+ LoadVarargsData* data = node->loadVarargsData();
+
+ GPRReg argumentsGPR;
+ {
+ JSValueOperand arguments(this, node->child1());
+ argumentsGPR = arguments.gpr();
+ flushRegisters();
+ }
+
+ callOperation(operationSizeOfVarargs, GPRInfo::returnValueGPR, argumentsGPR, data->offset);
+
+ lock(GPRInfo::returnValueGPR);
+ {
+ JSValueOperand arguments(this, node->child1());
+ argumentsGPR = arguments.gpr();
+ flushRegisters();
+ }
+ unlock(GPRInfo::returnValueGPR);
+
+ // FIXME: There is a chance that we will call an effectful length property twice. This is safe
+ // from the standpoint of the VM's integrity, but it's subtly wrong from a spec compliance
+ // standpoint. The best solution would be one where we can exit *into* the op_call_varargs right
+ // past the sizing.
+ // https://bugs.webkit.org/show_bug.cgi?id=141448
+
+ GPRReg argCountIncludingThisGPR =
+ JITCompiler::selectScratchGPR(GPRInfo::returnValueGPR, argumentsGPR);
+
+ m_jit.add32(TrustedImm32(1), GPRInfo::returnValueGPR, argCountIncludingThisGPR);
+ speculationCheck(
+ VarargsOverflow, JSValueSource(), Edge(), m_jit.branch32(
+ MacroAssembler::Above,
+ argCountIncludingThisGPR,
+ TrustedImm32(data->limit)));
+
+ m_jit.store32(argCountIncludingThisGPR, JITCompiler::payloadFor(data->machineCount));
+
+ callOperation(operationLoadVarargs, data->machineStart.offset(), argumentsGPR, data->offset, GPRInfo::returnValueGPR, data->mandatoryMinimum);
+
+ noResult(node);
+ break;
+ }
+
case CreateActivation: {
DFG_ASSERT(m_jit.graph(), node, !node->origin.semantic.inlineCallFrame);
@@ -4328,9 +4505,20 @@
m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic))));
}
- DFG_ASSERT(m_jit.graph(), node, !node->origin.semantic.inlineCallFrame);
- m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultGPR);
- m_jit.sub32(TrustedImm32(1), resultGPR);
+ if (node->origin.semantic.inlineCallFrame
+ && !node->origin.semantic.inlineCallFrame->isVarargs()) {
+ m_jit.move(
+ TrustedImm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1),
+ resultGPR);
+ } else {
+ VirtualRegister argumentCountRegister;
+ if (!node->origin.semantic.inlineCallFrame)
+ argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+ else
+ argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+ m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultGPR);
+ m_jit.sub32(TrustedImm32(1), resultGPR);
+ }
int32Result(resultGPR, node);
break;
}
@@ -4344,20 +4532,22 @@
JITCompiler::addressFor(
m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic)));
- if (node->origin.semantic.inlineCallFrame) {
+ if (node->origin.semantic.inlineCallFrame
+ && !node->origin.semantic.inlineCallFrame->isVarargs()) {
m_jit.move(
Imm64(JSValue::encode(jsNumber(node->origin.semantic.inlineCallFrame->arguments.size() - 1))),
resultGPR);
} else {
- m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultGPR);
+ VirtualRegister argumentCountRegister;
+ if (!node->origin.semantic.inlineCallFrame)
+ argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+ else
+ argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+ m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultGPR);
m_jit.sub32(TrustedImm32(1), resultGPR);
m_jit.or64(GPRInfo::tagTypeNumberRegister, resultGPR);
}
- // FIXME: the slow path generator should perform a forward speculation that the
- // result is an integer. For now we postpone the speculation by having this return
- // a JSValue.
-
addSlowPathGenerator(
slowPathCall(
created, this, operationGetArgumentsLength, resultGPR,
@@ -4384,7 +4574,8 @@
m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic))));
}
- if (node->origin.semantic.inlineCallFrame) {
+ if (node->origin.semantic.inlineCallFrame
+ && !node->origin.semantic.inlineCallFrame->isVarargs()) {
speculationCheck(
Uncountable, JSValueRegs(), 0,
m_jit.branch32(
@@ -4392,7 +4583,12 @@
indexGPR,
Imm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1)));
} else {
- m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultGPR);
+ VirtualRegister argumentCountRegister;
+ if (!node->origin.semantic.inlineCallFrame)
+ argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+ else
+ argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+ m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultGPR);
m_jit.sub32(TrustedImm32(1), resultGPR);
speculationCheck(
Uncountable, JSValueRegs(), 0,
@@ -4449,14 +4645,20 @@
JITCompiler::addressFor(
m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic))));
- if (node->origin.semantic.inlineCallFrame) {
+ if (node->origin.semantic.inlineCallFrame
+ && !node->origin.semantic.inlineCallFrame->isVarargs()) {
slowPath.append(
m_jit.branch32(
JITCompiler::AboveOrEqual,
resultGPR,
Imm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1)));
} else {
- m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultGPR);
+ VirtualRegister argumentCountRegister;
+ if (!node->origin.semantic.inlineCallFrame)
+ argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+ else
+ argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+ m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultGPR);
m_jit.sub32(TrustedImm32(1), resultGPR);
slowPath.append(
m_jit.branch32(JITCompiler::AboveOrEqual, indexGPR, resultGPR));
diff --git a/Source/JavaScriptCore/dfg/DFGStackLayoutPhase.cpp b/Source/JavaScriptCore/dfg/DFGStackLayoutPhase.cpp
index f86e08d..90f0de1 100644
--- a/Source/JavaScriptCore/dfg/DFGStackLayoutPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGStackLayoutPhase.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -56,7 +56,7 @@
BitVector usedLocals;
// Collect those variables that are used from IR.
- bool hasGetLocalUnlinked = false;
+ bool hasNodesThatNeedFixup = false;
for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) {
BasicBlock* block = m_graph.block(blockIndex);
if (!block)
@@ -81,7 +81,22 @@
if (operand.isArgument())
break;
usedLocals.set(operand.toLocal());
- hasGetLocalUnlinked = true;
+ hasNodesThatNeedFixup = true;
+ break;
+ }
+
+ case LoadVarargs: {
+ LoadVarargsData* data = node->loadVarargsData();
+ if (data->count.isLocal())
+ usedLocals.set(data->count.toLocal());
+ if (data->start.isLocal()) {
+ // This part really relies on the contiguity of stack layout
+ // assignments.
+ ASSERT(VirtualRegister(data->start.offset() + data->limit - 1).isLocal());
+ for (unsigned i = data->limit; i--;)
+ usedLocals.set(VirtualRegister(data->start.offset() + i).toLocal());
+ } // the else case shouldn't happen.
+ hasNodesThatNeedFixup = true;
break;
}
@@ -113,6 +128,11 @@
usedLocals.set(argumentsRegister.toLocal());
usedLocals.set(unmodifiedArgumentsRegister(argumentsRegister).toLocal());
+ if (inlineCallFrame->isVarargs()) {
+ usedLocals.set(VirtualRegister(
+ JSStack::ArgumentCount + inlineCallFrame->stackOffset).toLocal());
+ }
+
for (unsigned argument = inlineCallFrame->arguments.size(); argument-- > 1;) {
usedLocals.set(VirtualRegister(
virtualRegisterForArgument(argument).offset() +
@@ -148,24 +168,21 @@
if (allocation[local] == UINT_MAX)
continue;
- variable->machineLocal() = virtualRegisterForLocal(
- allocation[variable->local().toLocal()]);
+ variable->machineLocal() = assign(allocation, variable->local());
}
if (codeBlock()->usesArguments()) {
- VirtualRegister argumentsRegister = virtualRegisterForLocal(
- allocation[codeBlock()->argumentsRegister().toLocal()]);
+ VirtualRegister argumentsRegister =
+ assign(allocation, codeBlock()->argumentsRegister());
RELEASE_ASSERT(
- virtualRegisterForLocal(allocation[
- unmodifiedArgumentsRegister(
- codeBlock()->argumentsRegister()).toLocal()])
+ assign(allocation, unmodifiedArgumentsRegister(codeBlock()->argumentsRegister()))
== unmodifiedArgumentsRegister(argumentsRegister));
codeBlock()->setArgumentsRegister(argumentsRegister);
}
if (codeBlock()->uncheckedActivationRegister().isValid()) {
codeBlock()->setActivationRegister(
- virtualRegisterForLocal(allocation[codeBlock()->activationRegister().toLocal()]));
+ assign(allocation, codeBlock()->activationRegister()));
}
// This register is never valid for DFG code blocks.
@@ -176,15 +193,19 @@
InlineCallFrame* inlineCallFrame = data.inlineCallFrame;
if (m_graph.usesArguments(inlineCallFrame)) {
- inlineCallFrame->argumentsRegister = virtualRegisterForLocal(
- allocation[m_graph.argumentsRegisterFor(inlineCallFrame).toLocal()]);
+ inlineCallFrame->argumentsRegister = assign(
+ allocation, m_graph.argumentsRegisterFor(inlineCallFrame));
RELEASE_ASSERT(
- virtualRegisterForLocal(allocation[unmodifiedArgumentsRegister(
- m_graph.argumentsRegisterFor(inlineCallFrame)).toLocal()])
+ assign(allocation, unmodifiedArgumentsRegister(m_graph.argumentsRegisterFor(inlineCallFrame)))
== unmodifiedArgumentsRegister(inlineCallFrame->argumentsRegister));
}
+ if (inlineCallFrame->isVarargs()) {
+ inlineCallFrame->argumentCountRegister = assign(
+ allocation, VirtualRegister(inlineCallFrame->stackOffset + JSStack::ArgumentCount));
+ }
+
for (unsigned argument = inlineCallFrame->arguments.size(); argument-- > 1;) {
ArgumentPosition& position = m_graph.m_argumentPositions[
data.argumentPositionStart + argument];
@@ -227,9 +248,7 @@
symbolTable->parameterCount());
for (size_t i = symbolTable->parameterCount(); i--;) {
newSlowArguments[i] = slowArguments[i];
- VirtualRegister reg = VirtualRegister(slowArguments[i].index);
- if (reg.isLocal())
- newSlowArguments[i].index = virtualRegisterForLocal(allocation[reg.toLocal()]).offset();
+ newSlowArguments[i].index = assign(allocation, VirtualRegister(slowArguments[i].index)).offset();
}
m_graph.m_slowArguments = WTF::move(newSlowArguments);
@@ -237,7 +256,7 @@
}
- // Fix GetLocalUnlinked's variable references.
+ // Fix the machine-slot references of nodes that need fixup (GetLocalUnlinked, LoadVarargs).
- if (hasGetLocalUnlinked) {
+ if (hasNodesThatNeedFixup) {
for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) {
BasicBlock* block = m_graph.block(blockIndex);
if (!block)
@@ -246,10 +265,14 @@
Node* node = block->at(nodeIndex);
switch (node->op()) {
case GetLocalUnlinked: {
- VirtualRegister operand = node->unlinkedLocal();
- if (operand.isLocal())
- operand = virtualRegisterForLocal(allocation[operand.toLocal()]);
- node->setUnlinkedMachineLocal(operand);
+ node->setUnlinkedMachineLocal(assign(allocation, node->unlinkedLocal()));
+ break;
+ }
+
+ case LoadVarargs: {
+ LoadVarargsData* data = node->loadVarargsData();
+ data->machineCount = assign(allocation, data->count);
+ data->machineStart = assign(allocation, data->start);
break;
}
@@ -262,6 +285,20 @@
return true;
}
+
+private:
+ VirtualRegister assign(const Vector<unsigned>& allocation, VirtualRegister src)
+ {
+ VirtualRegister result = src;
+ if (result.isLocal()) {
+ unsigned myAllocation = allocation[result.toLocal()];
+ if (myAllocation == UINT_MAX)
+ result = VirtualRegister();
+ else
+ result = virtualRegisterForLocal(myAllocation);
+ }
+ return result;
+ }
};
bool performStackLayout(Graph& graph)
diff --git a/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp b/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
index 5e30dfe..f214cb9 100644
--- a/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
@@ -271,6 +271,7 @@
case GetScope:
case PhantomLocal:
case GetCallee:
+ case CountExecution:
break;
default:
diff --git a/Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp b/Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp
index 5945315..d0875b5 100644
--- a/Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2013, 2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -115,7 +115,6 @@
// from the node, before doing any appending.
switch (node->op()) {
case SetArgument: {
- ASSERT(!blockIndex);
// Insert a GetLocal and a CheckStructure immediately following this
// SetArgument, if the variable was a candidate for structure hoisting.
// If the basic block previously only had the SetArgument as its
@@ -127,6 +126,9 @@
if (!iter->value.m_structure && !iter->value.m_arrayModeIsValid)
break;
+ // Currently we should only be doing this hoisting for SetArguments at the prologue.
+ ASSERT(!blockIndex);
+
NodeOrigin origin = node->origin;
Node* getLocal = insertionSet.insertNode(
diff --git a/Source/JavaScriptCore/dfg/DFGValidate.cpp b/Source/JavaScriptCore/dfg/DFGValidate.cpp
index 7d7b506..56eb756 100644
--- a/Source/JavaScriptCore/dfg/DFGValidate.cpp
+++ b/Source/JavaScriptCore/dfg/DFGValidate.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012-2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -50,7 +50,7 @@
startCrashing(); \
dataLogF("\n\n\nAt "); \
reportValidationContext context; \
- dataLogF(": validation %s (%s:%d) failed.\n", #assertion, __FILE__, __LINE__); \
+ dataLogF(": validation failed: %s (%s:%d).\n", #assertion, __FILE__, __LINE__); \
dumpGraphIfAppropriate(); \
WTFReportAssertionFailure(__FILE__, __LINE__, WTF_PRETTY_FUNCTION, #assertion); \
CRASH(); \
@@ -62,11 +62,11 @@
startCrashing(); \
dataLogF("\n\n\nAt "); \
reportValidationContext context; \
- dataLogF(": validation (%s = ", #left); \
+ dataLogF(": validation failed: (%s = ", #left); \
dataLog(left); \
dataLogF(") == (%s = ", #right); \
dataLog(right); \
- dataLogF(") (%s:%d) failed.\n", __FILE__, __LINE__); \
+ dataLogF(") (%s:%d).\n", __FILE__, __LINE__); \
dumpGraphIfAppropriate(); \
WTFReportAssertionFailure(__FILE__, __LINE__, WTF_PRETTY_FUNCTION, #left " == " #right); \
CRASH(); \
@@ -456,6 +456,14 @@
break;
setLocalPositions.operand(node->local()) = i;
break;
+ case SetArgument:
+ if (node->variableAccessData()->isCaptured())
+ break;
+ // This acts like a reset. It's ok to have a second GetLocal for a local in the same
+ // block if we had a SetArgument for that local.
+ getLocalPositions.operand(node->local()) = notSet;
+ setLocalPositions.operand(node->local()) = notSet;
+ break;
default:
break;
}
diff --git a/Source/JavaScriptCore/ftl/FTLAbbreviations.h b/Source/JavaScriptCore/ftl/FTLAbbreviations.h
index fc7cb1b..d3bff4c 100644
--- a/Source/JavaScriptCore/ftl/FTLAbbreviations.h
+++ b/Source/JavaScriptCore/ftl/FTLAbbreviations.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -77,6 +77,9 @@
return structType(context, elements, 2, packing);
}
+// FIXME: Make the Variadicity argument not be the last argument to functionType() so that this function
+// can use C++11 variadic templates.
+// https://bugs.webkit.org/show_bug.cgi?id=141575
enum Variadicity { NotVariadic, Variadic };
static inline LType functionType(LType returnType, const LType* paramTypes, unsigned paramCount, Variadicity variadicity)
{
@@ -110,6 +113,16 @@
LType paramTypes[] = { param1, param2, param3, param4 };
return functionType(returnType, paramTypes, 4, variadicity);
}
+static inline LType functionType(LType returnType, LType param1, LType param2, LType param3, LType param4, LType param5, Variadicity variadicity = NotVariadic)
+{
+ LType paramTypes[] = { param1, param2, param3, param4, param5 };
+ return functionType(returnType, paramTypes, 5, variadicity);
+}
+static inline LType functionType(LType returnType, LType param1, LType param2, LType param3, LType param4, LType param5, LType param6, Variadicity variadicity = NotVariadic)
+{
+ LType paramTypes[] = { param1, param2, param3, param4, param5, param6 };
+ return functionType(returnType, paramTypes, 6, variadicity);
+}
static inline LType typeOf(LValue value) { return llvm->TypeOf(value); }
@@ -298,41 +311,13 @@
{
return buildCall(builder, function, &arg1, 1);
}
-static inline LValue buildCall(LBuilder builder, LValue function, LValue arg1, LValue arg2)
+template<typename... Args>
+LValue buildCall(LBuilder builder, LValue function, LValue arg1, Args... args)
{
- LValue args[] = { arg1, arg2 };
- return buildCall(builder, function, args, 2);
+ LValue argsArray[] = { arg1, args... };
+ return buildCall(builder, function, argsArray, sizeof(argsArray) / sizeof(LValue));
}
-static inline LValue buildCall(LBuilder builder, LValue function, LValue arg1, LValue arg2, LValue arg3)
-{
- LValue args[] = { arg1, arg2, arg3 };
- return buildCall(builder, function, args, 3);
-}
-static inline LValue buildCall(LBuilder builder, LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4)
-{
- LValue args[] = { arg1, arg2, arg3, arg4 };
- return buildCall(builder, function, args, 4);
-}
-static inline LValue buildCall(LBuilder builder, LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5)
-{
- LValue args[] = { arg1, arg2, arg3, arg4, arg5 };
- return buildCall(builder, function, args, 5);
-}
-static inline LValue buildCall(LBuilder builder, LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5, LValue arg6)
-{
- LValue args[] = { arg1, arg2, arg3, arg4, arg5, arg6 };
- return buildCall(builder, function, args, 6);
-}
-static inline LValue buildCall(LBuilder builder, LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5, LValue arg6, LValue arg7)
-{
- LValue args[] = { arg1, arg2, arg3, arg4, arg5, arg6, arg7 };
- return buildCall(builder, function, args, 7);
-}
-static inline LValue buildCall(LBuilder builder, LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5, LValue arg6, LValue arg7, LValue arg8)
-{
- LValue args[] = { arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8 };
- return buildCall(builder, function, args, 8);
-}
+
static inline void setInstructionCallingConvention(LValue instruction, LCallConv callingConvention) { llvm->SetInstructionCallConv(instruction, callingConvention); }
static inline LValue buildExtractValue(LBuilder builder, LValue aggVal, unsigned index) { return llvm->BuildExtractValue(builder, aggVal, index, ""); }
static inline LValue buildSelect(LBuilder builder, LValue condition, LValue taken, LValue notTaken) { return llvm->BuildSelect(builder, condition, taken, notTaken, ""); }
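The hunk above collapses eight hand-written fixed-arity `buildCall` overloads into one variadic template that splats its arguments into a stack array. A standalone sketch of the same pattern, using hypothetical stand-in types (`LValue`/`LBuilder` here are plain ints, since the real LLVM plumbing is omitted):

```cpp
#include <cassert>

// Hypothetical stand-ins for the FTL abbreviation-layer types; the real LValue
// wraps an LLVM value and the base overload forwards to llvm->BuildCall.
using LValue = int;
using LBuilder = int*;

// Base overload taking an explicit array. For this sketch it just sums the
// callee and arguments so the result is checkable.
static LValue buildCall(LBuilder, LValue function, LValue* args, unsigned numArgs)
{
    LValue result = function;
    for (unsigned i = 0; i < numArgs; ++i)
        result += args[i];
    return result;
}

// The variadic template from the patch: pack-expands any number of arguments
// into a local array, replacing the fixed-arity overloads for 2..8 arguments.
template<typename... Args>
LValue buildCall(LBuilder builder, LValue function, LValue arg1, Args... args)
{
    LValue argsArray[] = { arg1, args... };
    return buildCall(builder, function, argsArray, sizeof(argsArray) / sizeof(LValue));
}
```

Overload resolution still prefers the array-based base overload when an `LValue*` is passed, so existing callers are unaffected.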
diff --git a/Source/JavaScriptCore/ftl/FTLCapabilities.cpp b/Source/JavaScriptCore/ftl/FTLCapabilities.cpp
index c050739..b4313be 100644
--- a/Source/JavaScriptCore/ftl/FTLCapabilities.cpp
+++ b/Source/JavaScriptCore/ftl/FTLCapabilities.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -120,6 +120,10 @@
case StoreBarrierWithNullCheck:
case Call:
case Construct:
+ case CallVarargs:
+ case CallForwardVarargs:
+ case ConstructVarargs:
+ case LoadVarargs:
case NativeCall:
case NativeConstruct:
case ValueToInt32:
diff --git a/Source/JavaScriptCore/ftl/FTLCompile.cpp b/Source/JavaScriptCore/ftl/FTLCompile.cpp
index c8612c9..f06efc6 100644
--- a/Source/JavaScriptCore/ftl/FTLCompile.cpp
+++ b/Source/JavaScriptCore/ftl/FTLCompile.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
* Copyright (C) 2014 Samsung Electronics
* Copyright (C) 2014 University of Szeged
*
@@ -128,6 +128,20 @@
}
}
+static int offsetOfStackRegion(StackMaps::RecordMap& recordMap, uint32_t stackmapID)
+{
+ StackMaps::RecordMap::iterator iter = recordMap.find(stackmapID);
+ RELEASE_ASSERT(iter != recordMap.end());
+ RELEASE_ASSERT(iter->value.size() == 1);
+ RELEASE_ASSERT(iter->value[0].locations.size() == 1);
+ Location capturedLocation =
+ Location::forStackmaps(nullptr, iter->value[0].locations[0]);
+ RELEASE_ASSERT(capturedLocation.kind() == Location::Register);
+ RELEASE_ASSERT(capturedLocation.gpr() == GPRInfo::callFrameRegister);
+ RELEASE_ASSERT(!(capturedLocation.addend() % sizeof(Register)));
+ return capturedLocation.addend() / sizeof(Register);
+}
+
template<typename DescriptorType>
void generateICFastPath(
State& state, CodeBlock* codeBlock, GeneratedFunction generatedFunction,
@@ -243,6 +257,32 @@
return RegisterSet(record.usedRegisterSet(), RegisterSet::calleeSaveRegisters());
}
+template<typename CallType>
+void adjustCallICsForStackmaps(Vector<CallType>& calls, StackMaps::RecordMap& recordMap)
+{
+ // Handling JS calls is weird: we need to ensure that we sort them by the PC in LLVM
+ // generated code. That implies first pruning the ones that LLVM didn't generate.
+
+ Vector<CallType> oldCalls;
+ oldCalls.swap(calls);
+
+ for (unsigned i = 0; i < oldCalls.size(); ++i) {
+ CallType& call = oldCalls[i];
+
+ StackMaps::RecordMap::iterator iter = recordMap.find(call.stackmapID());
+ if (iter == recordMap.end())
+ continue;
+
+ for (unsigned j = 0; j < iter->value.size(); ++j) {
+ CallType copy = call;
+ copy.m_instructionOffset = iter->value[j].instructionOffset;
+ calls.append(copy);
+ }
+ }
+
+ std::sort(calls.begin(), calls.end());
+}
+
static void fixFunctionBasedOnStackMaps(
State& state, CodeBlock* codeBlock, JITCode* jitCode, GeneratedFunction generatedFunction,
StackMaps::RecordMap& recordMap, bool didSeeUnwindInfo)
@@ -251,16 +291,14 @@
VM& vm = graph.m_vm;
StackMaps stackmaps = jitCode->stackmaps;
- StackMaps::RecordMap::iterator iter = recordMap.find(state.capturedStackmapID);
- RELEASE_ASSERT(iter != recordMap.end());
- RELEASE_ASSERT(iter->value.size() == 1);
- RELEASE_ASSERT(iter->value[0].locations.size() == 1);
- Location capturedLocation =
- Location::forStackmaps(&jitCode->stackmaps, iter->value[0].locations[0]);
- RELEASE_ASSERT(capturedLocation.kind() == Location::Register);
- RELEASE_ASSERT(capturedLocation.gpr() == GPRInfo::callFrameRegister);
- RELEASE_ASSERT(!(capturedLocation.addend() % sizeof(Register)));
- int32_t localsOffset = capturedLocation.addend() / sizeof(Register) + graph.m_nextMachineLocal;
+ int localsOffset =
+ offsetOfStackRegion(recordMap, state.capturedStackmapID) + graph.m_nextMachineLocal;
+
+ int varargsSpillSlotsOffset;
+ if (state.varargsSpillSlotsStackmapID != UINT_MAX)
+ varargsSpillSlotsOffset = offsetOfStackRegion(recordMap, state.varargsSpillSlotsStackmapID);
+ else
+ varargsSpillSlotsOffset = 0;
for (unsigned i = graph.m_inlineVariableData.size(); i--;) {
InlineCallFrame* inlineCallFrame = graph.m_inlineVariableData[i].inlineCallFrame;
@@ -293,18 +331,12 @@
// At this point it's perfectly fair to just blow away all state and restore the
// JS JIT view of the universe.
- checkJIT.move(MacroAssembler::TrustedImm64(TagTypeNumber), GPRInfo::tagTypeNumberRegister);
- checkJIT.move(MacroAssembler::TrustedImm64(TagMask), GPRInfo::tagMaskRegister);
-
checkJIT.move(MacroAssembler::TrustedImmPtr(&vm), GPRInfo::argumentGPR0);
checkJIT.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
MacroAssembler::Call callLookupExceptionHandler = checkJIT.call();
checkJIT.jumpToExceptionHandler();
stackOverflowException = checkJIT.label();
- checkJIT.move(MacroAssembler::TrustedImm64(TagTypeNumber), GPRInfo::tagTypeNumberRegister);
- checkJIT.move(MacroAssembler::TrustedImm64(TagMask), GPRInfo::tagMaskRegister);
-
checkJIT.move(MacroAssembler::TrustedImmPtr(&vm), GPRInfo::argumentGPR0);
checkJIT.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
MacroAssembler::Call callLookupExceptionHandlerFromCallerFrame = checkJIT.call();
@@ -336,7 +368,7 @@
if (verboseCompilationEnabled())
dataLog("Handling OSR stackmap #", exit.m_stackmapID, " for ", exit.m_codeOrigin, "\n");
- iter = recordMap.find(exit.m_stackmapID);
+ auto iter = recordMap.find(exit.m_stackmapID);
if (iter == recordMap.end()) {
// It was optimized out.
continue;
@@ -375,7 +407,7 @@
if (verboseCompilationEnabled())
dataLog("Handling GetById stackmap #", getById.stackmapID(), "\n");
- iter = recordMap.find(getById.stackmapID());
+ auto iter = recordMap.find(getById.stackmapID());
if (iter == recordMap.end()) {
// It was optimized out.
continue;
@@ -412,7 +444,7 @@
if (verboseCompilationEnabled())
dataLog("Handling PutById stackmap #", putById.stackmapID(), "\n");
- iter = recordMap.find(putById.stackmapID());
+ auto iter = recordMap.find(putById.stackmapID());
if (iter == recordMap.end()) {
// It was optimized out.
continue;
@@ -444,14 +476,13 @@
}
}
-
for (unsigned i = state.checkIns.size(); i--;) {
CheckInDescriptor& checkIn = state.checkIns[i];
if (verboseCompilationEnabled())
dataLog("Handling checkIn stackmap #", checkIn.stackmapID(), "\n");
- iter = recordMap.find(checkIn.stackmapID());
+ auto iter = recordMap.find(checkIn.stackmapID());
if (iter == recordMap.end()) {
// It was optimized out.
continue;
@@ -480,7 +511,6 @@
checkIn.m_generators.append(CheckInGenerator(stubInfo, slowCall, begin));
}
}
-
exceptionTarget.link(&slowPathJIT);
MacroAssembler::Jump exceptionJump = slowPathJIT.jump();
@@ -503,29 +533,11 @@
for (unsigned i = state.checkIns.size(); i--;) {
generateCheckInICFastPath(
state, codeBlock, generatedFunction, recordMap, state.checkIns[i],
- sizeOfCheckIn());
+ sizeOfIn());
}
}
- // Handling JS calls is weird: we need to ensure that we sort them by the PC in LLVM
- // generated code. That implies first pruning the ones that LLVM didn't generate.
- Vector<JSCall> oldCalls = state.jsCalls;
- state.jsCalls.resize(0);
- for (unsigned i = 0; i < oldCalls.size(); ++i) {
- JSCall& call = oldCalls[i];
-
- StackMaps::RecordMap::iterator iter = recordMap.find(call.stackmapID());
- if (iter == recordMap.end())
- continue;
-
- for (unsigned j = 0; j < iter->value.size(); ++j) {
- JSCall copy = call;
- copy.m_instructionOffset = iter->value[j].instructionOffset;
- state.jsCalls.append(copy);
- }
- }
-
- std::sort(state.jsCalls.begin(), state.jsCalls.end());
+ adjustCallICsForStackmaps(state.jsCalls, recordMap);
for (unsigned i = state.jsCalls.size(); i--;) {
JSCall& call = state.jsCalls[i];
@@ -547,9 +559,32 @@
call.link(vm, linkBuffer);
}
+ adjustCallICsForStackmaps(state.jsCallVarargses, recordMap);
+
+ for (unsigned i = state.jsCallVarargses.size(); i--;) {
+ JSCallVarargs& call = state.jsCallVarargses[i];
+
+ CCallHelpers fastPathJIT(&vm, codeBlock);
+ call.emit(fastPathJIT, graph, varargsSpillSlotsOffset);
+
+ char* startOfIC = bitwise_cast<char*>(generatedFunction) + call.m_instructionOffset;
+ size_t sizeOfIC = sizeOfICFor(call.node());
+
+ LinkBuffer linkBuffer(vm, fastPathJIT, startOfIC, sizeOfIC);
+ if (!linkBuffer.isValid()) {
+ dataLog("Failed to insert inline cache for varargs call (specifically, ", Graph::opName(call.node()->op()), ") because we thought the size would be ", sizeOfIC, " but it ended up being ", fastPathJIT.m_assembler.codeSize(), " prior to compaction.\n");
+ RELEASE_ASSERT_NOT_REACHED();
+ }
+
+ MacroAssembler::AssemblerType_T::fillNops(
+ startOfIC + linkBuffer.size(), sizeOfIC - linkBuffer.size());
+
+ call.link(vm, linkBuffer, state.finalizer->handleExceptionsLinkBuffer->entrypoint());
+ }
+
RepatchBuffer repatchBuffer(codeBlock);
- iter = recordMap.find(state.handleStackOverflowExceptionStackmapID);
+ auto iter = recordMap.find(state.handleStackOverflowExceptionStackmapID);
// It's sort of remotely possible that we won't have an in-band exception handling
// path, for some kinds of functions.
if (iter != recordMap.end()) {
diff --git a/Source/JavaScriptCore/ftl/FTLInlineCacheSize.cpp b/Source/JavaScriptCore/ftl/FTLInlineCacheSize.cpp
index f9cfce4..1d0beec 100644
--- a/Source/JavaScriptCore/ftl/FTLInlineCacheSize.cpp
+++ b/Source/JavaScriptCore/ftl/FTLInlineCacheSize.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -28,11 +28,14 @@
#if ENABLE(FTL_JIT)
+#include "DFGNode.h"
#include "JITInlineCacheGenerator.h"
#include "MacroAssembler.h"
namespace JSC { namespace FTL {
+using namespace DFG;
+
// The default sizes are x86-64-specific, and were found empirically. They have to cover the worst
// possible combination of registers leading to the largest possible encoding of each instruction in
// the IC.
@@ -61,7 +64,43 @@
#endif
}
-size_t sizeOfCheckIn()
+size_t sizeOfCall()
+{
+#if CPU(ARM64)
+ return 56;
+#else
+ return 53;
+#endif
+}
+
+size_t sizeOfCallVarargs()
+{
+#if CPU(ARM64)
+ return 300;
+#else
+ return 275;
+#endif
+}
+
+size_t sizeOfCallForwardVarargs()
+{
+#if CPU(ARM64)
+ return 460;
+#else
+ return 372;
+#endif
+}
+
+size_t sizeOfConstructVarargs()
+{
+#if CPU(ARM64)
+ return 284;
+#else
+ return 253;
+#endif
+}
+
+size_t sizeOfIn()
{
#if CPU(ARM64)
return 4;
@@ -70,14 +109,27 @@
#endif
}
-
-size_t sizeOfCall()
+size_t sizeOfICFor(Node* node)
{
-#if CPU(ARM64)
- return 56;
-#else
- return 53;
-#endif
+ switch (node->op()) {
+ case GetById:
+ return sizeOfGetById();
+ case PutById:
+ return sizeOfPutById();
+ case Call:
+ case Construct:
+ return sizeOfCall();
+ case CallVarargs:
+ return sizeOfCallVarargs();
+ case CallForwardVarargs:
+ return sizeOfCallForwardVarargs();
+ case ConstructVarargs:
+ return sizeOfConstructVarargs();
+ case In:
+ return sizeOfIn();
+ default:
+ return 0;
+ }
}
} } // namespace JSC::FTL
diff --git a/Source/JavaScriptCore/ftl/FTLInlineCacheSize.h b/Source/JavaScriptCore/ftl/FTLInlineCacheSize.h
index db76424..6fe9116 100644
--- a/Source/JavaScriptCore/ftl/FTLInlineCacheSize.h
+++ b/Source/JavaScriptCore/ftl/FTLInlineCacheSize.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -28,12 +28,23 @@
#if ENABLE(FTL_JIT)
-namespace JSC { namespace FTL {
+namespace JSC {
+
+namespace DFG {
+struct Node;
+}
+
+namespace FTL {
size_t sizeOfGetById();
size_t sizeOfPutById();
size_t sizeOfCall();
-size_t sizeOfCheckIn();
+size_t sizeOfCallVarargs();
+size_t sizeOfCallForwardVarargs();
+size_t sizeOfConstructVarargs();
+size_t sizeOfIn();
+
+size_t sizeOfICFor(DFG::Node*);
} } // namespace JSC::FTL
diff --git a/Source/JavaScriptCore/ftl/FTLIntrinsicRepository.h b/Source/JavaScriptCore/ftl/FTLIntrinsicRepository.h
index f284e8d..4f36029 100644
--- a/Source/JavaScriptCore/ftl/FTLIntrinsicRepository.h
+++ b/Source/JavaScriptCore/ftl/FTLIntrinsicRepository.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -99,10 +99,12 @@
macro(V_JITOperation_EC, functionType(voidType, intPtr, intPtr)) \
macro(V_JITOperation_ECb, functionType(voidType, intPtr, intPtr)) \
macro(V_JITOperation_EVwsJ, functionType(voidType, intPtr, intPtr, int64)) \
+ macro(V_JITOperation_EZJZZZ, functionType(voidType, intPtr, int32, int64, int32, int32, int32)) \
macro(V_JITOperation_J, functionType(voidType, int64)) \
macro(V_JITOperation_Z, functionType(voidType, int32)) \
macro(Z_JITOperation_D, functionType(int32, doubleType)) \
- macro(Z_JITOperation_EC, functionType(int32, intPtr, intPtr))
+ macro(Z_JITOperation_EC, functionType(int32, intPtr, intPtr)) \
+ macro(Z_JITOperation_EJZ, functionType(int32, intPtr, int64, int32))
class IntrinsicRepository : public CommonValues {
public:
diff --git a/Source/JavaScriptCore/ftl/FTLJSCall.cpp b/Source/JavaScriptCore/ftl/FTLJSCall.cpp
index b06ec03..e6f6bda 100644
--- a/Source/JavaScriptCore/ftl/FTLJSCall.cpp
+++ b/Source/JavaScriptCore/ftl/FTLJSCall.cpp
@@ -48,6 +48,7 @@
, m_stackmapID(stackmapID)
, m_instructionOffset(0)
{
+ ASSERT(node->op() == Call || node->op() == Construct);
}
} } // namespace JSC::FTL
diff --git a/Source/JavaScriptCore/ftl/FTLJSCallBase.cpp b/Source/JavaScriptCore/ftl/FTLJSCallBase.cpp
index 20367c2..84f4cd0 100644
--- a/Source/JavaScriptCore/ftl/FTLJSCallBase.cpp
+++ b/Source/JavaScriptCore/ftl/FTLJSCallBase.cpp
@@ -33,6 +33,8 @@
namespace JSC { namespace FTL {
+using namespace DFG;
+
JSCallBase::JSCallBase()
: m_type(CallLinkInfo::None)
, m_callLinkInfo(nullptr)
diff --git a/Source/JavaScriptCore/ftl/FTLJSCallBase.h b/Source/JavaScriptCore/ftl/FTLJSCallBase.h
index 9922656..595ac69 100644
--- a/Source/JavaScriptCore/ftl/FTLJSCallBase.h
+++ b/Source/JavaScriptCore/ftl/FTLJSCallBase.h
@@ -36,6 +36,10 @@
class LinkBuffer;
+namespace DFG {
+struct Node;
+}
+
namespace FTL {
class JSCallBase {
diff --git a/Source/JavaScriptCore/ftl/FTLJSCallVarargs.cpp b/Source/JavaScriptCore/ftl/FTLJSCallVarargs.cpp
new file mode 100644
index 0000000..b729ff0
--- /dev/null
+++ b/Source/JavaScriptCore/ftl/FTLJSCallVarargs.cpp
@@ -0,0 +1,241 @@
+/*
+ * Copyright (C) 2015 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "FTLJSCallVarargs.h"
+
+#if ENABLE(FTL_JIT)
+
+#include "DFGGraph.h"
+#include "DFGNode.h"
+#include "DFGOperations.h"
+#include "JSCInlines.h"
+#include "LinkBuffer.h"
+#include "ScratchRegisterAllocator.h"
+#include "SetupVarargsFrame.h"
+
+namespace JSC { namespace FTL {
+
+using namespace DFG;
+
+JSCallVarargs::JSCallVarargs()
+ : m_stackmapID(UINT_MAX)
+ , m_node(nullptr)
+ , m_instructionOffset(UINT_MAX)
+{
+}
+
+JSCallVarargs::JSCallVarargs(unsigned stackmapID, Node* node)
+ : m_stackmapID(stackmapID)
+ , m_node(node)
+ , m_callBase(
+ node->op() == ConstructVarargs ? CallLinkInfo::ConstructVarargs : CallLinkInfo::CallVarargs,
+ node->origin.semantic)
+ , m_instructionOffset(0)
+{
+ ASSERT(node->op() == CallVarargs || node->op() == CallForwardVarargs || node->op() == ConstructVarargs);
+}
+
+unsigned JSCallVarargs::numSpillSlotsNeeded()
+{
+ return 4;
+}
+
+void JSCallVarargs::emit(CCallHelpers& jit, Graph& graph, int32_t spillSlotsOffset)
+{
+ // We are passed three pieces of information:
+ // - The callee.
+ // - The arguments object.
+ // - The "this" value, if it's a constructor call.
+
+ bool isCall = m_node->op() != ConstructVarargs;
+
+ CallVarargsData* data = m_node->callVarargsData();
+
+ GPRReg calleeGPR = GPRInfo::argumentGPR0;
+
+ GPRReg argumentsGPR = InvalidGPRReg;
+ GPRReg thisGPR = InvalidGPRReg;
+ bool argumentsOnStack = false;
+
+ switch (m_node->op()) {
+ case CallVarargs:
+ argumentsGPR = GPRInfo::argumentGPR1;
+ thisGPR = GPRInfo::argumentGPR2;
+ break;
+ case CallForwardVarargs:
+ thisGPR = GPRInfo::argumentGPR1;
+ argumentsOnStack = true;
+ break;
+ case ConstructVarargs:
+ argumentsGPR = GPRInfo::argumentGPR1;
+ break;
+ default:
+ RELEASE_ASSERT_NOT_REACHED();
+ break;
+ }
+
+ const unsigned calleeSpillSlot = 0;
+ const unsigned argumentsSpillSlot = 1;
+ const unsigned thisSpillSlot = 2;
+ const unsigned stackPointerSpillSlot = 3;
+
+ // Get some scratch registers.
+ RegisterSet usedRegisters;
+ usedRegisters.merge(RegisterSet::stackRegisters());
+ usedRegisters.merge(RegisterSet::reservedHardwareRegisters());
+ usedRegisters.merge(RegisterSet::calleeSaveRegisters());
+ usedRegisters.set(calleeGPR);
+ if (argumentsGPR != InvalidGPRReg)
+ usedRegisters.set(argumentsGPR);
+ if (thisGPR != InvalidGPRReg)
+ usedRegisters.set(thisGPR);
+ ScratchRegisterAllocator allocator(usedRegisters);
+ GPRReg scratchGPR1 = allocator.allocateScratchGPR();
+ GPRReg scratchGPR2 = allocator.allocateScratchGPR();
+ GPRReg scratchGPR3 = allocator.allocateScratchGPR();
+ if (argumentsOnStack)
+ argumentsGPR = allocator.allocateScratchGPR();
+ RELEASE_ASSERT(!allocator.numberOfReusedRegisters());
+
+ auto loadArguments = [&] (bool clobbered) {
+ if (argumentsOnStack) {
+ jit.load64(
+ CCallHelpers::addressFor(graph.machineArgumentsRegisterFor(m_node->origin.semantic)),
+ argumentsGPR);
+ } else if (clobbered) {
+ jit.load64(
+ CCallHelpers::addressFor(spillSlotsOffset + argumentsSpillSlot), argumentsGPR);
+ }
+ };
+
+ auto computeUsedStack = [&] (GPRReg targetGPR, unsigned extra) {
+ if (isARM64()) {
+ // Have to do this the weird way because $sp on ARM64 means zero when used in a subtraction.
+ jit.move(CCallHelpers::stackPointerRegister, targetGPR);
+ jit.negPtr(targetGPR);
+ jit.addPtr(GPRInfo::callFrameRegister, targetGPR);
+ } else {
+ jit.move(GPRInfo::callFrameRegister, targetGPR);
+ jit.subPtr(CCallHelpers::stackPointerRegister, targetGPR);
+ }
+ if (extra)
+ jit.subPtr(CCallHelpers::TrustedImm32(extra), targetGPR);
+ jit.urshiftPtr(CCallHelpers::Imm32(3), targetGPR);
+ };
+
+ auto callWithExceptionCheck = [&] (void* callee) {
+ jit.move(CCallHelpers::TrustedImmPtr(callee), GPRInfo::nonPreservedNonArgumentGPR);
+ jit.call(GPRInfo::nonPreservedNonArgumentGPR);
+ m_exceptions.append(jit.emitExceptionCheck(AssemblyHelpers::NormalExceptionCheck, AssemblyHelpers::FarJumpWidth));
+ };
+
+ loadArguments(false);
+
+ if (isARM64()) {
+ jit.move(CCallHelpers::stackPointerRegister, scratchGPR1);
+ jit.storePtr(scratchGPR1, CCallHelpers::addressFor(spillSlotsOffset + stackPointerSpillSlot));
+ } else
+ jit.storePtr(CCallHelpers::stackPointerRegister, CCallHelpers::addressFor(spillSlotsOffset + stackPointerSpillSlot));
+
+ // Attempt the forwarding fast path, if it's been requested.
+ CCallHelpers::Jump haveArguments;
+ if (m_node->op() == CallForwardVarargs) {
+ // Do the horrific foo.apply(this, arguments) optimization.
+ // FIXME: do this optimization at the IR level.
+
+ CCallHelpers::JumpList slowCase;
+ slowCase.append(jit.branchTest64(CCallHelpers::NonZero, argumentsGPR));
+
+ computeUsedStack(scratchGPR2, 0);
+ emitSetupVarargsFrameFastCase(jit, scratchGPR2, scratchGPR1, scratchGPR2, scratchGPR3, m_node->origin.semantic.inlineCallFrame, data->firstVarArgOffset, slowCase);
+
+ jit.move(calleeGPR, GPRInfo::regT0);
+ haveArguments = jit.jump();
+ slowCase.link(&jit);
+ }
+
+ // Gotta spill the callee, arguments, and this because we will need them later and we will have some
+ // calls that clobber them.
+ jit.store64(calleeGPR, CCallHelpers::addressFor(spillSlotsOffset + calleeSpillSlot));
+ if (!argumentsOnStack)
+ jit.store64(argumentsGPR, CCallHelpers::addressFor(spillSlotsOffset + argumentsSpillSlot));
+ if (isCall)
+ jit.store64(thisGPR, CCallHelpers::addressFor(spillSlotsOffset + thisSpillSlot));
+
+ unsigned extraStack = sizeof(CallerFrameAndPC) +
+ WTF::roundUpToMultipleOf(stackAlignmentBytes(), 5 * sizeof(void*));
+ computeUsedStack(scratchGPR1, 0);
+ jit.subPtr(CCallHelpers::TrustedImm32(extraStack), CCallHelpers::stackPointerRegister);
+ jit.setupArgumentsWithExecState(argumentsGPR, scratchGPR1, CCallHelpers::TrustedImm32(data->firstVarArgOffset));
+ callWithExceptionCheck(bitwise_cast<void*>(operationSizeFrameForVarargs));
+
+ jit.move(GPRInfo::returnValueGPR, scratchGPR1);
+ computeUsedStack(scratchGPR2, extraStack);
+ loadArguments(true);
+ emitSetVarargsFrame(jit, scratchGPR1, false, scratchGPR2, scratchGPR2);
+ jit.addPtr(CCallHelpers::TrustedImm32(-extraStack), scratchGPR2, CCallHelpers::stackPointerRegister);
+ jit.setupArgumentsWithExecState(scratchGPR2, argumentsGPR, CCallHelpers::TrustedImm32(data->firstVarArgOffset), scratchGPR1);
+ callWithExceptionCheck(bitwise_cast<void*>(operationSetupVarargsFrame));
+
+ jit.move(GPRInfo::returnValueGPR, scratchGPR2);
+
+ if (isCall)
+ jit.load64(CCallHelpers::addressFor(spillSlotsOffset + thisSpillSlot), thisGPR);
+ jit.load64(CCallHelpers::addressFor(spillSlotsOffset + calleeSpillSlot), GPRInfo::regT0);
+
+ if (m_node->op() == CallForwardVarargs)
+ haveArguments.link(&jit);
+
+ jit.addPtr(CCallHelpers::TrustedImm32(sizeof(CallerFrameAndPC)), scratchGPR2, CCallHelpers::stackPointerRegister);
+
+ if (isCall)
+ jit.store64(thisGPR, CCallHelpers::calleeArgumentSlot(0));
+
+ // Henceforth we make the call. The base FTL call machinery expects the callee in regT0 and for the
+ // stack frame to already be set up, which it is.
+ jit.store64(GPRInfo::regT0, CCallHelpers::calleeFrameSlot(JSStack::Callee));
+
+ m_callBase.emit(jit);
+
+ // Undo the damage we've done.
+ if (isARM64()) {
+ GPRReg scratchGPRAtReturn = CCallHelpers::selectScratchGPR(GPRInfo::returnValueGPR);
+ jit.loadPtr(CCallHelpers::addressFor(spillSlotsOffset + stackPointerSpillSlot), scratchGPRAtReturn);
+ jit.move(scratchGPRAtReturn, CCallHelpers::stackPointerRegister);
+ } else
+ jit.loadPtr(CCallHelpers::addressFor(spillSlotsOffset + stackPointerSpillSlot), CCallHelpers::stackPointerRegister);
+}
+
+void JSCallVarargs::link(VM& vm, LinkBuffer& linkBuffer, CodeLocationLabel exceptionHandler)
+{
+ m_callBase.link(vm, linkBuffer);
+ linkBuffer.link(m_exceptions, exceptionHandler);
+}
+
+} } // namespace JSC::FTL
+
+#endif // ENABLE(FTL_JIT)
+
diff --git a/Source/JavaScriptCore/ftl/FTLJSCallVarargs.h b/Source/JavaScriptCore/ftl/FTLJSCallVarargs.h
new file mode 100644
index 0000000..cdaefb9
--- /dev/null
+++ b/Source/JavaScriptCore/ftl/FTLJSCallVarargs.h
@@ -0,0 +1,78 @@
+/*
+ * Copyright (C) 2015 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef FTLJSCallVarargs_h
+#define FTLJSCallVarargs_h
+
+#if ENABLE(FTL_JIT)
+
+#include "FTLJSCallBase.h"
+
+namespace JSC {
+
+class LinkBuffer;
+
+namespace DFG {
+class Graph;
+struct Node;
+}
+
+namespace FTL {
+
+class JSCallVarargs {
+public:
+ JSCallVarargs();
+ JSCallVarargs(unsigned stackmapID, DFG::Node*);
+
+ DFG::Node* node() const { return m_node; }
+
+ static unsigned numSpillSlotsNeeded();
+
+ void emit(CCallHelpers&, DFG::Graph&, int32_t spillSlotsOffset);
+ void link(VM&, LinkBuffer&, CodeLocationLabel exceptionHandler);
+
+ unsigned stackmapID() const { return m_stackmapID; }
+
+ bool operator<(const JSCallVarargs& other) const
+ {
+ return m_instructionOffset < other.m_instructionOffset;
+ }
+
+private:
+ unsigned m_stackmapID;
+ DFG::Node* m_node;
+ JSCallBase m_callBase;
+ CCallHelpers::JumpList m_exceptions;
+
+public:
+ uint32_t m_instructionOffset;
+};
+
+} } // namespace JSC::FTL
+
+#endif // ENABLE(FTL_JIT)
+
+#endif // FTLJSCallVarargs_h
+
diff --git a/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp b/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
index 92bc133..00b1707 100644
--- a/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
+++ b/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
@@ -192,6 +192,30 @@
m_out.stackmapIntrinsic(), m_out.constInt64(m_ftlState.capturedStackmapID),
m_out.int32Zero, capturedAlloca);
+ // If we have any CallVarargs then we need to have a spill slot for it.
+ bool hasVarargs = false;
+ for (BasicBlock* block : preOrder) {
+ for (Node* node : *block) {
+ switch (node->op()) {
+ case CallVarargs:
+ case CallForwardVarargs:
+ case ConstructVarargs:
+ hasVarargs = true;
+ break;
+ default:
+ break;
+ }
+ }
+ }
+ if (hasVarargs) {
+ LValue varargsSpillSlots = m_out.alloca(
+ arrayType(m_out.int64, JSCallVarargs::numSpillSlotsNeeded()));
+ m_ftlState.varargsSpillSlotsStackmapID = m_stackmapIDs++;
+ m_out.call(
+ m_out.stackmapIntrinsic(), m_out.constInt64(m_ftlState.varargsSpillSlotsStackmapID),
+ m_out.int32Zero, varargsSpillSlots);
+ }
+
m_callFrame = m_out.ptrToInt(
m_out.call(m_out.frameAddressIntrinsic(), m_out.int32Zero), m_out.intPtr);
m_tagTypeNumber = m_out.constInt64(TagTypeNumber);
@@ -698,6 +722,14 @@
case Construct:
compileCallOrConstruct();
break;
+ case CallVarargs:
+ case CallForwardVarargs:
+ case ConstructVarargs:
+ compileCallOrConstructVarargs();
+ break;
+ case LoadVarargs:
+ compileLoadVarargs();
+ break;
#if ENABLE(FTL_NATIVE_CALL_INLINING)
case NativeCall:
case NativeConstruct:
@@ -2082,8 +2114,22 @@
{
checkArgumentsNotCreated();
- DFG_ASSERT(m_graph, m_node, !m_node->origin.semantic.inlineCallFrame);
- setInt32(m_out.add(m_out.load32NonNegative(payloadFor(JSStack::ArgumentCount)), m_out.constInt32(-1)));
+ if (m_node->origin.semantic.inlineCallFrame
+ && !m_node->origin.semantic.inlineCallFrame->isVarargs()) {
+ setInt32(
+ m_out.constInt32(
+ m_node->origin.semantic.inlineCallFrame->arguments.size() - 1));
+ } else {
+ VirtualRegister argumentCountRegister;
+ if (!m_node->origin.semantic.inlineCallFrame)
+ argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+ else
+ argumentCountRegister = m_node->origin.semantic.inlineCallFrame->argumentCountRegister;
+ setInt32(
+ m_out.add(
+ m_out.load32NonNegative(payloadFor(argumentCountRegister)),
+ m_out.constInt32(-1)));
+ }
}
void compileGetMyArgumentByVal()
@@ -2095,10 +2141,17 @@
LValue index = lowInt32(m_node->child1());
LValue limit;
- if (codeOrigin.inlineCallFrame)
+ if (codeOrigin.inlineCallFrame
+ && !codeOrigin.inlineCallFrame->isVarargs())
limit = m_out.constInt32(codeOrigin.inlineCallFrame->arguments.size() - 1);
- else
- limit = m_out.sub(m_out.load32(payloadFor(JSStack::ArgumentCount)), m_out.int32One);
+ else {
+ VirtualRegister argumentCountRegister;
+ if (!codeOrigin.inlineCallFrame)
+ argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+ else
+ argumentCountRegister = codeOrigin.inlineCallFrame->argumentCountRegister;
+ limit = m_out.sub(m_out.load32(payloadFor(argumentCountRegister)), m_out.int32One);
+ }
speculate(Uncountable, noValue(), 0, m_out.aboveOrEqual(index, limit));
@@ -3811,6 +3864,87 @@
setJSValue(call);
}
+
+ void compileCallOrConstructVarargs()
+ {
+ LValue jsCallee = lowJSValue(m_node->child1());
+
+ LValue jsArguments = nullptr;
+ LValue thisArg = nullptr;
+
+ switch (m_node->op()) {
+ case CallVarargs:
+ jsArguments = lowJSValue(m_node->child2());
+ thisArg = lowJSValue(m_node->child3());
+ break;
+ case CallForwardVarargs:
+ thisArg = lowJSValue(m_node->child2());
+ break;
+ case ConstructVarargs:
+ jsArguments = lowJSValue(m_node->child2());
+ break;
+ default:
+ DFG_CRASH(m_graph, m_node, "bad node type");
+ break;
+ }
+
+ unsigned stackmapID = m_stackmapIDs++;
+
+ Vector<LValue> arguments;
+ arguments.append(m_out.constInt64(stackmapID));
+ arguments.append(m_out.constInt32(sizeOfICFor(m_node)));
+ arguments.append(constNull(m_out.ref8));
+ arguments.append(m_out.constInt32(1 + !!jsArguments + !!thisArg));
+ arguments.append(jsCallee);
+ if (jsArguments)
+ arguments.append(jsArguments);
+ if (thisArg)
+ arguments.append(thisArg);
+
+ callPreflight();
+
+ LValue call = m_out.call(m_out.patchpointInt64Intrinsic(), arguments);
+ setInstructionCallingConvention(call, LLVMCCallConv);
+
+ m_ftlState.jsCallVarargses.append(JSCallVarargs(stackmapID, m_node));
+
+ setJSValue(call);
+ }
+
+ void compileLoadVarargs()
+ {
+ LoadVarargsData* data = m_node->loadVarargsData();
+ LValue jsArguments = lowJSValue(m_node->child1());
+
+ LValue length = vmCall(
+ m_out.operation(operationSizeOfVarargs), m_callFrame, jsArguments,
+ m_out.constInt32(data->offset));
+
+ // FIXME: There is a chance that we will call an effectful length property twice. This is safe
+ // from the standpoint of the VM's integrity, but it's subtly wrong from a spec compliance
+ // standpoint. The best solution would be one where we can exit *into* the op_call_varargs right
+ // past the sizing.
+ // https://bugs.webkit.org/show_bug.cgi?id=141448
+
+ LValue lengthIncludingThis = m_out.add(length, m_out.int32One);
+ speculate(
+ VarargsOverflow, noValue(), nullptr,
+ m_out.above(lengthIncludingThis, m_out.constInt32(data->limit)));
+
+ m_out.store32(lengthIncludingThis, payloadFor(data->machineCount));
+
+ // FIXME: This computation is rather silly. If operationLoadVarargs just took a pointer instead
+ // of a VirtualRegister, we wouldn't have to do this.
+ // https://bugs.webkit.org/show_bug.cgi?id=141660
+ LValue machineStart = m_out.lShr(
+ m_out.sub(addressFor(data->machineStart.offset()).value(), m_callFrame),
+ m_out.constIntPtr(3));
+
+ vmCall(
+ m_out.operation(operationLoadVarargs), m_callFrame,
+ m_out.castToInt32(machineStart), jsArguments, m_out.constInt32(data->offset),
+ length, m_out.constInt32(data->mandatoryMinimum));
+ }
void compileJump()
{
@@ -4115,7 +4249,7 @@
LValue call = m_out.call(
m_out.patchpointInt64Intrinsic(),
- m_out.constInt64(stackmapID), m_out.constInt32(sizeOfCheckIn()),
+ m_out.constInt64(stackmapID), m_out.constInt32(sizeOfIn()),
constNull(m_out.ref8), m_out.constInt32(1), cell);
setInstructionCallingConvention(call, LLVMAnyRegCallConv);
@@ -6510,7 +6644,7 @@
// Buffer is out of space, flush it.
m_out.appendTo(bufferIsFull, continuation);
- vmCall(m_out.operation(operationFlushWriteBarrierBuffer), m_callFrame, base, NoExceptions);
+ vmCallNoExceptions(m_out.operation(operationFlushWriteBarrierBuffer), m_callFrame, base);
m_out.jump(continuation);
m_out.appendTo(continuation, lastNext);
@@ -6519,44 +6653,23 @@
#endif
}
- enum ExceptionCheckMode { NoExceptions, CheckExceptions };
-
- LValue vmCall(LValue function, ExceptionCheckMode mode = CheckExceptions)
+ template<typename... Args>
+ LValue vmCall(LValue function, Args... args)
{
callPreflight();
- LValue result = m_out.call(function);
- callCheck(mode);
- return result;
- }
- LValue vmCall(LValue function, LValue arg1, ExceptionCheckMode mode = CheckExceptions)
- {
- callPreflight();
- LValue result = m_out.call(function, arg1);
- callCheck(mode);
- return result;
- }
- LValue vmCall(LValue function, LValue arg1, LValue arg2, ExceptionCheckMode mode = CheckExceptions)
- {
- callPreflight();
- LValue result = m_out.call(function, arg1, arg2);
- callCheck(mode);
- return result;
- }
- LValue vmCall(LValue function, LValue arg1, LValue arg2, LValue arg3, ExceptionCheckMode mode = CheckExceptions)
- {
- callPreflight();
- LValue result = m_out.call(function, arg1, arg2, arg3);
- callCheck(mode);
- return result;
- }
- LValue vmCall(LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, ExceptionCheckMode mode = CheckExceptions)
- {
- callPreflight();
- LValue result = m_out.call(function, arg1, arg2, arg3, arg4);
- callCheck(mode);
+ LValue result = m_out.call(function, args...);
+ callCheck();
return result;
}
+ template<typename... Args>
+ LValue vmCallNoExceptions(LValue function, Args... args)
+ {
+ callPreflight();
+ LValue result = m_out.call(function, args...);
+ return result;
+ }
+
void callPreflight(CodeOrigin codeOrigin)
{
m_out.store32(
@@ -6570,11 +6683,8 @@
callPreflight(m_node->origin.semantic);
}
- void callCheck(ExceptionCheckMode mode = CheckExceptions)
+ void callCheck()
{
- if (mode == NoExceptions)
- return;
-
if (Options::enableExceptionFuzz())
m_out.call(m_out.operation(operationExceptionFuzz));
diff --git a/Source/JavaScriptCore/ftl/FTLOutput.h b/Source/JavaScriptCore/ftl/FTLOutput.h
index af82cbd..27febd3 100644
--- a/Source/JavaScriptCore/ftl/FTLOutput.h
+++ b/Source/JavaScriptCore/ftl/FTLOutput.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -357,13 +357,8 @@
LValue call(LValue function, const VectorType& vector) { return buildCall(m_builder, function, vector); }
LValue call(LValue function) { return buildCall(m_builder, function); }
LValue call(LValue function, LValue arg1) { return buildCall(m_builder, function, arg1); }
- LValue call(LValue function, LValue arg1, LValue arg2) { return buildCall(m_builder, function, arg1, arg2); }
- LValue call(LValue function, LValue arg1, LValue arg2, LValue arg3) { return buildCall(m_builder, function, arg1, arg2, arg3); }
- LValue call(LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4) { return buildCall(m_builder, function, arg1, arg2, arg3, arg4); }
- LValue call(LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5) { return buildCall(m_builder, function, arg1, arg2, arg3, arg4, arg5); }
- LValue call(LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5, LValue arg6) { return buildCall(m_builder, function, arg1, arg2, arg3, arg4, arg5, arg6); }
- LValue call(LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5, LValue arg6, LValue arg7) { return buildCall(m_builder, function, arg1, arg2, arg3, arg4, arg5, arg6, arg7); }
- LValue call(LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5, LValue arg6, LValue arg7, LValue arg8) { return buildCall(m_builder, function, arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8); }
+ template<typename... Args>
+ LValue call(LValue function, LValue arg1, Args... args) { return buildCall(m_builder, function, arg1, args...); }
template<typename FunctionType>
LValue operation(FunctionType function)
diff --git a/Source/JavaScriptCore/ftl/FTLState.cpp b/Source/JavaScriptCore/ftl/FTLState.cpp
index 62ce058..7937050 100644
--- a/Source/JavaScriptCore/ftl/FTLState.cpp
+++ b/Source/JavaScriptCore/ftl/FTLState.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -49,6 +49,10 @@
, module(0)
, function(0)
, generatedFunction(0)
+ , handleStackOverflowExceptionStackmapID(UINT_MAX)
+ , handleExceptionStackmapID(UINT_MAX)
+ , capturedStackmapID(UINT_MAX)
+ , varargsSpillSlotsStackmapID(UINT_MAX)
, unwindDataSection(0)
, unwindDataSectionSize(0)
{
diff --git a/Source/JavaScriptCore/ftl/FTLState.h b/Source/JavaScriptCore/ftl/FTLState.h
index e986517..56e17a3 100644
--- a/Source/JavaScriptCore/ftl/FTLState.h
+++ b/Source/JavaScriptCore/ftl/FTLState.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -36,6 +36,7 @@
#include "FTLJITCode.h"
#include "FTLJITFinalizer.h"
#include "FTLJSCall.h"
+#include "FTLJSCallVarargs.h"
#include "FTLStackMaps.h"
#include "FTLState.h"
#include <wtf/Noncopyable.h>
@@ -71,10 +72,12 @@
unsigned handleStackOverflowExceptionStackmapID;
unsigned handleExceptionStackmapID;
unsigned capturedStackmapID;
+ unsigned varargsSpillSlotsStackmapID;
SegmentedVector<GetByIdDescriptor> getByIds;
SegmentedVector<PutByIdDescriptor> putByIds;
SegmentedVector<CheckInDescriptor> checkIns;
Vector<JSCall> jsCalls;
+ Vector<JSCallVarargs> jsCallVarargses;
Vector<CString> codeSectionNames;
Vector<CString> dataSectionNames;
void* unwindDataSection;
diff --git a/Source/JavaScriptCore/interpreter/Interpreter.cpp b/Source/JavaScriptCore/interpreter/Interpreter.cpp
index 96eb2ab..74768c6 100644
--- a/Source/JavaScriptCore/interpreter/Interpreter.cpp
+++ b/Source/JavaScriptCore/interpreter/Interpreter.cpp
@@ -134,7 +134,7 @@
return interpreter->execute(eval, callFrame, thisValue, callerScopeChain);
}
-unsigned sizeFrameForVarargs(CallFrame* callFrame, JSStack* stack, JSValue arguments, unsigned numUsedStackSlots, uint32_t firstVarArgOffset)
+unsigned sizeOfVarargs(CallFrame* callFrame, JSValue arguments, uint32_t firstVarArgOffset)
{
unsigned length;
if (!arguments)
@@ -156,6 +156,13 @@
else
length = 0;
+ return length;
+}
+
+unsigned sizeFrameForVarargs(CallFrame* callFrame, JSStack* stack, JSValue arguments, unsigned numUsedStackSlots, uint32_t firstVarArgOffset)
+{
+ unsigned length = sizeOfVarargs(callFrame, arguments, firstVarArgOffset);
+
CallFrame* calleeFrame = calleeFrameForVarargs(callFrame, numUsedStackSlots, length + 1);
if (length > Arguments::MaxArguments || !stack->ensureCapacityFor(calleeFrame->registers())) {
throwStackOverflowError(callFrame);
diff --git a/Source/JavaScriptCore/interpreter/Interpreter.h b/Source/JavaScriptCore/interpreter/Interpreter.h
index 7eed235..3bfed01 100644
--- a/Source/JavaScriptCore/interpreter/Interpreter.h
+++ b/Source/JavaScriptCore/interpreter/Interpreter.h
@@ -308,6 +308,7 @@
return CallFrame::create(callFrame->registers() - paddedCalleeFrameOffset);
}
+ unsigned sizeOfVarargs(CallFrame* exec, JSValue arguments, uint32_t firstVarArgOffset);
unsigned sizeFrameForVarargs(CallFrame* exec, JSStack*, JSValue arguments, unsigned numUsedStackSlots, uint32_t firstVarArgOffset);
void loadVarargs(CallFrame* execCaller, VirtualRegister firstElementDest, JSValue source, uint32_t offset, uint32_t length);
void setupVarargsFrame(CallFrame* execCaller, CallFrame* execCallee, JSValue arguments, uint32_t firstVarArgOffset, uint32_t length);
diff --git a/Source/JavaScriptCore/interpreter/StackVisitor.cpp b/Source/JavaScriptCore/interpreter/StackVisitor.cpp
index b3036b7..c3a088d 100644
--- a/Source/JavaScriptCore/interpreter/StackVisitor.cpp
+++ b/Source/JavaScriptCore/interpreter/StackVisitor.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -149,7 +149,10 @@
m_frame.m_callFrame = callFrame;
m_frame.m_inlineCallFrame = inlineCallFrame;
- m_frame.m_argumentCountIncludingThis = inlineCallFrame->arguments.size();
+ if (inlineCallFrame->argumentCountRegister.isValid())
+ m_frame.m_argumentCountIncludingThis = callFrame->r(inlineCallFrame->argumentCountRegister.offset()).unboxedInt32();
+ else
+ m_frame.m_argumentCountIncludingThis = inlineCallFrame->arguments.size();
m_frame.m_codeBlock = inlineCallFrame->baselineCodeBlock();
m_frame.m_bytecodeOffset = codeOrigin->bytecodeIndex;
diff --git a/Source/JavaScriptCore/jit/AssemblyHelpers.cpp b/Source/JavaScriptCore/jit/AssemblyHelpers.cpp
index 059e5d9..443cd6c 100644
--- a/Source/JavaScriptCore/jit/AssemblyHelpers.cpp
+++ b/Source/JavaScriptCore/jit/AssemblyHelpers.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -212,15 +212,27 @@
addPtr(TrustedImm32(stackAlignmentBytes()), stackPointerRegister);
}
-AssemblyHelpers::Jump AssemblyHelpers::emitExceptionCheck(ExceptionCheckKind kind)
+AssemblyHelpers::Jump AssemblyHelpers::emitExceptionCheck(ExceptionCheckKind kind, ExceptionJumpWidth width)
{
callExceptionFuzz();
+
+ if (width == FarJumpWidth)
+ kind = (kind == NormalExceptionCheck ? InvertedExceptionCheck : NormalExceptionCheck);
+ Jump result;
#if USE(JSVALUE64)
- return branchTest64(kind == NormalExceptionCheck ? NonZero : Zero, AbsoluteAddress(vm()->addressOfException()));
+ result = branchTest64(kind == NormalExceptionCheck ? NonZero : Zero, AbsoluteAddress(vm()->addressOfException()));
#elif USE(JSVALUE32_64)
- return branch32(kind == NormalExceptionCheck ? NotEqual : Equal, AbsoluteAddress(reinterpret_cast<char*>(vm()->addressOfException()) + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), TrustedImm32(JSValue::EmptyValueTag));
+ result = branch32(kind == NormalExceptionCheck ? NotEqual : Equal, AbsoluteAddress(reinterpret_cast<char*>(vm()->addressOfException()) + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), TrustedImm32(JSValue::EmptyValueTag));
#endif
+
+ if (width == NormalJumpWidth)
+ return result;
+
+ PatchableJump realJump = patchableJump();
+ result.link(this);
+
+ return realJump.m_jump;
}
void AssemblyHelpers::emitStoreStructureWithTypeInfo(AssemblyHelpers& jit, TrustedImmPtr structure, RegisterID dest)
diff --git a/Source/JavaScriptCore/jit/AssemblyHelpers.h b/Source/JavaScriptCore/jit/AssemblyHelpers.h
index 1a40059..70213ef 100644
--- a/Source/JavaScriptCore/jit/AssemblyHelpers.h
+++ b/Source/JavaScriptCore/jit/AssemblyHelpers.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -338,6 +338,10 @@
}
static Address addressFor(VirtualRegister virtualRegister)
{
+ // NB. It's tempting on some architectures to sometimes use an offset from the stack
+ // register because for some offsets that will encode to a smaller instruction. But we
+ // cannot do this. We use this in places where the stack pointer has been moved to some
+ // unpredictable location.
ASSERT(virtualRegister.isValid());
return Address(GPRInfo::callFrameRegister, virtualRegister.offset() * sizeof(Register));
}
@@ -367,39 +371,39 @@
}
// Access to our fixed callee CallFrame.
- Address calleeFrameSlot(int slot)
+ static Address calleeFrameSlot(int slot)
{
ASSERT(slot >= JSStack::CallerFrameAndPCSize);
- return MacroAssembler::Address(MacroAssembler::stackPointerRegister, sizeof(Register) * (slot - JSStack::CallerFrameAndPCSize));
+ return Address(stackPointerRegister, sizeof(Register) * (slot - JSStack::CallerFrameAndPCSize));
}
// Access to our fixed callee CallFrame.
- Address calleeArgumentSlot(int argument)
+ static Address calleeArgumentSlot(int argument)
{
return calleeFrameSlot(virtualRegisterForArgument(argument).offset());
}
- Address calleeFrameTagSlot(int slot)
+ static Address calleeFrameTagSlot(int slot)
{
return calleeFrameSlot(slot).withOffset(TagOffset);
}
- Address calleeFramePayloadSlot(int slot)
+ static Address calleeFramePayloadSlot(int slot)
{
return calleeFrameSlot(slot).withOffset(PayloadOffset);
}
- Address calleeArgumentTagSlot(int argument)
+ static Address calleeArgumentTagSlot(int argument)
{
return calleeArgumentSlot(argument).withOffset(TagOffset);
}
- Address calleeArgumentPayloadSlot(int argument)
+ static Address calleeArgumentPayloadSlot(int argument)
{
return calleeArgumentSlot(argument).withOffset(PayloadOffset);
}
- Address calleeFrameCallerFrame()
+ static Address calleeFrameCallerFrame()
{
return calleeFrameSlot(0).withOffset(CallFrame::callerFrameOffset());
}
@@ -409,21 +413,24 @@
return branch8(Below, Address(cellReg, JSCell::typeInfoTypeOffset()), TrustedImm32(ObjectType));
}
- static GPRReg selectScratchGPR(GPRReg preserve1 = InvalidGPRReg, GPRReg preserve2 = InvalidGPRReg, GPRReg preserve3 = InvalidGPRReg, GPRReg preserve4 = InvalidGPRReg)
+ static GPRReg selectScratchGPR(GPRReg preserve1 = InvalidGPRReg, GPRReg preserve2 = InvalidGPRReg, GPRReg preserve3 = InvalidGPRReg, GPRReg preserve4 = InvalidGPRReg, GPRReg preserve5 = InvalidGPRReg)
{
- if (preserve1 != GPRInfo::regT0 && preserve2 != GPRInfo::regT0 && preserve3 != GPRInfo::regT0 && preserve4 != GPRInfo::regT0)
+ if (preserve1 != GPRInfo::regT0 && preserve2 != GPRInfo::regT0 && preserve3 != GPRInfo::regT0 && preserve4 != GPRInfo::regT0 && preserve5 != GPRInfo::regT0)
return GPRInfo::regT0;
- if (preserve1 != GPRInfo::regT1 && preserve2 != GPRInfo::regT1 && preserve3 != GPRInfo::regT1 && preserve4 != GPRInfo::regT1)
+ if (preserve1 != GPRInfo::regT1 && preserve2 != GPRInfo::regT1 && preserve3 != GPRInfo::regT1 && preserve4 != GPRInfo::regT1 && preserve5 != GPRInfo::regT1)
return GPRInfo::regT1;
- if (preserve1 != GPRInfo::regT2 && preserve2 != GPRInfo::regT2 && preserve3 != GPRInfo::regT2 && preserve4 != GPRInfo::regT2)
+ if (preserve1 != GPRInfo::regT2 && preserve2 != GPRInfo::regT2 && preserve3 != GPRInfo::regT2 && preserve4 != GPRInfo::regT2 && preserve5 != GPRInfo::regT2)
return GPRInfo::regT2;
- if (preserve1 != GPRInfo::regT3 && preserve2 != GPRInfo::regT3 && preserve3 != GPRInfo::regT3 && preserve4 != GPRInfo::regT3)
+ if (preserve1 != GPRInfo::regT3 && preserve2 != GPRInfo::regT3 && preserve3 != GPRInfo::regT3 && preserve4 != GPRInfo::regT3 && preserve5 != GPRInfo::regT3)
return GPRInfo::regT3;
- return GPRInfo::regT4;
+ if (preserve1 != GPRInfo::regT4 && preserve2 != GPRInfo::regT4 && preserve3 != GPRInfo::regT4 && preserve4 != GPRInfo::regT4 && preserve5 != GPRInfo::regT4)
+ return GPRInfo::regT4;
+
+ return GPRInfo::regT5;
}
// Add a debug call. This call has no effect on JIT code execution state.
@@ -571,7 +578,9 @@
void callExceptionFuzz();
enum ExceptionCheckKind { NormalExceptionCheck, InvertedExceptionCheck };
- Jump emitExceptionCheck(ExceptionCheckKind kind = NormalExceptionCheck);
+ enum ExceptionJumpWidth { NormalJumpWidth, FarJumpWidth };
+ Jump emitExceptionCheck(
+ ExceptionCheckKind = NormalExceptionCheck, ExceptionJumpWidth = NormalJumpWidth);
#if ENABLE(SAMPLING_COUNTERS)
static void emitCount(MacroAssembler& jit, AbstractSamplingCounter& counter, int32_t increment = 1)
diff --git a/Source/JavaScriptCore/jit/CCallHelpers.h b/Source/JavaScriptCore/jit/CCallHelpers.h
index b2a795e..685c12a 100644
--- a/Source/JavaScriptCore/jit/CCallHelpers.h
+++ b/Source/JavaScriptCore/jit/CCallHelpers.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -289,6 +289,15 @@
addCallArgument(arg3);
}
+ ALWAYS_INLINE void setupArgumentsWithExecState(GPRReg arg1, TrustedImm32 arg2, TrustedImm32 arg3)
+ {
+ resetCallArguments();
+ addCallArgument(GPRInfo::callFrameRegister);
+ addCallArgument(arg1);
+ addCallArgument(arg2);
+ addCallArgument(arg3);
+ }
+
ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImm32 arg1, GPRReg arg2, GPRReg arg3)
{
resetCallArguments();
@@ -298,6 +307,29 @@
addCallArgument(arg3);
}
+ ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImm32 arg1, GPRReg arg2, TrustedImm32 arg3, GPRReg arg4, TrustedImm32 arg5)
+ {
+ resetCallArguments();
+ addCallArgument(GPRInfo::callFrameRegister);
+ addCallArgument(arg1);
+ addCallArgument(arg2);
+ addCallArgument(arg3);
+ addCallArgument(arg4);
+ addCallArgument(arg5);
+ }
+
+ ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImm32 arg1, GPRReg arg2, GPRReg arg3, TrustedImm32 arg4, GPRReg arg5, TrustedImm32 arg6)
+ {
+ resetCallArguments();
+ addCallArgument(GPRInfo::callFrameRegister);
+ addCallArgument(arg1);
+ addCallArgument(arg2);
+ addCallArgument(arg3);
+ addCallArgument(arg4);
+ addCallArgument(arg5);
+ addCallArgument(arg6);
+ }
+
ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImmPtr arg1, GPRReg arg2, GPRReg arg3)
{
resetCallArguments();
@@ -1785,6 +1817,15 @@
move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
}
+ ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImm32 arg1, GPRReg arg2, TrustedImm32 arg3, GPRReg arg4, TrustedImm32 arg5)
+ {
+ setupTwoStubArgsGPR<GPRInfo::argumentGPR2, GPRInfo::argumentGPR4>(arg2, arg4);
+ move(arg1, GPRInfo::argumentGPR1);
+ move(arg3, GPRInfo::argumentGPR3);
+ move(arg5, GPRInfo::argumentGPR5);
+ move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
+ }
+
ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImmPtr arg1, GPRReg arg2, GPRReg arg3, TrustedImm32 arg4, TrustedImm32 arg5)
{
setupTwoStubArgsGPR<GPRInfo::argumentGPR2, GPRInfo::argumentGPR3>(arg2, arg3);
diff --git a/Source/JavaScriptCore/jit/GPRInfo.h b/Source/JavaScriptCore/jit/GPRInfo.h
index 4b770a6..df8c3c0 100644
--- a/Source/JavaScriptCore/jit/GPRInfo.h
+++ b/Source/JavaScriptCore/jit/GPRInfo.h
@@ -402,6 +402,7 @@
static const GPRReg returnValueGPR = X86Registers::eax; // regT0
static const GPRReg returnValueGPR2 = X86Registers::edx; // regT1
static const GPRReg nonPreservedNonReturnGPR = X86Registers::esi;
+ static const GPRReg nonPreservedNonArgumentGPR = X86Registers::r10;
static const GPRReg patchpointScratchRegister = MacroAssembler::scratchRegister;
static GPRReg toRegister(unsigned index)
@@ -577,6 +578,7 @@
static const GPRReg returnValueGPR = ARM64Registers::x0; // regT0
static const GPRReg returnValueGPR2 = ARM64Registers::x1; // regT1
static const GPRReg nonPreservedNonReturnGPR = ARM64Registers::x2;
+ static const GPRReg nonPreservedNonArgumentGPR = ARM64Registers::x8;
static const GPRReg patchpointScratchRegister = ARM64Registers::ip0;
// GPRReg mapping is direct, the machine register numbers can
diff --git a/Source/JavaScriptCore/jit/JIT.cpp b/Source/JavaScriptCore/jit/JIT.cpp
index d80eb84..9abf94b 100644
--- a/Source/JavaScriptCore/jit/JIT.cpp
+++ b/Source/JavaScriptCore/jit/JIT.cpp
@@ -464,11 +464,6 @@
m_canBeOptimizedOrInlined = false;
m_shouldEmitProfiling = false;
break;
- case DFG::CanInline:
- m_canBeOptimized = false;
- m_canBeOptimizedOrInlined = true;
- m_shouldEmitProfiling = true;
- break;
case DFG::CanCompile:
case DFG::CanCompileAndInline:
m_canBeOptimized = true;
diff --git a/Source/JavaScriptCore/jit/JIT.h b/Source/JavaScriptCore/jit/JIT.h
index d5bad5e..9b7ac09 100644
--- a/Source/JavaScriptCore/jit/JIT.h
+++ b/Source/JavaScriptCore/jit/JIT.h
@@ -296,7 +296,7 @@
void compileOpCall(OpcodeID, Instruction*, unsigned callLinkInfoIndex);
void compileOpCallSlowCase(OpcodeID, Instruction*, Vector<SlowCaseEntry>::iterator&, unsigned callLinkInfoIndex);
- void compileSetupVarargsFrame(Instruction*);
+ void compileSetupVarargsFrame(Instruction*, CallLinkInfo*);
void compileCallEval(Instruction*);
void compileCallEvalSlowCase(Instruction*, Vector<SlowCaseEntry>::iterator&);
void emitPutCallResult(Instruction*);
diff --git a/Source/JavaScriptCore/jit/JITCall.cpp b/Source/JavaScriptCore/jit/JITCall.cpp
index aaa9a3d..eb037ab 100644
--- a/Source/JavaScriptCore/jit/JITCall.cpp
+++ b/Source/JavaScriptCore/jit/JITCall.cpp
@@ -55,7 +55,7 @@
emitPutVirtualRegister(dst);
}
-void JIT::compileSetupVarargsFrame(Instruction* instruction)
+void JIT::compileSetupVarargsFrame(Instruction* instruction, CallLinkInfo* info)
{
int thisValue = instruction[3].u.operand;
int arguments = instruction[4].u.operand;
@@ -90,6 +90,16 @@
if (canOptimize)
end.link(this);
+ // Profile the argument count.
+ load32(Address(regT1, JSStack::ArgumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset), regT2);
+ load8(&info->maxNumArguments, regT0);
+ Jump notBiggest = branch32(Above, regT0, regT2);
+ Jump notSaturated = branch32(BelowOrEqual, regT2, TrustedImm32(255));
+ move(TrustedImm32(255), regT2);
+ notSaturated.link(this);
+ store8(regT2, &info->maxNumArguments);
+ notBiggest.link(this);
+
// Initialize 'this'.
emitGetVirtualRegister(thisValue, regT0);
store64(regT0, Address(regT1, CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register))));
@@ -134,6 +144,8 @@
void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned callLinkInfoIndex)
{
+ CallLinkInfo* info = m_codeBlock->addCallLinkInfo();
+
int callee = instruction[2].u.operand;
/* Caller always:
@@ -152,7 +164,7 @@
COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_call_varargs), call_and_call_varargs_opcodes_must_be_same_length);
COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_construct_varargs), call_and_construct_varargs_opcodes_must_be_same_length);
if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs)
- compileSetupVarargsFrame(instruction);
+ compileSetupVarargsFrame(instruction, info);
else {
int argCount = instruction[3].u.operand;
int registerOffset = -instruction[4].u.operand;
@@ -176,8 +188,6 @@
store64(regT0, Address(stackPointerRegister, JSStack::Callee * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC)));
- CallLinkInfo* info = m_codeBlock->addCallLinkInfo();
-
if (opcodeID == op_call_eval) {
compileCallEval(instruction);
return;
diff --git a/Source/JavaScriptCore/jit/JITCall32_64.cpp b/Source/JavaScriptCore/jit/JITCall32_64.cpp
index ee5e371..4c4fe7c 100644
--- a/Source/JavaScriptCore/jit/JITCall32_64.cpp
+++ b/Source/JavaScriptCore/jit/JITCall32_64.cpp
@@ -115,7 +115,7 @@
compileOpCall(op_construct, currentInstruction, m_callLinkInfoIndex++);
}
-void JIT::compileSetupVarargsFrame(Instruction* instruction)
+void JIT::compileSetupVarargsFrame(Instruction* instruction, CallLinkInfo* info)
{
int thisValue = instruction[3].u.operand;
int arguments = instruction[4].u.operand;
@@ -150,6 +150,16 @@
if (canOptimize)
end.link(this);
+ // Profile the argument count.
+ load32(Address(regT1, JSStack::ArgumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset), regT2);
+ load8(&info->maxNumArguments, regT0);
+ Jump notBiggest = branch32(Above, regT0, regT2);
+ Jump notSaturated = branch32(BelowOrEqual, regT2, TrustedImm32(255));
+ move(TrustedImm32(255), regT2);
+ notSaturated.link(this);
+ store8(regT2, &info->maxNumArguments);
+ notBiggest.link(this);
+
// Initialize 'this'.
emitLoad(thisValue, regT2, regT0);
store32(regT0, Address(regT1, PayloadOffset + (CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register)))));
@@ -198,6 +208,7 @@
void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned callLinkInfoIndex)
{
+ CallLinkInfo* info = m_codeBlock->addCallLinkInfo();
int callee = instruction[2].u.operand;
/* Caller always:
@@ -214,7 +225,7 @@
*/
if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs)
- compileSetupVarargsFrame(instruction);
+ compileSetupVarargsFrame(instruction, info);
else {
int argCount = instruction[3].u.operand;
int registerOffset = -instruction[4].u.operand;
@@ -239,8 +250,6 @@
store32(regT0, Address(stackPointerRegister, JSStack::Callee * static_cast<int>(sizeof(Register)) + PayloadOffset - sizeof(CallerFrameAndPC)));
store32(regT1, Address(stackPointerRegister, JSStack::Callee * static_cast<int>(sizeof(Register)) + TagOffset - sizeof(CallerFrameAndPC)));
- CallLinkInfo* info = m_codeBlock->addCallLinkInfo();
-
if (opcodeID == op_call_eval) {
compileCallEval(instruction);
return;
diff --git a/Source/JavaScriptCore/jit/JITOperations.h b/Source/JavaScriptCore/jit/JITOperations.h
index 767ea5b..763c0dd 100644
--- a/Source/JavaScriptCore/jit/JITOperations.h
+++ b/Source/JavaScriptCore/jit/JITOperations.h
@@ -150,6 +150,7 @@
typedef int32_t JIT_OPERATION (*Z_JITOperation_D)(double);
typedef int32_t JIT_OPERATION (*Z_JITOperation_E)(ExecState*);
typedef int32_t JIT_OPERATION (*Z_JITOperation_EC)(ExecState*, JSCell*);
+typedef int32_t JIT_OPERATION (*Z_JITOperation_EJZ)(ExecState*, EncodedJSValue, int32_t);
typedef int32_t JIT_OPERATION (*Z_JITOperation_EJZZ)(ExecState*, EncodedJSValue, int32_t, int32_t);
typedef size_t JIT_OPERATION (*S_JITOperation_ECC)(ExecState*, JSCell*, JSCell*);
typedef size_t JIT_OPERATION (*S_JITOperation_EJ)(ExecState*, EncodedJSValue);
@@ -189,6 +190,7 @@
typedef void JIT_OPERATION (*V_JITOperation_EVwsJ)(ExecState*, VariableWatchpointSet*, EncodedJSValue);
typedef void JIT_OPERATION (*V_JITOperation_EZ)(ExecState*, int32_t);
typedef void JIT_OPERATION (*V_JITOperation_EZJ)(ExecState*, int32_t, EncodedJSValue);
+typedef void JIT_OPERATION (*V_JITOperation_EZJZZZ)(ExecState*, int32_t, EncodedJSValue, int32_t, int32_t, int32_t);
typedef void JIT_OPERATION (*V_JITOperation_EVm)(ExecState*, VM*);
typedef void JIT_OPERATION (*V_JITOperation_J)(EncodedJSValue);
typedef void JIT_OPERATION (*V_JITOperation_Z)(int32_t);
diff --git a/Source/JavaScriptCore/jit/SetupVarargsFrame.cpp b/Source/JavaScriptCore/jit/SetupVarargsFrame.cpp
index 1bdfc84..bb63596 100644
--- a/Source/JavaScriptCore/jit/SetupVarargsFrame.cpp
+++ b/Source/JavaScriptCore/jit/SetupVarargsFrame.cpp
@@ -101,11 +101,31 @@
void emitSetupVarargsFrameFastCase(CCallHelpers& jit, GPRReg numUsedSlotsGPR, GPRReg scratchGPR1, GPRReg scratchGPR2, GPRReg scratchGPR3, unsigned firstVarArgOffset, CCallHelpers::JumpList& slowCase)
{
- emitSetupVarargsFrameFastCase(
- jit, numUsedSlotsGPR, scratchGPR1, scratchGPR2, scratchGPR3,
- ValueRecovery::displacedInJSStack(VirtualRegister(JSStack::ArgumentCount), DataFormatInt32),
- VirtualRegister(CallFrame::argumentOffset(0)),
- firstVarArgOffset, slowCase);
+ emitSetupVarargsFrameFastCase(jit, numUsedSlotsGPR, scratchGPR1, scratchGPR2, scratchGPR3, nullptr, firstVarArgOffset, slowCase);
+}
+
+void emitSetupVarargsFrameFastCase(CCallHelpers& jit, GPRReg numUsedSlotsGPR, GPRReg scratchGPR1, GPRReg scratchGPR2, GPRReg scratchGPR3, InlineCallFrame* inlineCallFrame, unsigned firstVarArgOffset, CCallHelpers::JumpList& slowCase)
+{
+ ValueRecovery argumentCountRecovery;
+ VirtualRegister firstArgumentReg;
+ if (inlineCallFrame) {
+ if (inlineCallFrame->isVarargs()) {
+ argumentCountRecovery = ValueRecovery::displacedInJSStack(
+ inlineCallFrame->argumentCountRegister, DataFormatInt32);
+ } else {
+ argumentCountRecovery = ValueRecovery::constant(
+ jsNumber(inlineCallFrame->arguments.size()));
+ }
+ if (inlineCallFrame->arguments.size() > 1)
+ firstArgumentReg = inlineCallFrame->arguments[1].virtualRegister();
+ else
+ firstArgumentReg = VirtualRegister(0);
+ } else {
+ argumentCountRecovery = ValueRecovery::displacedInJSStack(
+ VirtualRegister(JSStack::ArgumentCount), DataFormatInt32);
+ firstArgumentReg = VirtualRegister(CallFrame::argumentOffset(0));
+ }
+ emitSetupVarargsFrameFastCase(jit, numUsedSlotsGPR, scratchGPR1, scratchGPR2, scratchGPR3, argumentCountRecovery, firstArgumentReg, firstVarArgOffset, slowCase);
}
} // namespace JSC
diff --git a/Source/JavaScriptCore/jit/SetupVarargsFrame.h b/Source/JavaScriptCore/jit/SetupVarargsFrame.h
index 4c04587..0e8933a 100644
--- a/Source/JavaScriptCore/jit/SetupVarargsFrame.h
+++ b/Source/JavaScriptCore/jit/SetupVarargsFrame.h
@@ -42,6 +42,9 @@
// Variant that assumes normal stack frame.
void emitSetupVarargsFrameFastCase(CCallHelpers&, GPRReg numUsedSlotsGPR, GPRReg scratchGPR1, GPRReg scratchGPR2, GPRReg scratchGPR3, unsigned firstVarArgOffset, CCallHelpers::JumpList& slowCase);
+// Variant for potentially inlined stack frames.
+void emitSetupVarargsFrameFastCase(CCallHelpers&, GPRReg numUsedSlotsGPR, GPRReg scratchGPR1, GPRReg scratchGPR2, GPRReg scratchGPR3, InlineCallFrame*, unsigned firstVarArgOffset, CCallHelpers::JumpList& slowCase);
+
} // namespace JSC
#endif // ENABLE(JIT)
diff --git a/Source/JavaScriptCore/runtime/Arguments.h b/Source/JavaScriptCore/runtime/Arguments.h
index a20f0fe..87cf721 100644
--- a/Source/JavaScriptCore/runtime/Arguments.h
+++ b/Source/JavaScriptCore/runtime/Arguments.h
@@ -56,7 +56,7 @@
static Arguments* create(VM& vm, CallFrame* callFrame, InlineCallFrame* inlineCallFrame, ArgumentsMode mode = NormalArgumentsCreationMode)
{
- Arguments* arguments = new (NotNull, allocateCell<Arguments>(vm.heap, offsetOfInlineRegisterArray() + registerArraySizeInBytes(inlineCallFrame))) Arguments(callFrame);
+ Arguments* arguments = new (NotNull, allocateCell<Arguments>(vm.heap, offsetOfInlineRegisterArray() + registerArraySizeInBytes(callFrame, inlineCallFrame))) Arguments(callFrame);
arguments->finishCreation(callFrame, inlineCallFrame, mode);
return arguments;
}
@@ -124,7 +124,15 @@
void createStrictModeCalleeIfNecessary(ExecState*);
static size_t registerArraySizeInBytes(CallFrame* callFrame) { return sizeof(WriteBarrier<Unknown>) * callFrame->argumentCount(); }
- static size_t registerArraySizeInBytes(InlineCallFrame* inlineCallFrame) { return sizeof(WriteBarrier<Unknown>) * (inlineCallFrame->arguments.size() - 1); }
+ static size_t registerArraySizeInBytes(CallFrame* callFrame, InlineCallFrame* inlineCallFrame)
+ {
+ unsigned argumentCountIncludingThis;
+ if (inlineCallFrame->argumentCountRegister.isValid())
+ argumentCountIncludingThis = callFrame->r(inlineCallFrame->argumentCountRegister.offset()).unboxedInt32();
+ else
+ argumentCountIncludingThis = inlineCallFrame->arguments.size();
+ return sizeof(WriteBarrier<Unknown>) * (argumentCountIncludingThis - 1);
+ }
bool isArgument(size_t);
bool trySetArgument(VM&, size_t argument, JSValue);
JSValue tryGetArgument(size_t argument);
@@ -340,11 +348,15 @@
m_overrodeCallee = false;
m_overrodeCaller = false;
m_isStrictMode = jsCast<FunctionExecutable*>(inlineCallFrame->executable.get())->isStrictMode();
-
+
+ if (inlineCallFrame->argumentCountRegister.isValid())
+ m_numArguments = callFrame->r(inlineCallFrame->argumentCountRegister.offset()).unboxedInt32();
+ else
+ m_numArguments = inlineCallFrame->arguments.size();
+ m_numArguments--;
+
switch (mode) {
case NormalArgumentsCreationMode: {
- m_numArguments = inlineCallFrame->arguments.size() - 1;
-
if (m_numArguments) {
int offsetForArgumentOne = inlineCallFrame->arguments[1].virtualRegister().offset();
m_registers = reinterpret_cast<WriteBarrierBase<Unknown>*>(callFrame->registers()) + offsetForArgumentOne - virtualRegisterForArgument(1).offset();
@@ -361,7 +373,6 @@
}
case ClonedArgumentsCreationMode: {
- m_numArguments = inlineCallFrame->arguments.size() - 1;
if (m_numArguments) {
int offsetForArgumentOne = inlineCallFrame->arguments[1].virtualRegister().offset();
m_registers = reinterpret_cast<WriteBarrierBase<Unknown>*>(callFrame->registers()) + offsetForArgumentOne - virtualRegisterForArgument(1).offset();
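The Arguments.h change above has one idea in two places: an inlined frame that was entered through a varargs call no longer has a statically known argument count, so the count must be read out of `argumentCountRegister` at runtime, falling back to the static `arguments.size()` otherwise. A sketch of that selection logic (illustrative names, not JSC internals):

```javascript
// If the inlined frame carries a valid argument-count register, the count is
// dynamic and lives in the frame's registers; otherwise it is the static size
// of the inlined arguments list. Either way, subtract one to drop |this|.
function numArgumentsExcludingThis(inlineCallFrame, registers) {
    var argumentCountIncludingThis;
    if (inlineCallFrame.argumentCountRegister !== undefined)
        argumentCountIncludingThis = registers[inlineCallFrame.argumentCountRegister];
    else
        argumentCountIncludingThis = inlineCallFrame.argumentsSize;
    return argumentCountIncludingThis - 1;
}
```

Both `registerArraySizeInBytes` and the `finishCreation` path compute this same quantity, which is why `m_numArguments` is now set before the `switch` rather than separately in each case.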
diff --git a/Source/JavaScriptCore/runtime/Options.h b/Source/JavaScriptCore/runtime/Options.h
index 3f71197..6509603 100644
--- a/Source/JavaScriptCore/runtime/Options.h
+++ b/Source/JavaScriptCore/runtime/Options.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -211,6 +211,8 @@
/* from super long compiles that take a lot of memory. */\
v(unsigned, maximumInliningCallerSize, 10000) \
\
+ v(unsigned, maximumVarargsForInlining, 100) \
+ \
v(bool, enablePolyvariantCallInlining, true) \
v(bool, enablePolyvariantByIdInlining, true) \
\
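The new `maximumVarargsForInlining` option is the profiling bound mentioned in the ChangeLog: a varargs call site is only eligible for inlining if profiling gives a sensible cap on its argument count. A hedged sketch of how such a gate might be consulted (`canInlineVarargsCall` is hypothetical, not a real JSC function):

```javascript
// A varargs call site qualifies for inlining only when its profiled maximum
// argument count stays within the configured bound (default 100 above).
var maximumVarargsForInlining = 100;
function canInlineVarargsCall(profiledMaxArgumentCount) {
    return profiledMaxArgumentCount <= maximumVarargsForInlining;
}
```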
diff --git a/Source/JavaScriptCore/tests/stress/construct-varargs-inline-smaller-Foo.js b/Source/JavaScriptCore/tests/stress/construct-varargs-inline-smaller-Foo.js
new file mode 100644
index 0000000..77b5b3d
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/construct-varargs-inline-smaller-Foo.js
@@ -0,0 +1,35 @@
+function Foo(a, b) {
+ var array = [];
+ for (var i = 0; i < arguments.length; ++i)
+ array.push(arguments[i]);
+ this.f = array;
+}
+
+function bar(array) {
+ return new Foo(...array);
+}
+
+noInline(bar);
+
+function checkEqual(a, b) {
+ if (a.length != b.length)
+ throw "Error: length mismatch: " + a + " versus " + b;
+ for (var i = a.length; i--;) {
+ if (a[i] != b[i])
+ throw "Error: mismatch at i = " + i + ": " + a + " versus " + b;
+ }
+}
+
+function test(array) {
+ var expected = array;
+ var actual = bar(array).f;
+ checkEqual(actual, expected);
+}
+
+for (var i = 0; i < 10000; ++i) {
+ var array = [];
+ for (var j = 0; j < i % 6; ++j)
+ array.push(j);
+ test(array);
+}
+
diff --git a/Source/JavaScriptCore/tests/stress/construct-varargs-inline.js b/Source/JavaScriptCore/tests/stress/construct-varargs-inline.js
new file mode 100644
index 0000000..ff4ccce
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/construct-varargs-inline.js
@@ -0,0 +1,39 @@
+function Foo(a, b) {
+ var array = [];
+ for (var i = 0; i < arguments.length; ++i)
+ array.push(arguments[i]);
+ this.f = {a:a, b:b, c:array};
+}
+
+function bar(array) {
+ return new Foo(...array);
+}
+
+noInline(bar);
+
+function checkEqual(a, b) {
+ if (a.a != b.a)
+ throw "Error: bad value of a: " + a.a + " versus " + b.a;
+ if (a.b != b.b)
+ throw "Error: bad value of b: " + a.b + " versus " + b.b;
+ if (a.c.length != b.c.length)
+ throw "Error: bad value of c, length mismatch: " + a.c + " versus " + b.c;
+ for (var i = a.c.length; i--;) {
+ if (a.c[i] != b.c[i])
+ throw "Error: bad value of c, mismatch at i = " + i + ": " + a.c + " versus " + b.c;
+ }
+}
+
+function test(array) {
+ var expected = {a:array[0], b:array[1], c:array};
+ var actual = bar(array).f;
+ checkEqual(actual, expected);
+}
+
+for (var i = 0; i < 10000; ++i) {
+ var array = [];
+ for (var j = 0; j < i % 6; ++j)
+ array.push(j);
+ test(array);
+}
+
diff --git a/Source/JavaScriptCore/tests/stress/construct-varargs-no-inline.js b/Source/JavaScriptCore/tests/stress/construct-varargs-no-inline.js
new file mode 100644
index 0000000..333646e
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/construct-varargs-no-inline.js
@@ -0,0 +1,41 @@
+function Foo(a, b) {
+ var array = [];
+ for (var i = 0; i < arguments.length; ++i)
+ array.push(arguments[i]);
+ this.f = {a:a, b:b, c:array};
+}
+
+noInline(Foo);
+
+function bar(array) {
+ return new Foo(...array);
+}
+
+noInline(bar);
+
+function checkEqual(a, b) {
+ if (a.a != b.a)
+ throw "Error: bad value of a: " + a.a + " versus " + b.a;
+ if (a.b != b.b)
+ throw "Error: bad value of b: " + a.b + " versus " + b.b;
+ if (a.c.length != b.c.length)
+ throw "Error: bad value of c, length mismatch: " + a.c + " versus " + b.c;
+ for (var i = a.c.length; i--;) {
+ if (a.c[i] != b.c[i])
+ throw "Error: bad value of c, mismatch at i = " + i + ": " + a.c + " versus " + b.c;
+ }
+}
+
+function test(array) {
+ var expected = {a:array[0], b:array[1], c:array};
+ var actual = bar(array).f;
+ checkEqual(actual, expected);
+}
+
+for (var i = 0; i < 10000; ++i) {
+ var array = [];
+ for (var j = 0; j < i % 6; ++j)
+ array.push(j);
+ test(array);
+}
+
diff --git a/Source/JavaScriptCore/tests/stress/get-argument-by-val-in-inlined-varargs-call-out-of-bounds.js b/Source/JavaScriptCore/tests/stress/get-argument-by-val-in-inlined-varargs-call-out-of-bounds.js
new file mode 100644
index 0000000..0f210ce
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/get-argument-by-val-in-inlined-varargs-call-out-of-bounds.js
@@ -0,0 +1,31 @@
+var gi;
+
+function foo() {
+ return arguments[gi];
+}
+
+function bar(array, i) {
+ gi = i;
+ return foo.apply(this, array);
+}
+
+noInline(bar);
+
+var bigArray = [];
+for (var i = 0; i < 50; ++i)
+ bigArray.push(42);
+
+for (var i = 0; i < 10000; ++i) {
+ var mi = i % 50;
+ var result = bar(bigArray, mi);
+ if (result !== 42)
+ throw "Bad result in first loop: " + result + "; expected: " + 42;
+}
+
+for (var i = 0; i < 10000; ++i) {
+ var mi = i % 100;
+ var result = bar([42], mi);
+ var expected = mi ? void 0 : 42;
+ if (result !== expected)
+ throw "Bad result in second loop: " + result + "; expected: " + expected;
+}
diff --git a/Source/JavaScriptCore/tests/stress/get-argument-by-val-safe-in-inlined-varargs-call-out-of-bounds.js b/Source/JavaScriptCore/tests/stress/get-argument-by-val-safe-in-inlined-varargs-call-out-of-bounds.js
new file mode 100644
index 0000000..9fb3c1c
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/get-argument-by-val-safe-in-inlined-varargs-call-out-of-bounds.js
@@ -0,0 +1,35 @@
+var gi;
+
+function foo() {
+ if (!effectful42())
+ arguments = "hello";
+ return arguments[gi];
+}
+
+function bar(array, i) {
+ gi = i;
+ return foo.apply(this, array);
+}
+
+noInline(bar);
+
+var bigArray = [];
+for (var i = 0; i < 50; ++i)
+ bigArray.push(42);
+
+for (var i = 0; i < 10000; ++i) {
+ var mi = i % 50;
+ var result = bar(bigArray, mi);
+ if (result !== 42)
+ throw "Bad result in first loop: " + result + "; expected: " + 42;
+}
+
+
+for (var i = 0; i < 10000; ++i) {
+ var mi = i % 100;
+ var result = bar([42], mi);
+ var expected = mi ? void 0 : 42;
+ if (result !== expected)
+ throw "Bad result in second loop: " + result + "; expected: " + expected;
+}
+
diff --git a/Source/JavaScriptCore/tests/stress/get-my-argument-by-val-creates-arguments.js b/Source/JavaScriptCore/tests/stress/get-my-argument-by-val-creates-arguments.js
new file mode 100644
index 0000000..ec8c0cf
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/get-my-argument-by-val-creates-arguments.js
@@ -0,0 +1,42 @@
+function blah(args) {
+ var array = [];
+ for (var i = 0; i < args.length; ++i)
+ array.push(args[i]);
+ return array;
+}
+
+function foo() {
+ // Force creation of arguments by doing out-of-bounds access.
+ var tmp = arguments[42];
+
+ // Use the created arguments object.
+ return blah(arguments);
+}
+
+function bar(array) {
+ return foo.apply(this, array);
+}
+
+noInline(blah);
+noInline(bar);
+
+function checkEqual(a, b) {
+ if (a.length != b.length)
+ throw "Error: length mismatch: " + a + " versus " + b;
+ for (var i = a.length; i--;) {
+ if (a[i] != b[i])
+ throw "Error: mismatch at i = " + i + ": " + a + " versus " + b;
+ }
+}
+
+function test(array) {
+ var actual = bar(array);
+ checkEqual(actual, array);
+}
+
+for (var i = 0; i < 10000; ++i) {
+ var array = [];
+ for (var j = 0; j < i % 6; ++j)
+ array.push(j);
+ test(array);
+}
diff --git a/Source/JavaScriptCore/tests/stress/load-varargs-then-inlined-call-exit-in-foo.js b/Source/JavaScriptCore/tests/stress/load-varargs-then-inlined-call-exit-in-foo.js
new file mode 100644
index 0000000..ecb227c
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/load-varargs-then-inlined-call-exit-in-foo.js
@@ -0,0 +1,46 @@
+function foo(a, b) {
+ var array = [];
+ for (var i = 0; i < arguments.length; ++i)
+ array.push(arguments[i] + 1);
+ return {a:a, b:b, c:array};
+}
+
+function bar(array) {
+ return foo.apply(this, array);
+}
+
+noInline(bar);
+
+function checkEqual(a, b) {
+ if (a.a != b.a)
+ throw "Error: bad value of a: " + a.a + " versus " + b.a;
+ if (a.b != b.b)
+ throw "Error: bad value of b: " + a.b + " versus " + b.b;
+ if (a.c.length != b.c.length)
+ throw "Error: bad value of c, length mismatch: " + a.c + " versus " + b.c;
+ for (var i = a.c.length; i--;) {
+ if (a.c[i] != b.c[i])
+ throw "Error: bad value of c, mismatch at i = " + i + ": " + a.c + " versus " + b.c;
+ }
+}
+
+function test(array) {
+ var expected = {a:array[0], b:array[1], c:array.map(function(value) { return value + 1 })};
+ var actual = bar(array);
+ checkEqual(actual, expected);
+}
+
+// This is pretty dumb. We need to first make sure that the VM is prepared for double arrays being
+// created.
+var array = [];
+array.push(42);
+array.push(42.5);
+
+for (var i = 0; i < 10000; ++i) {
+ var array = [];
+ for (var j = 0; j < i % 6; ++j)
+ array.push(j);
+ test(array);
+}
+
+test([1.5, 2.5, 3.5]);
diff --git a/Source/JavaScriptCore/tests/stress/load-varargs-then-inlined-call-inlined.js b/Source/JavaScriptCore/tests/stress/load-varargs-then-inlined-call-inlined.js
new file mode 100644
index 0000000..56d0526
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/load-varargs-then-inlined-call-inlined.js
@@ -0,0 +1,43 @@
+function foo(a, b) {
+ var array = [];
+ for (var i = 0; i < arguments.length; ++i)
+ array.push(arguments[i]);
+ return {a:a, b:b, c:array};
+}
+
+function bar(array) {
+ return foo.apply(this, array);
+}
+
+function baz(array) {
+ return bar(array);
+}
+
+noInline(baz);
+
+function checkEqual(a, b) {
+ if (a.a != b.a)
+ throw "Error: bad value of a: " + a.a + " versus " + b.a;
+ if (a.b != b.b)
+ throw "Error: bad value of b: " + a.b + " versus " + b.b;
+ if (a.c.length != b.c.length)
+ throw "Error: bad value of c, length mismatch: " + a.c + " versus " + b.c;
+ for (var i = a.c.length; i--;) {
+ if (a.c[i] != b.c[i])
+ throw "Error: bad value of c, mismatch at i = " + i + ": " + a.c + " versus " + b.c;
+ }
+}
+
+function test(array) {
+ var expected = {a:array[0], b:array[1], c:array};
+ var actual = baz(array);
+ checkEqual(actual, expected);
+}
+
+for (var i = 0; i < 10000; ++i) {
+ var array = [];
+ for (var j = 0; j < i % 6; ++j)
+ array.push(j);
+ test(array);
+}
+
diff --git a/Source/JavaScriptCore/tests/stress/load-varargs-then-inlined-call.js b/Source/JavaScriptCore/tests/stress/load-varargs-then-inlined-call.js
new file mode 100644
index 0000000..45e3ea16
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/load-varargs-then-inlined-call.js
@@ -0,0 +1,39 @@
+function foo(a, b) {
+ var array = [];
+ for (var i = 0; i < arguments.length; ++i)
+ array.push(arguments[i]);
+ return {a:a, b:b, c:array};
+}
+
+function bar(array) {
+ return foo.apply(this, array);
+}
+
+noInline(bar);
+
+function checkEqual(a, b) {
+ if (a.a != b.a)
+ throw "Error: bad value of a: " + a.a + " versus " + b.a;
+ if (a.b != b.b)
+ throw "Error: bad value of b: " + a.b + " versus " + b.b;
+ if (a.c.length != b.c.length)
+ throw "Error: bad value of c, length mismatch: " + a.c + " versus " + b.c;
+ for (var i = a.c.length; i--;) {
+ if (a.c[i] != b.c[i])
+ throw "Error: bad value of c, mismatch at i = " + i + ": " + a.c + " versus " + b.c;
+ }
+}
+
+function test(array) {
+ var expected = {a:array[0], b:array[1], c:array};
+ var actual = bar(array);
+ checkEqual(actual, expected);
+}
+
+for (var i = 0; i < 10000; ++i) {
+ var array = [];
+ for (var j = 0; j < i % 6; ++j)
+ array.push(j);
+ test(array);
+}
+