Merge r168635, r168780, r169005, r169014, and r169143 from ftlopt.
2014-05-20 Filip Pizlo <fpizlo@apple.com>
[ftlopt] DFG bytecode parser should turn GetById with nothing but a Getter stub as stuff+handleCall, and handleCall should be allowed to inline if it wants to
https://bugs.webkit.org/show_bug.cgi?id=133105
Reviewed by Michael Saboff.
Source/JavaScriptCore:
- GetByIdStatus now knows about getters and can report intelligent things about them.
As is usually the case with how we do these things, GetByIdStatus knows more about
getters than the DFG can actually handle: it'll report details about polymorphic
getter calls even though the DFG won't be able to handle those. This is fine; the DFG
will see those statuses and bail to a generic slow path.
- The DFG::ByteCodeParser now knows how to set up and do handleCall() for a getter call.
This can, and usually does, result in inlining of getters!
- CodeOrigin and OSR exit know about inlined getter calls. When you OSR out of an
inlined getter, we set the return PC to a getter return thunk that fixes up the stack.
We use the usual offset-true-return-PC trick, where OSR exit places the true return PC
of the getter's caller as a phony argument that only the thunk knows how to find.
- Removed a bunch of dead monomorphic chain support from StructureStubInfo.
- A large chunk of this change is dragging GetGetterSetterByOffset, GetGetter, and
GetSetter through the DFG and FTL. GetGetterSetterByOffset is like GetByOffset except
that we know that we're returning a GetterSetter cell. GetGetter and GetSetter extract
the getter, or setter, from the GetterSetter.
This is a ~2.5x speed-up on the getter microbenchmarks that we already had. So far none
of the "real" benchmarks exercise getters enough for this to matter. But I noticed that
some of the variants of the Richards benchmark in other languages - for example
Wolczko's Java translation of a C++ translation of Deutsch's Smalltalk version - use
getters and setters extensively. So, I created a getter/setter JavaScript version of
Richards and put it in regress/script-tests/getter-richards.js. That sees about a 2.4x
speed-up from this patch, which is very reassuring.
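As an illustrative sketch (not code from this patch; all names here are hypothetical), the accessor-heavy pattern that getter inlining targets looks like this — a hot loop doing a GetById whose only cached access is a getter stub, which the DFG can now parse as a load plus handleCall() and usually inline:

```javascript
// Hypothetical sketch of a getter-richards-style workload. Before this
// patch, each ".priority" access below went through a getter call stub;
// after it, the DFG can inline the getter body at the property-load site.
function Task(priority) {
    this._priority = priority;
}
Object.defineProperty(Task.prototype, "priority", {
    get: function() { return this._priority; } // candidate for inlining
});

function sumPriorities(tasks) {
    var sum = 0;
    for (var i = 0; i < tasks.length; ++i)
        sum += tasks[i].priority; // GetById backed only by a getter stub
    return sum;
}

var tasks = [new Task(1), new Task(2), new Task(3)];
var result = sumPriorities(tasks);
```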
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::printGetByIdCacheStatus):
(JSC::CodeBlock::findStubInfo):
* bytecode/CodeBlock.h:
* bytecode/CodeOrigin.cpp:
(WTF::printInternal):
* bytecode/CodeOrigin.h:
(JSC::InlineCallFrame::specializationKindFor):
* bytecode/GetByIdStatus.cpp:
(JSC::GetByIdStatus::computeFor):
(JSC::GetByIdStatus::computeForStubInfo):
(JSC::GetByIdStatus::makesCalls):
(JSC::GetByIdStatus::computeForChain): Deleted.
* bytecode/GetByIdStatus.h:
(JSC::GetByIdStatus::makesCalls): Deleted.
* bytecode/GetByIdVariant.cpp:
(JSC::GetByIdVariant::~GetByIdVariant):
(JSC::GetByIdVariant::GetByIdVariant):
(JSC::GetByIdVariant::operator=):
(JSC::GetByIdVariant::dumpInContext):
* bytecode/GetByIdVariant.h:
(JSC::GetByIdVariant::GetByIdVariant):
(JSC::GetByIdVariant::callLinkStatus):
* bytecode/PolymorphicGetByIdList.cpp:
(JSC::GetByIdAccess::fromStructureStubInfo):
(JSC::PolymorphicGetByIdList::from):
* bytecode/SpeculatedType.h:
* bytecode/StructureStubInfo.cpp:
(JSC::StructureStubInfo::deref):
(JSC::StructureStubInfo::visitWeakReferences):
* bytecode/StructureStubInfo.h:
(JSC::isGetByIdAccess):
(JSC::StructureStubInfo::initGetByIdChain): Deleted.
* dfg/DFGAbstractHeap.h:
* dfg/DFGAbstractInterpreterInlines.h:
(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::addCall):
(JSC::DFG::ByteCodeParser::handleCall):
(JSC::DFG::ByteCodeParser::handleInlining):
(JSC::DFG::ByteCodeParser::handleGetByOffset):
(JSC::DFG::ByteCodeParser::handleGetById):
(JSC::DFG::ByteCodeParser::InlineStackEntry::InlineStackEntry):
(JSC::DFG::ByteCodeParser::parse):
* dfg/DFGCSEPhase.cpp:
(JSC::DFG::CSEPhase::getGetterSetterByOffsetLoadElimination):
(JSC::DFG::CSEPhase::getInternalFieldLoadElimination):
(JSC::DFG::CSEPhase::performNodeCSE):
(JSC::DFG::CSEPhase::getTypedArrayByteOffsetLoadElimination): Deleted.
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::fixupNode):
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::linkFunction):
* dfg/DFGNode.h:
(JSC::DFG::Node::hasStorageAccessData):
* dfg/DFGNodeType.h:
* dfg/DFGOSRExitCompilerCommon.cpp:
(JSC::DFG::reifyInlinedCallFrames):
* dfg/DFGPredictionPropagationPhase.cpp:
(JSC::DFG::PredictionPropagationPhase::propagate):
* dfg/DFGSafeToExecute.h:
(JSC::DFG::safeToExecute):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* ftl/FTLAbstractHeapRepository.cpp:
* ftl/FTLAbstractHeapRepository.h:
* ftl/FTLCapabilities.cpp:
(JSC::FTL::canCompile):
* ftl/FTLLink.cpp:
(JSC::FTL::link):
* ftl/FTLLowerDFGToLLVM.cpp:
(JSC::FTL::LowerDFGToLLVM::compileNode):
(JSC::FTL::LowerDFGToLLVM::compileGetGetter):
(JSC::FTL::LowerDFGToLLVM::compileGetSetter):
* jit/AccessorCallJITStubRoutine.h:
* jit/JIT.cpp:
(JSC::JIT::assertStackPointerOffset):
(JSC::JIT::privateCompile):
* jit/JIT.h:
* jit/JITPropertyAccess.cpp:
(JSC::JIT::emit_op_get_by_id):
* jit/ThunkGenerators.cpp:
(JSC::arityFixupGenerator):
(JSC::baselineGetterReturnThunkGenerator):
(JSC::baselineSetterReturnThunkGenerator):
(JSC::arityFixup): Deleted.
* jit/ThunkGenerators.h:
* runtime/CommonSlowPaths.cpp:
(JSC::setupArityCheckData):
* tests/stress/exit-from-getter.js: Added.
* tests/stress/poly-chain-getter.js: Added.
(Cons):
(foo):
(test):
* tests/stress/poly-chain-then-getter.js: Added.
(Cons1):
(Cons2):
(foo):
(test):
* tests/stress/poly-getter-combo.js: Added.
(Cons1):
(Cons2):
(foo):
(test):
(.test):
* tests/stress/poly-getter-then-chain.js: Added.
(Cons1):
(Cons2):
(foo):
(test):
* tests/stress/poly-getter-then-self.js: Added.
(foo):
(test):
(.test):
* tests/stress/poly-self-getter.js: Added.
(foo):
(test):
(getter):
* tests/stress/poly-self-then-getter.js: Added.
(foo):
(test):
* tests/stress/weird-getter-counter.js: Added.
(foo):
(test):
2014-05-17 Filip Pizlo <fpizlo@apple.com>
[ftlopt] Factor out how CallLinkStatus uses exit site data
https://bugs.webkit.org/show_bug.cgi?id=133042
Reviewed by Anders Carlsson.
This makes it easier to use CallLinkStatus from clients that are calling into it after
already holding some of the relevant locks. This is necessary because we use a "one lock
at a time" policy for CodeBlock locks: if you hold one then you're not allowed to acquire
any of the others. So, any code that needs to lock multiple CodeBlock locks needs to sort
of lock one, do some stuff, release it, then lock another, and then do more stuff. The
exit site data corresponds to the stuff you do while holding the baseline lock, while the
CallLinkInfo method corresponds to the stuff you do while holding the CallLinkInfo owner's
lock.
In other words, computeExitSiteData() can be called under the baseline CodeBlock's lock, and its result passed later to the CallLinkInfo-based computeFor() overload under the other lock.
* bytecode/CallLinkStatus.cpp:
(JSC::CallLinkStatus::computeFor):
(JSC::CallLinkStatus::computeExitSiteData):
(JSC::CallLinkStatus::computeDFGStatuses):
* bytecode/CallLinkStatus.h:
(JSC::CallLinkStatus::ExitSiteData::ExitSiteData):
2014-05-17 Filip Pizlo <fpizlo@apple.com>
[ftlopt] InlineCallFrame::isCall should be an enumeration
https://bugs.webkit.org/show_bug.cgi?id=133034
Reviewed by Sam Weinig.
Once we start inlining getters and setters, we'll want InlineCallFrame to be able to tell
us that the inlined call was a getter call or a setter call. Initially I thought I would
have a new field called "kind" that would have components NormalCall, GetterCall, and
SetterCall. But that doesn't make sense, because for GetterCall and SetterCall, isCall
would have to be true. Hence, it makes more sense to have one enumeration that is Call,
Construct, GetterCall, or SetterCall. This patch is a first step towards this.
It's interesting that isClosureCall should probably still be separate, since getter and
setter inlining could inline closure calls.
* bytecode/CodeBlock.h:
(JSC::baselineCodeBlockForInlineCallFrame):
* bytecode/CodeOrigin.cpp:
(JSC::InlineCallFrame::dumpInContext):
(WTF::printInternal):
* bytecode/CodeOrigin.h:
(JSC::InlineCallFrame::kindFor):
(JSC::InlineCallFrame::specializationKindFor):
(JSC::InlineCallFrame::InlineCallFrame):
(JSC::InlineCallFrame::specializationKind):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::InlineStackEntry::InlineStackEntry):
* dfg/DFGOSRExitPreparation.cpp:
(JSC::DFG::prepareCodeOriginForOSRExit):
* runtime/Arguments.h:
(JSC::Arguments::finishCreation):
2014-05-13 Filip Pizlo <fpizlo@apple.com>
[ftlopt] DFG should not exit due to inadequate profiling coverage when it can trivially fill in the profiling coverage due to variable constant inference and the better prediction modeling of typed array GetByVals
https://bugs.webkit.org/show_bug.cgi?id=132896
Reviewed by Geoffrey Garen.
This is a slight win on SunSpider, but it's meant to ultimately help us on
embenchen/lua. We already do well on that benchmark but our convergence is slower than
I'd like.
* dfg/DFGArrayMode.cpp:
(JSC::DFG::ArrayMode::refine):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::parseBlock):
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::fixupNode):
* dfg/DFGPredictionPropagationPhase.cpp:
(JSC::DFG::PredictionPropagationPhase::propagate):
2014-05-08 Filip Pizlo <fpizlo@apple.com>
jsSubstring() should be lazy
https://bugs.webkit.org/show_bug.cgi?id=132556
Reviewed by Andreas Kling.
jsSubstring() is now lazy by using a special rope that is a substring instead of a
concatenation. To make this patch super simple, we require that a substring's base is
never a rope. Hence, when resolving a rope, we either go down a non-recursive substring
path, or we go down a concatenation path which may see exactly one level of substrings in
its fibers.
This is up to a 50% speed-up on microbenchmarks and a 10% speed-up on Octane/regexp.
Relanding this with assertion fixes.
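For illustration (assumed user-visible behavior, not code from the patch), the operation that now builds a lazy substring rope instead of eagerly copying characters:

```javascript
// Hypothetical sketch: with lazy jsSubstring(), taking a substring just
// records (base, offset, length) in a substring rope; the base is never
// itself a rope, and characters are copied out only on first use.
var base = "the quick brown fox jumps over the lazy dog";
var sub = base.substring(4, 9); // lazy: no character copy yet
var used = sub + "!";           // first real use resolves the rope
```

The constraint that a substring's base is never a rope is what keeps rope resolution non-recursive for the substring path.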
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::specializedSweep):
* runtime/JSString.cpp:
(JSC::JSRopeString::visitFibers):
(JSC::JSRopeString::resolveRopeInternal8):
(JSC::JSRopeString::resolveRopeInternal16):
(JSC::JSRopeString::clearFibers):
(JSC::JSRopeString::resolveRope):
(JSC::JSRopeString::resolveRopeSlowCase8):
(JSC::JSRopeString::resolveRopeSlowCase):
* runtime/JSString.h:
(JSC::JSRopeString::finishCreation):
(JSC::JSRopeString::append):
(JSC::JSRopeString::create):
(JSC::JSRopeString::offsetOfFibers):
(JSC::JSRopeString::fiber):
(JSC::JSRopeString::substringBase):
(JSC::JSRopeString::substringOffset):
(JSC::JSRopeString::notSubstringSentinel):
(JSC::JSRopeString::substringSentinel):
(JSC::JSRopeString::isSubstring):
(JSC::JSRopeString::setIsSubstring):
(JSC::jsSubstring):
* runtime/RegExpMatchesArray.cpp:
(JSC::RegExpMatchesArray::reifyAllProperties):
* runtime/StringPrototype.cpp:
(JSC::stringProtoFuncSubstring):
Source/WTF:
* wtf/Bag.h:
(WTF::Bag::iterator::operator!=):
LayoutTests:
* js/regress/getter-no-activation-expected.txt: Added.
* js/regress/getter-no-activation.html: Added.
* js/regress/script-tests/getter-no-activation.js: Added.
* js/regress/getter-richards-expected.txt: Added.
* js/regress/getter-richards.html: Added.
* js/regress/script-tests/getter-richards.js: Added.
2014-05-08 Filip Pizlo <fpizlo@apple.com>
jsSubstring() should be lazy
https://bugs.webkit.org/show_bug.cgi?id=132556
Reviewed by Andreas Kling.
These tests get 35-50% faster.
* js/regress/script-tests/substring-concat-weird.js: Added.
(foo):
* js/regress/script-tests/substring-concat.js: Added.
(foo):
* js/regress/script-tests/substring.js: Added.
(foo):
* js/regress/substring-concat-expected.txt: Added.
* js/regress/substring-concat-weird-expected.txt: Added.
* js/regress/substring-concat-weird.html: Added.
* js/regress/substring-concat.html: Added.
* js/regress/substring-expected.txt: Added.
* js/regress/substring.html: Added.
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@171362 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index 762a2ff..b11ea59 100644
--- a/Source/JavaScriptCore/ChangeLog
+++ b/Source/JavaScriptCore/ChangeLog
@@ -1,3 +1,294 @@
+2014-07-22 Filip Pizlo <fpizlo@apple.com>
+
+ Merge r168635, r168780, r169005, r169014, and r169143 from ftlopt.
+
+ 2014-05-20 Filip Pizlo <fpizlo@apple.com>
+
+ [ftlopt] DFG bytecode parser should turn GetById with nothing but a Getter stub as stuff+handleCall, and handleCall should be allowed to inline if it wants to
+ https://bugs.webkit.org/show_bug.cgi?id=133105
+
+ Reviewed by Michael Saboff.
+
+ - GetByIdStatus now knows about getters and can report intelligent things about them.
+ As is usually the case with how we do these things, GetByIdStatus knows more about
+ getters than the DFG can actually handle: it'll report details about polymorphic
+ getter calls even though the DFG won't be able to handle those. This is fine; the DFG
+ will see those statuses and bail to a generic slow path.
+
+ - The DFG::ByteCodeParser now knows how to set up and do handleCall() for a getter call.
+ This can, and usually does, result in inlining of getters!
+
+ - CodeOrigin and OSR exit know about inlined getter calls. When you OSR out of an
+ inlined getter, we set the return PC to a getter return thunk that fixes up the stack.
+ We use the usual offset-true-return-PC trick, where OSR exit places the true return PC
+ of the getter's caller as a phony argument that only the thunk knows how to find.
+
+ - Removed a bunch of dead monomorphic chain support from StructureStubInfo.
+
+ - A large chunk of this change is dragging GetGetterSetterByOffset, GetGetter, and
+ GetSetter through the DFG and FTL. GetGetterSetterByOffset is like GetByOffset except
+ that we know that we're returning a GetterSetter cell. GetGetter and GetSetter extract
+ the getter, or setter, from the GetterSetter.
+
+ This is a ~2.5x speed-up on the getter microbenchmarks that we already had. So far none
+ of the "real" benchmarks exercise getters enough for this to matter. But I noticed that
+ some of the variants of the Richards benchmark in other languages - for example
+ Wolczko's Java translation of a C++ translation of Deutsch's Smalltalk version - use
+ getters and setters extensively. So, I created a getter/setter JavaScript version of
+ Richards and put it in regress/script-tests/getter-richards.js. That sees about a 2.4x
+ speed-up from this patch, which is very reassuring.
+
+ * bytecode/CodeBlock.cpp:
+ (JSC::CodeBlock::printGetByIdCacheStatus):
+ (JSC::CodeBlock::findStubInfo):
+ * bytecode/CodeBlock.h:
+ * bytecode/CodeOrigin.cpp:
+ (WTF::printInternal):
+ * bytecode/CodeOrigin.h:
+ (JSC::InlineCallFrame::specializationKindFor):
+ * bytecode/GetByIdStatus.cpp:
+ (JSC::GetByIdStatus::computeFor):
+ (JSC::GetByIdStatus::computeForStubInfo):
+ (JSC::GetByIdStatus::makesCalls):
+ (JSC::GetByIdStatus::computeForChain): Deleted.
+ * bytecode/GetByIdStatus.h:
+ (JSC::GetByIdStatus::makesCalls): Deleted.
+ * bytecode/GetByIdVariant.cpp:
+ (JSC::GetByIdVariant::~GetByIdVariant):
+ (JSC::GetByIdVariant::GetByIdVariant):
+ (JSC::GetByIdVariant::operator=):
+ (JSC::GetByIdVariant::dumpInContext):
+ * bytecode/GetByIdVariant.h:
+ (JSC::GetByIdVariant::GetByIdVariant):
+ (JSC::GetByIdVariant::callLinkStatus):
+ * bytecode/PolymorphicGetByIdList.cpp:
+ (JSC::GetByIdAccess::fromStructureStubInfo):
+ (JSC::PolymorphicGetByIdList::from):
+ * bytecode/SpeculatedType.h:
+ * bytecode/StructureStubInfo.cpp:
+ (JSC::StructureStubInfo::deref):
+ (JSC::StructureStubInfo::visitWeakReferences):
+ * bytecode/StructureStubInfo.h:
+ (JSC::isGetByIdAccess):
+ (JSC::StructureStubInfo::initGetByIdChain): Deleted.
+ * dfg/DFGAbstractHeap.h:
+ * dfg/DFGAbstractInterpreterInlines.h:
+ (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
+ * dfg/DFGByteCodeParser.cpp:
+ (JSC::DFG::ByteCodeParser::addCall):
+ (JSC::DFG::ByteCodeParser::handleCall):
+ (JSC::DFG::ByteCodeParser::handleInlining):
+ (JSC::DFG::ByteCodeParser::handleGetByOffset):
+ (JSC::DFG::ByteCodeParser::handleGetById):
+ (JSC::DFG::ByteCodeParser::InlineStackEntry::InlineStackEntry):
+ (JSC::DFG::ByteCodeParser::parse):
+ * dfg/DFGCSEPhase.cpp:
+ (JSC::DFG::CSEPhase::getGetterSetterByOffsetLoadElimination):
+ (JSC::DFG::CSEPhase::getInternalFieldLoadElimination):
+ (JSC::DFG::CSEPhase::performNodeCSE):
+ (JSC::DFG::CSEPhase::getTypedArrayByteOffsetLoadElimination): Deleted.
+ * dfg/DFGClobberize.h:
+ (JSC::DFG::clobberize):
+ * dfg/DFGFixupPhase.cpp:
+ (JSC::DFG::FixupPhase::fixupNode):
+ * dfg/DFGJITCompiler.cpp:
+ (JSC::DFG::JITCompiler::linkFunction):
+ * dfg/DFGNode.h:
+ (JSC::DFG::Node::hasStorageAccessData):
+ * dfg/DFGNodeType.h:
+ * dfg/DFGOSRExitCompilerCommon.cpp:
+ (JSC::DFG::reifyInlinedCallFrames):
+ * dfg/DFGPredictionPropagationPhase.cpp:
+ (JSC::DFG::PredictionPropagationPhase::propagate):
+ * dfg/DFGSafeToExecute.h:
+ (JSC::DFG::safeToExecute):
+ * dfg/DFGSpeculativeJIT32_64.cpp:
+ (JSC::DFG::SpeculativeJIT::compile):
+ * dfg/DFGSpeculativeJIT64.cpp:
+ (JSC::DFG::SpeculativeJIT::compile):
+ * ftl/FTLAbstractHeapRepository.cpp:
+ * ftl/FTLAbstractHeapRepository.h:
+ * ftl/FTLCapabilities.cpp:
+ (JSC::FTL::canCompile):
+ * ftl/FTLLink.cpp:
+ (JSC::FTL::link):
+ * ftl/FTLLowerDFGToLLVM.cpp:
+ (JSC::FTL::LowerDFGToLLVM::compileNode):
+ (JSC::FTL::LowerDFGToLLVM::compileGetGetter):
+ (JSC::FTL::LowerDFGToLLVM::compileGetSetter):
+ * jit/AccessorCallJITStubRoutine.h:
+ * jit/JIT.cpp:
+ (JSC::JIT::assertStackPointerOffset):
+ (JSC::JIT::privateCompile):
+ * jit/JIT.h:
+ * jit/JITPropertyAccess.cpp:
+ (JSC::JIT::emit_op_get_by_id):
+ * jit/ThunkGenerators.cpp:
+ (JSC::arityFixupGenerator):
+ (JSC::baselineGetterReturnThunkGenerator):
+ (JSC::baselineSetterReturnThunkGenerator):
+ (JSC::arityFixup): Deleted.
+ * jit/ThunkGenerators.h:
+ * runtime/CommonSlowPaths.cpp:
+ (JSC::setupArityCheckData):
+ * tests/stress/exit-from-getter.js: Added.
+ * tests/stress/poly-chain-getter.js: Added.
+ (Cons):
+ (foo):
+ (test):
+ * tests/stress/poly-chain-then-getter.js: Added.
+ (Cons1):
+ (Cons2):
+ (foo):
+ (test):
+ * tests/stress/poly-getter-combo.js: Added.
+ (Cons1):
+ (Cons2):
+ (foo):
+ (test):
+ (.test):
+ * tests/stress/poly-getter-then-chain.js: Added.
+ (Cons1):
+ (Cons2):
+ (foo):
+ (test):
+ * tests/stress/poly-getter-then-self.js: Added.
+ (foo):
+ (test):
+ (.test):
+ * tests/stress/poly-self-getter.js: Added.
+ (foo):
+ (test):
+ (getter):
+ * tests/stress/poly-self-then-getter.js: Added.
+ (foo):
+ (test):
+ * tests/stress/weird-getter-counter.js: Added.
+ (foo):
+ (test):
+
+ 2014-05-17 Filip Pizlo <fpizlo@apple.com>
+
+ [ftlopt] Factor out how CallLinkStatus uses exit site data
+ https://bugs.webkit.org/show_bug.cgi?id=133042
+
+ Reviewed by Anders Carlsson.
+
+ This makes it easier to use CallLinkStatus from clients that are calling into it after
+ already holding some of the relevant locks. This is necessary because we use a "one lock
+ at a time" policy for CodeBlock locks: if you hold one then you're not allowed to acquire
+ any of the others. So, any code that needs to lock multiple CodeBlock locks needs to sort
+ of lock one, do some stuff, release it, then lock another, and then do more stuff. The
+ exit site data corresponds to the stuff you do while holding the baseline lock, while the
+ CallLinkInfo method corresponds to the stuff you do while holding the CallLinkInfo owner's
+ lock.
+
+ * bytecode/CallLinkStatus.cpp:
+ (JSC::CallLinkStatus::computeFor):
+ (JSC::CallLinkStatus::computeExitSiteData):
+ (JSC::CallLinkStatus::computeDFGStatuses):
+ * bytecode/CallLinkStatus.h:
+ (JSC::CallLinkStatus::ExitSiteData::ExitSiteData):
+
+ 2014-05-17 Filip Pizlo <fpizlo@apple.com>
+
+ [ftlopt] InlineCallFrame::isCall should be an enumeration
+ https://bugs.webkit.org/show_bug.cgi?id=133034
+
+ Reviewed by Sam Weinig.
+
+ Once we start inlining getters and setters, we'll want InlineCallFrame to be able to tell
+ us that the inlined call was a getter call or a setter call. Initially I thought I would
+ have a new field called "kind" that would have components NormalCall, GetterCall, and
+ SetterCall. But that doesn't make sense, because for GetterCall and SetterCall, isCall
+ would have to be true. Hence, it makes more sense to have one enumeration that is Call,
+ Construct, GetterCall, or SetterCall. This patch is a first step towards this.
+
+ It's interesting that isClosureCall should probably still be separate, since getter and
+ setter inlining could inline closure calls.
+
+ * bytecode/CodeBlock.h:
+ (JSC::baselineCodeBlockForInlineCallFrame):
+ * bytecode/CodeOrigin.cpp:
+ (JSC::InlineCallFrame::dumpInContext):
+ (WTF::printInternal):
+ * bytecode/CodeOrigin.h:
+ (JSC::InlineCallFrame::kindFor):
+ (JSC::InlineCallFrame::specializationKindFor):
+ (JSC::InlineCallFrame::InlineCallFrame):
+ (JSC::InlineCallFrame::specializationKind):
+ * dfg/DFGByteCodeParser.cpp:
+ (JSC::DFG::ByteCodeParser::InlineStackEntry::InlineStackEntry):
+ * dfg/DFGOSRExitPreparation.cpp:
+ (JSC::DFG::prepareCodeOriginForOSRExit):
+ * runtime/Arguments.h:
+ (JSC::Arguments::finishCreation):
+
+ 2014-05-13 Filip Pizlo <fpizlo@apple.com>
+
+ [ftlopt] DFG should not exit due to inadequate profiling coverage when it can trivially fill in the profiling coverage due to variable constant inference and the better prediction modeling of typed array GetByVals
+ https://bugs.webkit.org/show_bug.cgi?id=132896
+
+ Reviewed by Geoffrey Garen.
+
+ This is a slight win on SunSpider, but it's meant to ultimately help us on
+ embenchen/lua. We already do well on that benchmark but our convergence is slower than
+ I'd like.
+
+ * dfg/DFGArrayMode.cpp:
+ (JSC::DFG::ArrayMode::refine):
+ * dfg/DFGByteCodeParser.cpp:
+ (JSC::DFG::ByteCodeParser::parseBlock):
+ * dfg/DFGFixupPhase.cpp:
+ (JSC::DFG::FixupPhase::fixupNode):
+ * dfg/DFGPredictionPropagationPhase.cpp:
+ (JSC::DFG::PredictionPropagationPhase::propagate):
+
+ 2014-05-08 Filip Pizlo <fpizlo@apple.com>
+
+ jsSubstring() should be lazy
+ https://bugs.webkit.org/show_bug.cgi?id=132556
+
+ Reviewed by Andreas Kling.
+
+ jsSubstring() is now lazy by using a special rope that is a substring instead of a
+ concatenation. To make this patch super simple, we require that a substring's base is
+ never a rope. Hence, when resolving a rope, we either go down a non-recursive substring
+ path, or we go down a concatenation path which may see exactly one level of substrings in
+ its fibers.
+
+ This is up to a 50% speed-up on microbenchmarks and a 10% speed-up on Octane/regexp.
+
+ Relanding this with assertion fixes.
+
+ * heap/MarkedBlock.cpp:
+ (JSC::MarkedBlock::specializedSweep):
+ * runtime/JSString.cpp:
+ (JSC::JSRopeString::visitFibers):
+ (JSC::JSRopeString::resolveRopeInternal8):
+ (JSC::JSRopeString::resolveRopeInternal16):
+ (JSC::JSRopeString::clearFibers):
+ (JSC::JSRopeString::resolveRope):
+ (JSC::JSRopeString::resolveRopeSlowCase8):
+ (JSC::JSRopeString::resolveRopeSlowCase):
+ * runtime/JSString.h:
+ (JSC::JSRopeString::finishCreation):
+ (JSC::JSRopeString::append):
+ (JSC::JSRopeString::create):
+ (JSC::JSRopeString::offsetOfFibers):
+ (JSC::JSRopeString::fiber):
+ (JSC::JSRopeString::substringBase):
+ (JSC::JSRopeString::substringOffset):
+ (JSC::JSRopeString::notSubstringSentinel):
+ (JSC::JSRopeString::substringSentinel):
+ (JSC::JSRopeString::isSubstring):
+ (JSC::JSRopeString::setIsSubstring):
+ (JSC::jsSubstring):
+ * runtime/RegExpMatchesArray.cpp:
+ (JSC::RegExpMatchesArray::reifyAllProperties):
+ * runtime/StringPrototype.cpp:
+ (JSC::stringProtoFuncSubstring):
+
2014-07-21 Sam Weinig <sam@webkit.org>
[Cocoa] WKScriptMessageHandlers don't seem to function properly after navigating
diff --git a/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp b/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
index 29a9237..f68f904 100644
--- a/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
+++ b/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
@@ -120,28 +120,38 @@
UNUSED_PARAM(bytecodeIndex);
UNUSED_PARAM(map);
#if ENABLE(DFG_JIT)
- if (profiledBlock->hasExitSite(locker, DFG::FrequentExitSite(bytecodeIndex, BadCache))
- || profiledBlock->hasExitSite(locker, DFG::FrequentExitSite(bytecodeIndex, BadCacheWatchpoint))
- || profiledBlock->hasExitSite(locker, DFG::FrequentExitSite(bytecodeIndex, BadExecutable)))
+ ExitSiteData exitSiteData = computeExitSiteData(locker, profiledBlock, bytecodeIndex);
+ if (exitSiteData.m_takesSlowPath)
return takesSlowPath();
CallLinkInfo* callLinkInfo = map.get(CodeOrigin(bytecodeIndex));
if (!callLinkInfo)
return computeFromLLInt(locker, profiledBlock, bytecodeIndex);
- CallLinkStatus result = computeFor(locker, *callLinkInfo);
- if (!result)
- return computeFromLLInt(locker, profiledBlock, bytecodeIndex);
-
- if (profiledBlock->hasExitSite(locker, DFG::FrequentExitSite(bytecodeIndex, BadFunction)))
- result.makeClosureCall();
-
- return result;
+ return computeFor(locker, *callLinkInfo, exitSiteData);
#else
return CallLinkStatus();
#endif
}
+CallLinkStatus::ExitSiteData CallLinkStatus::computeExitSiteData(
+ const ConcurrentJITLocker& locker, CodeBlock* profiledBlock, unsigned bytecodeIndex,
+ ExitingJITType exitingJITType)
+{
+ ExitSiteData exitSiteData;
+
+#if ENABLE(DFG_JIT)
+ exitSiteData.m_takesSlowPath =
+ profiledBlock->hasExitSite(locker, DFG::FrequentExitSite(bytecodeIndex, BadCache, exitingJITType))
+ || profiledBlock->hasExitSite(locker, DFG::FrequentExitSite(bytecodeIndex, BadCacheWatchpoint, exitingJITType))
+ || profiledBlock->hasExitSite(locker, DFG::FrequentExitSite(bytecodeIndex, BadExecutable, exitingJITType));
+ exitSiteData.m_badFunction =
+ profiledBlock->hasExitSite(locker, DFG::FrequentExitSite(bytecodeIndex, BadFunction, exitingJITType));
+#endif
+
+ return exitSiteData;
+}
+
#if ENABLE(JIT)
CallLinkStatus CallLinkStatus::computeFor(const ConcurrentJITLocker&, CallLinkInfo& callLinkInfo)
{
@@ -173,6 +183,19 @@
return CallLinkStatus(target);
}
+
+CallLinkStatus CallLinkStatus::computeFor(
+ const ConcurrentJITLocker& locker, CallLinkInfo& callLinkInfo, ExitSiteData exitSiteData)
+{
+ if (exitSiteData.m_takesSlowPath)
+ return takesSlowPath();
+
+ CallLinkStatus result = computeFor(locker, callLinkInfo);
+ if (exitSiteData.m_badFunction)
+ result.makeClosureCall();
+
+ return result;
+}
#endif
void CallLinkStatus::computeDFGStatuses(
@@ -185,9 +208,6 @@
CallLinkInfo& info = **iter;
CodeOrigin codeOrigin = info.codeOrigin;
- bool takeSlowPath;
- bool badFunction;
-
// Check if we had already previously made a terrible mistake in the FTL for this
// code origin. Note that this is approximate because we could have a monovariant
// inline in the FTL that ended up failing. We should fix that at some point by
@@ -197,28 +217,16 @@
// InlineCallFrames.
CodeBlock* currentBaseline =
baselineCodeBlockForOriginAndBaselineCodeBlock(codeOrigin, baselineCodeBlock);
+ ExitSiteData exitSiteData;
{
ConcurrentJITLocker locker(currentBaseline->m_lock);
- takeSlowPath =
- currentBaseline->hasExitSite(locker, DFG::FrequentExitSite(codeOrigin.bytecodeIndex, BadCache, ExitFromFTL))
- || currentBaseline->hasExitSite(locker, DFG::FrequentExitSite(codeOrigin.bytecodeIndex, BadCacheWatchpoint, ExitFromFTL))
- || currentBaseline->hasExitSite(locker, DFG::FrequentExitSite(codeOrigin.bytecodeIndex, BadExecutable, ExitFromFTL));
- badFunction =
- currentBaseline->hasExitSite(locker, DFG::FrequentExitSite(codeOrigin.bytecodeIndex, BadFunction, ExitFromFTL));
+ exitSiteData = computeExitSiteData(
+ locker, currentBaseline, codeOrigin.bytecodeIndex, ExitFromFTL);
}
{
ConcurrentJITLocker locker(dfgCodeBlock->m_lock);
- if (takeSlowPath)
- map.add(info.codeOrigin, takesSlowPath());
- else {
- CallLinkStatus status = computeFor(locker, info);
- if (status.isSet()) {
- if (badFunction)
- status.makeClosureCall();
- map.add(info.codeOrigin, status);
- }
- }
+ map.add(info.codeOrigin, computeFor(locker, info, exitSiteData));
}
}
#else
diff --git a/Source/JavaScriptCore/bytecode/CallLinkStatus.h b/Source/JavaScriptCore/bytecode/CallLinkStatus.h
index 99b2fdc..a8ae082 100644
--- a/Source/JavaScriptCore/bytecode/CallLinkStatus.h
+++ b/Source/JavaScriptCore/bytecode/CallLinkStatus.h
@@ -30,6 +30,7 @@
#include "CodeOrigin.h"
#include "CodeSpecializationKind.h"
#include "ConcurrentJITLock.h"
+#include "ExitingJITType.h"
#include "Intrinsic.h"
#include "JSCJSValue.h"
@@ -79,10 +80,23 @@
static CallLinkStatus computeFor(
CodeBlock*, unsigned bytecodeIndex, const CallLinkInfoMap&);
+ struct ExitSiteData {
+ ExitSiteData()
+ : m_takesSlowPath(false)
+ , m_badFunction(false)
+ {
+ }
+
+ bool m_takesSlowPath;
+ bool m_badFunction;
+ };
+ static ExitSiteData computeExitSiteData(const ConcurrentJITLocker&, CodeBlock*, unsigned bytecodeIndex, ExitingJITType = ExitFromAnything);
+
#if ENABLE(JIT)
// Computes the status assuming that we never took slow path and never previously
// exited.
static CallLinkStatus computeFor(const ConcurrentJITLocker&, CallLinkInfo&);
+ static CallLinkStatus computeFor(const ConcurrentJITLocker&, CallLinkInfo&, ExitSiteData);
#endif
typedef HashMap<CodeOrigin, CallLinkStatus, CodeOriginApproximateHash> ContextMap;
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.cpp b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
index e50601d..b557268 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
@@ -348,11 +348,6 @@
out.printf("self");
baseStructure = stubInfo.u.getByIdSelf.baseObjectStructure.get();
break;
- case access_get_by_id_chain:
- out.printf("chain");
- baseStructure = stubInfo.u.getByIdChain.baseObjectStructure.get();
- chain = stubInfo.u.getByIdChain.chain.get();
- break;
case access_get_by_id_list:
out.printf("list");
list = stubInfo.u.getByIdList.list;
@@ -2325,6 +2320,15 @@
return m_stubInfos.add();
}
+StructureStubInfo* CodeBlock::findStubInfo(CodeOrigin codeOrigin)
+{
+ for (StructureStubInfo* stubInfo : m_stubInfos) {
+ if (stubInfo->codeOrigin == codeOrigin)
+ return stubInfo;
+ }
+ return nullptr;
+}
+
CallLinkInfo* CodeBlock::addCallLinkInfo()
{
ConcurrentJITLocker locker(m_lock);
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.h b/Source/JavaScriptCore/bytecode/CodeBlock.h
index 18ef0e3..e840022 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.h
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.h
@@ -201,6 +201,10 @@
StructureStubInfo* addStubInfo();
Bag<StructureStubInfo>::iterator stubInfoBegin() { return m_stubInfos.begin(); }
Bag<StructureStubInfo>::iterator stubInfoEnd() { return m_stubInfos.end(); }
+
+ // O(n) operation. Use getStubInfoMap() unless you really only intend to get one
+ // stub info.
+ StructureStubInfo* findStubInfo(CodeOrigin);
void resetStub(StructureStubInfo&);
@@ -1191,7 +1195,7 @@
RELEASE_ASSERT(inlineCallFrame);
ExecutableBase* executable = inlineCallFrame->executable.get();
RELEASE_ASSERT(executable->structure()->classInfo() == FunctionExecutable::info());
- return static_cast<FunctionExecutable*>(executable)->baselineCodeBlockFor(inlineCallFrame->isCall ? CodeForCall : CodeForConstruct);
+ return static_cast<FunctionExecutable*>(executable)->baselineCodeBlockFor(inlineCallFrame->specializationKind());
}
inline CodeBlock* baselineCodeBlockForOriginAndBaselineCodeBlock(const CodeOrigin& codeOrigin, CodeBlock* baselineCodeBlock)
diff --git a/Source/JavaScriptCore/bytecode/CodeOrigin.cpp b/Source/JavaScriptCore/bytecode/CodeOrigin.cpp
index 7ec1ce2..81b1e6a 100644
--- a/Source/JavaScriptCore/bytecode/CodeOrigin.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeOrigin.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2013, 2014 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -178,7 +178,7 @@
out.print(briefFunctionInformation(), ":<", RawPointer(executable.get()));
if (executable->isStrictMode())
out.print(" (StrictMode)");
- out.print(", bc#", caller.bytecodeIndex, ", ", specializationKind());
+ out.print(", bc#", caller.bytecodeIndex, ", ", kind);
if (isClosureCall)
out.print(", closure call");
else
@@ -195,3 +195,26 @@
} // namespace JSC
+namespace WTF {
+
+void printInternal(PrintStream& out, JSC::InlineCallFrame::Kind kind)
+{
+ switch (kind) {
+ case JSC::InlineCallFrame::Call:
+ out.print("Call");
+ return;
+ case JSC::InlineCallFrame::Construct:
+ out.print("Construct");
+ return;
+ case JSC::InlineCallFrame::GetterCall:
+ out.print("GetterCall");
+ return;
+ case JSC::InlineCallFrame::SetterCall:
+ out.print("SetterCall");
+ return;
+ }
+ RELEASE_ASSERT_NOT_REACHED();
+}
+
+} // namespace WTF
+
diff --git a/Source/JavaScriptCore/bytecode/CodeOrigin.h b/Source/JavaScriptCore/bytecode/CodeOrigin.h
index 5136697..ce433f8 100644
--- a/Source/JavaScriptCore/bytecode/CodeOrigin.h
+++ b/Source/JavaScriptCore/bytecode/CodeOrigin.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2011, 2012, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -118,13 +118,49 @@
};
struct InlineCallFrame {
+ enum Kind {
+ Call,
+ Construct,
+
+ // For these, the stackOffset incorporates the argument count plus the true return PC
+ // slot.
+ GetterCall,
+ SetterCall
+ };
+
+ static Kind kindFor(CodeSpecializationKind kind)
+ {
+ switch (kind) {
+ case CodeForCall:
+ return Call;
+ case CodeForConstruct:
+ return Construct;
+ }
+ RELEASE_ASSERT_NOT_REACHED();
+ return Call;
+ }
+
+ static CodeSpecializationKind specializationKindFor(Kind kind)
+ {
+ switch (kind) {
+ case Call:
+ case GetterCall:
+ case SetterCall:
+ return CodeForCall;
+ case Construct:
+ return CodeForConstruct;
+ }
+ RELEASE_ASSERT_NOT_REACHED();
+ return CodeForCall;
+ }
+
Vector<ValueRecovery> arguments; // Includes 'this'.
WriteBarrier<ScriptExecutable> executable;
ValueRecovery calleeRecovery;
CodeOrigin caller;
BitVector capturedVars; // Indexed by the machine call frame's variable numbering.
signed stackOffset : 30;
- bool isCall : 1;
+ Kind kind : 2;
bool isClosureCall : 1; // If false then we know that callee/scope are constants and the DFG won't treat them as variables, i.e. they have to be recovered manually.
VirtualRegister argumentsRegister; // This is only set if the code uses arguments. The unmodified arguments register follows the unmodifiedArgumentsRegister() convention (see CodeBlock.h).
@@ -133,12 +169,12 @@
// we forgot to initialize explicitly.
InlineCallFrame()
: stackOffset(0)
- , isCall(false)
+ , kind(Call)
, isClosureCall(false)
{
}
- CodeSpecializationKind specializationKind() const { return specializationFromIsCall(isCall); }
+ CodeSpecializationKind specializationKind() const { return specializationKindFor(kind); }
JSFunction* calleeConstant() const
{
@@ -209,6 +245,8 @@
namespace WTF {
+void printInternal(PrintStream&, JSC::InlineCallFrame::Kind);
+
template<typename T> struct DefaultHash;
template<> struct DefaultHash<JSC::CodeOrigin> {
typedef JSC::CodeOriginHash Hash;
diff --git a/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp b/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp
index 235cdbd..9a24d50 100644
--- a/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp
+++ b/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp
@@ -26,6 +26,7 @@
#include "config.h"
#include "GetByIdStatus.h"
+#include "AccessorCallJITStubRoutine.h"
#include "CodeBlock.h"
#include "JSCInlines.h"
#include "JSScope.h"
@@ -85,57 +86,6 @@
return GetByIdStatus(Simple, false, GetByIdVariant(StructureSet(structure), offset, specificValue));
}
-bool GetByIdStatus::computeForChain(CodeBlock* profiledBlock, StringImpl* uid, PassRefPtr<IntendedStructureChain> passedChain)
-{
-#if ENABLE(JIT)
- RefPtr<IntendedStructureChain> chain = passedChain;
-
- // Validate the chain. If the chain is invalid, then currently the best thing
- // we can do is to assume that TakesSlow is true. In the future, it might be
- // worth exploring reifying the structure chain from the structure we've got
- // instead of using the one from the cache, since that will do the right things
- // if the structure chain has changed. But that may be harder, because we may
- // then end up having a different type of access altogether. And it currently
- // does not appear to be worth it to do so -- effectively, the heuristic we
- // have now is that if the structure chain has changed between when it was
- // cached on in the baseline JIT and when the DFG tried to inline the access,
- // then we fall back on a polymorphic access.
- if (!chain->isStillValid())
- return false;
-
- if (chain->head()->takesSlowPathInDFGForImpureProperty())
- return false;
- size_t chainSize = chain->size();
- for (size_t i = 0; i < chainSize; i++) {
- if (chain->at(i)->takesSlowPathInDFGForImpureProperty())
- return false;
- }
-
- JSObject* currentObject = chain->terminalPrototype();
- Structure* currentStructure = chain->last();
-
- ASSERT_UNUSED(currentObject, currentObject);
-
- unsigned attributesIgnored;
- JSCell* specificValue;
-
- PropertyOffset offset = currentStructure->getConcurrently(
- *profiledBlock->vm(), uid, attributesIgnored, specificValue);
- if (currentStructure->isDictionary())
- specificValue = 0;
- if (!isValidOffset(offset))
- return false;
-
- return appendVariant(GetByIdVariant(StructureSet(chain->head()), offset, specificValue, chain));
-#else // ENABLE(JIT)
- UNUSED_PARAM(profiledBlock);
- UNUSED_PARAM(uid);
- UNUSED_PARAM(passedChain);
- UNREACHABLE_FOR_PLATFORM();
- return false;
-#endif // ENABLE(JIT)
-}
-
GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, StubInfoMap& map, unsigned bytecodeIndex, StringImpl* uid)
{
ConcurrentJITLocker locker(profiledBlock->m_lock);
@@ -144,12 +94,13 @@
#if ENABLE(DFG_JIT)
result = computeForStubInfo(
- locker, profiledBlock, map.get(CodeOrigin(bytecodeIndex)), uid);
+ locker, profiledBlock, map.get(CodeOrigin(bytecodeIndex)), uid,
+ CallLinkStatus::computeExitSiteData(locker, profiledBlock, bytecodeIndex));
if (!result.takesSlowPath()
&& (hasExitSite(locker, profiledBlock, bytecodeIndex)
|| profiledBlock->likelyToTakeSlowCase(bytecodeIndex)))
- return GetByIdStatus(TakesSlowPath, true);
+ return GetByIdStatus(result.makesCalls() ? MakesCalls : TakesSlowPath, true);
#else
UNUSED_PARAM(map);
#endif
@@ -162,37 +113,26 @@
#if ENABLE(JIT)
GetByIdStatus GetByIdStatus::computeForStubInfo(
- const ConcurrentJITLocker&, CodeBlock* profiledBlock, StructureStubInfo* stubInfo,
- StringImpl* uid)
+ const ConcurrentJITLocker& locker, CodeBlock* profiledBlock, StructureStubInfo* stubInfo, StringImpl* uid,
+ CallLinkStatus::ExitSiteData callExitSiteData)
{
if (!stubInfo || !stubInfo->seen)
return GetByIdStatus(NoInformation);
+ PolymorphicGetByIdList* list = 0;
+ State slowPathState = TakesSlowPath;
+ if (stubInfo->accessType == access_get_by_id_list) {
+ list = stubInfo->u.getByIdList.list;
+ for (unsigned i = 0; i < list->size(); ++i) {
+ const GetByIdAccess& access = list->at(i);
+ if (access.doesCalls())
+ slowPathState = MakesCalls;
+ }
+ }
+
if (stubInfo->resetByGC)
return GetByIdStatus(TakesSlowPath, true);
- PolymorphicGetByIdList* list = 0;
- if (stubInfo->accessType == access_get_by_id_list) {
- list = stubInfo->u.getByIdList.list;
- bool makesCalls = false;
- bool isWatched = false;
- for (unsigned i = 0; i < list->size(); ++i) {
- const GetByIdAccess& access = list->at(i);
- if (access.doesCalls()) {
- makesCalls = true;
- break;
- }
- if (access.isWatched()) {
- isWatched = true;
- continue;
- }
- }
- if (makesCalls)
- return GetByIdStatus(MakesCalls, true);
- if (isWatched)
- return GetByIdStatus(TakesSlowPath, true);
- }
-
// Finally figure out if we can derive an access strategy.
GetByIdStatus result;
result.m_state = Simple;
@@ -204,14 +144,14 @@
case access_get_by_id_self: {
Structure* structure = stubInfo->u.getByIdSelf.baseObjectStructure.get();
if (structure->takesSlowPathInDFGForImpureProperty())
- return GetByIdStatus(TakesSlowPath, true);
+ return GetByIdStatus(slowPathState, true);
unsigned attributesIgnored;
JSCell* specificValue;
GetByIdVariant variant;
variant.m_offset = structure->getConcurrently(
*profiledBlock->vm(), uid, attributesIgnored, specificValue);
if (!isValidOffset(variant.m_offset))
- return GetByIdStatus(TakesSlowPath, true);
+ return GetByIdStatus(slowPathState, true);
if (structure->isDictionary())
specificValue = 0;
@@ -224,8 +164,6 @@
case access_get_by_id_list: {
for (unsigned listIndex = 0; listIndex < list->size(); ++listIndex) {
- ASSERT(list->at(listIndex).isSimple());
-
Structure* structure = list->at(listIndex).structure();
// FIXME: We should assert that we never see a structure that
@@ -235,72 +173,114 @@
// https://bugs.webkit.org/show_bug.cgi?id=131810
if (structure->takesSlowPathInDFGForImpureProperty())
- return GetByIdStatus(TakesSlowPath, true);
-
- if (list->at(listIndex).chain()) {
- RefPtr<IntendedStructureChain> chain = adoptRef(new IntendedStructureChain(
- profiledBlock, structure, list->at(listIndex).chain(),
- list->at(listIndex).chainCount()));
- if (!result.computeForChain(profiledBlock, uid, chain))
- return GetByIdStatus(TakesSlowPath, true);
- continue;
- }
+ return GetByIdStatus(slowPathState, true);
unsigned attributesIgnored;
JSCell* specificValue;
- PropertyOffset myOffset = structure->getConcurrently(
- *profiledBlock->vm(), uid, attributesIgnored, specificValue);
- if (structure->isDictionary())
- specificValue = 0;
-
- if (!isValidOffset(myOffset))
- return GetByIdStatus(TakesSlowPath, true);
+ PropertyOffset myOffset;
+ RefPtr<IntendedStructureChain> chain;
- bool found = false;
- for (unsigned variantIndex = 0; variantIndex < result.m_variants.size(); ++variantIndex) {
- GetByIdVariant& variant = result.m_variants[variantIndex];
- if (variant.m_chain)
- continue;
+ if (list->at(listIndex).chain()) {
+ chain = adoptRef(new IntendedStructureChain(
+ profiledBlock, structure, list->at(listIndex).chain(),
+ list->at(listIndex).chainCount()));
- if (variant.m_offset != myOffset)
- continue;
-
- found = true;
- if (variant.m_structureSet.contains(structure))
- break;
+ if (!chain->isStillValid())
+ return GetByIdStatus(slowPathState, true);
- if (variant.m_specificValue != JSValue(specificValue))
- variant.m_specificValue = JSValue();
+ if (chain->head()->takesSlowPathInDFGForImpureProperty())
+ return GetByIdStatus(slowPathState, true);
- variant.m_structureSet.add(structure);
- break;
+ size_t chainSize = chain->size();
+ for (size_t i = 0; i < chainSize; i++) {
+ if (chain->at(i)->takesSlowPathInDFGForImpureProperty())
+ return GetByIdStatus(slowPathState, true);
+ }
+
+ JSObject* currentObject = chain->terminalPrototype();
+ Structure* currentStructure = chain->last();
+
+ ASSERT_UNUSED(currentObject, currentObject);
+
+ myOffset = currentStructure->getConcurrently(
+ *profiledBlock->vm(), uid, attributesIgnored, specificValue);
+ if (currentStructure->isDictionary())
+ specificValue = 0;
+ } else {
+ myOffset = structure->getConcurrently(
+ *profiledBlock->vm(), uid, attributesIgnored, specificValue);
+ if (structure->isDictionary())
+ specificValue = 0;
}
- if (found)
- continue;
+ if (!isValidOffset(myOffset))
+ return GetByIdStatus(slowPathState, true);
+
+ if (!chain && !list->at(listIndex).doesCalls()) {
+ // For non-chain, non-getter accesses, we try to do some coalescing.
+ bool found = false;
+ for (unsigned variantIndex = 0; variantIndex < result.m_variants.size(); ++variantIndex) {
+ GetByIdVariant& variant = result.m_variants[variantIndex];
+ if (variant.m_chain)
+ continue;
+
+ if (variant.m_offset != myOffset)
+ continue;
+
+ if (variant.callLinkStatus())
+ continue;
+
+ found = true;
+ if (variant.m_structureSet.contains(structure))
+ break;
+
+ if (variant.m_specificValue != JSValue(specificValue))
+ variant.m_specificValue = JSValue();
+
+ variant.m_structureSet.add(structure);
+ break;
+ }
- if (!result.appendVariant(GetByIdVariant(StructureSet(structure), myOffset, specificValue)))
- return GetByIdStatus(TakesSlowPath, true);
+ if (found)
+ continue;
+ }
+
+ std::unique_ptr<CallLinkStatus> callLinkStatus;
+ switch (list->at(listIndex).type()) {
+ case GetByIdAccess::SimpleInline:
+ case GetByIdAccess::SimpleStub: {
+ break;
+ }
+ case GetByIdAccess::Getter: {
+ AccessorCallJITStubRoutine* stub = static_cast<AccessorCallJITStubRoutine*>(
+ list->at(listIndex).stubRoutine());
+ callLinkStatus = std::make_unique<CallLinkStatus>(
+ CallLinkStatus::computeFor(locker, *stub->m_callLinkInfo, callExitSiteData));
+ break;
+ }
+ case GetByIdAccess::CustomGetter:
+ case GetByIdAccess::WatchedStub: {
+ // FIXME: It would be totally sweet to support these at some point in the future.
+ // https://bugs.webkit.org/show_bug.cgi?id=133052
+ // https://bugs.webkit.org/show_bug.cgi?id=135172
+ return GetByIdStatus(slowPathState, true);
+ }
+ default:
+ RELEASE_ASSERT_NOT_REACHED();
+ }
+
+ GetByIdVariant variant(
+ StructureSet(structure), myOffset, specificValue, chain,
+ std::move(callLinkStatus));
+ if (!result.appendVariant(variant))
+ return GetByIdStatus(slowPathState, true);
}
return result;
}
- case access_get_by_id_chain: {
- if (!stubInfo->u.getByIdChain.isDirect)
- return GetByIdStatus(MakesCalls, true);
- RefPtr<IntendedStructureChain> chain = adoptRef(new IntendedStructureChain(
- profiledBlock,
- stubInfo->u.getByIdChain.baseObjectStructure.get(),
- stubInfo->u.getByIdChain.chain.get(),
- stubInfo->u.getByIdChain.count));
- if (result.computeForChain(profiledBlock, uid, chain))
- return result;
- return GetByIdStatus(TakesSlowPath, true);
- }
-
default:
- return GetByIdStatus(TakesSlowPath, true);
+ return GetByIdStatus(slowPathState, true);
}
RELEASE_ASSERT_NOT_REACHED();
@@ -314,10 +294,18 @@
{
#if ENABLE(DFG_JIT)
if (dfgBlock) {
+ CallLinkStatus::ExitSiteData exitSiteData;
+ {
+ ConcurrentJITLocker locker(profiledBlock->m_lock);
+ exitSiteData = CallLinkStatus::computeExitSiteData(
+ locker, profiledBlock, codeOrigin.bytecodeIndex, ExitFromFTL);
+ }
+
GetByIdStatus result;
{
ConcurrentJITLocker locker(dfgBlock->m_lock);
- result = computeForStubInfo(locker, dfgBlock, dfgMap.get(codeOrigin), uid);
+ result = computeForStubInfo(
+ locker, dfgBlock, dfgMap.get(codeOrigin), uid, exitSiteData);
}
if (result.takesSlowPath())
@@ -370,6 +358,24 @@
Simple, false, GetByIdVariant(StructureSet(structure), offset, specificValue));
}
+bool GetByIdStatus::makesCalls() const
+{
+ switch (m_state) {
+ case NoInformation:
+ case TakesSlowPath:
+ return false;
+ case Simple:
+ for (unsigned i = m_variants.size(); i--;) {
+ if (m_variants[i].callLinkStatus())
+ return true;
+ }
+ return false;
+ case MakesCalls:
+ return true;
+ }
+ RELEASE_ASSERT_NOT_REACHED();
+}
+
void GetByIdStatus::dump(PrintStream& out) const
{
out.print("(");
diff --git a/Source/JavaScriptCore/bytecode/GetByIdStatus.h b/Source/JavaScriptCore/bytecode/GetByIdStatus.h
index c350e2c..652201e 100644
--- a/Source/JavaScriptCore/bytecode/GetByIdStatus.h
+++ b/Source/JavaScriptCore/bytecode/GetByIdStatus.h
@@ -26,6 +26,7 @@
#ifndef GetByIdStatus_h
#define GetByIdStatus_h
+#include "CallLinkStatus.h"
#include "CodeOrigin.h"
#include "ConcurrentJITLock.h"
#include "ExitingJITType.h"
@@ -83,7 +84,7 @@
const GetByIdVariant& operator[](size_t index) const { return at(index); }
bool takesSlowPath() const { return m_state == TakesSlowPath || m_state == MakesCalls; }
- bool makesCalls() const { return m_state == MakesCalls; }
+ bool makesCalls() const;
bool wasSeenInJIT() const { return m_wasSeenInJIT; }
@@ -94,9 +95,10 @@
static bool hasExitSite(const ConcurrentJITLocker&, CodeBlock*, unsigned bytecodeIndex, ExitingJITType = ExitFromAnything);
#endif
#if ENABLE(JIT)
- static GetByIdStatus computeForStubInfo(const ConcurrentJITLocker&, CodeBlock*, StructureStubInfo*, StringImpl* uid);
+ static GetByIdStatus computeForStubInfo(
+ const ConcurrentJITLocker&, CodeBlock* profiledBlock, StructureStubInfo*,
+ StringImpl* uid, CallLinkStatus::ExitSiteData);
#endif
- bool computeForChain(CodeBlock*, StringImpl* uid, PassRefPtr<IntendedStructureChain>);
static GetByIdStatus computeFromLLInt(CodeBlock*, unsigned bytecodeIndex, StringImpl* uid);
bool appendVariant(const GetByIdVariant&);
diff --git a/Source/JavaScriptCore/bytecode/GetByIdVariant.cpp b/Source/JavaScriptCore/bytecode/GetByIdVariant.cpp
index b8bedce..7c7e128 100644
--- a/Source/JavaScriptCore/bytecode/GetByIdVariant.cpp
+++ b/Source/JavaScriptCore/bytecode/GetByIdVariant.cpp
@@ -26,10 +26,31 @@
#include "config.h"
#include "GetByIdVariant.h"
+#include "CallLinkStatus.h"
#include "JSCInlines.h"
namespace JSC {
+GetByIdVariant::~GetByIdVariant() { }
+
+GetByIdVariant::GetByIdVariant(const GetByIdVariant& other)
+{
+ *this = other;
+}
+
+GetByIdVariant& GetByIdVariant::operator=(const GetByIdVariant& other)
+{
+ m_structureSet = other.m_structureSet;
+ m_chain = other.m_chain;
+ m_specificValue = other.m_specificValue;
+ m_offset = other.m_offset;
+ if (other.m_callLinkStatus)
+ m_callLinkStatus = std::make_unique<CallLinkStatus>(*other.m_callLinkStatus);
+ else
+ m_callLinkStatus = nullptr;
+ return *this;
+}
+
void GetByIdVariant::dump(PrintStream& out) const
{
dumpInContext(out, 0);
@@ -45,7 +66,10 @@
out.print(
"<", inContext(structureSet(), context), ", ",
pointerDumpInContext(chain(), context), ", ",
- inContext(specificValue(), context), ", ", offset(), ">");
+ inContext(specificValue(), context), ", ", offset());
+ if (m_callLinkStatus)
+ out.print("call: ", *m_callLinkStatus);
+ out.print(">");
}
} // namespace JSC
diff --git a/Source/JavaScriptCore/bytecode/GetByIdVariant.h b/Source/JavaScriptCore/bytecode/GetByIdVariant.h
index f0201ee..f30448b 100644
--- a/Source/JavaScriptCore/bytecode/GetByIdVariant.h
+++ b/Source/JavaScriptCore/bytecode/GetByIdVariant.h
@@ -26,6 +26,7 @@
#ifndef GetByIdVariant_h
#define GetByIdVariant_h
+#include "CallLinkStatus.h"
#include "IntendedStructureChain.h"
#include "JSCJSValue.h"
#include "PropertyOffset.h"
@@ -33,6 +34,7 @@
namespace JSC {
+class CallLinkStatus;
class GetByIdStatus;
struct DumpContext;
@@ -41,11 +43,13 @@
GetByIdVariant(
const StructureSet& structureSet = StructureSet(),
PropertyOffset offset = invalidOffset, JSValue specificValue = JSValue(),
- PassRefPtr<IntendedStructureChain> chain = nullptr)
+ PassRefPtr<IntendedStructureChain> chain = nullptr,
+ std::unique_ptr<CallLinkStatus> callLinkStatus = nullptr)
: m_structureSet(structureSet)
, m_chain(chain)
, m_specificValue(specificValue)
, m_offset(offset)
+ , m_callLinkStatus(std::move(callLinkStatus))
{
if (!structureSet.size()) {
ASSERT(offset == invalidOffset);
@@ -54,12 +58,18 @@
}
}
+ ~GetByIdVariant();
+
+ GetByIdVariant(const GetByIdVariant&);
+ GetByIdVariant& operator=(const GetByIdVariant&);
+
bool isSet() const { return !!m_structureSet.size(); }
bool operator!() const { return !isSet(); }
const StructureSet& structureSet() const { return m_structureSet; }
IntendedStructureChain* chain() const { return const_cast<IntendedStructureChain*>(m_chain.get()); }
JSValue specificValue() const { return m_specificValue; }
PropertyOffset offset() const { return m_offset; }
+ CallLinkStatus* callLinkStatus() const { return m_callLinkStatus.get(); }
void dump(PrintStream&) const;
void dumpInContext(PrintStream&, DumpContext*) const;
@@ -71,6 +81,7 @@
RefPtr<IntendedStructureChain> m_chain;
JSValue m_specificValue;
PropertyOffset m_offset;
+ std::unique_ptr<CallLinkStatus> m_callLinkStatus;
};
} // namespace JSC
diff --git a/Source/JavaScriptCore/bytecode/PolymorphicGetByIdList.cpp b/Source/JavaScriptCore/bytecode/PolymorphicGetByIdList.cpp
index c4005c0..85011c6 100644
--- a/Source/JavaScriptCore/bytecode/PolymorphicGetByIdList.cpp
+++ b/Source/JavaScriptCore/bytecode/PolymorphicGetByIdList.cpp
@@ -58,27 +58,11 @@
GetByIdAccess result;
- switch (stubInfo.accessType) {
- case access_get_by_id_self:
- result.m_type = SimpleInline;
- result.m_structure.copyFrom(stubInfo.u.getByIdSelf.baseObjectStructure);
- result.m_stubRoutine = JITStubRoutine::createSelfManagedRoutine(initialSlowPath);
- break;
-
- case access_get_by_id_chain:
- result.m_structure.copyFrom(stubInfo.u.getByIdChain.baseObjectStructure);
- result.m_chain.copyFrom(stubInfo.u.getByIdChain.chain);
- result.m_chainCount = stubInfo.u.getByIdChain.count;
- result.m_stubRoutine = stubInfo.stubRoutine;
- if (stubInfo.u.getByIdChain.isDirect)
- result.m_type = SimpleStub;
- else
- result.m_type = Getter;
- break;
-
- default:
- RELEASE_ASSERT_NOT_REACHED();
- }
+ RELEASE_ASSERT(stubInfo.accessType == access_get_by_id_self);
+
+ result.m_type = SimpleInline;
+ result.m_structure.copyFrom(stubInfo.u.getByIdSelf.baseObjectStructure);
+ result.m_stubRoutine = JITStubRoutine::createSelfManagedRoutine(initialSlowPath);
return result;
}
@@ -109,7 +93,6 @@
ASSERT(
stubInfo.accessType == access_get_by_id_self
- || stubInfo.accessType == access_get_by_id_chain
|| stubInfo.accessType == access_unset);
PolymorphicGetByIdList* result = new PolymorphicGetByIdList(stubInfo);
diff --git a/Source/JavaScriptCore/bytecode/SpeculatedType.h b/Source/JavaScriptCore/bytecode/SpeculatedType.h
index 5e90635..6f658df 100644
--- a/Source/JavaScriptCore/bytecode/SpeculatedType.h
+++ b/Source/JavaScriptCore/bytecode/SpeculatedType.h
@@ -59,7 +59,7 @@
static const SpeculatedType SpecStringIdent = 0x00010000; // It's definitely a JSString, and it's an identifier.
static const SpeculatedType SpecStringVar = 0x00020000; // It's definitely a JSString, and it's not an identifier.
static const SpeculatedType SpecString = 0x00030000; // It's definitely a JSString.
-static const SpeculatedType SpecCellOther = 0x00040000; // It's definitely a JSCell but not a subclass of JSObject and definitely not a JSString.
+static const SpeculatedType SpecCellOther = 0x00040000; // It's definitely a JSCell but not a subclass of JSObject and definitely not a JSString. FIXME: This shouldn't be part of heap-top or bytecode-top. https://bugs.webkit.org/show_bug.cgi?id=133078
static const SpeculatedType SpecCell = 0x0007ffff; // It's definitely a JSCell.
static const SpeculatedType SpecInt32 = 0x00200000; // It's definitely an Int32.
static const SpeculatedType SpecInt52 = 0x00400000; // It's definitely an Int52 and we intend it to unbox it.
diff --git a/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp b/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp
index 4615a3c..5ea530c 100644
--- a/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp
+++ b/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp
@@ -49,7 +49,6 @@
return;
}
case access_get_by_id_self:
- case access_get_by_id_chain:
case access_put_by_id_transition_normal:
case access_put_by_id_transition_direct:
case access_put_by_id_replace:
@@ -68,11 +67,6 @@
if (!Heap::isMarked(u.getByIdSelf.baseObjectStructure.get()))
return false;
break;
- case access_get_by_id_chain:
- if (!Heap::isMarked(u.getByIdChain.baseObjectStructure.get())
- || !Heap::isMarked(u.getByIdChain.chain.get()))
- return false;
- break;
case access_get_by_id_list: {
if (!u.getByIdList.list->visitWeak(repatchBuffer))
return false;
diff --git a/Source/JavaScriptCore/bytecode/StructureStubInfo.h b/Source/JavaScriptCore/bytecode/StructureStubInfo.h
index 0089966..76c80c2 100644
--- a/Source/JavaScriptCore/bytecode/StructureStubInfo.h
+++ b/Source/JavaScriptCore/bytecode/StructureStubInfo.h
@@ -47,7 +47,6 @@
enum AccessType {
access_get_by_id_self,
- access_get_by_id_chain,
access_get_by_id_list,
access_put_by_id_transition_normal,
access_put_by_id_transition_direct,
@@ -61,7 +60,6 @@
{
switch (accessType) {
case access_get_by_id_self:
- case access_get_by_id_chain:
case access_get_by_id_list:
return true;
default:
@@ -107,16 +105,6 @@
u.getByIdSelf.baseObjectStructure.set(vm, owner, baseObjectStructure);
}
- void initGetByIdChain(VM& vm, JSCell* owner, Structure* baseObjectStructure, StructureChain* chain, unsigned count, bool isDirect)
- {
- accessType = access_get_by_id_chain;
-
- u.getByIdChain.baseObjectStructure.set(vm, owner, baseObjectStructure);
- u.getByIdChain.chain.set(vm, owner, chain);
- u.getByIdChain.count = count;
- u.getByIdChain.isDirect = isDirect;
- }
-
void initGetByIdList(PolymorphicGetByIdList* list)
{
accessType = access_get_by_id_list;
diff --git a/Source/JavaScriptCore/dfg/DFGAbstractHeap.h b/Source/JavaScriptCore/dfg/DFGAbstractHeap.h
index 338c99e..7c422e7 100644
--- a/Source/JavaScriptCore/dfg/DFGAbstractHeap.h
+++ b/Source/JavaScriptCore/dfg/DFGAbstractHeap.h
@@ -50,6 +50,8 @@
macro(Butterfly_arrayBuffer) \
macro(Butterfly_publicLength) \
macro(Butterfly_vectorLength) \
+ macro(GetterSetter_getter) \
+ macro(GetterSetter_setter) \
macro(JSArrayBufferView_length) \
macro(JSArrayBufferView_mode) \
macro(JSArrayBufferView_vector) \
diff --git a/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h b/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
index ee08938..5e51dea 100644
--- a/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
+++ b/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
@@ -1433,6 +1433,8 @@
break;
case GetCallee:
+ case GetGetter:
+ case GetSetter:
forNode(node).setType(SpecFunction);
break;
@@ -1658,6 +1660,11 @@
break;
}
+ case GetGetterSetterByOffset: {
+ forNode(node).set(m_graph, m_graph.m_vm.getterSetterStructure.get());
+ break;
+ }
+
case MultiGetByOffset: {
AbstractValue& value = forNode(node->child1());
ASSERT(!(value.m_type & ~SpecCell)); // Edge filtering should have already ensured this.
diff --git a/Source/JavaScriptCore/dfg/DFGArrayMode.cpp b/Source/JavaScriptCore/dfg/DFGArrayMode.cpp
index 0cbdc7c..7391c02 100644
--- a/Source/JavaScriptCore/dfg/DFGArrayMode.cpp
+++ b/Source/JavaScriptCore/dfg/DFGArrayMode.cpp
@@ -158,9 +158,6 @@
// should just trust the array profile.
switch (type()) {
- case Array::Unprofiled:
- return ArrayMode(Array::ForceExit);
-
case Array::Undecided:
if (!value)
return withType(Array::ForceExit);
@@ -189,6 +186,7 @@
return withConversion(Array::RageConvert);
return *this;
+ case Array::Unprofiled:
case Array::SelectUsingPredictions: {
base &= ~SpecOther;
@@ -239,6 +237,8 @@
if (isFloat64ArraySpeculation(base))
return result.withType(Array::Float64Array);
+ if (type() == Array::Unprofiled)
+ return ArrayMode(Array::ForceExit);
return ArrayMode(Array::Generic);
}
diff --git a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
index 7067480..059c19a 100644
--- a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
+++ b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
@@ -171,21 +171,21 @@
bool handleMinMax(int resultOperand, NodeType op, int registerOffset, int argumentCountIncludingThis);
// Handle calls. This resolves issues surrounding inlining and intrinsics.
+ void handleCall(
+ int result, NodeType op, InlineCallFrame::Kind, unsigned instructionSize,
+ Node* callTarget, int argCount, int registerOffset, CallLinkStatus);
void handleCall(int result, NodeType op, CodeSpecializationKind, unsigned instructionSize, int callee, int argCount, int registerOffset);
void handleCall(Instruction* pc, NodeType op, CodeSpecializationKind);
void emitFunctionChecks(const CallLinkStatus&, Node* callTarget, int registerOffset, CodeSpecializationKind);
void emitArgumentPhantoms(int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind);
// Handle inlining. Return true if it succeeded, false if we need to plant a call.
- bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, CodeSpecializationKind);
+ bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind);
// Handle intrinsic functions. Return true if it succeeded, false if we need to plant a call.
bool handleIntrinsic(int resultOperand, Intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction);
bool handleTypedArrayConstructor(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, TypedArrayType);
bool handleConstantInternalFunction(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, CodeSpecializationKind);
Node* handlePutByOffset(Node* base, unsigned identifier, PropertyOffset, Node* value);
- Node* handleGetByOffset(SpeculatedType, Node* base, unsigned identifierNumber, PropertyOffset);
- void handleGetByOffset(
- int destinationOperand, SpeculatedType, Node* base, unsigned identifierNumber,
- PropertyOffset);
+ Node* handleGetByOffset(SpeculatedType, Node* base, unsigned identifierNumber, PropertyOffset, NodeType op = GetByOffset);
void handleGetById(
int destinationOperand, SpeculatedType, Node* base, unsigned identifierNumber,
const GetByIdStatus&);
@@ -811,11 +811,11 @@
m_numPassedVarArgs++;
}
- Node* addCall(int result, NodeType op, int callee, int argCount, int registerOffset)
+ Node* addCall(int result, NodeType op, Node* callee, int argCount, int registerOffset)
{
SpeculatedType prediction = getPrediction();
- addVarArgChild(get(VirtualRegister(callee)));
+ addVarArgChild(callee);
size_t parameterSlots = JSStack::CallFrameHeaderSize - JSStack::CallerFrameAndPCSize + argCount;
if (parameterSlots > m_parameterSlots)
m_parameterSlots = parameterSlots;
@@ -1122,7 +1122,7 @@
VirtualRegister returnValueVR,
VirtualRegister inlineCallFrameStart,
int argumentCountIncludingThis,
- CodeSpecializationKind);
+ InlineCallFrame::Kind);
~InlineStackEntry()
{
@@ -1205,26 +1205,35 @@
int result, NodeType op, CodeSpecializationKind kind, unsigned instructionSize,
int callee, int argumentCountIncludingThis, int registerOffset)
{
- ASSERT(registerOffset <= 0);
-
Node* callTarget = get(VirtualRegister(callee));
- CallLinkStatus callLinkStatus;
-
+ CallLinkStatus callLinkStatus = CallLinkStatus::computeFor(
+ m_inlineStackTop->m_profiledBlock, currentCodeOrigin(),
+ m_inlineStackTop->m_callLinkInfos, m_callContextMap);
+
+ handleCall(
+ result, op, InlineCallFrame::kindFor(kind), instructionSize, callTarget,
+ argumentCountIncludingThis, registerOffset, callLinkStatus);
+}
+
+void ByteCodeParser::handleCall(
+ int result, NodeType op, InlineCallFrame::Kind kind, unsigned instructionSize,
+ Node* callTarget, int argumentCountIncludingThis, int registerOffset,
+ CallLinkStatus callLinkStatus)
+{
+ ASSERT(registerOffset <= 0);
+ CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
+
if (m_graph.isConstant(callTarget)) {
callLinkStatus = CallLinkStatus(
m_graph.valueOfJSConstant(callTarget)).setIsProved(true);
- } else {
- callLinkStatus = CallLinkStatus::computeFor(
- m_inlineStackTop->m_profiledBlock, currentCodeOrigin(),
- m_inlineStackTop->m_callLinkInfos, m_callContextMap);
}
if (!callLinkStatus.canOptimize()) {
// Oddly, this conflates calls that haven't executed with calls that behaved sufficiently polymorphically
// that we cannot optimize them.
- addCall(result, op, callee, argumentCountIncludingThis, registerOffset);
+ addCall(result, op, callTarget, argumentCountIncludingThis, registerOffset);
return;
}
@@ -1232,28 +1241,28 @@
SpeculatedType prediction = getPrediction();
if (InternalFunction* function = callLinkStatus.internalFunction()) {
- if (handleConstantInternalFunction(result, function, registerOffset, argumentCountIncludingThis, prediction, kind)) {
+ if (handleConstantInternalFunction(result, function, registerOffset, argumentCountIncludingThis, prediction, specializationKind)) {
// This phantoming has to be *after* the code for the intrinsic, to signify that
// the inputs must be kept alive whatever exits the intrinsic may do.
addToGraph(Phantom, callTarget);
- emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, kind);
+ emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
return;
}
// Can only handle this using the generic call handler.
- addCall(result, op, callee, argumentCountIncludingThis, registerOffset);
+ addCall(result, op, callTarget, argumentCountIncludingThis, registerOffset);
return;
}
- Intrinsic intrinsic = callLinkStatus.intrinsicFor(kind);
+ Intrinsic intrinsic = callLinkStatus.intrinsicFor(specializationKind);
if (intrinsic != NoIntrinsic) {
- emitFunctionChecks(callLinkStatus, callTarget, registerOffset, kind);
+ emitFunctionChecks(callLinkStatus, callTarget, registerOffset, specializationKind);
if (handleIntrinsic(result, intrinsic, registerOffset, argumentCountIncludingThis, prediction)) {
// This phantoming has to be *after* the code for the intrinsic, to signify that
// the inputs must be kept alive whatever exits the intrinsic may do.
addToGraph(Phantom, callTarget);
- emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, kind);
+ emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
if (m_graph.compilation())
m_graph.compilation()->noticeInlinedCall();
return;
@@ -1264,7 +1273,7 @@
return;
}
- addCall(result, op, callee, argumentCountIncludingThis, registerOffset);
+ addCall(result, op, callTarget, argumentCountIncludingThis, registerOffset);
}
void ByteCodeParser::emitFunctionChecks(const CallLinkStatus& callLinkStatus, Node* callTarget, int registerOffset, CodeSpecializationKind kind)
@@ -1299,10 +1308,12 @@
addToGraph(Phantom, get(virtualRegisterForArgument(i, registerOffset)));
}
-bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, CodeSpecializationKind kind)
+bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind)
{
static const bool verbose = false;
+ CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
+
if (verbose)
dataLog("Considering inlining ", callLinkStatus, " into ", currentCodeOrigin(), "\n");
@@ -1335,14 +1346,14 @@
// if we had a static proof of what was being called; this might happen for example if you call a
// global function, where watchpointing gives us static information. Overall, it's a rare case
// because we expect that any hot callees would have already been compiled.
- CodeBlock* codeBlock = executable->baselineCodeBlockFor(kind);
+ CodeBlock* codeBlock = executable->baselineCodeBlockFor(specializationKind);
if (!codeBlock) {
if (verbose)
dataLog(" Failing because no code block available.\n");
return false;
}
CapabilityLevel capabilityLevel = inlineFunctionForCapabilityLevel(
- codeBlock, kind, callLinkStatus.isClosureCall());
+ codeBlock, specializationKind, callLinkStatus.isClosureCall());
if (!canInline(capabilityLevel)) {
if (verbose)
dataLog(" Failing because the function is not inlineable.\n");
@@ -1394,7 +1405,7 @@
// Now we know without a doubt that we are committed to inlining. So begin the process
// by checking the callee (if necessary) and making sure that arguments and the callee
// are flushed.
- emitFunctionChecks(callLinkStatus, callTargetNode, registerOffset, kind);
+ emitFunctionChecks(callLinkStatus, callTargetNode, registerOffset, specializationKind);
// FIXME: Don't flush constants!
@@ -1873,14 +1884,14 @@
return false;
}
-Node* ByteCodeParser::handleGetByOffset(SpeculatedType prediction, Node* base, unsigned identifierNumber, PropertyOffset offset)
+Node* ByteCodeParser::handleGetByOffset(SpeculatedType prediction, Node* base, unsigned identifierNumber, PropertyOffset offset, NodeType op)
{
Node* propertyStorage;
if (isInlineOffset(offset))
propertyStorage = base;
else
propertyStorage = addToGraph(GetButterfly, base);
- Node* getByOffset = addToGraph(GetByOffset, OpInfo(m_graph.m_storageAccessData.size()), OpInfo(prediction), propertyStorage, base);
+ Node* getByOffset = addToGraph(op, OpInfo(m_graph.m_storageAccessData.size()), OpInfo(prediction), propertyStorage, base);
StorageAccessData storageAccessData;
storageAccessData.offset = offset;
@@ -1890,13 +1901,6 @@
return getByOffset;
}
-void ByteCodeParser::handleGetByOffset(
- int destinationOperand, SpeculatedType prediction, Node* base, unsigned identifierNumber,
- PropertyOffset offset)
-{
- set(VirtualRegister(destinationOperand), handleGetByOffset(prediction, base, identifierNumber, offset));
-}
-
Node* ByteCodeParser::handlePutByOffset(Node* base, unsigned identifier, PropertyOffset offset, Node* value)
{
Node* propertyStorage;
@@ -1933,18 +1937,19 @@
int destinationOperand, SpeculatedType prediction, Node* base, unsigned identifierNumber,
const GetByIdStatus& getByIdStatus)
{
+ NodeType getById = getByIdStatus.makesCalls() ? GetByIdFlush : GetById;
+
if (!getByIdStatus.isSimple() || !Options::enableAccessInlining()) {
set(VirtualRegister(destinationOperand),
- addToGraph(
- getByIdStatus.makesCalls() ? GetByIdFlush : GetById,
- OpInfo(identifierNumber), OpInfo(prediction), base));
+ addToGraph(getById, OpInfo(identifierNumber), OpInfo(prediction), base));
return;
}
if (getByIdStatus.numVariants() > 1) {
- if (!isFTL(m_graph.m_plan.mode) || !Options::enablePolymorphicAccessInlining()) {
+ if (getByIdStatus.makesCalls() || !isFTL(m_graph.m_plan.mode)
+ || !Options::enablePolymorphicAccessInlining()) {
set(VirtualRegister(destinationOperand),
- addToGraph(GetById, OpInfo(identifierNumber), OpInfo(prediction), base));
+ addToGraph(getById, OpInfo(identifierNumber), OpInfo(prediction), base));
return;
}
@@ -1977,7 +1982,7 @@
if (m_graph.compilation())
m_graph.compilation()->noticeInlinedGetById();
- Node* originalBaseForBaselineJIT = base;
+ Node* originalBase = base;
addToGraph(CheckStructure, OpInfo(m_graph.addStructureSet(variant.structureSet())), base);
@@ -1992,18 +1997,59 @@
// on something other than the base following the CheckStructure on base, or if the
// access was compiled to a WeakJSConstant specific value, in which case we might not
// have any explicit use of the base at all.
- if (variant.specificValue() || originalBaseForBaselineJIT != base)
- addToGraph(Phantom, originalBaseForBaselineJIT);
+ if (variant.specificValue() || originalBase != base)
+ addToGraph(Phantom, originalBase);
- if (variant.specificValue()) {
- ASSERT(variant.specificValue().isCell());
-
- set(VirtualRegister(destinationOperand), cellConstant(variant.specificValue().asCell()));
+ Node* loadedValue;
+ if (variant.specificValue())
+ loadedValue = cellConstant(variant.specificValue().asCell());
+ else {
+ loadedValue = handleGetByOffset(
+ prediction, base, identifierNumber, variant.offset(),
+ variant.callLinkStatus() ? GetGetterSetterByOffset : GetByOffset);
+ }
+
+ if (!variant.callLinkStatus()) {
+ set(VirtualRegister(destinationOperand), loadedValue);
return;
}
- handleGetByOffset(
- destinationOperand, prediction, base, identifierNumber, variant.offset());
+ Node* getter = addToGraph(GetGetter, loadedValue);
+
+ // Make a call. We don't try to get fancy with using the smallest operand number because
+ // the stack layout phase should compress the stack anyway.
+
+ unsigned numberOfParameters = 0;
+ numberOfParameters++; // The 'this' argument.
+ numberOfParameters++; // True return PC.
+
+ // Start with a register offset that corresponds to the last in-use register.
+ int registerOffset = virtualRegisterForLocal(
+ m_inlineStackTop->m_profiledBlock->m_numCalleeRegisters - 1).offset();
+ registerOffset -= numberOfParameters;
+ registerOffset -= JSStack::CallFrameHeaderSize;
+
+ // Get the alignment right.
+ registerOffset = -WTF::roundUpToMultipleOf(
+ stackAlignmentRegisters(),
+ -registerOffset);
+
+ ensureLocals(
+ m_inlineStackTop->remapOperand(
+ VirtualRegister(registerOffset)).toLocal());
+
+ // Issue SetLocals. This has two effects:
+ // 1) That's how handleCall() sees the arguments.
+ // 2) If we inline then this ensures that the arguments are flushed so that if you use
+ // the dreaded arguments object on the getter, the right things happen. Well, sort of -
+ // since we only really care about 'this' in this case. But we're not going to take that
+ // shortcut.
+ int nextRegister = registerOffset + JSStack::CallFrameHeaderSize;
+ set(VirtualRegister(nextRegister++), originalBase, ImmediateNakedSet);
+
+ handleCall(
+ destinationOperand, Call, InlineCallFrame::GetterCall, OPCODE_LENGTH(op_get_by_id),
+ getter, numberOfParameters - 1, registerOffset, *variant.callLinkStatus());
}
void ByteCodeParser::emitPutById(
@@ -2655,7 +2701,7 @@
// === Property access operations ===
case op_get_by_val: {
- SpeculatedType prediction = getPrediction();
+ SpeculatedType prediction = getPredictionWithoutOSRExit();
Node* base = get(VirtualRegister(currentInstruction[2].u.operand));
ArrayMode arrayMode = getArrayModeConsideringSlowPath(currentInstruction[4].u.arrayProfile, Array::Read);
@@ -3074,12 +3120,12 @@
UNUSED_PARAM(watchpoints); // We will use this in the future. For now we set it as a way of documenting the fact that that's what index 5 is in GlobalVar mode.
- SpeculatedType prediction = getPrediction();
JSGlobalObject* globalObject = m_inlineStackTop->m_codeBlock->globalObject();
switch (resolveType) {
case GlobalProperty:
case GlobalPropertyWithVarInjectionChecks: {
+ SpeculatedType prediction = getPrediction();
GetByIdStatus status = GetByIdStatus::computeFor(*m_vm, structure, uid);
if (status.state() != GetByIdStatus::Simple || status.numVariants() != 1) {
set(VirtualRegister(dst), addToGraph(GetByIdFlush, OpInfo(identifierNumber), OpInfo(prediction), get(VirtualRegister(scope))));
@@ -3101,6 +3147,7 @@
JSValue specificValue =
watchpointSet ? watchpointSet->inferredValue() : JSValue();
if (!specificValue) {
+ SpeculatedType prediction = getPrediction();
set(VirtualRegister(dst), addToGraph(GetGlobalVar, OpInfo(operand), OpInfo(prediction)));
break;
}
@@ -3127,6 +3174,7 @@
}
}
}
+ SpeculatedType prediction = getPrediction();
set(VirtualRegister(dst),
addToGraph(GetClosureVar, OpInfo(operand), OpInfo(prediction),
addToGraph(GetClosureRegisters, scopeNode)));
@@ -3397,7 +3445,7 @@
VirtualRegister returnValueVR,
VirtualRegister inlineCallFrameStart,
int argumentCountIncludingThis,
- CodeSpecializationKind kind)
+ InlineCallFrame::Kind kind)
: m_byteCodeParser(byteCodeParser)
, m_codeBlock(codeBlock)
, m_profiledBlock(profiledBlock)
@@ -3456,7 +3504,7 @@
m_inlineCallFrame->isClosureCall = true;
m_inlineCallFrame->caller = byteCodeParser->currentCodeOrigin();
m_inlineCallFrame->arguments.resize(argumentCountIncludingThis); // Set the number of arguments including this, but don't configure the value recoveries, yet.
- m_inlineCallFrame->isCall = isCall(kind);
+ m_inlineCallFrame->kind = kind;
if (m_inlineCallFrame->caller.inlineCallFrame)
m_inlineCallFrame->capturedVars = m_inlineCallFrame->caller.inlineCallFrame->capturedVars;
@@ -3695,7 +3743,7 @@
InlineStackEntry inlineStackEntry(
this, m_codeBlock, m_profiledBlock, 0, 0, VirtualRegister(), VirtualRegister(),
- m_codeBlock->numParameters(), CodeForCall);
+ m_codeBlock->numParameters(), InlineCallFrame::Call);
parseCodeBlock();
diff --git a/Source/JavaScriptCore/dfg/DFGCSEPhase.cpp b/Source/JavaScriptCore/dfg/DFGCSEPhase.cpp
index d4e1023..635ef21 100644
--- a/Source/JavaScriptCore/dfg/DFGCSEPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGCSEPhase.cpp
@@ -690,6 +690,40 @@
return 0;
}
+ Node* getGetterSetterByOffsetLoadElimination(unsigned identifierNumber, Node* base)
+ {
+ for (unsigned i = m_indexInBlock; i--;) {
+ Node* node = m_currentBlock->at(i);
+ if (node == base)
+ break;
+
+ switch (node->op()) {
+ case GetGetterSetterByOffset:
+ if (node->child2() == base
+ && m_graph.m_storageAccessData[node->storageAccessDataIndex()].identifierNumber == identifierNumber)
+ return node;
+ break;
+
+ case PutByValDirect:
+ case PutByVal:
+ case PutByValAlias:
+ if (m_graph.byValIsPure(node)) {
+ // If PutByVal speculates that it's accessing an array with an
+ // integer index, then it's impossible for it to cause a structure
+ // change.
+ break;
+ }
+ return 0;
+
+ default:
+ if (m_graph.clobbersWorld(node))
+ return 0;
+ break;
+ }
+ }
+ return 0;
+ }
+
Node* putByOffsetStoreElimination(unsigned identifierNumber, Node* child1)
{
for (unsigned i = m_indexInBlock; i--;) {
@@ -845,25 +879,18 @@
return 0;
}
- Node* getTypedArrayByteOffsetLoadElimination(Node* child1)
+ Node* getInternalFieldLoadElimination(NodeType op, Node* child1)
{
for (unsigned i = m_indexInBlock; i--;) {
Node* node = m_currentBlock->at(i);
if (node == child1)
break;
- switch (node->op()) {
- case GetTypedArrayByteOffset: {
- if (node->child1() == child1)
- return node;
- break;
- }
+ if (node->op() == op && node->child1() == child1)
+ return node;
- default:
- if (m_graph.clobbersWorld(node))
- return 0;
- break;
- }
+ if (m_graph.clobbersWorld(node))
+ return 0;
}
return 0;
}
@@ -1437,10 +1464,12 @@
break;
}
- case GetTypedArrayByteOffset: {
+ case GetTypedArrayByteOffset:
+ case GetGetter:
+ case GetSetter: {
if (cseMode == StoreElimination)
break;
- setReplacement(getTypedArrayByteOffsetLoadElimination(node->child1().node()));
+ setReplacement(getInternalFieldLoadElimination(node->op(), node->child1().node()));
break;
}
@@ -1456,6 +1485,12 @@
setReplacement(getByOffsetLoadElimination(m_graph.m_storageAccessData[node->storageAccessDataIndex()].identifierNumber, node->child2().node()));
break;
+ case GetGetterSetterByOffset:
+ if (cseMode == StoreElimination)
+ break;
+ setReplacement(getGetterSetterByOffsetLoadElimination(m_graph.m_storageAccessData[node->storageAccessDataIndex()].identifierNumber, node->child2().node()));
+ break;
+
case MultiGetByOffset:
if (cseMode == StoreElimination)
break;
diff --git a/Source/JavaScriptCore/dfg/DFGClobberize.h b/Source/JavaScriptCore/dfg/DFGClobberize.h
index 9e464cf..17a59dd 100644
--- a/Source/JavaScriptCore/dfg/DFGClobberize.h
+++ b/Source/JavaScriptCore/dfg/DFGClobberize.h
@@ -215,6 +215,14 @@
write(World);
return;
+ case GetGetter:
+ read(GetterSetter_getter);
+ return;
+
+ case GetSetter:
+ read(GetterSetter_setter);
+ return;
+
case GetCallee:
read(AbstractHeap(Variables, JSStack::Callee));
return;
@@ -484,6 +492,7 @@
return;
case GetByOffset:
+ case GetGetterSetterByOffset:
read(AbstractHeap(NamedProperties, graph.m_storageAccessData[node->storageAccessDataIndex()].identifierNumber));
return;
diff --git a/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp b/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
index 3e34b5b..1ad8113 100644
--- a/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
@@ -494,6 +494,11 @@
}
case GetByVal: {
+ if (!node->prediction()) {
+ m_insertionSet.insertNode(
+ m_indexInBlock, SpecNone, ForceOSRExit, node->origin);
+ }
+
node->setArrayMode(
node->arrayMode().refine(
m_graph, node,
@@ -877,7 +882,9 @@
case GetClosureRegisters:
case SkipTopScope:
case SkipScope:
- case GetScope: {
+ case GetScope:
+ case GetGetter:
+ case GetSetter: {
fixEdge<KnownCellUse>(node->child1());
break;
}
@@ -937,7 +944,8 @@
break;
}
- case GetByOffset: {
+ case GetByOffset:
+ case GetGetterSetterByOffset: {
if (!node->child1()->hasStorageResult())
fixEdge<KnownCellUse>(node->child1());
fixEdge<KnownCellUse>(node->child2());
@@ -1035,6 +1043,8 @@
Node* globalObjectNode = m_insertionSet.insertNode(
m_indexInBlock, SpecNone, WeakJSConstant, node->origin,
OpInfo(m_graph.globalObjectFor(node->origin.semantic)));
+ // FIXME: This probably shouldn't have an unconditional barrier.
+ // https://bugs.webkit.org/show_bug.cgi?id=133104
Node* barrierNode = m_graph.addNode(
SpecNone, StoreBarrier, m_currentNode->origin,
Edge(globalObjectNode, KnownCellUse));
diff --git a/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp b/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
index c54746e..6d2c207 100644
--- a/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
+++ b/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
@@ -417,7 +417,7 @@
m_jitCode->shrinkToFit();
codeBlock()->shrinkToFit(CodeBlock::LateShrink);
- linkBuffer->link(m_callArityFixup, FunctionPtr((m_vm->getCTIStub(arityFixup)).code().executableAddress()));
+ linkBuffer->link(m_callArityFixup, FunctionPtr((m_vm->getCTIStub(arityFixupGenerator)).code().executableAddress()));
disassemble(*linkBuffer);
diff --git a/Source/JavaScriptCore/dfg/DFGNode.h b/Source/JavaScriptCore/dfg/DFGNode.h
index 3688142..9255394 100644
--- a/Source/JavaScriptCore/dfg/DFGNode.h
+++ b/Source/JavaScriptCore/dfg/DFGNode.h
@@ -1157,7 +1157,7 @@
bool hasStorageAccessData()
{
- return op() == GetByOffset || op() == PutByOffset;
+ return op() == GetByOffset || op() == GetGetterSetterByOffset || op() == PutByOffset;
}
unsigned storageAccessDataIndex()
diff --git a/Source/JavaScriptCore/dfg/DFGNodeType.h b/Source/JavaScriptCore/dfg/DFGNodeType.h
index 37433e3..2575198 100644
--- a/Source/JavaScriptCore/dfg/DFGNodeType.h
+++ b/Source/JavaScriptCore/dfg/DFGNodeType.h
@@ -181,7 +181,10 @@
macro(GetIndexedPropertyStorage, NodeResultStorage) \
macro(ConstantStoragePointer, NodeResultStorage) \
macro(TypedArrayWatchpoint, NodeMustGenerate) \
+ macro(GetGetter, NodeResultJS) \
+ macro(GetSetter, NodeResultJS) \
macro(GetByOffset, NodeResultJS) \
+ macro(GetGetterSetterByOffset, NodeResultJS) \
macro(MultiGetByOffset, NodeResultJS) \
macro(PutByOffset, NodeMustGenerate) \
macro(MultiPutByOffset, NodeMustGenerate) \
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp b/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
index 5b78cb2..b9ec462 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
@@ -108,12 +108,44 @@
InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame;
CodeBlock* baselineCodeBlock = jit.baselineCodeBlockFor(codeOrigin);
CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(inlineCallFrame->caller);
- unsigned callBytecodeIndex = inlineCallFrame->caller.bytecodeIndex;
- CallLinkInfo* callLinkInfo =
- baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
- RELEASE_ASSERT(callLinkInfo);
+ void* jumpTarget;
+ void* trueReturnPC = nullptr;
- void* jumpTarget = callLinkInfo->callReturnLocation.executableAddress();
+ unsigned callBytecodeIndex = inlineCallFrame->caller.bytecodeIndex;
+
+ switch (inlineCallFrame->kind) {
+ case InlineCallFrame::Call:
+ case InlineCallFrame::Construct: {
+ CallLinkInfo* callLinkInfo =
+ baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
+ RELEASE_ASSERT(callLinkInfo);
+
+ jumpTarget = callLinkInfo->callReturnLocation.executableAddress();
+ break;
+ }
+
+ case InlineCallFrame::GetterCall:
+ case InlineCallFrame::SetterCall: {
+ StructureStubInfo* stubInfo =
+ baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
+ RELEASE_ASSERT(stubInfo);
+
+ switch (inlineCallFrame->kind) {
+ case InlineCallFrame::GetterCall:
+ jumpTarget = jit.vm()->getCTIStub(baselineGetterReturnThunkGenerator).code().executableAddress();
+ break;
+ case InlineCallFrame::SetterCall:
+ jumpTarget = jit.vm()->getCTIStub(baselineSetterReturnThunkGenerator).code().executableAddress();
+ break;
+ default:
+ RELEASE_ASSERT_NOT_REACHED();
+ break;
+ }
+
+ trueReturnPC = stubInfo->callReturnLocation.labelAtOffset(
+ stubInfo->patch.deltaCallToDone).executableAddress();
+ break;
+ } }
GPRReg callerFrameGPR;
if (inlineCallFrame->caller.inlineCallFrame) {
@@ -122,12 +154,15 @@
} else
callerFrameGPR = GPRInfo::callFrameRegister;
+ jit.storePtr(AssemblyHelpers::TrustedImmPtr(jumpTarget), AssemblyHelpers::addressForByteOffset(inlineCallFrame->returnPCOffset()));
+ if (trueReturnPC)
+ jit.storePtr(AssemblyHelpers::TrustedImmPtr(trueReturnPC), AssemblyHelpers::addressFor(inlineCallFrame->stackOffset + virtualRegisterForArgument(inlineCallFrame->arguments.size()).offset()));
+
#if USE(JSVALUE64)
jit.storePtr(AssemblyHelpers::TrustedImmPtr(baselineCodeBlock), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::CodeBlock)));
if (!inlineCallFrame->isClosureCall)
jit.store64(AssemblyHelpers::TrustedImm64(JSValue::encode(JSValue(inlineCallFrame->calleeConstant()->scope()))), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ScopeChain)));
jit.store64(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
- jit.storePtr(AssemblyHelpers::TrustedImmPtr(jumpTarget), AssemblyHelpers::addressForByteOffset(inlineCallFrame->returnPCOffset()));
uint32_t locationBits = CallFrame::Location::encodeAsBytecodeOffset(codeOrigin.bytecodeIndex);
jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->arguments.size()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
@@ -143,7 +178,6 @@
if (!inlineCallFrame->isClosureCall)
jit.storePtr(AssemblyHelpers::TrustedImmPtr(inlineCallFrame->calleeConstant()->scope()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ScopeChain)));
jit.storePtr(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
- jit.storePtr(AssemblyHelpers::TrustedImmPtr(jumpTarget), AssemblyHelpers::addressForByteOffset(inlineCallFrame->returnPCOffset()));
Instruction* instruction = baselineCodeBlock->instructions().begin() + codeOrigin.bytecodeIndex;
uint32_t locationBits = CallFrame::Location::encodeAsBytecodeInstruction(instruction);
jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp b/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp
index 43dd9f3..51d6e5a 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -45,7 +45,7 @@
FunctionExecutable* executable =
static_cast<FunctionExecutable*>(codeOrigin.inlineCallFrame->executable.get());
CodeBlock* codeBlock = executable->baselineCodeBlockFor(
- codeOrigin.inlineCallFrame->isCall ? CodeForCall : CodeForConstruct);
+ codeOrigin.inlineCallFrame->specializationKind());
if (codeBlock->jitType() == JSC::JITCode::BaselineJIT)
continue;
diff --git a/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp b/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
index 6a35b9c..1fcb3ab 100644
--- a/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
@@ -193,6 +193,20 @@
changed |= setPrediction(node->getHeapPrediction());
break;
}
+
+ case GetGetterSetterByOffset: {
+ changed |= setPrediction(SpecCellOther);
+ break;
+ }
+
+ case GetGetter:
+ case GetSetter:
+ case GetCallee:
+ case NewFunctionNoCheck:
+ case NewFunctionExpression: {
+ changed |= setPrediction(SpecFunction);
+ break;
+ }
case StringCharCodeAt: {
changed |= setPrediction(SpecInt32);
@@ -396,13 +410,20 @@
changed |= mergePrediction(SpecFullDouble);
break;
case Array::Uint32Array:
- if (isInt32Speculation(node->getHeapPrediction()))
+ if (isInt32SpeculationForArithmetic(node->getHeapPrediction()))
changed |= mergePrediction(SpecInt32);
else if (enableInt52())
changed |= mergePrediction(SpecMachineInt);
else
changed |= mergePrediction(SpecInt32 | SpecInt52AsDouble);
break;
+ case Array::Int8Array:
+ case Array::Uint8Array:
+ case Array::Int16Array:
+ case Array::Uint16Array:
+ case Array::Int32Array:
+ changed |= mergePrediction(SpecInt32);
+ break;
default:
changed |= mergePrediction(node->getHeapPrediction());
break;
@@ -443,11 +464,6 @@
break;
}
- case GetCallee: {
- changed |= setPrediction(SpecFunction);
- break;
- }
-
case CreateThis:
case NewObject: {
changed |= setPrediction(SpecFinalObject);
@@ -510,12 +526,6 @@
break;
}
- case NewFunctionNoCheck:
- case NewFunctionExpression: {
- changed |= setPrediction(SpecFunction);
- break;
- }
-
case FiatInt52: {
RELEASE_ASSERT(enableInt52());
changed |= setPrediction(SpecMachineInt);
diff --git a/Source/JavaScriptCore/dfg/DFGSafeToExecute.h b/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
index 52bbfe4..866cb22 100644
--- a/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
+++ b/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
@@ -257,6 +257,8 @@
case Int52Rep:
case BooleanToNumber:
case FiatInt52:
+ case GetGetter:
+ case GetSetter:
return true;
case GetByVal:
@@ -289,6 +291,7 @@
StructureSet(node->structureTransitionData().previousStructure));
case GetByOffset:
+ case GetGetterSetterByOffset:
case PutByOffset:
return state.forNode(node->child1()).m_currentKnownStructure.isValidOffset(
graph.m_storageAccessData[node->storageAccessDataIndex()].offset);
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
index 8b2b696..a2965e0 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
@@ -35,6 +35,7 @@
#include "DFGOperations.h"
#include "DFGSlowPathGenerator.h"
#include "Debugger.h"
+#include "GetterSetter.h"
#include "JSActivation.h"
#include "ObjectPrototype.h"
#include "JSCInlines.h"
@@ -3834,6 +3835,47 @@
break;
}
+ case GetGetterSetterByOffset: {
+ StorageOperand storage(this, node->child1());
+ GPRTemporary resultPayload(this);
+
+ GPRReg storageGPR = storage.gpr();
+ GPRReg resultPayloadGPR = resultPayload.gpr();
+
+ StorageAccessData& storageAccessData = m_jit.graph().m_storageAccessData[node->storageAccessDataIndex()];
+
+ m_jit.load32(JITCompiler::Address(storageGPR, offsetRelativeToBase(storageAccessData.offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload)), resultPayloadGPR);
+
+ cellResult(resultPayloadGPR, node);
+ break;
+ }
+
+ case GetGetter: {
+ SpeculateCellOperand op1(this, node->child1());
+ GPRTemporary result(this, Reuse, op1);
+
+ GPRReg op1GPR = op1.gpr();
+ GPRReg resultGPR = result.gpr();
+
+ m_jit.loadPtr(JITCompiler::Address(op1GPR, GetterSetter::offsetOfGetter()), resultGPR);
+
+ cellResult(resultGPR, node);
+ break;
+ }
+
+ case GetSetter: {
+ SpeculateCellOperand op1(this, node->child1());
+ GPRTemporary result(this, Reuse, op1);
+
+ GPRReg op1GPR = op1.gpr();
+ GPRReg resultGPR = result.gpr();
+
+ m_jit.loadPtr(JITCompiler::Address(op1GPR, GetterSetter::offsetOfSetter()), resultGPR);
+
+ cellResult(resultGPR, node);
+ break;
+ }
+
case PutByOffset: {
StorageOperand storage(this, node->child1());
JSValueOperand value(this, node->child3());
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
index eb11796..c0055e4 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
@@ -35,6 +35,7 @@
#include "DFGOperations.h"
#include "DFGSlowPathGenerator.h"
#include "Debugger.h"
+#include "GetterSetter.h"
#include "JSCInlines.h"
#include "ObjectPrototype.h"
#include "SpillRegistersMode.h"
@@ -3938,7 +3939,8 @@
break;
}
- case GetByOffset: {
+ case GetByOffset:
+ case GetGetterSetterByOffset: {
StorageOperand storage(this, node->child1());
GPRTemporary result(this, Reuse, storage);
@@ -3953,6 +3955,32 @@
break;
}
+ case GetGetter: {
+ SpeculateCellOperand op1(this, node->child1());
+ GPRTemporary result(this, Reuse, op1);
+
+ GPRReg op1GPR = op1.gpr();
+ GPRReg resultGPR = result.gpr();
+
+ m_jit.loadPtr(JITCompiler::Address(op1GPR, GetterSetter::offsetOfGetter()), resultGPR);
+
+ cellResult(resultGPR, node);
+ break;
+ }
+
+ case GetSetter: {
+ SpeculateCellOperand op1(this, node->child1());
+ GPRTemporary result(this, Reuse, op1);
+
+ GPRReg op1GPR = op1.gpr();
+ GPRReg resultGPR = result.gpr();
+
+ m_jit.loadPtr(JITCompiler::Address(op1GPR, GetterSetter::offsetOfSetter()), resultGPR);
+
+ cellResult(resultGPR, node);
+ break;
+ }
+
case PutByOffset: {
StorageOperand storage(this, node->child1());
JSValueOperand value(this, node->child3());
diff --git a/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.cpp b/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.cpp
index 2189cd9..77ba105 100644
--- a/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.cpp
+++ b/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -28,6 +28,7 @@
#if ENABLE(FTL_JIT)
+#include "GetterSetter.h"
#include "JSScope.h"
#include "JSVariableObject.h"
#include "JSCInlines.h"
diff --git a/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h b/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h
index d59ad6a..b98b736 100644
--- a/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h
+++ b/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -46,6 +46,8 @@
macro(Butterfly_publicLength, Butterfly::offsetOfPublicLength()) \
macro(Butterfly_vectorLength, Butterfly::offsetOfVectorLength()) \
macro(CallFrame_callerFrame, CallFrame::callerFrameOffset()) \
+ macro(GetterSetter_getter, GetterSetter::offsetOfGetter()) \
+ macro(GetterSetter_setter, GetterSetter::offsetOfSetter()) \
macro(JSArrayBufferView_length, JSArrayBufferView::offsetOfLength()) \
macro(JSArrayBufferView_mode, JSArrayBufferView::offsetOfMode()) \
macro(JSArrayBufferView_vector, JSArrayBufferView::offsetOfVector()) \
diff --git a/Source/JavaScriptCore/ftl/FTLCapabilities.cpp b/Source/JavaScriptCore/ftl/FTLCapabilities.cpp
index 3c02547..be0233f 100644
--- a/Source/JavaScriptCore/ftl/FTLCapabilities.cpp
+++ b/Source/JavaScriptCore/ftl/FTLCapabilities.cpp
@@ -72,6 +72,9 @@
case NewArray:
case NewArrayBuffer:
case GetByOffset:
+ case GetGetterSetterByOffset:
+ case GetGetter:
+ case GetSetter:
case PutByOffset:
case GetGlobalVar:
case PutGlobalVar:
diff --git a/Source/JavaScriptCore/ftl/FTLLink.cpp b/Source/JavaScriptCore/ftl/FTLLink.cpp
index d23cece..b1b74ee 100644
--- a/Source/JavaScriptCore/ftl/FTLLink.cpp
+++ b/Source/JavaScriptCore/ftl/FTLLink.cpp
@@ -177,7 +177,7 @@
linkBuffer = adoptPtr(new LinkBuffer(vm, jit, codeBlock, JITCompilationMustSucceed));
linkBuffer->link(callArityCheck, codeBlock->m_isConstructor ? operationConstructArityCheck : operationCallArityCheck);
- linkBuffer->link(callArityFixup, FunctionPtr((vm.getCTIStub(arityFixup)).code().executableAddress()));
+ linkBuffer->link(callArityFixup, FunctionPtr((vm.getCTIStub(arityFixupGenerator)).code().executableAddress()));
linkBuffer->link(mainPathJumps, CodeLocationLabel(bitwise_cast<void*>(state.generatedFunction)));
state.jitCode->initializeAddressForCall(MacroAssemblerCodePtr(bitwise_cast<void*>(state.generatedFunction)));
diff --git a/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp b/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
index 3f6050a..d5336ae 100644
--- a/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
+++ b/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
@@ -495,8 +495,15 @@
compileStringCharCodeAt();
break;
case GetByOffset:
+ case GetGetterSetterByOffset:
compileGetByOffset();
break;
+ case GetGetter:
+ compileGetGetter();
+ break;
+ case GetSetter:
+ compileGetSetter();
+ break;
case MultiGetByOffset:
compileMultiGetByOffset();
break;
@@ -3187,6 +3194,16 @@
lowStorage(m_node->child1()), data.identifierNumber, data.offset));
}
+ void compileGetGetter()
+ {
+ setJSValue(m_out.loadPtr(lowCell(m_node->child1()), m_heaps.GetterSetter_getter));
+ }
+
+ void compileGetSetter()
+ {
+ setJSValue(m_out.loadPtr(lowCell(m_node->child1()), m_heaps.GetterSetter_setter));
+ }
+
void compileMultiGetByOffset()
{
LValue base = lowCell(m_node->child1());
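The `compileGetGetter`/`compileGetSetter` lowerings above are single pointer loads out of a GetterSetter cell. At the JavaScript level, the two values those nodes extract are exactly the functions visible through an accessor property descriptor — a quick sketch of the correspondence (plain JS, not JSC internals):

```javascript
// A GetterSetter cell holds the two functions an accessor property stores.
// GetGetter loads one field, GetSetter the other; GetGetterSetterByOffset
// corresponds to fetching the pair itself.
var theGetter = function() { return 84; };
var theSetter = function(v) { this._v = v; };

var o = {};
Object.defineProperty(o, "f", {
    get: theGetter,
    set: theSetter,
    configurable: true
});

// The accessor pair, and its two halves:
var desc = Object.getOwnPropertyDescriptor(o, "f");
```

`desc.get` and `desc.set` are the same function objects that were installed, which is what makes it sound for the DFG to feed the loaded getter straight into `handleCall()`.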
diff --git a/Source/JavaScriptCore/heap/MarkedBlock.cpp b/Source/JavaScriptCore/heap/MarkedBlock.cpp
index f4d39fc..1d12cf0 100644
--- a/Source/JavaScriptCore/heap/MarkedBlock.cpp
+++ b/Source/JavaScriptCore/heap/MarkedBlock.cpp
@@ -74,6 +74,8 @@
ASSERT(blockState != Allocated && blockState != FreeListed);
ASSERT(!(dtorType == MarkedBlock::None && sweepMode == SweepOnly));
+ SamplingRegion samplingRegion((dtorType != MarkedBlock::None && blockState != New) ? "Calling destructors" : "sweeping");
+
// This produces a free list that is ordered in reverse through the block.
// This is fine, since the allocation code makes no assumptions about the
// order of the free list.
diff --git a/Source/JavaScriptCore/jit/AccessorCallJITStubRoutine.h b/Source/JavaScriptCore/jit/AccessorCallJITStubRoutine.h
index d5d3855..5dcb206 100644
--- a/Source/JavaScriptCore/jit/AccessorCallJITStubRoutine.h
+++ b/Source/JavaScriptCore/jit/AccessorCallJITStubRoutine.h
@@ -46,7 +46,6 @@
virtual bool visitWeak(RepatchBuffer&) override;
-private:
std::unique_ptr<CallLinkInfo> m_callLinkInfo;
};
diff --git a/Source/JavaScriptCore/jit/JIT.cpp b/Source/JavaScriptCore/jit/JIT.cpp
index 4e43a48..33e733b 100644
--- a/Source/JavaScriptCore/jit/JIT.cpp
+++ b/Source/JavaScriptCore/jit/JIT.cpp
@@ -109,6 +109,17 @@
}
#endif
+void JIT::assertStackPointerOffset()
+{
+ if (ASSERT_DISABLED)
+ return;
+
+ addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, regT0);
+ Jump ok = branchPtr(Equal, regT0, stackPointerRegister);
+ breakpoint();
+ ok.link(this);
+}
+
#define NEXT_OPCODE(name) \
m_bytecodeOffset += OPCODE_LENGTH(name); \
break;
@@ -574,7 +585,7 @@
#endif
move(TrustedImmPtr(m_vm->arityCheckFailReturnThunks->returnPCsFor(*m_vm, m_codeBlock->numParameters())), thunkReg);
loadPtr(BaseIndex(thunkReg, regT0, timesPtr()), thunkReg);
- emitNakedCall(m_vm->getCTIStub(arityFixup).code());
+ emitNakedCall(m_vm->getCTIStub(arityFixupGenerator).code());
#if !ASSERT_DISABLED
m_bytecodeOffset = (unsigned)-1; // Reset this, in order to guard its use with ASSERTs.
diff --git a/Source/JavaScriptCore/jit/JIT.h b/Source/JavaScriptCore/jit/JIT.h
index 2234507..520ea2a 100644
--- a/Source/JavaScriptCore/jit/JIT.h
+++ b/Source/JavaScriptCore/jit/JIT.h
@@ -447,6 +447,8 @@
void emit_compareAndJump(OpcodeID, int op1, int op2, unsigned target, RelationalCondition);
void emit_compareAndJumpSlow(int op1, int op2, unsigned target, DoubleCondition, size_t (JIT_OPERATION *operation)(ExecState*, EncodedJSValue, EncodedJSValue), bool invert, Vector<SlowCaseEntry>::iterator&);
+
+ void assertStackPointerOffset();
void emit_op_touch_entry(Instruction*);
void emit_op_add(Instruction*);
diff --git a/Source/JavaScriptCore/jit/JITPropertyAccess.cpp b/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
index 34d5151..a155baa 100644
--- a/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
+++ b/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
@@ -528,6 +528,7 @@
emitValueProfilingSite();
emitPutVirtualRegister(resultVReg);
+ assertStackPointerOffset();
}
void JIT::emitSlow_op_get_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
diff --git a/Source/JavaScriptCore/jit/ThunkGenerators.cpp b/Source/JavaScriptCore/jit/ThunkGenerators.cpp
index 3b6302b..73b624f 100644
--- a/Source/JavaScriptCore/jit/ThunkGenerators.cpp
+++ b/Source/JavaScriptCore/jit/ThunkGenerators.cpp
@@ -434,7 +434,7 @@
return nativeForGenerator(vm, CodeForConstruct);
}
-MacroAssemblerCodeRef arityFixup(VM* vm)
+MacroAssemblerCodeRef arityFixupGenerator(VM* vm)
{
JSInterfaceJIT jit(vm);
@@ -537,6 +537,83 @@
return FINALIZE_CODE(patchBuffer, ("fixup arity"));
}
+MacroAssemblerCodeRef baselineGetterReturnThunkGenerator(VM* vm)
+{
+ JSInterfaceJIT jit(vm);
+
+#if USE(JSVALUE64)
+ jit.move(GPRInfo::returnValueGPR, GPRInfo::regT0);
+#else
+ jit.setupResults(GPRInfo::regT0, GPRInfo::regT1);
+#endif
+
+ unsigned numberOfParameters = 0;
+ numberOfParameters++; // The 'this' argument.
+ numberOfParameters++; // The true return PC.
+
+ unsigned numberOfRegsForCall =
+ JSStack::CallFrameHeaderSize + numberOfParameters;
+
+ unsigned numberOfBytesForCall =
+ numberOfRegsForCall * sizeof(Register) - sizeof(CallerFrameAndPC);
+
+ unsigned alignedNumberOfBytesForCall =
+ WTF::roundUpToMultipleOf(stackAlignmentBytes(), numberOfBytesForCall);
+
+ // The real return address is stored above the arguments. We passed one argument, which is
+ // 'this'. So argument at index 1 is the return address.
+ jit.loadPtr(
+ AssemblyHelpers::Address(
+ AssemblyHelpers::stackPointerRegister,
+ (virtualRegisterForArgument(1).offset() - JSStack::CallerFrameAndPCSize) * sizeof(Register)),
+ GPRInfo::regT2);
+
+ jit.addPtr(
+ AssemblyHelpers::TrustedImm32(alignedNumberOfBytesForCall),
+ AssemblyHelpers::stackPointerRegister);
+
+ jit.jump(GPRInfo::regT2);
+
+ LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
+ return FINALIZE_CODE(patchBuffer, ("baseline getter return thunk"));
+}
+
+MacroAssemblerCodeRef baselineSetterReturnThunkGenerator(VM* vm)
+{
+ JSInterfaceJIT jit(vm);
+
+ unsigned numberOfParameters = 0;
+ numberOfParameters++; // The 'this' argument.
+ numberOfParameters++; // The value to set.
+ numberOfParameters++; // The true return PC.
+
+ unsigned numberOfRegsForCall =
+ JSStack::CallFrameHeaderSize + numberOfParameters;
+
+ unsigned numberOfBytesForCall =
+ numberOfRegsForCall * sizeof(Register) - sizeof(CallerFrameAndPC);
+
+ unsigned alignedNumberOfBytesForCall =
+ WTF::roundUpToMultipleOf(stackAlignmentBytes(), numberOfBytesForCall);
+
+ // The real return address is stored above the arguments. We passed two arguments, so
+ // the argument at index 2 is the return address.
+ jit.loadPtr(
+ AssemblyHelpers::Address(
+ AssemblyHelpers::stackPointerRegister,
+ (virtualRegisterForArgument(2).offset() - JSStack::CallerFrameAndPCSize) * sizeof(Register)),
+ GPRInfo::regT2);
+
+ jit.addPtr(
+ AssemblyHelpers::TrustedImm32(alignedNumberOfBytesForCall),
+ AssemblyHelpers::stackPointerRegister);
+
+ jit.jump(GPRInfo::regT2);
+
+ LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
+ return FINALIZE_CODE(patchBuffer, ("baseline setter return thunk"));
+}
+
static void stringCharLoad(SpecializedThunkJIT& jit, VM* vm)
{
// load string
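The stack adjustment the two return thunks compute can be checked with ordinary arithmetic. Below is a sketch of the same computation; the concrete constants (8-byte `Register`s, a 5-register call-frame header, a 16-byte `CallerFrameAndPC`, 16-byte stack alignment) are typical 64-bit values and are assumptions here, not taken from this patch:

```javascript
// roundUpToMultipleOf, as WTF defines it for the sizes involved here.
function roundUpToMultipleOf(divisor, x) {
    return Math.ceil(x / divisor) * divisor;
}

// Assumed 64-bit layout constants (illustrative only).
var sizeofRegister = 8;
var callFrameHeaderSize = 5;      // JSStack::CallFrameHeaderSize, in registers
var sizeofCallerFrameAndPC = 16;  // in bytes
var stackAlignmentBytes = 16;

// Mirrors the arithmetic in baselineGetterReturnThunkGenerator /
// baselineSetterReturnThunkGenerator.
function bytesToPopAfterAccessorCall(numberOfParameters) {
    var numberOfRegsForCall = callFrameHeaderSize + numberOfParameters;
    var numberOfBytesForCall =
        numberOfRegsForCall * sizeofRegister - sizeofCallerFrameAndPC;
    return roundUpToMultipleOf(stackAlignmentBytes, numberOfBytesForCall);
}

// Getter thunk: 'this' + true return PC = 2 parameters.
var getterBytes = bytesToPopAfterAccessorCall(2);
// Setter thunk: 'this' + value + true return PC = 3 parameters.
var setterBytes = bytesToPopAfterAccessorCall(3);
```

Under those assumed constants both thunks pop the same aligned amount, because the setter's extra argument is absorbed by the alignment round-up.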
diff --git a/Source/JavaScriptCore/jit/ThunkGenerators.h b/Source/JavaScriptCore/jit/ThunkGenerators.h
index c35a2f5..0abe1af 100644
--- a/Source/JavaScriptCore/jit/ThunkGenerators.h
+++ b/Source/JavaScriptCore/jit/ThunkGenerators.h
@@ -113,7 +113,10 @@
MacroAssemblerCodeRef nativeCallGenerator(VM*);
MacroAssemblerCodeRef nativeConstructGenerator(VM*);
MacroAssemblerCodeRef nativeTailCallGenerator(VM*);
-MacroAssemblerCodeRef arityFixup(VM*);
+MacroAssemblerCodeRef arityFixupGenerator(VM*);
+
+MacroAssemblerCodeRef baselineGetterReturnThunkGenerator(VM* vm);
+MacroAssemblerCodeRef baselineSetterReturnThunkGenerator(VM* vm);
MacroAssemblerCodeRef charCodeAtThunkGenerator(VM*);
MacroAssemblerCodeRef charAtThunkGenerator(VM*);
diff --git a/Source/JavaScriptCore/runtime/Arguments.h b/Source/JavaScriptCore/runtime/Arguments.h
index 6970cb8..602364d 100644
--- a/Source/JavaScriptCore/runtime/Arguments.h
+++ b/Source/JavaScriptCore/runtime/Arguments.h
@@ -1,6 +1,6 @@
/*
* Copyright (C) 1999-2000 Harri Porten (porten@kde.org)
- * Copyright (C) 2003, 2006, 2007, 2008, 2009 Apple Inc. All rights reserved.
+ * Copyright (C) 2003, 2006, 2007, 2008, 2009, 2014 Apple Inc. All rights reserved.
* Copyright (C) 2007 Cameron Zwarich (cwzwarich@uwaterloo.ca)
* Copyright (C) 2007 Maks Orlovich
*
@@ -321,7 +321,7 @@
m_overrodeCallee = false;
m_overrodeCaller = false;
m_isStrictMode = jsCast<FunctionExecutable*>(inlineCallFrame->executable.get())->isStrictMode();
- ASSERT(!jsCast<FunctionExecutable*>(inlineCallFrame->executable.get())->symbolTable(inlineCallFrame->isCall ? CodeForCall : CodeForConstruct)->slowArguments());
+ ASSERT(!jsCast<FunctionExecutable*>(inlineCallFrame->executable.get())->symbolTable(inlineCallFrame->specializationKind())->slowArguments());
// The bytecode generator omits op_tear_off_activation in cases of no
// declared parameters, so we need to tear off immediately.
diff --git a/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp b/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
index bc81aa2..bb9797b 100644
--- a/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
+++ b/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
@@ -165,7 +165,7 @@
result->paddedStackSpace = slotsToAdd;
#if ENABLE(JIT)
if (vm.canUseJIT()) {
- result->thunkToCall = vm.getCTIStub(arityFixup).code().executableAddress();
+ result->thunkToCall = vm.getCTIStub(arityFixupGenerator).code().executableAddress();
result->returnPC = vm.arityCheckFailReturnThunks->returnPCFor(vm, slotsToAdd * stackAlignmentRegisters()).executableAddress();
} else
#endif
diff --git a/Source/JavaScriptCore/runtime/JSString.cpp b/Source/JavaScriptCore/runtime/JSString.cpp
index 10a16d9..b5141b1 100644
--- a/Source/JavaScriptCore/runtime/JSString.cpp
+++ b/Source/JavaScriptCore/runtime/JSString.cpp
@@ -83,24 +83,39 @@
void JSRopeString::visitFibers(SlotVisitor& visitor)
{
- for (size_t i = 0; i < s_maxInternalRopeLength && m_fibers[i]; ++i)
- visitor.append(&m_fibers[i]);
+ if (isSubstring()) {
+ visitor.append(&substringBase());
+ return;
+ }
+ for (size_t i = 0; i < s_maxInternalRopeLength && fiber(i); ++i)
+ visitor.append(&fiber(i));
}
static const unsigned maxLengthForOnStackResolve = 2048;
void JSRopeString::resolveRopeInternal8(LChar* buffer) const
{
- for (size_t i = 0; i < s_maxInternalRopeLength && m_fibers[i]; ++i) {
- if (m_fibers[i]->isRope()) {
+ if (isSubstring()) {
+ StringImpl::copyChars(
+ buffer, substringBase()->m_value.characters8() + substringOffset(), m_length);
+ return;
+ }
+
+ resolveRopeInternal8NoSubstring(buffer);
+}
+
+void JSRopeString::resolveRopeInternal8NoSubstring(LChar* buffer) const
+{
+ for (size_t i = 0; i < s_maxInternalRopeLength && fiber(i); ++i) {
+ if (fiber(i)->isRope()) {
resolveRopeSlowCase8(buffer);
return;
}
}
LChar* position = buffer;
- for (size_t i = 0; i < s_maxInternalRopeLength && m_fibers[i]; ++i) {
- const StringImpl& fiberString = *m_fibers[i]->m_value.impl();
+ for (size_t i = 0; i < s_maxInternalRopeLength && fiber(i); ++i) {
+ const StringImpl& fiberString = *fiber(i)->m_value.impl();
unsigned length = fiberString.length();
StringImpl::copyChars(position, fiberString.characters8(), length);
position += length;
@@ -110,16 +125,27 @@
void JSRopeString::resolveRopeInternal16(UChar* buffer) const
{
- for (size_t i = 0; i < s_maxInternalRopeLength && m_fibers[i]; ++i) {
- if (m_fibers[i]->isRope()) {
+ if (isSubstring()) {
+ StringImpl::copyChars(
+ buffer, substringBase()->m_value.characters16() + substringOffset(), m_length);
+ return;
+ }
+
+ resolveRopeInternal16NoSubstring(buffer);
+}
+
+void JSRopeString::resolveRopeInternal16NoSubstring(UChar* buffer) const
+{
+ for (size_t i = 0; i < s_maxInternalRopeLength && fiber(i); ++i) {
+ if (fiber(i)->isRope()) {
resolveRopeSlowCase(buffer);
return;
}
}
UChar* position = buffer;
- for (size_t i = 0; i < s_maxInternalRopeLength && m_fibers[i]; ++i) {
- const StringImpl& fiberString = *m_fibers[i]->m_value.impl();
+ for (size_t i = 0; i < s_maxInternalRopeLength && fiber(i); ++i) {
+ const StringImpl& fiberString = *fiber(i)->m_value.impl();
unsigned length = fiberString.length();
if (fiberString.is8Bit())
StringImpl::copyChars(position, fiberString.characters8(), length);
@@ -157,8 +183,8 @@
void JSRopeString::clearFibers() const
{
- for (size_t i = 0; i < s_maxInternalRopeLength && m_fibers[i]; ++i)
- m_fibers[i].clear();
+ for (size_t i = 0; i < s_maxInternalRopeLength; ++i)
+ u[i].number = 0;
}
AtomicStringImpl* JSRopeString::resolveRopeToExistingAtomicString(ExecState* exec) const
@@ -172,7 +198,7 @@
}
return nullptr;
}
-
+
if (is8Bit()) {
LChar buffer[maxLengthForOnStackResolve];
resolveRopeInternal8(buffer);
@@ -197,7 +223,14 @@
void JSRopeString::resolveRope(ExecState* exec) const
{
ASSERT(isRope());
-
+
+ if (isSubstring()) {
+ ASSERT(!substringBase()->isRope());
+ m_value = substringBase()->m_value.substring(substringOffset(), m_length);
+ substringBase().clear();
+ return;
+ }
+
if (is8Bit()) {
LChar* buffer;
if (RefPtr<StringImpl> newImpl = StringImpl::tryCreateUninitialized(m_length, buffer)) {
@@ -207,7 +240,7 @@
outOfMemory(exec);
return;
}
- resolveRopeInternal8(buffer);
+ resolveRopeInternal8NoSubstring(buffer);
clearFibers();
ASSERT(!isRope());
return;
@@ -222,7 +255,7 @@
return;
}
- resolveRopeInternal16(buffer);
+ resolveRopeInternal16NoSubstring(buffer);
clearFibers();
ASSERT(!isRope());
}
@@ -242,24 +275,32 @@
LChar* position = buffer + m_length; // We will be working backwards over the rope.
Vector<JSString*, 32, UnsafeVectorOverflow> workQueue; // Putting strings into a Vector is only OK because there are no GC points in this method.
- for (size_t i = 0; i < s_maxInternalRopeLength && m_fibers[i]; ++i)
- workQueue.append(m_fibers[i].get());
+ for (size_t i = 0; i < s_maxInternalRopeLength && fiber(i); ++i)
+ workQueue.append(fiber(i).get());
while (!workQueue.isEmpty()) {
JSString* currentFiber = workQueue.last();
workQueue.removeLast();
+ const LChar* characters;
+
if (currentFiber->isRope()) {
JSRopeString* currentFiberAsRope = static_cast<JSRopeString*>(currentFiber);
- for (size_t i = 0; i < s_maxInternalRopeLength && currentFiberAsRope->m_fibers[i]; ++i)
- workQueue.append(currentFiberAsRope->m_fibers[i].get());
- continue;
- }
-
- StringImpl* string = static_cast<StringImpl*>(currentFiber->m_value.impl());
- unsigned length = string->length();
+ if (!currentFiberAsRope->isSubstring()) {
+ for (size_t i = 0; i < s_maxInternalRopeLength && currentFiberAsRope->fiber(i); ++i)
+ workQueue.append(currentFiberAsRope->fiber(i).get());
+ continue;
+ }
+ ASSERT(!currentFiberAsRope->substringBase()->isRope());
+ characters =
+ currentFiberAsRope->substringBase()->m_value.characters8() +
+ currentFiberAsRope->substringOffset();
+ } else
+ characters = currentFiber->m_value.characters8();
+
+ unsigned length = currentFiber->length();
position -= length;
- StringImpl::copyChars(position, string->characters8(), length);
+ StringImpl::copyChars(position, characters, length);
}
ASSERT(buffer == position);
@@ -270,8 +311,8 @@
UChar* position = buffer + m_length; // We will be working backwards over the rope.
Vector<JSString*, 32, UnsafeVectorOverflow> workQueue; // These strings are kept alive by the parent rope, so using a Vector is OK.
- for (size_t i = 0; i < s_maxInternalRopeLength && m_fibers[i]; ++i)
- workQueue.append(m_fibers[i].get());
+ for (size_t i = 0; i < s_maxInternalRopeLength && fiber(i); ++i)
+ workQueue.append(fiber(i).get());
while (!workQueue.isEmpty()) {
JSString* currentFiber = workQueue.last();
@@ -279,8 +320,21 @@
if (currentFiber->isRope()) {
JSRopeString* currentFiberAsRope = static_cast<JSRopeString*>(currentFiber);
- for (size_t i = 0; i < s_maxInternalRopeLength && currentFiberAsRope->m_fibers[i]; ++i)
- workQueue.append(currentFiberAsRope->m_fibers[i].get());
+ if (currentFiberAsRope->isSubstring()) {
+ ASSERT(!currentFiberAsRope->substringBase()->isRope());
+ StringImpl* string = static_cast<StringImpl*>(
+ currentFiberAsRope->substringBase()->m_value.impl());
+ unsigned offset = currentFiberAsRope->substringOffset();
+ unsigned length = currentFiberAsRope->length();
+ position -= length;
+ if (string->is8Bit())
+ StringImpl::copyChars(position, string->characters8() + offset, length);
+ else
+ StringImpl::copyChars(position, string->characters16() + offset, length);
+ continue;
+ }
+ for (size_t i = 0; i < s_maxInternalRopeLength && currentFiberAsRope->fiber(i); ++i)
+ workQueue.append(currentFiberAsRope->fiber(i).get());
continue;
}
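The resolution logic in the hunks above has two shapes: ordinary ropes walk their fibers (recursing through nested ropes), while substring ropes copy a slice of their already-resolved base. The control flow can be modeled with plain JS objects — an illustrative model only, not JSC's actual representation:

```javascript
// An ordinary rope holds up to three fibers (strings or nested ropes);
// a substring rope holds a resolved base string plus offset and length.
function makeRope(fibers) { return { fibers: fibers }; }
function makeSubstring(base, offset, length) {
    return { base: base, offset: offset, length: length };
}

// Mirrors JSRopeString::resolveRope: substrings slice their base directly
// (the isSubstring() fast path); ordinary ropes concatenate their
// recursively resolved fibers (the fiber-walking path).
function resolve(s) {
    if (typeof s === "string")
        return s;
    if (s.base !== undefined)
        return s.base.slice(s.offset, s.offset + s.length);
    return s.fibers.map(resolve).join("");
}
```

Note the invariant asserted throughout the patch: a substring's base is never itself a rope, so the substring branch never recurses.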
diff --git a/Source/JavaScriptCore/runtime/JSString.h b/Source/JavaScriptCore/runtime/JSString.h
index 1287b66..e77b220 100644
--- a/Source/JavaScriptCore/runtime/JSString.h
+++ b/Source/JavaScriptCore/runtime/JSString.h
@@ -273,8 +273,10 @@
Base::finishCreation(vm);
m_length = s1->length() + s2->length();
setIs8Bit(s1->is8Bit() && s2->is8Bit());
- m_fibers[0].set(vm, this, s1);
- m_fibers[1].set(vm, this, s2);
+ setIsSubstring(false);
+ fiber(0).set(vm, this, s1);
+ fiber(1).set(vm, this, s2);
+ fiber(2).clear();
}
void finishCreation(VM& vm, JSString* s1, JSString* s2, JSString* s3)
@@ -282,19 +284,37 @@
Base::finishCreation(vm);
m_length = s1->length() + s2->length() + s3->length();
setIs8Bit(s1->is8Bit() && s2->is8Bit() && s3->is8Bit());
- m_fibers[0].set(vm, this, s1);
- m_fibers[1].set(vm, this, s2);
- m_fibers[2].set(vm, this, s3);
+ setIsSubstring(false);
+ fiber(0).set(vm, this, s1);
+ fiber(1).set(vm, this, s2);
+ fiber(2).set(vm, this, s3);
+ }
+
+ void finishCreation(VM& vm, JSString* base, unsigned offset, unsigned length)
+ {
+ Base::finishCreation(vm);
+ ASSERT(!base->isRope());
+ ASSERT(!sumOverflows<int32_t>(offset, length));
+ ASSERT(offset + length <= base->length());
+ m_length = length;
+ setIs8Bit(base->is8Bit());
+ setIsSubstring(true);
+ substringBase().set(vm, this, base);
+ substringOffset() = offset;
}
void finishCreation(VM& vm)
{
JSString::finishCreation(vm);
+ setIsSubstring(false);
+ fiber(0).clear();
+ fiber(1).clear();
+ fiber(2).clear();
}
void append(VM& vm, size_t index, JSString* jsString)
{
- m_fibers[index].set(vm, this, jsString);
+ fiber(index).set(vm, this, jsString);
m_length += jsString->m_length;
RELEASE_ASSERT(static_cast<int32_t>(m_length) >= 0);
setIs8Bit(is8Bit() && jsString->is8Bit());
@@ -320,10 +340,17 @@
newString->finishCreation(vm, s1, s2, s3);
return newString;
}
+
+ static JSString* create(VM& vm, JSString* base, unsigned offset, unsigned length)
+ {
+ JSRopeString* newString = new (NotNull, allocateCell<JSRopeString>(vm.heap)) JSRopeString(vm);
+ newString->finishCreation(vm, base, offset, length);
+ return newString;
+ }
void visitFibers(SlotVisitor&);
- static ptrdiff_t offsetOfFibers() { return OBJECT_OFFSETOF(JSRopeString, m_fibers); }
+ static ptrdiff_t offsetOfFibers() { return OBJECT_OFFSETOF(JSRopeString, u); }
static const unsigned s_maxInternalRopeLength = 3;
@@ -338,12 +365,54 @@
void resolveRopeSlowCase(UChar*) const;
void outOfMemory(ExecState*) const;
void resolveRopeInternal8(LChar*) const;
+ void resolveRopeInternal8NoSubstring(LChar*) const;
void resolveRopeInternal16(UChar*) const;
+ void resolveRopeInternal16NoSubstring(UChar*) const;
void clearFibers() const;
JS_EXPORT_PRIVATE JSString* getIndexSlowCase(ExecState*, unsigned);
+
+ WriteBarrierBase<JSString>& fiber(unsigned i) const
+ {
+ ASSERT(!isSubstring());
+ ASSERT(i < s_maxInternalRopeLength);
+ return u[i].string;
+ }
+
+ WriteBarrierBase<JSString>& substringBase() const
+ {
+ return u[1].string;
+ }
+
+ uintptr_t& substringOffset() const
+ {
+ return u[2].number;
+ }
+
+ static uintptr_t notSubstringSentinel()
+ {
+ return 0;
+ }
+
+ static uintptr_t substringSentinel()
+ {
+ return 1;
+ }
+
+ bool isSubstring() const
+ {
+ return u[0].number == substringSentinel();
+ }
+
+ void setIsSubstring(bool isSubstring)
+ {
+ u[0].number = isSubstring ? substringSentinel() : notSubstringSentinel();
+ }
- mutable std::array<WriteBarrier<JSString>, s_maxInternalRopeLength> m_fibers;
+ mutable union {
+ uintptr_t number;
+ WriteBarrierBase<JSString> string;
+ } u[s_maxInternalRopeLength];
};
@@ -454,10 +523,11 @@
ASSERT(offset <= static_cast<unsigned>(s->length()));
ASSERT(length <= static_cast<unsigned>(s->length()));
ASSERT(offset + length <= static_cast<unsigned>(s->length()));
- VM* vm = &exec->vm();
+ VM& vm = exec->vm();
if (!length)
- return vm->smallStrings.emptyString();
- return jsSubstring(vm, s->value(exec), offset, length);
+ return vm.smallStrings.emptyString();
+ s->value(exec); // For effect. We need to ensure that any string that is used as a substring base is not a rope.
+ return JSRopeString::create(vm, s, offset, length);
}
inline JSString* jsSubstring8(VM* vm, const String& s, unsigned offset, unsigned length)
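The `jsSubstring` change makes substring creation cheap: it forces the base to be resolved (`s->value(exec)` for effect), then builds a substring rope that shares the base's characters, deferring any copying until the substring itself is resolved. The user-visible semantics are unchanged — a trivial sketch of the invariants any such representation must preserve:

```javascript
// Substring results must be value-identical to an eager copy, whatever the
// internal representation (eager buffer or lazily resolved substring rope).
var base = new Array(1001).join("x") + "needle" + new Array(1001).join("y");
var sub = base.substring(1000, 1006); // cheap under the rope scheme: no copy yet
```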
diff --git a/Source/JavaScriptCore/runtime/RegExpMatchesArray.cpp b/Source/JavaScriptCore/runtime/RegExpMatchesArray.cpp
index f0cc10b..de34e5c 100644
--- a/Source/JavaScriptCore/runtime/RegExpMatchesArray.cpp
+++ b/Source/JavaScriptCore/runtime/RegExpMatchesArray.cpp
@@ -74,6 +74,8 @@
ASSERT(m_state != ReifiedAll);
ASSERT(m_result);
+ SamplingRegion samplingRegion("Reifying substring properties");
+
reifyMatchPropertyIfNecessary(exec);
if (unsigned numSubpatterns = m_regExp->numSubpatterns()) {
diff --git a/Source/JavaScriptCore/runtime/StringPrototype.cpp b/Source/JavaScriptCore/runtime/StringPrototype.cpp
index c5d7d2b..fd19447 100644
--- a/Source/JavaScriptCore/runtime/StringPrototype.cpp
+++ b/Source/JavaScriptCore/runtime/StringPrototype.cpp
@@ -1218,6 +1218,7 @@
EncodedJSValue JSC_HOST_CALL stringProtoFuncSubstring(ExecState* exec)
{
+ SamplingRegion samplingRegion("Doing substringing");
JSValue thisValue = exec->thisValue();
if (!checkObjectCoercible(thisValue))
return throwVMTypeError(exec);
diff --git a/Source/JavaScriptCore/tests/stress/exit-from-getter.js b/Source/JavaScriptCore/tests/stress/exit-from-getter.js
new file mode 100644
index 0000000..11830b8
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/exit-from-getter.js
@@ -0,0 +1,23 @@
+(function() {
+ var o = {_f:42};
+ o.__defineGetter__("f", function() { return this._f * 100; });
+ var result = 0;
+ var n = 50000;
+ function foo(o) {
+ return o.f + 11;
+ }
+ noInline(foo);
+ for (var i = 0; i < n; ++i) {
+ result += foo(o);
+ }
+ if (result != n * (42 * 100 + 11))
+ throw "Error: bad result: " + result;
+ o._f = 1000000000;
+ result = 0;
+ for (var i = 0; i < n; ++i) {
+ result += foo(o);
+ }
+ if (result != n * (1000000000 * 100 + 11))
+ throw "Error: bad result (2): " + result;
+})();
+
diff --git a/Source/JavaScriptCore/tests/stress/poly-chain-getter.js b/Source/JavaScriptCore/tests/stress/poly-chain-getter.js
new file mode 100644
index 0000000..1f617a2
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/poly-chain-getter.js
@@ -0,0 +1,30 @@
+function Cons() {
+}
+Cons.prototype.__defineGetter__("f", function() {
+ counter++;
+ return 84;
+});
+
+function foo(o) {
+ return o.f;
+}
+
+noInline(foo);
+
+var counter = 0;
+
+function test(o, expected, expectedCount) {
+ var result = foo(o);
+ if (result != expected)
+ throw new Error("Bad result: " + result);
+ if (counter != expectedCount)
+ throw new Error("Bad counter value: " + counter);
+}
+
+for (var i = 0; i < 100000; ++i) {
+ test(new Cons(), 84, counter + 1);
+
+ var o = new Cons();
+ o.g = 54;
+ test(o, 84, counter + 1);
+}
diff --git a/Source/JavaScriptCore/tests/stress/poly-chain-then-getter.js b/Source/JavaScriptCore/tests/stress/poly-chain-then-getter.js
new file mode 100644
index 0000000..ecb89bd
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/poly-chain-then-getter.js
@@ -0,0 +1,31 @@
+function Cons1() {
+}
+Cons1.prototype.f = 42;
+
+function Cons2() {
+}
+Cons2.prototype.__defineGetter__("f", function() {
+ counter++;
+ return 84;
+});
+
+function foo(o) {
+ return o.f;
+}
+
+noInline(foo);
+
+var counter = 0;
+
+function test(o, expected, expectedCount) {
+ var result = foo(o);
+ if (result != expected)
+ throw new Error("Bad result: " + result);
+ if (counter != expectedCount)
+ throw new Error("Bad counter value: " + counter);
+}
+
+for (var i = 0; i < 100000; ++i) {
+ test(new Cons1(), 42, counter);
+ test(new Cons2(), 84, counter + 1);
+}
diff --git a/Source/JavaScriptCore/tests/stress/poly-getter-combo.js b/Source/JavaScriptCore/tests/stress/poly-getter-combo.js
new file mode 100644
index 0000000..cdeeee5
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/poly-getter-combo.js
@@ -0,0 +1,40 @@
+function Cons1() {
+}
+Cons1.prototype.f = 42;
+
+function Cons2() {
+}
+Cons2.prototype.__defineGetter__("f", function() {
+ counter++;
+ return 84;
+});
+
+function foo(o) {
+ return o.f;
+}
+
+noInline(foo);
+
+var counter = 0;
+
+function test(o, expected, expectedCount) {
+ var result = foo(o);
+ if (result != expected)
+ throw new Error("Bad result: " + result);
+ if (counter != expectedCount)
+ throw new Error("Bad counter value: " + counter);
+}
+
+for (var i = 0; i < 100000; ++i) {
+ test(new Cons1(), 42, counter);
+ test(new Cons2(), 84, counter + 1);
+
+ var o = {};
+ o.__defineGetter__("f", function() {
+ counter++;
+ return 84;
+ });
+ test(o, 84, counter + 1);
+
+ test({f: 42}, 42, counter);
+}
diff --git a/Source/JavaScriptCore/tests/stress/poly-getter-then-chain.js b/Source/JavaScriptCore/tests/stress/poly-getter-then-chain.js
new file mode 100644
index 0000000..8db9310
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/poly-getter-then-chain.js
@@ -0,0 +1,31 @@
+function Cons1() {
+}
+Cons1.prototype.f = 42;
+
+function Cons2() {
+}
+Cons2.prototype.__defineGetter__("f", function() {
+ counter++;
+ return 84;
+});
+
+function foo(o) {
+ return o.f;
+}
+
+noInline(foo);
+
+var counter = 0;
+
+function test(o, expected, expectedCount) {
+ var result = foo(o);
+ if (result != expected)
+ throw new Error("Bad result: " + result);
+ if (counter != expectedCount)
+ throw new Error("Bad counter value: " + counter);
+}
+
+for (var i = 0; i < 100000; ++i) {
+ test(new Cons2(), 84, counter + 1);
+ test(new Cons1(), 42, counter);
+}
diff --git a/Source/JavaScriptCore/tests/stress/poly-getter-then-self.js b/Source/JavaScriptCore/tests/stress/poly-getter-then-self.js
new file mode 100644
index 0000000..d90fb10
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/poly-getter-then-self.js
@@ -0,0 +1,26 @@
+function foo(o) {
+ return o.f;
+}
+
+noInline(foo);
+
+var counter = 0;
+
+function test(o, expected, expectedCount) {
+ var result = foo(o);
+ if (result != expected)
+ throw new Error("Bad result: " + result);
+ if (counter != expectedCount)
+ throw new Error("Bad counter value: " + counter);
+}
+
+for (var i = 0; i < 100000; ++i) {
+ var o = {};
+ o.__defineGetter__("f", function() {
+ counter++;
+ return 84;
+ });
+ test(o, 84, counter + 1);
+
+ test({f: 42}, 42, counter);
+}
diff --git a/Source/JavaScriptCore/tests/stress/poly-self-getter.js b/Source/JavaScriptCore/tests/stress/poly-self-getter.js
new file mode 100644
index 0000000..72d7d3a
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/poly-self-getter.js
@@ -0,0 +1,31 @@
+function foo(o) {
+ return o.f;
+}
+
+noInline(foo);
+
+var counter = 0;
+
+function test(o, expected, expectedCount) {
+ var result = foo(o);
+ if (result != expected)
+ throw new Error("Bad result: " + result);
+ if (counter != expectedCount)
+ throw new Error("Bad counter value: " + counter);
+}
+
+function getter() {
+ counter++;
+ return 84;
+}
+
+for (var i = 0; i < 100000; ++i) {
+ var o = {};
+ o.__defineGetter__("f", getter);
+ test(o, 84, counter + 1);
+
+ var o = {};
+ o.__defineGetter__("f", getter);
+ o.g = 54;
+ test(o, 84, counter + 1);
+}
diff --git a/Source/JavaScriptCore/tests/stress/poly-self-then-getter.js b/Source/JavaScriptCore/tests/stress/poly-self-then-getter.js
new file mode 100644
index 0000000..24310ac
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/poly-self-then-getter.js
@@ -0,0 +1,26 @@
+function foo(o) {
+ return o.f;
+}
+
+noInline(foo);
+
+var counter = 0;
+
+function test(o, expected, expectedCount) {
+ var result = foo(o);
+ if (result != expected)
+ throw new Error("Bad result: " + result);
+ if (counter != expectedCount)
+ throw new Error("Bad counter value: " + counter);
+}
+
+for (var i = 0; i < 100000; ++i) {
+ test({f: 42}, 42, counter);
+
+ var o = {};
+ o.__defineGetter__("f", function() {
+ counter++;
+ return 84;
+ });
+ test(o, 84, counter + 1);
+}
diff --git a/Source/JavaScriptCore/tests/stress/weird-getter-counter.js b/Source/JavaScriptCore/tests/stress/weird-getter-counter.js
new file mode 100644
index 0000000..ff94334
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/weird-getter-counter.js
@@ -0,0 +1,24 @@
+function foo(o) {
+ return o.f;
+}
+
+noInline(foo);
+
+var counter = 0;
+
+function test(o, expected, expectedCount) {
+ var result = foo(o);
+ if (result != expected)
+ throw new Error("Bad result: " + result);
+ if (counter != expectedCount)
+ throw new Error("Bad counter value: " + counter);
+}
+
+for (var i = 0; i < 100000; ++i) {
+ var o = {};
+ o.__defineGetter__("f", function() {
+ counter++;
+ return 84;
+ });
+ test(o, 84, counter + 1);
+}