Compress CodeOrigin into a single word in the common case
https://bugs.webkit.org/show_bug.cgi?id=195928
Reviewed by Saam Barati.
The trick is that pointers only take 48 bits on x86_64 in practice (and we can even use the bottom three bits of that thanks to alignment), and even fewer on ARM64.
So we can shove the bytecode index into the top bits almost all the time.
If the bytecodeIndex is too big to fit inline (>= 1<<16 in practice on x86_64), we just set one bit at the bottom and store a pointer to some out-of-line storage instead.
Finally, we represent an invalid bytecodeIndex (which used to be represented by UINT_MAX) by setting the second least significant bit.
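
To make the layout concrete, here is a minimal standalone sketch of the x86_64 encoding, assuming 48-bit, 8-byte-aligned pointers. The names (pack, unpack*) are illustrative only; the authoritative code is CodeOrigin::buildCompositeValue and the getters in CodeOrigin.h below:

    #include <cassert>
    #include <cstdint>

    constexpr unsigned freeBitsAtTop = 16;                // 28 on ARM64
    constexpr uintptr_t maskIsOutOfLine = 1;              // bit 0: payload is an OutOfLineCodeOrigin*
    constexpr uintptr_t maskIsBytecodeIndexInvalid = 2;   // bit 1: unset origin or hash-table deleted value
    constexpr uintptr_t maskPointer = 0x0000fffffffffff8; // bits 47..3: the 8-byte-aligned pointer

    // Common case only: the index fits in the top 16 bits. Otherwise the real
    // code allocates out-of-line storage and sets bit 0 instead.
    uintptr_t pack(void* inlineCallFrame, unsigned bytecodeIndex)
    {
        assert(bytecodeIndex < (1u << freeBitsAtTop));
        return (static_cast<uintptr_t>(bytecodeIndex) << (64 - freeBitsAtTop))
            | reinterpret_cast<uintptr_t>(inlineCallFrame);
    }

    unsigned unpackBytecodeIndex(uintptr_t v) { return v >> (64 - freeBitsAtTop); }
    void* unpackInlineCallFrame(uintptr_t v) { return reinterpret_cast<void*>(v & maskPointer); }
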
The patch looks very long, but most of it is just replacing direct accesses to inlineCallFrame and bytecodeIndex with the relevant getters.
End result: CodeOrigin in the common case moves from 16 bytes (8 for InlineCallFrame*, 4 for unsigned bytecodeIndex, 4 of padding) to 8 bytes.
For reference, while running JetStream2 we allocate more than 35M CodeOrigins. They won't all be alive at the same time, but it is still quite a lot of objects, so I am hoping for some small
improvement to RAMification from this work.
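(For scale: 8 bytes saved per CodeOrigin times 35M of them is roughly 280 MB of cumulative allocation savings over the run; the instantaneous win is necessarily much smaller, since only a fraction of them are live at any point.)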
The one slightly tricky part is that we must implement copy and move assignment operators and constructors to make sure that any out-of-line storage belongs to a single CodeOrigin and is destroyed exactly once.
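
As an illustration of that ownership rule (a hypothetical snippet assuming a 64-bit build, not code from the patch):

    CodeOrigin a(1 << 20);     // bytecodeIndex >= 1<<16, so 'a' allocates an OutOfLineCodeOrigin
    CodeOrigin b = a;          // copy ctor rebuilds the composite, allocating a fresh out-of-line copy
    CodeOrigin c = WTFMove(a); // move ctor steals a's storage and leaves 'a' zeroed
    // b and c each own exactly one OutOfLineCodeOrigin. With compiler-generated
    // copies, a, b, and c would all hold the same pointer and the destructor
    // would delete it three times.
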
* bytecode/ByValInfo.h:
* bytecode/CallLinkStatus.cpp:
(JSC::CallLinkStatus::computeFor):
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::globalObjectFor):
(JSC::CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize):
(JSC::CodeBlock::bytecodeOffsetFromCallSiteIndex):
* bytecode/CodeOrigin.cpp:
(JSC::CodeOrigin::inlineDepth const):
(JSC::CodeOrigin::isApproximatelyEqualTo const):
(JSC::CodeOrigin::approximateHash const):
(JSC::CodeOrigin::inlineStack const):
(JSC::CodeOrigin::codeOriginOwner const):
(JSC::CodeOrigin::stackOffset const):
(JSC::CodeOrigin::dump const):
(JSC::CodeOrigin::inlineDepthForCallFrame): Deleted.
* bytecode/CodeOrigin.h:
(JSC::OutOfLineCodeOrigin::OutOfLineCodeOrigin):
(JSC::CodeOrigin::CodeOrigin):
(JSC::CodeOrigin::~CodeOrigin):
(JSC::CodeOrigin::isSet const):
(JSC::CodeOrigin::isHashTableDeletedValue const):
(JSC::CodeOrigin::bytecodeIndex const):
(JSC::CodeOrigin::inlineCallFrame const):
(JSC::CodeOrigin::buildCompositeValue):
(JSC::CodeOrigin::hash const):
(JSC::CodeOrigin::operator== const):
(JSC::CodeOrigin::exitingInlineKind const): Deleted.
* bytecode/DeferredSourceDump.h:
* bytecode/GetByIdStatus.cpp:
(JSC::GetByIdStatus::computeForStubInfo):
(JSC::GetByIdStatus::computeFor):
* bytecode/ICStatusMap.cpp:
(JSC::ICStatusContext::isInlined const):
* bytecode/InByIdStatus.cpp:
(JSC::InByIdStatus::computeFor):
(JSC::InByIdStatus::computeForStubInfo):
* bytecode/InlineCallFrame.cpp:
(JSC::InlineCallFrame::dumpInContext const):
* bytecode/InlineCallFrame.h:
(JSC::InlineCallFrame::computeCallerSkippingTailCalls):
(JSC::InlineCallFrame::getCallerInlineFrameSkippingTailCalls):
(JSC::baselineCodeBlockForOriginAndBaselineCodeBlock):
(JSC::CodeOrigin::walkUpInlineStack):
* bytecode/InstanceOfStatus.h:
* bytecode/PutByIdStatus.cpp:
(JSC::PutByIdStatus::computeForStubInfo):
(JSC::PutByIdStatus::computeFor):
* dfg/DFGAbstractInterpreterInlines.h:
(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
* dfg/DFGArgumentsEliminationPhase.cpp:
* dfg/DFGArgumentsUtilities.cpp:
(JSC::DFG::argumentsInvolveStackSlot):
(JSC::DFG::emitCodeToGetArgumentsArrayLength):
* dfg/DFGArrayMode.h:
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::injectLazyOperandSpeculation):
(JSC::DFG::ByteCodeParser::setLocal):
(JSC::DFG::ByteCodeParser::setArgument):
(JSC::DFG::ByteCodeParser::flushForTerminalImpl):
(JSC::DFG::ByteCodeParser::getPredictionWithoutOSRExit):
(JSC::DFG::ByteCodeParser::parseBlock):
(JSC::DFG::ByteCodeParser::parseCodeBlock):
(JSC::DFG::ByteCodeParser::handlePutByVal):
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGConstantFoldingPhase.cpp:
(JSC::DFG::ConstantFoldingPhase::foldConstants):
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::attemptToMakeGetArrayLength):
* dfg/DFGForAllKills.h:
(JSC::DFG::forAllKilledOperands):
* dfg/DFGGraph.cpp:
(JSC::DFG::Graph::dumpCodeOrigin):
(JSC::DFG::Graph::dump):
(JSC::DFG::Graph::isLiveInBytecode):
(JSC::DFG::Graph::methodOfGettingAValueProfileFor):
(JSC::DFG::Graph::willCatchExceptionInMachineFrame):
* dfg/DFGGraph.h:
(JSC::DFG::Graph::executableFor):
(JSC::DFG::Graph::isStrictModeFor):
(JSC::DFG::Graph::hasExitSite):
(JSC::DFG::Graph::forAllLocalsLiveInBytecode):
* dfg/DFGLiveCatchVariablePreservationPhase.cpp:
(JSC::DFG::LiveCatchVariablePreservationPhase::handleBlockForTryCatch):
* dfg/DFGMinifiedNode.cpp:
(JSC::DFG::MinifiedNode::fromNode):
* dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
(JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
* dfg/DFGOSRExit.cpp:
(JSC::DFG::OSRExit::executeOSRExit):
(JSC::DFG::reifyInlinedCallFrames):
(JSC::DFG::adjustAndJumpToTarget):
(JSC::DFG::printOSRExit):
(JSC::DFG::OSRExit::compileExit):
* dfg/DFGOSRExitBase.cpp:
(JSC::DFG::OSRExitBase::considerAddingAsFrequentExitSiteSlow):
* dfg/DFGOSRExitCompilerCommon.cpp:
(JSC::DFG::handleExitCounts):
(JSC::DFG::reifyInlinedCallFrames):
(JSC::DFG::adjustAndJumpToTarget):
* dfg/DFGOSRExitPreparation.cpp:
(JSC::DFG::prepareCodeOriginForOSRExit):
* dfg/DFGObjectAllocationSinkingPhase.cpp:
* dfg/DFGOperations.cpp:
* dfg/DFGPreciseLocalClobberize.h:
(JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::emitGetLength):
(JSC::DFG::SpeculativeJIT::emitGetCallee):
(JSC::DFG::SpeculativeJIT::compileCurrentBlock):
(JSC::DFG::SpeculativeJIT::compileValueAdd):
(JSC::DFG::SpeculativeJIT::compileValueSub):
(JSC::DFG::SpeculativeJIT::compileValueNegate):
(JSC::DFG::SpeculativeJIT::compileValueMul):
(JSC::DFG::SpeculativeJIT::compileForwardVarargs):
(JSC::DFG::SpeculativeJIT::compileCreateDirectArguments):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::emitCall):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGTierUpCheckInjectionPhase.cpp:
(JSC::DFG::TierUpCheckInjectionPhase::run):
(JSC::DFG::TierUpCheckInjectionPhase::canOSREnterAtLoopHint):
(JSC::DFG::TierUpCheckInjectionPhase::buildNaturalLoopToLoopHintMap):
* dfg/DFGTypeCheckHoistingPhase.cpp:
(JSC::DFG::TypeCheckHoistingPhase::run):
* dfg/DFGVariableEventStream.cpp:
(JSC::DFG::VariableEventStream::reconstruct const):
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::compileValueAdd):
(JSC::FTL::DFG::LowerDFGToB3::compileValueSub):
(JSC::FTL::DFG::LowerDFGToB3::compileValueMul):
(JSC::FTL::DFG::LowerDFGToB3::compileArithAddOrSub):
(JSC::FTL::DFG::LowerDFGToB3::compileValueNegate):
(JSC::FTL::DFG::LowerDFGToB3::compileGetMyArgumentByVal):
(JSC::FTL::DFG::LowerDFGToB3::compileNewArrayWithSpread):
(JSC::FTL::DFG::LowerDFGToB3::compileSpread):
(JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargsSpread):
(JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargs):
(JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargs):
(JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargsWithSpread):
(JSC::FTL::DFG::LowerDFGToB3::getArgumentsLength):
(JSC::FTL::DFG::LowerDFGToB3::getCurrentCallee):
(JSC::FTL::DFG::LowerDFGToB3::getArgumentsStart):
(JSC::FTL::DFG::LowerDFGToB3::codeOriginDescriptionOfCallSite const):
* ftl/FTLOSRExitCompiler.cpp:
(JSC::FTL::compileStub):
* ftl/FTLOperations.cpp:
(JSC::FTL::operationMaterializeObjectInOSR):
* interpreter/CallFrame.cpp:
(JSC::CallFrame::bytecodeOffset):
* interpreter/StackVisitor.cpp:
(JSC::StackVisitor::unwindToMachineCodeBlockFrame):
(JSC::StackVisitor::readFrame):
(JSC::StackVisitor::readNonInlinedFrame):
(JSC::inlinedFrameOffset):
(JSC::StackVisitor::readInlinedFrame):
* interpreter/StackVisitor.h:
* jit/AssemblyHelpers.cpp:
(JSC::AssemblyHelpers::executableFor):
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::isStrictModeFor):
(JSC::AssemblyHelpers::argumentsStart):
(JSC::AssemblyHelpers::argumentCount):
* jit/PCToCodeOriginMap.cpp:
(JSC::PCToCodeOriginMap::PCToCodeOriginMap):
(JSC::PCToCodeOriginMap::findPC const):
* profiler/ProfilerOriginStack.cpp:
(JSC::Profiler::OriginStack::OriginStack):
* profiler/ProfilerOriginStack.h:
* runtime/ErrorInstance.cpp:
(JSC::appendSourceToError):
* runtime/SamplingProfiler.cpp:
(JSC::SamplingProfiler::processUnverifiedStackTraces):
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@243232 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index 11fa9b6..7417ef1 100644
--- a/Source/JavaScriptCore/ChangeLog
+++ b/Source/JavaScriptCore/ChangeLog
@@ -1,3 +1,200 @@
+2019-03-20 Robin Morisset <rmorisset@apple.com>
+
+ Compress CodeOrigin into a single word in the common case
+ https://bugs.webkit.org/show_bug.cgi?id=195928
+
+ Reviewed by Saam Barati.
+
+ The trick is that pointers only take 48 bits on x86_64 in practice (and we can even use the bottom three bits of that thanks to alignment), and even fewer on ARM64.
+ So we can shove the bytecode index into the top bits almost all the time.
+ If the bytecodeIndex is too big to fit inline (>= 1<<16 in practice on x86_64), we just set one bit at the bottom and store a pointer to some out-of-line storage instead.
+ Finally, we represent an invalid bytecodeIndex (which used to be represented by UINT_MAX) by setting the second least significant bit.
+
+ The patch looks very long, but most of it is just replacing direct accesses to inlineCallFrame and bytecodeIndex with the relevant getters.
+
+ End result: CodeOrigin in the common case moves from 16 bytes (8 for InlineCallFrame*, 4 for unsigned bytecodeIndex, 4 of padding) to 8 bytes.
+ For reference, while running JetStream2 we allocate more than 35M CodeOrigins. They won't all be alive at the same time, but it is still quite a lot of objects, so I am hoping for some small
+ improvement to RAMification from this work.
+
+ The one slightly tricky part is that we must implement copy and move assignment operators and constructors to make sure that any out-of-line storage belongs to a single CodeOrigin and is destroyed exactly once.
+
+ * bytecode/ByValInfo.h:
+ * bytecode/CallLinkStatus.cpp:
+ (JSC::CallLinkStatus::computeFor):
+ * bytecode/CodeBlock.cpp:
+ (JSC::CodeBlock::globalObjectFor):
+ (JSC::CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize):
+ (JSC::CodeBlock::bytecodeOffsetFromCallSiteIndex):
+ * bytecode/CodeOrigin.cpp:
+ (JSC::CodeOrigin::inlineDepth const):
+ (JSC::CodeOrigin::isApproximatelyEqualTo const):
+ (JSC::CodeOrigin::approximateHash const):
+ (JSC::CodeOrigin::inlineStack const):
+ (JSC::CodeOrigin::codeOriginOwner const):
+ (JSC::CodeOrigin::stackOffset const):
+ (JSC::CodeOrigin::dump const):
+ (JSC::CodeOrigin::inlineDepthForCallFrame): Deleted.
+ * bytecode/CodeOrigin.h:
+ (JSC::OutOfLineCodeOrigin::OutOfLineCodeOrigin):
+ (JSC::CodeOrigin::CodeOrigin):
+ (JSC::CodeOrigin::~CodeOrigin):
+ (JSC::CodeOrigin::isSet const):
+ (JSC::CodeOrigin::isHashTableDeletedValue const):
+ (JSC::CodeOrigin::bytecodeIndex const):
+ (JSC::CodeOrigin::inlineCallFrame const):
+ (JSC::CodeOrigin::buildCompositeValue):
+ (JSC::CodeOrigin::hash const):
+ (JSC::CodeOrigin::operator== const):
+ (JSC::CodeOrigin::exitingInlineKind const): Deleted.
+ * bytecode/DeferredSourceDump.h:
+ * bytecode/GetByIdStatus.cpp:
+ (JSC::GetByIdStatus::computeForStubInfo):
+ (JSC::GetByIdStatus::computeFor):
+ * bytecode/ICStatusMap.cpp:
+ (JSC::ICStatusContext::isInlined const):
+ * bytecode/InByIdStatus.cpp:
+ (JSC::InByIdStatus::computeFor):
+ (JSC::InByIdStatus::computeForStubInfo):
+ * bytecode/InlineCallFrame.cpp:
+ (JSC::InlineCallFrame::dumpInContext const):
+ * bytecode/InlineCallFrame.h:
+ (JSC::InlineCallFrame::computeCallerSkippingTailCalls):
+ (JSC::InlineCallFrame::getCallerInlineFrameSkippingTailCalls):
+ (JSC::baselineCodeBlockForOriginAndBaselineCodeBlock):
+ (JSC::CodeOrigin::walkUpInlineStack):
+ * bytecode/InstanceOfStatus.h:
+ * bytecode/PutByIdStatus.cpp:
+ (JSC::PutByIdStatus::computeForStubInfo):
+ (JSC::PutByIdStatus::computeFor):
+ * dfg/DFGAbstractInterpreterInlines.h:
+ (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
+ * dfg/DFGArgumentsEliminationPhase.cpp:
+ * dfg/DFGArgumentsUtilities.cpp:
+ (JSC::DFG::argumentsInvolveStackSlot):
+ (JSC::DFG::emitCodeToGetArgumentsArrayLength):
+ * dfg/DFGArrayMode.h:
+ * dfg/DFGByteCodeParser.cpp:
+ (JSC::DFG::ByteCodeParser::injectLazyOperandSpeculation):
+ (JSC::DFG::ByteCodeParser::setLocal):
+ (JSC::DFG::ByteCodeParser::setArgument):
+ (JSC::DFG::ByteCodeParser::flushForTerminalImpl):
+ (JSC::DFG::ByteCodeParser::getPredictionWithoutOSRExit):
+ (JSC::DFG::ByteCodeParser::parseBlock):
+ (JSC::DFG::ByteCodeParser::parseCodeBlock):
+ (JSC::DFG::ByteCodeParser::handlePutByVal):
+ * dfg/DFGClobberize.h:
+ (JSC::DFG::clobberize):
+ * dfg/DFGConstantFoldingPhase.cpp:
+ (JSC::DFG::ConstantFoldingPhase::foldConstants):
+ * dfg/DFGFixupPhase.cpp:
+ (JSC::DFG::FixupPhase::attemptToMakeGetArrayLength):
+ * dfg/DFGForAllKills.h:
+ (JSC::DFG::forAllKilledOperands):
+ * dfg/DFGGraph.cpp:
+ (JSC::DFG::Graph::dumpCodeOrigin):
+ (JSC::DFG::Graph::dump):
+ (JSC::DFG::Graph::isLiveInBytecode):
+ (JSC::DFG::Graph::methodOfGettingAValueProfileFor):
+ (JSC::DFG::Graph::willCatchExceptionInMachineFrame):
+ * dfg/DFGGraph.h:
+ (JSC::DFG::Graph::executableFor):
+ (JSC::DFG::Graph::isStrictModeFor):
+ (JSC::DFG::Graph::hasExitSite):
+ (JSC::DFG::Graph::forAllLocalsLiveInBytecode):
+ * dfg/DFGLiveCatchVariablePreservationPhase.cpp:
+ (JSC::DFG::LiveCatchVariablePreservationPhase::handleBlockForTryCatch):
+ * dfg/DFGMinifiedNode.cpp:
+ (JSC::DFG::MinifiedNode::fromNode):
+ * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
+ (JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
+ * dfg/DFGOSRExit.cpp:
+ (JSC::DFG::OSRExit::executeOSRExit):
+ (JSC::DFG::reifyInlinedCallFrames):
+ (JSC::DFG::adjustAndJumpToTarget):
+ (JSC::DFG::printOSRExit):
+ (JSC::DFG::OSRExit::compileExit):
+ * dfg/DFGOSRExitBase.cpp:
+ (JSC::DFG::OSRExitBase::considerAddingAsFrequentExitSiteSlow):
+ * dfg/DFGOSRExitCompilerCommon.cpp:
+ (JSC::DFG::handleExitCounts):
+ (JSC::DFG::reifyInlinedCallFrames):
+ (JSC::DFG::adjustAndJumpToTarget):
+ * dfg/DFGOSRExitPreparation.cpp:
+ (JSC::DFG::prepareCodeOriginForOSRExit):
+ * dfg/DFGObjectAllocationSinkingPhase.cpp:
+ * dfg/DFGOperations.cpp:
+ * dfg/DFGPreciseLocalClobberize.h:
+ (JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
+ * dfg/DFGSpeculativeJIT.cpp:
+ (JSC::DFG::SpeculativeJIT::emitGetLength):
+ (JSC::DFG::SpeculativeJIT::emitGetCallee):
+ (JSC::DFG::SpeculativeJIT::compileCurrentBlock):
+ (JSC::DFG::SpeculativeJIT::compileValueAdd):
+ (JSC::DFG::SpeculativeJIT::compileValueSub):
+ (JSC::DFG::SpeculativeJIT::compileValueNegate):
+ (JSC::DFG::SpeculativeJIT::compileValueMul):
+ (JSC::DFG::SpeculativeJIT::compileForwardVarargs):
+ (JSC::DFG::SpeculativeJIT::compileCreateDirectArguments):
+ * dfg/DFGSpeculativeJIT32_64.cpp:
+ (JSC::DFG::SpeculativeJIT::emitCall):
+ * dfg/DFGSpeculativeJIT64.cpp:
+ (JSC::DFG::SpeculativeJIT::emitCall):
+ (JSC::DFG::SpeculativeJIT::compile):
+ * dfg/DFGTierUpCheckInjectionPhase.cpp:
+ (JSC::DFG::TierUpCheckInjectionPhase::run):
+ (JSC::DFG::TierUpCheckInjectionPhase::canOSREnterAtLoopHint):
+ (JSC::DFG::TierUpCheckInjectionPhase::buildNaturalLoopToLoopHintMap):
+ * dfg/DFGTypeCheckHoistingPhase.cpp:
+ (JSC::DFG::TypeCheckHoistingPhase::run):
+ * dfg/DFGVariableEventStream.cpp:
+ (JSC::DFG::VariableEventStream::reconstruct const):
+ * ftl/FTLLowerDFGToB3.cpp:
+ (JSC::FTL::DFG::LowerDFGToB3::compileValueAdd):
+ (JSC::FTL::DFG::LowerDFGToB3::compileValueSub):
+ (JSC::FTL::DFG::LowerDFGToB3::compileValueMul):
+ (JSC::FTL::DFG::LowerDFGToB3::compileArithAddOrSub):
+ (JSC::FTL::DFG::LowerDFGToB3::compileValueNegate):
+ (JSC::FTL::DFG::LowerDFGToB3::compileGetMyArgumentByVal):
+ (JSC::FTL::DFG::LowerDFGToB3::compileNewArrayWithSpread):
+ (JSC::FTL::DFG::LowerDFGToB3::compileSpread):
+ (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargsSpread):
+ (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargs):
+ (JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargs):
+ (JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargsWithSpread):
+ (JSC::FTL::DFG::LowerDFGToB3::getArgumentsLength):
+ (JSC::FTL::DFG::LowerDFGToB3::getCurrentCallee):
+ (JSC::FTL::DFG::LowerDFGToB3::getArgumentsStart):
+ (JSC::FTL::DFG::LowerDFGToB3::codeOriginDescriptionOfCallSite const):
+ * ftl/FTLOSRExitCompiler.cpp:
+ (JSC::FTL::compileStub):
+ * ftl/FTLOperations.cpp:
+ (JSC::FTL::operationMaterializeObjectInOSR):
+ * interpreter/CallFrame.cpp:
+ (JSC::CallFrame::bytecodeOffset):
+ * interpreter/StackVisitor.cpp:
+ (JSC::StackVisitor::unwindToMachineCodeBlockFrame):
+ (JSC::StackVisitor::readFrame):
+ (JSC::StackVisitor::readNonInlinedFrame):
+ (JSC::inlinedFrameOffset):
+ (JSC::StackVisitor::readInlinedFrame):
+ * interpreter/StackVisitor.h:
+ * jit/AssemblyHelpers.cpp:
+ (JSC::AssemblyHelpers::executableFor):
+ * jit/AssemblyHelpers.h:
+ (JSC::AssemblyHelpers::isStrictModeFor):
+ (JSC::AssemblyHelpers::argumentsStart):
+ (JSC::AssemblyHelpers::argumentCount):
+ * jit/PCToCodeOriginMap.cpp:
+ (JSC::PCToCodeOriginMap::PCToCodeOriginMap):
+ (JSC::PCToCodeOriginMap::findPC const):
+ * profiler/ProfilerOriginStack.cpp:
+ (JSC::Profiler::OriginStack::OriginStack):
+ * profiler/ProfilerOriginStack.h:
+ * runtime/ErrorInstance.cpp:
+ (JSC::appendSourceToError):
+ * runtime/SamplingProfiler.cpp:
+ (JSC::SamplingProfiler::processUnverifiedStackTraces):
+
2019-03-20 Devin Rousso <drousso@apple.com>
Web Inspector: Search: allow DOM searches to be case sensitive
diff --git a/Source/JavaScriptCore/bytecode/ByValInfo.h b/Source/JavaScriptCore/bytecode/ByValInfo.h
index a75641a..5df21ff 100644
--- a/Source/JavaScriptCore/bytecode/ByValInfo.h
+++ b/Source/JavaScriptCore/bytecode/ByValInfo.h
@@ -27,7 +27,6 @@
#include "ClassInfo.h"
#include "CodeLocation.h"
-#include "CodeOrigin.h"
#include "IndexingType.h"
#include "JITStubRoutine.h"
#include "Structure.h"
diff --git a/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp b/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
index a67c73e..157a271 100644
--- a/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
+++ b/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
@@ -302,7 +302,7 @@
{
if (CallLinkStatusInternal::verbose)
dataLog("Figuring out call profiling for ", codeOrigin, "\n");
- ExitSiteData exitSiteData = computeExitSiteData(profiledBlock, codeOrigin.bytecodeIndex);
+ ExitSiteData exitSiteData = computeExitSiteData(profiledBlock, codeOrigin.bytecodeIndex());
if (CallLinkStatusInternal::verbose) {
dataLog("takesSlowPath = ", exitSiteData.takesSlowPath, "\n");
dataLog("badFunction = ", exitSiteData.badFunction, "\n");
@@ -346,7 +346,7 @@
auto bless = [&] (CallLinkStatus& result) {
if (!context->isInlined(codeOrigin))
- result.merge(computeFor(profiledBlock, codeOrigin.bytecodeIndex, baselineMap, exitSiteData));
+ result.merge(computeFor(profiledBlock, codeOrigin.bytecodeIndex(), baselineMap, exitSiteData));
};
auto checkInfo = [&] () -> CallLinkStatus {
@@ -393,7 +393,7 @@
return result;
}
- return computeFor(profiledBlock, codeOrigin.bytecodeIndex, baselineMap, exitSiteData);
+ return computeFor(profiledBlock, codeOrigin.bytecodeIndex(), baselineMap, exitSiteData);
}
#endif
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.cpp b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
index fb16b48..6289a56 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
@@ -2056,9 +2056,10 @@
JSGlobalObject* CodeBlock::globalObjectFor(CodeOrigin codeOrigin)
{
- if (!codeOrigin.inlineCallFrame)
+ auto* inlineCallFrame = codeOrigin.inlineCallFrame();
+ if (!inlineCallFrame)
return globalObject();
- return codeOrigin.inlineCallFrame->baselineCodeBlock->globalObject();
+ return inlineCallFrame->baselineCodeBlock->globalObject();
}
class RecursionCheckFunctor {
@@ -2403,7 +2404,7 @@
// Otherwise, we should use the normal reoptimization trigger.
bool didTryToEnterInLoop = false;
- for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
+ for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame()) {
if (inlineCallFrame->baselineCodeBlock->ownerExecutable()->didTryToEnterInLoop()) {
didTryToEnterInLoop = true;
break;
@@ -3175,7 +3176,7 @@
#if ENABLE(DFG_JIT)
RELEASE_ASSERT(canGetCodeOrigin(callSiteIndex));
CodeOrigin origin = codeOrigin(callSiteIndex);
- bytecodeOffset = origin.bytecodeIndex;
+ bytecodeOffset = origin.bytecodeIndex();
#else
RELEASE_ASSERT_NOT_REACHED();
#endif
diff --git a/Source/JavaScriptCore/bytecode/CodeOrigin.cpp b/Source/JavaScriptCore/bytecode/CodeOrigin.cpp
index 05e79da..2de8048 100644
--- a/Source/JavaScriptCore/bytecode/CodeOrigin.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeOrigin.cpp
@@ -33,17 +33,12 @@
namespace JSC {
-unsigned CodeOrigin::inlineDepthForCallFrame(InlineCallFrame* inlineCallFrame)
-{
- unsigned result = 1;
- for (InlineCallFrame* current = inlineCallFrame; current; current = current->directCaller.inlineCallFrame)
- result++;
- return result;
-}
-
unsigned CodeOrigin::inlineDepth() const
{
- return inlineDepthForCallFrame(inlineCallFrame);
+ unsigned result = 1;
+ for (InlineCallFrame* current = inlineCallFrame(); current; current = current->directCaller.inlineCallFrame())
+ result++;
+ return result;
}
bool CodeOrigin::isApproximatelyEqualTo(const CodeOrigin& other, InlineCallFrame* terminal) const
@@ -65,22 +60,24 @@
ASSERT(a.isSet());
ASSERT(b.isSet());
- if (a.bytecodeIndex != b.bytecodeIndex)
+ if (a.bytecodeIndex() != b.bytecodeIndex())
return false;
-
- bool aHasInlineCallFrame = !!a.inlineCallFrame && a.inlineCallFrame != terminal;
- bool bHasInlineCallFrame = !!b.inlineCallFrame;
+
+ auto* aInlineCallFrame = a.inlineCallFrame();
+ auto* bInlineCallFrame = b.inlineCallFrame();
+ bool aHasInlineCallFrame = !!aInlineCallFrame && aInlineCallFrame != terminal;
+ bool bHasInlineCallFrame = !!bInlineCallFrame;
if (aHasInlineCallFrame != bHasInlineCallFrame)
return false;
if (!aHasInlineCallFrame)
return true;
- if (a.inlineCallFrame->baselineCodeBlock.get() != b.inlineCallFrame->baselineCodeBlock.get())
+ if (aInlineCallFrame->baselineCodeBlock.get() != bInlineCallFrame->baselineCodeBlock.get())
return false;
- a = a.inlineCallFrame->directCaller;
- b = b.inlineCallFrame->directCaller;
+ a = aInlineCallFrame->directCaller;
+ b = bInlineCallFrame->directCaller;
}
}
@@ -94,17 +91,19 @@
unsigned result = 2;
CodeOrigin codeOrigin = *this;
for (;;) {
- result += codeOrigin.bytecodeIndex;
-
- if (!codeOrigin.inlineCallFrame)
+ result += codeOrigin.bytecodeIndex();
+
+ auto* inlineCallFrame = codeOrigin.inlineCallFrame();
+
+ if (!inlineCallFrame)
return result;
- if (codeOrigin.inlineCallFrame == terminal)
+ if (inlineCallFrame == terminal)
return result;
- result += WTF::PtrHash<JSCell*>::hash(codeOrigin.inlineCallFrame->baselineCodeBlock.get());
+ result += WTF::PtrHash<JSCell*>::hash(inlineCallFrame->baselineCodeBlock.get());
- codeOrigin = codeOrigin.inlineCallFrame->directCaller;
+ codeOrigin = inlineCallFrame->directCaller;
}
}
@@ -113,24 +112,25 @@
Vector<CodeOrigin> result(inlineDepth());
result.last() = *this;
unsigned index = result.size() - 2;
- for (InlineCallFrame* current = inlineCallFrame; current; current = current->directCaller.inlineCallFrame)
+ for (InlineCallFrame* current = inlineCallFrame(); current; current = current->directCaller.inlineCallFrame())
result[index--] = current->directCaller;
- RELEASE_ASSERT(!result[0].inlineCallFrame);
+ RELEASE_ASSERT(!result[0].inlineCallFrame());
return result;
}
CodeBlock* CodeOrigin::codeOriginOwner() const
{
+ auto* inlineCallFrame = this->inlineCallFrame();
if (!inlineCallFrame)
- return 0;
+ return nullptr;
return inlineCallFrame->baselineCodeBlock.get();
}
int CodeOrigin::stackOffset() const
{
+ auto* inlineCallFrame = this->inlineCallFrame();
if (!inlineCallFrame)
return 0;
-
return inlineCallFrame->stackOffset;
}
@@ -146,13 +146,13 @@
if (i)
out.print(" --> ");
- if (InlineCallFrame* frame = stack[i].inlineCallFrame) {
+ if (InlineCallFrame* frame = stack[i].inlineCallFrame()) {
out.print(frame->briefFunctionInformation(), ":<", RawPointer(frame->baselineCodeBlock.get()), "> ");
if (frame->isClosureCall)
out.print("(closure) ");
}
- out.print("bc#", stack[i].bytecodeIndex);
+ out.print("bc#", stack[i].bytecodeIndex());
}
}
diff --git a/Source/JavaScriptCore/bytecode/CodeOrigin.h b/Source/JavaScriptCore/bytecode/CodeOrigin.h
index 19d9cd0..804d351 100644
--- a/Source/JavaScriptCore/bytecode/CodeOrigin.h
+++ b/Source/JavaScriptCore/bytecode/CodeOrigin.h
@@ -25,8 +25,6 @@
#pragma once
-#include "CodeBlockHash.h"
-#include "ExitingInlineKind.h"
#include <limits.h>
#include <wtf/HashMap.h>
#include <wtf/PrintStream.h>
@@ -39,42 +37,105 @@
struct DumpContext;
struct InlineCallFrame;
-struct CodeOrigin {
- static const unsigned invalidBytecodeIndex = UINT_MAX;
-
- // Bytecode offset that you'd use to re-execute this instruction, and the
- // bytecode index of the bytecode instruction that produces some result that
- // you're interested in (used for mapping Nodes whose values you're using
- // to bytecode instructions that have the appropriate value profile).
- unsigned bytecodeIndex;
-
- InlineCallFrame* inlineCallFrame;
-
+class CodeOrigin {
+public:
CodeOrigin()
- : bytecodeIndex(invalidBytecodeIndex)
- , inlineCallFrame(0)
+#if CPU(ADDRESS64)
+ : m_compositeValue(buildCompositeValue(nullptr, s_invalidBytecodeIndex))
+#else
+ : m_bytecodeIndex(s_invalidBytecodeIndex)
+ , m_inlineCallFrame(nullptr)
+#endif
{
}
CodeOrigin(WTF::HashTableDeletedValueType)
- : bytecodeIndex(invalidBytecodeIndex)
- , inlineCallFrame(deletedMarker())
+#if CPU(ADDRESS64)
+ : m_compositeValue(buildCompositeValue(deletedMarker(), s_invalidBytecodeIndex))
+#else
+ : m_bytecodeIndex(s_invalidBytecodeIndex)
+ , m_inlineCallFrame(deletedMarker())
+#endif
{
}
- explicit CodeOrigin(unsigned bytecodeIndex, InlineCallFrame* inlineCallFrame = 0)
- : bytecodeIndex(bytecodeIndex)
- , inlineCallFrame(inlineCallFrame)
+ explicit CodeOrigin(unsigned bytecodeIndex, InlineCallFrame* inlineCallFrame = nullptr)
+#if CPU(ADDRESS64)
+ : m_compositeValue(buildCompositeValue(inlineCallFrame, bytecodeIndex))
+#else
+ : m_bytecodeIndex(bytecodeIndex)
+ , m_inlineCallFrame(inlineCallFrame)
+#endif
{
- ASSERT(bytecodeIndex < invalidBytecodeIndex);
+ ASSERT(bytecodeIndex < s_invalidBytecodeIndex);
+#if CPU(ADDRESS64)
+ ASSERT(!(bitwise_cast<uintptr_t>(inlineCallFrame) & ~s_maskCompositeValueForPointer));
+#endif
}
- bool isSet() const { return bytecodeIndex != invalidBytecodeIndex; }
+#if CPU(ADDRESS64)
+ CodeOrigin& operator=(const CodeOrigin& other)
+ {
+ if (this != &other) {
+ if (UNLIKELY(isOutOfLine()))
+ delete outOfLineCodeOrigin();
+
+ if (UNLIKELY(other.isOutOfLine()))
+ m_compositeValue = buildCompositeValue(other.inlineCallFrame(), other.bytecodeIndex());
+ else
+ m_compositeValue = other.m_compositeValue;
+ }
+ return *this;
+ }
+ CodeOrigin& operator=(CodeOrigin&& other)
+ {
+ if (this != &other) {
+ if (UNLIKELY(isOutOfLine()))
+ delete outOfLineCodeOrigin();
+
+ m_compositeValue = std::exchange(other.m_compositeValue, 0);
+ }
+ return *this;
+ }
+
+ CodeOrigin(const CodeOrigin& other)
+ {
+ // We don't use the member initializer list because it would not let us optimize the common case where there is no out-of-line storage
+ // (in which case we don't have to extract the components of the composite value just to reassemble it).
+ if (UNLIKELY(other.isOutOfLine()))
+ m_compositeValue = buildCompositeValue(other.inlineCallFrame(), other.bytecodeIndex());
+ else
+ m_compositeValue = other.m_compositeValue;
+ }
+ CodeOrigin(CodeOrigin&& other)
+ : m_compositeValue(std::exchange(other.m_compositeValue, 0))
+ {
+ }
+
+ ~CodeOrigin()
+ {
+ if (UNLIKELY(isOutOfLine()))
+ delete outOfLineCodeOrigin();
+ }
+#endif
+
+ bool isSet() const
+ {
+#if CPU(ADDRESS64)
+ return !(m_compositeValue & s_maskIsBytecodeIndexInvalid);
+#else
+ return m_bytecodeIndex != s_invalidBytecodeIndex;
+#endif
+ }
explicit operator bool() const { return isSet(); }
bool isHashTableDeletedValue() const
{
- return bytecodeIndex == invalidBytecodeIndex && !!inlineCallFrame;
+#if CPU(ADDRESS64)
+ return !isSet() && (m_compositeValue & s_maskCompositeValueForPointer);
+#else
+ return m_bytecodeIndex == s_invalidBytecodeIndex && !!m_inlineCallFrame;
+#endif
}
// The inline depth is the depth of the inline stack, so 1 = not inlined,
@@ -87,13 +148,6 @@
int stackOffset() const;
- static unsigned inlineDepthForCallFrame(InlineCallFrame*);
-
- ExitingInlineKind exitingInlineKind() const
- {
- return inlineCallFrame ? ExitFromInlined : ExitFromNotInlined;
- }
-
unsigned hash() const;
bool operator==(const CodeOrigin& other) const;
bool operator!=(const CodeOrigin& other) const { return !(*this == other); }
@@ -112,24 +166,122 @@
JS_EXPORT_PRIVATE void dump(PrintStream&) const;
void dumpInContext(PrintStream&, DumpContext*) const;
-
+
+ unsigned bytecodeIndex() const
+ {
+#if CPU(ADDRESS64)
+ if (!isSet())
+ return s_invalidBytecodeIndex;
+ if (UNLIKELY(isOutOfLine()))
+ return outOfLineCodeOrigin()->bytecodeIndex;
+ return m_compositeValue >> (64 - s_freeBitsAtTop);
+#else
+ return m_bytecodeIndex;
+#endif
+ }
+
+ InlineCallFrame* inlineCallFrame() const
+ {
+#if CPU(ADDRESS64)
+ if (UNLIKELY(isOutOfLine()))
+ return outOfLineCodeOrigin()->inlineCallFrame;
+ return bitwise_cast<InlineCallFrame*>(m_compositeValue & s_maskCompositeValueForPointer);
+#else
+ return m_inlineCallFrame;
+#endif
+ }
+
private:
+ static constexpr unsigned s_invalidBytecodeIndex = UINT_MAX;
+
+#if CPU(ADDRESS64)
+ static constexpr uintptr_t s_maskIsOutOfLine = 1;
+ static constexpr uintptr_t s_maskIsBytecodeIndexInvalid = 2;
+
+ struct OutOfLineCodeOrigin {
+ WTF_MAKE_FAST_ALLOCATED;
+ public:
+ InlineCallFrame* inlineCallFrame;
+ unsigned bytecodeIndex;
+
+ OutOfLineCodeOrigin(InlineCallFrame* inlineCallFrame, unsigned bytecodeIndex)
+ : inlineCallFrame(inlineCallFrame)
+ , bytecodeIndex(bytecodeIndex)
+ {
+ }
+ };
+
+ bool isOutOfLine() const
+ {
+ return m_compositeValue & s_maskIsOutOfLine;
+ }
+ OutOfLineCodeOrigin* outOfLineCodeOrigin() const
+ {
+ ASSERT(isOutOfLine());
+ return bitwise_cast<OutOfLineCodeOrigin*>(m_compositeValue & s_maskCompositeValueForPointer);
+ }
+#endif
+
static InlineCallFrame* deletedMarker()
{
- return bitwise_cast<InlineCallFrame*>(static_cast<uintptr_t>(1));
+ auto value = static_cast<uintptr_t>(1 << 3);
+#if CPU(ADDRESS64)
+ ASSERT(value & s_maskCompositeValueForPointer);
+ ASSERT(!(value & ~s_maskCompositeValueForPointer));
+#endif
+ return bitwise_cast<InlineCallFrame*>(value);
}
+
+#if CPU(X86_64) && CPU(ADDRESS64)
+ static constexpr unsigned s_freeBitsAtTop = 16;
+ static constexpr uintptr_t s_maskCompositeValueForPointer = 0x0000fffffffffff8;
+#elif CPU(ARM64) && CPU(ADDRESS64)
+ static constexpr unsigned s_freeBitsAtTop = 28;
+ static constexpr uintptr_t s_maskCompositeValueForPointer = 0x0000000ffffffff8;
+#endif
+#if CPU(ADDRESS64)
+ static uintptr_t buildCompositeValue(InlineCallFrame* inlineCallFrame, unsigned bytecodeIndex)
+ {
+ if (bytecodeIndex == s_invalidBytecodeIndex)
+ return bitwise_cast<uintptr_t>(inlineCallFrame) | s_maskIsBytecodeIndexInvalid;
+
+ if (UNLIKELY(bytecodeIndex >= 1 << s_freeBitsAtTop)) {
+ auto* outOfLine = new OutOfLineCodeOrigin(inlineCallFrame, bytecodeIndex);
+ return bitwise_cast<uintptr_t>(outOfLine) | s_maskIsOutOfLine;
+ }
+
+ uintptr_t encodedBytecodeIndex = static_cast<uintptr_t>(bytecodeIndex) << (64 - s_freeBitsAtTop);
+ ASSERT(!(encodedBytecodeIndex & bitwise_cast<uintptr_t>(inlineCallFrame)));
+ return encodedBytecodeIndex | bitwise_cast<uintptr_t>(inlineCallFrame);
+ }
+
+ // The bottom bit indicates whether to look at an out-of-line implementation (because of a bytecode index which is too big for us to store).
+ // The next bit indicates whether this is an invalid bytecode (which depending on the InlineCallFrame* can either indicate an unset CodeOrigin,
+ // or a deletion marker for a hash table).
+ // The next bit is free
+ // The next 64-s_freeBitsAtTop-3 are the InlineCallFrame* or the OutOfLineCodeOrigin*
+ // Finally the last s_freeBitsAtTop are the bytecodeIndex if it is inline
+ uintptr_t m_compositeValue;
+#else
+ unsigned m_bytecodeIndex;
+ InlineCallFrame* m_inlineCallFrame;
+#endif
};
inline unsigned CodeOrigin::hash() const
{
- return WTF::IntHash<unsigned>::hash(bytecodeIndex) +
- WTF::PtrHash<InlineCallFrame*>::hash(inlineCallFrame);
+ return WTF::IntHash<unsigned>::hash(bytecodeIndex()) +
+ WTF::PtrHash<InlineCallFrame*>::hash(inlineCallFrame());
}
inline bool CodeOrigin::operator==(const CodeOrigin& other) const
{
- return bytecodeIndex == other.bytecodeIndex
- && inlineCallFrame == other.inlineCallFrame;
+#if CPU(ADDRESS64)
+ if (m_compositeValue == other.m_compositeValue)
+ return true;
+#endif
+ return bytecodeIndex() == other.bytecodeIndex()
+ && inlineCallFrame() == other.inlineCallFrame();
}
struct CodeOriginHash {
diff --git a/Source/JavaScriptCore/bytecode/DeferredSourceDump.h b/Source/JavaScriptCore/bytecode/DeferredSourceDump.h
index 35b26f4..779ce11 100644
--- a/Source/JavaScriptCore/bytecode/DeferredSourceDump.h
+++ b/Source/JavaScriptCore/bytecode/DeferredSourceDump.h
@@ -25,7 +25,6 @@
#pragma once
-#include "CodeOrigin.h"
#include "JITCode.h"
#include "Strong.h"
diff --git a/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp b/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp
index e1da1df..558bb24 100644
--- a/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp
+++ b/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp
@@ -130,11 +130,12 @@
#if ENABLE(DFG_JIT)
GetByIdStatus GetByIdStatus::computeForStubInfo(const ConcurrentJSLocker& locker, CodeBlock* profiledBlock, StructureStubInfo* stubInfo, CodeOrigin codeOrigin, UniquedStringImpl* uid)
{
+ unsigned bytecodeIndex = codeOrigin.bytecodeIndex();
GetByIdStatus result = GetByIdStatus::computeForStubInfoWithoutExitSiteFeedback(
locker, profiledBlock, stubInfo, uid,
- CallLinkStatus::computeExitSiteData(profiledBlock, codeOrigin.bytecodeIndex));
+ CallLinkStatus::computeExitSiteData(profiledBlock, bytecodeIndex));
- if (!result.takesSlowPath() && hasBadCacheExitSite(profiledBlock, codeOrigin.bytecodeIndex))
+ if (!result.takesSlowPath() && hasBadCacheExitSite(profiledBlock, bytecodeIndex))
return result.slowVersion();
return result;
}
@@ -302,9 +303,9 @@
CodeBlock* profiledBlock, ICStatusMap& baselineMap,
ICStatusContextStack& icContextStack, CodeOrigin codeOrigin, UniquedStringImpl* uid)
{
- CallLinkStatus::ExitSiteData callExitSiteData =
- CallLinkStatus::computeExitSiteData(profiledBlock, codeOrigin.bytecodeIndex);
- ExitFlag didExit = hasBadCacheExitSite(profiledBlock, codeOrigin.bytecodeIndex);
+ unsigned bytecodeIndex = codeOrigin.bytecodeIndex();
+ CallLinkStatus::ExitSiteData callExitSiteData = CallLinkStatus::computeExitSiteData(profiledBlock, bytecodeIndex);
+ ExitFlag didExit = hasBadCacheExitSite(profiledBlock, bytecodeIndex);
for (ICStatusContext* context : icContextStack) {
ICStatus status = context->get(codeOrigin);
@@ -314,7 +315,7 @@
// Merge with baseline result, which also happens to contain exit data for both
// inlined and not-inlined.
GetByIdStatus baselineResult = computeFor(
- profiledBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit,
+ profiledBlock, baselineMap, bytecodeIndex, uid, didExit,
callExitSiteData);
baselineResult.merge(result);
return baselineResult;
@@ -339,7 +340,7 @@
return bless(*status.getStatus);
}
- return computeFor(profiledBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit, callExitSiteData);
+ return computeFor(profiledBlock, baselineMap, bytecodeIndex, uid, didExit, callExitSiteData);
}
GetByIdStatus GetByIdStatus::computeFor(const StructureSet& set, UniquedStringImpl* uid)
diff --git a/Source/JavaScriptCore/bytecode/ICStatusMap.cpp b/Source/JavaScriptCore/bytecode/ICStatusMap.cpp
index 7c9e699..b85e4fd 100644
--- a/Source/JavaScriptCore/bytecode/ICStatusMap.cpp
+++ b/Source/JavaScriptCore/bytecode/ICStatusMap.cpp
@@ -39,7 +39,8 @@
bool ICStatusContext::isInlined(CodeOrigin codeOrigin) const
{
- return codeOrigin.inlineCallFrame && codeOrigin.inlineCallFrame != inlineCallFrame;
+ auto* originInlineCallFrame = codeOrigin.inlineCallFrame();
+ return originInlineCallFrame && originInlineCallFrame != inlineCallFrame;
}
ExitingInlineKind ICStatusContext::inlineKind(CodeOrigin codeOrigin) const
diff --git a/Source/JavaScriptCore/bytecode/InByIdStatus.cpp b/Source/JavaScriptCore/bytecode/InByIdStatus.cpp
index 5127207..6399cec 100644
--- a/Source/JavaScriptCore/bytecode/InByIdStatus.cpp
+++ b/Source/JavaScriptCore/bytecode/InByIdStatus.cpp
@@ -73,7 +73,8 @@
CodeBlock* profiledBlock, ICStatusMap& baselineMap,
ICStatusContextStack& contextStack, CodeOrigin codeOrigin, UniquedStringImpl* uid)
{
- ExitFlag didExit = hasBadCacheExitSite(profiledBlock, codeOrigin.bytecodeIndex);
+ unsigned bytecodeIndex = codeOrigin.bytecodeIndex();
+ ExitFlag didExit = hasBadCacheExitSite(profiledBlock, bytecodeIndex);
for (ICStatusContext* context : contextStack) {
ICStatus status = context->get(codeOrigin);
@@ -81,7 +82,7 @@
auto bless = [&] (const InByIdStatus& result) -> InByIdStatus {
if (!context->isInlined(codeOrigin)) {
InByIdStatus baselineResult = computeFor(
- profiledBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit);
+ profiledBlock, baselineMap, bytecodeIndex, uid, didExit);
baselineResult.merge(result);
return baselineResult;
}
@@ -106,7 +107,7 @@
return bless(*status.inStatus);
}
- return computeFor(profiledBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit);
+ return computeFor(profiledBlock, baselineMap, bytecodeIndex, uid, didExit);
}
#endif // ENABLE(JIT)
@@ -115,7 +116,7 @@
{
InByIdStatus result = InByIdStatus::computeForStubInfoWithoutExitSiteFeedback(locker, stubInfo, uid);
- if (!result.takesSlowPath() && hasBadCacheExitSite(profiledBlock, codeOrigin.bytecodeIndex))
+ if (!result.takesSlowPath() && hasBadCacheExitSite(profiledBlock, codeOrigin.bytecodeIndex()))
return InByIdStatus(TakesSlowPath);
return result;
}
diff --git a/Source/JavaScriptCore/bytecode/InlineCallFrame.cpp b/Source/JavaScriptCore/bytecode/InlineCallFrame.cpp
index 2fe51af..ad2e0b8 100644
--- a/Source/JavaScriptCore/bytecode/InlineCallFrame.cpp
+++ b/Source/JavaScriptCore/bytecode/InlineCallFrame.cpp
@@ -69,7 +69,7 @@
out.print(briefFunctionInformation(), ":<", RawPointer(baselineCodeBlock.get()));
if (isStrictMode())
out.print(" (StrictMode)");
- out.print(", bc#", directCaller.bytecodeIndex, ", ", static_cast<Kind>(kind));
+ out.print(", bc#", directCaller.bytecodeIndex(), ", ", static_cast<Kind>(kind));
if (isClosureCall)
out.print(", closure call");
else
diff --git a/Source/JavaScriptCore/bytecode/InlineCallFrame.h b/Source/JavaScriptCore/bytecode/InlineCallFrame.h
index 326c9ab..d9a399c 100644
--- a/Source/JavaScriptCore/bytecode/InlineCallFrame.h
+++ b/Source/JavaScriptCore/bytecode/InlineCallFrame.h
@@ -152,7 +152,7 @@
tailCallee = inlineCallFrame->isTail();
callKind = inlineCallFrame->kind;
codeOrigin = &inlineCallFrame->directCaller;
- inlineCallFrame = codeOrigin->inlineCallFrame;
+ inlineCallFrame = codeOrigin->inlineCallFrame();
} while (inlineCallFrame && tailCallee);
if (tailCallee)
@@ -172,7 +172,7 @@
InlineCallFrame* getCallerInlineFrameSkippingTailCalls()
{
CodeOrigin* caller = getCallerSkippingTailCalls();
- return caller ? caller->inlineCallFrame : nullptr;
+ return caller ? caller->inlineCallFrame() : nullptr;
}
Vector<ValueRecovery> argumentsWithFixup; // Includes 'this' and arity fixups.
@@ -241,20 +241,23 @@
inline CodeBlock* baselineCodeBlockForOriginAndBaselineCodeBlock(const CodeOrigin& codeOrigin, CodeBlock* baselineCodeBlock)
{
ASSERT(baselineCodeBlock->jitType() == JITCode::BaselineJIT);
- if (codeOrigin.inlineCallFrame)
- return baselineCodeBlockForInlineCallFrame(codeOrigin.inlineCallFrame);
+ auto* inlineCallFrame = codeOrigin.inlineCallFrame();
+ if (inlineCallFrame)
+ return baselineCodeBlockForInlineCallFrame(inlineCallFrame);
return baselineCodeBlock;
}
+// This function is defined here and not in CodeOrigin because it needs access to the directCaller field in InlineCallFrame
template <typename Function>
inline void CodeOrigin::walkUpInlineStack(const Function& function)
{
CodeOrigin codeOrigin = *this;
while (true) {
function(codeOrigin);
- if (!codeOrigin.inlineCallFrame)
+ auto* inlineCallFrame = codeOrigin.inlineCallFrame();
+ if (!inlineCallFrame)
break;
- codeOrigin = codeOrigin.inlineCallFrame->directCaller;
+ codeOrigin = inlineCallFrame->directCaller;
}
}
diff --git a/Source/JavaScriptCore/bytecode/InstanceOfStatus.h b/Source/JavaScriptCore/bytecode/InstanceOfStatus.h
index 13ca98e..9163c89 100644
--- a/Source/JavaScriptCore/bytecode/InstanceOfStatus.h
+++ b/Source/JavaScriptCore/bytecode/InstanceOfStatus.h
@@ -25,7 +25,6 @@
#pragma once
-#include "CodeOrigin.h"
#include "ConcurrentJSLock.h"
#include "ICStatusMap.h"
#include "InstanceOfVariant.h"
diff --git a/Source/JavaScriptCore/bytecode/PutByIdStatus.cpp b/Source/JavaScriptCore/bytecode/PutByIdStatus.cpp
index d6e7e26..4db8bd10 100644
--- a/Source/JavaScriptCore/bytecode/PutByIdStatus.cpp
+++ b/Source/JavaScriptCore/bytecode/PutByIdStatus.cpp
@@ -122,7 +122,7 @@
{
return computeForStubInfo(
locker, baselineBlock, stubInfo, uid,
- CallLinkStatus::computeExitSiteData(baselineBlock, codeOrigin.bytecodeIndex));
+ CallLinkStatus::computeExitSiteData(baselineBlock, codeOrigin.bytecodeIndex()));
}
PutByIdStatus PutByIdStatus::computeForStubInfo(
@@ -237,9 +237,9 @@
PutByIdStatus PutByIdStatus::computeFor(CodeBlock* baselineBlock, ICStatusMap& baselineMap, ICStatusContextStack& contextStack, CodeOrigin codeOrigin, UniquedStringImpl* uid)
{
- CallLinkStatus::ExitSiteData callExitSiteData =
- CallLinkStatus::computeExitSiteData(baselineBlock, codeOrigin.bytecodeIndex);
- ExitFlag didExit = hasBadCacheExitSite(baselineBlock, codeOrigin.bytecodeIndex);
+ unsigned bytecodeIndex = codeOrigin.bytecodeIndex();
+ CallLinkStatus::ExitSiteData callExitSiteData = CallLinkStatus::computeExitSiteData(baselineBlock, bytecodeIndex);
+ ExitFlag didExit = hasBadCacheExitSite(baselineBlock, bytecodeIndex);
for (ICStatusContext* context : contextStack) {
ICStatus status = context->get(codeOrigin);
@@ -247,7 +247,7 @@
auto bless = [&] (const PutByIdStatus& result) -> PutByIdStatus {
if (!context->isInlined(codeOrigin)) {
PutByIdStatus baselineResult = computeFor(
- baselineBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit,
+ baselineBlock, baselineMap, bytecodeIndex, uid, didExit,
callExitSiteData);
baselineResult.merge(result);
return baselineResult;
@@ -272,7 +272,7 @@
return bless(*status.putStatus);
}
- return computeFor(baselineBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit, callExitSiteData);
+ return computeFor(baselineBlock, baselineMap, bytecodeIndex, uid, didExit, callExitSiteData);
}
PutByIdStatus PutByIdStatus::computeFor(JSGlobalObject* globalObject, const StructureSet& set, UniquedStringImpl* uid, bool isDirect)
diff --git a/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h b/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
index e1c96ca..ff20e5f 100644
--- a/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
+++ b/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
@@ -2213,7 +2213,7 @@
case GetMyArgumentByVal:
case GetMyArgumentByValOutOfBounds: {
JSValue index = forNode(node->child2()).m_value;
- InlineCallFrame* inlineCallFrame = node->child1()->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = node->child1()->origin.semantic.inlineCallFrame();
if (index && index.isUInt32()) {
// This pretends to return TOP for accesses that are actually proven out-of-bounds because
diff --git a/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp b/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp
index d397e85..2b7a7c6 100644
--- a/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp
@@ -529,7 +529,7 @@
bool isClobberedByBlock = false;
Operands<bool>& clobberedByThisBlock = clobberedByBlock[block];
- if (InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame) {
+ if (InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame()) {
if (inlineCallFrame->isVarargs()) {
isClobberedByBlock |= clobberedByThisBlock.operand(
inlineCallFrame->stackOffset + CallFrameSlot::argumentCount);
@@ -759,7 +759,7 @@
Node* result = nullptr;
if (m_graph.varArgChild(node, 1)->isInt32Constant()) {
unsigned index = m_graph.varArgChild(node, 1)->asUInt32();
- InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
index += numberOfArgumentsToSkip;
bool safeToGetStack;
@@ -861,7 +861,7 @@
return true;
ASSERT(candidate->op() == PhantomCreateRest);
- InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
return inlineCallFrame && !inlineCallFrame->isVarargs();
});
@@ -887,7 +887,7 @@
ASSERT(candidate->op() == PhantomCreateRest);
unsigned numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
- InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
unsigned frameArgumentCount = inlineCallFrame->argumentCountIncludingThis - 1;
if (frameArgumentCount >= numberOfArgumentsToSkip)
return frameArgumentCount - numberOfArgumentsToSkip;
@@ -935,7 +935,7 @@
ASSERT(candidate->op() == PhantomCreateRest);
unsigned numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
- InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
unsigned frameArgumentCount = inlineCallFrame->argumentCountIncludingThis - 1;
for (unsigned loadIndex = numberOfArgumentsToSkip; loadIndex < frameArgumentCount; ++loadIndex) {
VirtualRegister reg = virtualRegisterForArgument(loadIndex + 1) + inlineCallFrame->stackOffset;
@@ -970,7 +970,7 @@
numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
varargsData->offset += numberOfArgumentsToSkip;
- InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
if (inlineCallFrame
&& !inlineCallFrame->isVarargs()) {
@@ -1111,7 +1111,7 @@
return true;
ASSERT(candidate->op() == PhantomCreateRest);
- InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
return inlineCallFrame && !inlineCallFrame->isVarargs();
});
@@ -1150,7 +1150,7 @@
}
ASSERT(candidate->op() == PhantomCreateRest);
- InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
unsigned numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
for (unsigned i = 1 + numberOfArgumentsToSkip; i < inlineCallFrame->argumentCountIncludingThis; ++i) {
StackAccessData* data = m_graph.m_stackAccessData.add(
@@ -1175,7 +1175,7 @@
CallVarargsData* varargsData = node->callVarargsData();
varargsData->firstVarArgOffset += numberOfArgumentsToSkip;
- InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
if (inlineCallFrame && !inlineCallFrame->isVarargs()) {
Vector<Node*> arguments;
for (unsigned i = 1 + varargsData->firstVarArgOffset; i < inlineCallFrame->argumentCountIncludingThis; ++i) {
diff --git a/Source/JavaScriptCore/dfg/DFGArgumentsUtilities.cpp b/Source/JavaScriptCore/dfg/DFGArgumentsUtilities.cpp
index cc75f47..6261123 100644
--- a/Source/JavaScriptCore/dfg/DFGArgumentsUtilities.cpp
+++ b/Source/JavaScriptCore/dfg/DFGArgumentsUtilities.cpp
@@ -54,7 +54,7 @@
bool argumentsInvolveStackSlot(Node* candidate, VirtualRegister reg)
{
- return argumentsInvolveStackSlot(candidate->origin.semantic.inlineCallFrame, reg);
+ return argumentsInvolveStackSlot(candidate->origin.semantic.inlineCallFrame(), reg);
}
Node* emitCodeToGetArgumentsArrayLength(
@@ -106,7 +106,7 @@
nodeIndex, origin, jsNumber(arguments->castOperand<JSImmutableButterfly*>()->length()));
}
- InlineCallFrame* inlineCallFrame = arguments->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = arguments->origin.semantic.inlineCallFrame();
unsigned numberOfArgumentsToSkip = 0;
if (arguments->op() == CreateRest || arguments->op() == PhantomCreateRest)
diff --git a/Source/JavaScriptCore/dfg/DFGArrayMode.h b/Source/JavaScriptCore/dfg/DFGArrayMode.h
index c23b098..753e8a2 100644
--- a/Source/JavaScriptCore/dfg/DFGArrayMode.h
+++ b/Source/JavaScriptCore/dfg/DFGArrayMode.h
@@ -32,7 +32,7 @@
namespace JSC {
-struct CodeOrigin;
+class CodeOrigin;
namespace DFG {
diff --git a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
index adeb12c..4cbe394 100644
--- a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
+++ b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
@@ -385,7 +385,7 @@
Node* injectLazyOperandSpeculation(Node* node)
{
ASSERT(node->op() == GetLocal);
- ASSERT(node->origin.semantic.bytecodeIndex == m_currentIndex);
+ ASSERT(node->origin.semantic.bytecodeIndex() == m_currentIndex);
ConcurrentJSLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
LazyOperandValueProfileKey key(m_currentIndex, node->local());
SpeculatedType prediction = m_inlineStackTop->m_lazyOperands.prediction(locker, key);
@@ -442,9 +442,9 @@
VariableAccessData* variableAccessData = newVariableAccessData(operand);
variableAccessData->mergeStructureCheckHoistingFailed(
- m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex, BadCache));
+ m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex(), BadCache));
variableAccessData->mergeCheckArrayHoistingFailed(
- m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex, BadIndexingType));
+ m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex(), BadIndexingType));
Node* node = addToGraph(SetLocal, OpInfo(variableAccessData), value);
m_currentBlock->variablesAtTail.local(local) = node;
return node;
@@ -498,9 +498,9 @@
variableAccessData->mergeShouldNeverUnbox(true);
variableAccessData->mergeStructureCheckHoistingFailed(
- m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex, BadCache));
+ m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex(), BadCache));
variableAccessData->mergeCheckArrayHoistingFailed(
- m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex, BadIndexingType));
+ m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex(), BadIndexingType));
Node* node = addToGraph(SetLocal, OpInfo(variableAccessData), value);
m_currentBlock->variablesAtTail.argument(argument) = node;
return node;
@@ -563,8 +563,8 @@
{
origin.walkUpInlineStack(
[&] (CodeOrigin origin) {
- unsigned bytecodeIndex = origin.bytecodeIndex;
- InlineCallFrame* inlineCallFrame = origin.inlineCallFrame;
+ unsigned bytecodeIndex = origin.bytecodeIndex();
+ InlineCallFrame* inlineCallFrame = origin.inlineCallFrame();
flushImpl(inlineCallFrame, addFlushDirect);
CodeBlock* codeBlock = m_graph.baselineCodeBlockFor(inlineCallFrame);
@@ -868,10 +868,10 @@
return SpecFullTop;
InlineStackEntry* stack = m_inlineStackTop;
- while (stack->m_inlineCallFrame != codeOrigin->inlineCallFrame)
+ while (stack->m_inlineCallFrame != codeOrigin->inlineCallFrame())
stack = stack->m_caller;
- bytecodeIndex = codeOrigin->bytecodeIndex;
+ bytecodeIndex = codeOrigin->bytecodeIndex();
CodeBlock* profiledBlock = stack->m_profiledBlock;
ConcurrentJSLocker locker(profiledBlock->m_lock);
return profiledBlock->valueProfilePredictionForBytecodeOffset(locker, bytecodeIndex);
@@ -5396,7 +5396,7 @@
unsigned identifierNumber = 0;
{
ConcurrentJSLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
- ByValInfo* byValInfo = m_inlineStackTop->m_baselineMap.get(CodeOrigin(currentCodeOrigin().bytecodeIndex)).byValInfo;
+ ByValInfo* byValInfo = m_inlineStackTop->m_baselineMap.get(CodeOrigin(currentCodeOrigin().bytecodeIndex())).byValInfo;
// FIXME: When the bytecode is not compiled in the baseline JIT, byValInfo becomes null.
// At that time, there is no information.
if (byValInfo
@@ -6026,7 +6026,7 @@
m_inlineStackTop->m_exitProfile.hasExitSite(exitBytecodeIndex, BadIndexingType));
Node* setArgument = addToGraph(SetArgument, OpInfo(variable));
- setArgument->origin.forExit.bytecodeIndex = exitBytecodeIndex;
+ setArgument->origin.forExit = CodeOrigin(exitBytecodeIndex, setArgument->origin.forExit.inlineCallFrame());
m_currentBlock->variablesAtTail.setArgumentFirstTime(argument, setArgument);
entrypointArguments[argument] = setArgument;
}
@@ -7088,7 +7088,7 @@
if (UNLIKELY(Options::dumpSourceAtDFGTime())) {
Vector<DeferredSourceDump>& deferredSourceDump = m_graph.m_plan.callback()->ensureDeferredSourceDump();
if (inlineCallFrame()) {
- DeferredSourceDump dump(codeBlock->baselineVersion(), m_codeBlock, JITCode::DFGJIT, inlineCallFrame()->directCaller.bytecodeIndex);
+ DeferredSourceDump dump(codeBlock->baselineVersion(), m_codeBlock, JITCode::DFGJIT, inlineCallFrame()->directCaller.bytecodeIndex());
deferredSourceDump.append(dump);
} else
deferredSourceDump.append(DeferredSourceDump(codeBlock->baselineVersion()));
@@ -7176,7 +7176,7 @@
PutByIdStatus putByIdStatus;
{
ConcurrentJSLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
- ByValInfo* byValInfo = m_inlineStackTop->m_baselineMap.get(CodeOrigin(currentCodeOrigin().bytecodeIndex)).byValInfo;
+ ByValInfo* byValInfo = m_inlineStackTop->m_baselineMap.get(CodeOrigin(currentCodeOrigin().bytecodeIndex())).byValInfo;
// FIXME: When the bytecode is not compiled in the baseline JIT, byValInfo becomes null.
// At that time, there is no information.
if (byValInfo
diff --git a/Source/JavaScriptCore/dfg/DFGClobberize.h b/Source/JavaScriptCore/dfg/DFGClobberize.h
index 3725d63..1a2d4ca 100644
--- a/Source/JavaScriptCore/dfg/DFGClobberize.h
+++ b/Source/JavaScriptCore/dfg/DFGClobberize.h
@@ -110,7 +110,7 @@
// by calls into the runtime. For debugging we might replace the implementation of any node with a call
// to the runtime, and that call may walk stack. Therefore, each node must read() anything that a stack
// scan would read. That's what this does.
- for (InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
+ for (InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame()) {
if (inlineCallFrame->isClosureCall)
read(AbstractHeap(Stack, inlineCallFrame->stackOffset + CallFrameSlot::callee));
if (inlineCallFrame->isVarargs())
@@ -123,7 +123,7 @@
// The debugger's machinery is free to take a stack trace and try to read from
// a scope which is expected to be flushed to the stack.
if (graph.hasDebuggerEnabled()) {
- ASSERT(!node->origin.semantic.inlineCallFrame);
+ ASSERT(!node->origin.semantic.inlineCallFrame());
read(AbstractHeap(Stack, graph.m_codeBlock->scopeRegister()));
}
@@ -710,7 +710,7 @@
}
case CallEval:
- ASSERT(!node->origin.semantic.inlineCallFrame);
+ ASSERT(!node->origin.semantic.inlineCallFrame());
read(AbstractHeap(Stack, graph.m_codeBlock->scopeRegister()));
read(AbstractHeap(Stack, virtualRegisterForArgument(0)));
read(World);
diff --git a/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp b/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp
index 0c3813c..452cf72 100644
--- a/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp
@@ -363,7 +363,7 @@
unsigned index = checkedIndex.unsafeGet();
Node* arguments = node->child1().node();
- InlineCallFrame* inlineCallFrame = arguments->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = arguments->origin.semantic.inlineCallFrame();
// Don't try to do anything if the index is known to be outside our static bounds. Note
// that our static bounds are usually strictly larger than the dynamic bounds. The
diff --git a/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp b/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
index f5b9889..e62b002 100644
--- a/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
@@ -3430,7 +3430,7 @@
return false;
CodeBlock* profiledBlock = m_graph.baselineCodeBlockFor(node->origin.semantic);
ArrayProfile* arrayProfile =
- profiledBlock->getArrayProfile(node->origin.semantic.bytecodeIndex);
+ profiledBlock->getArrayProfile(node->origin.semantic.bytecodeIndex());
ArrayMode arrayMode = ArrayMode(Array::SelectUsingPredictions, Array::Read);
if (arrayProfile) {
ConcurrentJSLocker locker(profiledBlock->m_lock);
diff --git a/Source/JavaScriptCore/dfg/DFGForAllKills.h b/Source/JavaScriptCore/dfg/DFGForAllKills.h
index ac13301..b660913 100644
--- a/Source/JavaScriptCore/dfg/DFGForAllKills.h
+++ b/Source/JavaScriptCore/dfg/DFGForAllKills.h
@@ -68,12 +68,13 @@
// It's easier to do this if the inline call frames are the same. This is way faster than the
// other loop, below.
- if (before.inlineCallFrame == after.inlineCallFrame) {
- int stackOffset = before.inlineCallFrame ? before.inlineCallFrame->stackOffset : 0;
- CodeBlock* codeBlock = graph.baselineCodeBlockFor(before.inlineCallFrame);
+ auto* beforeInlineCallFrame = before.inlineCallFrame();
+ if (beforeInlineCallFrame == after.inlineCallFrame()) {
+ int stackOffset = beforeInlineCallFrame ? beforeInlineCallFrame->stackOffset : 0;
+ CodeBlock* codeBlock = graph.baselineCodeBlockFor(beforeInlineCallFrame);
FullBytecodeLiveness& fullLiveness = graph.livenessFor(codeBlock);
- const FastBitVector& liveBefore = fullLiveness.getLiveness(before.bytecodeIndex);
- const FastBitVector& liveAfter = fullLiveness.getLiveness(after.bytecodeIndex);
+ const FastBitVector& liveBefore = fullLiveness.getLiveness(before.bytecodeIndex());
+ const FastBitVector& liveAfter = fullLiveness.getLiveness(after.bytecodeIndex());
(liveBefore & ~liveAfter).forEachSetBit(
[&] (size_t relativeLocal) {
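Note the shape of this rewrite, which recurs throughout the patch: because inlineCallFrame() now decodes the packed word instead of loading a field, hunks that used the value several times hoist one call into a local. A stand-in sketch of the pattern:

    // Stand-in types; only the shape of the rewrite matters here.
    struct InlineCallFrame { int stackOffset { 0 }; };

    struct CodeOrigin {
        InlineCallFrame* m_frame { nullptr };
        InlineCallFrame* inlineCallFrame() const { return m_frame; } // decode elided
    };

    int stackOffsetIfSameFrame(const CodeOrigin& before, const CodeOrigin& after)
    {
        auto* beforeInlineCallFrame = before.inlineCallFrame(); // decode once
        if (beforeInlineCallFrame != after.inlineCallFrame())
            return 0;
        // Reuse the local; every extra getter call would redo the decode.
        return beforeInlineCallFrame ? beforeInlineCallFrame->stackOffset : 0;
    }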
diff --git a/Source/JavaScriptCore/dfg/DFGGraph.cpp b/Source/JavaScriptCore/dfg/DFGGraph.cpp
index 344b645..b61d5ba 100644
--- a/Source/JavaScriptCore/dfg/DFGGraph.cpp
+++ b/Source/JavaScriptCore/dfg/DFGGraph.cpp
@@ -120,7 +120,7 @@
if (!previousNode)
return false;
- if (previousNode->origin.semantic.inlineCallFrame == currentNode->origin.semantic.inlineCallFrame)
+ if (previousNode->origin.semantic.inlineCallFrame() == currentNode->origin.semantic.inlineCallFrame())
return false;
Vector<CodeOrigin> previousInlineStack = previousNode->origin.semantic.inlineStack();
@@ -128,7 +128,7 @@
unsigned commonSize = std::min(previousInlineStack.size(), currentInlineStack.size());
unsigned indexOfDivergence = commonSize;
for (unsigned i = 0; i < commonSize; ++i) {
- if (previousInlineStack[i].inlineCallFrame != currentInlineStack[i].inlineCallFrame) {
+ if (previousInlineStack[i].inlineCallFrame() != currentInlineStack[i].inlineCallFrame()) {
indexOfDivergence = i;
break;
}
@@ -140,7 +140,7 @@
for (unsigned i = previousInlineStack.size(); i-- > indexOfDivergence;) {
out.print(prefix);
printWhiteSpace(out, i * 2);
- out.print("<-- ", inContext(*previousInlineStack[i].inlineCallFrame, context), "\n");
+ out.print("<-- ", inContext(*previousInlineStack[i].inlineCallFrame(), context), "\n");
hasPrinted = true;
}
@@ -148,7 +148,7 @@
for (unsigned i = indexOfDivergence; i < currentInlineStack.size(); ++i) {
out.print(prefix);
printWhiteSpace(out, i * 2);
- out.print("--> ", inContext(*currentInlineStack[i].inlineCallFrame, context), "\n");
+ out.print("--> ", inContext(*currentInlineStack[i].inlineCallFrame(), context), "\n");
hasPrinted = true;
}
@@ -394,7 +394,7 @@
if (clobbersExitState(*this, node))
out.print(comma, "ClobbersExit");
if (node->origin.isSet()) {
- out.print(comma, "bc#", node->origin.semantic.bytecodeIndex);
+ out.print(comma, "bc#", node->origin.semantic.bytecodeIndex());
if (node->origin.semantic != node->origin.forExit && node->origin.forExit.isSet())
out.print(comma, "exit: ", node->origin.forExit);
}
@@ -1123,19 +1123,21 @@
if (verbose)
dataLog("reg = ", reg, "\n");
-
+
+ auto* inlineCallFrame = codeOriginPtr->inlineCallFrame();
if (operand.offset() < codeOriginPtr->stackOffset() + CallFrame::headerSizeInRegisters) {
if (reg.isArgument()) {
RELEASE_ASSERT(reg.offset() < CallFrame::headerSizeInRegisters);
-
- if (codeOriginPtr->inlineCallFrame->isClosureCall
+
+
+ if (inlineCallFrame->isClosureCall
&& reg.offset() == CallFrameSlot::callee) {
if (verbose)
dataLog("Looks like a callee.\n");
return true;
}
- if (codeOriginPtr->inlineCallFrame->isVarargs()
+ if (inlineCallFrame->isVarargs()
&& reg.offset() == CallFrameSlot::argumentCount) {
if (verbose)
dataLog("Looks like the argument count.\n");
@@ -1147,11 +1149,9 @@
if (verbose)
dataLog("Asking the bytecode liveness.\n");
- return livenessFor(codeOriginPtr->inlineCallFrame).operandIsLive(
- reg.offset(), codeOriginPtr->bytecodeIndex);
+ return livenessFor(inlineCallFrame).operandIsLive(reg.offset(), codeOriginPtr->bytecodeIndex());
}
-
- InlineCallFrame* inlineCallFrame = codeOriginPtr->inlineCallFrame;
+
if (!inlineCallFrame) {
if (verbose)
dataLog("Ran out of stack, returning true.\n");
@@ -1622,15 +1622,15 @@
return MethodOfGettingAValueProfile::fromLazyOperand(
profiledBlock,
LazyOperandValueProfileKey(
- node->origin.semantic.bytecodeIndex, node->local()));
+ node->origin.semantic.bytecodeIndex(), node->local()));
}
}
if (node->hasHeapPrediction())
- return &profiledBlock->valueProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
+ return &profiledBlock->valueProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex());
if (profiledBlock->hasBaselineJITProfiling()) {
- if (ArithProfile* result = profiledBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex))
+ if (ArithProfile* result = profiledBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex()))
return result;
}
}
@@ -1728,9 +1728,9 @@
if (!m_hasExceptionHandlers)
return false;
- unsigned bytecodeIndexToCheck = codeOrigin.bytecodeIndex;
+ unsigned bytecodeIndexToCheck = codeOrigin.bytecodeIndex();
while (1) {
- InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame();
CodeBlock* codeBlock = baselineCodeBlockFor(inlineCallFrame);
if (HandlerInfo* handler = codeBlock->handlerForBytecodeOffset(bytecodeIndexToCheck)) {
opCatchOriginOut = CodeOrigin(handler->target, inlineCallFrame);
@@ -1741,8 +1741,8 @@
if (!inlineCallFrame)
return false;
- bytecodeIndexToCheck = inlineCallFrame->directCaller.bytecodeIndex;
- codeOrigin = codeOrigin.inlineCallFrame->directCaller;
+ bytecodeIndexToCheck = inlineCallFrame->directCaller.bytecodeIndex();
+ codeOrigin = inlineCallFrame->directCaller;
}
RELEASE_ASSERT_NOT_REACHED();
diff --git a/Source/JavaScriptCore/dfg/DFGGraph.h b/Source/JavaScriptCore/dfg/DFGGraph.h
index c08cacc..99558a8 100644
--- a/Source/JavaScriptCore/dfg/DFGGraph.h
+++ b/Source/JavaScriptCore/dfg/DFGGraph.h
@@ -424,7 +424,7 @@
ScriptExecutable* executableFor(const CodeOrigin& codeOrigin)
{
- return executableFor(codeOrigin.inlineCallFrame);
+ return executableFor(codeOrigin.inlineCallFrame());
}
CodeBlock* baselineCodeBlockFor(InlineCallFrame* inlineCallFrame)
@@ -441,9 +441,9 @@
bool isStrictModeFor(CodeOrigin codeOrigin)
{
- if (!codeOrigin.inlineCallFrame)
+ if (!codeOrigin.inlineCallFrame())
return m_codeBlock->isStrictMode();
- return codeOrigin.inlineCallFrame->isStrictMode();
+ return codeOrigin.inlineCallFrame()->isStrictMode();
}
ECMAMode ecmaModeFor(CodeOrigin codeOrigin)
@@ -463,7 +463,7 @@
bool hasExitSite(const CodeOrigin& codeOrigin, ExitKind exitKind)
{
- return baselineCodeBlockFor(codeOrigin)->unlinkedCodeBlock()->hasExitSite(FrequentExitSite(codeOrigin.bytecodeIndex, exitKind));
+ return baselineCodeBlockFor(codeOrigin)->unlinkedCodeBlock()->hasExitSite(FrequentExitSite(codeOrigin.bytecodeIndex(), exitKind));
}
bool hasExitSite(Node* node, ExitKind exitKind)
@@ -827,7 +827,7 @@
CodeOrigin* codeOriginPtr = &codeOrigin;
for (;;) {
- InlineCallFrame* inlineCallFrame = codeOriginPtr->inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = codeOriginPtr->inlineCallFrame();
VirtualRegister stackOffset(inlineCallFrame ? inlineCallFrame->stackOffset : 0);
if (inlineCallFrame) {
@@ -839,7 +839,7 @@
CodeBlock* codeBlock = baselineCodeBlockFor(inlineCallFrame);
FullBytecodeLiveness& fullLiveness = livenessFor(codeBlock);
- const FastBitVector& liveness = fullLiveness.getLiveness(codeOriginPtr->bytecodeIndex);
+ const FastBitVector& liveness = fullLiveness.getLiveness(codeOriginPtr->bytecodeIndex());
for (unsigned relativeLocal = codeBlock->numCalleeLocals(); relativeLocal--;) {
VirtualRegister reg = stackOffset + virtualRegisterForLocal(relativeLocal);
diff --git a/Source/JavaScriptCore/dfg/DFGLiveCatchVariablePreservationPhase.cpp b/Source/JavaScriptCore/dfg/DFGLiveCatchVariablePreservationPhase.cpp
index acba419..3b2d9ef 100644
--- a/Source/JavaScriptCore/dfg/DFGLiveCatchVariablePreservationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGLiveCatchVariablePreservationPhase.cpp
@@ -125,12 +125,12 @@
if (origin == cachedCodeOrigin)
return cachedHandlerResult;
- unsigned bytecodeIndexToCheck = origin.bytecodeIndex;
+ unsigned bytecodeIndexToCheck = origin.bytecodeIndex();
cachedCodeOrigin = origin;
while (1) {
- InlineCallFrame* inlineCallFrame = origin.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = origin.inlineCallFrame();
CodeBlock* codeBlock = m_graph.baselineCodeBlockFor(inlineCallFrame);
if (HandlerInfo* handler = codeBlock->handlerForBytecodeOffset(bytecodeIndexToCheck)) {
liveAtCatchHead.clearAll();
@@ -149,7 +149,7 @@
break;
}
- bytecodeIndexToCheck = inlineCallFrame->directCaller.bytecodeIndex;
+ bytecodeIndexToCheck = inlineCallFrame->directCaller.bytecodeIndex();
origin = inlineCallFrame->directCaller;
}
@@ -199,7 +199,7 @@
}
if (currentExceptionHandler && (node->op() == SetLocal || node->op() == SetArgument)) {
- InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame();
if (inlineCallFrame)
seenInlineCallFrames.add(inlineCallFrame);
VirtualRegister operand = node->local();
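The handler search in the hunk above is the standard walk out of the inline stack, now phrased through the getters: look for a handler in the current (possibly inlined) code block at the current bytecode index, and if none matches, retry in the caller at the index of the call site. A self-contained sketch with stand-in types:

    struct InlineCallFrame;

    struct CodeOrigin {
        unsigned m_bytecodeIndex { 0 };
        InlineCallFrame* m_frame { nullptr };
        unsigned bytecodeIndex() const { return m_bytecodeIndex; }
        InlineCallFrame* inlineCallFrame() const { return m_frame; }
    };

    struct HandlerInfo { unsigned target { 0 }; };

    struct CodeBlock {
        // Stand-in: the real lookup searches the block's exception handler table.
        HandlerInfo* handlerForBytecodeOffset(unsigned) { return nullptr; }
    };

    struct InlineCallFrame {
        CodeOrigin directCaller;
        CodeBlock* baselineCodeBlock { nullptr };
    };

    // A null inlineCallFrame() means we reached the machine frame, which is the
    // last code block that could possibly handle the exception.
    HandlerInfo* findHandler(CodeOrigin origin, CodeBlock* outermostCodeBlock)
    {
        unsigned bytecodeIndexToCheck = origin.bytecodeIndex();
        while (true) {
            InlineCallFrame* frame = origin.inlineCallFrame();
            CodeBlock* codeBlock = frame ? frame->baselineCodeBlock : outermostCodeBlock;
            if (HandlerInfo* handler = codeBlock->handlerForBytecodeOffset(bytecodeIndexToCheck))
                return handler;
            if (!frame)
                return nullptr;
            bytecodeIndexToCheck = frame->directCaller.bytecodeIndex();
            origin = frame->directCaller;
        }
    }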
diff --git a/Source/JavaScriptCore/dfg/DFGMinifiedNode.cpp b/Source/JavaScriptCore/dfg/DFGMinifiedNode.cpp
index 80795c2..57fb6a1 100644
--- a/Source/JavaScriptCore/dfg/DFGMinifiedNode.cpp
+++ b/Source/JavaScriptCore/dfg/DFGMinifiedNode.cpp
@@ -43,7 +43,7 @@
result.m_info = JSValue::encode(node->asJSValue());
else {
ASSERT(node->op() == PhantomDirectArguments || node->op() == PhantomClonedArguments);
- result.m_info = bitwise_cast<uintptr_t>(node->origin.semantic.inlineCallFrame);
+ result.m_info = bitwise_cast<uintptr_t>(node->origin.semantic.inlineCallFrame());
}
return result;
}
diff --git a/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp b/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
index 18b14c1..3280f87 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
@@ -258,7 +258,7 @@
case PhantomCreateRest:
case PhantomDirectArguments:
case PhantomClonedArguments: {
- InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame();
if (!inlineCallFrame) {
// We don't need to record anything about how the arguments are to be recovered. It's just a
// given that we can read them from the stack.
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExit.cpp b/Source/JavaScriptCore/dfg/DFGOSRExit.cpp
index a5bff77..bcebede 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExit.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExit.cpp
@@ -393,7 +393,7 @@
if (exit.m_kind == BadCache || exit.m_kind == BadIndexingType) {
CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
CodeBlock* profiledCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(codeOrigin, baselineCodeBlock);
- arrayProfile = profiledCodeBlock->getArrayProfile(codeOrigin.bytecodeIndex);
+ arrayProfile = profiledCodeBlock->getArrayProfile(codeOrigin.bytecodeIndex());
if (arrayProfile)
extraInitializationLevel = std::max(extraInitializationLevel, ExtraInitializationLevel::ArrayProfileUpdate);
}
@@ -406,7 +406,7 @@
CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock);
const JITCodeMap& codeMap = codeBlockForExit->jitCodeMap();
- CodeLocationLabel<JSEntryPtrTag> codeLocation = codeMap.find(exit.m_codeOrigin.bytecodeIndex);
+ CodeLocationLabel<JSEntryPtrTag> codeLocation = codeMap.find(exit.m_codeOrigin.bytecodeIndex());
ASSERT(codeLocation);
void* jumpTarget = codeLocation.executableAddress();
@@ -748,8 +748,8 @@
frame.setOperand<CodeBlock*>(CallFrameSlot::codeBlock, outermostBaselineCodeBlock);
const CodeOrigin* codeOrigin;
- for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame; codeOrigin = codeOrigin->inlineCallFrame->getCallerSkippingTailCalls()) {
- InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
+ for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame(); codeOrigin = codeOrigin->inlineCallFrame()->getCallerSkippingTailCalls()) {
+ InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame();
CodeBlock* baselineCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(*codeOrigin, outermostBaselineCodeBlock);
InlineCallFrame::Kind trueCallerCallKind;
CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingTailCalls(&trueCallerCallKind);
@@ -767,7 +767,7 @@
callerFrame = frame.get<void*>(CallFrame::callerFrameOffset());
} else {
CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock);
- unsigned callBytecodeIndex = trueCaller->bytecodeIndex;
+ unsigned callBytecodeIndex = trueCaller->bytecodeIndex();
MacroAssemblerCodePtr<JSInternalPtrTag> jumpTarget;
switch (trueCallerCallKind) {
@@ -799,8 +799,8 @@
RELEASE_ASSERT_NOT_REACHED();
}
- if (trueCaller->inlineCallFrame)
- callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
+ if (trueCaller->inlineCallFrame())
+ callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame()->stackOffset * sizeof(EncodedJSValue);
void* targetAddress = jumpTarget.executableAddress();
#if USE(POINTER_PROFILING)
@@ -822,7 +822,7 @@
ASSERT(callerFrame);
frame.set<void*>(inlineCallFrame->callerFrameOffset(), callerFrame);
#if USE(JSVALUE64)
- uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
+ uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, TagOffset, locationBits);
if (!inlineCallFrame->isClosureCall)
frame.setOperand(inlineCallFrame->stackOffset + CallFrameSlot::callee, JSValue(inlineCallFrame->calleeConstant()));
@@ -839,7 +839,7 @@
// Don't need to set the toplevel code origin if we only did inline tail calls
if (codeOrigin) {
#if USE(JSVALUE64)
- uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
+ uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
#else
- const Instruction* instruction = outermostBaselineCodeBlock->instructions().at(codeOrigin->bytecodeIndex).ptr();
+ const Instruction* instruction = outermostBaselineCodeBlock->instructions().at(codeOrigin->bytecodeIndex()).ptr();
uint32_t locationBits = CallSiteIndex(instruction).bits();
@@ -868,8 +868,9 @@
vm.heap.writeBarrier(inlineCallFrame->baselineCodeBlock.get());
}
- if (exit.m_codeOrigin.inlineCallFrame)
- context.fp() = context.fp<uint8_t*>() + exit.m_codeOrigin.inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
+ auto* exitInlineCallFrame = exit.m_codeOrigin.inlineCallFrame();
+ if (exitInlineCallFrame)
+ context.fp() = context.fp<uint8_t*>() + exitInlineCallFrame->stackOffset * sizeof(EncodedJSValue);
void* jumpTarget = exitState->jumpTarget;
ASSERT(jumpTarget);
@@ -889,7 +890,7 @@
CodeBlock* codeBlock = exec->codeBlock();
CodeBlock* alternative = codeBlock->alternative();
ExitKind kind = exit.m_kind;
- unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex;
+ unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex();
dataLog("Speculation failure in ", *codeBlock);
dataLog(" @ exit #", osrExitIndex, " (bc#", bytecodeOffset, ", ", exitKindToString(kind), ") with ");
@@ -1099,7 +1100,7 @@
SpeculationFailureDebugInfo* debugInfo = new SpeculationFailureDebugInfo;
debugInfo->codeBlock = jit.codeBlock();
debugInfo->kind = exit.m_kind;
- debugInfo->bytecodeOffset = exit.m_codeOrigin.bytecodeIndex;
+ debugInfo->bytecodeOffset = exit.m_codeOrigin.bytecodeIndex();
jit.debugCall(vm, debugOperationPrintSpeculationFailure, debugInfo);
}
@@ -1148,7 +1149,7 @@
// property access, or due to an array profile).
CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
- if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex)) {
+ if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex())) {
#if USE(JSVALUE64)
GPRReg usedRegister;
if (exit.m_jsValueSource.isAddress())
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExitBase.cpp b/Source/JavaScriptCore/dfg/DFGOSRExitBase.cpp
index 1532aa3..ee63e79 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExitBase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExitBase.cpp
@@ -43,7 +43,7 @@
m_codeOriginForExitProfile, profiledCodeBlock);
if (sourceProfiledCodeBlock) {
ExitingInlineKind inlineKind;
- if (m_codeOriginForExitProfile.inlineCallFrame)
+ if (m_codeOriginForExitProfile.inlineCallFrame())
inlineKind = ExitFromInlined;
else
inlineKind = ExitFromNotInlined;
@@ -52,7 +52,7 @@
if (m_wasHoisted)
site = FrequentExitSite(HoistingFailed, jitType, inlineKind);
else
- site = FrequentExitSite(m_codeOriginForExitProfile.bytecodeIndex, m_kind, jitType, inlineKind);
+ site = FrequentExitSite(m_codeOriginForExitProfile.bytecodeIndex(), m_kind, jitType, inlineKind);
ExitProfile::add(sourceProfiledCodeBlock, site);
}
}
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp b/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
index fd6fdb5..60331f5 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
@@ -73,7 +73,7 @@
AssemblyHelpers::JumpList loopThreshold;
- for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
+ for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame()) {
loopThreshold.append(
jit.branchTest8(
AssemblyHelpers::NonZero,
@@ -145,8 +145,8 @@
jit.storePtr(AssemblyHelpers::TrustedImmPtr(jit.baselineCodeBlock()), AssemblyHelpers::addressFor((VirtualRegister)CallFrameSlot::codeBlock));
const CodeOrigin* codeOrigin;
- for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame; codeOrigin = codeOrigin->inlineCallFrame->getCallerSkippingTailCalls()) {
- InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
+ for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame(); codeOrigin = codeOrigin->inlineCallFrame()->getCallerSkippingTailCalls()) {
+ InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame();
CodeBlock* baselineCodeBlock = jit.baselineCodeBlockFor(*codeOrigin);
InlineCallFrame::Kind trueCallerCallKind;
CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingTailCalls(&trueCallerCallKind);
@@ -166,7 +166,7 @@
callerFrameGPR = GPRInfo::regT3;
} else {
CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(*trueCaller);
- unsigned callBytecodeIndex = trueCaller->bytecodeIndex;
+ unsigned callBytecodeIndex = trueCaller->bytecodeIndex();
void* jumpTarget = nullptr;
switch (trueCallerCallKind) {
@@ -198,9 +198,9 @@
RELEASE_ASSERT_NOT_REACHED();
}
- if (trueCaller->inlineCallFrame) {
+ if (trueCaller->inlineCallFrame()) {
jit.addPtr(
- AssemblyHelpers::TrustedImm32(trueCaller->inlineCallFrame->stackOffset * sizeof(EncodedJSValue)),
+ AssemblyHelpers::TrustedImm32(trueCaller->inlineCallFrame()->stackOffset * sizeof(EncodedJSValue)),
GPRInfo::callFrameRegister,
GPRInfo::regT3);
callerFrameGPR = GPRInfo::regT3;
@@ -231,7 +231,7 @@
jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->argumentCountIncludingThis), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount)));
#if USE(JSVALUE64)
jit.storePtr(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
- uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
+ uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount)));
if (!inlineCallFrame->isClosureCall)
jit.store64(AssemblyHelpers::TrustedImm64(JSValue::encode(JSValue(inlineCallFrame->calleeConstant()))), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::callee)));
@@ -249,7 +249,7 @@
// Don't need to set the toplevel code origin if we only did inline tail calls
if (codeOrigin) {
#if USE(JSVALUE64)
- uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
+ uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
#else
- const Instruction* instruction = jit.baselineCodeBlock()->instructions().at(codeOrigin->bytecodeIndex).ptr();
+ const Instruction* instruction = jit.baselineCodeBlock()->instructions().at(codeOrigin->bytecodeIndex()).ptr();
uint32_t locationBits = CallSiteIndex(instruction).bits();
@@ -304,13 +304,14 @@
}
}
- if (exit.m_codeOrigin.inlineCallFrame)
- jit.addPtr(AssemblyHelpers::TrustedImm32(exit.m_codeOrigin.inlineCallFrame->stackOffset * sizeof(EncodedJSValue)), GPRInfo::callFrameRegister);
+ auto* exitInlineCallFrame = exit.m_codeOrigin.inlineCallFrame();
+ if (exitInlineCallFrame)
+ jit.addPtr(AssemblyHelpers::TrustedImm32(exitInlineCallFrame->stackOffset * sizeof(EncodedJSValue)), GPRInfo::callFrameRegister);
CodeBlock* codeBlockForExit = jit.baselineCodeBlockFor(exit.m_codeOrigin);
ASSERT(codeBlockForExit == codeBlockForExit->baselineVersion());
ASSERT(codeBlockForExit->jitType() == JITCode::BaselineJIT);
- CodeLocationLabel<JSEntryPtrTag> codeLocation = codeBlockForExit->jitCodeMap().find(exit.m_codeOrigin.bytecodeIndex);
+ CodeLocationLabel<JSEntryPtrTag> codeLocation = codeBlockForExit->jitCodeMap().find(exit.m_codeOrigin.bytecodeIndex());
ASSERT(codeLocation);
void* jumpTarget = codeLocation.retagged<OSRExitPtrTag>().executableAddress();
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp b/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp
index a94170a..a0b9d8b 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp
@@ -41,8 +41,8 @@
VM& vm = exec->vm();
DeferGC deferGC(vm.heap);
- for (; codeOrigin.inlineCallFrame; codeOrigin = codeOrigin.inlineCallFrame->directCaller) {
- CodeBlock* codeBlock = codeOrigin.inlineCallFrame->baselineCodeBlock.get();
+ for (; codeOrigin.inlineCallFrame(); codeOrigin = codeOrigin.inlineCallFrame()->directCaller) {
+ CodeBlock* codeBlock = codeOrigin.inlineCallFrame()->baselineCodeBlock.get();
JITWorklist::ensureGlobalWorklist().compileNow(codeBlock);
}
}
diff --git a/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp b/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp
index 0dc646b..0345339 100644
--- a/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp
@@ -1268,10 +1268,10 @@
forEachEscapee([&] (HashMap<Node*, Allocation>& escapees, Node* where) {
for (Node* allocation : escapees.keys()) {
- InlineCallFrame* inlineCallFrame = allocation->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = allocation->origin.semantic.inlineCallFrame();
if (!inlineCallFrame)
continue;
- if ((inlineCallFrame->isClosureCall || inlineCallFrame->isVarargs()) && inlineCallFrame != where->origin.semantic.inlineCallFrame)
+ if ((inlineCallFrame->isClosureCall || inlineCallFrame->isVarargs()) && inlineCallFrame != where->origin.semantic.inlineCallFrame())
m_sinkCandidates.remove(allocation);
}
});
diff --git a/Source/JavaScriptCore/dfg/DFGOperations.cpp b/Source/JavaScriptCore/dfg/DFGOperations.cpp
index 4cfac1d..9f0e804 100644
--- a/Source/JavaScriptCore/dfg/DFGOperations.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOperations.cpp
@@ -2895,9 +2895,10 @@
}
CodeOrigin origin = exec->codeOrigin();
+ auto* inlineCallFrame = origin.inlineCallFrame();
bool strictMode;
- if (origin.inlineCallFrame)
- strictMode = origin.inlineCallFrame->baselineCodeBlock->isStrictMode();
+ if (inlineCallFrame)
+ strictMode = inlineCallFrame->baselineCodeBlock->isStrictMode();
else
strictMode = exec->codeBlock()->isStrictMode();
PutPropertySlot slot(scope, strictMode, PutPropertySlot::UnknownContext, isInitialization(getPutInfo.initializationMode()));
@@ -3044,7 +3045,7 @@
ASSERT(JITCode::isOptimizingJIT(optimizedCodeBlock->jitType()));
bool didTryToEnterIntoInlinedLoops = false;
- for (InlineCallFrame* inlineCallFrame = exit->m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
+ for (InlineCallFrame* inlineCallFrame = exit->m_codeOrigin.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame()) {
if (inlineCallFrame->baselineCodeBlock->ownerExecutable()->didTryToEnterInLoop()) {
didTryToEnterIntoInlinedLoops = true;
break;
diff --git a/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h b/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
index 0d8bffe..a58e179 100644
--- a/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
+++ b/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
@@ -130,7 +130,7 @@
// This reads from a constant buffer.
return;
}
- InlineCallFrame* inlineCallFrame = spread->child1()->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = spread->child1()->origin.semantic.inlineCallFrame();
unsigned numberOfArgumentsToSkip = spread->child1()->numberOfArgumentsToSkip();
readFrame(inlineCallFrame, numberOfArgumentsToSkip);
};
@@ -192,9 +192,9 @@
} else {
InlineCallFrame* inlineCallFrame;
if (m_node->hasArgumentsChild() && m_node->argumentsChild())
- inlineCallFrame = m_node->argumentsChild()->origin.semantic.inlineCallFrame;
+ inlineCallFrame = m_node->argumentsChild()->origin.semantic.inlineCallFrame();
else
- inlineCallFrame = m_node->origin.semantic.inlineCallFrame;
+ inlineCallFrame = m_node->origin.semantic.inlineCallFrame();
unsigned numberOfArgumentsToSkip = 0;
if (m_node->op() == GetMyArgumentByVal || m_node->op() == GetMyArgumentByValOutOfBounds) {
@@ -220,7 +220,7 @@
}
case GetArgument: {
- InlineCallFrame* inlineCallFrame = m_node->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = m_node->origin.semantic.inlineCallFrame();
unsigned indexIncludingThis = m_node->argumentIndex();
if (!inlineCallFrame) {
if (indexIncludingThis < static_cast<unsigned>(m_graph.m_codeBlock->numParameters()))
@@ -248,7 +248,7 @@
m_read(VirtualRegister(i));
// Read all of the inline arguments and call frame headers that we didn't already capture.
- for (InlineCallFrame* inlineCallFrame = m_node->origin.semantic.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->getCallerInlineFrameSkippingTailCalls()) {
+ for (InlineCallFrame* inlineCallFrame = m_node->origin.semantic.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->getCallerInlineFrameSkippingTailCalls()) {
if (!inlineCallFrame->isStrictMode()) {
for (unsigned i = inlineCallFrame->argumentsWithFixup.size(); i--;)
m_read(VirtualRegister(inlineCallFrame->stackOffset + virtualRegisterForArgument(i).offset()));
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
index 193bae1..abb2d09 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
@@ -187,19 +187,20 @@
void SpeculativeJIT::emitGetLength(CodeOrigin origin, GPRReg lengthGPR, bool includeThis)
{
- emitGetLength(origin.inlineCallFrame, lengthGPR, includeThis);
+ emitGetLength(origin.inlineCallFrame(), lengthGPR, includeThis);
}
void SpeculativeJIT::emitGetCallee(CodeOrigin origin, GPRReg calleeGPR)
{
- if (origin.inlineCallFrame) {
- if (origin.inlineCallFrame->isClosureCall) {
+ auto* inlineCallFrame = origin.inlineCallFrame();
+ if (inlineCallFrame) {
+ if (inlineCallFrame->isClosureCall) {
m_jit.loadPtr(
- JITCompiler::addressFor(origin.inlineCallFrame->calleeRecovery.virtualRegister()),
+ JITCompiler::addressFor(inlineCallFrame->calleeRecovery.virtualRegister()),
calleeGPR);
} else {
m_jit.move(
- TrustedImmPtr::weakPointer(m_jit.graph(), origin.inlineCallFrame->calleeRecovery.constant().asCell()),
+ TrustedImmPtr::weakPointer(m_jit.graph(), inlineCallFrame->calleeRecovery.constant().asCell()),
calleeGPR);
}
} else
@@ -1863,7 +1864,7 @@
dataLogF(
"SpeculativeJIT generating Node @%d (bc#%u) at JIT offset 0x%x",
(int)m_currentNode->index(),
- m_currentNode->origin.semantic.bytecodeIndex, m_jit.debugOffset());
+ m_currentNode->origin.semantic.bytecodeIndex(), m_jit.debugOffset());
dataLog("\n");
}
@@ -3937,8 +3938,9 @@
#endif
CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
JITAddIC* addIC = m_jit.codeBlock()->addJITAddIC(arithProfile, instruction);
auto repatchingFunction = operationValueAddOptimize;
auto nonRepatchingFunction = operationValueAdd;
@@ -3961,8 +3963,9 @@
#endif
CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
JITSubIC* subIC = m_jit.codeBlock()->addJITSubIC(arithProfile, instruction);
auto repatchingFunction = operationValueSubOptimize;
auto nonRepatchingFunction = operationValueSub;
@@ -4553,8 +4556,9 @@
void SpeculativeJIT::compileValueNegate(Node* node)
{
CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
JITNegIC* negIC = m_jit.codeBlock()->addJITNegIC(arithProfile, instruction);
auto repatchingFunction = operationArithNegateOptimize;
auto nonRepatchingFunction = operationArithNegate;
@@ -4776,8 +4780,9 @@
#endif
CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
JITMulIC* mulIC = m_jit.codeBlock()->addJITMulIC(arithProfile, instruction);
auto repatchingFunction = operationValueMulOptimize;
auto nonRepatchingFunction = operationValueMul;
@@ -7231,9 +7236,9 @@
LoadVarargsData* data = node->loadVarargsData();
InlineCallFrame* inlineCallFrame;
if (node->child1())
- inlineCallFrame = node->child1()->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->child1()->origin.semantic.inlineCallFrame();
else
- inlineCallFrame = node->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->origin.semantic.inlineCallFrame();
GPRTemporary length(this);
JSValueRegsTemporary temp(this);
@@ -7396,9 +7401,10 @@
unsigned knownLength;
bool lengthIsKnown; // if false, lengthGPR will have the length.
- if (node->origin.semantic.inlineCallFrame
- && !node->origin.semantic.inlineCallFrame->isVarargs()) {
- knownLength = node->origin.semantic.inlineCallFrame->argumentCountIncludingThis - 1;
+ auto* inlineCallFrame = node->origin.semantic.inlineCallFrame();
+ if (inlineCallFrame
+ && !inlineCallFrame->isVarargs()) {
+ knownLength = inlineCallFrame->argumentCountIncludingThis - 1;
lengthIsKnown = true;
} else {
knownLength = UINT_MAX;
@@ -7471,17 +7477,17 @@
slowPath, this, resultGPR, structure, lengthGPR, minCapacity);
addSlowPathGenerator(WTFMove(generator));
}
-
- if (node->origin.semantic.inlineCallFrame) {
- if (node->origin.semantic.inlineCallFrame->isClosureCall) {
+
+ if (inlineCallFrame) {
+ if (inlineCallFrame->isClosureCall) {
m_jit.loadPtr(
JITCompiler::addressFor(
- node->origin.semantic.inlineCallFrame->calleeRecovery.virtualRegister()),
+ inlineCallFrame->calleeRecovery.virtualRegister()),
scratch1GPR);
} else {
m_jit.move(
TrustedImmPtr::weakPointer(
- m_jit.graph(), node->origin.semantic.inlineCallFrame->calleeRecovery.constant().asCell()),
+ m_jit.graph(), inlineCallFrame->calleeRecovery.constant().asCell()),
scratch1GPR);
}
} else
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
index ca3b95e..b585979 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
@@ -618,9 +618,9 @@
JITCompiler::JumpList slowCase;
InlineCallFrame* inlineCallFrame;
if (node->child3())
- inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame();
else
- inlineCallFrame = node->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->origin.semantic.inlineCallFrame();
// emitSetupVarargsFrameFastCase modifies the stack pointer if it succeeds.
emitSetupVarargsFrameFastCase(*m_jit.vm(), m_jit, scratchGPR2, scratchGPR1, scratchGPR2, scratchGPR3, inlineCallFrame, data->firstVarArgOffset, slowCase);
JITCompiler::Jump done = m_jit.jump();
@@ -768,10 +768,11 @@
JITCompiler::JumpList slowPath;
CodeOrigin staticOrigin = node->origin.semantic;
- ASSERT(!isTail || !staticOrigin.inlineCallFrame || !staticOrigin.inlineCallFrame->getCallerSkippingTailCalls());
- ASSERT(!isEmulatedTail || (staticOrigin.inlineCallFrame && staticOrigin.inlineCallFrame->getCallerSkippingTailCalls()));
+ InlineCallFrame* staticInlineCallFrame = staticOrigin.inlineCallFrame();
+ ASSERT(!isTail || !staticInlineCallFrame || !staticInlineCallFrame->getCallerSkippingTailCalls());
+ ASSERT(!isEmulatedTail || (staticInlineCallFrame && staticInlineCallFrame->getCallerSkippingTailCalls()));
CodeOrigin dynamicOrigin =
- isEmulatedTail ? *staticOrigin.inlineCallFrame->getCallerSkippingTailCalls() : staticOrigin;
+ isEmulatedTail ? *staticInlineCallFrame->getCallerSkippingTailCalls() : staticOrigin;
CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(dynamicOrigin, m_stream->size());
CallLinkInfo* info = m_jit.codeBlock()->addCallLinkInfo();
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
index 93146f5..892cd3f 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
@@ -579,9 +579,9 @@
JITCompiler::JumpList slowCase;
InlineCallFrame* inlineCallFrame;
if (node->child3())
- inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame();
else
- inlineCallFrame = node->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->origin.semantic.inlineCallFrame();
// emitSetupVarargsFrameFastCase modifies the stack pointer if it succeeds.
emitSetupVarargsFrameFastCase(*m_jit.vm(), m_jit, scratchGPR2, scratchGPR1, scratchGPR2, scratchGPR3, inlineCallFrame, data->firstVarArgOffset, slowCase);
JITCompiler::Jump done = m_jit.jump();
@@ -720,10 +720,11 @@
}
CodeOrigin staticOrigin = node->origin.semantic;
- ASSERT(!isTail || !staticOrigin.inlineCallFrame || !staticOrigin.inlineCallFrame->getCallerSkippingTailCalls());
- ASSERT(!isEmulatedTail || (staticOrigin.inlineCallFrame && staticOrigin.inlineCallFrame->getCallerSkippingTailCalls()));
+ InlineCallFrame* staticInlineCallFrame = staticOrigin.inlineCallFrame();
+ ASSERT(!isTail || !staticInlineCallFrame || !staticInlineCallFrame->getCallerSkippingTailCalls());
+ ASSERT(!isEmulatedTail || (staticInlineCallFrame && staticInlineCallFrame->getCallerSkippingTailCalls()));
CodeOrigin dynamicOrigin =
- isEmulatedTail ? *staticOrigin.inlineCallFrame->getCallerSkippingTailCalls() : staticOrigin;
+ isEmulatedTail ? *staticInlineCallFrame->getCallerSkippingTailCalls() : staticOrigin;
CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(dynamicOrigin, m_stream->size());
@@ -5014,7 +5015,7 @@
Vector<SilentRegisterSavePlan> savePlans;
silentSpillAllRegistersImpl(false, savePlans, InvalidGPRReg);
- unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex;
+ unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
addSlowPathGeneratorLambda([=]() {
callTierUp.link(&m_jit);
@@ -5043,12 +5044,12 @@
}
case CheckTierUpAndOSREnter: {
- ASSERT(!node->origin.semantic.inlineCallFrame);
+ ASSERT(!node->origin.semantic.inlineCallFrame());
GPRTemporary temp(this);
GPRReg tempGPR = temp.gpr();
- unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex;
+ unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
auto triggerIterator = m_jit.jitCode()->tierUpEntryTriggers.find(bytecodeIndex);
DFG_ASSERT(m_jit.graph(), node, triggerIterator != m_jit.jitCode()->tierUpEntryTriggers.end());
JITCode::TriggerReason* forceEntryTrigger = &(m_jit.jitCode()->tierUpEntryTriggers.find(bytecodeIndex)->value);
diff --git a/Source/JavaScriptCore/dfg/DFGTierUpCheckInjectionPhase.cpp b/Source/JavaScriptCore/dfg/DFGTierUpCheckInjectionPhase.cpp
index c5067ef..b5e890a 100644
--- a/Source/JavaScriptCore/dfg/DFGTierUpCheckInjectionPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGTierUpCheckInjectionPhase.cpp
@@ -108,7 +108,7 @@
tierUpType = CheckTierUpInLoop;
insertionSet.insertNode(nodeIndex + 1, SpecNone, tierUpType, origin);
- unsigned bytecodeIndex = origin.semantic.bytecodeIndex;
+ unsigned bytecodeIndex = origin.semantic.bytecodeIndex();
if (canOSREnter)
m_graph.m_plan.tierUpAndOSREnterBytecodes().append(bytecodeIndex);
@@ -170,7 +170,7 @@
ASSERT(node->op() == LoopHint);
NodeOrigin origin = node->origin;
- if (level != FTL::CanCompileAndOSREnter || origin.semantic.inlineCallFrame)
+ if (level != FTL::CanCompileAndOSREnter || origin.semantic.inlineCallFrame())
return false;
// We only put OSR checks for the first LoopHint in the block. Note that
@@ -194,7 +194,7 @@
continue;
if (const NaturalLoop* loop = naturalLoops.innerMostLoopOf(block)) {
- unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex;
+ unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
naturalLoopsToLoopHint.add(loop, bytecodeIndex);
}
break;
diff --git a/Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp b/Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp
index abbce49..5249f7d 100644
--- a/Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp
@@ -147,7 +147,7 @@
auto checkOp = CheckStructure;
if (SpecCellCheck & SpecEmpty) {
VirtualRegister local = node->variableAccessData()->local();
- auto* inlineCallFrame = node->origin.semantic.inlineCallFrame;
+ auto* inlineCallFrame = node->origin.semantic.inlineCallFrame();
if ((local - (inlineCallFrame ? inlineCallFrame->stackOffset : 0)) == virtualRegisterForArgument(0)) {
// |this| can be the TDZ value. The call entrypoint won't have |this| as TDZ,
// but a catch or a loop OSR entry may have |this| be TDZ.
diff --git a/Source/JavaScriptCore/dfg/DFGVariableEventStream.cpp b/Source/JavaScriptCore/dfg/DFGVariableEventStream.cpp
index 57a175d..9e65e47 100644
--- a/Source/JavaScriptCore/dfg/DFGVariableEventStream.cpp
+++ b/Source/JavaScriptCore/dfg/DFGVariableEventStream.cpp
@@ -147,8 +147,9 @@
}
};
- if (codeOrigin.inlineCallFrame)
- numVariables = baselineCodeBlockForInlineCallFrame(codeOrigin.inlineCallFrame)->numCalleeLocals() + VirtualRegister(codeOrigin.inlineCallFrame->stackOffset).toLocal() + 1;
+ auto* inlineCallFrame = codeOrigin.inlineCallFrame();
+ if (inlineCallFrame)
+ numVariables = baselineCodeBlockForInlineCallFrame(inlineCallFrame)->numCalleeLocals() + VirtualRegister(inlineCallFrame->stackOffset).toLocal() + 1;
else
numVariables = baselineCodeBlock->numCalleeLocals();
diff --git a/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp b/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
index a1cfc67..2d9b7d7 100644
--- a/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
+++ b/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
@@ -1902,8 +1902,9 @@
}
CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(m_node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = m_node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
auto repatchingFunction = operationValueAddOptimize;
auto nonRepatchingFunction = operationValueAdd;
compileBinaryMathIC<JITAddGenerator>(arithProfile, instruction, repatchingFunction, nonRepatchingFunction);
@@ -1921,8 +1922,9 @@
}
CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(m_node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = m_node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
auto repatchingFunction = operationValueSubOptimize;
auto nonRepatchingFunction = operationValueSub;
compileBinaryMathIC<JITSubGenerator>(arithProfile, instruction, repatchingFunction, nonRepatchingFunction);
@@ -1940,8 +1942,9 @@
}
CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(m_node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = m_node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
auto repatchingFunction = operationValueMulOptimize;
auto nonRepatchingFunction = operationValueMul;
compileBinaryMathIC<JITMulGenerator>(arithProfile, instruction, repatchingFunction, nonRepatchingFunction);
@@ -2201,8 +2204,9 @@
}
CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(m_node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = m_node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
auto repatchingFunction = operationValueSubOptimize;
auto nonRepatchingFunction = operationValueSub;
compileBinaryMathIC<JITSubGenerator>(arithProfile, instruction, repatchingFunction, nonRepatchingFunction);
@@ -2833,8 +2837,9 @@
{
DFG_ASSERT(m_graph, m_node, m_node->child1().useKind() == UntypedUse);
CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(m_node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = m_node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
auto repatchingFunction = operationArithNegateOptimize;
auto nonRepatchingFunction = operationArithNegate;
compileUnaryMathIC<JITNegGenerator>(arithProfile, instruction, repatchingFunction, nonRepatchingFunction);
@@ -4242,7 +4247,7 @@
void compileGetMyArgumentByVal()
{
- InlineCallFrame* inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame();
LValue originalIndex = lowInt32(m_node->child2());
@@ -5816,7 +5821,7 @@
CheckValue* lengthCheck = nullptr;
if (use->op() == PhantomSpread) {
if (use->child1()->op() == PhantomCreateRest) {
- InlineCallFrame* inlineCallFrame = use->child1()->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = use->child1()->origin.semantic.inlineCallFrame();
unsigned numberOfArgumentsToSkip = use->child1()->numberOfArgumentsToSkip();
LValue spreadLength = cachedSpreadLengths.ensure(inlineCallFrame, [&] () {
return getSpreadLengthFromInlineCallFrame(inlineCallFrame, numberOfArgumentsToSkip);
@@ -5857,7 +5862,7 @@
index = m_out.add(index, m_out.constIntPtr(array->length()));
} else {
RELEASE_ASSERT(use->child1()->op() == PhantomCreateRest);
- InlineCallFrame* inlineCallFrame = use->child1()->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = use->child1()->origin.semantic.inlineCallFrame();
unsigned numberOfArgumentsToSkip = use->child1()->numberOfArgumentsToSkip();
LValue length = m_out.zeroExtPtr(cachedSpreadLengths.get(inlineCallFrame));
@@ -6049,7 +6054,7 @@
LBasicBlock continuation = m_out.newBlock();
LBasicBlock lastNext = m_out.insertNewBlocksBefore(loopHeader);
- InlineCallFrame* inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame();
unsigned numberOfArgumentsToSkip = m_node->child1()->numberOfArgumentsToSkip();
LValue sourceStart = getArgumentsStart(inlineCallFrame, numberOfArgumentsToSkip);
LValue length = getSpreadLengthFromInlineCallFrame(inlineCallFrame, numberOfArgumentsToSkip);
@@ -8045,7 +8050,7 @@
}
RELEASE_ASSERT(target->op() == PhantomCreateRest);
- InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame();
unsigned numberOfArgumentsToSkip = target->numberOfArgumentsToSkip();
LValue length = cachedSpreadLengths.ensure(inlineCallFrame, [&] () {
return m_out.zeroExtPtr(this->getSpreadLengthFromInlineCallFrame(inlineCallFrame, numberOfArgumentsToSkip));
@@ -8208,7 +8213,7 @@
}
RELEASE_ASSERT(target->op() == PhantomCreateRest);
- InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame();
unsigned numberOfArgumentsToSkip = target->numberOfArgumentsToSkip();
@@ -8488,9 +8493,9 @@
CCallHelpers::JumpList slowCase;
InlineCallFrame* inlineCallFrame;
if (node->child3())
- inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame();
else
- inlineCallFrame = node->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->origin.semantic.inlineCallFrame();
// emitSetupVarargsFrameFastCase modifies the stack pointer if it succeeds.
emitSetupVarargsFrameFastCase(*vm, jit, scratchGPR2, scratchGPR1, scratchGPR2, scratchGPR3, inlineCallFrame, data->firstVarArgOffset, slowCase);
@@ -8734,9 +8739,9 @@
LoadVarargsData* data = m_node->loadVarargsData();
InlineCallFrame* inlineCallFrame;
if (m_node->child1())
- inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame;
+ inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame();
else
- inlineCallFrame = m_node->origin.semantic.inlineCallFrame;
+ inlineCallFrame = m_node->origin.semantic.inlineCallFrame();
LValue length = nullptr;
LValue lengthIncludingThis = nullptr;
@@ -8883,7 +8888,7 @@
}
ASSERT(target->op() == PhantomCreateRest);
- InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame();
unsigned numberOfArgumentsToSkip = target->numberOfArgumentsToSkip();
spreadLengths.append(cachedSpreadLengths.ensure(inlineCallFrame, [&] () {
return this->getSpreadLengthFromInlineCallFrame(inlineCallFrame, numberOfArgumentsToSkip);
@@ -8934,7 +8939,7 @@
}
RELEASE_ASSERT(target->op() == PhantomCreateRest);
- InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame();
LValue sourceStart = this->getArgumentsStart(inlineCallFrame, target->numberOfArgumentsToSkip());
LValue spreadLength = m_out.zeroExtPtr(cachedSpreadLengths.get(inlineCallFrame));
@@ -11467,12 +11472,12 @@
ArgumentsLength getArgumentsLength()
{
- return getArgumentsLength(m_node->origin.semantic.inlineCallFrame);
+ return getArgumentsLength(m_node->origin.semantic.inlineCallFrame());
}
LValue getCurrentCallee()
{
- if (InlineCallFrame* frame = m_node->origin.semantic.inlineCallFrame) {
+ if (InlineCallFrame* frame = m_node->origin.semantic.inlineCallFrame()) {
if (frame->isClosureCall)
return m_out.loadPtr(addressFor(frame->calleeRecovery.virtualRegister()));
return weakPointer(frame->calleeRecovery.constant().asCell());
@@ -11488,7 +11493,7 @@
LValue getArgumentsStart()
{
- return getArgumentsStart(m_node->origin.semantic.inlineCallFrame);
+ return getArgumentsStart(m_node->origin.semantic.inlineCallFrame());
}
template<typename Functor>
@@ -16537,7 +16542,7 @@
// to baz and baz is inlined in bar. And then baz makes a tail-call to jaz,
// and jaz is inlined in baz. We want the callframe for jaz to appear to
// have caller be bar.
- codeOrigin = *codeOrigin.inlineCallFrame->getCallerSkippingTailCalls();
+ codeOrigin = *codeOrigin.inlineCallFrame()->getCallerSkippingTailCalls();
}
return codeOrigin;
diff --git a/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp b/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp
index 38baa8c..08ae463 100644
--- a/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp
+++ b/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp
@@ -286,7 +286,7 @@
if (exit.m_kind == BadCache || exit.m_kind == BadIndexingType) {
CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
- if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex)) {
+ if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex())) {
jit.load32(MacroAssembler::Address(GPRInfo::regT0, JSCell::structureIDOffset()), GPRInfo::regT1);
jit.store32(GPRInfo::regT1, arrayProfile->addressOfLastSeenStructureID());
diff --git a/Source/JavaScriptCore/ftl/FTLOperations.cpp b/Source/JavaScriptCore/ftl/FTLOperations.cpp
index acfc6e7..35ecec6 100644
--- a/Source/JavaScriptCore/ftl/FTLOperations.cpp
+++ b/Source/JavaScriptCore/ftl/FTLOperations.cpp
@@ -278,7 +278,7 @@
case PhantomCreateRest:
case PhantomDirectArguments:
case PhantomClonedArguments: {
- if (!materialization->origin().inlineCallFrame) {
+ if (!materialization->origin().inlineCallFrame()) {
switch (materialization->type()) {
case PhantomDirectArguments:
return DirectArguments::createByCopying(exec);
@@ -303,7 +303,7 @@
// First figure out the argument count. If there isn't one then we represent the machine frame.
unsigned argumentCount = 0;
- if (materialization->origin().inlineCallFrame->isVarargs()) {
+ if (materialization->origin().inlineCallFrame()->isVarargs()) {
for (unsigned i = materialization->properties().size(); i--;) {
const ExitPropertyValue& property = materialization->properties()[i];
if (property.location() != PromotedLocationDescriptor(ArgumentCountPLoc))
@@ -312,11 +312,11 @@
break;
}
} else
- argumentCount = materialization->origin().inlineCallFrame->argumentCountIncludingThis;
+ argumentCount = materialization->origin().inlineCallFrame()->argumentCountIncludingThis;
RELEASE_ASSERT(argumentCount);
JSFunction* callee = nullptr;
- if (materialization->origin().inlineCallFrame->isClosureCall) {
+ if (materialization->origin().inlineCallFrame()->isClosureCall) {
for (unsigned i = materialization->properties().size(); i--;) {
const ExitPropertyValue& property = materialization->properties()[i];
if (property.location() != PromotedLocationDescriptor(ArgumentsCalleePLoc))
@@ -326,7 +326,7 @@
break;
}
} else
- callee = materialization->origin().inlineCallFrame->calleeConstant();
+ callee = materialization->origin().inlineCallFrame()->calleeConstant();
RELEASE_ASSERT(callee);
CodeBlock* codeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(
@@ -474,7 +474,7 @@
// For now, we use array allocation profile in the actual CodeBlock. It is OK since current NewArrayBuffer
// and PhantomNewArrayBuffer are always bound to a specific op_new_array_buffer.
CodeBlock* codeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(materialization->origin(), exec->codeBlock()->baselineAlternative());
- const Instruction* currentInstruction = codeBlock->instructions().at(materialization->origin().bytecodeIndex).ptr();
+ const Instruction* currentInstruction = codeBlock->instructions().at(materialization->origin().bytecodeIndex()).ptr();
if (!currentInstruction->is<OpNewArrayBuffer>()) {
// This case can happen if Object.keys, an OpCall is first converted into a NewArrayBuffer which is then converted into a PhantomNewArrayBuffer.
// There is no need to update the array allocation profile in that case.
diff --git a/Source/JavaScriptCore/interpreter/CallFrame.cpp b/Source/JavaScriptCore/interpreter/CallFrame.cpp
index 256a33b..5a97723 100644
--- a/Source/JavaScriptCore/interpreter/CallFrame.cpp
+++ b/Source/JavaScriptCore/interpreter/CallFrame.cpp
@@ -156,11 +156,11 @@
if (callSiteBitsAreCodeOriginIndex()) {
ASSERT(codeBlock());
CodeOrigin codeOrigin = this->codeOrigin();
- for (InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame; inlineCallFrame;) {
+ for (InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame(); inlineCallFrame;) {
codeOrigin = inlineCallFrame->directCaller;
- inlineCallFrame = codeOrigin.inlineCallFrame;
+ inlineCallFrame = codeOrigin.inlineCallFrame();
}
- return codeOrigin.bytecodeIndex;
+ return codeOrigin.bytecodeIndex();
}
#endif
ASSERT(callSiteBitsAreBytecodeOffset());
diff --git a/Source/JavaScriptCore/interpreter/StackVisitor.cpp b/Source/JavaScriptCore/interpreter/StackVisitor.cpp
index 4d1c792..833df06 100644
--- a/Source/JavaScriptCore/interpreter/StackVisitor.cpp
+++ b/Source/JavaScriptCore/interpreter/StackVisitor.cpp
@@ -96,8 +96,8 @@
#if ENABLE(DFG_JIT)
if (m_frame.isInlinedFrame()) {
CodeOrigin codeOrigin = m_frame.inlineCallFrame()->directCaller;
- while (codeOrigin.inlineCallFrame)
- codeOrigin = codeOrigin.inlineCallFrame->directCaller;
+ while (codeOrigin.inlineCallFrame())
+ codeOrigin = codeOrigin.inlineCallFrame()->directCaller;
readNonInlinedFrame(m_frame.callFrame(), &codeOrigin);
}
#endif
@@ -144,7 +144,7 @@
}
CodeOrigin codeOrigin = codeBlock->codeOrigin(index);
- if (!codeOrigin.inlineCallFrame) {
+ if (!codeOrigin.inlineCallFrame()) {
readNonInlinedFrame(callFrame, &codeOrigin);
return;
}
@@ -177,7 +177,7 @@
} else {
m_frame.m_codeBlock = callFrame->codeBlock();
m_frame.m_bytecodeOffset = !m_frame.codeBlock() ? 0
- : codeOrigin ? codeOrigin->bytecodeIndex
+ : codeOrigin ? codeOrigin->bytecodeIndex()
: callFrame->bytecodeOffset();
}
@@ -190,7 +190,7 @@
#if ENABLE(DFG_JIT)
static int inlinedFrameOffset(CodeOrigin* codeOrigin)
{
- InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame();
int frameOffset = inlineCallFrame ? inlineCallFrame->stackOffset : 0;
return frameOffset;
}
@@ -203,7 +203,7 @@
int frameOffset = inlinedFrameOffset(codeOrigin);
bool isInlined = !!frameOffset;
if (isInlined) {
- InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame();
m_frame.m_callFrame = callFrame;
m_frame.m_inlineCallFrame = inlineCallFrame;
@@ -212,7 +212,7 @@
else
m_frame.m_argumentCountIncludingThis = inlineCallFrame->argumentCountIncludingThis;
m_frame.m_codeBlock = inlineCallFrame->baselineCodeBlock.get();
- m_frame.m_bytecodeOffset = codeOrigin->bytecodeIndex;
+ m_frame.m_bytecodeOffset = codeOrigin->bytecodeIndex();
JSFunction* callee = inlineCallFrame->calleeForCallFrame(callFrame);
m_frame.m_callee = callee;
diff --git a/Source/JavaScriptCore/interpreter/StackVisitor.h b/Source/JavaScriptCore/interpreter/StackVisitor.h
index 7b00314..b2f02f8 100644
--- a/Source/JavaScriptCore/interpreter/StackVisitor.h
+++ b/Source/JavaScriptCore/interpreter/StackVisitor.h
@@ -33,11 +33,11 @@
namespace JSC {
-struct CodeOrigin;
struct EntryFrame;
struct InlineCallFrame;
class CodeBlock;
+class CodeOrigin;
class ExecState;
class JSCell;
class JSFunction;
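A note on why the forward declaration moves from struct to class here (and in ProfilerOriginStack.h below): CodeOrigin is now defined with the class keyword, and while a mismatched class-key on a forward declaration is legal C++, some compilers warn about it. A minimal illustration (the warning number is MSVC's):

    class CodeOrigin;     // matches the new definition: class CodeOrigin { ... };
    // struct CodeOrigin; // legal, but MSVC emits warning C4099 on the mismatch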
diff --git a/Source/JavaScriptCore/jit/AssemblyHelpers.cpp b/Source/JavaScriptCore/jit/AssemblyHelpers.cpp
index 41e0d5d..5b700c1 100644
--- a/Source/JavaScriptCore/jit/AssemblyHelpers.cpp
+++ b/Source/JavaScriptCore/jit/AssemblyHelpers.cpp
@@ -44,10 +44,10 @@
ExecutableBase* AssemblyHelpers::executableFor(const CodeOrigin& codeOrigin)
{
- if (!codeOrigin.inlineCallFrame)
+ auto* inlineCallFrame = codeOrigin.inlineCallFrame();
+ if (!inlineCallFrame)
return m_codeBlock->ownerExecutable();
-
- return codeOrigin.inlineCallFrame->baselineCodeBlock->ownerExecutable();
+ return inlineCallFrame->baselineCodeBlock->ownerExecutable();
}
AssemblyHelpers::Jump AssemblyHelpers::branchIfFastTypedArray(GPRReg baseGPR)
diff --git a/Source/JavaScriptCore/jit/AssemblyHelpers.h b/Source/JavaScriptCore/jit/AssemblyHelpers.h
index f40031b..9b46fe5 100644
--- a/Source/JavaScriptCore/jit/AssemblyHelpers.h
+++ b/Source/JavaScriptCore/jit/AssemblyHelpers.h
@@ -1432,9 +1432,10 @@
bool isStrictModeFor(CodeOrigin codeOrigin)
{
- if (!codeOrigin.inlineCallFrame)
+ auto* inlineCallFrame = codeOrigin.inlineCallFrame();
+ if (!inlineCallFrame)
return codeBlock()->isStrictMode();
- return codeOrigin.inlineCallFrame->isStrictMode();
+ return inlineCallFrame->isStrictMode();
}
ECMAMode ecmaModeFor(CodeOrigin codeOrigin)
@@ -1474,7 +1475,7 @@
static VirtualRegister argumentsStart(const CodeOrigin& codeOrigin)
{
- return argumentsStart(codeOrigin.inlineCallFrame);
+ return argumentsStart(codeOrigin.inlineCallFrame());
}
static VirtualRegister argumentCount(InlineCallFrame* inlineCallFrame)
@@ -1487,7 +1488,7 @@
static VirtualRegister argumentCount(const CodeOrigin& codeOrigin)
{
- return argumentCount(codeOrigin.inlineCallFrame);
+ return argumentCount(codeOrigin.inlineCallFrame());
}
void emitLoadStructure(VM&, RegisterID source, RegisterID dest, RegisterID scratch);
diff --git a/Source/JavaScriptCore/jit/PCToCodeOriginMap.cpp b/Source/JavaScriptCore/jit/PCToCodeOriginMap.cpp
index 9cb3266..625e1f2 100644
--- a/Source/JavaScriptCore/jit/PCToCodeOriginMap.cpp
+++ b/Source/JavaScriptCore/jit/PCToCodeOriginMap.cpp
@@ -191,7 +191,7 @@
DeltaCompressionBuilder codeOriginCompressor((sizeof(intptr_t) + sizeof(int8_t) + sizeof(int8_t) + sizeof(InlineCallFrame*)) * builder.m_codeRanges.size());
CodeOrigin lastCodeOrigin(0, nullptr);
auto buildCodeOriginTable = [&] (const CodeOrigin& codeOrigin) {
- intptr_t delta = static_cast<intptr_t>(codeOrigin.bytecodeIndex) - static_cast<intptr_t>(lastCodeOrigin.bytecodeIndex);
+ intptr_t delta = static_cast<intptr_t>(codeOrigin.bytecodeIndex()) - static_cast<intptr_t>(lastCodeOrigin.bytecodeIndex());
lastCodeOrigin = codeOrigin;
if (delta > std::numeric_limits<int8_t>::max() || delta < std::numeric_limits<int8_t>::min() || delta == sentinelBytecodeDelta) {
codeOriginCompressor.write<int8_t>(sentinelBytecodeDelta);
@@ -199,10 +199,10 @@
} else
codeOriginCompressor.write<int8_t>(static_cast<int8_t>(delta));
- int8_t hasInlineCallFrameByte = codeOrigin.inlineCallFrame ? 1 : 0;
+ int8_t hasInlineCallFrameByte = codeOrigin.inlineCallFrame() ? 1 : 0;
codeOriginCompressor.write<int8_t>(hasInlineCallFrameByte);
if (hasInlineCallFrameByte)
- codeOriginCompressor.write<uintptr_t>(bitwise_cast<uintptr_t>(codeOrigin.inlineCallFrame));
+ codeOriginCompressor.write<uintptr_t>(bitwise_cast<uintptr_t>(codeOrigin.inlineCallFrame()));
};
m_pcRangeStart = linkBuffer.locationOf<NoPtrTag>(builder.m_codeRanges.first().start).dataLocation<uintptr_t>();
@@ -254,7 +254,8 @@
return WTF::nullopt;
uintptr_t currentPC = 0;
- CodeOrigin currentCodeOrigin(0, nullptr);
+ unsigned currentBytecodeIndex = 0;
+ InlineCallFrame* currentInlineCallFrame = nullptr;
DeltaCompresseionReader pcReader(m_compressedPCs, m_compressedPCBufferSize);
DeltaCompresseionReader codeOriginReader(m_compressedCodeOrigins, m_compressedCodeOriginsSize);
@@ -270,7 +271,7 @@
currentPC += delta;
}
- CodeOrigin previousOrigin = currentCodeOrigin;
+ CodeOrigin previousOrigin = CodeOrigin(currentBytecodeIndex, currentInlineCallFrame);
{
int8_t value = codeOriginReader.read<int8_t>();
intptr_t delta;
@@ -279,14 +280,14 @@
else
delta = static_cast<intptr_t>(value);
- currentCodeOrigin.bytecodeIndex = static_cast<unsigned>(static_cast<intptr_t>(currentCodeOrigin.bytecodeIndex) + delta);
+ currentBytecodeIndex = static_cast<unsigned>(static_cast<intptr_t>(currentBytecodeIndex) + delta);
int8_t hasInlineFrame = codeOriginReader.read<int8_t>();
ASSERT(hasInlineFrame == 0 || hasInlineFrame == 1);
if (hasInlineFrame)
- currentCodeOrigin.inlineCallFrame = bitwise_cast<InlineCallFrame*>(codeOriginReader.read<uintptr_t>());
+ currentInlineCallFrame = bitwise_cast<InlineCallFrame*>(codeOriginReader.read<uintptr_t>());
else
- currentCodeOrigin.inlineCallFrame = nullptr;
+ currentInlineCallFrame = nullptr;
}
if (previousPC) {
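Because CodeOrigin no longer exposes mutable bytecodeIndex/inlineCallFrame fields, the decoder above keeps the running values in plain locals and rebuilds a CodeOrigin from them when one is needed. For reference, a standalone sketch of the delta scheme being read and written in this file (illustrative names and a simplified byte buffer; the real code goes through its compression builder/reader and defines its own sentinel constant):

    #include <cstdint>
    #include <limits>
    #include <vector>

    // One byte per entry in the common case: a signed delta from the previous
    // index. A sentinel byte escapes to a full 32-bit value when the delta does
    // not fit (or would collide with the sentinel itself).
    constexpr int8_t sentinelDelta = std::numeric_limits<int8_t>::min();

    void writeIndex(std::vector<uint8_t>& out, unsigned previous, unsigned current)
    {
        int64_t delta = static_cast<int64_t>(current) - static_cast<int64_t>(previous);
        if (delta > std::numeric_limits<int8_t>::max() || delta < std::numeric_limits<int8_t>::min() || delta == sentinelDelta) {
            out.push_back(static_cast<uint8_t>(sentinelDelta));
            for (int shift = 0; shift < 32; shift += 8) // escape: full value, little-endian
                out.push_back(static_cast<uint8_t>(current >> shift));
        } else
            out.push_back(static_cast<uint8_t>(static_cast<int8_t>(delta)));
    }

    unsigned readIndex(const uint8_t*& cursor, unsigned previous)
    {
        int8_t value = static_cast<int8_t>(*cursor++);
        if (value != sentinelDelta)
            return static_cast<unsigned>(static_cast<int64_t>(previous) + value);
        unsigned full = 0;
        for (int shift = 0; shift < 32; shift += 8)
            full |= static_cast<unsigned>(*cursor++) << shift;
        return full;
    }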
diff --git a/Source/JavaScriptCore/profiler/ProfilerOriginStack.cpp b/Source/JavaScriptCore/profiler/ProfilerOriginStack.cpp
index 61dfb63..4b5c31e5 100644
--- a/Source/JavaScriptCore/profiler/ProfilerOriginStack.cpp
+++ b/Source/JavaScriptCore/profiler/ProfilerOriginStack.cpp
@@ -48,12 +48,12 @@
{
Vector<CodeOrigin> stack = codeOrigin.inlineStack();
- append(Origin(database, codeBlock, stack[0].bytecodeIndex));
+ append(Origin(database, codeBlock, stack[0].bytecodeIndex()));
for (unsigned i = 1; i < stack.size(); ++i) {
append(Origin(
- database.ensureBytecodesFor(stack[i].inlineCallFrame->baselineCodeBlock.get()),
- stack[i].bytecodeIndex));
+ database.ensureBytecodesFor(stack[i].inlineCallFrame()->baselineCodeBlock.get()),
+ stack[i].bytecodeIndex()));
}
}
diff --git a/Source/JavaScriptCore/profiler/ProfilerOriginStack.h b/Source/JavaScriptCore/profiler/ProfilerOriginStack.h
index bc81b9c..698fd58 100644
--- a/Source/JavaScriptCore/profiler/ProfilerOriginStack.h
+++ b/Source/JavaScriptCore/profiler/ProfilerOriginStack.h
@@ -33,7 +33,7 @@
namespace JSC {
class CodeBlock;
-struct CodeOrigin;
+class CodeOrigin;
namespace Profiler {
diff --git a/Source/JavaScriptCore/runtime/ErrorInstance.cpp b/Source/JavaScriptCore/runtime/ErrorInstance.cpp
index 50ca997..7b09e0a 100644
--- a/Source/JavaScriptCore/runtime/ErrorInstance.cpp
+++ b/Source/JavaScriptCore/runtime/ErrorInstance.cpp
@@ -66,8 +66,8 @@
CodeBlock* codeBlock;
CodeOrigin codeOrigin = callFrame->codeOrigin();
- if (codeOrigin && codeOrigin.inlineCallFrame)
- codeBlock = baselineCodeBlockForInlineCallFrame(codeOrigin.inlineCallFrame);
+ if (codeOrigin && codeOrigin.inlineCallFrame())
+ codeBlock = baselineCodeBlockForInlineCallFrame(codeOrigin.inlineCallFrame());
else
codeBlock = callFrame->codeBlock();
diff --git a/Source/JavaScriptCore/runtime/SamplingProfiler.cpp b/Source/JavaScriptCore/runtime/SamplingProfiler.cpp
index 4b9f2c9..e97d448 100644
--- a/Source/JavaScriptCore/runtime/SamplingProfiler.cpp
+++ b/Source/JavaScriptCore/runtime/SamplingProfiler.cpp
@@ -556,12 +556,13 @@
CodeOrigin machineOrigin;
origin.walkUpInlineStack([&] (const CodeOrigin& codeOrigin) {
machineOrigin = codeOrigin;
- appendCodeBlock(codeOrigin.inlineCallFrame ? codeOrigin.inlineCallFrame->baselineCodeBlock.get() : machineCodeBlock, codeOrigin.bytecodeIndex);
+ auto* inlineCallFrame = codeOrigin.inlineCallFrame();
+ appendCodeBlock(inlineCallFrame ? inlineCallFrame->baselineCodeBlock.get() : machineCodeBlock, codeOrigin.bytecodeIndex());
});
if (Options::collectSamplingProfilerDataForJSCShell()) {
RELEASE_ASSERT(machineOrigin.isSet());
- RELEASE_ASSERT(!machineOrigin.inlineCallFrame);
+ RELEASE_ASSERT(!machineOrigin.inlineCallFrame());
StackFrame::CodeLocation machineLocation = stackTrace.frames.last().semanticLocation;
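The walkUpInlineStack lambda above visits the given origin first and then each enclosing caller, so when the walk finishes, machineOrigin holds the outermost (machine-frame) origin; that is what the RELEASE_ASSERT above checks. A sketch of the traversal, reusing the toy types from the CallFrame.cpp note earlier (again illustrative, not the real JSC API):

    // Visit origin, then each caller, ending at the machine frame.
    template<typename Functor>
    void walkUpInlineStack(ToyCodeOrigin origin, const Functor& functor)
    {
        while (true) {
            functor(origin);
            ToyInlineCallFrame* frame = origin.inlineCallFrame;
            if (!frame)
                return; // the last origin visited has no inline call frame
            origin = frame->directCaller;
        }
    }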