Reland bytecode checkpoints since bugs have been fixed
https://bugs.webkit.org/show_bug.cgi?id=206361

Unreviewed, reland.

The watch bugs have been fixed by https://trac.webkit.org/changeset/254674
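The new apply-osr-exit stress tests added below exercise the invariant that `Function.prototype.apply` reads an array-like's `length` getter exactly once per call, even when an OSR exit lands on a checkpoint mid-apply. A minimal sketch of that invariant, runnable in any ES engine (the jsc-shell-only `noInline()` helper and the OSR-triggering warm-up loop are omitted):

```javascript
// Per the ECMAScript spec, apply() converts its array-like argument with
// CreateListFromArrayLike, which performs a single Get of "length".
// The relanded checkpoint tests verify JSC preserves this even across
// an OSR exit in the middle of the apply.
let lengthCalls = 0;
function callee() { return arguments.length; }
const arrayLike = { 0: 1, 1: 2, get length() { lengthCalls++; return 2; } };
const argCount = callee.apply(undefined, arrayLike);
// argCount is 2 and lengthCalls is 1: length was observed exactly once.
```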


JSTests:

* stress/apply-osr-exit-should-get-length-once-exceptions-occasionally.js: Added.
(expectedArgCount):
(callee):
(test):
(let.array.get length):
* stress/apply-osr-exit-should-get-length-once.js: Added.
(expectedArgCount):
(callee):
(test):
(let.array.get length):
* stress/load-varargs-then-inlined-call-and-exit-strict.js:
(checkEqual):
* stress/recursive-tail-call-with-different-argument-count.js:
* stress/rest-varargs-osr-exit-to-checkpoint.js: Added.
(foo):
(bar):

Source/JavaScriptCore:

* CMakeLists.txt:
* DerivedSources-input.xcfilelist:
* JavaScriptCore.xcodeproj/project.pbxproj:
* assembler/MacroAssemblerCodeRef.h:
* assembler/ProbeFrame.h:
(JSC::Probe::Frame::operand):
(JSC::Probe::Frame::setOperand):
* b3/testb3.h:
(populateWithInterestingValues):
(floatingPointOperands):
* bytecode/AccessCase.cpp:
(JSC::AccessCase::generateImpl):
* bytecode/AccessCaseSnippetParams.cpp:
(JSC::SlowPathCallGeneratorWithArguments::generateImpl):
* bytecode/BytecodeDumper.cpp:
(JSC::BytecodeDumperBase::dumpValue):
(JSC::BytecodeDumper<Block>::registerName const):
(JSC::BytecodeDumper<Block>::constantName const):
(JSC::Wasm::BytecodeDumper::constantName const):
* bytecode/BytecodeDumper.h:
* bytecode/BytecodeIndex.cpp:
(JSC::BytecodeIndex::dump const):
* bytecode/BytecodeIndex.h:
(JSC::BytecodeIndex::BytecodeIndex):
(JSC::BytecodeIndex::offset const):
(JSC::BytecodeIndex::checkpoint const):
(JSC::BytecodeIndex::asBits const):
(JSC::BytecodeIndex::hash const):
(JSC::BytecodeIndex::operator bool const):
(JSC::BytecodeIndex::pack):
(JSC::BytecodeIndex::fromBits):
* bytecode/BytecodeList.rb:
* bytecode/BytecodeLivenessAnalysis.cpp:
(JSC::enumValuesEqualAsIntegral):
(JSC::tmpLivenessForCheckpoint):
* bytecode/BytecodeLivenessAnalysis.h:
* bytecode/BytecodeLivenessAnalysisInlines.h:
(JSC::virtualRegisterIsAlwaysLive):
(JSC::virtualRegisterThatIsNotAlwaysLiveIsLive):
(JSC::virtualRegisterIsLive):
(JSC::operandIsAlwaysLive): Deleted.
(JSC::operandThatIsNotAlwaysLiveIsLive): Deleted.
(JSC::operandIsLive): Deleted.
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::finishCreation):
(JSC::CodeBlock::bytecodeIndexForExit const):
(JSC::CodeBlock::ensureCatchLivenessIsComputedForBytecodeIndexSlow):
(JSC::CodeBlock::updateAllValueProfilePredictionsAndCountLiveness):
* bytecode/CodeBlock.h:
(JSC::CodeBlock::numTmps const):
(JSC::CodeBlock::isKnownNotImmediate):
(JSC::CodeBlock::isTemporaryRegister):
(JSC::CodeBlock::constantRegister):
(JSC::CodeBlock::getConstant const):
(JSC::CodeBlock::constantSourceCodeRepresentation const):
(JSC::CodeBlock::replaceConstant):
(JSC::CodeBlock::isTemporaryRegisterIndex): Deleted.
(JSC::CodeBlock::isConstantRegisterIndex): Deleted.
* bytecode/CodeOrigin.h:
* bytecode/FullBytecodeLiveness.h:
(JSC::FullBytecodeLiveness::virtualRegisterIsLive const):
(JSC::FullBytecodeLiveness::operandIsLive const): Deleted.
* bytecode/InlineCallFrame.h:
(JSC::InlineCallFrame::InlineCallFrame):
(JSC::InlineCallFrame::setTmpOffset):
(JSC::CodeOrigin::walkUpInlineStack const):
(JSC::CodeOrigin::inlineStackContainsActiveCheckpoint const):
(JSC::remapOperand):
(JSC::unmapOperand):
(JSC::CodeOrigin::walkUpInlineStack): Deleted.
* bytecode/LazyOperandValueProfile.h:
(JSC::LazyOperandValueProfileKey::LazyOperandValueProfileKey):
(JSC::LazyOperandValueProfileKey::hash const):
(JSC::LazyOperandValueProfileKey::operand const):
* bytecode/MethodOfGettingAValueProfile.cpp:
(JSC::MethodOfGettingAValueProfile::fromLazyOperand):
(JSC::MethodOfGettingAValueProfile::emitReportValue const):
(JSC::MethodOfGettingAValueProfile::reportValue):
* bytecode/MethodOfGettingAValueProfile.h:
* bytecode/Operands.h:
(JSC::Operand::Operand):
(JSC::Operand::tmp):
(JSC::Operand::kind const):
(JSC::Operand::value const):
(JSC::Operand::virtualRegister const):
(JSC::Operand::asBits const):
(JSC::Operand::isTmp const):
(JSC::Operand::isArgument const):
(JSC::Operand::isLocal const):
(JSC::Operand::isHeader const):
(JSC::Operand::isConstant const):
(JSC::Operand::toArgument const):
(JSC::Operand::toLocal const):
(JSC::Operand::operator== const):
(JSC::Operand::isValid const):
(JSC::Operand::fromBits):
(JSC::Operands::Operands):
(JSC::Operands::numberOfLocals const):
(JSC::Operands::numberOfTmps const):
(JSC::Operands::tmpIndex const):
(JSC::Operands::argumentIndex const):
(JSC::Operands::localIndex const):
(JSC::Operands::tmp):
(JSC::Operands::tmp const):
(JSC::Operands::argument):
(JSC::Operands::argument const):
(JSC::Operands::local):
(JSC::Operands::local const):
(JSC::Operands::sizeFor const):
(JSC::Operands::atFor):
(JSC::Operands::atFor const):
(JSC::Operands::ensureLocals):
(JSC::Operands::ensureTmps):
(JSC::Operands::getForOperandIndex):
(JSC::Operands::getForOperandIndex const):
(JSC::Operands::operandIndex const):
(JSC::Operands::operand):
(JSC::Operands::operand const):
(JSC::Operands::hasOperand const):
(JSC::Operands::setOperand):
(JSC::Operands::at const):
(JSC::Operands::at):
(JSC::Operands::operator[] const):
(JSC::Operands::operator[]):
(JSC::Operands::operandForIndex const):
(JSC::Operands::operator== const):
(JSC::Operands::isArgument const): Deleted.
(JSC::Operands::isLocal const): Deleted.
(JSC::Operands::virtualRegisterForIndex const): Deleted.
(JSC::Operands::setOperandFirstTime): Deleted.
* bytecode/OperandsInlines.h:
(JSC::Operand::dump const):
(JSC::Operands<T>::dumpInContext const):
(JSC::Operands<T>::dump const):
* bytecode/UnlinkedCodeBlock.cpp:
(JSC::UnlinkedCodeBlock::UnlinkedCodeBlock):
* bytecode/UnlinkedCodeBlock.h:
(JSC::UnlinkedCodeBlock::hasCheckpoints const):
(JSC::UnlinkedCodeBlock::setHasCheckpoints):
(JSC::UnlinkedCodeBlock::constantRegister const):
(JSC::UnlinkedCodeBlock::getConstant const):
(JSC::UnlinkedCodeBlock::isConstantRegisterIndex const): Deleted.
* bytecode/ValueProfile.h:
(JSC::ValueProfileAndVirtualRegisterBuffer::ValueProfileAndVirtualRegisterBuffer):
(JSC::ValueProfileAndVirtualRegisterBuffer::~ValueProfileAndVirtualRegisterBuffer):
(JSC::ValueProfileAndOperandBuffer::ValueProfileAndOperandBuffer): Deleted.
(JSC::ValueProfileAndOperandBuffer::~ValueProfileAndOperandBuffer): Deleted.
(JSC::ValueProfileAndOperandBuffer::forEach): Deleted.
* bytecode/ValueRecovery.cpp:
(JSC::ValueRecovery::recover const):
* bytecode/ValueRecovery.h:
* bytecode/VirtualRegister.h:
(JSC::virtualRegisterIsLocal):
(JSC::virtualRegisterIsArgument):
(JSC::VirtualRegister::VirtualRegister):
(JSC::VirtualRegister::isValid const):
(JSC::VirtualRegister::isLocal const):
(JSC::VirtualRegister::isArgument const):
(JSC::VirtualRegister::isConstant const):
(JSC::VirtualRegister::toConstantIndex const):
(JSC::operandIsLocal): Deleted.
(JSC::operandIsArgument): Deleted.
* bytecompiler/BytecodeGenerator.cpp:
(JSC::BytecodeGenerator::initializeNextParameter):
(JSC::BytecodeGenerator::initializeParameters):
(JSC::BytecodeGenerator::emitEqualityOpImpl):
(JSC::BytecodeGenerator::emitCallVarargs):
* bytecompiler/BytecodeGenerator.h:
(JSC::BytecodeGenerator::setUsesCheckpoints):
* bytecompiler/RegisterID.h:
(JSC::RegisterID::setIndex):
* dfg/DFGAbstractHeap.cpp:
(JSC::DFG::AbstractHeap::Payload::dumpAsOperand const):
(JSC::DFG::AbstractHeap::dump const):
* dfg/DFGAbstractHeap.h:
(JSC::DFG::AbstractHeap::Payload::Payload):
(JSC::DFG::AbstractHeap::AbstractHeap):
(JSC::DFG::AbstractHeap::operand const):
* dfg/DFGAbstractInterpreterInlines.h:
(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
* dfg/DFGArgumentPosition.h:
(JSC::DFG::ArgumentPosition::dump):
* dfg/DFGArgumentsEliminationPhase.cpp:
* dfg/DFGArgumentsUtilities.cpp:
(JSC::DFG::argumentsInvolveStackSlot):
(JSC::DFG::emitCodeToGetArgumentsArrayLength):
* dfg/DFGArgumentsUtilities.h:
* dfg/DFGAtTailAbstractState.h:
(JSC::DFG::AtTailAbstractState::operand):
* dfg/DFGAvailabilityMap.cpp:
(JSC::DFG::AvailabilityMap::pruneByLiveness):
* dfg/DFGAvailabilityMap.h:
(JSC::DFG::AvailabilityMap::closeStartingWithLocal):
* dfg/DFGBasicBlock.cpp:
(JSC::DFG::BasicBlock::BasicBlock):
(JSC::DFG::BasicBlock::ensureTmps):
* dfg/DFGBasicBlock.h:
* dfg/DFGBlockInsertionSet.cpp:
(JSC::DFG::BlockInsertionSet::insert):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::ByteCodeParser):
(JSC::DFG::ByteCodeParser::ensureTmps):
(JSC::DFG::ByteCodeParser::progressToNextCheckpoint):
(JSC::DFG::ByteCodeParser::newVariableAccessData):
(JSC::DFG::ByteCodeParser::getDirect):
(JSC::DFG::ByteCodeParser::get):
(JSC::DFG::ByteCodeParser::setDirect):
(JSC::DFG::ByteCodeParser::injectLazyOperandSpeculation):
(JSC::DFG::ByteCodeParser::getLocalOrTmp):
(JSC::DFG::ByteCodeParser::setLocalOrTmp):
(JSC::DFG::ByteCodeParser::setArgument):
(JSC::DFG::ByteCodeParser::findArgumentPositionForLocal):
(JSC::DFG::ByteCodeParser::findArgumentPosition):
(JSC::DFG::ByteCodeParser::flushImpl):
(JSC::DFG::ByteCodeParser::flushForTerminalImpl):
(JSC::DFG::ByteCodeParser::flush):
(JSC::DFG::ByteCodeParser::flushDirect):
(JSC::DFG::ByteCodeParser::addFlushOrPhantomLocal):
(JSC::DFG::ByteCodeParser::phantomLocalDirect):
(JSC::DFG::ByteCodeParser::flushForTerminal):
(JSC::DFG::ByteCodeParser::addToGraph):
(JSC::DFG::ByteCodeParser::InlineStackEntry::remapOperand const):
(JSC::DFG::ByteCodeParser::DelayedSetLocal::DelayedSetLocal):
(JSC::DFG::ByteCodeParser::DelayedSetLocal::execute):
(JSC::DFG::ByteCodeParser::allocateTargetableBlock):
(JSC::DFG::ByteCodeParser::allocateUntargetableBlock):
(JSC::DFG::ByteCodeParser::handleRecursiveTailCall):
(JSC::DFG::ByteCodeParser::inlineCall):
(JSC::DFG::ByteCodeParser::handleVarargsInlining):
(JSC::DFG::ByteCodeParser::handleInlining):
(JSC::DFG::ByteCodeParser::parseBlock):
(JSC::DFG::ByteCodeParser::InlineStackEntry::InlineStackEntry):
(JSC::DFG::ByteCodeParser::parse):
(JSC::DFG::ByteCodeParser::getLocal): Deleted.
(JSC::DFG::ByteCodeParser::setLocal): Deleted.
* dfg/DFGCFAPhase.cpp:
(JSC::DFG::CFAPhase::injectOSR):
* dfg/DFGCPSRethreadingPhase.cpp:
(JSC::DFG::CPSRethreadingPhase::run):
(JSC::DFG::CPSRethreadingPhase::canonicalizeGetLocal):
(JSC::DFG::CPSRethreadingPhase::canonicalizeFlushOrPhantomLocalFor):
(JSC::DFG::CPSRethreadingPhase::canonicalizeFlushOrPhantomLocal):
(JSC::DFG::CPSRethreadingPhase::canonicalizeSet):
(JSC::DFG::CPSRethreadingPhase::canonicalizeLocalsInBlock):
(JSC::DFG::CPSRethreadingPhase::propagatePhis):
(JSC::DFG::CPSRethreadingPhase::phiStackFor):
* dfg/DFGCSEPhase.cpp:
* dfg/DFGCapabilities.cpp:
(JSC::DFG::capabilityLevel):
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGCombinedLiveness.cpp:
(JSC::DFG::addBytecodeLiveness):
* dfg/DFGCommonData.cpp:
(JSC::DFG::CommonData::addCodeOrigin):
(JSC::DFG::CommonData::addUniqueCallSiteIndex):
(JSC::DFG::CommonData::lastCallSite const):
* dfg/DFGConstantFoldingPhase.cpp:
(JSC::DFG::ConstantFoldingPhase::foldConstants):
* dfg/DFGDoesGC.cpp:
(JSC::DFG::doesGC):
* dfg/DFGDriver.cpp:
(JSC::DFG::compileImpl):
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::fixupNode):
* dfg/DFGForAllKills.h:
(JSC::DFG::forAllKilledOperands):
(JSC::DFG::forAllKilledNodesAtNodeIndex):
(JSC::DFG::forAllKillsInBlock):
* dfg/DFGGraph.cpp:
(JSC::DFG::Graph::dump):
(JSC::DFG::Graph::dumpBlockHeader):
(JSC::DFG::Graph::substituteGetLocal):
(JSC::DFG::Graph::isLiveInBytecode):
(JSC::DFG::Graph::localsAndTmpsLiveInBytecode):
(JSC::DFG::Graph::methodOfGettingAValueProfileFor):
(JSC::DFG::Graph::localsLiveInBytecode): Deleted.
* dfg/DFGGraph.h:
(JSC::DFG::Graph::forAllLocalsAndTmpsLiveInBytecode):
(JSC::DFG::Graph::forAllLiveInBytecode):
(JSC::DFG::Graph::forAllLocalsLiveInBytecode): Deleted.
* dfg/DFGInPlaceAbstractState.cpp:
(JSC::DFG::InPlaceAbstractState::InPlaceAbstractState):
* dfg/DFGInPlaceAbstractState.h:
(JSC::DFG::InPlaceAbstractState::operand):
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::linkOSRExits):
(JSC::DFG::JITCompiler::noticeOSREntry):
* dfg/DFGJITCompiler.h:
(JSC::DFG::JITCompiler::emitStoreCallSiteIndex):
* dfg/DFGLiveCatchVariablePreservationPhase.cpp:
(JSC::DFG::LiveCatchVariablePreservationPhase::isValidFlushLocation):
(JSC::DFG::LiveCatchVariablePreservationPhase::handleBlockForTryCatch):
(JSC::DFG::LiveCatchVariablePreservationPhase::newVariableAccessData):
* dfg/DFGMovHintRemovalPhase.cpp:
* dfg/DFGNode.h:
(JSC::DFG::StackAccessData::StackAccessData):
(JSC::DFG::Node::hasArgumentsChild):
(JSC::DFG::Node::argumentsChild):
(JSC::DFG::Node::operand):
(JSC::DFG::Node::hasUnlinkedOperand):
(JSC::DFG::Node::unlinkedOperand):
(JSC::DFG::Node::hasLoadVarargsData):
(JSC::DFG::Node::local): Deleted.
(JSC::DFG::Node::hasUnlinkedLocal): Deleted.
(JSC::DFG::Node::unlinkedLocal): Deleted.
* dfg/DFGNodeType.h:
* dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
(JSC::DFG::OSRAvailabilityAnalysisPhase::run):
(JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
* dfg/DFGOSREntry.cpp:
(JSC::DFG::prepareOSREntry):
(JSC::DFG::prepareCatchOSREntry):
* dfg/DFGOSREntrypointCreationPhase.cpp:
(JSC::DFG::OSREntrypointCreationPhase::run):
* dfg/DFGOSRExit.cpp:
(JSC::DFG::OSRExit::emitRestoreArguments):
(JSC::DFG::OSRExit::compileExit):
(JSC::DFG::jsValueFor): Deleted.
(JSC::DFG::restoreCalleeSavesFor): Deleted.
(JSC::DFG::saveCalleeSavesFor): Deleted.
(JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer): Deleted.
(JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer): Deleted.
(JSC::DFG::saveOrCopyCalleeSavesFor): Deleted.
(JSC::DFG::createDirectArgumentsDuringExit): Deleted.
(JSC::DFG::createClonedArgumentsDuringExit): Deleted.
(JSC::DFG::emitRestoreArguments): Deleted.
(JSC::DFG::OSRExit::executeOSRExit): Deleted.
(JSC::DFG::reifyInlinedCallFrames): Deleted.
(JSC::DFG::adjustAndJumpToTarget): Deleted.
(JSC::DFG::printOSRExit): Deleted.
* dfg/DFGOSRExit.h:
* dfg/DFGOSRExitBase.h:
(JSC::DFG::OSRExitBase::isExitingToCheckpointHandler const):
* dfg/DFGOSRExitCompilerCommon.cpp:
(JSC::DFG::callerReturnPC):
(JSC::DFG::reifyInlinedCallFrames):
(JSC::DFG::adjustAndJumpToTarget):
* dfg/DFGObjectAllocationSinkingPhase.cpp:
* dfg/DFGOpInfo.h:
(JSC::DFG::OpInfo::OpInfo):
* dfg/DFGOperations.cpp:
* dfg/DFGPhantomInsertionPhase.cpp:
* dfg/DFGPreciseLocalClobberize.h:
(JSC::DFG::PreciseLocalClobberizeAdaptor::read):
(JSC::DFG::PreciseLocalClobberizeAdaptor::write):
(JSC::DFG::PreciseLocalClobberizeAdaptor::def):
(JSC::DFG::PreciseLocalClobberizeAdaptor::callIfAppropriate):
* dfg/DFGPredictionInjectionPhase.cpp:
(JSC::DFG::PredictionInjectionPhase::run):
* dfg/DFGPredictionPropagationPhase.cpp:
* dfg/DFGPutStackSinkingPhase.cpp:
* dfg/DFGSSAConversionPhase.cpp:
(JSC::DFG::SSAConversionPhase::run):
* dfg/DFGSafeToExecute.h:
(JSC::DFG::safeToExecute):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::compileMovHint):
(JSC::DFG::SpeculativeJIT::compileCurrentBlock):
(JSC::DFG::SpeculativeJIT::checkArgumentTypes):
(JSC::DFG::SpeculativeJIT::compileVarargsLength):
(JSC::DFG::SpeculativeJIT::compileLoadVarargs):
(JSC::DFG::SpeculativeJIT::compileForwardVarargs):
(JSC::DFG::SpeculativeJIT::compileCreateDirectArguments):
(JSC::DFG::SpeculativeJIT::compileGetArgumentCountIncludingThis):
* dfg/DFGSpeculativeJIT.h:
(JSC::DFG::SpeculativeJIT::recordSetLocal):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGStackLayoutPhase.cpp:
(JSC::DFG::StackLayoutPhase::run):
(JSC::DFG::StackLayoutPhase::assign):
* dfg/DFGStrengthReductionPhase.cpp:
(JSC::DFG::StrengthReductionPhase::handleNode):
* dfg/DFGThunks.cpp:
(JSC::DFG::osrExitThunkGenerator): Deleted.
* dfg/DFGThunks.h:
* dfg/DFGTypeCheckHoistingPhase.cpp:
(JSC::DFG::TypeCheckHoistingPhase::run):
(JSC::DFG::TypeCheckHoistingPhase::disableHoistingAcrossOSREntries):
* dfg/DFGValidate.cpp:
* dfg/DFGVarargsForwardingPhase.cpp:
* dfg/DFGVariableAccessData.cpp:
(JSC::DFG::VariableAccessData::VariableAccessData):
(JSC::DFG::VariableAccessData::shouldUseDoubleFormatAccordingToVote):
(JSC::DFG::VariableAccessData::tallyVotesForShouldUseDoubleFormat):
(JSC::DFG::VariableAccessData::couldRepresentInt52Impl):
* dfg/DFGVariableAccessData.h:
(JSC::DFG::VariableAccessData::operand):
(JSC::DFG::VariableAccessData::local): Deleted.
* dfg/DFGVariableEvent.cpp:
(JSC::DFG::VariableEvent::dump const):
* dfg/DFGVariableEvent.h:
(JSC::DFG::VariableEvent::spill):
(JSC::DFG::VariableEvent::setLocal):
(JSC::DFG::VariableEvent::movHint):
(JSC::DFG::VariableEvent::spillRegister const):
(JSC::DFG::VariableEvent::operand const):
(JSC::DFG::VariableEvent::bytecodeRegister const): Deleted.
* dfg/DFGVariableEventStream.cpp:
(JSC::DFG::VariableEventStream::logEvent):
(JSC::DFG::VariableEventStream::reconstruct const):
* dfg/DFGVariableEventStream.h:
(JSC::DFG::VariableEventStream::appendAndLog):
* ftl/FTLCapabilities.cpp:
(JSC::FTL::canCompile):
* ftl/FTLForOSREntryJITCode.cpp:
(JSC::FTL::ForOSREntryJITCode::ForOSREntryJITCode):
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::lower):
(JSC::FTL::DFG::LowerDFGToB3::compileNode):
(JSC::FTL::DFG::LowerDFGToB3::compileExtractOSREntryLocal):
(JSC::FTL::DFG::LowerDFGToB3::compileGetStack):
(JSC::FTL::DFG::LowerDFGToB3::compileGetCallee):
(JSC::FTL::DFG::LowerDFGToB3::compileSetCallee):
(JSC::FTL::DFG::LowerDFGToB3::compileSetArgumentCountIncludingThis):
(JSC::FTL::DFG::LowerDFGToB3::compileVarargsLength):
(JSC::FTL::DFG::LowerDFGToB3::compileLoadVarargs):
(JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargs):
(JSC::FTL::DFG::LowerDFGToB3::getSpreadLengthFromInlineCallFrame):
(JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargsWithSpread):
(JSC::FTL::DFG::LowerDFGToB3::compileLogShadowChickenPrologue):
(JSC::FTL::DFG::LowerDFGToB3::getArgumentsLength):
(JSC::FTL::DFG::LowerDFGToB3::getCurrentCallee):
(JSC::FTL::DFG::LowerDFGToB3::callPreflight):
(JSC::FTL::DFG::LowerDFGToB3::appendOSRExitDescriptor):
(JSC::FTL::DFG::LowerDFGToB3::buildExitArguments):
(JSC::FTL::DFG::LowerDFGToB3::addressFor):
(JSC::FTL::DFG::LowerDFGToB3::payloadFor):
(JSC::FTL::DFG::LowerDFGToB3::tagFor):
* ftl/FTLOSREntry.cpp:
(JSC::FTL::prepareOSREntry):
* ftl/FTLOSRExit.cpp:
(JSC::FTL::OSRExitDescriptor::OSRExitDescriptor):
* ftl/FTLOSRExit.h:
* ftl/FTLOSRExitCompiler.cpp:
(JSC::FTL::compileStub):
* ftl/FTLOperations.cpp:
(JSC::FTL::operationMaterializeObjectInOSR):
* ftl/FTLOutput.cpp:
(JSC::FTL::Output::select):
* ftl/FTLOutput.h:
* ftl/FTLSelectPredictability.h: Copied from Source/JavaScriptCore/ftl/FTLForOSREntryJITCode.cpp.
* ftl/FTLSlowPathCall.h:
(JSC::FTL::callOperation):
* generator/Checkpoints.rb: Added.
* generator/Opcode.rb:
* generator/Section.rb:
* heap/Heap.cpp:
(JSC::Heap::gatherScratchBufferRoots):
* interpreter/CallFrame.cpp:
(JSC::CallFrame::callSiteAsRawBits const):
(JSC::CallFrame::unsafeCallSiteAsRawBits const):
(JSC::CallFrame::callSiteIndex const):
(JSC::CallFrame::unsafeCallSiteIndex const):
(JSC::CallFrame::setCurrentVPC):
(JSC::CallFrame::bytecodeIndex):
(JSC::CallFrame::codeOrigin):
* interpreter/CallFrame.h:
(JSC::CallSiteIndex::CallSiteIndex):
(JSC::CallSiteIndex::operator bool const):
(JSC::CallSiteIndex::operator== const):
(JSC::CallSiteIndex::bits const):
(JSC::CallSiteIndex::fromBits):
(JSC::CallSiteIndex::bytecodeIndex const):
(JSC::DisposableCallSiteIndex::DisposableCallSiteIndex):
(JSC::CallFrame::callee const):
(JSC::CallFrame::unsafeCallee const):
(JSC::CallFrame::addressOfCodeBlock const):
(JSC::CallFrame::argumentCountIncludingThis const):
(JSC::CallFrame::offsetFor):
(JSC::CallFrame::setArgumentCountIncludingThis):
(JSC::CallFrame::setReturnPC):
* interpreter/CallFrameInlines.h:
(JSC::CallFrame::r):
(JSC::CallFrame::uncheckedR):
(JSC::CallFrame::guaranteedJSValueCallee const):
(JSC::CallFrame::jsCallee const):
(JSC::CallFrame::codeBlock const):
(JSC::CallFrame::unsafeCodeBlock const):
(JSC::CallFrame::setCallee):
(JSC::CallFrame::setCodeBlock):
* interpreter/CheckpointOSRExitSideState.h: Copied from Source/JavaScriptCore/dfg/DFGThunks.h.
* interpreter/Interpreter.cpp:
(JSC::eval):
(JSC::sizeOfVarargs):
(JSC::loadVarargs):
(JSC::setupVarargsFrame):
(JSC::UnwindFunctor::operator() const):
(JSC::Interpreter::executeCall):
(JSC::Interpreter::executeConstruct):
* interpreter/Interpreter.h:
* interpreter/StackVisitor.cpp:
(JSC::StackVisitor::readInlinedFrame):
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::emitGetFromCallFrameHeaderPtr):
(JSC::AssemblyHelpers::emitGetFromCallFrameHeader32):
(JSC::AssemblyHelpers::emitGetFromCallFrameHeader64):
(JSC::AssemblyHelpers::emitPutToCallFrameHeader):
(JSC::AssemblyHelpers::emitPutToCallFrameHeaderBeforePrologue):
(JSC::AssemblyHelpers::emitPutPayloadToCallFrameHeaderBeforePrologue):
(JSC::AssemblyHelpers::emitPutTagToCallFrameHeaderBeforePrologue):
(JSC::AssemblyHelpers::addressFor):
(JSC::AssemblyHelpers::tagFor):
(JSC::AssemblyHelpers::payloadFor):
(JSC::AssemblyHelpers::calleeFrameSlot):
(JSC::AssemblyHelpers::calleeArgumentSlot):
(JSC::AssemblyHelpers::calleeFrameTagSlot):
(JSC::AssemblyHelpers::calleeFramePayloadSlot):
(JSC::AssemblyHelpers::calleeFrameCallerFrame):
(JSC::AssemblyHelpers::argumentCount):
* jit/CallFrameShuffler.cpp:
(JSC::CallFrameShuffler::CallFrameShuffler):
* jit/CallFrameShuffler.h:
(JSC::CallFrameShuffler::setCalleeJSValueRegs):
(JSC::CallFrameShuffler::assumeCalleeIsCell):
* jit/JIT.h:
* jit/JITArithmetic.cpp:
(JSC::JIT::emit_op_unsigned):
(JSC::JIT::emit_compareAndJump):
(JSC::JIT::emit_compareAndJumpImpl):
(JSC::JIT::emit_compareUnsignedAndJump):
(JSC::JIT::emit_compareUnsignedAndJumpImpl):
(JSC::JIT::emit_compareUnsigned):
(JSC::JIT::emit_compareUnsignedImpl):
(JSC::JIT::emit_compareAndJumpSlow):
(JSC::JIT::emit_compareAndJumpSlowImpl):
(JSC::JIT::emit_op_inc):
(JSC::JIT::emit_op_dec):
(JSC::JIT::emit_op_mod):
(JSC::JIT::emitBitBinaryOpFastPath):
(JSC::JIT::emit_op_bitnot):
(JSC::JIT::emitRightShiftFastPath):
(JSC::JIT::emitMathICFast):
(JSC::JIT::emitMathICSlow):
(JSC::JIT::emit_op_div):
* jit/JITCall.cpp:
(JSC::JIT::emitPutCallResult):
(JSC::JIT::compileSetupFrame):
(JSC::JIT::compileOpCall):
* jit/JITExceptions.cpp:
(JSC::genericUnwind):
* jit/JITInlines.h:
(JSC::JIT::isOperandConstantDouble):
(JSC::JIT::getConstantOperand):
(JSC::JIT::emitPutIntToCallFrameHeader):
(JSC::JIT::appendCallWithExceptionCheckSetJSValueResult):
(JSC::JIT::appendCallWithExceptionCheckSetJSValueResultWithProfile):
(JSC::JIT::linkSlowCaseIfNotJSCell):
(JSC::JIT::isOperandConstantChar):
(JSC::JIT::getOperandConstantInt):
(JSC::JIT::getOperandConstantDouble):
(JSC::JIT::emitInitRegister):
(JSC::JIT::emitLoadTag):
(JSC::JIT::emitLoadPayload):
(JSC::JIT::emitGet):
(JSC::JIT::emitPutVirtualRegister):
(JSC::JIT::emitLoad):
(JSC::JIT::emitLoad2):
(JSC::JIT::emitLoadDouble):
(JSC::JIT::emitLoadInt32ToDouble):
(JSC::JIT::emitStore):
(JSC::JIT::emitStoreInt32):
(JSC::JIT::emitStoreCell):
(JSC::JIT::emitStoreBool):
(JSC::JIT::emitStoreDouble):
(JSC::JIT::emitJumpSlowCaseIfNotJSCell):
(JSC::JIT::isOperandConstantInt):
(JSC::JIT::emitGetVirtualRegister):
(JSC::JIT::emitGetVirtualRegisters):
* jit/JITOpcodes.cpp:
(JSC::JIT::emit_op_mov):
(JSC::JIT::emit_op_end):
(JSC::JIT::emit_op_new_object):
(JSC::JIT::emitSlow_op_new_object):
(JSC::JIT::emit_op_overrides_has_instance):
(JSC::JIT::emit_op_instanceof):
(JSC::JIT::emitSlow_op_instanceof):
(JSC::JIT::emit_op_is_empty):
(JSC::JIT::emit_op_is_undefined):
(JSC::JIT::emit_op_is_undefined_or_null):
(JSC::JIT::emit_op_is_boolean):
(JSC::JIT::emit_op_is_number):
(JSC::JIT::emit_op_is_cell_with_type):
(JSC::JIT::emit_op_is_object):
(JSC::JIT::emit_op_ret):
(JSC::JIT::emit_op_to_primitive):
(JSC::JIT::emit_op_set_function_name):
(JSC::JIT::emit_op_not):
(JSC::JIT::emit_op_jfalse):
(JSC::JIT::emit_op_jeq_null):
(JSC::JIT::emit_op_jneq_null):
(JSC::JIT::emit_op_jundefined_or_null):
(JSC::JIT::emit_op_jnundefined_or_null):
(JSC::JIT::emit_op_jneq_ptr):
(JSC::JIT::emit_op_eq):
(JSC::JIT::emit_op_jeq):
(JSC::JIT::emit_op_jtrue):
(JSC::JIT::emit_op_neq):
(JSC::JIT::emit_op_jneq):
(JSC::JIT::emit_op_throw):
(JSC::JIT::compileOpStrictEq):
(JSC::JIT::compileOpStrictEqJump):
(JSC::JIT::emit_op_to_number):
(JSC::JIT::emit_op_to_numeric):
(JSC::JIT::emit_op_to_string):
(JSC::JIT::emit_op_to_object):
(JSC::JIT::emit_op_catch):
(JSC::JIT::emit_op_get_parent_scope):
(JSC::JIT::emit_op_switch_imm):
(JSC::JIT::emit_op_switch_char):
(JSC::JIT::emit_op_switch_string):
(JSC::JIT::emit_op_eq_null):
(JSC::JIT::emit_op_neq_null):
(JSC::JIT::emit_op_enter):
(JSC::JIT::emit_op_get_scope):
(JSC::JIT::emit_op_to_this):
(JSC::JIT::emit_op_create_this):
(JSC::JIT::emit_op_check_tdz):
(JSC::JIT::emitSlow_op_eq):
(JSC::JIT::emitSlow_op_neq):
(JSC::JIT::emitSlow_op_instanceof_custom):
(JSC::JIT::emit_op_new_regexp):
(JSC::JIT::emitNewFuncCommon):
(JSC::JIT::emitNewFuncExprCommon):
(JSC::JIT::emit_op_new_array):
(JSC::JIT::emit_op_new_array_with_size):
(JSC::JIT::emit_op_has_structure_property):
(JSC::JIT::emit_op_has_indexed_property):
(JSC::JIT::emitSlow_op_has_indexed_property):
(JSC::JIT::emit_op_get_direct_pname):
(JSC::JIT::emit_op_enumerator_structure_pname):
(JSC::JIT::emit_op_enumerator_generic_pname):
(JSC::JIT::emit_op_profile_type):
(JSC::JIT::emit_op_log_shadow_chicken_prologue):
(JSC::JIT::emit_op_log_shadow_chicken_tail):
(JSC::JIT::emit_op_argument_count):
(JSC::JIT::emit_op_get_rest_length):
(JSC::JIT::emit_op_get_argument):
* jit/JITOpcodes32_64.cpp:
(JSC::JIT::emit_op_catch):
* jit/JITOperations.cpp:
* jit/JITPropertyAccess.cpp:
(JSC::JIT::emit_op_get_by_val):
(JSC::JIT::emitSlow_op_get_by_val):
(JSC::JIT::emit_op_put_by_val):
(JSC::JIT::emitGenericContiguousPutByVal):
(JSC::JIT::emitArrayStoragePutByVal):
(JSC::JIT::emitPutByValWithCachedId):
(JSC::JIT::emitSlow_op_put_by_val):
(JSC::JIT::emit_op_put_getter_by_id):
(JSC::JIT::emit_op_put_setter_by_id):
(JSC::JIT::emit_op_put_getter_setter_by_id):
(JSC::JIT::emit_op_put_getter_by_val):
(JSC::JIT::emit_op_put_setter_by_val):
(JSC::JIT::emit_op_del_by_id):
(JSC::JIT::emit_op_del_by_val):
(JSC::JIT::emit_op_try_get_by_id):
(JSC::JIT::emitSlow_op_try_get_by_id):
(JSC::JIT::emit_op_get_by_id_direct):
(JSC::JIT::emitSlow_op_get_by_id_direct):
(JSC::JIT::emit_op_get_by_id):
(JSC::JIT::emit_op_get_by_id_with_this):
(JSC::JIT::emitSlow_op_get_by_id):
(JSC::JIT::emitSlow_op_get_by_id_with_this):
(JSC::JIT::emit_op_put_by_id):
(JSC::JIT::emit_op_in_by_id):
(JSC::JIT::emitSlow_op_in_by_id):
(JSC::JIT::emitResolveClosure):
(JSC::JIT::emit_op_resolve_scope):
(JSC::JIT::emitLoadWithStructureCheck):
(JSC::JIT::emitGetClosureVar):
(JSC::JIT::emit_op_get_from_scope):
(JSC::JIT::emitSlow_op_get_from_scope):
(JSC::JIT::emitPutGlobalVariable):
(JSC::JIT::emitPutGlobalVariableIndirect):
(JSC::JIT::emitPutClosureVar):
(JSC::JIT::emit_op_put_to_scope):
(JSC::JIT::emit_op_get_from_arguments):
(JSC::JIT::emit_op_put_to_arguments):
(JSC::JIT::emitWriteBarrier):
(JSC::JIT::emit_op_get_internal_field):
(JSC::JIT::emit_op_put_internal_field):
(JSC::JIT::emitIntTypedArrayPutByVal):
(JSC::JIT::emitFloatTypedArrayPutByVal):
* jit/JSInterfaceJIT.h:
(JSC::JSInterfaceJIT::emitLoadJSCell):
(JSC::JSInterfaceJIT::emitJumpIfNotJSCell):
(JSC::JSInterfaceJIT::emitLoadInt32):
(JSC::JSInterfaceJIT::emitLoadDouble):
(JSC::JSInterfaceJIT::emitGetFromCallFrameHeaderPtr):
(JSC::JSInterfaceJIT::emitPutToCallFrameHeader):
(JSC::JSInterfaceJIT::emitPutCellToCallFrameHeader):
* jit/SetupVarargsFrame.cpp:
(JSC::emitSetupVarargsFrameFastCase):
* jit/SpecializedThunkJIT.h:
(JSC::SpecializedThunkJIT::loadDoubleArgument):
(JSC::SpecializedThunkJIT::loadCellArgument):
(JSC::SpecializedThunkJIT::loadInt32Argument):
* jit/ThunkGenerators.cpp:
(JSC::absThunkGenerator):
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::getNonConstantOperand):
(JSC::LLInt::getOperand):
(JSC::LLInt::genericCall):
(JSC::LLInt::varargsSetup):
(JSC::LLInt::commonCallEval):
(JSC::LLInt::LLINT_SLOW_PATH_DECL):
(JSC::LLInt::handleVarargsCheckpoint):
(JSC::LLInt::dispatchToNextInstruction):
(JSC::LLInt::slow_path_checkpoint_osr_exit_from_inlined_call):
(JSC::LLInt::slow_path_checkpoint_osr_exit):
(JSC::LLInt::llint_throw_stack_overflow_error):
* llint/LLIntSlowPaths.h:
* llint/LowLevelInterpreter.asm:
* llint/LowLevelInterpreter32_64.asm:
* llint/LowLevelInterpreter64.asm:
* runtime/ArgList.h:
(JSC::MarkedArgumentBuffer::fill):
* runtime/CachedTypes.cpp:
(JSC::CachedCodeBlock::hasCheckpoints const):
(JSC::UnlinkedCodeBlock::UnlinkedCodeBlock):
(JSC::CachedCodeBlock<CodeBlockType>::encode):
* runtime/CommonSlowPaths.cpp:
(JSC::SLOW_PATH_DECL):
* runtime/ConstructData.cpp:
(JSC::construct):
* runtime/ConstructData.h:
* runtime/DirectArguments.cpp:
(JSC::DirectArguments::copyToArguments):
* runtime/DirectArguments.h:
* runtime/GenericArguments.h:
* runtime/GenericArgumentsInlines.h:
(JSC::GenericArguments<Type>::copyToArguments):
* runtime/JSArray.cpp:
(JSC::JSArray::copyToArguments):
* runtime/JSArray.h:
* runtime/JSImmutableButterfly.cpp:
(JSC::JSImmutableButterfly::copyToArguments):
* runtime/JSImmutableButterfly.h:
* runtime/JSLock.cpp:
(JSC::JSLock::willReleaseLock):
* runtime/ModuleProgramExecutable.cpp:
(JSC::ModuleProgramExecutable::create):
* runtime/Options.cpp:
(JSC::recomputeDependentOptions):
* runtime/ScopedArguments.cpp:
(JSC::ScopedArguments::copyToArguments):
* runtime/ScopedArguments.h:
* runtime/VM.cpp:
(JSC::VM::scanSideState const):
(JSC::VM::addCheckpointOSRSideState):
(JSC::VM::findCheckpointOSRSideState):
* runtime/VM.h:
(JSC::VM::hasCheckpointOSRSideState const):
* tools/VMInspector.cpp:
(JSC::VMInspector::dumpRegisters):
* wasm/WasmFunctionCodeBlock.h:
(JSC::Wasm::FunctionCodeBlock::getConstant const):
(JSC::Wasm::FunctionCodeBlock::getConstantType const):
* wasm/WasmLLIntGenerator.cpp:
(JSC::Wasm::LLIntGenerator::setUsesCheckpoints const):
* wasm/WasmOperations.cpp:
(JSC::Wasm::operationWasmToJSException):
* wasm/WasmSlowPaths.cpp:
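The BytecodeIndex changes listed above (pack, fromBits, asBits, checkpoint) extend each bytecode index with a checkpoint component so an OSR exit can resume at a sub-instruction point. A hypothetical sketch of the packing idea; the field widths and layout here are assumptions for illustration, not JSC's actual encoding:

```javascript
// Pack a bytecode offset and a small checkpoint index into one 32-bit
// value, and recover both. CHECKPOINT_BITS is an illustrative constant.
const CHECKPOINT_BITS = 2;
const CHECKPOINT_MASK = (1 << CHECKPOINT_BITS) - 1;

function pack(offset, checkpoint) {
    // Low bits hold the checkpoint; the rest hold the bytecode offset.
    return ((offset << CHECKPOINT_BITS) | checkpoint) >>> 0;
}
function offsetOf(bits) { return bits >>> CHECKPOINT_BITS; }
function checkpointOf(bits) { return bits & CHECKPOINT_MASK; }

const bits = pack(42, 3);
```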

Source/WTF:

* WTF.xcodeproj/project.pbxproj:
* wtf/Bitmap.h:
(WTF::WordType>::invert):
(WTF::WordType>::operator):
(WTF::WordType>::operator const const):
* wtf/CMakeLists.txt:
* wtf/EnumClassOperatorOverloads.h: Added.
* wtf/FastBitVector.h:
(WTF::FastBitReference::operator bool const):
(WTF::FastBitReference::operator|=):
(WTF::FastBitReference::operator&=):
(WTF::FastBitVector::fill):
(WTF::FastBitVector::grow):
* wtf/UnalignedAccess.h:
(WTF::unalignedLoad):
(WTF::unalignedStore):

Tools:

* Scripts/run-jsc-stress-tests:


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@254735 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/JSTests/ChangeLog b/JSTests/ChangeLog
index a16ab24..3dbb900 100644
--- a/JSTests/ChangeLog
+++ b/JSTests/ChangeLog
@@ -1,5 +1,31 @@
 2020-01-16  Keith Miller  <keith_miller@apple.com>
 
+        Reland bytecode checkpoints since bugs have been fixed
+        https://bugs.webkit.org/show_bug.cgi?id=206361
+
+        Unreviewed, reland.
+
+        The watch bugs have been fixed by https://trac.webkit.org/changeset/254674
+
+        * stress/apply-osr-exit-should-get-length-once-exceptions-occasionally.js: Added.
+        (expectedArgCount):
+        (callee):
+        (test):
+        (let.array.get length):
+        * stress/apply-osr-exit-should-get-length-once.js: Added.
+        (expectedArgCount):
+        (callee):
+        (test):
+        (let.array.get length):
+        * stress/load-varargs-then-inlined-call-and-exit-strict.js:
+        (checkEqual):
+        * stress/recursive-tail-call-with-different-argument-count.js:
+        * stress/rest-varargs-osr-exit-to-checkpoint.js: Added.
+        (foo):
+        (bar):
+
+2020-01-16  Keith Miller  <keith_miller@apple.com>
+
         Revert 254725 since it breaks tests
         https://bugs.webkit.org/show_bug.cgi?id=206391
 
diff --git a/JSTests/stress/apply-osr-exit-should-get-length-once-exceptions-occasionally.js b/JSTests/stress/apply-osr-exit-should-get-length-once-exceptions-occasionally.js
new file mode 100644
index 0000000..f3f15a6
--- /dev/null
+++ b/JSTests/stress/apply-osr-exit-should-get-length-once-exceptions-occasionally.js
@@ -0,0 +1,36 @@
+
+let currentArgCount;
+function expectedArgCount() {
+    return currentArgCount;
+}
+noInline(expectedArgCount);
+
+function callee() {
+    if (arguments.length != expectedArgCount())
+        throw new Error();
+}
+
+function test(array) {
+    callee.apply(undefined, array);
+}
+noInline(test);
+
+let lengthCalls = 0;
+currentArgCount = 2;
+let array = { 0: 1, 1: 2, get length() {
+    if (lengthCalls++ % 10 == 1)
+        throw new Error("throwing an exception in length");
+    return currentArgCount
+} }
+for (let i = 0; i < 1e6; i++) {
+    try {
+        test(array);
+    } catch { }
+}
+
+currentArgCount = 100;
+lengthCalls = 0;
+test(array);
+
+if (lengthCalls !== 1)
+    throw new Error(lengthCalls);
diff --git a/JSTests/stress/apply-osr-exit-should-get-length-once.js b/JSTests/stress/apply-osr-exit-should-get-length-once.js
new file mode 100644
index 0000000..8bee45c
--- /dev/null
+++ b/JSTests/stress/apply-osr-exit-should-get-length-once.js
@@ -0,0 +1,31 @@
+
+let currentArgCount;
+function expectedArgCount() {
+    return currentArgCount;
+}
+noInline(expectedArgCount);
+
+function callee() {
+    if (arguments.length != expectedArgCount())
+        throw new Error();
+}
+
+function test(array) {
+    callee.apply(undefined, array);
+}
+noInline(test);
+
+let lengthCalls = 0;
+currentArgCount = 2;
+let array = { 0: 1, 1: 2, get length() { lengthCalls++; return currentArgCount } }
+for (let i = 0; i < 1e5; i++)
+    test(array);
+
+
+test(array);
+currentArgCount = 100;
+lengthCalls = 0;
+test(array);
+
+if (lengthCalls !== 1)
+    throw new Error(lengthCalls);
diff --git a/JSTests/stress/load-varargs-then-inlined-call-and-exit-strict.js b/JSTests/stress/load-varargs-then-inlined-call-and-exit-strict.js
index 3618f8c..aa73231 100644
--- a/JSTests/stress/load-varargs-then-inlined-call-and-exit-strict.js
+++ b/JSTests/stress/load-varargs-then-inlined-call-and-exit-strict.js
@@ -18,8 +18,8 @@
         throw "Error: bad value of a: " + a.a + " versus " + b.a;
     if (a.b != b.b)
         throw "Error: bad value of b: " + a.b + " versus " + b.b;
-    if (a.c.length != b.c.length)
-        throw "Error: bad value of c, length mismatch: " + a.c + " versus " + b.c;
+    if (a.c.length !== b.c.length)
+        throw "Error: bad value of c, length mismatch: " + a.c.length + " versus " + b.c.length;
     for (var i = a.c.length; i--;) {
         if (a.c[i] != b.c[i])
             throw "Error: bad value of c, mismatch at i = " + i + ": " + a.c + " versus " + b.c;
diff --git a/JSTests/stress/recursive-tail-call-with-different-argument-count.js b/JSTests/stress/recursive-tail-call-with-different-argument-count.js
index 047fb01..53d752d 100644
--- a/JSTests/stress/recursive-tail-call-with-different-argument-count.js
+++ b/JSTests/stress/recursive-tail-call-with-different-argument-count.js
@@ -18,8 +18,8 @@
 for (var i = 0; i < 10000; ++i) {
     var result = foo(40, 2);
     if (result !== 42)
-        throw "Wrong result for foo, expected 42, got " + result;
+        throw Error("Wrong result for foo, expected 42, got " + result);
     result = bar(40, 2);
     if (result !== 42)
-        throw "Wrong result for bar, expected 42, got " + result;
+        throw Error("Wrong result for bar, expected 42, got " + result);
 }
diff --git a/JSTests/stress/rest-varargs-osr-exit-to-checkpoint.js b/JSTests/stress/rest-varargs-osr-exit-to-checkpoint.js
new file mode 100644
index 0000000..87ba055
--- /dev/null
+++ b/JSTests/stress/rest-varargs-osr-exit-to-checkpoint.js
@@ -0,0 +1,21 @@
+"use strict";
+
+function foo(a, b, ...rest) {
+    return rest.length;
+}
+
+function bar(a, b, ...rest) {
+    return foo.call(...rest);
+}
+noInline(bar);
+
+let array = new Array(10);
+for (let i = 0; i < 1e5; ++i) {
+    let result = bar(...array);
+    if (result !== array.length - bar.length - foo.length - 1)
+        throw new Error(i + " " + result);
+}
+
+array.length = 10000;
+if (bar(...array) !== array.length - bar.length - foo.length - 1)
+    throw new Error();
diff --git a/Source/JavaScriptCore/CMakeLists.txt b/Source/JavaScriptCore/CMakeLists.txt
index b917bc5..91e427c 100644
--- a/Source/JavaScriptCore/CMakeLists.txt
+++ b/Source/JavaScriptCore/CMakeLists.txt
@@ -526,6 +526,7 @@
     bytecode/ObjectPropertyCondition.h
     bytecode/Opcode.h
     bytecode/OpcodeSize.h
+    bytecode/Operands.h
     bytecode/PropertyCondition.h
     bytecode/PutByIdFlags.h
     bytecode/SpeculatedType.h
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index 03a5ce0..215f060 100644
--- a/Source/JavaScriptCore/ChangeLog
+++ b/Source/JavaScriptCore/ChangeLog
@@ -1,5 +1,782 @@
 2020-01-16  Keith Miller  <keith_miller@apple.com>
 
+        Reland bytecode checkpoints since bugs have been fixed
+        https://bugs.webkit.org/show_bug.cgi?id=206361
+
+        Unreviewed, reland.
+
+        The watch bugs have been fixed by https://trac.webkit.org/changeset/254674
+
+        * CMakeLists.txt:
+        * DerivedSources-input.xcfilelist:
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * assembler/MacroAssemblerCodeRef.h:
+        * assembler/ProbeFrame.h:
+        (JSC::Probe::Frame::operand):
+        (JSC::Probe::Frame::setOperand):
+        * b3/testb3.h:
+        (populateWithInterestingValues):
+        (floatingPointOperands):
+        * bytecode/AccessCase.cpp:
+        (JSC::AccessCase::generateImpl):
+        * bytecode/AccessCaseSnippetParams.cpp:
+        (JSC::SlowPathCallGeneratorWithArguments::generateImpl):
+        * bytecode/BytecodeDumper.cpp:
+        (JSC::BytecodeDumperBase::dumpValue):
+        (JSC::BytecodeDumper<Block>::registerName const):
+        (JSC::BytecodeDumper<Block>::constantName const):
+        (JSC::Wasm::BytecodeDumper::constantName const):
+        * bytecode/BytecodeDumper.h:
+        * bytecode/BytecodeIndex.cpp:
+        (JSC::BytecodeIndex::dump const):
+        * bytecode/BytecodeIndex.h:
+        (JSC::BytecodeIndex::BytecodeIndex):
+        (JSC::BytecodeIndex::offset const):
+        (JSC::BytecodeIndex::checkpoint const):
+        (JSC::BytecodeIndex::asBits const):
+        (JSC::BytecodeIndex::hash const):
+        (JSC::BytecodeIndex::operator bool const):
+        (JSC::BytecodeIndex::pack):
+        (JSC::BytecodeIndex::fromBits):
+        * bytecode/BytecodeList.rb:
+        * bytecode/BytecodeLivenessAnalysis.cpp:
+        (JSC::enumValuesEqualAsIntegral):
+        (JSC::tmpLivenessForCheckpoint):
+        * bytecode/BytecodeLivenessAnalysis.h:
+        * bytecode/BytecodeLivenessAnalysisInlines.h:
+        (JSC::virtualRegisterIsAlwaysLive):
+        (JSC::virtualRegisterThatIsNotAlwaysLiveIsLive):
+        (JSC::virtualRegisterIsLive):
+        (JSC::operandIsAlwaysLive): Deleted.
+        (JSC::operandThatIsNotAlwaysLiveIsLive): Deleted.
+        (JSC::operandIsLive): Deleted.
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::finishCreation):
+        (JSC::CodeBlock::bytecodeIndexForExit const):
+        (JSC::CodeBlock::ensureCatchLivenessIsComputedForBytecodeIndexSlow):
+        (JSC::CodeBlock::updateAllValueProfilePredictionsAndCountLiveness):
+        * bytecode/CodeBlock.h:
+        (JSC::CodeBlock::numTmps const):
+        (JSC::CodeBlock::isKnownNotImmediate):
+        (JSC::CodeBlock::isTemporaryRegister):
+        (JSC::CodeBlock::constantRegister):
+        (JSC::CodeBlock::getConstant const):
+        (JSC::CodeBlock::constantSourceCodeRepresentation const):
+        (JSC::CodeBlock::replaceConstant):
+        (JSC::CodeBlock::isTemporaryRegisterIndex): Deleted.
+        (JSC::CodeBlock::isConstantRegisterIndex): Deleted.
+        * bytecode/CodeOrigin.h:
+        * bytecode/FullBytecodeLiveness.h:
+        (JSC::FullBytecodeLiveness::virtualRegisterIsLive const):
+        (JSC::FullBytecodeLiveness::operandIsLive const): Deleted.
+        * bytecode/InlineCallFrame.h:
+        (JSC::InlineCallFrame::InlineCallFrame):
+        (JSC::InlineCallFrame::setTmpOffset):
+        (JSC::CodeOrigin::walkUpInlineStack const):
+        (JSC::CodeOrigin::inlineStackContainsActiveCheckpoint const):
+        (JSC::remapOperand):
+        (JSC::unmapOperand):
+        (JSC::CodeOrigin::walkUpInlineStack): Deleted.
+        * bytecode/LazyOperandValueProfile.h:
+        (JSC::LazyOperandValueProfileKey::LazyOperandValueProfileKey):
+        (JSC::LazyOperandValueProfileKey::hash const):
+        (JSC::LazyOperandValueProfileKey::operand const):
+        * bytecode/MethodOfGettingAValueProfile.cpp:
+        (JSC::MethodOfGettingAValueProfile::fromLazyOperand):
+        (JSC::MethodOfGettingAValueProfile::emitReportValue const):
+        (JSC::MethodOfGettingAValueProfile::reportValue):
+        * bytecode/MethodOfGettingAValueProfile.h:
+        * bytecode/Operands.h:
+        (JSC::Operand::Operand):
+        (JSC::Operand::tmp):
+        (JSC::Operand::kind const):
+        (JSC::Operand::value const):
+        (JSC::Operand::virtualRegister const):
+        (JSC::Operand::asBits const):
+        (JSC::Operand::isTmp const):
+        (JSC::Operand::isArgument const):
+        (JSC::Operand::isLocal const):
+        (JSC::Operand::isHeader const):
+        (JSC::Operand::isConstant const):
+        (JSC::Operand::toArgument const):
+        (JSC::Operand::toLocal const):
+        (JSC::Operand::operator== const):
+        (JSC::Operand::isValid const):
+        (JSC::Operand::fromBits):
+        (JSC::Operands::Operands):
+        (JSC::Operands::numberOfLocals const):
+        (JSC::Operands::numberOfTmps const):
+        (JSC::Operands::tmpIndex const):
+        (JSC::Operands::argumentIndex const):
+        (JSC::Operands::localIndex const):
+        (JSC::Operands::tmp):
+        (JSC::Operands::tmp const):
+        (JSC::Operands::argument):
+        (JSC::Operands::argument const):
+        (JSC::Operands::local):
+        (JSC::Operands::local const):
+        (JSC::Operands::sizeFor const):
+        (JSC::Operands::atFor):
+        (JSC::Operands::atFor const):
+        (JSC::Operands::ensureLocals):
+        (JSC::Operands::ensureTmps):
+        (JSC::Operands::getForOperandIndex):
+        (JSC::Operands::getForOperandIndex const):
+        (JSC::Operands::operandIndex const):
+        (JSC::Operands::operand):
+        (JSC::Operands::operand const):
+        (JSC::Operands::hasOperand const):
+        (JSC::Operands::setOperand):
+        (JSC::Operands::at const):
+        (JSC::Operands::at):
+        (JSC::Operands::operator[] const):
+        (JSC::Operands::operator[]):
+        (JSC::Operands::operandForIndex const):
+        (JSC::Operands::operator== const):
+        (JSC::Operands::isArgument const): Deleted.
+        (JSC::Operands::isLocal const): Deleted.
+        (JSC::Operands::virtualRegisterForIndex const): Deleted.
+        (JSC::Operands::setOperandFirstTime): Deleted.
+        * bytecode/OperandsInlines.h:
+        (JSC::Operand::dump const):
+        (JSC::Operands<T>::dumpInContext const):
+        (JSC::Operands<T>::dump const):
+        * bytecode/UnlinkedCodeBlock.cpp:
+        (JSC::UnlinkedCodeBlock::UnlinkedCodeBlock):
+        * bytecode/UnlinkedCodeBlock.h:
+        (JSC::UnlinkedCodeBlock::hasCheckpoints const):
+        (JSC::UnlinkedCodeBlock::setHasCheckpoints):
+        (JSC::UnlinkedCodeBlock::constantRegister const):
+        (JSC::UnlinkedCodeBlock::getConstant const):
+        (JSC::UnlinkedCodeBlock::isConstantRegisterIndex const): Deleted.
+        * bytecode/ValueProfile.h:
+        (JSC::ValueProfileAndVirtualRegisterBuffer::ValueProfileAndVirtualRegisterBuffer):
+        (JSC::ValueProfileAndVirtualRegisterBuffer::~ValueProfileAndVirtualRegisterBuffer):
+        (JSC::ValueProfileAndOperandBuffer::ValueProfileAndOperandBuffer): Deleted.
+        (JSC::ValueProfileAndOperandBuffer::~ValueProfileAndOperandBuffer): Deleted.
+        (JSC::ValueProfileAndOperandBuffer::forEach): Deleted.
+        * bytecode/ValueRecovery.cpp:
+        (JSC::ValueRecovery::recover const):
+        * bytecode/ValueRecovery.h:
+        * bytecode/VirtualRegister.h:
+        (JSC::virtualRegisterIsLocal):
+        (JSC::virtualRegisterIsArgument):
+        (JSC::VirtualRegister::VirtualRegister):
+        (JSC::VirtualRegister::isValid const):
+        (JSC::VirtualRegister::isLocal const):
+        (JSC::VirtualRegister::isArgument const):
+        (JSC::VirtualRegister::isConstant const):
+        (JSC::VirtualRegister::toConstantIndex const):
+        (JSC::operandIsLocal): Deleted.
+        (JSC::operandIsArgument): Deleted.
+        * bytecompiler/BytecodeGenerator.cpp:
+        (JSC::BytecodeGenerator::initializeNextParameter):
+        (JSC::BytecodeGenerator::initializeParameters):
+        (JSC::BytecodeGenerator::emitEqualityOpImpl):
+        (JSC::BytecodeGenerator::emitCallVarargs):
+        * bytecompiler/BytecodeGenerator.h:
+        (JSC::BytecodeGenerator::setUsesCheckpoints):
+        * bytecompiler/RegisterID.h:
+        (JSC::RegisterID::setIndex):
+        * dfg/DFGAbstractHeap.cpp:
+        (JSC::DFG::AbstractHeap::Payload::dumpAsOperand const):
+        (JSC::DFG::AbstractHeap::dump const):
+        * dfg/DFGAbstractHeap.h:
+        (JSC::DFG::AbstractHeap::Payload::Payload):
+        (JSC::DFG::AbstractHeap::AbstractHeap):
+        (JSC::DFG::AbstractHeap::operand const):
+        * dfg/DFGAbstractInterpreterInlines.h:
+        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
+        * dfg/DFGArgumentPosition.h:
+        (JSC::DFG::ArgumentPosition::dump):
+        * dfg/DFGArgumentsEliminationPhase.cpp:
+        * dfg/DFGArgumentsUtilities.cpp:
+        (JSC::DFG::argumentsInvolveStackSlot):
+        (JSC::DFG::emitCodeToGetArgumentsArrayLength):
+        * dfg/DFGArgumentsUtilities.h:
+        * dfg/DFGAtTailAbstractState.h:
+        (JSC::DFG::AtTailAbstractState::operand):
+        * dfg/DFGAvailabilityMap.cpp:
+        (JSC::DFG::AvailabilityMap::pruneByLiveness):
+        * dfg/DFGAvailabilityMap.h:
+        (JSC::DFG::AvailabilityMap::closeStartingWithLocal):
+        * dfg/DFGBasicBlock.cpp:
+        (JSC::DFG::BasicBlock::BasicBlock):
+        (JSC::DFG::BasicBlock::ensureTmps):
+        * dfg/DFGBasicBlock.h:
+        * dfg/DFGBlockInsertionSet.cpp:
+        (JSC::DFG::BlockInsertionSet::insert):
+        * dfg/DFGByteCodeParser.cpp:
+        (JSC::DFG::ByteCodeParser::ByteCodeParser):
+        (JSC::DFG::ByteCodeParser::ensureTmps):
+        (JSC::DFG::ByteCodeParser::progressToNextCheckpoint):
+        (JSC::DFG::ByteCodeParser::newVariableAccessData):
+        (JSC::DFG::ByteCodeParser::getDirect):
+        (JSC::DFG::ByteCodeParser::get):
+        (JSC::DFG::ByteCodeParser::setDirect):
+        (JSC::DFG::ByteCodeParser::injectLazyOperandSpeculation):
+        (JSC::DFG::ByteCodeParser::getLocalOrTmp):
+        (JSC::DFG::ByteCodeParser::setLocalOrTmp):
+        (JSC::DFG::ByteCodeParser::setArgument):
+        (JSC::DFG::ByteCodeParser::findArgumentPositionForLocal):
+        (JSC::DFG::ByteCodeParser::findArgumentPosition):
+        (JSC::DFG::ByteCodeParser::flushImpl):
+        (JSC::DFG::ByteCodeParser::flushForTerminalImpl):
+        (JSC::DFG::ByteCodeParser::flush):
+        (JSC::DFG::ByteCodeParser::flushDirect):
+        (JSC::DFG::ByteCodeParser::addFlushOrPhantomLocal):
+        (JSC::DFG::ByteCodeParser::phantomLocalDirect):
+        (JSC::DFG::ByteCodeParser::flushForTerminal):
+        (JSC::DFG::ByteCodeParser::addToGraph):
+        (JSC::DFG::ByteCodeParser::InlineStackEntry::remapOperand const):
+        (JSC::DFG::ByteCodeParser::DelayedSetLocal::DelayedSetLocal):
+        (JSC::DFG::ByteCodeParser::DelayedSetLocal::execute):
+        (JSC::DFG::ByteCodeParser::allocateTargetableBlock):
+        (JSC::DFG::ByteCodeParser::allocateUntargetableBlock):
+        (JSC::DFG::ByteCodeParser::handleRecursiveTailCall):
+        (JSC::DFG::ByteCodeParser::inlineCall):
+        (JSC::DFG::ByteCodeParser::handleVarargsInlining):
+        (JSC::DFG::ByteCodeParser::handleInlining):
+        (JSC::DFG::ByteCodeParser::parseBlock):
+        (JSC::DFG::ByteCodeParser::InlineStackEntry::InlineStackEntry):
+        (JSC::DFG::ByteCodeParser::parse):
+        (JSC::DFG::ByteCodeParser::getLocal): Deleted.
+        (JSC::DFG::ByteCodeParser::setLocal): Deleted.
+        * dfg/DFGCFAPhase.cpp:
+        (JSC::DFG::CFAPhase::injectOSR):
+        * dfg/DFGCPSRethreadingPhase.cpp:
+        (JSC::DFG::CPSRethreadingPhase::run):
+        (JSC::DFG::CPSRethreadingPhase::canonicalizeGetLocal):
+        (JSC::DFG::CPSRethreadingPhase::canonicalizeFlushOrPhantomLocalFor):
+        (JSC::DFG::CPSRethreadingPhase::canonicalizeFlushOrPhantomLocal):
+        (JSC::DFG::CPSRethreadingPhase::canonicalizeSet):
+        (JSC::DFG::CPSRethreadingPhase::canonicalizeLocalsInBlock):
+        (JSC::DFG::CPSRethreadingPhase::propagatePhis):
+        (JSC::DFG::CPSRethreadingPhase::phiStackFor):
+        * dfg/DFGCSEPhase.cpp:
+        * dfg/DFGCapabilities.cpp:
+        (JSC::DFG::capabilityLevel):
+        * dfg/DFGClobberize.h:
+        (JSC::DFG::clobberize):
+        * dfg/DFGCombinedLiveness.cpp:
+        (JSC::DFG::addBytecodeLiveness):
+        * dfg/DFGCommonData.cpp:
+        (JSC::DFG::CommonData::addCodeOrigin):
+        (JSC::DFG::CommonData::addUniqueCallSiteIndex):
+        (JSC::DFG::CommonData::lastCallSite const):
+        * dfg/DFGConstantFoldingPhase.cpp:
+        (JSC::DFG::ConstantFoldingPhase::foldConstants):
+        * dfg/DFGDoesGC.cpp:
+        (JSC::DFG::doesGC):
+        * dfg/DFGDriver.cpp:
+        (JSC::DFG::compileImpl):
+        * dfg/DFGFixupPhase.cpp:
+        (JSC::DFG::FixupPhase::fixupNode):
+        * dfg/DFGForAllKills.h:
+        (JSC::DFG::forAllKilledOperands):
+        (JSC::DFG::forAllKilledNodesAtNodeIndex):
+        (JSC::DFG::forAllKillsInBlock):
+        * dfg/DFGGraph.cpp:
+        (JSC::DFG::Graph::dump):
+        (JSC::DFG::Graph::dumpBlockHeader):
+        (JSC::DFG::Graph::substituteGetLocal):
+        (JSC::DFG::Graph::isLiveInBytecode):
+        (JSC::DFG::Graph::localsAndTmpsLiveInBytecode):
+        (JSC::DFG::Graph::methodOfGettingAValueProfileFor):
+        (JSC::DFG::Graph::localsLiveInBytecode): Deleted.
+        * dfg/DFGGraph.h:
+        (JSC::DFG::Graph::forAllLocalsAndTmpsLiveInBytecode):
+        (JSC::DFG::Graph::forAllLiveInBytecode):
+        (JSC::DFG::Graph::forAllLocalsLiveInBytecode): Deleted.
+        * dfg/DFGInPlaceAbstractState.cpp:
+        (JSC::DFG::InPlaceAbstractState::InPlaceAbstractState):
+        * dfg/DFGInPlaceAbstractState.h:
+        (JSC::DFG::InPlaceAbstractState::operand):
+        * dfg/DFGJITCompiler.cpp:
+        (JSC::DFG::JITCompiler::linkOSRExits):
+        (JSC::DFG::JITCompiler::noticeOSREntry):
+        * dfg/DFGJITCompiler.h:
+        (JSC::DFG::JITCompiler::emitStoreCallSiteIndex):
+        * dfg/DFGLiveCatchVariablePreservationPhase.cpp:
+        (JSC::DFG::LiveCatchVariablePreservationPhase::isValidFlushLocation):
+        (JSC::DFG::LiveCatchVariablePreservationPhase::handleBlockForTryCatch):
+        (JSC::DFG::LiveCatchVariablePreservationPhase::newVariableAccessData):
+        * dfg/DFGMovHintRemovalPhase.cpp:
+        * dfg/DFGNode.h:
+        (JSC::DFG::StackAccessData::StackAccessData):
+        (JSC::DFG::Node::hasArgumentsChild):
+        (JSC::DFG::Node::argumentsChild):
+        (JSC::DFG::Node::operand):
+        (JSC::DFG::Node::hasUnlinkedOperand):
+        (JSC::DFG::Node::unlinkedOperand):
+        (JSC::DFG::Node::hasLoadVarargsData):
+        (JSC::DFG::Node::local): Deleted.
+        (JSC::DFG::Node::hasUnlinkedLocal): Deleted.
+        (JSC::DFG::Node::unlinkedLocal): Deleted.
+        * dfg/DFGNodeType.h:
+        * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
+        (JSC::DFG::OSRAvailabilityAnalysisPhase::run):
+        (JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
+        * dfg/DFGOSREntry.cpp:
+        (JSC::DFG::prepareOSREntry):
+        (JSC::DFG::prepareCatchOSREntry):
+        * dfg/DFGOSREntrypointCreationPhase.cpp:
+        (JSC::DFG::OSREntrypointCreationPhase::run):
+        * dfg/DFGOSRExit.cpp:
+        (JSC::DFG::OSRExit::emitRestoreArguments):
+        (JSC::DFG::OSRExit::compileExit):
+        (JSC::DFG::jsValueFor): Deleted.
+        (JSC::DFG::restoreCalleeSavesFor): Deleted.
+        (JSC::DFG::saveCalleeSavesFor): Deleted.
+        (JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer): Deleted.
+        (JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer): Deleted.
+        (JSC::DFG::saveOrCopyCalleeSavesFor): Deleted.
+        (JSC::DFG::createDirectArgumentsDuringExit): Deleted.
+        (JSC::DFG::createClonedArgumentsDuringExit): Deleted.
+        (JSC::DFG::emitRestoreArguments): Deleted.
+        (JSC::DFG::OSRExit::executeOSRExit): Deleted.
+        (JSC::DFG::reifyInlinedCallFrames): Deleted.
+        (JSC::DFG::adjustAndJumpToTarget): Deleted.
+        (JSC::DFG::printOSRExit): Deleted.
+        * dfg/DFGOSRExit.h:
+        * dfg/DFGOSRExitBase.h:
+        (JSC::DFG::OSRExitBase::isExitingToCheckpointHandler const):
+        * dfg/DFGOSRExitCompilerCommon.cpp:
+        (JSC::DFG::callerReturnPC):
+        (JSC::DFG::reifyInlinedCallFrames):
+        (JSC::DFG::adjustAndJumpToTarget):
+        * dfg/DFGObjectAllocationSinkingPhase.cpp:
+        * dfg/DFGOpInfo.h:
+        (JSC::DFG::OpInfo::OpInfo):
+        * dfg/DFGOperations.cpp:
+        * dfg/DFGPhantomInsertionPhase.cpp:
+        * dfg/DFGPreciseLocalClobberize.h:
+        (JSC::DFG::PreciseLocalClobberizeAdaptor::read):
+        (JSC::DFG::PreciseLocalClobberizeAdaptor::write):
+        (JSC::DFG::PreciseLocalClobberizeAdaptor::def):
+        (JSC::DFG::PreciseLocalClobberizeAdaptor::callIfAppropriate):
+        * dfg/DFGPredictionInjectionPhase.cpp:
+        (JSC::DFG::PredictionInjectionPhase::run):
+        * dfg/DFGPredictionPropagationPhase.cpp:
+        * dfg/DFGPutStackSinkingPhase.cpp:
+        * dfg/DFGSSAConversionPhase.cpp:
+        (JSC::DFG::SSAConversionPhase::run):
+        * dfg/DFGSafeToExecute.h:
+        (JSC::DFG::safeToExecute):
+        * dfg/DFGSpeculativeJIT.cpp:
+        (JSC::DFG::SpeculativeJIT::compileMovHint):
+        (JSC::DFG::SpeculativeJIT::compileCurrentBlock):
+        (JSC::DFG::SpeculativeJIT::checkArgumentTypes):
+        (JSC::DFG::SpeculativeJIT::compileVarargsLength):
+        (JSC::DFG::SpeculativeJIT::compileLoadVarargs):
+        (JSC::DFG::SpeculativeJIT::compileForwardVarargs):
+        (JSC::DFG::SpeculativeJIT::compileCreateDirectArguments):
+        (JSC::DFG::SpeculativeJIT::compileGetArgumentCountIncludingThis):
+        * dfg/DFGSpeculativeJIT.h:
+        (JSC::DFG::SpeculativeJIT::recordSetLocal):
+        * dfg/DFGSpeculativeJIT32_64.cpp:
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGSpeculativeJIT64.cpp:
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGStackLayoutPhase.cpp:
+        (JSC::DFG::StackLayoutPhase::run):
+        (JSC::DFG::StackLayoutPhase::assign):
+        * dfg/DFGStrengthReductionPhase.cpp:
+        (JSC::DFG::StrengthReductionPhase::handleNode):
+        * dfg/DFGThunks.cpp:
+        (JSC::DFG::osrExitThunkGenerator): Deleted.
+        * dfg/DFGThunks.h:
+        * dfg/DFGTypeCheckHoistingPhase.cpp:
+        (JSC::DFG::TypeCheckHoistingPhase::run):
+        (JSC::DFG::TypeCheckHoistingPhase::disableHoistingAcrossOSREntries):
+        * dfg/DFGValidate.cpp:
+        * dfg/DFGVarargsForwardingPhase.cpp:
+        * dfg/DFGVariableAccessData.cpp:
+        (JSC::DFG::VariableAccessData::VariableAccessData):
+        (JSC::DFG::VariableAccessData::shouldUseDoubleFormatAccordingToVote):
+        (JSC::DFG::VariableAccessData::tallyVotesForShouldUseDoubleFormat):
+        (JSC::DFG::VariableAccessData::couldRepresentInt52Impl):
+        * dfg/DFGVariableAccessData.h:
+        (JSC::DFG::VariableAccessData::operand):
+        (JSC::DFG::VariableAccessData::local): Deleted.
+        * dfg/DFGVariableEvent.cpp:
+        (JSC::DFG::VariableEvent::dump const):
+        * dfg/DFGVariableEvent.h:
+        (JSC::DFG::VariableEvent::spill):
+        (JSC::DFG::VariableEvent::setLocal):
+        (JSC::DFG::VariableEvent::movHint):
+        (JSC::DFG::VariableEvent::spillRegister const):
+        (JSC::DFG::VariableEvent::operand const):
+        (JSC::DFG::VariableEvent::bytecodeRegister const): Deleted.
+        * dfg/DFGVariableEventStream.cpp:
+        (JSC::DFG::VariableEventStream::logEvent):
+        (JSC::DFG::VariableEventStream::reconstruct const):
+        * dfg/DFGVariableEventStream.h:
+        (JSC::DFG::VariableEventStream::appendAndLog):
+        * ftl/FTLCapabilities.cpp:
+        (JSC::FTL::canCompile):
+        * ftl/FTLForOSREntryJITCode.cpp:
+        (JSC::FTL::ForOSREntryJITCode::ForOSREntryJITCode):
+        * ftl/FTLLowerDFGToB3.cpp:
+        (JSC::FTL::DFG::LowerDFGToB3::lower):
+        (JSC::FTL::DFG::LowerDFGToB3::compileNode):
+        (JSC::FTL::DFG::LowerDFGToB3::compileExtractOSREntryLocal):
+        (JSC::FTL::DFG::LowerDFGToB3::compileGetStack):
+        (JSC::FTL::DFG::LowerDFGToB3::compileGetCallee):
+        (JSC::FTL::DFG::LowerDFGToB3::compileSetCallee):
+        (JSC::FTL::DFG::LowerDFGToB3::compileSetArgumentCountIncludingThis):
+        (JSC::FTL::DFG::LowerDFGToB3::compileVarargsLength):
+        (JSC::FTL::DFG::LowerDFGToB3::compileLoadVarargs):
+        (JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargs):
+        (JSC::FTL::DFG::LowerDFGToB3::getSpreadLengthFromInlineCallFrame):
+        (JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargsWithSpread):
+        (JSC::FTL::DFG::LowerDFGToB3::compileLogShadowChickenPrologue):
+        (JSC::FTL::DFG::LowerDFGToB3::getArgumentsLength):
+        (JSC::FTL::DFG::LowerDFGToB3::getCurrentCallee):
+        (JSC::FTL::DFG::LowerDFGToB3::callPreflight):
+        (JSC::FTL::DFG::LowerDFGToB3::appendOSRExitDescriptor):
+        (JSC::FTL::DFG::LowerDFGToB3::buildExitArguments):
+        (JSC::FTL::DFG::LowerDFGToB3::addressFor):
+        (JSC::FTL::DFG::LowerDFGToB3::payloadFor):
+        (JSC::FTL::DFG::LowerDFGToB3::tagFor):
+        * ftl/FTLOSREntry.cpp:
+        (JSC::FTL::prepareOSREntry):
+        * ftl/FTLOSRExit.cpp:
+        (JSC::FTL::OSRExitDescriptor::OSRExitDescriptor):
+        * ftl/FTLOSRExit.h:
+        * ftl/FTLOSRExitCompiler.cpp:
+        (JSC::FTL::compileStub):
+        * ftl/FTLOperations.cpp:
+        (JSC::FTL::operationMaterializeObjectInOSR):
+        * ftl/FTLOutput.cpp:
+        (JSC::FTL::Output::select):
+        * ftl/FTLOutput.h:
+        * ftl/FTLSelectPredictability.h: Copied from Source/JavaScriptCore/ftl/FTLForOSREntryJITCode.cpp.
+        * ftl/FTLSlowPathCall.h:
+        (JSC::FTL::callOperation):
+        * generator/Checkpoints.rb: Added.
+        * generator/Opcode.rb:
+        * generator/Section.rb:
+        * heap/Heap.cpp:
+        (JSC::Heap::gatherScratchBufferRoots):
+        * interpreter/CallFrame.cpp:
+        (JSC::CallFrame::callSiteAsRawBits const):
+        (JSC::CallFrame::unsafeCallSiteAsRawBits const):
+        (JSC::CallFrame::callSiteIndex const):
+        (JSC::CallFrame::unsafeCallSiteIndex const):
+        (JSC::CallFrame::setCurrentVPC):
+        (JSC::CallFrame::bytecodeIndex):
+        (JSC::CallFrame::codeOrigin):
+        * interpreter/CallFrame.h:
+        (JSC::CallSiteIndex::CallSiteIndex):
+        (JSC::CallSiteIndex::operator bool const):
+        (JSC::CallSiteIndex::operator== const):
+        (JSC::CallSiteIndex::bits const):
+        (JSC::CallSiteIndex::fromBits):
+        (JSC::CallSiteIndex::bytecodeIndex const):
+        (JSC::DisposableCallSiteIndex::DisposableCallSiteIndex):
+        (JSC::CallFrame::callee const):
+        (JSC::CallFrame::unsafeCallee const):
+        (JSC::CallFrame::addressOfCodeBlock const):
+        (JSC::CallFrame::argumentCountIncludingThis const):
+        (JSC::CallFrame::offsetFor):
+        (JSC::CallFrame::setArgumentCountIncludingThis):
+        (JSC::CallFrame::setReturnPC):
+        * interpreter/CallFrameInlines.h:
+        (JSC::CallFrame::r):
+        (JSC::CallFrame::uncheckedR):
+        (JSC::CallFrame::guaranteedJSValueCallee const):
+        (JSC::CallFrame::jsCallee const):
+        (JSC::CallFrame::codeBlock const):
+        (JSC::CallFrame::unsafeCodeBlock const):
+        (JSC::CallFrame::setCallee):
+        (JSC::CallFrame::setCodeBlock):
+        * interpreter/CheckpointOSRExitSideState.h: Copied from Source/JavaScriptCore/dfg/DFGThunks.h.
+        * interpreter/Interpreter.cpp:
+        (JSC::eval):
+        (JSC::sizeOfVarargs):
+        (JSC::loadVarargs):
+        (JSC::setupVarargsFrame):
+        (JSC::UnwindFunctor::operator() const):
+        (JSC::Interpreter::executeCall):
+        (JSC::Interpreter::executeConstruct):
+        * interpreter/Interpreter.h:
+        * interpreter/StackVisitor.cpp:
+        (JSC::StackVisitor::readInlinedFrame):
+        * jit/AssemblyHelpers.h:
+        (JSC::AssemblyHelpers::emitGetFromCallFrameHeaderPtr):
+        (JSC::AssemblyHelpers::emitGetFromCallFrameHeader32):
+        (JSC::AssemblyHelpers::emitGetFromCallFrameHeader64):
+        (JSC::AssemblyHelpers::emitPutToCallFrameHeader):
+        (JSC::AssemblyHelpers::emitPutToCallFrameHeaderBeforePrologue):
+        (JSC::AssemblyHelpers::emitPutPayloadToCallFrameHeaderBeforePrologue):
+        (JSC::AssemblyHelpers::emitPutTagToCallFrameHeaderBeforePrologue):
+        (JSC::AssemblyHelpers::addressFor):
+        (JSC::AssemblyHelpers::tagFor):
+        (JSC::AssemblyHelpers::payloadFor):
+        (JSC::AssemblyHelpers::calleeFrameSlot):
+        (JSC::AssemblyHelpers::calleeArgumentSlot):
+        (JSC::AssemblyHelpers::calleeFrameTagSlot):
+        (JSC::AssemblyHelpers::calleeFramePayloadSlot):
+        (JSC::AssemblyHelpers::calleeFrameCallerFrame):
+        (JSC::AssemblyHelpers::argumentCount):
+        * jit/CallFrameShuffler.cpp:
+        (JSC::CallFrameShuffler::CallFrameShuffler):
+        * jit/CallFrameShuffler.h:
+        (JSC::CallFrameShuffler::setCalleeJSValueRegs):
+        (JSC::CallFrameShuffler::assumeCalleeIsCell):
+        * jit/JIT.h:
+        * jit/JITArithmetic.cpp:
+        (JSC::JIT::emit_op_unsigned):
+        (JSC::JIT::emit_compareAndJump):
+        (JSC::JIT::emit_compareAndJumpImpl):
+        (JSC::JIT::emit_compareUnsignedAndJump):
+        (JSC::JIT::emit_compareUnsignedAndJumpImpl):
+        (JSC::JIT::emit_compareUnsigned):
+        (JSC::JIT::emit_compareUnsignedImpl):
+        (JSC::JIT::emit_compareAndJumpSlow):
+        (JSC::JIT::emit_compareAndJumpSlowImpl):
+        (JSC::JIT::emit_op_inc):
+        (JSC::JIT::emit_op_dec):
+        (JSC::JIT::emit_op_mod):
+        (JSC::JIT::emitBitBinaryOpFastPath):
+        (JSC::JIT::emit_op_bitnot):
+        (JSC::JIT::emitRightShiftFastPath):
+        (JSC::JIT::emitMathICFast):
+        (JSC::JIT::emitMathICSlow):
+        (JSC::JIT::emit_op_div):
+        * jit/JITCall.cpp:
+        (JSC::JIT::emitPutCallResult):
+        (JSC::JIT::compileSetupFrame):
+        (JSC::JIT::compileOpCall):
+        * jit/JITExceptions.cpp:
+        (JSC::genericUnwind):
+        * jit/JITInlines.h:
+        (JSC::JIT::isOperandConstantDouble):
+        (JSC::JIT::getConstantOperand):
+        (JSC::JIT::emitPutIntToCallFrameHeader):
+        (JSC::JIT::appendCallWithExceptionCheckSetJSValueResult):
+        (JSC::JIT::appendCallWithExceptionCheckSetJSValueResultWithProfile):
+        (JSC::JIT::linkSlowCaseIfNotJSCell):
+        (JSC::JIT::isOperandConstantChar):
+        (JSC::JIT::getOperandConstantInt):
+        (JSC::JIT::getOperandConstantDouble):
+        (JSC::JIT::emitInitRegister):
+        (JSC::JIT::emitLoadTag):
+        (JSC::JIT::emitLoadPayload):
+        (JSC::JIT::emitGet):
+        (JSC::JIT::emitPutVirtualRegister):
+        (JSC::JIT::emitLoad):
+        (JSC::JIT::emitLoad2):
+        (JSC::JIT::emitLoadDouble):
+        (JSC::JIT::emitLoadInt32ToDouble):
+        (JSC::JIT::emitStore):
+        (JSC::JIT::emitStoreInt32):
+        (JSC::JIT::emitStoreCell):
+        (JSC::JIT::emitStoreBool):
+        (JSC::JIT::emitStoreDouble):
+        (JSC::JIT::emitJumpSlowCaseIfNotJSCell):
+        (JSC::JIT::isOperandConstantInt):
+        (JSC::JIT::emitGetVirtualRegister):
+        (JSC::JIT::emitGetVirtualRegisters):
+        * jit/JITOpcodes.cpp:
+        (JSC::JIT::emit_op_mov):
+        (JSC::JIT::emit_op_end):
+        (JSC::JIT::emit_op_new_object):
+        (JSC::JIT::emitSlow_op_new_object):
+        (JSC::JIT::emit_op_overrides_has_instance):
+        (JSC::JIT::emit_op_instanceof):
+        (JSC::JIT::emitSlow_op_instanceof):
+        (JSC::JIT::emit_op_is_empty):
+        (JSC::JIT::emit_op_is_undefined):
+        (JSC::JIT::emit_op_is_undefined_or_null):
+        (JSC::JIT::emit_op_is_boolean):
+        (JSC::JIT::emit_op_is_number):
+        (JSC::JIT::emit_op_is_cell_with_type):
+        (JSC::JIT::emit_op_is_object):
+        (JSC::JIT::emit_op_ret):
+        (JSC::JIT::emit_op_to_primitive):
+        (JSC::JIT::emit_op_set_function_name):
+        (JSC::JIT::emit_op_not):
+        (JSC::JIT::emit_op_jfalse):
+        (JSC::JIT::emit_op_jeq_null):
+        (JSC::JIT::emit_op_jneq_null):
+        (JSC::JIT::emit_op_jundefined_or_null):
+        (JSC::JIT::emit_op_jnundefined_or_null):
+        (JSC::JIT::emit_op_jneq_ptr):
+        (JSC::JIT::emit_op_eq):
+        (JSC::JIT::emit_op_jeq):
+        (JSC::JIT::emit_op_jtrue):
+        (JSC::JIT::emit_op_neq):
+        (JSC::JIT::emit_op_jneq):
+        (JSC::JIT::emit_op_throw):
+        (JSC::JIT::compileOpStrictEq):
+        (JSC::JIT::compileOpStrictEqJump):
+        (JSC::JIT::emit_op_to_number):
+        (JSC::JIT::emit_op_to_numeric):
+        (JSC::JIT::emit_op_to_string):
+        (JSC::JIT::emit_op_to_object):
+        (JSC::JIT::emit_op_catch):
+        (JSC::JIT::emit_op_get_parent_scope):
+        (JSC::JIT::emit_op_switch_imm):
+        (JSC::JIT::emit_op_switch_char):
+        (JSC::JIT::emit_op_switch_string):
+        (JSC::JIT::emit_op_eq_null):
+        (JSC::JIT::emit_op_neq_null):
+        (JSC::JIT::emit_op_enter):
+        (JSC::JIT::emit_op_get_scope):
+        (JSC::JIT::emit_op_to_this):
+        (JSC::JIT::emit_op_create_this):
+        (JSC::JIT::emit_op_check_tdz):
+        (JSC::JIT::emitSlow_op_eq):
+        (JSC::JIT::emitSlow_op_neq):
+        (JSC::JIT::emitSlow_op_instanceof_custom):
+        (JSC::JIT::emit_op_new_regexp):
+        (JSC::JIT::emitNewFuncCommon):
+        (JSC::JIT::emitNewFuncExprCommon):
+        (JSC::JIT::emit_op_new_array):
+        (JSC::JIT::emit_op_new_array_with_size):
+        (JSC::JIT::emit_op_has_structure_property):
+        (JSC::JIT::emit_op_has_indexed_property):
+        (JSC::JIT::emitSlow_op_has_indexed_property):
+        (JSC::JIT::emit_op_get_direct_pname):
+        (JSC::JIT::emit_op_enumerator_structure_pname):
+        (JSC::JIT::emit_op_enumerator_generic_pname):
+        (JSC::JIT::emit_op_profile_type):
+        (JSC::JIT::emit_op_log_shadow_chicken_prologue):
+        (JSC::JIT::emit_op_log_shadow_chicken_tail):
+        (JSC::JIT::emit_op_argument_count):
+        (JSC::JIT::emit_op_get_rest_length):
+        (JSC::JIT::emit_op_get_argument):
+        * jit/JITOpcodes32_64.cpp:
+        (JSC::JIT::emit_op_catch):
+        * jit/JITOperations.cpp:
+        * jit/JITPropertyAccess.cpp:
+        (JSC::JIT::emit_op_get_by_val):
+        (JSC::JIT::emitSlow_op_get_by_val):
+        (JSC::JIT::emit_op_put_by_val):
+        (JSC::JIT::emitGenericContiguousPutByVal):
+        (JSC::JIT::emitArrayStoragePutByVal):
+        (JSC::JIT::emitPutByValWithCachedId):
+        (JSC::JIT::emitSlow_op_put_by_val):
+        (JSC::JIT::emit_op_put_getter_by_id):
+        (JSC::JIT::emit_op_put_setter_by_id):
+        (JSC::JIT::emit_op_put_getter_setter_by_id):
+        (JSC::JIT::emit_op_put_getter_by_val):
+        (JSC::JIT::emit_op_put_setter_by_val):
+        (JSC::JIT::emit_op_del_by_id):
+        (JSC::JIT::emit_op_del_by_val):
+        (JSC::JIT::emit_op_try_get_by_id):
+        (JSC::JIT::emitSlow_op_try_get_by_id):
+        (JSC::JIT::emit_op_get_by_id_direct):
+        (JSC::JIT::emitSlow_op_get_by_id_direct):
+        (JSC::JIT::emit_op_get_by_id):
+        (JSC::JIT::emit_op_get_by_id_with_this):
+        (JSC::JIT::emitSlow_op_get_by_id):
+        (JSC::JIT::emitSlow_op_get_by_id_with_this):
+        (JSC::JIT::emit_op_put_by_id):
+        (JSC::JIT::emit_op_in_by_id):
+        (JSC::JIT::emitSlow_op_in_by_id):
+        (JSC::JIT::emitResolveClosure):
+        (JSC::JIT::emit_op_resolve_scope):
+        (JSC::JIT::emitLoadWithStructureCheck):
+        (JSC::JIT::emitGetClosureVar):
+        (JSC::JIT::emit_op_get_from_scope):
+        (JSC::JIT::emitSlow_op_get_from_scope):
+        (JSC::JIT::emitPutGlobalVariable):
+        (JSC::JIT::emitPutGlobalVariableIndirect):
+        (JSC::JIT::emitPutClosureVar):
+        (JSC::JIT::emit_op_put_to_scope):
+        (JSC::JIT::emit_op_get_from_arguments):
+        (JSC::JIT::emit_op_put_to_arguments):
+        (JSC::JIT::emitWriteBarrier):
+        (JSC::JIT::emit_op_get_internal_field):
+        (JSC::JIT::emit_op_put_internal_field):
+        (JSC::JIT::emitIntTypedArrayPutByVal):
+        (JSC::JIT::emitFloatTypedArrayPutByVal):
+        * jit/JSInterfaceJIT.h:
+        (JSC::JSInterfaceJIT::emitLoadJSCell):
+        (JSC::JSInterfaceJIT::emitJumpIfNotJSCell):
+        (JSC::JSInterfaceJIT::emitLoadInt32):
+        (JSC::JSInterfaceJIT::emitLoadDouble):
+        (JSC::JSInterfaceJIT::emitGetFromCallFrameHeaderPtr):
+        (JSC::JSInterfaceJIT::emitPutToCallFrameHeader):
+        (JSC::JSInterfaceJIT::emitPutCellToCallFrameHeader):
+        * jit/SetupVarargsFrame.cpp:
+        (JSC::emitSetupVarargsFrameFastCase):
+        * jit/SpecializedThunkJIT.h:
+        (JSC::SpecializedThunkJIT::loadDoubleArgument):
+        (JSC::SpecializedThunkJIT::loadCellArgument):
+        (JSC::SpecializedThunkJIT::loadInt32Argument):
+        * jit/ThunkGenerators.cpp:
+        (JSC::absThunkGenerator):
+        * llint/LLIntSlowPaths.cpp:
+        (JSC::LLInt::getNonConstantOperand):
+        (JSC::LLInt::getOperand):
+        (JSC::LLInt::genericCall):
+        (JSC::LLInt::varargsSetup):
+        (JSC::LLInt::commonCallEval):
+        (JSC::LLInt::LLINT_SLOW_PATH_DECL):
+        (JSC::LLInt::handleVarargsCheckpoint):
+        (JSC::LLInt::dispatchToNextInstruction):
+        (JSC::LLInt::slow_path_checkpoint_osr_exit_from_inlined_call):
+        (JSC::LLInt::slow_path_checkpoint_osr_exit):
+        (JSC::LLInt::llint_throw_stack_overflow_error):
+        * llint/LLIntSlowPaths.h:
+        * llint/LowLevelInterpreter.asm:
+        * llint/LowLevelInterpreter32_64.asm:
+        * llint/LowLevelInterpreter64.asm:
+        * runtime/ArgList.h:
+        (JSC::MarkedArgumentBuffer::fill):
+        * runtime/CachedTypes.cpp:
+        (JSC::CachedCodeBlock::hasCheckpoints const):
+        (JSC::UnlinkedCodeBlock::UnlinkedCodeBlock):
+        (JSC::CachedCodeBlock<CodeBlockType>::encode):
+        * runtime/CommonSlowPaths.cpp:
+        (JSC::SLOW_PATH_DECL):
+        * runtime/ConstructData.cpp:
+        (JSC::construct):
+        * runtime/ConstructData.h:
+        * runtime/DirectArguments.cpp:
+        (JSC::DirectArguments::copyToArguments):
+        * runtime/DirectArguments.h:
+        * runtime/GenericArguments.h:
+        * runtime/GenericArgumentsInlines.h:
+        (JSC::GenericArguments<Type>::copyToArguments):
+        * runtime/JSArray.cpp:
+        (JSC::JSArray::copyToArguments):
+        * runtime/JSArray.h:
+        * runtime/JSImmutableButterfly.cpp:
+        (JSC::JSImmutableButterfly::copyToArguments):
+        * runtime/JSImmutableButterfly.h:
+        * runtime/JSLock.cpp:
+        (JSC::JSLock::willReleaseLock):
+        * runtime/ModuleProgramExecutable.cpp:
+        (JSC::ModuleProgramExecutable::create):
+        * runtime/Options.cpp:
+        (JSC::recomputeDependentOptions):
+        * runtime/ScopedArguments.cpp:
+        (JSC::ScopedArguments::copyToArguments):
+        * runtime/ScopedArguments.h:
+        * runtime/VM.cpp:
+        (JSC::VM::scanSideState const):
+        (JSC::VM::addCheckpointOSRSideState):
+        (JSC::VM::findCheckpointOSRSideState):
+        * runtime/VM.h:
+        (JSC::VM::hasCheckpointOSRSideState const):
+        * tools/VMInspector.cpp:
+        (JSC::VMInspector::dumpRegisters):
+        * wasm/WasmFunctionCodeBlock.h:
+        (JSC::Wasm::FunctionCodeBlock::getConstant const):
+        (JSC::Wasm::FunctionCodeBlock::getConstantType const):
+        * wasm/WasmLLIntGenerator.cpp:
+        (JSC::Wasm::LLIntGenerator::setUsesCheckpoints const):
+        * wasm/WasmOperations.cpp:
+        (JSC::Wasm::operationWasmToJSException):
+        * wasm/WasmSlowPaths.cpp:
+
+2020-01-16  Keith Miller  <keith_miller@apple.com>
+
         Revert 254725 since it breaks tests
         https://bugs.webkit.org/show_bug.cgi?id=206391
 
diff --git a/Source/JavaScriptCore/DerivedSources-input.xcfilelist b/Source/JavaScriptCore/DerivedSources-input.xcfilelist
index f8c1b4a..9ead653 100644
--- a/Source/JavaScriptCore/DerivedSources-input.xcfilelist
+++ b/Source/JavaScriptCore/DerivedSources-input.xcfilelist
@@ -64,6 +64,7 @@
 $(PROJECT_DIR)/disassembler/udis86/ud_itab.py
 $(PROJECT_DIR)/generator/Argument.rb
 $(PROJECT_DIR)/generator/Assertion.rb
+$(PROJECT_DIR)/generator/Checkpoints.rb
 $(PROJECT_DIR)/generator/DSL.rb
 $(PROJECT_DIR)/generator/Fits.rb
 $(PROJECT_DIR)/generator/GeneratedFile.rb
diff --git a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
index 3940f1d..66f935d 100644
--- a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
+++ b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
@@ -931,6 +931,8 @@
 		5333BBDB2110F7D2007618EC /* DFGSpeculativeJIT32_64.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 86880F1B14328BB900B08D42 /* DFGSpeculativeJIT32_64.cpp */; };
 		5333BBDC2110F7D9007618EC /* DFGSpeculativeJIT.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 86EC9DC21328DF82002B2AD7 /* DFGSpeculativeJIT.cpp */; };
 		5333BBDD2110F7E1007618EC /* DFGSpeculativeJIT64.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 86880F4C14353B2100B08D42 /* DFGSpeculativeJIT64.cpp */; };
+		5338E2A72396EFFB00C61BAD /* CheckpointOSRExitSideState.h in Headers */ = {isa = PBXBuildFile; fileRef = 5338E2A62396EFEC00C61BAD /* CheckpointOSRExitSideState.h */; settings = {ATTRIBUTES = (Private, ); }; };
+		5338EBA323AB04B800382662 /* FTLSelectPredictability.h in Headers */ = {isa = PBXBuildFile; fileRef = 5338EBA223AB04A300382662 /* FTLSelectPredictability.h */; };
 		5341FC721DAC343C00E7E4D7 /* B3WasmBoundsCheckValue.h in Headers */ = {isa = PBXBuildFile; fileRef = 5341FC711DAC343C00E7E4D7 /* B3WasmBoundsCheckValue.h */; };
 		534638711E70CF3D00F12AC1 /* JSRunLoopTimer.h in Headers */ = {isa = PBXBuildFile; fileRef = 534638701E70CF3D00F12AC1 /* JSRunLoopTimer.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		534638751E70DDEC00F12AC1 /* PromiseTimer.h in Headers */ = {isa = PBXBuildFile; fileRef = 534638741E70DDEC00F12AC1 /* PromiseTimer.h */; settings = {ATTRIBUTES = (Private, ); }; };
@@ -3601,6 +3603,8 @@
 		5318045D22EAAF0F004A7342 /* B3ExtractValue.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; name = B3ExtractValue.cpp; path = b3/B3ExtractValue.cpp; sourceTree = "<group>"; };
 		531D4E191F59CDD200EC836C /* testapi.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; name = testapi.cpp; path = API/tests/testapi.cpp; sourceTree = "<group>"; };
 		532631B3218777A5007B8191 /* JavaScriptCore.modulemap */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = "sourcecode.module-map"; path = JavaScriptCore.modulemap; sourceTree = "<group>"; };
+		5338E2A62396EFEC00C61BAD /* CheckpointOSRExitSideState.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CheckpointOSRExitSideState.h; sourceTree = "<group>"; };
+		5338EBA223AB04A300382662 /* FTLSelectPredictability.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLSelectPredictability.h; path = ftl/FTLSelectPredictability.h; sourceTree = "<group>"; };
 		533B15DE1DC7F463004D500A /* WasmOps.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WasmOps.h; sourceTree = "<group>"; };
 		5341FC6F1DAC33E500E7E4D7 /* B3WasmBoundsCheckValue.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = B3WasmBoundsCheckValue.cpp; path = b3/B3WasmBoundsCheckValue.cpp; sourceTree = "<group>"; };
 		5341FC711DAC343C00E7E4D7 /* B3WasmBoundsCheckValue.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = B3WasmBoundsCheckValue.h; path = b3/B3WasmBoundsCheckValue.h; sourceTree = "<group>"; };
@@ -3615,6 +3619,9 @@
 		534C457A1BC703DC007476A7 /* TypedArrayConstructor.js */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.javascript; path = TypedArrayConstructor.js; sourceTree = "<group>"; };
 		534C457B1BC72411007476A7 /* JSTypedArrayViewConstructor.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSTypedArrayViewConstructor.h; sourceTree = "<group>"; };
 		534C457D1BC72549007476A7 /* JSTypedArrayViewConstructor.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSTypedArrayViewConstructor.cpp; sourceTree = "<group>"; };
+		534D9BF82363C55D0054524D /* SetIteratorPrototype.js */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.javascript; path = SetIteratorPrototype.js; sourceTree = "<group>"; };
+		534D9BF92363C55D0054524D /* IteratorHelpers.js */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.javascript; path = IteratorHelpers.js; sourceTree = "<group>"; };
+		534D9BFA2363C55D0054524D /* MapIteratorPrototype.js */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.javascript; path = MapIteratorPrototype.js; sourceTree = "<group>"; };
 		534E034D1E4D4B1600213F64 /* AccessCase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AccessCase.h; sourceTree = "<group>"; };
 		534E034F1E4D95ED00213F64 /* AccessCase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = AccessCase.cpp; sourceTree = "<group>"; };
 		534E03531E53BD2900213F64 /* IntrinsicGetterAccessCase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = IntrinsicGetterAccessCase.h; sourceTree = "<group>"; };
@@ -5534,6 +5541,7 @@
 				0F485326187DFDEC0083B687 /* FTLRecoveryOpcode.h */,
 				0FCEFAA91804C13E00472CE4 /* FTLSaveRestore.cpp */,
 				0FCEFAAA1804C13E00472CE4 /* FTLSaveRestore.h */,
+				5338EBA223AB04A300382662 /* FTLSelectPredictability.h */,
 				0F25F1AA181635F300522F39 /* FTLSlowPathCall.cpp */,
 				0F25F1AB181635F300522F39 /* FTLSlowPathCall.h */,
 				0F25F1AC181635F300522F39 /* FTLSlowPathCallKey.cpp */,
@@ -5960,6 +5968,7 @@
 				1429D8DC0ED2205B00B89619 /* CallFrame.h */,
 				A7F869EC0F95C2EC00558697 /* CallFrameClosure.h */,
 				FEA3BBA7212B655800E93AD1 /* CallFrameInlines.h */,
+				5338E2A62396EFEC00C61BAD /* CheckpointOSRExitSideState.h */,
 				1429D85B0ED218E900B89619 /* CLoopStack.cpp */,
 				14D792640DAA03FB001A9F05 /* CLoopStack.h */,
 				A7C1EAEB17987AB600299DB2 /* CLoopStackInlines.h */,
@@ -8600,7 +8609,9 @@
 				A52704851D027C8800354C37 /* GlobalOperations.js */,
 				E35E03611B7AB4850073AD2A /* InspectorInstrumentationObject.js */,
 				E33F50881B844A1A00413856 /* InternalPromiseConstructor.js */,
+				534D9BF92363C55D0054524D /* IteratorHelpers.js */,
 				7CF9BC5B1B65D9A3009DB1EF /* IteratorPrototype.js */,
+				534D9BFA2363C55D0054524D /* MapIteratorPrototype.js */,
 				7035587C1C418419004BD7BF /* MapPrototype.js */,
 				E30677971B8BC6F5003F87F0 /* ModuleLoader.js */,
 				A52704861D027C8800354C37 /* NumberConstructor.js */,
@@ -8611,6 +8622,7 @@
 				7CF9BC5F1B65D9B1009DB1EF /* ReflectObject.js */,
 				654788421C937D2C000781A0 /* RegExpPrototype.js */,
 				84925A9C22B30CC800D1DFFF /* RegExpStringIteratorPrototype.js */,
+				534D9BF82363C55D0054524D /* SetIteratorPrototype.js */,
 				7035587D1C418419004BD7BF /* SetPrototype.js */,
 				7CF9BC601B65D9B1009DB1EF /* StringConstructor.js */,
 				7CF9BC611B65D9B1009DB1EF /* StringIteratorPrototype.js */,
@@ -9113,6 +9125,7 @@
 				FE1BD0211E72027900134BC9 /* CellProfile.h in Headers */,
 				FEC160322339E9F900A04CB8 /* CellSize.h in Headers */,
 				0F1C3DDA1BBCE09E00E523E4 /* CellState.h in Headers */,
+				5338E2A72396EFFB00C61BAD /* CheckpointOSRExitSideState.h in Headers */,
 				BC6AAAE50E1F426500AD87D8 /* ClassInfo.h in Headers */,
 				0FE050261AA9095600D33B33 /* ClonedArguments.h in Headers */,
 				BC18C45E0E16F5CD00B34460 /* CLoopStack.h in Headers */,
@@ -9458,6 +9471,7 @@
 				0F9D4C111C3E2C74006CD984 /* FTLPatchpointExceptionHandle.h in Headers */,
 				0F48532A187DFDEC0083B687 /* FTLRecoveryOpcode.h in Headers */,
 				0FCEFAAC1804C13E00472CE4 /* FTLSaveRestore.h in Headers */,
+				5338EBA323AB04B800382662 /* FTLSelectPredictability.h in Headers */,
 				0F25F1B2181635F300522F39 /* FTLSlowPathCall.h in Headers */,
 				0F25F1B4181635F300522F39 /* FTLSlowPathCallKey.h in Headers */,
 				E322E5A71DA644A8006E7709 /* FTLSnippetParams.h in Headers */,
diff --git a/Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.h b/Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.h
index 56d8a07..6884083 100644
--- a/Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.h
+++ b/Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.h
@@ -239,13 +239,6 @@
         ASSERT_VALID_CODE_POINTER(m_value);
     }
 
-    template<PtrTag tag>
-    explicit ReturnAddressPtr(FunctionPtr<tag> function)
-        : m_value(untagCodePtr<tag>(function.executableAddress()))
-    {
-        ASSERT_VALID_CODE_POINTER(m_value);
-    }
-
     const void* value() const
     {
         return m_value;
diff --git a/Source/JavaScriptCore/assembler/ProbeFrame.h b/Source/JavaScriptCore/assembler/ProbeFrame.h
index cab368d..4229a0d 100644
--- a/Source/JavaScriptCore/assembler/ProbeFrame.h
+++ b/Source/JavaScriptCore/assembler/ProbeFrame.h
@@ -46,14 +46,14 @@
         return get<T>(CallFrame::argumentOffset(argument) * sizeof(Register));
     }
     template<typename T = JSValue>
-    T operand(int operand)
+    T operand(VirtualRegister operand)
     {
-        return get<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register));
+        return get<T>(operand.offset() * sizeof(Register));
     }
     template<typename T = JSValue>
-    T operand(int operand, ptrdiff_t offset)
+    T operand(VirtualRegister operand, ptrdiff_t offset)
     {
-        return get<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register) + offset);
+        return get<T>(operand.offset() * sizeof(Register) + offset);
     }
 
     template<typename T>
@@ -62,14 +62,14 @@
         return set<T>(CallFrame::argumentOffset(argument) * sizeof(Register), value);
     }
     template<typename T>
-    void setOperand(int operand, T value)
+    void setOperand(VirtualRegister operand, T value)
     {
-        set<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register), value);
+        set<T>(operand.offset() * sizeof(Register), value);
     }
     template<typename T>
-    void setOperand(int operand, ptrdiff_t offset, T value)
+    void setOperand(VirtualRegister operand, ptrdiff_t offset, T value)
     {
-        set<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register) + offset, value);
+        set<T>(operand.offset() * sizeof(Register) + offset, value);
     }
 
     template<typename T = JSValue>
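The ProbeFrame hunks above replace raw `int` operand parameters with `VirtualRegister`, hoisting the `static_cast` out of every accessor so the byte-offset math operates on an already-typed register. A minimal sketch of that idea follows; the `Register` and `VirtualRegister` here are simplified stand-ins for illustration, not JSC's real classes:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Simplified stand-ins for JSC's types, for illustration only.
struct Register { uint64_t bits; };

class VirtualRegister {
public:
    explicit VirtualRegister(int offset) : m_offset(offset) { }
    int offset() const { return m_offset; }
private:
    int m_offset;
};

// Before the patch, callers passed a raw int and each accessor cast it to
// VirtualRegister. Taking VirtualRegister directly makes the conversion a
// type-checked, single-site decision at the call boundary.
inline ptrdiff_t byteOffset(VirtualRegister operand)
{
    return operand.offset() * sizeof(Register);
}
```

With an 8-byte `Register`, `byteOffset(VirtualRegister(3))` evaluates to 24, the same value the old `static_cast<VirtualRegister>(operand).offset() * sizeof(Register)` expression produced.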
diff --git a/Source/JavaScriptCore/b3/testb3.h b/Source/JavaScriptCore/b3/testb3.h
index 35d0175..41bc961 100644
--- a/Source/JavaScriptCore/b3/testb3.h
+++ b/Source/JavaScriptCore/b3/testb3.h
@@ -263,20 +263,20 @@
 }
 
 template<typename Type>
-struct Operand {
+struct B3Operand {
     const char* name;
     Type value;
 };
 
-typedef Operand<int64_t> Int64Operand;
-typedef Operand<int32_t> Int32Operand;
-typedef Operand<int16_t> Int16Operand;
-typedef Operand<int8_t> Int8Operand;
+typedef B3Operand<int64_t> Int64Operand;
+typedef B3Operand<int32_t> Int32Operand;
+typedef B3Operand<int16_t> Int16Operand;
+typedef B3Operand<int8_t> Int8Operand;
 
-#define MAKE_OPERAND(value) Operand<decltype(value)> { #value, value }
+#define MAKE_OPERAND(value) B3Operand<decltype(value)> { #value, value }
 
 template<typename FloatType>
-void populateWithInterestingValues(Vector<Operand<FloatType>>& operands)
+void populateWithInterestingValues(Vector<B3Operand<FloatType>>& operands)
 {
     operands.append({ "0.", static_cast<FloatType>(0.) });
     operands.append({ "-0.", static_cast<FloatType>(-0.) });
@@ -302,9 +302,9 @@
 }
 
 template<typename FloatType>
-Vector<Operand<FloatType>> floatingPointOperands()
+Vector<B3Operand<FloatType>> floatingPointOperands()
 {
-    Vector<Operand<FloatType>> operands;
+    Vector<B3Operand<FloatType>> operands;
     populateWithInterestingValues(operands);
     return operands;
 };
diff --git a/Source/JavaScriptCore/bytecode/AccessCase.cpp b/Source/JavaScriptCore/bytecode/AccessCase.cpp
index c8b52b8..894d372 100644
--- a/Source/JavaScriptCore/bytecode/AccessCase.cpp
+++ b/Source/JavaScriptCore/bytecode/AccessCase.cpp
@@ -1520,7 +1520,7 @@
 
         jit.store32(
             CCallHelpers::TrustedImm32(state.callSiteIndexForExceptionHandlingOrOriginal().bits()),
-            CCallHelpers::tagFor(static_cast<VirtualRegister>(CallFrameSlot::argumentCountIncludingThis)));
+            CCallHelpers::tagFor(CallFrameSlot::argumentCountIncludingThis));
 
         if (m_type == Getter || m_type == Setter) {
             auto& access = this->as<GetterSetterAccessCase>();
@@ -1604,13 +1604,13 @@
 
             jit.storeCell(
                 thisGPR,
-                calleeFrame.withOffset(virtualRegisterForArgument(0).offset() * sizeof(Register)));
+                calleeFrame.withOffset(virtualRegisterForArgumentIncludingThis(0).offset() * sizeof(Register)));
 
             if (m_type == Setter) {
                 jit.storeValue(
                     valueRegs,
                     calleeFrame.withOffset(
-                        virtualRegisterForArgument(1).offset() * sizeof(Register)));
+                        virtualRegisterForArgumentIncludingThis(1).offset() * sizeof(Register)));
             }
 
             CCallHelpers::Jump slowCase = jit.branchPtrWithPatch(
@@ -1809,7 +1809,7 @@
                 jit.store32(
                     CCallHelpers::TrustedImm32(
                         state.callSiteIndexForExceptionHandlingOrOriginal().bits()),
-                    CCallHelpers::tagFor(static_cast<VirtualRegister>(CallFrameSlot::argumentCountIncludingThis)));
+                    CCallHelpers::tagFor(CallFrameSlot::argumentCountIncludingThis));
                 
                 jit.makeSpaceOnStackForCCall();
                 
diff --git a/Source/JavaScriptCore/bytecode/AccessCaseSnippetParams.cpp b/Source/JavaScriptCore/bytecode/AccessCaseSnippetParams.cpp
index b1d404a..69b5ee1 100644
--- a/Source/JavaScriptCore/bytecode/AccessCaseSnippetParams.cpp
+++ b/Source/JavaScriptCore/bytecode/AccessCaseSnippetParams.cpp
@@ -55,7 +55,7 @@
 
         jit.store32(
             CCallHelpers::TrustedImm32(state.callSiteIndexForExceptionHandlingOrOriginal().bits()),
-            CCallHelpers::tagFor(static_cast<VirtualRegister>(CallFrameSlot::argumentCountIncludingThis)));
+            CCallHelpers::tagFor(CallFrameSlot::argumentCountIncludingThis));
 
         jit.makeSpaceOnStackForCCall();
 
diff --git a/Source/JavaScriptCore/bytecode/BytecodeDumper.cpp b/Source/JavaScriptCore/bytecode/BytecodeDumper.cpp
index 6058fb0..8365115f 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeDumper.cpp
+++ b/Source/JavaScriptCore/bytecode/BytecodeDumper.cpp
@@ -63,7 +63,7 @@
 
 void BytecodeDumperBase::dumpValue(VirtualRegister reg)
 {
-    m_out.printf("%s", registerName(reg.offset()).data());
+    m_out.printf("%s", registerName(reg).data());
 }
 
 template<typename Traits>
@@ -83,12 +83,12 @@
 #endif // ENABLE(WEBASSEMBLY)
 
 template<class Block>
-CString BytecodeDumper<Block>::registerName(int r) const
+CString BytecodeDumper<Block>::registerName(VirtualRegister r) const
 {
-    if (isConstantRegisterIndex(r))
+    if (r.isConstant())
         return constantName(r);
 
-    return toCString(VirtualRegister(r));
+    return toCString(r);
 }
 
 template <class Block>
@@ -98,10 +98,10 @@
 }
 
 template<class Block>
-CString BytecodeDumper<Block>::constantName(int index) const
+CString BytecodeDumper<Block>::constantName(VirtualRegister reg) const
 {
-    auto value = block()->getConstant(index);
-    return toCString(value, "(", VirtualRegister(index), ")");
+    auto value = block()->getConstant(reg);
+    return toCString(value, "(", reg, ")");
 }
 
 template<class Block>
@@ -335,7 +335,7 @@
     }
 }
 
-CString BytecodeDumper::constantName(int index) const
+CString BytecodeDumper::constantName(VirtualRegister index) const
 {
     FunctionCodeBlock* block = this->block();
     auto value = formatConstant(block->getConstantType(index), block->getConstant(index));
diff --git a/Source/JavaScriptCore/bytecode/BytecodeDumper.h b/Source/JavaScriptCore/bytecode/BytecodeDumper.h
index 0d71675..edb1129 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeDumper.h
+++ b/Source/JavaScriptCore/bytecode/BytecodeDumper.h
@@ -61,7 +61,7 @@
     void dumpValue(T v) { m_out.print(v); }
 
 protected:
-    virtual CString registerName(int) const = 0;
+    virtual CString registerName(VirtualRegister) const = 0;
     virtual int outOfLineJumpOffset(InstructionStream::Offset) const = 0;
 
     BytecodeDumperBase(PrintStream& out)
@@ -91,11 +91,11 @@
 
     void dumpBytecode(const InstructionStream::Ref& it, const ICStatusMap&);
 
-    CString registerName(int r) const override;
+    CString registerName(VirtualRegister) const override;
     int outOfLineJumpOffset(InstructionStream::Offset offset) const override;
 
 private:
-    virtual CString constantName(int index) const;
+    virtual CString constantName(VirtualRegister) const;
 
     Block* m_block;
 };
@@ -135,7 +135,7 @@
     using JSC::BytecodeDumper<FunctionCodeBlock>::BytecodeDumper;
 
     void dumpConstants();
-    CString constantName(int index) const override;
+    CString constantName(VirtualRegister index) const override;
     CString formatConstant(Type, uint64_t) const;
 };
 
diff --git a/Source/JavaScriptCore/bytecode/BytecodeGeneratorification.cpp b/Source/JavaScriptCore/bytecode/BytecodeGeneratorification.cpp
index f5852d9..5134318 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeGeneratorification.cpp
+++ b/Source/JavaScriptCore/bytecode/BytecodeGeneratorification.cpp
@@ -224,7 +224,7 @@
     {
         auto nextToEnterPoint = enterPoint().next();
         unsigned switchTableIndex = m_codeBlock->numberOfSwitchJumpTables();
-        VirtualRegister state = virtualRegisterForArgument(static_cast<int32_t>(JSGenerator::GeneratorArgument::State));
+        VirtualRegister state = virtualRegisterForArgumentIncludingThis(static_cast<int32_t>(JSGenerator::GeneratorArgument::State));
         auto& jumpTable = m_codeBlock->addSwitchJumpTable();
         jumpTable.min = 0;
         jumpTable.branchOffsets.resize(m_yields.size() + 1);
@@ -239,7 +239,7 @@
     }
 
     for (const YieldData& data : m_yields) {
-        VirtualRegister scope = virtualRegisterForArgument(static_cast<int32_t>(JSGenerator::GeneratorArgument::Frame));
+        VirtualRegister scope = virtualRegisterForArgumentIncludingThis(static_cast<int32_t>(JSGenerator::GeneratorArgument::Frame));
 
         auto instruction = m_instructions.at(data.point);
         // Emit save sequence.
diff --git a/Source/JavaScriptCore/bytecode/BytecodeIndex.cpp b/Source/JavaScriptCore/bytecode/BytecodeIndex.cpp
index 15d61ba..a2c1f41 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeIndex.cpp
+++ b/Source/JavaScriptCore/bytecode/BytecodeIndex.cpp
@@ -25,6 +25,7 @@
 
 #include "config.h"
 #include "BytecodeIndex.h"
+
 #include <wtf/PrintStream.h>
 
 namespace JSC {
@@ -32,6 +33,8 @@
 void BytecodeIndex::dump(WTF::PrintStream& out) const
 {
     out.print("bc#", offset());
+    if (checkpoint())
+        out.print("cp#", checkpoint());
 }
 
 } // namespace JSC
diff --git a/Source/JavaScriptCore/bytecode/BytecodeIndex.h b/Source/JavaScriptCore/bytecode/BytecodeIndex.h
index 5295c29..10fba44 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeIndex.h
+++ b/Source/JavaScriptCore/bytecode/BytecodeIndex.h
@@ -26,6 +26,7 @@
 #pragma once
 
 #include <wtf/HashTraits.h>
+#include <wtf/MathExtras.h>
 
 namespace WTF {
 class PrintStream;
@@ -37,24 +38,33 @@
 public:
     BytecodeIndex() = default;
     BytecodeIndex(WTF::HashTableDeletedValueType)
-        : m_offset(deletedValue().asBits())
+        : m_packedBits(deletedValue().asBits())
     {
     }
-    explicit BytecodeIndex(uint32_t bytecodeOffset)
-        : m_offset(bytecodeOffset)
-    { }
 
-    uint32_t offset() const { return m_offset; }
-    uint32_t asBits() const { return m_offset; }
+    explicit BytecodeIndex(uint32_t bytecodeOffset, uint8_t checkpoint = 0)
+        : m_packedBits(pack(bytecodeOffset, checkpoint))
+    {
+        ASSERT(*this);
+    }
 
-    unsigned hash() const { return WTF::intHash(m_offset); }
+    static constexpr uint32_t numberOfCheckpoints = 4;
+    static_assert(hasOneBitSet(numberOfCheckpoints), "numberOfCheckpoints should be a power of 2");
+    static constexpr uint32_t checkpointMask = numberOfCheckpoints - 1;
+    static constexpr uint32_t checkpointShift = WTF::getMSBSetConstexpr(numberOfCheckpoints);
+
+    uint32_t offset() const { return m_packedBits >> checkpointShift; }
+    uint8_t checkpoint() const { return m_packedBits & checkpointMask; }
+    uint32_t asBits() const { return m_packedBits; }
+
+    unsigned hash() const { return WTF::intHash(m_packedBits); }
     static BytecodeIndex deletedValue() { return fromBits(invalidOffset - 1); }
     bool isHashTableDeletedValue() const { return *this == deletedValue(); }
 
     static BytecodeIndex fromBits(uint32_t bits);
 
     // Comparison operators.
-    explicit operator bool() const { return m_offset != invalidOffset && m_offset != deletedValue().offset(); }
+    explicit operator bool() const { return m_packedBits != invalidOffset && m_packedBits != deletedValue().asBits(); }
     bool operator ==(const BytecodeIndex& other) const { return asBits() == other.asBits(); }
     bool operator !=(const BytecodeIndex& other) const { return !(*this == other); }
 
@@ -69,13 +79,22 @@
 private:
     static constexpr uint32_t invalidOffset = std::numeric_limits<uint32_t>::max();
 
-    uint32_t m_offset { invalidOffset };
+    static uint32_t pack(uint32_t bytecodeIndex, uint8_t checkpoint);
+
+    uint32_t m_packedBits { invalidOffset };
 };
 
+inline uint32_t BytecodeIndex::pack(uint32_t bytecodeIndex, uint8_t checkpoint)
+{
+    ASSERT(checkpoint < numberOfCheckpoints);
+    ASSERT((bytecodeIndex << checkpointShift) >> checkpointShift == bytecodeIndex);
+    return bytecodeIndex << checkpointShift | checkpoint;
+}
+
 inline BytecodeIndex BytecodeIndex::fromBits(uint32_t bits)
 {
     BytecodeIndex result;
-    result.m_offset = bits;
+    result.m_packedBits = bits;
     return result;
 }
 
diff --git a/Source/JavaScriptCore/bytecode/BytecodeList.rb b/Source/JavaScriptCore/bytecode/BytecodeList.rb
index 862ccd8..348347c 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeList.rb
+++ b/Source/JavaScriptCore/bytecode/BytecodeList.rb
@@ -59,7 +59,7 @@
     :WatchpointSet,
 
     :ValueProfile,
-    :ValueProfileAndOperandBuffer,
+    :ValueProfileAndVirtualRegisterBuffer,
     :UnaryArithProfile,
     :BinaryArithProfile,
     :ArrayProfile,
@@ -796,6 +796,13 @@
     metadata: {
         arrayProfile: ArrayProfile,
         profile: ValueProfile,
+    },
+    tmps: {
+        argCountIncludingThis: unsigned,
+    },
+    checkpoints: {
+        determiningArgCount: nil,
+        makeCall: nil,
     }
 
 op :tail_call_varargs,
@@ -810,6 +817,13 @@
     metadata: {
         arrayProfile: ArrayProfile,
         profile: ValueProfile,
+    },
+    tmps: {
+        argCountIncludingThis: unsigned
+    },
+    checkpoints: {
+        determiningArgCount: nil,
+        makeCall: nil,
     }
 
 op :tail_call_forward_arguments,
@@ -850,6 +864,13 @@
     metadata: {
         arrayProfile: ArrayProfile,
         profile: ValueProfile,
+    },
+    tmps: {
+        argCountIncludingThis: unsigned
+    },
+    checkpoints: {
+        determiningArgCount: nil,
+        makeCall: nil,
     }
 
 op :ret,
@@ -1003,7 +1024,7 @@
         thrownValue: VirtualRegister,
     },
     metadata: {
-        buffer: ValueProfileAndOperandBuffer.*,
+        buffer: ValueProfileAndVirtualRegisterBuffer.*,
     }
 
 op :throw,
@@ -1249,6 +1270,8 @@
 op :llint_native_construct_trampoline
 op :llint_internal_function_call_trampoline
 op :llint_internal_function_construct_trampoline
+op :checkpoint_osr_exit_from_inlined_call_trampoline
+op :checkpoint_osr_exit_trampoline
 op :handleUncaughtException
 op :op_call_return_location
 op :op_construct_return_location
diff --git a/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.cpp b/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.cpp
index 472a8d1..1e5c408 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.cpp
+++ b/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.cpp
@@ -201,4 +201,39 @@
     }
 }
 
+template<typename EnumType1, typename EnumType2>
+constexpr bool enumValuesEqualAsIntegral(EnumType1 v1, EnumType2 v2)
+{
+    using IntType1 = typename std::underlying_type<EnumType1>::type;
+    using IntType2 = typename std::underlying_type<EnumType2>::type;
+    if constexpr (sizeof(IntType1) > sizeof(IntType2))
+        return static_cast<IntType1>(v1) == static_cast<IntType1>(v2);
+    else
+        return static_cast<IntType2>(v1) == static_cast<IntType2>(v2);
+}
+
+Bitmap<maxNumCheckpointTmps> tmpLivenessForCheckpoint(const CodeBlock& codeBlock, BytecodeIndex bytecodeIndex)
+{
+    Bitmap<maxNumCheckpointTmps> result;
+    uint8_t checkpoint = bytecodeIndex.checkpoint();
+
+    if (!checkpoint)
+        return result;
+
+    switch (codeBlock.instructions().at(bytecodeIndex)->opcodeID()) {
+    case op_call_varargs:
+    case op_tail_call_varargs:
+    case op_construct_varargs: {
+        static_assert(enumValuesEqualAsIntegral(OpCallVarargs::makeCall, OpTailCallVarargs::makeCall) && enumValuesEqualAsIntegral(OpCallVarargs::argCountIncludingThis, OpTailCallVarargs::argCountIncludingThis));
+        static_assert(enumValuesEqualAsIntegral(OpCallVarargs::makeCall, OpConstructVarargs::makeCall) && enumValuesEqualAsIntegral(OpCallVarargs::argCountIncludingThis, OpConstructVarargs::argCountIncludingThis));
+        if (checkpoint == OpCallVarargs::makeCall)
+            result.set(OpCallVarargs::argCountIncludingThis);
+        return result;
+    }
+    default:
+        break;
+    }
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
 } // namespace JSC
diff --git a/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.h b/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.h
index 7379b3c..3e5edc0 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.h
+++ b/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.h
@@ -28,6 +28,7 @@
 #include "BytecodeBasicBlock.h"
 #include "BytecodeGraph.h"
 #include "CodeBlock.h"
+#include <wtf/Bitmap.h>
 #include <wtf/FastBitVector.h>
 
 namespace JSC {
@@ -97,6 +98,8 @@
     BytecodeGraph m_graph;
 };
 
+Bitmap<maxNumCheckpointTmps> tmpLivenessForCheckpoint(const CodeBlock&, BytecodeIndex);
+
 inline bool operandIsAlwaysLive(int operand);
 inline bool operandThatIsNotAlwaysLiveIsLive(const FastBitVector& out, int operand);
 inline bool operandIsLive(const FastBitVector& out, int operand);
diff --git a/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysisInlines.h b/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysisInlines.h
index 3d12e19..de6ca8d 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysisInlines.h
+++ b/Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysisInlines.h
@@ -33,22 +33,22 @@
 
 namespace JSC {
 
-inline bool operandIsAlwaysLive(int operand)
+inline bool virtualRegisterIsAlwaysLive(VirtualRegister reg)
 {
-    return !VirtualRegister(operand).isLocal();
+    return !reg.isLocal();
 }
 
-inline bool operandThatIsNotAlwaysLiveIsLive(const FastBitVector& out, int operand)
+inline bool virtualRegisterThatIsNotAlwaysLiveIsLive(const FastBitVector& out, VirtualRegister reg)
 {
-    unsigned local = VirtualRegister(operand).toLocal();
+    unsigned local = reg.toLocal();
     if (local >= out.numBits())
         return false;
     return out[local];
 }
 
-inline bool operandIsLive(const FastBitVector& out, int operand)
+inline bool virtualRegisterIsLive(const FastBitVector& out, VirtualRegister operand)
 {
-    return operandIsAlwaysLive(operand) || operandThatIsNotAlwaysLiveIsLive(out, operand);
+    return virtualRegisterIsAlwaysLive(operand) || virtualRegisterThatIsNotAlwaysLiveIsLive(out, operand);
 }
 
 inline bool isValidRegisterForLiveness(VirtualRegister operand)
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.cpp b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
index aaefad2..d204bd2 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
@@ -38,6 +38,7 @@
 #include "BytecodeStructs.h"
 #include "BytecodeUseDef.h"
 #include "CallLinkStatus.h"
+#include "CheckpointOSRExitSideState.h"
 #include "CodeBlockInlines.h"
 #include "CodeBlockSet.h"
 #include "DFGCapabilities.h"
@@ -409,7 +410,7 @@
             ConcurrentJSLocker locker(clonedSymbolTable->m_lock);
             clonedSymbolTable->prepareForTypeProfiling(locker);
         }
-        replaceConstant(unlinkedModuleProgramCodeBlock->moduleEnvironmentSymbolTableConstantRegisterOffset(), clonedSymbolTable);
+        replaceConstant(VirtualRegister(unlinkedModuleProgramCodeBlock->moduleEnvironmentSymbolTableConstantRegisterOffset()), clonedSymbolTable);
     }
 
     bool shouldUpdateFunctionHasExecutedCache = m_unlinkedCode->wasCompiledWithTypeProfilerOpcodes() || m_unlinkedCode->wasCompiledWithControlFlowProfilerOpcodes();
@@ -643,7 +644,7 @@
             if (bytecode.m_getPutInfo.resolveType() == LocalClosureVar) {
                 // Only do watching if the property we're putting to is not anonymous.
                 if (bytecode.m_var != UINT_MAX) {
-                    SymbolTable* symbolTable = jsCast<SymbolTable*>(getConstant(bytecode.m_symbolTableOrScopeDepth.symbolTable().offset()));
+                    SymbolTable* symbolTable = jsCast<SymbolTable*>(getConstant(bytecode.m_symbolTableOrScopeDepth.symbolTable()));
                     const Identifier& ident = identifier(bytecode.m_var);
                     ConcurrentJSLocker locker(symbolTable->m_lock);
                     auto iter = symbolTable->find(locker, ident.impl());
@@ -711,8 +712,7 @@
                 break;
             }
             case ProfileTypeBytecodeLocallyResolved: {
-                int symbolTableIndex = bytecode.m_symbolTableOrScopeDepth.symbolTable().offset();
-                SymbolTable* symbolTable = jsCast<SymbolTable*>(getConstant(symbolTableIndex));
+                SymbolTable* symbolTable = jsCast<SymbolTable*>(getConstant(bytecode.m_symbolTableOrScopeDepth.symbolTable()));
                 const Identifier& ident = identifier(bytecode.m_identifier);
                 ConcurrentJSLocker locker(symbolTable->m_lock);
                 // If our parent scope was created while profiling was disabled, it will not have prepared for profiling yet.
@@ -1088,6 +1088,15 @@
     
     return true;
 }
+
+BytecodeIndex CodeBlock::bytecodeIndexForExit(BytecodeIndex exitIndex) const
+{
+    if (exitIndex.checkpoint()) {
+        const auto& instruction = instructions().at(exitIndex);
+        exitIndex = instruction.next().index();
+    }
+    return exitIndex;
+}
 #endif // ENABLE(DFG_JIT)
 
 void CodeBlock::propagateTransitions(const ConcurrentJSLocker&, SlotVisitor& visitor)
@@ -1807,12 +1816,12 @@
     });
 
     for (int i = 0; i < numParameters(); ++i)
-        liveOperands.append(virtualRegisterForArgument(i));
+        liveOperands.append(virtualRegisterForArgumentIncludingThis(i));
 
-    auto profiles = makeUnique<ValueProfileAndOperandBuffer>(liveOperands.size());
+    auto profiles = makeUnique<ValueProfileAndVirtualRegisterBuffer>(liveOperands.size());
     RELEASE_ASSERT(profiles->m_size == liveOperands.size());
     for (unsigned i = 0; i < profiles->m_size; ++i)
-        profiles->m_buffer.get()[i].m_operand = liveOperands[i].offset();
+        profiles->m_buffer.get()[i].m_operand = liveOperands[i];
 
     createRareDataIfNecessary();
 
@@ -2694,7 +2703,7 @@
 
     if (auto* rareData = m_rareData.get()) {
         for (auto& profileBucket : rareData->m_catchProfiles) {
-            profileBucket->forEach([&] (ValueProfileAndOperand& profile) {
+            profileBucket->forEach([&] (ValueProfileAndVirtualRegister& profile) {
                 profile.computeUpdatedPrediction(locker);
             });
         }
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.h b/Source/JavaScriptCore/bytecode/CodeBlock.h
index 0f9bbea..68fd01f 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.h
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.h
@@ -163,6 +163,7 @@
     int numCalleeLocals() const { return m_numCalleeLocals; }
 
     int numVars() const { return m_numVars; }
+    int numTmps() const { return m_unlinkedCode->hasCheckpoints() * maxNumCheckpointTmps; }
 
     int* addressOfNumParameters() { return &m_numParameters; }
     static ptrdiff_t offsetOfNumParameters() { return OBJECT_OFFSETOF(CodeBlock, m_numParameters); }
@@ -231,20 +232,20 @@
     bool hasInstalledVMTrapBreakpoints() const;
     bool installVMTrapBreakpoints();
 
-    inline bool isKnownNotImmediate(int index)
+    inline bool isKnownNotImmediate(VirtualRegister reg)
     {
-        if (index == thisRegister().offset() && !isStrictMode())
+        if (reg == thisRegister() && !isStrictMode())
             return true;
 
-        if (isConstantRegisterIndex(index))
-            return getConstant(index).isCell();
+        if (reg.isConstant())
+            return getConstant(reg).isCell();
 
         return false;
     }
 
-    ALWAYS_INLINE bool isTemporaryRegisterIndex(int index)
+    ALWAYS_INLINE bool isTemporaryRegister(VirtualRegister reg)
     {
-        return index >= m_numVars;
+        return reg.offset() >= m_numVars;
     }
 
     HandlerInfo* handlerForBytecodeIndex(BytecodeIndex, RequiredHandler = RequiredHandler::AnyHandler);
@@ -583,10 +584,9 @@
     }
 
     const Vector<WriteBarrier<Unknown>>& constantRegisters() { return m_constantRegisters; }
-    WriteBarrier<Unknown>& constantRegister(int index) { return m_constantRegisters[index - FirstConstantRegisterIndex]; }
-    static ALWAYS_INLINE bool isConstantRegisterIndex(int index) { return index >= FirstConstantRegisterIndex; }
-    ALWAYS_INLINE JSValue getConstant(int index) const { return m_constantRegisters[index - FirstConstantRegisterIndex].get(); }
-    ALWAYS_INLINE SourceCodeRepresentation constantSourceCodeRepresentation(int index) const { return m_constantsSourceCodeRepresentation[index - FirstConstantRegisterIndex]; }
+    WriteBarrier<Unknown>& constantRegister(VirtualRegister reg) { return m_constantRegisters[reg.toConstantIndex()]; }
+    ALWAYS_INLINE JSValue getConstant(VirtualRegister reg) const { return m_constantRegisters[reg.toConstantIndex()].get(); }
+    ALWAYS_INLINE SourceCodeRepresentation constantSourceCodeRepresentation(VirtualRegister reg) const { return m_constantsSourceCodeRepresentation[reg.toConstantIndex()]; }
 
     FunctionExecutable* functionDecl(int index) { return m_functionDecls[index].get(); }
     int numberOfFunctionDecls() { return m_functionDecls.size(); }
@@ -776,6 +776,7 @@
 
     void setOptimizationThresholdBasedOnCompilationResult(CompilationResult);
     
+    BytecodeIndex bytecodeIndexForExit(BytecodeIndex) const;
     uint32_t osrExitCounter() const { return m_osrExitCounter; }
 
     void countOSRExit() { m_osrExitCounter++; }
@@ -876,7 +877,7 @@
         Vector<SimpleJumpTable> m_switchJumpTables;
         Vector<StringJumpTable> m_stringSwitchJumpTables;
 
-        Vector<std::unique_ptr<ValueProfileAndOperandBuffer>> m_catchProfiles;
+        Vector<std::unique_ptr<ValueProfileAndVirtualRegisterBuffer>> m_catchProfiles;
 
         DirectEvalCodeCache m_directEvalCodeCache;
     };
@@ -943,10 +944,10 @@
 
     void setConstantRegisters(const Vector<WriteBarrier<Unknown>>& constants, const Vector<SourceCodeRepresentation>& constantsSourceCodeRepresentation, ScriptExecutable* topLevelExecutable);
 
-    void replaceConstant(int index, JSValue value)
+    void replaceConstant(VirtualRegister reg, JSValue value)
     {
-        ASSERT(isConstantRegisterIndex(index) && static_cast<size_t>(index - FirstConstantRegisterIndex) < m_constantRegisters.size());
-        m_constantRegisters[index - FirstConstantRegisterIndex].set(*m_vm, this, value);
+        ASSERT(reg.isConstant() && static_cast<size_t>(reg.toConstantIndex()) < m_constantRegisters.size());
+        m_constantRegisters[reg.toConstantIndex()].set(*m_vm, this, value);
     }
 
     bool shouldVisitStrongly(const ConcurrentJSLocker&);
diff --git a/Source/JavaScriptCore/bytecode/CodeOrigin.h b/Source/JavaScriptCore/bytecode/CodeOrigin.h
index 2106813..f9829a2 100644
--- a/Source/JavaScriptCore/bytecode/CodeOrigin.h
+++ b/Source/JavaScriptCore/bytecode/CodeOrigin.h
@@ -160,7 +160,9 @@
     unsigned approximateHash(InlineCallFrame* terminal = nullptr) const;
 
     template <typename Function>
-    void walkUpInlineStack(const Function&);
+    void walkUpInlineStack(const Function&) const;
+
+    inline bool inlineStackContainsActiveCheckpoint() const;
     
     // Get the inline stack. This is slow, and is intended for debugging only.
     Vector<CodeOrigin> inlineStack() const;
diff --git a/Source/JavaScriptCore/bytecode/FullBytecodeLiveness.h b/Source/JavaScriptCore/bytecode/FullBytecodeLiveness.h
index e4a575b..47c657c 100644
--- a/Source/JavaScriptCore/bytecode/FullBytecodeLiveness.h
+++ b/Source/JavaScriptCore/bytecode/FullBytecodeLiveness.h
@@ -26,12 +26,16 @@
 #pragma once
 
 #include "BytecodeLivenessAnalysis.h"
+#include "Operands.h"
 #include <wtf/FastBitVector.h>
 
 namespace JSC {
 
 class BytecodeLivenessAnalysis;
+class CodeBlock;
 
+// Note: Full bytecode liveness does not track any information about the liveness of tmps.
+// If you want tmp liveness for a checkpoint, ask tmpLivenessForCheckpoint.
 class FullBytecodeLiveness {
     WTF_MAKE_FAST_ALLOCATED;
 public:
@@ -47,15 +51,15 @@
         RELEASE_ASSERT_NOT_REACHED();
     }
     
-    bool operandIsLive(int operand, BytecodeIndex bytecodeIndex, LivenessCalculationPoint point) const
+    bool virtualRegisterIsLive(VirtualRegister reg, BytecodeIndex bytecodeIndex, LivenessCalculationPoint point) const
     {
-        return operandIsAlwaysLive(operand) || operandThatIsNotAlwaysLiveIsLive(getLiveness(bytecodeIndex, point), operand);
+        return virtualRegisterIsAlwaysLive(reg) || virtualRegisterThatIsNotAlwaysLiveIsLive(getLiveness(bytecodeIndex, point), reg);
     }
     
 private:
     friend class BytecodeLivenessAnalysis;
     
-    // FIXME: Use FastBitVector's view mechansim to make them compact.
+    // FIXME: Use FastBitVector's view mechanism to make them compact.
     // https://bugs.webkit.org/show_bug.cgi?id=204427
     Vector<FastBitVector, 0, UnsafeVectorOverflow> m_beforeUseVector;
     Vector<FastBitVector, 0, UnsafeVectorOverflow> m_afterUseVector;
diff --git a/Source/JavaScriptCore/bytecode/InlineCallFrame.h b/Source/JavaScriptCore/bytecode/InlineCallFrame.h
index 97508bb..e29f384 100644
--- a/Source/JavaScriptCore/bytecode/InlineCallFrame.h
+++ b/Source/JavaScriptCore/bytecode/InlineCallFrame.h
@@ -179,7 +179,8 @@
     WriteBarrier<CodeBlock> baselineCodeBlock;
     CodeOrigin directCaller;
 
-    unsigned argumentCountIncludingThis { 0 }; // Do not include fixups.
+    unsigned argumentCountIncludingThis : 22; // Do not include fixups.
+    unsigned tmpOffset : 10;
     signed stackOffset : 28;
     unsigned kind : 3; // real type is Kind
     bool isClosureCall : 1; // If false then we know that callee/scope are constants and the DFG won't treat them as variables, i.e. they have to be recovered manually.
@@ -191,7 +192,9 @@
     // InlineCallFrame's fields. This constructor is here just to reduce confusion if
     // we forgot to initialize explicitly.
     InlineCallFrame()
-        : stackOffset(0)
+        : argumentCountIncludingThis(0)
+        , tmpOffset(0)
+        , stackOffset(0)
         , kind(Call)
         , isClosureCall(false)
     {
@@ -219,6 +222,12 @@
         RELEASE_ASSERT(static_cast<signed>(stackOffset) == offset);
     }
 
+    void setTmpOffset(unsigned offset)
+    {
+        tmpOffset = offset;
+        RELEASE_ASSERT(static_cast<unsigned>(tmpOffset) == offset);
+    }
+
     ptrdiff_t callerFrameOffset() const { return stackOffset * sizeof(Register) + CallFrame::callerFrameOffset(); }
     ptrdiff_t returnPCOffset() const { return stackOffset * sizeof(Register) + CallFrame::returnPCOffset(); }
 
@@ -247,9 +256,9 @@
     return baselineCodeBlock;
 }
 
-// This function is defined here and not in CodeOrigin because it needs access to the directCaller field in InlineCallFrame
+// These functions are defined here and not in CodeOrigin because they need access to the directCaller field in InlineCallFrame
 template <typename Function>
-inline void CodeOrigin::walkUpInlineStack(const Function& function)
+inline void CodeOrigin::walkUpInlineStack(const Function& function) const
 {
     CodeOrigin codeOrigin = *this;
     while (true) {
@@ -261,11 +270,38 @@
     }
 }
 
-ALWAYS_INLINE VirtualRegister remapOperand(InlineCallFrame* inlineCallFrame, VirtualRegister reg)
+inline bool CodeOrigin::inlineStackContainsActiveCheckpoint() const
+{
+    bool result = false;
+    walkUpInlineStack([&] (CodeOrigin origin) {
+        if (origin.bytecodeIndex().checkpoint())
+            result = true;
+    });
+    return result;
+}
+
+ALWAYS_INLINE Operand remapOperand(InlineCallFrame* inlineCallFrame, Operand operand)
 {
     if (inlineCallFrame)
-        return VirtualRegister(reg.offset() + inlineCallFrame->stackOffset);
-    return reg;
+        return operand.isTmp() ? Operand::tmp(operand.value() + inlineCallFrame->tmpOffset) : operand.virtualRegister() + inlineCallFrame->stackOffset;
+    return operand;
+}
+
+ALWAYS_INLINE Operand remapOperand(InlineCallFrame* inlineCallFrame, VirtualRegister reg)
+{
+    return remapOperand(inlineCallFrame, Operand(reg));
+}
+
+ALWAYS_INLINE Operand unmapOperand(InlineCallFrame* inlineCallFrame, Operand operand)
+{
+    if (inlineCallFrame)
+        return operand.isTmp() ? Operand::tmp(operand.value() - inlineCallFrame->tmpOffset) : Operand(operand.virtualRegister() - inlineCallFrame->stackOffset);
+    return operand;
+}
+
+ALWAYS_INLINE Operand unmapOperand(InlineCallFrame* inlineCallFrame, VirtualRegister reg)
+{
+    return unmapOperand(inlineCallFrame, Operand(reg));
 }
 
 } // namespace JSC
diff --git a/Source/JavaScriptCore/bytecode/LazyOperandValueProfile.h b/Source/JavaScriptCore/bytecode/LazyOperandValueProfile.h
index bfb94b3..81f5a0c 100644
--- a/Source/JavaScriptCore/bytecode/LazyOperandValueProfile.h
+++ b/Source/JavaScriptCore/bytecode/LazyOperandValueProfile.h
@@ -26,6 +26,7 @@
 #pragma once
 
 #include "ConcurrentJSLock.h"
+#include "Operands.h"
 #include "ValueProfile.h"
 #include "VirtualRegister.h"
 #include <wtf/HashMap.h>
@@ -49,7 +50,7 @@
     {
     }
     
-    LazyOperandValueProfileKey(BytecodeIndex bytecodeIndex, VirtualRegister operand)
+    LazyOperandValueProfileKey(BytecodeIndex bytecodeIndex, Operand operand)
         : m_bytecodeIndex(bytecodeIndex)
         , m_operand(operand)
     {
@@ -69,7 +70,7 @@
     
     unsigned hash() const
     {
-        return m_bytecodeIndex.hash() + m_operand.offset();
+        return m_bytecodeIndex.hash() + m_operand.value() + static_cast<unsigned>(m_operand.kind());
     }
     
     BytecodeIndex bytecodeIndex() const
@@ -78,7 +79,7 @@
         return m_bytecodeIndex;
     }
 
-    VirtualRegister operand() const
+    Operand operand() const
     {
         ASSERT(!!*this);
         return m_operand;
@@ -90,7 +91,7 @@
     }
 private: 
     BytecodeIndex m_bytecodeIndex;
-    VirtualRegister m_operand;
+    Operand m_operand;
 };
 
 struct LazyOperandValueProfileKeyHash {
diff --git a/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.cpp b/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.cpp
index 2b69f36..701c330 100644
--- a/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.cpp
+++ b/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.cpp
@@ -42,7 +42,7 @@
     result.m_kind = LazyOperand;
     result.u.lazyOperand.codeBlock = codeBlock;
     result.u.lazyOperand.bytecodeOffset = key.bytecodeIndex();
-    result.u.lazyOperand.operand = key.operand().offset();
+    result.u.lazyOperand.operand = key.operand();
     return result;
 }
 
@@ -57,7 +57,7 @@
         return;
         
     case LazyOperand: {
-        LazyOperandValueProfileKey key(u.lazyOperand.bytecodeOffset, VirtualRegister(u.lazyOperand.operand));
+        LazyOperandValueProfileKey key(u.lazyOperand.bytecodeOffset, u.lazyOperand.operand);
         
         ConcurrentJSLocker locker(u.lazyOperand.codeBlock->m_lock);
         LazyOperandValueProfile* profile =
@@ -91,7 +91,7 @@
         return;
 
     case LazyOperand: {
-        LazyOperandValueProfileKey key(u.lazyOperand.bytecodeOffset, VirtualRegister(u.lazyOperand.operand));
+        LazyOperandValueProfileKey key(u.lazyOperand.bytecodeOffset, u.lazyOperand.operand);
 
         ConcurrentJSLocker locker(u.lazyOperand.codeBlock->m_lock);
         LazyOperandValueProfile* profile =
diff --git a/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.h b/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.h
index 57b4b17..e8b65ea 100644
--- a/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.h
+++ b/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.h
@@ -107,7 +107,7 @@
         struct {
             CodeBlock* codeBlock;
             BytecodeIndex bytecodeOffset;
-            int operand;
+            Operand operand;
         } lazyOperand;
     } u;
 };
diff --git a/Source/JavaScriptCore/bytecode/Operands.h b/Source/JavaScriptCore/bytecode/Operands.h
index f29ff9f..aceff57 100644
--- a/Source/JavaScriptCore/bytecode/Operands.h
+++ b/Source/JavaScriptCore/bytecode/Operands.h
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011-2018 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2019 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -35,111 +35,227 @@
 
 template<typename T> struct OperandValueTraits;
 
-enum OperandKind { ArgumentOperand, LocalOperand };
+constexpr unsigned maxNumCheckpointTmps = 4;
+
+// An OperandKind::Tmp is an operand that exists for exiting to a checkpoint but does not exist between bytecodes.
+enum class OperandKind { Argument, Local, Tmp };
+
+class Operand {
+public:
+    Operand() = default;
+    Operand(const Operand&) = default;
+
+    Operand(VirtualRegister operand)
+        : m_kind(operand.isLocal() ? OperandKind::Local : OperandKind::Argument)
+        , m_operand(operand.offset())
+    { }
+
+    Operand(OperandKind kind, int operand)
+        : m_kind(kind)
+        , m_operand(operand)
+    { 
+        ASSERT(kind == OperandKind::Tmp || VirtualRegister(operand).isLocal() == (kind == OperandKind::Local));
+    }
+    static Operand tmp(uint32_t index) { return Operand(OperandKind::Tmp, index); }
+
+    OperandKind kind() const { return m_kind; }
+    int value() const { return m_operand; }
+    VirtualRegister virtualRegister() const
+    {
+        ASSERT(m_kind != OperandKind::Tmp);
+        return VirtualRegister(m_operand);
+    }
+    uint64_t asBits() const { return bitwise_cast<uint64_t>(*this); }
+    static Operand fromBits(uint64_t value);
+
+    bool isTmp() const { return kind() == OperandKind::Tmp; }
+    bool isArgument() const { return kind() == OperandKind::Argument; }
+    bool isLocal() const { return kind() == OperandKind::Local && virtualRegister().isLocal(); }
+    bool isHeader() const { return kind() != OperandKind::Tmp && virtualRegister().isHeader(); }
+    bool isConstant() const { return kind() != OperandKind::Tmp && virtualRegister().isConstant(); }
+
+    int toArgument() const { ASSERT(isArgument()); return virtualRegister().toArgument(); }
+    int toLocal() const { ASSERT(isLocal()); return virtualRegister().toLocal(); }
+
+    inline bool isValid() const;
+
+    inline bool operator==(const Operand&) const;
+
+    void dump(PrintStream&) const;
+
+private:
+    OperandKind m_kind { OperandKind::Argument };
+    int m_operand { VirtualRegister::invalidVirtualRegister };
+};
+
+ALWAYS_INLINE bool Operand::operator==(const Operand& other) const
+{
+    if (kind() != other.kind())
+        return false;
+    if (isTmp())
+        return value() == other.value();
+    return virtualRegister() == other.virtualRegister();
+}
+
+inline bool Operand::isValid() const
+{
+    if (isTmp())
+        return value() >= 0;
+    return virtualRegister().isValid();
+}
+
+inline Operand Operand::fromBits(uint64_t value)
+{
+    Operand result = bitwise_cast<Operand>(value);
+    ASSERT(result.isValid());
+    return result;
+}
+
+static_assert(sizeof(Operand) == sizeof(uint64_t), "Operand::asBits() relies on this.");
 
 enum OperandsLikeTag { OperandsLike };
 
 template<typename T>
 class Operands {
 public:
-    Operands()
-        : m_numArguments(0) { }
-    
-    explicit Operands(size_t numArguments, size_t numLocals)
+    using Storage = std::conditional_t<std::is_same_v<T, bool>, FastBitVector, Vector<T, 0, UnsafeVectorOverflow>>;
+    using RefType = std::conditional_t<std::is_same_v<T, bool>, FastBitReference, T&>;
+    using ConstRefType = std::conditional_t<std::is_same_v<T, bool>, bool, const T&>;
+
+    Operands() = default;
+
+    explicit Operands(size_t numArguments, size_t numLocals, size_t numTmps)
         : m_numArguments(numArguments)
+        , m_numLocals(numLocals)
     {
-        if (WTF::VectorTraits<T>::needsInitialization) {
-            m_values.resize(numArguments + numLocals);
-        } else {
-            m_values.fill(T(), numArguments + numLocals);
-        }
+        size_t size = numArguments + numLocals + numTmps;
+        m_values.grow(size);
+        if (!WTF::VectorTraits<T>::needsInitialization)
+            m_values.fill(T());
     }
 
-    explicit Operands(size_t numArguments, size_t numLocals, const T& initialValue)
+    explicit Operands(size_t numArguments, size_t numLocals, size_t numTmps, const T& initialValue)
         : m_numArguments(numArguments)
+        , m_numLocals(numLocals)
     {
-        m_values.fill(initialValue, numArguments + numLocals);
+        m_values.grow(numArguments + numLocals + numTmps);
+        m_values.fill(initialValue);
     }
     
     template<typename U>
-    explicit Operands(OperandsLikeTag, const Operands<U>& other)
+    explicit Operands(OperandsLikeTag, const Operands<U>& other, const T& initialValue = T())
         : m_numArguments(other.numberOfArguments())
+        , m_numLocals(other.numberOfLocals())
     {
-        m_values.fill(T(), other.numberOfArguments() + other.numberOfLocals());
+        m_values.grow(other.size());
+        m_values.fill(initialValue);
     }
-    
+
     size_t numberOfArguments() const { return m_numArguments; }
-    size_t numberOfLocals() const { return m_values.size() - m_numArguments; }
-    
+    size_t numberOfLocals() const { return m_numLocals; }
+    size_t numberOfTmps() const { return m_values.size() - numberOfArguments() - numberOfLocals(); }
+
+    size_t tmpIndex(size_t idx) const
+    {
+        ASSERT(idx < numberOfTmps());
+        return idx + numberOfArguments() + numberOfLocals();
+    }
     size_t argumentIndex(size_t idx) const
     {
-        ASSERT(idx < m_numArguments);
+        ASSERT(idx < numberOfArguments());
         return idx;
     }
     
     size_t localIndex(size_t idx) const
     {
-        return m_numArguments + idx;
+        ASSERT(idx < numberOfLocals());
+        return numberOfArguments() + idx;
     }
+
+    RefType tmp(size_t idx) { return m_values[tmpIndex(idx)]; }
+    ConstRefType tmp(size_t idx) const { return m_values[tmpIndex(idx)]; }
     
-    T& argument(size_t idx)
-    {
-        return m_values[argumentIndex(idx)];
-    }
-    const T& argument(size_t idx) const
-    {
-        return m_values[argumentIndex(idx)];
-    }
+    RefType argument(size_t idx) { return m_values[argumentIndex(idx)]; }
+    ConstRefType argument(size_t idx) const { return m_values[argumentIndex(idx)]; }
     
-    T& local(size_t idx) { return m_values[localIndex(idx)]; }
-    const T& local(size_t idx) const { return m_values[localIndex(idx)]; }
+    RefType local(size_t idx) { return m_values[localIndex(idx)]; }
+    ConstRefType local(size_t idx) const { return m_values[localIndex(idx)]; }
     
     template<OperandKind operandKind>
     size_t sizeFor() const
     {
-        if (operandKind == ArgumentOperand)
+        switch (operandKind) {
+        case OperandKind::Tmp:
+            return numberOfTmps();
+        case OperandKind::Argument:
             return numberOfArguments();
-        return numberOfLocals();
+        case OperandKind::Local:
+            return numberOfLocals();
+        }
+        RELEASE_ASSERT_NOT_REACHED();
+        return 0;
     }
     template<OperandKind operandKind>
-    T& atFor(size_t idx)
+    RefType atFor(size_t idx)
     {
-        if (operandKind == ArgumentOperand)
+        switch (operandKind) {
+        case OperandKind::Tmp:
+            return tmp(idx);
+        case OperandKind::Argument:
             return argument(idx);
-        return local(idx);
+        case OperandKind::Local:
+            return local(idx);
+        }
+        RELEASE_ASSERT_NOT_REACHED();
+        return tmp(0);
     }
     template<OperandKind operandKind>
-    const T& atFor(size_t idx) const
+    ConstRefType atFor(size_t idx) const
     {
-        if (operandKind == ArgumentOperand)
+        switch (operandKind) {
+        case OperandKind::Tmp:
+            return tmp(idx);
+        case OperandKind::Argument:
             return argument(idx);
-        return local(idx);
+        case OperandKind::Local:
+            return local(idx);
+        }
+        RELEASE_ASSERT_NOT_REACHED();
+        return tmp(0);
     }
-    
-    void ensureLocals(size_t size)
+
+    void ensureLocals(size_t size, const T& ensuredValue = T())
     {
-        size_t oldSize = m_values.size();
-        size_t newSize = m_numArguments + size;
-        if (newSize <= oldSize)
+        if (size <= numberOfLocals())
             return;
 
+        size_t newSize = numberOfArguments() + numberOfTmps() + size;
+        size_t oldNumLocals = numberOfLocals();
+        size_t oldNumTmps = numberOfTmps();
         m_values.grow(newSize);
-        if (!WTF::VectorTraits<T>::needsInitialization) {
-            for (size_t i = oldSize; i < m_values.size(); ++i)
-                m_values[i] = T();
+        for (size_t i = 0; i < oldNumTmps; ++i)
+            m_values[newSize - 1 - i] = m_values[tmpIndex(oldNumTmps - 1 - i)];
+
+        m_numLocals = size;
+        if (ensuredValue != T() || !WTF::VectorTraits<T>::needsInitialization) {
+            for (size_t i = 0; i < size - oldNumLocals; ++i)
+                m_values[localIndex(oldNumLocals + i)] = ensuredValue;
         }
     }
 
-    void ensureLocals(size_t size, const T& ensuredValue)
+    void ensureTmps(size_t size, const T& ensuredValue = T())
     {
-        size_t oldSize = m_values.size();
-        size_t newSize = m_numArguments + size;
-        if (newSize <= oldSize)
+        if (size <= numberOfTmps())
             return;
 
+        size_t oldSize = m_values.size();
+        size_t newSize = numberOfArguments() + numberOfLocals() + size;
         m_values.grow(newSize);
-        for (size_t i = oldSize; i < m_values.size(); ++i)
-            m_values[i] = ensuredValue;
+
+        if (ensuredValue != T() || !WTF::VectorTraits<T>::needsInitialization) {
+            for (size_t i = oldSize; i < newSize; ++i)
+                m_values[i] = ensuredValue;
+        }
     }
     
     void setLocal(size_t idx, const T& value)
@@ -164,84 +280,76 @@
         ASSERT(idx >= numberOfLocals() || local(idx) == T());
         setLocal(idx, value);
     }
-    
-    size_t operandIndex(int operand) const
+
+    RefType getForOperandIndex(size_t index) { return m_values[index]; }
+    ConstRefType getForOperandIndex(size_t index) const { return const_cast<Operands*>(this)->getForOperandIndex(index); }
+
+    size_t operandIndex(VirtualRegister operand) const
     {
-        if (operandIsArgument(operand))
-            return argumentIndex(VirtualRegister(operand).toArgument());
-        return localIndex(VirtualRegister(operand).toLocal());
+        if (operand.isArgument())
+            return argumentIndex(operand.toArgument());
+        return localIndex(operand.toLocal());
     }
     
-    size_t operandIndex(VirtualRegister virtualRegister) const
+    size_t operandIndex(Operand op) const
     {
-        return operandIndex(virtualRegister.offset());
+        if (!op.isTmp())
+            return operandIndex(op.virtualRegister());
+        return tmpIndex(op.value());
     }
     
-    T& operand(int operand)
+    RefType operand(VirtualRegister operand)
     {
-        if (operandIsArgument(operand))
-            return argument(VirtualRegister(operand).toArgument());
-        return local(VirtualRegister(operand).toLocal());
+        if (operand.isArgument())
+            return argument(operand.toArgument());
+        return local(operand.toLocal());
     }
 
-    T& operand(VirtualRegister virtualRegister)
+    RefType operand(Operand op)
     {
-        return operand(virtualRegister.offset());
+        if (!op.isTmp())
+            return operand(op.virtualRegister());
+        return tmp(op.value());
     }
 
-    const T& operand(int operand) const { return const_cast<const T&>(const_cast<Operands*>(this)->operand(operand)); }
-    const T& operand(VirtualRegister operand) const { return const_cast<const T&>(const_cast<Operands*>(this)->operand(operand)); }
+    ConstRefType operand(VirtualRegister operand) const { return const_cast<Operands*>(this)->operand(operand); }
+    ConstRefType operand(Operand operand) const { return const_cast<Operands*>(this)->operand(operand); }
     
-    bool hasOperand(int operand) const
+    bool hasOperand(VirtualRegister operand) const
     {
-        if (operandIsArgument(operand))
+        if (operand.isArgument())
             return true;
-        return static_cast<size_t>(VirtualRegister(operand).toLocal()) < numberOfLocals();
+        return static_cast<size_t>(operand.toLocal()) < numberOfLocals();
     }
-    bool hasOperand(VirtualRegister reg) const
+    bool hasOperand(Operand op) const
     {
-        return hasOperand(reg.offset());
+        if (op.isTmp()) {
+            ASSERT(op.value() >= 0);
+            return static_cast<size_t>(op.value()) < numberOfTmps();
+        }
+        return hasOperand(op.virtualRegister());
     }
     
-    void setOperand(int operand, const T& value)
+    void setOperand(Operand operand, const T& value)
     {
         this->operand(operand) = value;
     }
-    
-    void setOperand(VirtualRegister virtualRegister, const T& value)
-    {
-        setOperand(virtualRegister.offset(), value);
-    }
 
     size_t size() const { return m_values.size(); }
-    const T& at(size_t index) const { return m_values[index]; }
-    T& at(size_t index) { return m_values[index]; }
-    const T& operator[](size_t index) const { return at(index); }
-    T& operator[](size_t index) { return at(index); }
+    ConstRefType at(size_t index) const { return m_values[index]; }
+    RefType at(size_t index) { return m_values[index]; }
+    ConstRefType operator[](size_t index) const { return at(index); }
+    RefType operator[](size_t index) { return at(index); }
 
-    bool isArgument(size_t index) const { return index < m_numArguments; }
-    bool isLocal(size_t index) const { return !isArgument(index); }
-    int operandForIndex(size_t index) const
+    Operand operandForIndex(size_t index) const
     {
         if (index < numberOfArguments())
-            return virtualRegisterForArgument(index).offset();
-        return virtualRegisterForLocal(index - numberOfArguments()).offset();
+            return virtualRegisterForArgumentIncludingThis(index);
+        else if (index < numberOfLocals() + numberOfArguments())
+            return virtualRegisterForLocal(index - numberOfArguments());
+        return Operand::tmp(index - (numberOfLocals() + numberOfArguments()));
     }
-    VirtualRegister virtualRegisterForIndex(size_t index) const
-    {
-        return VirtualRegister(operandForIndex(index));
-    }
-    
-    void setOperandFirstTime(int operand, const T& value)
-    {
-        if (operandIsArgument(operand)) {
-            setArgumentFirstTime(VirtualRegister(operand).toArgument(), value);
-            return;
-        }
-        
-        setLocalFirstTime(VirtualRegister(operand).toLocal(), value);
-    }
-    
+
     void fill(T value)
     {
         for (size_t i = 0; i < m_values.size(); ++i)
@@ -257,6 +365,7 @@
     {
         ASSERT(numberOfArguments() == other.numberOfArguments());
         ASSERT(numberOfLocals() == other.numberOfLocals());
+        ASSERT(numberOfTmps() == other.numberOfTmps());
         
         return m_values == other.m_values;
     }
@@ -265,9 +374,10 @@
     void dump(PrintStream& out) const;
     
 private:
-    // The first m_numArguments of m_values are arguments, the rest are locals.
-    Vector<T, 0, UnsafeVectorOverflow> m_values;
-    unsigned m_numArguments;
+    // The first m_numArguments of m_values are arguments, the next m_numLocals are locals, and the rest are tmps.
+    Storage m_values;
+    unsigned m_numArguments { 0 };
+    unsigned m_numLocals { 0 };
 };
 
 } // namespace JSC
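The hunks above change `Operands<T>` from a two-region vector (arguments, then locals) to a three-region one that also carries the new checkpoint tmps. The indexing scheme can be sketched standalone like this; the class and member names here are illustrative, not the real `JSC::Operands<T>`:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the flat layout used by Operands<T> after this change:
// [ arguments... | locals... | tmps... ], all in one backing vector.
template<typename T>
class FlatOperands {
public:
    FlatOperands(size_t numArguments, size_t numLocals, size_t numTmps)
        : m_numArguments(numArguments)
        , m_numLocals(numLocals)
        , m_values(numArguments + numLocals + numTmps)
    { }

    // Region offsets: arguments first, locals next, tmps at the tail.
    size_t argumentIndex(size_t i) const { assert(i < m_numArguments); return i; }
    size_t localIndex(size_t i) const { assert(i < m_numLocals); return m_numArguments + i; }
    size_t tmpIndex(size_t i) const
    {
        assert(m_numArguments + m_numLocals + i < m_values.size());
        return m_numArguments + m_numLocals + i;
    }

    T& argument(size_t i) { return m_values[argumentIndex(i)]; }
    T& local(size_t i) { return m_values[localIndex(i)]; }
    T& tmp(size_t i) { return m_values[tmpIndex(i)]; }

private:
    size_t m_numArguments;
    size_t m_numLocals;
    std::vector<T> m_values;
};
```

This also motivates the extra work in `ensureLocals` above: because tmps live after locals, growing the local region means sliding the existing tmps toward the new tail before initializing the fresh local slots.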
diff --git a/Source/JavaScriptCore/bytecode/OperandsInlines.h b/Source/JavaScriptCore/bytecode/OperandsInlines.h
index 65fedda..0b1cc1f 100644
--- a/Source/JavaScriptCore/bytecode/OperandsInlines.h
+++ b/Source/JavaScriptCore/bytecode/OperandsInlines.h
@@ -30,6 +30,14 @@
 
 namespace JSC {
 
+inline void Operand::dump(PrintStream& out) const
+{
+    if (isTmp())
+        out.print("tmp", value());
+    else
+        out.print(virtualRegister());
+}
+
 template<typename T>
 void Operands<T>::dumpInContext(PrintStream& out, DumpContext* context) const
 {
@@ -44,6 +52,11 @@
             continue;
         out.print(comma, "loc", localIndex, ":", inContext(local(localIndex), context));
     }
+    for (size_t tmpIndex = 0; tmpIndex < numberOfTmps(); ++tmpIndex) {
+        if (!tmp(tmpIndex))
+            continue;
+        out.print(comma, "tmp", tmpIndex, ":", inContext(tmp(tmpIndex), context));
+    }
 }
 
 template<typename T>
@@ -60,6 +73,11 @@
             continue;
         out.print(comma, "loc", localIndex, ":", local(localIndex));
     }
+    for (size_t tmpIndex = 0; tmpIndex < numberOfTmps(); ++tmpIndex) {
+        if (!tmp(tmpIndex))
+            continue;
+        out.print(comma, "tmp", tmpIndex, ":", tmp(tmpIndex));
+    }
 }
 
 } // namespace JSC
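The dump loops above only rely on `Operand` being a tagged value: either a tmp index or a `VirtualRegister`. The real `Operand` is defined elsewhere in this patch (not in these hunks); the following is a hedged sketch of the idea, with an assumed one-bit tag encoding standing in for the real `asBits`/`fromBits`:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative only: the real JSC::Operand's bit layout is defined in
// Operand.h, not shown in this diff. Here the low bit is the kind tag.
class OperandSketch {
public:
    enum class Kind : uint64_t { VirtualRegister = 0, Tmp = 1 };

    static OperandSketch tmp(uint64_t index) { return OperandSketch(Kind::Tmp, index); }

    bool isTmp() const { return m_kind == Kind::Tmp; }
    uint64_t value() const { return m_value; }

    // Round-trippable bit encoding, so an Operand can be stored in a
    // plain integer payload and recovered later.
    uint64_t asBits() const { return (m_value << 1) | static_cast<uint64_t>(m_kind); }
    static OperandSketch fromBits(uint64_t bits)
    {
        return OperandSketch(static_cast<Kind>(bits & 1), bits >> 1);
    }

private:
    OperandSketch(Kind kind, uint64_t value) : m_kind(kind), m_value(value) { }
    Kind m_kind;
    uint64_t m_value;
};
```

The tag is what lets `Operand::dump` above choose between printing `tmp<N>` and delegating to `VirtualRegister`'s own dumper.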
diff --git a/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.cpp b/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.cpp
index 20993e2..ddb4121 100644
--- a/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.cpp
+++ b/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.cpp
@@ -72,6 +72,7 @@
     , m_codeType(static_cast<unsigned>(codeType))
     , m_didOptimize(static_cast<unsigned>(MixedTriState))
     , m_age(0)
+    , m_hasCheckpoints(false)
     , m_parseMode(info.parseMode())
     , m_codeGenerationMode(codeGenerationMode)
     , m_metadata(UnlinkedMetadataTable::create())
diff --git a/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.h b/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.h
index 3d9fb3c..3192170 100644
--- a/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.h
+++ b/Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.h
@@ -141,6 +141,9 @@
     bool hasExpressionInfo() { return m_expressionInfo.size(); }
     const Vector<ExpressionRangeInfo>& expressionInfo() { return m_expressionInfo; }
 
+    bool hasCheckpoints() const { return m_hasCheckpoints; }
+    void setHasCheckpoints() { m_hasCheckpoints = true; }
+
     // Special registers
     void setThisRegister(VirtualRegister thisRegister) { m_thisRegister = thisRegister; }
     void setScopeRegister(VirtualRegister scopeRegister) { m_scopeRegister = scopeRegister; }
@@ -198,9 +201,8 @@
     }
 
     const Vector<WriteBarrier<Unknown>>& constantRegisters() { return m_constantRegisters; }
-    const WriteBarrier<Unknown>& constantRegister(int index) const { return m_constantRegisters[index - FirstConstantRegisterIndex]; }
-    ALWAYS_INLINE bool isConstantRegisterIndex(int index) const { return index >= FirstConstantRegisterIndex; }
-    ALWAYS_INLINE JSValue getConstant(int index) const { return m_constantRegisters[index - FirstConstantRegisterIndex].get(); }
+    const WriteBarrier<Unknown>& constantRegister(VirtualRegister reg) const { return m_constantRegisters[reg.toConstantIndex()]; }
+    ALWAYS_INLINE JSValue getConstant(VirtualRegister reg) const { return m_constantRegisters[reg.toConstantIndex()].get(); }
     const Vector<SourceCodeRepresentation>& constantsSourceCodeRepresentation() { return m_constantsSourceCodeRepresentation; }
 
     unsigned numberOfConstantIdentifierSets() const { return m_rareData ? m_rareData->m_constantIdentifierSets.size() : 0; }
@@ -426,6 +428,7 @@
     unsigned m_didOptimize : 2;
     unsigned m_age : 3;
     static_assert(((1U << 3) - 1) >= maxAge);
+    bool m_hasCheckpoints : 1;
 public:
     ConcurrentJSLock m_lock;
 private:
diff --git a/Source/JavaScriptCore/bytecode/ValueProfile.h b/Source/JavaScriptCore/bytecode/ValueProfile.h
index dfa0776..90f93dd 100644
--- a/Source/JavaScriptCore/bytecode/ValueProfile.h
+++ b/Source/JavaScriptCore/bytecode/ValueProfile.h
@@ -176,28 +176,28 @@
     return rareCaseProfile->m_bytecodeIndex;
 }
 
-struct ValueProfileAndOperand : public ValueProfile {
-    int m_operand;
+struct ValueProfileAndVirtualRegister : public ValueProfile {
+    VirtualRegister m_operand;
 };
 
-struct ValueProfileAndOperandBuffer {
+struct ValueProfileAndVirtualRegisterBuffer {
     WTF_MAKE_STRUCT_FAST_ALLOCATED;
 
-    ValueProfileAndOperandBuffer(unsigned size)
+    ValueProfileAndVirtualRegisterBuffer(unsigned size)
         : m_size(size)
     {
         // FIXME: ValueProfile has more stuff than we need. We could optimize these value profiles
         // to be more space efficient.
         // https://bugs.webkit.org/show_bug.cgi?id=175413
-        m_buffer = MallocPtr<ValueProfileAndOperand, VMMalloc>::malloc(m_size * sizeof(ValueProfileAndOperand));
+        m_buffer = MallocPtr<ValueProfileAndVirtualRegister, VMMalloc>::malloc(m_size * sizeof(ValueProfileAndVirtualRegister));
         for (unsigned i = 0; i < m_size; ++i)
-            new (&m_buffer.get()[i]) ValueProfileAndOperand();
+            new (&m_buffer.get()[i]) ValueProfileAndVirtualRegister();
     }
 
-    ~ValueProfileAndOperandBuffer()
+    ~ValueProfileAndVirtualRegisterBuffer()
     {
         for (unsigned i = 0; i < m_size; ++i)
-            m_buffer.get()[i].~ValueProfileAndOperand();
+            m_buffer.get()[i].~ValueProfileAndVirtualRegister();
     }
 
     template <typename Function>
@@ -208,7 +208,7 @@
     }
 
     unsigned m_size;
-    MallocPtr<ValueProfileAndOperand, VMMalloc> m_buffer;
+    MallocPtr<ValueProfileAndVirtualRegister, VMMalloc> m_buffer;
 };
 
 } // namespace JSC
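The renamed `ValueProfileAndVirtualRegisterBuffer` above keeps the existing ownership pattern: raw storage allocated once, elements constructed with placement new, and destroyed explicitly before the storage is released. A self-contained sketch of that pattern, using plain `malloc`/`free` rather than JSC's `MallocPtr`:

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Sketch of the placement-new / explicit-destructor pattern used by the
// buffer above. Names are illustrative.
template<typename T>
class RawBuffer {
public:
    explicit RawBuffer(unsigned size)
        : m_size(size)
        , m_buffer(static_cast<T*>(std::malloc(size * sizeof(T))))
    {
        for (unsigned i = 0; i < m_size; ++i)
            new (&m_buffer[i]) T(); // construct each element in place
    }

    ~RawBuffer()
    {
        for (unsigned i = 0; i < m_size; ++i)
            m_buffer[i].~T(); // destroy in place before freeing raw storage
        std::free(m_buffer);
    }

    // Non-copyable: copying would double-free the raw storage.
    RawBuffer(const RawBuffer&) = delete;
    RawBuffer& operator=(const RawBuffer&) = delete;

    T& operator[](unsigned i) { assert(i < m_size); return m_buffer[i]; }
    unsigned size() const { return m_size; }

private:
    unsigned m_size;
    T* m_buffer;
};
```

As the FIXME in the hunk notes, the point of hand-rolling this rather than using a vector is control over per-element footprint; the rename itself only reflects that the stored operand is now a `VirtualRegister`.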
diff --git a/Source/JavaScriptCore/bytecode/ValueRecovery.cpp b/Source/JavaScriptCore/bytecode/ValueRecovery.cpp
index c5ca4de..b13947f 100644
--- a/Source/JavaScriptCore/bytecode/ValueRecovery.cpp
+++ b/Source/JavaScriptCore/bytecode/ValueRecovery.cpp
@@ -35,22 +35,22 @@
 {
     switch (technique()) {
     case DisplacedInJSStack:
-        return callFrame->r(virtualRegister().offset()).jsValue();
+        return callFrame->r(virtualRegister()).jsValue();
     case Int32DisplacedInJSStack:
-        return jsNumber(callFrame->r(virtualRegister().offset()).unboxedInt32());
+        return jsNumber(callFrame->r(virtualRegister()).unboxedInt32());
     case Int52DisplacedInJSStack:
-        return jsNumber(callFrame->r(virtualRegister().offset()).unboxedInt52());
+        return jsNumber(callFrame->r(virtualRegister()).unboxedInt52());
     case StrictInt52DisplacedInJSStack:
-        return jsNumber(callFrame->r(virtualRegister().offset()).unboxedStrictInt52());
+        return jsNumber(callFrame->r(virtualRegister()).unboxedStrictInt52());
     case DoubleDisplacedInJSStack:
-        return jsNumber(purifyNaN(callFrame->r(virtualRegister().offset()).unboxedDouble()));
+        return jsNumber(purifyNaN(callFrame->r(virtualRegister()).unboxedDouble()));
     case CellDisplacedInJSStack:
-        return callFrame->r(virtualRegister().offset()).unboxedCell();
+        return callFrame->r(virtualRegister()).unboxedCell();
     case BooleanDisplacedInJSStack:
 #if USE(JSVALUE64)
-        return callFrame->r(virtualRegister().offset()).jsValue();
+        return callFrame->r(virtualRegister()).jsValue();
 #else
-        return jsBoolean(callFrame->r(virtualRegister().offset()).unboxedBoolean());
+        return jsBoolean(callFrame->r(virtualRegister()).unboxedBoolean());
 #endif
     case Constant:
         return constant();
diff --git a/Source/JavaScriptCore/bytecode/ValueRecovery.h b/Source/JavaScriptCore/bytecode/ValueRecovery.h
index d8f18c3..5e43bc7 100644
--- a/Source/JavaScriptCore/bytecode/ValueRecovery.h
+++ b/Source/JavaScriptCore/bytecode/ValueRecovery.h
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2013, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2019 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
diff --git a/Source/JavaScriptCore/bytecode/VirtualRegister.h b/Source/JavaScriptCore/bytecode/VirtualRegister.h
index 3619529..fb91af1 100644
--- a/Source/JavaScriptCore/bytecode/VirtualRegister.h
+++ b/Source/JavaScriptCore/bytecode/VirtualRegister.h
@@ -31,12 +31,12 @@
 
 namespace JSC {
 
-inline bool operandIsLocal(int operand)
+inline bool virtualRegisterIsLocal(int operand)
 {
     return operand < 0;
 }
 
-inline bool operandIsArgument(int operand)
+inline bool virtualRegisterIsArgument(int operand)
 {
     return operand >= 0;
 }
@@ -47,27 +47,34 @@
 class VirtualRegister {
 public:
     friend VirtualRegister virtualRegisterForLocal(int);
-    friend VirtualRegister virtualRegisterForArgument(int, int);
+    friend VirtualRegister virtualRegisterForArgumentIncludingThis(int, int);
+
+    static constexpr int invalidVirtualRegister = 0x3fffffff;
+    static constexpr int firstConstantRegisterIndex = FirstConstantRegisterIndex;
 
     VirtualRegister(RegisterID*);
     VirtualRegister(RefPtr<RegisterID>);
 
     VirtualRegister()
-        : m_virtualRegister(s_invalidVirtualRegister)
+        : m_virtualRegister(invalidVirtualRegister)
     { }
 
     explicit VirtualRegister(int virtualRegister)
         : m_virtualRegister(virtualRegister)
     { }
 
-    bool isValid() const { return (m_virtualRegister != s_invalidVirtualRegister); }
-    bool isLocal() const { return operandIsLocal(m_virtualRegister); }
-    bool isArgument() const { return operandIsArgument(m_virtualRegister); }
+    VirtualRegister(CallFrameSlot slot)
+        : m_virtualRegister(static_cast<int>(slot))
+    { }
+
+    bool isValid() const { return (m_virtualRegister != invalidVirtualRegister); }
+    bool isLocal() const { return virtualRegisterIsLocal(m_virtualRegister); }
+    bool isArgument() const { return virtualRegisterIsArgument(m_virtualRegister); }
     bool isHeader() const { return m_virtualRegister >= 0 && m_virtualRegister < CallFrameSlot::thisArgument; }
-    bool isConstant() const { return m_virtualRegister >= s_firstConstantRegisterIndex; }
+    bool isConstant() const { return m_virtualRegister >= firstConstantRegisterIndex; }
     int toLocal() const { ASSERT(isLocal()); return operandToLocal(m_virtualRegister); }
     int toArgument() const { ASSERT(isArgument()); return operandToArgument(m_virtualRegister); }
-    int toConstantIndex() const { ASSERT(isConstant()); return m_virtualRegister - s_firstConstantRegisterIndex; }
+    int toConstantIndex() const { ASSERT(isConstant()); return m_virtualRegister - firstConstantRegisterIndex; }
     int offset() const { return m_virtualRegister; }
     int offsetInBytes() const { return m_virtualRegister * sizeof(Register); }
 
@@ -106,9 +113,6 @@
     void dump(PrintStream& out) const;
 
 private:
-    static constexpr int s_invalidVirtualRegister = 0x3fffffff;
-    static constexpr int s_firstConstantRegisterIndex = FirstConstantRegisterIndex;
-
     static int localToOperand(int local) { return -1 - local; }
     static int operandToLocal(int operand) { return -1 - operand; }
     static int operandToArgument(int operand) { return operand - CallFrame::thisArgumentOffset(); }
@@ -124,7 +128,7 @@
     return VirtualRegister(VirtualRegister::localToOperand(local));
 }
 
-inline VirtualRegister virtualRegisterForArgument(int argument, int offset = 0)
+inline VirtualRegister virtualRegisterForArgumentIncludingThis(int argument, int offset = 0)
 {
     return VirtualRegister(VirtualRegister::argumentToOperand(argument) + offset);
 }
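The renames in VirtualRegister.h above (`operandIsLocal` to `virtualRegisterIsLocal`, `virtualRegisterForArgument` to `virtualRegisterForArgumentIncludingThis`) leave the underlying encoding untouched: locals are negative offsets via `-1 - local`, arguments sit at non-negative offsets past the call-frame header. A standalone check of that mapping, mirroring the private helpers shown in the hunk; the header offset value here is an assumption for illustration, since the real one comes from `CallFrame` layout:

```cpp
#include <cassert>

// Mirrors localToOperand/operandToLocal/argumentToOperand from the diff.
// thisArgumentOffset = 6 is assumed for the sketch only.
constexpr int thisArgumentOffset = 6;

constexpr int localToOperand(int local) { return -1 - local; }
constexpr int operandToLocal(int operand) { return -1 - operand; }
constexpr int argumentToOperand(int argument) { return argument + thisArgumentOffset; }
constexpr int operandToArgument(int operand) { return operand - thisArgumentOffset; }

constexpr bool isLocal(int operand) { return operand < 0; }      // virtualRegisterIsLocal
constexpr bool isArgument(int operand) { return operand >= 0; }  // virtualRegisterIsArgument

// The two halves of the encoding are involutions over disjoint ranges.
static_assert(isLocal(localToOperand(0)), "locals encode negative");
static_assert(operandToLocal(localToOperand(7)) == 7, "local round trip");
static_assert(isArgument(argumentToOperand(0)), "arguments encode non-negative");
static_assert(operandToArgument(argumentToOperand(3)) == 3, "argument round trip");
```

The sign of the offset is the whole local/argument discriminator, which is why the predicates reduce to a single comparison.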
diff --git a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
index 9101d10..c1f8639 100644
--- a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
+++ b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
@@ -512,7 +512,7 @@
                     entry.disableWatching(m_vm);
                     functionSymbolTable->set(NoLockingNecessary, name, entry);
                 }
-                OpPutToScope::emit(this, m_lexicalEnvironmentRegister, UINT_MAX, virtualRegisterForArgument(1 + i), GetPutInfo(ThrowIfNotFound, LocalClosureVar, InitializationMode::NotInitialization), SymbolTableOrScopeDepth::symbolTable(VirtualRegister { symbolTableConstantIndex }), offset.offset());
+                OpPutToScope::emit(this, m_lexicalEnvironmentRegister, UINT_MAX, virtualRegisterForArgumentIncludingThis(1 + i), GetPutInfo(ThrowIfNotFound, LocalClosureVar, InitializationMode::NotInitialization), SymbolTableOrScopeDepth::symbolTable(VirtualRegister { symbolTableConstantIndex }), offset.offset());
             }
             
             // This creates a scoped arguments object and copies the overflow arguments into the
@@ -541,7 +541,7 @@
             if (!captures(name)) {
                 // This is the easy case - just tell the symbol table about the argument. It will
                 // be accessed directly.
-                functionSymbolTable->set(NoLockingNecessary, name, SymbolTableEntry(VarOffset(virtualRegisterForArgument(1 + i))));
+                functionSymbolTable->set(NoLockingNecessary, name, SymbolTableEntry(VarOffset(virtualRegisterForArgumentIncludingThis(1 + i))));
                 continue;
             }
             
@@ -550,7 +550,7 @@
                 static_cast<const BindingNode*>(parameters.at(i).first)->boundProperty();
             functionSymbolTable->set(NoLockingNecessary, name, SymbolTableEntry(VarOffset(offset)));
             
-            OpPutToScope::emit(this, m_lexicalEnvironmentRegister, addConstant(ident), virtualRegisterForArgument(1 + i), GetPutInfo(ThrowIfNotFound, LocalClosureVar, InitializationMode::NotInitialization), SymbolTableOrScopeDepth::symbolTable(VirtualRegister { symbolTableConstantIndex }), offset.offset());
+            OpPutToScope::emit(this, m_lexicalEnvironmentRegister, addConstant(ident), virtualRegisterForArgumentIncludingThis(1 + i), GetPutInfo(ThrowIfNotFound, LocalClosureVar, InitializationMode::NotInitialization), SymbolTableOrScopeDepth::symbolTable(VirtualRegister { symbolTableConstantIndex }), offset.offset());
         }
     }
     
@@ -1186,10 +1186,10 @@
 
 RegisterID* BytecodeGenerator::initializeNextParameter()
 {
-    VirtualRegister reg = virtualRegisterForArgument(m_codeBlock->numParameters());
+    VirtualRegister reg = virtualRegisterForArgumentIncludingThis(m_codeBlock->numParameters());
     m_parameters.grow(m_parameters.size() + 1);
     auto& parameter = registerFor(reg);
-    parameter.setIndex(reg.offset());
+    parameter.setIndex(reg);
     m_codeBlock->addParameter();
     return &parameter;
 }
@@ -1198,7 +1198,7 @@
 {
     // Make sure the code block knows about all of our parameters, and make sure that parameters
     // needing destructuring are noted.
-    m_thisRegister.setIndex(initializeNextParameter()->index()); // this
+    m_thisRegister.setIndex(VirtualRegister(initializeNextParameter()->index())); // this
 
     bool nonSimpleArguments = false;
     for (unsigned i = 0; i < parameters.size(); ++i) {
@@ -1639,11 +1639,11 @@
 
     if (m_lastInstruction->is<OpTypeof>()) {
         auto op = m_lastInstruction->as<OpTypeof>();
-        if (src1->index() == op.m_dst.offset()
+        if (src1->virtualRegister() == op.m_dst
             && src1->isTemporary()
-            && m_codeBlock->isConstantRegisterIndex(src2->index())
-            && m_codeBlock->constantRegister(src2->index()).get().isString()) {
-            const String& value = asString(m_codeBlock->constantRegister(src2->index()).get())->tryGetValue();
+            && src2->virtualRegister().isConstant()
+            && m_codeBlock->constantRegister(src2->virtualRegister()).get().isString()) {
+            const String& value = asString(m_codeBlock->constantRegister(src2->virtualRegister()).get())->tryGetValue();
             if (value == "undefined") {
                 rewind();
                 OpIsUndefined::emit(this, dst, op.m_value);
@@ -3290,6 +3290,8 @@
     // Emit call.
     ASSERT(dst != ignoredResult());
     VarargsOp::emit(this, dst, func, thisRegister, arguments ? arguments : VirtualRegister(0), firstFreeRegister, firstVarArgOffset);
+    if (VarargsOp::opcodeID != op_tail_call_forward_arguments)
+        ASSERT(m_codeBlock->hasCheckpoints());
     return dst;
 }
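The new assertion above ties varargs bytecodes to checkpoints. Per the ChangeLog entries for `BytecodeIndex::pack`/`checkpoint` (that file's hunks are not shown here), a `BytecodeIndex` carries both a bytecode offset and a checkpoint number in a single word. A hedged sketch of that packing; the number of checkpoint bits chosen here is an assumption, not the real layout:

```cpp
#include <cassert>
#include <cstdint>

// Illustration of packing (offset, checkpoint) into one 32-bit index.
// The real BytecodeIndex reserves a small number of low bits for the
// checkpoint; 2 bits is assumed for this sketch.
constexpr uint32_t checkpointBits = 2;
constexpr uint32_t checkpointMask = (1u << checkpointBits) - 1;

constexpr uint32_t pack(uint32_t offset, uint32_t checkpoint)
{
    return (offset << checkpointBits) | (checkpoint & checkpointMask);
}
constexpr uint32_t offsetOf(uint32_t bits) { return bits >> checkpointBits; }
constexpr uint32_t checkpointOf(uint32_t bits) { return bits & checkpointMask; }

static_assert(offsetOf(pack(100, 3)) == 100, "offset survives packing");
static_assert(checkpointOf(pack(100, 3)) == 3, "checkpoint survives packing");
static_assert(checkpointOf(pack(100, 0)) == 0, "checkpoint 0 is the plain index");
```

Packing the checkpoint into the index is what lets OSR exit target a point partway through a varargs bytecode without widening every structure keyed on bytecode position.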
 
diff --git a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h
index c9e4f69..42c6302 100644
--- a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h
+++ b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h
@@ -1019,6 +1019,7 @@
         bool shouldEmitControlFlowProfilerHooks() const { return m_codeGenerationMode.contains(CodeGenerationMode::ControlFlowProfiler); }
         
         bool isStrictMode() const { return m_codeBlock->isStrictMode(); }
+        void setUsesCheckpoints() { m_codeBlock->setHasCheckpoints(); }
 
         SourceParseMode parseMode() const { return m_codeBlock->parseMode(); }
         
diff --git a/Source/JavaScriptCore/bytecompiler/RegisterID.h b/Source/JavaScriptCore/bytecompiler/RegisterID.h
index da3fe12..73b4320 100644
--- a/Source/JavaScriptCore/bytecompiler/RegisterID.h
+++ b/Source/JavaScriptCore/bytecompiler/RegisterID.h
@@ -69,12 +69,12 @@
         {
         }
 
-        void setIndex(int index)
+        void setIndex(VirtualRegister index)
         {
 #if ASSERT_ENABLED
             m_didSetIndex = true;
 #endif
-            m_virtualRegister = VirtualRegister(index);
+            m_virtualRegister = index;
         }
 
         void setTemporary()
diff --git a/Source/JavaScriptCore/dfg/DFGAbstractHeap.cpp b/Source/JavaScriptCore/dfg/DFGAbstractHeap.cpp
index b83fee0..e64071b1 100644
--- a/Source/JavaScriptCore/dfg/DFGAbstractHeap.cpp
+++ b/Source/JavaScriptCore/dfg/DFGAbstractHeap.cpp
@@ -40,6 +40,14 @@
         out.print(value());
 }
 
+void AbstractHeap::Payload::dumpAsOperand(PrintStream& out) const
+{
+    if (isTop())
+        out.print("TOP");
+    else
+        out.print(Operand::fromBits(value()));
+}
+
 void AbstractHeap::dump(PrintStream& out) const
 {
     out.print(kind());
@@ -49,6 +57,13 @@
         out.print("(", DOMJIT::HeapRange::fromRaw(payload().value32()), ")");
         return;
     }
+    if (kind() == Stack) {
+        out.print("(");
+        payload().dumpAsOperand(out);
+        out.print(")");
+        return;
+    }
+
     out.print("(", payload(), ")");
 }
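With this change a `Stack` abstract heap's payload encodes an `Operand` (which may be a tmp) rather than a raw virtual-register offset, so dumping and aliasing queries must treat it as either TOP or a concrete encoded operand, as `dumpAsOperand` above does. A sketch of that two-state payload; names and the overlap rule shown are illustrative:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the Payload idea: either TOP (stands for every stack slot)
// or one concrete operand's bit encoding (Operand::asBits in the patch).
struct PayloadSketch {
    bool isTop;
    int64_t value; // encoded operand bits when !isTop

    static PayloadSketch top() { return { true, 0 }; }
    static PayloadSketch operand(int64_t bits) { return { false, bits }; }

    // A TOP payload may alias anything; two concrete payloads alias only
    // when they encode the same operand.
    bool overlaps(const PayloadSketch& other) const
    {
        return isTop || other.isTop || value == other.value;
    }
};
```

Keeping TOP distinct from any concrete value is what allows clobber analysis (as in `clobberedByThisBlock` further down) to fall back to "everything on the stack" when the payload is not a single known slot.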
 
diff --git a/Source/JavaScriptCore/dfg/DFGAbstractHeap.h b/Source/JavaScriptCore/dfg/DFGAbstractHeap.h
index bf99377..3c4ce3d 100644
--- a/Source/JavaScriptCore/dfg/DFGAbstractHeap.h
+++ b/Source/JavaScriptCore/dfg/DFGAbstractHeap.h
@@ -28,6 +28,7 @@
 #if ENABLE(DFG_JIT)
 
 #include "DOMJITHeapRange.h"
+#include "OperandsInlines.h"
 #include "VirtualRegister.h"
 #include <wtf/HashMap.h>
 #include <wtf/PrintStream.h>
@@ -123,10 +124,15 @@
             , m_value(bitwise_cast<intptr_t>(pointer))
         {
         }
-        
-        Payload(VirtualRegister operand)
+
+        Payload(Operand operand)
             : m_isTop(false)
-            , m_value(operand.offset())
+            , m_value(operand.asBits())
+        {
+        }
+
+        Payload(VirtualRegister operand)
+            : Payload(Operand(operand))
         {
         }
         
@@ -183,6 +189,7 @@
         }
         
         void dump(PrintStream&) const;
+        void dumpAsOperand(PrintStream&) const;
         
     private:
         bool m_isTop;
@@ -204,6 +211,7 @@
     {
         ASSERT(kind != InvalidAbstractHeap && kind != World && kind != Heap && kind != SideState);
         m_value = encode(kind, payload);
+        ASSERT(this->kind() == kind && this->payload() == payload);
     }
     
     AbstractHeap(WTF::HashTableDeletedValueType)
@@ -219,6 +227,11 @@
         ASSERT(kind() != World && kind() != InvalidAbstractHeap);
         return payloadImpl();
     }
+    Operand operand() const
+    {
+        ASSERT(kind() == Stack && !payload().isTop());
+        return Operand::fromBits(payload().value());
+    }
     
     AbstractHeap supertype() const
     {
diff --git a/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h b/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
index c26c24a..e5284ad 100644
--- a/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
+++ b/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
@@ -388,7 +388,7 @@
             
     case GetLocal: {
         VariableAccessData* variableAccessData = node->variableAccessData();
-        AbstractValue value = m_state.operand(variableAccessData->local().offset());
+        AbstractValue value = m_state.operand(variableAccessData->operand());
         // The value in the local should already be checked.
         DFG_ASSERT(m_graph, node, value.isType(typeFilterFor(variableAccessData->flushFormat())));
         if (value.value())
@@ -399,7 +399,7 @@
         
     case GetStack: {
         StackAccessData* data = node->stackAccessData();
-        AbstractValue value = m_state.operand(data->local);
+        AbstractValue value = m_state.operand(data->operand);
         // The value in the local should already be checked.
         DFG_ASSERT(m_graph, node, value.isType(typeFilterFor(data->format)));
         if (value.value())
@@ -409,12 +409,12 @@
     }
         
     case SetLocal: {
-        m_state.operand(node->local()) = forNode(node->child1());
+        m_state.operand(node->operand()) = forNode(node->child1());
         break;
     }
         
     case PutStack: {
-        m_state.operand(node->stackAccessData()->local) = forNode(node->child1());
+        m_state.operand(node->stackAccessData()->operand) = forNode(node->child1());
         break;
     }
         
@@ -436,7 +436,7 @@
         // Assert that the state of arguments has been set. SetArgumentDefinitely/SetArgumentMaybe means
         // that someone set the argument values out-of-band, and currently this always means setting to a
         // non-clear value.
-        ASSERT(!m_state.operand(node->local()).isClear());
+        ASSERT(!m_state.operand(node->operand()).isClear());
         break;
 
     case InitializeEntrypointArguments: {
@@ -465,6 +465,12 @@
         break;
     }
 
+    case VarargsLength: {
+        clobberWorld();
+        setTypeForNode(node, SpecInt32Only);
+        break;
+    }
+
     case LoadVarargs:
     case ForwardVarargs: {
         // FIXME: ForwardVarargs should check if the count becomes known, and if it does, it should turn
@@ -483,7 +489,7 @@
         LoadVarargsData* data = node->loadVarargsData();
         m_state.operand(data->count).setNonCellType(SpecInt32Only);
         for (unsigned i = data->limit - 1; i--;)
-            m_state.operand(data->start.offset() + i).makeHeapTop();
+            m_state.operand(data->start + i).makeHeapTop();
         break;
     }
 
@@ -2365,9 +2371,9 @@
             unsigned argumentIndex;
             if (argumentIndexChecked.safeGet(argumentIndex) != CheckedState::DidOverflow) {
                 if (inlineCallFrame) {
-                    if (argumentIndex < inlineCallFrame->argumentCountIncludingThis - 1) {
+                    if (argumentIndex < static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1)) {
                         setForNode(node, m_state.operand(
-                            virtualRegisterForArgument(argumentIndex + 1) + inlineCallFrame->stackOffset));
+                            virtualRegisterForArgumentIncludingThis(argumentIndex + 1) + inlineCallFrame->stackOffset));
                         m_state.setShouldTryConstantFolding(true);
                         break;
                     }
@@ -2388,7 +2394,7 @@
             for (unsigned i = 1 + node->numberOfArgumentsToSkip(); i < inlineCallFrame->argumentCountIncludingThis; ++i) {
                 result.merge(
                     m_state.operand(
-                        virtualRegisterForArgument(i) + inlineCallFrame->stackOffset));
+                        virtualRegisterForArgumentIncludingThis(i) + inlineCallFrame->stackOffset));
             }
             
             if (node->op() == GetMyArgumentByValOutOfBounds)
diff --git a/Source/JavaScriptCore/dfg/DFGArgumentPosition.h b/Source/JavaScriptCore/dfg/DFGArgumentPosition.h
index d73247b..89c08fa 100644
--- a/Source/JavaScriptCore/dfg/DFGArgumentPosition.h
+++ b/Source/JavaScriptCore/dfg/DFGArgumentPosition.h
@@ -120,7 +120,7 @@
     {
         for (unsigned i = 0; i < m_variables.size(); ++i) {
             VariableAccessData* variable = m_variables[i]->find();
-            VirtualRegister operand = variable->local();
+            Operand operand = variable->operand();
 
             if (i)
                 out.print(" ");
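The recurring `VirtualRegister` → `Operand` substitutions throughout these hunks reflect the new operand model: an operand is now either a bytecode virtual register (argument, local, or header slot) or a checkpoint "tmp". A simplified sketch of that distinction, with names mirroring the patch but a layout that is purely illustrative:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of the Operand concept: either a VirtualRegister
// (negative offsets are locals, non-negative are arguments/header slots
// in this simplified model) or a checkpoint tmp index. JSC's real
// representation and offset conventions differ.
enum class OperandKind : uint8_t { Bytecode, Tmp };

struct Operand {
    OperandKind kind;
    int value; // VirtualRegister offset, or tmp index

    bool isTmp() const { return kind == OperandKind::Tmp; }
    bool isLocal() const { return kind == OperandKind::Bytecode && value < 0; }
    bool isArgument() const { return kind == OperandKind::Bytecode && value >= 0; }

    static Operand tmp(int index) { return { OperandKind::Tmp, index }; }
    static Operand reg(int offset) { return { OperandKind::Bytecode, offset }; }
};
```

This is why helpers such as `argumentsInvolveStackSlot` (changed in `DFGArgumentsUtilities.cpp` below) now take an `Operand` and bail out early when `operand.isTmp()`: tmps never alias argument stack slots.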
diff --git a/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp b/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp
index 713e799..9bb4f4f 100644
--- a/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp
@@ -358,9 +358,12 @@
                 case NewArrayBuffer:
                     break;
                     
+                case VarargsLength:
+                    break;
+
                 case LoadVarargs:
-                    if (node->loadVarargsData()->offset && (node->child1()->op() == NewArrayWithSpread || node->child1()->op() == Spread || node->child1()->op() == NewArrayBuffer))
-                        escape(node->child1(), node);
+                    if (node->loadVarargsData()->offset && (node->argumentsChild()->op() == NewArrayWithSpread || node->argumentsChild()->op() == Spread || node->argumentsChild()->op() == NewArrayBuffer))
+                        escape(node->argumentsChild(), node);
                     break;
                     
                 case CallVarargs:
@@ -493,10 +496,10 @@
                             return;
                         }
                         ASSERT(!heap.payload().isTop());
-                        VirtualRegister reg(heap.payload().value32());
+                        Operand operand = heap.operand();
                         // The register may not point to an argument or local, for example if we are looking at SetArgumentCountIncludingThis.
-                        if (!reg.isHeader())
-                            clobberedByThisBlock.operand(reg) = true;
+                        if (!operand.isHeader())
+                            clobberedByThisBlock.operand(operand) = true;
                     },
                     NoOpClobberize());
             }
@@ -560,16 +563,16 @@
                         if (inlineCallFrame) {
                             if (inlineCallFrame->isVarargs()) {
                                 isClobberedByBlock |= clobberedByThisBlock.operand(
-                                    inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis);
+                                    VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis));
                             }
 
                             if (!isClobberedByBlock || inlineCallFrame->isClosureCall) {
                                 isClobberedByBlock |= clobberedByThisBlock.operand(
-                                    inlineCallFrame->stackOffset + CallFrameSlot::callee);
+                                    VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::callee));
                             }
 
                             if (!isClobberedByBlock) {
-                                for (unsigned i = 0; i < inlineCallFrame->argumentCountIncludingThis - 1; ++i) {
+                                for (unsigned i = 0; i < static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1); ++i) {
                                     VirtualRegister reg =
                                         VirtualRegister(inlineCallFrame->stackOffset) +
                                         CallFrame::argumentOffset(i);
@@ -627,7 +630,7 @@
                                 m_graph, node, NoOpClobberize(),
                                 [&] (AbstractHeap heap) {
                                     if (heap.kind() == Stack && !heap.payload().isTop()) {
-                                        if (argumentsInvolveStackSlot(inlineCallFrame, VirtualRegister(heap.payload().value32())))
+                                        if (argumentsInvolveStackSlot(inlineCallFrame, heap.operand()))
                                             found = true;
                                         return;
                                     }
@@ -752,7 +755,7 @@
                     DFG_ASSERT(
                         m_graph, node, node->child1()->op() == PhantomDirectArguments, node->child1()->op());
                     VirtualRegister reg =
-                        virtualRegisterForArgument(node->capturedArgumentsOffset().offset() + 1) +
+                        virtualRegisterForArgumentIncludingThis(node->capturedArgumentsOffset().offset() + 1) +
                         node->origin.semantic.stackOffset();
                     StackAccessData* data = m_graph.m_stackAccessData.add(reg, FlushedJSValue);
                     node->convertToGetStack(data);
@@ -806,14 +809,14 @@
                         
                         bool safeToGetStack = index >= numberOfArgumentsToSkip;
                         if (inlineCallFrame)
-                            safeToGetStack &= index < inlineCallFrame->argumentCountIncludingThis - 1;
+                            safeToGetStack &= index < static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1);
                         else {
                             safeToGetStack &=
                                 index < static_cast<unsigned>(codeBlock()->numParameters()) - 1;
                         }
                         if (safeToGetStack) {
                             StackAccessData* data;
-                            VirtualRegister arg = virtualRegisterForArgument(index + 1);
+                            VirtualRegister arg = virtualRegisterForArgumentIncludingThis(index + 1);
                             if (inlineCallFrame)
                                 arg += inlineCallFrame->stackOffset;
                             data = m_graph.m_stackAccessData.add(arg, FlushedJSValue);
@@ -845,9 +848,23 @@
                     node->convertToIdentityOn(result);
                     break;
                 }
-                    
+                
+                case VarargsLength: {
+                    Node* candidate = node->argumentsChild().node();
+                    if (!isEliminatedAllocation(candidate))
+                        break;
+
+                    // VarargsLength can exit, so it better be exitOK.
+                    DFG_ASSERT(m_graph, node, node->origin.exitOK);
+                    NodeOrigin origin = node->origin.withExitOK(true);
+
+
+                    node->convertToIdentityOn(emitCodeToGetArgumentsArrayLength(insertionSet, candidate, nodeIndex, origin, /* addThis = */ true));
+                    break;
+                }
+
                 case LoadVarargs: {
-                    Node* candidate = node->child1().node();
+                    Node* candidate = node->argumentsChild().node();
                     if (!isEliminatedAllocation(candidate))
                         break;
                     
@@ -862,10 +879,10 @@
                             jsNumber(argumentCountIncludingThis));
                         insertionSet.insertNode(
                             nodeIndex, SpecNone, KillStack, node->origin.takeValidExit(canExit),
-                            OpInfo(varargsData->count.offset()));
+                            OpInfo(varargsData->count));
                         insertionSet.insertNode(
                             nodeIndex, SpecNone, MovHint, node->origin.takeValidExit(canExit),
-                            OpInfo(varargsData->count.offset()), Edge(argumentCountIncludingThisNode));
+                            OpInfo(varargsData->count), Edge(argumentCountIncludingThisNode));
                         insertionSet.insertNode(
                             nodeIndex, SpecNone, PutStack, node->origin.withExitOK(canExit),
                             OpInfo(m_graph.m_stackAccessData.add(varargsData->count, FlushedInt32)),
@@ -874,14 +891,15 @@
 
                     auto storeValue = [&] (Node* value, unsigned storeIndex) {
                         VirtualRegister reg = varargsData->start + storeIndex;
+                        ASSERT(reg.isLocal());
                         StackAccessData* data =
                             m_graph.m_stackAccessData.add(reg, FlushedJSValue);
                         
                         insertionSet.insertNode(
-                            nodeIndex, SpecNone, KillStack, node->origin.takeValidExit(canExit), OpInfo(reg.offset()));
+                            nodeIndex, SpecNone, KillStack, node->origin.takeValidExit(canExit), OpInfo(reg));
                         insertionSet.insertNode(
                             nodeIndex, SpecNone, MovHint, node->origin.takeValidExit(canExit),
-                            OpInfo(reg.offset()), Edge(value));
+                            OpInfo(reg), Edge(value));
                         insertionSet.insertNode(
                             nodeIndex, SpecNone, PutStack, node->origin.withExitOK(canExit),
                             OpInfo(data), Edge(value));
@@ -935,7 +953,7 @@
                                 ASSERT(candidate->op() == PhantomCreateRest);
                                 unsigned numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
                                 InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
-                                unsigned frameArgumentCount = inlineCallFrame->argumentCountIncludingThis - 1;
+                                unsigned frameArgumentCount = static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1);
                                 if (frameArgumentCount >= numberOfArgumentsToSkip)
                                     return frameArgumentCount - numberOfArgumentsToSkip;
                                 return 0;
@@ -983,9 +1001,9 @@
                                     ASSERT(candidate->op() == PhantomCreateRest);
                                     unsigned numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
                                     InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
-                                    unsigned frameArgumentCount = inlineCallFrame->argumentCountIncludingThis - 1;
+                                    unsigned frameArgumentCount = static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1);
                                     for (unsigned loadIndex = numberOfArgumentsToSkip; loadIndex < frameArgumentCount; ++loadIndex) {
-                                        VirtualRegister reg = virtualRegisterForArgument(loadIndex + 1) + inlineCallFrame->stackOffset;
+                                        VirtualRegister reg = virtualRegisterForArgumentIncludingThis(loadIndex + 1) + inlineCallFrame->stackOffset;
                                         StackAccessData* data = m_graph.m_stackAccessData.add(reg, FlushedJSValue);
                                         Node* value = insertionSet.insertNode(
                                             nodeIndex, SpecNone, GetStack, node->origin.withExitOK(canExit),
@@ -1019,9 +1037,7 @@
 
                         InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
 
-                        if (inlineCallFrame
-                            && !inlineCallFrame->isVarargs()) {
-
+                        if (inlineCallFrame && !inlineCallFrame->isVarargs()) {
                             unsigned argumentCountIncludingThis = inlineCallFrame->argumentCountIncludingThis;
                             if (argumentCountIncludingThis > varargsData->offset)
                                 argumentCountIncludingThis -= varargsData->offset;
@@ -1030,7 +1046,6 @@
                             RELEASE_ASSERT(argumentCountIncludingThis >= 1);
 
                             if (argumentCountIncludingThis <= varargsData->limit) {
-                                
                                 storeArgumentCountIncludingThis(argumentCountIncludingThis);
 
                                 DFG_ASSERT(m_graph, node, varargsData->limit - 1 >= varargsData->mandatoryMinimum, varargsData->limit, varargsData->mandatoryMinimum);
@@ -1045,7 +1060,7 @@
                                     unsigned loadIndex = storeIndex + varargsData->offset;
 
                                     if (loadIndex + 1 < inlineCallFrame->argumentCountIncludingThis) {
-                                        VirtualRegister reg = virtualRegisterForArgument(loadIndex + 1) + inlineCallFrame->stackOffset;
+                                        VirtualRegister reg = virtualRegisterForArgumentIncludingThis(loadIndex + 1) + inlineCallFrame->stackOffset;
                                         StackAccessData* data = m_graph.m_stackAccessData.add(
                                             reg, FlushedJSValue);
                                         
@@ -1201,7 +1216,7 @@
                                 unsigned numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
                                 for (unsigned i = 1 + numberOfArgumentsToSkip; i < inlineCallFrame->argumentCountIncludingThis; ++i) {
                                     StackAccessData* data = m_graph.m_stackAccessData.add(
-                                        virtualRegisterForArgument(i) + inlineCallFrame->stackOffset,
+                                        virtualRegisterForArgumentIncludingThis(i) + inlineCallFrame->stackOffset,
                                         FlushedJSValue);
 
                                     Node* value = insertionSet.insertNode(
@@ -1227,7 +1242,7 @@
                             Vector<Node*> arguments;
                             for (unsigned i = 1 + varargsData->firstVarArgOffset; i < inlineCallFrame->argumentCountIncludingThis; ++i) {
                                 StackAccessData* data = m_graph.m_stackAccessData.add(
-                                    virtualRegisterForArgument(i) + inlineCallFrame->stackOffset,
+                                    virtualRegisterForArgumentIncludingThis(i) + inlineCallFrame->stackOffset,
                                     FlushedJSValue);
                                 
                                 Node* value = insertionSet.insertNode(
diff --git a/Source/JavaScriptCore/dfg/DFGArgumentsUtilities.cpp b/Source/JavaScriptCore/dfg/DFGArgumentsUtilities.cpp
index ff1d568..1fdc309 100644
--- a/Source/JavaScriptCore/dfg/DFGArgumentsUtilities.cpp
+++ b/Source/JavaScriptCore/dfg/DFGArgumentsUtilities.cpp
@@ -32,11 +32,15 @@
 
 namespace JSC { namespace DFG {
 
-bool argumentsInvolveStackSlot(InlineCallFrame* inlineCallFrame, VirtualRegister reg)
+bool argumentsInvolveStackSlot(InlineCallFrame* inlineCallFrame, Operand operand)
 {
+    if (operand.isTmp())
+        return false;
+
+    VirtualRegister reg = operand.virtualRegister();
     if (!inlineCallFrame)
         return (reg.isArgument() && reg.toArgument()) || reg.isHeader();
-    
+
     if (inlineCallFrame->isClosureCall
         && reg == VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::callee))
         return true;
@@ -46,19 +50,19 @@
         return true;
     
     // We do not include fixups here since it is not related to |arguments|, rest parameters, and varargs.
-    unsigned numArguments = inlineCallFrame->argumentCountIncludingThis - 1;
+    unsigned numArguments = static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1);
     VirtualRegister argumentStart =
         VirtualRegister(inlineCallFrame->stackOffset) + CallFrame::argumentOffset(0);
     return reg >= argumentStart && reg < argumentStart + numArguments;
 }
 
-bool argumentsInvolveStackSlot(Node* candidate, VirtualRegister reg)
+bool argumentsInvolveStackSlot(Node* candidate, Operand operand)
 {
-    return argumentsInvolveStackSlot(candidate->origin.semantic.inlineCallFrame(), reg);
+    return argumentsInvolveStackSlot(candidate->origin.semantic.inlineCallFrame(), operand);
 }
 
 Node* emitCodeToGetArgumentsArrayLength(
-    InsertionSet& insertionSet, Node* arguments, unsigned nodeIndex, NodeOrigin origin)
+    InsertionSet& insertionSet, Node* arguments, unsigned nodeIndex, NodeOrigin origin, bool addThis)
 {
     Graph& graph = insertionSet.graph();
 
@@ -69,11 +73,14 @@
         || arguments->op() == NewArrayBuffer
         || arguments->op() == PhantomDirectArguments || arguments->op() == PhantomClonedArguments
         || arguments->op() == PhantomCreateRest || arguments->op() == PhantomNewArrayBuffer
-        || arguments->op() == PhantomNewArrayWithSpread,
+        || arguments->op() == PhantomNewArrayWithSpread || arguments->op() == PhantomSpread,
         arguments->op());
 
+    if (arguments->op() == PhantomSpread)
+        return emitCodeToGetArgumentsArrayLength(insertionSet, arguments->child1().node(), nodeIndex, origin, addThis);
+
     if (arguments->op() == PhantomNewArrayWithSpread) {
-        unsigned numberOfNonSpreadArguments = 0;
+        unsigned numberOfNonSpreadArguments = addThis;
         BitVector* bitVector = arguments->bitVector();
         Node* currentSum = nullptr;
         for (unsigned i = 0; i < arguments->numChildren(); i++) {
@@ -103,7 +110,7 @@
 
     if (arguments->op() == NewArrayBuffer || arguments->op() == PhantomNewArrayBuffer) {
         return insertionSet.insertConstant(
-            nodeIndex, origin, jsNumber(arguments->castOperand<JSImmutableButterfly*>()->length()));
+            nodeIndex, origin, jsNumber(arguments->castOperand<JSImmutableButterfly*>()->length() + addThis));
     }
     
     InlineCallFrame* inlineCallFrame = arguments->origin.semantic.inlineCallFrame();
@@ -113,7 +120,7 @@
         numberOfArgumentsToSkip = arguments->numberOfArgumentsToSkip();
     
     if (inlineCallFrame && !inlineCallFrame->isVarargs()) {
-        unsigned argumentsSize = inlineCallFrame->argumentCountIncludingThis - 1;
+        unsigned argumentsSize = inlineCallFrame->argumentCountIncludingThis - !addThis;
         if (argumentsSize >= numberOfArgumentsToSkip)
             argumentsSize -= numberOfArgumentsToSkip;
         else
@@ -129,14 +136,14 @@
         nodeIndex, SpecInt32Only, ArithSub, origin, OpInfo(Arith::Unchecked),
         Edge(argumentCount, Int32Use),
         insertionSet.insertConstantForUse(
-            nodeIndex, origin, jsNumber(1 + numberOfArgumentsToSkip), Int32Use));
+            nodeIndex, origin, jsNumber(numberOfArgumentsToSkip + !addThis), Int32Use));
 
     if (numberOfArgumentsToSkip) {
         // The above subtraction may produce a negative number if this number is non-zero. We correct that here.
         result = insertionSet.insertNode(
             nodeIndex, SpecInt32Only, ArithMax, origin, 
             Edge(result, Int32Use), 
-            insertionSet.insertConstantForUse(nodeIndex, origin, jsNumber(0), Int32Use));
+            insertionSet.insertConstantForUse(nodeIndex, origin, jsNumber(static_cast<unsigned>(addThis)), Int32Use));
         result->setResult(NodeResultInt32);
     }
 
diff --git a/Source/JavaScriptCore/dfg/DFGArgumentsUtilities.h b/Source/JavaScriptCore/dfg/DFGArgumentsUtilities.h
index 9a7718f..74f939e 100644
--- a/Source/JavaScriptCore/dfg/DFGArgumentsUtilities.h
+++ b/Source/JavaScriptCore/dfg/DFGArgumentsUtilities.h
@@ -31,11 +31,11 @@
 
 namespace JSC { namespace DFG {
 
-bool argumentsInvolveStackSlot(InlineCallFrame*, VirtualRegister);
-bool argumentsInvolveStackSlot(Node* candidate, VirtualRegister);
+bool argumentsInvolveStackSlot(InlineCallFrame*, Operand);
+bool argumentsInvolveStackSlot(Node* candidate, Operand);
 
 Node* emitCodeToGetArgumentsArrayLength(
-    InsertionSet&, Node* arguments, unsigned nodeIndex, NodeOrigin);
+    InsertionSet&, Node* arguments, unsigned nodeIndex, NodeOrigin, bool addThis = false);
 
 } } // namespace JSC::DFG
 
diff --git a/Source/JavaScriptCore/dfg/DFGAtTailAbstractState.h b/Source/JavaScriptCore/dfg/DFGAtTailAbstractState.h
index 96dac39..897c70d 100644
--- a/Source/JavaScriptCore/dfg/DFGAtTailAbstractState.h
+++ b/Source/JavaScriptCore/dfg/DFGAtTailAbstractState.h
@@ -139,8 +139,7 @@
     
     unsigned numberOfArguments() const { return m_block->valuesAtTail.numberOfArguments(); }
     unsigned numberOfLocals() const { return m_block->valuesAtTail.numberOfLocals(); }
-    AbstractValue& operand(int operand) { return m_block->valuesAtTail.operand(operand); }
-    AbstractValue& operand(VirtualRegister operand) { return m_block->valuesAtTail.operand(operand); }
+    AbstractValue& operand(Operand operand) { return m_block->valuesAtTail.operand(operand); }
     AbstractValue& local(size_t index) { return m_block->valuesAtTail.local(index); }
     AbstractValue& argument(size_t index) { return m_block->valuesAtTail.argument(index); }
     
diff --git a/Source/JavaScriptCore/dfg/DFGAvailabilityMap.cpp b/Source/JavaScriptCore/dfg/DFGAvailabilityMap.cpp
index 7743e7d..c2b55aa 100644
--- a/Source/JavaScriptCore/dfg/DFGAvailabilityMap.cpp
+++ b/Source/JavaScriptCore/dfg/DFGAvailabilityMap.cpp
@@ -65,10 +65,10 @@
 
 void AvailabilityMap::pruneByLiveness(Graph& graph, CodeOrigin where)
 {
-    Operands<Availability> localsCopy(m_locals.numberOfArguments(), m_locals.numberOfLocals(), Availability::unavailable());
+    Operands<Availability> localsCopy(OperandsLike, m_locals, Availability::unavailable());
     graph.forAllLiveInBytecode(
         where,
-        [&] (VirtualRegister reg) {
+        [&] (Operand reg) {
             localsCopy.operand(reg) = m_locals.operand(reg);
         });
     m_locals = WTFMove(localsCopy);
diff --git a/Source/JavaScriptCore/dfg/DFGAvailabilityMap.h b/Source/JavaScriptCore/dfg/DFGAvailabilityMap.h
index 5355256..80c12bf1 100644
--- a/Source/JavaScriptCore/dfg/DFGAvailabilityMap.h
+++ b/Source/JavaScriptCore/dfg/DFGAvailabilityMap.h
@@ -66,9 +66,9 @@
     }
     
     template<typename HasFunctor, typename AddFunctor>
-    void closeStartingWithLocal(VirtualRegister reg, const HasFunctor& has, const AddFunctor& add)
+    void closeStartingWithLocal(Operand op, const HasFunctor& has, const AddFunctor& add)
     {
-        Availability availability = m_locals.operand(reg);
+        Availability availability = m_locals.operand(op);
         if (!availability.hasNode())
             return;
         
diff --git a/Source/JavaScriptCore/dfg/DFGBasicBlock.cpp b/Source/JavaScriptCore/dfg/DFGBasicBlock.cpp
index c8d1d17..e48e0c2 100644
--- a/Source/JavaScriptCore/dfg/DFGBasicBlock.cpp
+++ b/Source/JavaScriptCore/dfg/DFGBasicBlock.cpp
@@ -34,7 +34,7 @@
 
 DEFINE_ALLOCATOR_WITH_HEAP_IDENTIFIER(BasicBlock);
 
-BasicBlock::BasicBlock(BytecodeIndex bytecodeBegin, unsigned numArguments, unsigned numLocals, float executionCount)
+BasicBlock::BasicBlock(BytecodeIndex bytecodeBegin, unsigned numArguments, unsigned numLocals, unsigned numTmps, float executionCount)
     : bytecodeBegin(bytecodeBegin)
     , index(NoBlock)
     , cfaStructureClobberStateAtHead(StructuresAreWatched)
@@ -50,11 +50,11 @@
     , isLinked(false)
 #endif
     , isReachable(false)
-    , variablesAtHead(numArguments, numLocals)
-    , variablesAtTail(numArguments, numLocals)
-    , valuesAtHead(numArguments, numLocals)
-    , valuesAtTail(numArguments, numLocals)
-    , intersectionOfPastValuesAtHead(numArguments, numLocals, AbstractValue::fullTop())
+    , variablesAtHead(numArguments, numLocals, numTmps)
+    , variablesAtTail(numArguments, numLocals, numTmps)
+    , valuesAtHead(numArguments, numLocals, numTmps)
+    , valuesAtTail(numArguments, numLocals, numTmps)
+    , intersectionOfPastValuesAtHead(numArguments, numLocals, numTmps, AbstractValue::fullTop())
     , executionCount(executionCount)
 {
 }
@@ -72,6 +72,15 @@
     intersectionOfPastValuesAtHead.ensureLocals(newNumLocals, AbstractValue::fullTop());
 }
 
+void BasicBlock::ensureTmps(unsigned newNumTmps)
+{
+    variablesAtHead.ensureTmps(newNumTmps);
+    variablesAtTail.ensureTmps(newNumTmps);
+    valuesAtHead.ensureTmps(newNumTmps);
+    valuesAtTail.ensureTmps(newNumTmps);
+    intersectionOfPastValuesAtHead.ensureTmps(newNumTmps, AbstractValue::fullTop());
+}
+
 void BasicBlock::replaceTerminal(Graph& graph, Node* node)
 {
     NodeAndIndex result = findTerminal();
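The `ensureTmps` plumbing above extends every per-block `Operands<T>` table (variables and abstract values at head/tail) with a third segment for checkpoint tmps, next to arguments and locals. A toy model of that container shape, under the assumption of three separate vectors where JSC actually uses one flat buffer:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of Operands<T> after this patch: arguments, locals,
// and a new tmps segment, grown on demand by ensureTmps() just like
// BasicBlock::ensureTmps above. Storage layout is simplified.
template<typename T>
struct OperandsSketch {
    std::vector<T> arguments, locals, tmps;

    OperandsSketch(unsigned numArguments, unsigned numLocals, unsigned numTmps, T init = T())
        : arguments(numArguments, init), locals(numLocals, init), tmps(numTmps, init) { }

    void ensureTmps(unsigned newNumTmps, T init = T())
    {
        // Grow only; shrinking would drop live checkpoint state.
        if (newNumTmps > tmps.size())
            tmps.resize(newNumTmps, init);
    }

    unsigned numberOfTmps() const { return static_cast<unsigned>(tmps.size()); }
};
```

This also explains the constructor change in the `DFGBasicBlock.h` and `DFGBlockInsertionSet.cpp` hunks below: every site that builds a block now passes `numTmps` through alongside `numArguments` and `numLocals`.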
diff --git a/Source/JavaScriptCore/dfg/DFGBasicBlock.h b/Source/JavaScriptCore/dfg/DFGBasicBlock.h
index d457175..d2c5ae9 100644
--- a/Source/JavaScriptCore/dfg/DFGBasicBlock.h
+++ b/Source/JavaScriptCore/dfg/DFGBasicBlock.h
@@ -49,11 +49,12 @@
 struct BasicBlock : RefCounted<BasicBlock> {
     WTF_MAKE_STRUCT_FAST_ALLOCATED_WITH_HEAP_IDENTIFIER(BasicBlock);
     BasicBlock(
-        BytecodeIndex bytecodeBegin, unsigned numArguments, unsigned numLocals,
+        BytecodeIndex bytecodeBegin, unsigned numArguments, unsigned numLocals, unsigned numTmps,
         float executionCount);
     ~BasicBlock();
     
     void ensureLocals(unsigned newNumLocals);
+    void ensureTmps(unsigned newNumTmps);
     
     size_t size() const { return m_nodes.size(); }
     bool isEmpty() const { return !size(); }
diff --git a/Source/JavaScriptCore/dfg/DFGBlockInsertionSet.cpp b/Source/JavaScriptCore/dfg/DFGBlockInsertionSet.cpp
index 7508e21..008cf52 100644
--- a/Source/JavaScriptCore/dfg/DFGBlockInsertionSet.cpp
+++ b/Source/JavaScriptCore/dfg/DFGBlockInsertionSet.cpp
@@ -51,7 +51,7 @@
 
 BasicBlock* BlockInsertionSet::insert(size_t index, float executionCount)
 {
-    Ref<BasicBlock> block = adoptRef(*new BasicBlock(BytecodeIndex(), m_graph.block(0)->variablesAtHead.numberOfArguments(), m_graph.block(0)->variablesAtHead.numberOfLocals(), executionCount));
+    Ref<BasicBlock> block = adoptRef(*new BasicBlock(BytecodeIndex(), m_graph.block(0)->variablesAtHead.numberOfArguments(), m_graph.block(0)->variablesAtHead.numberOfLocals(), m_graph.block(0)->variablesAtHead.numberOfTmps(), executionCount));
     block->isReachable = true;
     auto* result = block.ptr();
     insert(index, WTFMove(block));
diff --git a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
index 9ffc555..bc01cfb 100644
--- a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
+++ b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
@@ -113,6 +113,7 @@
         , m_constantOne(graph.freeze(jsNumber(1)))
         , m_numArguments(m_codeBlock->numParameters())
         , m_numLocals(m_codeBlock->numCalleeLocals())
+        , m_numTmps(m_codeBlock->numTmps())
         , m_parameterSlots(0)
         , m_numPassedVarArgs(0)
         , m_inlineStackTop(0)
@@ -141,6 +142,17 @@
             m_graph.block(i)->ensureLocals(newNumLocals);
     }
 
+    void ensureTmps(unsigned newNumTmps)
+    {
+        VERBOSE_LOG("   ensureTmps: trying to raise m_numTmps from ", m_numTmps, " to ", newNumTmps, "\n");
+        if (newNumTmps <= m_numTmps)
+            return;
+        m_numTmps = newNumTmps;
+        for (size_t i = 0; i < m_graph.numBlocks(); ++i)
+            m_graph.block(i)->ensureTmps(newNumTmps);
+    }
+
+
     // Helper for min and max.
     template<typename ChecksFunctor>
     bool handleMinMax(VirtualRegister result, NodeType op, int registerOffset, int argumentCountIncludingThis, const ChecksFunctor& insertChecks);
@@ -272,7 +284,17 @@
     void linkBlock(BasicBlock*, Vector<BasicBlock*>& possibleTargets);
     void linkBlocks(Vector<BasicBlock*>& unlinkedBlocks, Vector<BasicBlock*>& possibleTargets);
     
-    VariableAccessData* newVariableAccessData(VirtualRegister operand)
+    void progressToNextCheckpoint()
+    {
+        m_currentIndex = BytecodeIndex(m_currentIndex.offset(), m_currentIndex.checkpoint() + 1);
+        // At this point, it's again OK to OSR exit.
+        m_exitOK = true;
+        addToGraph(ExitOK);
+
+        processSetLocalQueue();
+    }
+
+    VariableAccessData* newVariableAccessData(Operand operand)
     {
         ASSERT(!operand.isConstant());
         
@@ -281,16 +303,14 @@
     }
     
     // Get/Set the operands/result of a bytecode instruction.
-    Node* getDirect(VirtualRegister operand)
+    Node* getDirect(Operand operand)
     {
         ASSERT(!operand.isConstant());
 
-        // Is this an argument?
         if (operand.isArgument())
-            return getArgument(operand);
+            return getArgument(operand.virtualRegister());
 
-        // Must be a local.
-        return getLocal(operand);
+        return getLocalOrTmp(operand);
     }
 
     Node* get(VirtualRegister operand)
@@ -300,8 +320,8 @@
             unsigned oldSize = m_constants.size();
             if (constantIndex >= oldSize || !m_constants[constantIndex]) {
                 const CodeBlock& codeBlock = *m_inlineStackTop->m_codeBlock;
-                JSValue value = codeBlock.getConstant(operand.offset());
-                SourceCodeRepresentation sourceCodeRepresentation = codeBlock.constantSourceCodeRepresentation(operand.offset());
+                JSValue value = codeBlock.getConstant(operand);
+                SourceCodeRepresentation sourceCodeRepresentation = codeBlock.constantSourceCodeRepresentation(operand);
                 if (constantIndex >= oldSize) {
                     m_constants.grow(constantIndex + 1);
                     for (unsigned i = oldSize; i < m_constants.size(); ++i)
@@ -360,9 +380,10 @@
         // initializing locals at the top of a function.
         ImmediateNakedSet
     };
-    Node* setDirect(VirtualRegister operand, Node* value, SetMode setMode = NormalSet)
+
+    Node* setDirect(Operand operand, Node* value, SetMode setMode = NormalSet)
     {
-        addToGraph(MovHint, OpInfo(operand.offset()), value);
+        addToGraph(MovHint, OpInfo(operand), value);
 
         // We can't exit anymore because our OSR exit state has changed.
         m_exitOK = false;
@@ -376,7 +397,7 @@
         
         return delayed.execute(this);
     }
-    
+
     void processSetLocalQueue()
     {
         for (unsigned i = 0; i < m_setLocalQueue.size(); ++i)
@@ -394,18 +415,17 @@
         ASSERT(node->op() == GetLocal);
         ASSERT(node->origin.semantic.bytecodeIndex() == m_currentIndex);
         ConcurrentJSLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
-        LazyOperandValueProfileKey key(m_currentIndex, node->local());
+        LazyOperandValueProfileKey key(m_currentIndex, node->operand());
         SpeculatedType prediction = m_inlineStackTop->m_lazyOperands.prediction(locker, key);
         node->variableAccessData()->predict(prediction);
         return node;
     }
 
     // Used in implementing get/set, above, where the operand is a local variable.
-    Node* getLocal(VirtualRegister operand)
+    Node* getLocalOrTmp(Operand operand)
     {
-        unsigned local = operand.toLocal();
-
-        Node* node = m_currentBlock->variablesAtTail.local(local);
+        ASSERT(operand.isTmp() || operand.isLocal());
+        Node*& node = m_currentBlock->variablesAtTail.operand(operand);
         
         // This has two goals: 1) link together variable access datas, and 2)
         // try to avoid creating redundant GetLocals. (1) is required for
@@ -430,20 +450,26 @@
             variable = newVariableAccessData(operand);
         
         node = injectLazyOperandSpeculation(addToGraph(GetLocal, OpInfo(variable)));
-        m_currentBlock->variablesAtTail.local(local) = node;
         return node;
     }
-    Node* setLocal(const CodeOrigin& semanticOrigin, VirtualRegister operand, Node* value, SetMode setMode = NormalSet)
+    Node* setLocalOrTmp(const CodeOrigin& semanticOrigin, Operand operand, Node* value, SetMode setMode = NormalSet)
     {
+        ASSERT(operand.isTmp() || operand.isLocal());
         SetForScope<CodeOrigin> originChange(m_currentSemanticOrigin, semanticOrigin);
 
-        unsigned local = operand.toLocal();
-        
-        if (setMode != ImmediateNakedSet) {
-            ArgumentPosition* argumentPosition = findArgumentPositionForLocal(operand);
+        if (operand.isTmp() && static_cast<unsigned>(operand.value()) >= m_numTmps) {
+            if (inlineCallFrame())
+                dataLogLn(*inlineCallFrame());
+            dataLogLn("Bad operand: ", operand, " but current number of tmps is: ", m_numTmps, " code block has: ", m_profiledBlock->numTmps(), " tmps.");
+            CRASH();
+        }
+
+        if (setMode != ImmediateNakedSet && !operand.isTmp()) {
+            VirtualRegister reg = operand.virtualRegister();
+            ArgumentPosition* argumentPosition = findArgumentPositionForLocal(reg);
             if (argumentPosition)
                 flushDirect(operand, argumentPosition);
-            else if (m_graph.needsScopeRegister() && operand == m_codeBlock->scopeRegister())
+            else if (m_graph.needsScopeRegister() && reg == m_codeBlock->scopeRegister())
                 flush(operand);
         }
 
@@ -453,7 +479,7 @@
         variableAccessData->mergeCheckArrayHoistingFailed(
             m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex(), BadIndexingType));
         Node* node = addToGraph(SetLocal, OpInfo(variableAccessData), value);
-        m_currentBlock->variablesAtTail.local(local) = node;
+        m_currentBlock->variablesAtTail.operand(operand) = node;
         return node;
     }
 
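The `getLocalOrTmp`/`setLocalOrTmp` pair above keys `variablesAtTail` by an `Operand` that may name either a stack slot or a checkpoint tmp. As a rough standalone illustration of that tagged-operand idea (hypothetical types and layout, not JSC's real `Operand` class):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch only: an operand is either a register-backed value
// (argument/local) or a "tmp" used purely for forwarding data to an OSR
// exit checkpoint. JSC's real Operand packs this differently.
enum class OperandKind : uint8_t { Argument, Local, Tmp };

struct Operand {
    OperandKind kind;
    int value; // register offset for Argument/Local, index for Tmp

    static Operand tmp(int index) { return { OperandKind::Tmp, index }; }
    static Operand local(int index) { return { OperandKind::Local, index }; }

    bool isTmp() const { return kind == OperandKind::Tmp; }
    bool isLocal() const { return kind == OperandKind::Local; }
};
```

This is why the parser asserts `operand.isTmp() || operand.isLocal()` before touching `variablesAtTail`: arguments take the separate `setArgument` path.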
@@ -485,20 +511,21 @@
         m_currentBlock->variablesAtTail.argument(argument) = node;
         return node;
     }
-    Node* setArgument(const CodeOrigin& semanticOrigin, VirtualRegister operand, Node* value, SetMode setMode = NormalSet)
+    Node* setArgument(const CodeOrigin& semanticOrigin, Operand operand, Node* value, SetMode setMode = NormalSet)
     {
         SetForScope<CodeOrigin> originChange(m_currentSemanticOrigin, semanticOrigin);
 
-        unsigned argument = operand.toArgument();
+        VirtualRegister reg = operand.virtualRegister();
+        unsigned argument = reg.toArgument();
         ASSERT(argument < m_numArguments);
         
-        VariableAccessData* variableAccessData = newVariableAccessData(operand);
+        VariableAccessData* variableAccessData = newVariableAccessData(reg);
 
         // Always flush arguments, except for 'this'. If 'this' is created by us,
         // then make sure that it's never unboxed.
         if (argument || m_graph.needsFlushedThis()) {
             if (setMode != ImmediateNakedSet)
-                flushDirect(operand);
+                flushDirect(reg);
         }
         
         if (!argument && m_codeBlock->specializationKind() == CodeForConstruct)
@@ -534,14 +561,16 @@
             int argument = VirtualRegister(operand.offset() - inlineCallFrame->stackOffset).toArgument();
             return stack->m_argumentPositions[argument];
         }
-        return 0;
+        return nullptr;
     }
     
-    ArgumentPosition* findArgumentPosition(VirtualRegister operand)
+    ArgumentPosition* findArgumentPosition(Operand operand)
     {
+        if (operand.isTmp())
+            return nullptr;
         if (operand.isArgument())
             return findArgumentPositionForArgument(operand.toArgument());
-        return findArgumentPositionForLocal(operand);
+        return findArgumentPositionForLocal(operand.virtualRegister());
     }
 
     template<typename AddFlushDirectFunc>
@@ -552,14 +581,14 @@
             ASSERT(!m_graph.hasDebuggerEnabled());
             numArguments = inlineCallFrame->argumentsWithFixup.size();
             if (inlineCallFrame->isClosureCall)
-                addFlushDirect(inlineCallFrame, remapOperand(inlineCallFrame, VirtualRegister(CallFrameSlot::callee)));
+                addFlushDirect(inlineCallFrame, remapOperand(inlineCallFrame, CallFrameSlot::callee));
             if (inlineCallFrame->isVarargs())
-                addFlushDirect(inlineCallFrame, remapOperand(inlineCallFrame, VirtualRegister(CallFrameSlot::argumentCountIncludingThis)));
+                addFlushDirect(inlineCallFrame, remapOperand(inlineCallFrame, CallFrameSlot::argumentCountIncludingThis));
         } else
             numArguments = m_graph.baselineCodeBlockFor(inlineCallFrame)->numParameters();
 
         for (unsigned argument = numArguments; argument--;)
-            addFlushDirect(inlineCallFrame, remapOperand(inlineCallFrame, virtualRegisterForArgument(argument)));
+            addFlushDirect(inlineCallFrame, remapOperand(inlineCallFrame, virtualRegisterForArgumentIncludingThis(argument)));
 
         if (m_graph.needsScopeRegister())
             addFlushDirect(nullptr, m_graph.m_codeBlock->scopeRegister());
@@ -577,6 +606,7 @@
 
                 CodeBlock* codeBlock = m_graph.baselineCodeBlockFor(inlineCallFrame);
                 FullBytecodeLiveness& fullLiveness = m_graph.livenessFor(codeBlock);
+                // Note: We don't need to handle tmps here because tmps are not required to be flushed to the stack.
                 const auto& livenessAtBytecode = fullLiveness.getLiveness(bytecodeIndex, m_graph.appropriateLivenessCalculationPoint(origin, isCallerOrigin));
                 for (unsigned local = codeBlock->numCalleeLocals(); local--;) {
                     if (livenessAtBytecode[local])
@@ -586,27 +616,27 @@
             });
     }
 
-    void flush(VirtualRegister operand)
+    void flush(Operand operand)
     {
         flushDirect(m_inlineStackTop->remapOperand(operand));
     }
     
-    void flushDirect(VirtualRegister operand)
+    void flushDirect(Operand operand)
     {
         flushDirect(operand, findArgumentPosition(operand));
     }
 
-    void flushDirect(VirtualRegister operand, ArgumentPosition* argumentPosition)
+    void flushDirect(Operand operand, ArgumentPosition* argumentPosition)
     {
         addFlushOrPhantomLocal<Flush>(operand, argumentPosition);
     }
 
     template<NodeType nodeType>
-    void addFlushOrPhantomLocal(VirtualRegister operand, ArgumentPosition* argumentPosition)
+    void addFlushOrPhantomLocal(Operand operand, ArgumentPosition* argumentPosition)
     {
         ASSERT(!operand.isConstant());
         
-        Node* node = m_currentBlock->variablesAtTail.operand(operand);
+        Node*& node = m_currentBlock->variablesAtTail.operand(operand);
         
         VariableAccessData* variable;
         
@@ -616,26 +646,25 @@
             variable = newVariableAccessData(operand);
         
         node = addToGraph(nodeType, OpInfo(variable));
-        m_currentBlock->variablesAtTail.operand(operand) = node;
         if (argumentPosition)
             argumentPosition->addVariable(variable);
     }
 
-    void phantomLocalDirect(VirtualRegister operand)
+    void phantomLocalDirect(Operand operand)
     {
         addFlushOrPhantomLocal<PhantomLocal>(operand, findArgumentPosition(operand));
     }
 
     void flush(InlineStackEntry* inlineStackEntry)
     {
-        auto addFlushDirect = [&] (InlineCallFrame*, VirtualRegister reg) { flushDirect(reg); };
+        auto addFlushDirect = [&] (InlineCallFrame*, Operand operand) { flushDirect(operand); };
         flushImpl(inlineStackEntry->m_inlineCallFrame, addFlushDirect);
     }
 
     void flushForTerminal()
     {
-        auto addFlushDirect = [&] (InlineCallFrame*, VirtualRegister reg) { flushDirect(reg); };
-        auto addPhantomLocalDirect = [&] (InlineCallFrame*, VirtualRegister reg) { phantomLocalDirect(reg); };
+        auto addFlushDirect = [&] (InlineCallFrame*, Operand operand) { flushDirect(operand); };
+        auto addPhantomLocalDirect = [&] (InlineCallFrame*, Operand operand) { phantomLocalDirect(operand); };
         flushForTerminalImpl(currentCodeOrigin(), addFlushDirect, addPhantomLocalDirect);
     }
 
@@ -763,6 +792,11 @@
             Edge(child1), Edge(child2), Edge(child3));
         return addToGraph(result);
     }
+    Node* addToGraph(NodeType op, Operand operand, Node* child1)
+    {
+        ASSERT(op == MovHint);
+        return addToGraph(op, OpInfo(operand.kind()), OpInfo(operand.value()), child1);
+    }
     Node* addToGraph(NodeType op, OpInfo info1, OpInfo info2, Edge child1, Edge child2 = Edge(), Edge child3 = Edge())
     {
         Node* result = m_graph.addNode(
@@ -805,7 +839,7 @@
             m_parameterSlots = parameterSlots;
 
         for (int i = 0; i < argCount; ++i)
-            addVarArgChild(get(virtualRegisterForArgument(i, registerOffset)));
+            addVarArgChild(get(virtualRegisterForArgumentIncludingThis(i, registerOffset)));
 
         return addToGraph(Node::VarArg, op, opInfo, prediction);
     }
@@ -1093,8 +1127,10 @@
 
     // The number of arguments passed to the function.
     unsigned m_numArguments;
-    // The number of locals (vars + temporaries) used in the function.
+    // The number of locals (vars + temporaries) used by the bytecode for the function.
     unsigned m_numLocals;
+    // The max number of temps used for forwarding data to an OSR exit checkpoint.
+    unsigned m_numTmps;
     // The number of slots (in units of sizeof(Register)) that we need to
     // preallocate for arguments to outgoing calls from this frame. This
     // number includes the CallFrame slots that we initialize for the callee
@@ -1161,14 +1197,17 @@
         
         ~InlineStackEntry();
         
-        VirtualRegister remapOperand(VirtualRegister operand) const
+        Operand remapOperand(Operand operand) const
         {
             if (!m_inlineCallFrame)
                 return operand;
-            
-            ASSERT(!operand.isConstant());
 
-            return VirtualRegister(operand.offset() + m_inlineCallFrame->stackOffset);
+            if (operand.isTmp())
+                return Operand::tmp(operand.value() + m_inlineCallFrame->tmpOffset);
+            
+            ASSERT(!operand.virtualRegister().isConstant());
+
+            return operand.virtualRegister() + m_inlineCallFrame->stackOffset;
         }
     };
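`remapOperand` above applies two independent offsets when entering an inline call frame: locals shift by the frame's `stackOffset`, while tmps shift by a separately allocated `tmpOffset`. A minimal sketch of that rule (illustrative names, not JSC's actual types):

```cpp
#include <cassert>

// Hypothetical sketch of the remapping rule: inlined locals live at the
// caller's offset plus the inline frame's stackOffset; checkpoint tmps
// are numbered in their own space and only shift by tmpOffset.
struct InlineFrame {
    int stackOffset;
    int tmpOffset;
};

int remapLocal(int localOffset, const InlineFrame* frame)
{
    return frame ? localOffset + frame->stackOffset : localOffset;
}

int remapTmp(int tmpIndex, const InlineFrame* frame)
{
    return frame ? tmpIndex + frame->tmpOffset : tmpIndex;
}
```

Keeping the two spaces separate is what lets `ensureTmps` later in this patch size the tmp space per inline depth without disturbing stack layout.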
     
@@ -1177,13 +1216,8 @@
     ICStatusContextStack m_icContextStack;
     
     struct DelayedSetLocal {
-        CodeOrigin m_origin;
-        VirtualRegister m_operand;
-        Node* m_value;
-        SetMode m_setMode;
-        
         DelayedSetLocal() { }
-        DelayedSetLocal(const CodeOrigin& origin, VirtualRegister operand, Node* value, SetMode setMode)
+        DelayedSetLocal(const CodeOrigin& origin, Operand operand, Node* value, SetMode setMode)
             : m_origin(origin)
             , m_operand(operand)
             , m_value(value)
@@ -1196,8 +1230,13 @@
         {
             if (m_operand.isArgument())
                 return parser->setArgument(m_origin, m_operand, m_value, m_setMode);
-            return parser->setLocal(m_origin, m_operand, m_value, m_setMode);
+            return parser->setLocalOrTmp(m_origin, m_operand, m_value, m_setMode);
         }
+
+        CodeOrigin m_origin;
+        Operand m_operand;
+        Node* m_value { nullptr };
+        SetMode m_setMode;
     };
     
     Vector<DelayedSetLocal, 2> m_setLocalQueue;
@@ -1210,7 +1249,7 @@
 BasicBlock* ByteCodeParser::allocateTargetableBlock(BytecodeIndex bytecodeIndex)
 {
     ASSERT(bytecodeIndex);
-    Ref<BasicBlock> block = adoptRef(*new BasicBlock(bytecodeIndex, m_numArguments, m_numLocals, 1));
+    Ref<BasicBlock> block = adoptRef(*new BasicBlock(bytecodeIndex, m_numArguments, m_numLocals, m_numTmps, 1));
     BasicBlock* blockPtr = block.ptr();
     // m_blockLinkingTargets must always be sorted in increasing order of bytecodeBegin
     if (m_inlineStackTop->m_blockLinkingTargets.size())
@@ -1222,7 +1261,7 @@
 
 BasicBlock* ByteCodeParser::allocateUntargetableBlock()
 {
-    Ref<BasicBlock> block = adoptRef(*new BasicBlock(BytecodeIndex(), m_numArguments, m_numLocals, 1));
+    Ref<BasicBlock> block = adoptRef(*new BasicBlock(BytecodeIndex(), m_numArguments, m_numLocals, m_numTmps, 1));
     BasicBlock* blockPtr = block.ptr();
     m_graph.appendBlock(WTFMove(block));
     return blockPtr;
@@ -1292,7 +1331,7 @@
     if (callLinkStatus.canOptimize()) {
         addToGraph(FilterCallLinkStatus, OpInfo(m_graph.m_plan.recordedStatuses().addCallLinkStatus(currentCodeOrigin(), callLinkStatus)), callTarget);
 
-        VirtualRegister thisArgument = virtualRegisterForArgument(0, registerOffset);
+        VirtualRegister thisArgument = virtualRegisterForArgumentIncludingThis(0, registerOffset);
         auto optimizationResult = handleInlining(callTarget, result, callLinkStatus, registerOffset, thisArgument,
             argumentCountIncludingThis, BytecodeIndex(m_currentIndex.offset() + instructionSize), op, kind, prediction);
         if (optimizationResult == CallOptimizationResult::OptimizedToJump)
@@ -1399,7 +1438,7 @@
 void ByteCodeParser::emitArgumentPhantoms(int registerOffset, int argumentCountIncludingThis)
 {
     for (int i = 0; i < argumentCountIncludingThis; ++i)
-        addToGraph(Phantom, get(virtualRegisterForArgument(i, registerOffset)));
+        addToGraph(Phantom, get(virtualRegisterForArgumentIncludingThis(i, registerOffset)));
 }
 
 template<typename ChecksFunctor>
@@ -1425,7 +1464,7 @@
             if (argumentCountIncludingThis != static_cast<int>(callFrame->argumentCountIncludingThis))
                 continue;
             // If the target InlineCallFrame is Varargs, we do not know how many arguments are actually filled by LoadVarargs. Varargs InlineCallFrame's
             // argumentCountIncludingThis is maximum number of potentially filled arguments by LoadVarargs. We "continue" to the upper frame which may be
             // a good target to jump into.
             if (callFrame->isVarargs())
                 continue;
@@ -1451,7 +1490,7 @@
         // We must set the callee to the right value
         if (stackEntry->m_inlineCallFrame) {
             if (stackEntry->m_inlineCallFrame->isClosureCall)
-                setDirect(stackEntry->remapOperand(VirtualRegister(CallFrameSlot::callee)), callTargetNode, NormalSet);
+                setDirect(remapOperand(stackEntry->m_inlineCallFrame, CallFrameSlot::callee), callTargetNode, NormalSet);
         } else
             addToGraph(SetCallee, callTargetNode);
 
@@ -1460,12 +1499,12 @@
             addToGraph(SetArgumentCountIncludingThis, OpInfo(argumentCountIncludingThis));
         int argIndex = 0;
         for (; argIndex < argumentCountIncludingThis; ++argIndex) {
-            Node* value = get(virtualRegisterForArgument(argIndex, registerOffset));
-            setDirect(stackEntry->remapOperand(virtualRegisterForArgument(argIndex)), value, NormalSet);
+            Node* value = get(virtualRegisterForArgumentIncludingThis(argIndex, registerOffset));
+            setDirect(stackEntry->remapOperand(virtualRegisterForArgumentIncludingThis(argIndex)), value, NormalSet);
         }
         Node* undefined = addToGraph(JSConstant, OpInfo(m_constantUndefined));
         for (; argIndex < stackEntry->m_codeBlock->numParameters(); ++argIndex)
-            setDirect(stackEntry->remapOperand(virtualRegisterForArgument(argIndex)), undefined, NormalSet);
+            setDirect(stackEntry->remapOperand(virtualRegisterForArgumentIncludingThis(argIndex)), undefined, NormalSet);
 
         // We must repeat the work of op_enter here as we will jump right after it.
         // We jump right after it and not before it, because of some invariant saying that a CFG root cannot have predecessors in the IR.
@@ -1616,16 +1655,18 @@
     ASSERT(!(numberOfStackPaddingSlots % stackAlignmentRegisters()));
     int registerOffsetAfterFixup = registerOffset - numberOfStackPaddingSlots;
     
-    int inlineCallFrameStart = m_inlineStackTop->remapOperand(VirtualRegister(registerOffsetAfterFixup)).offset() + CallFrame::headerSizeInRegisters;
+    Operand inlineCallFrameStart = VirtualRegister(m_inlineStackTop->remapOperand(VirtualRegister(registerOffsetAfterFixup)).value() + CallFrame::headerSizeInRegisters);
     
     ensureLocals(
-        VirtualRegister(inlineCallFrameStart).toLocal() + 1 +
+        inlineCallFrameStart.toLocal() + 1 +
         CallFrame::headerSizeInRegisters + codeBlock->numCalleeLocals());
     
+    ensureTmps((m_inlineStackTop->m_inlineCallFrame ? m_inlineStackTop->m_inlineCallFrame->tmpOffset : 0) + m_inlineStackTop->m_codeBlock->numTmps() + codeBlock->numTmps());
+
     size_t argumentPositionStart = m_graph.m_argumentPositions.size();
 
     if (result.isValid())
-        result = m_inlineStackTop->remapOperand(result);
+        result = m_inlineStackTop->remapOperand(result).virtualRegister();
 
     VariableAccessData* calleeVariable = nullptr;
     if (callee.isClosureCall()) {
@@ -1638,7 +1679,7 @@
 
     InlineStackEntry* callerStackTop = m_inlineStackTop;
     InlineStackEntry inlineStackEntry(this, codeBlock, codeBlock, callee.function(), result,
-        (VirtualRegister)inlineCallFrameStart, argumentCountIncludingThis, kind, continuationBlock);
+        inlineCallFrameStart.virtualRegister(), argumentCountIncludingThis, kind, continuationBlock);
 
     // This is where the actual inlining really happens.
     BytecodeIndex oldIndex = m_currentIndex;
@@ -1673,9 +1714,9 @@
         // However, when we begin executing the callee, we need OSR exit to be aware of where it can recover the arguments to the setter, loc9 and loc10. The MovHints in the inlined
         // callee make it so that if we exit at <HERE>, we can recover loc9 and loc10.
         for (int index = 0; index < argumentCountIncludingThis; ++index) {
-            VirtualRegister argumentToGet = callerStackTop->remapOperand(virtualRegisterForArgument(index, registerOffset));
+            Operand argumentToGet = callerStackTop->remapOperand(virtualRegisterForArgumentIncludingThis(index, registerOffset));
             Node* value = getDirect(argumentToGet);
-            addToGraph(MovHint, OpInfo(argumentToGet.offset()), value);
+            addToGraph(MovHint, OpInfo(argumentToGet), value);
             m_setLocalQueue.append(DelayedSetLocal { currentCodeOrigin(), argumentToGet, value, ImmediateNakedSet });
         }
         break;
@@ -1719,16 +1760,16 @@
         // In such cases, we do not need to move frames.
         if (registerOffsetAfterFixup != registerOffset) {
             for (int index = 0; index < argumentCountIncludingThis; ++index) {
-                VirtualRegister argumentToGet = callerStackTop->remapOperand(virtualRegisterForArgument(index, registerOffset));
+                Operand argumentToGet = callerStackTop->remapOperand(virtualRegisterForArgumentIncludingThis(index, registerOffset));
                 Node* value = getDirect(argumentToGet);
-                VirtualRegister argumentToSet = m_inlineStackTop->remapOperand(virtualRegisterForArgument(index));
-                addToGraph(MovHint, OpInfo(argumentToSet.offset()), value);
+                Operand argumentToSet = m_inlineStackTop->remapOperand(virtualRegisterForArgumentIncludingThis(index));
+                addToGraph(MovHint, OpInfo(argumentToSet), value);
                 m_setLocalQueue.append(DelayedSetLocal { currentCodeOrigin(), argumentToSet, value, ImmediateNakedSet });
             }
         }
         for (int index = 0; index < arityFixupCount; ++index) {
-            VirtualRegister argumentToSet = m_inlineStackTop->remapOperand(virtualRegisterForArgument(argumentCountIncludingThis + index));
-            addToGraph(MovHint, OpInfo(argumentToSet.offset()), undefined);
+            Operand argumentToSet = m_inlineStackTop->remapOperand(virtualRegisterForArgumentIncludingThis(argumentCountIncludingThis + index));
+            addToGraph(MovHint, OpInfo(argumentToSet), undefined);
             m_setLocalQueue.append(DelayedSetLocal { currentCodeOrigin(), argumentToSet, undefined, ImmediateNakedSet });
         }
 
@@ -1895,12 +1936,12 @@
         emitFunctionChecks(callVariant, callTargetNode, thisArgument);
         
         int remappedRegisterOffset =
-        m_inlineStackTop->remapOperand(VirtualRegister(registerOffset)).offset();
+        m_inlineStackTop->remapOperand(VirtualRegister(registerOffset)).virtualRegister().offset();
         
         ensureLocals(VirtualRegister(remappedRegisterOffset).toLocal());
         
         int argumentStart = registerOffset + CallFrame::headerSizeInRegisters;
-        int remappedArgumentStart = m_inlineStackTop->remapOperand(VirtualRegister(argumentStart)).offset();
+        int remappedArgumentStart = m_inlineStackTop->remapOperand(VirtualRegister(argumentStart)).virtualRegister().offset();
         
         LoadVarargsData* data = m_graph.m_loadVarargsData.add();
         data->start = VirtualRegister(remappedArgumentStart + 1);
@@ -1908,11 +1949,24 @@
         data->offset = argumentsOffset;
         data->limit = maxArgumentCountIncludingThis;
         data->mandatoryMinimum = mandatoryMinimum;
-        
-        if (callOp == TailCallForwardVarargs)
-            addToGraph(ForwardVarargs, OpInfo(data));
-        else
-            addToGraph(LoadVarargs, OpInfo(data), get(argumentsArgument));
+
+        if (callOp == TailCallForwardVarargs) {
+            Node* argumentCount;
+            if (!inlineCallFrame())
+                argumentCount = addToGraph(GetArgumentCountIncludingThis);
+            else if (inlineCallFrame()->isVarargs())
+                argumentCount = getDirect(remapOperand(inlineCallFrame(), CallFrameSlot::argumentCountIncludingThis));
+            else
+                argumentCount = addToGraph(JSConstant, OpInfo(m_graph.freeze(jsNumber(inlineCallFrame()->argumentCountIncludingThis))));
+            addToGraph(ForwardVarargs, OpInfo(data), argumentCount);
+        } else {
+            Node* arguments = get(argumentsArgument);
+            auto argCountTmp = m_inlineStackTop->remapOperand(Operand::tmp(OpCallVarargs::argCountIncludingThis));
+            setDirect(argCountTmp, addToGraph(VarargsLength, OpInfo(data), arguments));
+            progressToNextCheckpoint();
+
+            addToGraph(LoadVarargs, OpInfo(data), getLocalOrTmp(argCountTmp), arguments);
+        }
         
         // LoadVarargs may OSR exit. Hence, we need to keep alive callTargetNode, thisArgument
         // and argumentsArgument for the baseline JIT. However, we only need a Phantom for
@@ -1924,14 +1978,14 @@
         // SSA. Fortunately, we also have other reasons for not inserting control flow
         // before SSA.
         
-        VariableAccessData* countVariable = newVariableAccessData(VirtualRegister(remappedRegisterOffset + CallFrameSlot::argumentCountIncludingThis));
+        VariableAccessData* countVariable = newVariableAccessData(data->count);
         // This is pretty lame, but it will force the count to be flushed as an int. This doesn't
         // matter very much, since our use of a SetArgumentDefinitely and Flushes for this local slot is
         // mostly just a formality.
         countVariable->predict(SpecInt32Only);
         countVariable->mergeIsProfitableToUnbox(true);
         Node* setArgumentCount = addToGraph(SetArgumentDefinitely, OpInfo(countVariable));
-        m_currentBlock->variablesAtTail.setOperand(countVariable->local(), setArgumentCount);
+        m_currentBlock->variablesAtTail.setOperand(countVariable->operand(), setArgumentCount);
         
         set(VirtualRegister(argumentStart), get(thisArgument), ImmediateNakedSet);
         unsigned numSetArguments = 0;
@@ -1955,7 +2009,7 @@
             }
             
             Node* setArgument = addToGraph(numSetArguments >= mandatoryMinimum ? SetArgumentMaybe : SetArgumentDefinitely, OpInfo(variable));
-            m_currentBlock->variablesAtTail.setOperand(variable->local(), setArgument);
+            m_currentBlock->variablesAtTail.setOperand(variable->operand(), setArgument);
             ++numSetArguments;
         }
     };
@@ -2055,7 +2109,7 @@
     // yet.
     VERBOSE_LOG("Register offset: ", registerOffset);
     VirtualRegister calleeReg(registerOffset + CallFrameSlot::callee);
-    calleeReg = m_inlineStackTop->remapOperand(calleeReg);
+    calleeReg = m_inlineStackTop->remapOperand(calleeReg).virtualRegister();
     VERBOSE_LOG("Callee is going to be ", calleeReg, "\n");
     setDirect(calleeReg, callTargetNode, ImmediateSetWithFlush);
 
@@ -2174,7 +2228,7 @@
      
     if (argumentCountIncludingThis == 2) {
         insertChecks();
-        Node* resultNode = get(VirtualRegister(virtualRegisterForArgument(1, registerOffset)));
+        Node* resultNode = get(VirtualRegister(virtualRegisterForArgumentIncludingThis(1, registerOffset)));
         addToGraph(Phantom, Edge(resultNode, NumberUse));
         set(result, resultNode);
         return true;
@@ -2182,7 +2236,7 @@
     
     if (argumentCountIncludingThis == 3) {
         insertChecks();
-        set(result, addToGraph(op, get(virtualRegisterForArgument(1, registerOffset)), get(virtualRegisterForArgument(2, registerOffset))));
+        set(result, addToGraph(op, get(virtualRegisterForArgumentIncludingThis(1, registerOffset)), get(virtualRegisterForArgumentIncludingThis(2, registerOffset))));
         return true;
     }
     
@@ -2230,7 +2284,7 @@
                 return false;
 
             insertChecks();
-            Node* node = addToGraph(ArithAbs, get(virtualRegisterForArgument(1, registerOffset)));
+            Node* node = addToGraph(ArithAbs, get(virtualRegisterForArgumentIncludingThis(1, registerOffset)));
             if (m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, Overflow))
                 node->mergeFlags(NodeMayOverflowInt32InDFG);
             setResult(node);
@@ -2267,7 +2321,7 @@
                 RELEASE_ASSERT_NOT_REACHED();
             }
             insertChecks();
-            setResult(addToGraph(ArithUnary, OpInfo(static_cast<std::underlying_type<Arith::UnaryType>::type>(type)), get(virtualRegisterForArgument(1, registerOffset))));
+            setResult(addToGraph(ArithUnary, OpInfo(static_cast<std::underlying_type<Arith::UnaryType>::type>(type)), get(virtualRegisterForArgumentIncludingThis(1, registerOffset))));
             return true;
         }
 
@@ -2291,7 +2345,7 @@
                 RELEASE_ASSERT_NOT_REACHED();
             }
             insertChecks();
-            setResult(addToGraph(nodeType, get(virtualRegisterForArgument(1, registerOffset))));
+            setResult(addToGraph(nodeType, get(virtualRegisterForArgumentIncludingThis(1, registerOffset))));
             return true;
         }
 
@@ -2303,8 +2357,8 @@
                 return true;
             }
             insertChecks();
-            VirtualRegister xOperand = virtualRegisterForArgument(1, registerOffset);
-            VirtualRegister yOperand = virtualRegisterForArgument(2, registerOffset);
+            VirtualRegister xOperand = virtualRegisterForArgumentIncludingThis(1, registerOffset);
+            VirtualRegister yOperand = virtualRegisterForArgumentIncludingThis(2, registerOffset);
             setResult(addToGraph(ArithPow, get(xOperand), get(yOperand)));
             return true;
         }
@@ -2320,8 +2374,8 @@
             if (!mode.isSomeTypedArrayView())
                 return false;
 
-            addToGraph(CheckArray, OpInfo(mode.asWord()), get(virtualRegisterForArgument(0, registerOffset)));
-            addToGraph(CheckNeutered, get(virtualRegisterForArgument(0, registerOffset)));
+            addToGraph(CheckArray, OpInfo(mode.asWord()), get(virtualRegisterForArgumentIncludingThis(0, registerOffset)));
+            addToGraph(CheckNeutered, get(virtualRegisterForArgumentIncludingThis(0, registerOffset)));
             FALLTHROUGH;
         }
 
@@ -2354,7 +2408,7 @@
 
             // We don't have an existing error string.
             unsigned errorStringIndex = UINT32_MAX;
-            Node* object = addToGraph(ToObject, OpInfo(errorStringIndex), OpInfo(SpecNone), get(virtualRegisterForArgument(0, registerOffset)));
+            Node* object = addToGraph(ToObject, OpInfo(errorStringIndex), OpInfo(SpecNone), get(virtualRegisterForArgumentIncludingThis(0, registerOffset)));
 
             JSGlobalObject* globalObject = m_graph.globalObjectFor(currentNodeOrigin().semantic);
             Node* iterator = addToGraph(NewArrayIterator, OpInfo(m_graph.registerStructure(globalObject->arrayIteratorStructure())));
@@ -2382,7 +2436,7 @@
 
                 addVarArgChild(nullptr); // For storage.
                 for (int i = 0; i < argumentCountIncludingThis; ++i)
-                    addVarArgChild(get(virtualRegisterForArgument(i, registerOffset)));
+                    addVarArgChild(get(virtualRegisterForArgumentIncludingThis(i, registerOffset)));
                 Node* arrayPush = addToGraph(Node::VarArg, ArrayPush, OpInfo(arrayMode.asWord()), OpInfo(prediction));
                 setResult(arrayPush);
                 return true;
@@ -2432,7 +2486,7 @@
 
                     insertChecks();
 
-                    Node* array = get(virtualRegisterForArgument(0, registerOffset));
+                    Node* array = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
                     // We do a few things here to prove that we aren't skipping doing side-effects in an observable way:
                     // 1. We ensure that the "constructor" property hasn't been changed (because the observable
                     // effects of slice require that we perform a Get(array, "constructor") and we can skip
@@ -2460,9 +2514,9 @@
 
                     addVarArgChild(array);
                     if (argumentCountIncludingThis >= 2)
-                        addVarArgChild(get(virtualRegisterForArgument(1, registerOffset))); // Start index.
+                        addVarArgChild(get(virtualRegisterForArgumentIncludingThis(1, registerOffset))); // Start index.
                     if (argumentCountIncludingThis >= 3)
-                        addVarArgChild(get(virtualRegisterForArgument(2, registerOffset))); // End index.
+                        addVarArgChild(get(virtualRegisterForArgumentIncludingThis(2, registerOffset))); // End index.
                     addVarArgChild(addToGraph(GetButterfly, array));
 
                     Node* arraySlice = addToGraph(Node::VarArg, ArraySlice, OpInfo(), OpInfo());
@@ -2521,11 +2575,11 @@
 
                     insertChecks();
 
-                    Node* array = get(virtualRegisterForArgument(0, registerOffset));
+                    Node* array = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
                     addVarArgChild(array);
-                    addVarArgChild(get(virtualRegisterForArgument(1, registerOffset))); // Search element.
+                    addVarArgChild(get(virtualRegisterForArgumentIncludingThis(1, registerOffset))); // Search element.
                     if (argumentCountIncludingThis >= 3)
-                        addVarArgChild(get(virtualRegisterForArgument(2, registerOffset))); // Start index.
+                        addVarArgChild(get(virtualRegisterForArgumentIncludingThis(2, registerOffset))); // Start index.
                     addVarArgChild(nullptr);
 
                     Node* node = addToGraph(Node::VarArg, ArrayIndexOf, OpInfo(arrayMode.asWord()), OpInfo());
@@ -2554,7 +2608,7 @@
             case Array::Contiguous:
             case Array::ArrayStorage: {
                 insertChecks();
-                Node* arrayPop = addToGraph(ArrayPop, OpInfo(arrayMode.asWord()), OpInfo(prediction), get(virtualRegisterForArgument(0, registerOffset)));
+                Node* arrayPop = addToGraph(ArrayPop, OpInfo(arrayMode.asWord()), OpInfo(prediction), get(virtualRegisterForArgumentIncludingThis(0, registerOffset)));
                 setResult(arrayPop);
                 return true;
             }
@@ -2636,7 +2690,7 @@
             
             Vector<Node*, 3> args;
             for (unsigned i = 0; i < numArgs; ++i)
-                args.append(get(virtualRegisterForArgument(1 + i, registerOffset)));
+                args.append(get(virtualRegisterForArgumentIncludingThis(1 + i, registerOffset)));
             
             Node* resultNode;
             if (numArgs + 1 <= 3) {
@@ -2662,13 +2716,13 @@
                 return false;
 
             insertChecks();
-            VirtualRegister valueOperand = virtualRegisterForArgument(1, registerOffset);
+            VirtualRegister valueOperand = virtualRegisterForArgumentIncludingThis(1, registerOffset);
             Node* parseInt;
             if (argumentCountIncludingThis == 2)
                 parseInt = addToGraph(ParseInt, OpInfo(), OpInfo(prediction), get(valueOperand));
             else {
                 ASSERT(argumentCountIncludingThis > 2);
-                VirtualRegister radixOperand = virtualRegisterForArgument(2, registerOffset);
+                VirtualRegister radixOperand = virtualRegisterForArgumentIncludingThis(2, registerOffset);
                 parseInt = addToGraph(ParseInt, OpInfo(), OpInfo(prediction), get(valueOperand), get(radixOperand));
             }
             setResult(parseInt);
@@ -2683,8 +2737,8 @@
                 return false;
 
             insertChecks();
-            VirtualRegister thisOperand = virtualRegisterForArgument(0, registerOffset);
-            VirtualRegister indexOperand = virtualRegisterForArgument(1, registerOffset);
+            VirtualRegister thisOperand = virtualRegisterForArgumentIncludingThis(0, registerOffset);
+            VirtualRegister indexOperand = virtualRegisterForArgumentIncludingThis(1, registerOffset);
             Node* charCode = addToGraph(StringCharCodeAt, OpInfo(ArrayMode(Array::String, Array::Read).asWord()), get(thisOperand), get(indexOperand));
 
             setResult(charCode);
@@ -2702,8 +2756,8 @@
                 return false;
 
             insertChecks();
-            VirtualRegister thisOperand = virtualRegisterForArgument(0, registerOffset);
-            VirtualRegister indexOperand = virtualRegisterForArgument(1, registerOffset);
+            VirtualRegister thisOperand = virtualRegisterForArgumentIncludingThis(0, registerOffset);
+            VirtualRegister indexOperand = virtualRegisterForArgumentIncludingThis(1, registerOffset);
             Node* result = addToGraph(StringCodePointAt, OpInfo(ArrayMode(Array::String, Array::Read).asWord()), get(thisOperand), get(indexOperand));
 
             setResult(result);
@@ -2722,8 +2776,8 @@
             // https://bugs.webkit.org/show_bug.cgi?id=201678
 
             insertChecks();
-            VirtualRegister thisOperand = virtualRegisterForArgument(0, registerOffset);
-            VirtualRegister indexOperand = virtualRegisterForArgument(1, registerOffset);
+            VirtualRegister thisOperand = virtualRegisterForArgumentIncludingThis(0, registerOffset);
+            VirtualRegister indexOperand = virtualRegisterForArgumentIncludingThis(1, registerOffset);
             Node* charCode = addToGraph(StringCharAt, OpInfo(ArrayMode(Array::String, Array::Read).asWord()), get(thisOperand), get(indexOperand));
 
             setResult(charCode);
@@ -2734,7 +2788,7 @@
             if (argumentCountIncludingThis == 1)
                 setResult(addToGraph(JSConstant, OpInfo(m_graph.freeze(jsNumber(32)))));
             else {
-                Node* operand = get(virtualRegisterForArgument(1, registerOffset));
+                Node* operand = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
                 setResult(addToGraph(ArithClz32, operand));
             }
             return true;
@@ -2744,7 +2798,7 @@
                 return false;
 
             insertChecks();
-            VirtualRegister indexOperand = virtualRegisterForArgument(1, registerOffset);
+            VirtualRegister indexOperand = virtualRegisterForArgumentIncludingThis(1, registerOffset);
             Node* charCode = addToGraph(StringFromCharCode, get(indexOperand));
 
             setResult(charCode);
@@ -2757,7 +2811,7 @@
                 return false;
             
             insertChecks();
-            Node* regExpExec = addToGraph(RegExpExec, OpInfo(0), OpInfo(prediction), addToGraph(GetGlobalObject, callee), get(virtualRegisterForArgument(0, registerOffset)), get(virtualRegisterForArgument(1, registerOffset)));
+            Node* regExpExec = addToGraph(RegExpExec, OpInfo(0), OpInfo(prediction), addToGraph(GetGlobalObject, callee), get(virtualRegisterForArgumentIncludingThis(0, registerOffset)), get(virtualRegisterForArgumentIncludingThis(1, registerOffset)));
             setResult(regExpExec);
             
             return true;
@@ -2795,7 +2849,7 @@
                     return false;
 
                 // Check that regExpObject is actually a RegExp object.
-                Node* regExpObject = get(virtualRegisterForArgument(0, registerOffset));
+                Node* regExpObject = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
                 addToGraph(Check, Edge(regExpObject, RegExpObjectUse));
 
                 // Check that regExpObject's exec is actually the primodial RegExp.prototype.exec.
@@ -2807,8 +2861,8 @@
             }
 
             insertChecks();
-            Node* regExpObject = get(virtualRegisterForArgument(0, registerOffset));
-            Node* regExpExec = addToGraph(RegExpTest, OpInfo(0), OpInfo(prediction), addToGraph(GetGlobalObject, callee), regExpObject, get(virtualRegisterForArgument(1, registerOffset)));
+            Node* regExpObject = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
+            Node* regExpExec = addToGraph(RegExpTest, OpInfo(0), OpInfo(prediction), addToGraph(GetGlobalObject, callee), regExpObject, get(virtualRegisterForArgumentIncludingThis(1, registerOffset)));
             setResult(regExpExec);
             
             return true;
@@ -2818,7 +2872,7 @@
             RELEASE_ASSERT(argumentCountIncludingThis == 2);
 
             insertChecks();
-            Node* regExpMatch = addToGraph(RegExpMatchFast, OpInfo(0), OpInfo(prediction), addToGraph(GetGlobalObject, callee), get(virtualRegisterForArgument(0, registerOffset)), get(virtualRegisterForArgument(1, registerOffset)));
+            Node* regExpMatch = addToGraph(RegExpMatchFast, OpInfo(0), OpInfo(prediction), addToGraph(GetGlobalObject, callee), get(virtualRegisterForArgumentIncludingThis(0, registerOffset)), get(virtualRegisterForArgumentIncludingThis(1, registerOffset)));
             setResult(regExpMatch);
             return true;
         }
@@ -2828,7 +2882,7 @@
                 return false;
 
             insertChecks();
-            setResult(addToGraph(ObjectCreate, get(virtualRegisterForArgument(1, registerOffset))));
+            setResult(addToGraph(ObjectCreate, get(virtualRegisterForArgumentIncludingThis(1, registerOffset))));
             return true;
         }
 
@@ -2837,7 +2891,7 @@
                 return false;
 
             insertChecks();
-            setResult(addToGraph(GetPrototypeOf, OpInfo(0), OpInfo(prediction), get(virtualRegisterForArgument(1, registerOffset))));
+            setResult(addToGraph(GetPrototypeOf, OpInfo(0), OpInfo(prediction), get(virtualRegisterForArgumentIncludingThis(1, registerOffset))));
             return true;
         }
 
@@ -2846,7 +2900,7 @@
                 return false;
 
             insertChecks();
-            setResult(addToGraph(SameValue, get(virtualRegisterForArgument(1, registerOffset)), get(virtualRegisterForArgument(2, registerOffset))));
+            setResult(addToGraph(SameValue, get(virtualRegisterForArgumentIncludingThis(1, registerOffset)), get(virtualRegisterForArgumentIncludingThis(2, registerOffset))));
             return true;
         }
 
@@ -2855,7 +2909,7 @@
                 return false;
 
             insertChecks();
-            setResult(addToGraph(ObjectKeys, get(virtualRegisterForArgument(1, registerOffset))));
+            setResult(addToGraph(ObjectKeys, get(virtualRegisterForArgumentIncludingThis(1, registerOffset))));
             return true;
         }
 
@@ -2864,7 +2918,7 @@
                 return false;
 
             insertChecks();
-            setResult(addToGraph(GetPrototypeOf, OpInfo(0), OpInfo(prediction), Edge(get(virtualRegisterForArgument(1, registerOffset)), ObjectUse)));
+            setResult(addToGraph(GetPrototypeOf, OpInfo(0), OpInfo(prediction), Edge(get(virtualRegisterForArgumentIncludingThis(1, registerOffset)), ObjectUse)));
             return true;
         }
 
@@ -2872,13 +2926,13 @@
             ASSERT(argumentCountIncludingThis == 2);
 
             insertChecks();
-            setResult(addToGraph(IsTypedArrayView, OpInfo(prediction), get(virtualRegisterForArgument(1, registerOffset))));
+            setResult(addToGraph(IsTypedArrayView, OpInfo(prediction), get(virtualRegisterForArgumentIncludingThis(1, registerOffset))));
             return true;
         }
 
         case StringPrototypeValueOfIntrinsic: {
             insertChecks();
-            Node* value = get(virtualRegisterForArgument(0, registerOffset));
+            Node* value = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
             setResult(addToGraph(StringValueOf, value));
             return true;
         }
@@ -2930,7 +2984,7 @@
 
             insertChecks();
 
-            Node* resultNode = addToGraph(StringReplace, OpInfo(0), OpInfo(prediction), get(virtualRegisterForArgument(0, registerOffset)), get(virtualRegisterForArgument(1, registerOffset)), get(virtualRegisterForArgument(2, registerOffset)));
+            Node* resultNode = addToGraph(StringReplace, OpInfo(0), OpInfo(prediction), get(virtualRegisterForArgumentIncludingThis(0, registerOffset)), get(virtualRegisterForArgumentIncludingThis(1, registerOffset)), get(virtualRegisterForArgumentIncludingThis(2, registerOffset)));
             setResult(resultNode);
             return true;
         }
@@ -2940,7 +2994,7 @@
                 return false;
             
             insertChecks();
-            Node* resultNode = addToGraph(StringReplaceRegExp, OpInfo(0), OpInfo(prediction), get(virtualRegisterForArgument(0, registerOffset)), get(virtualRegisterForArgument(1, registerOffset)), get(virtualRegisterForArgument(2, registerOffset)));
+            Node* resultNode = addToGraph(StringReplaceRegExp, OpInfo(0), OpInfo(prediction), get(virtualRegisterForArgumentIncludingThis(0, registerOffset)), get(virtualRegisterForArgumentIncludingThis(1, registerOffset)), get(virtualRegisterForArgumentIncludingThis(2, registerOffset)));
             setResult(resultNode);
             return true;
         }
@@ -2955,7 +3009,7 @@
                 return true;
             }
             insertChecks();
-            Node* operand = get(virtualRegisterForArgument(1, registerOffset));
+            Node* operand = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
             NodeType op;
             if (intrinsic == RoundIntrinsic)
                 op = ArithRound;
@@ -2975,8 +3029,8 @@
             if (argumentCountIncludingThis < 3)
                 return false;
             insertChecks();
-            VirtualRegister leftOperand = virtualRegisterForArgument(1, registerOffset);
-            VirtualRegister rightOperand = virtualRegisterForArgument(2, registerOffset);
+            VirtualRegister leftOperand = virtualRegisterForArgumentIncludingThis(1, registerOffset);
+            VirtualRegister rightOperand = virtualRegisterForArgumentIncludingThis(2, registerOffset);
             Node* left = get(leftOperand);
             Node* right = get(rightOperand);
             setResult(addToGraph(ArithIMul, left, right));
@@ -3017,7 +3071,7 @@
         case SetInt32HeapPredictionIntrinsic: {
             insertChecks();
             for (int i = 1; i < argumentCountIncludingThis; ++i) {
-                Node* node = get(virtualRegisterForArgument(i, registerOffset));
+                Node* node = get(virtualRegisterForArgumentIncludingThis(i, registerOffset));
                 if (node->hasHeapPrediction())
                     node->setHeapPrediction(SpecInt32Only);
             }
@@ -3028,7 +3082,7 @@
         case CheckInt32Intrinsic: {
             insertChecks();
             for (int i = 1; i < argumentCountIncludingThis; ++i) {
-                Node* node = get(virtualRegisterForArgument(i, registerOffset));
+                Node* node = get(virtualRegisterForArgumentIncludingThis(i, registerOffset));
                 addToGraph(Phantom, Edge(node, Int32Use));
             }
             setResult(jsConstant(jsBoolean(true)));
@@ -3039,7 +3093,7 @@
             if (argumentCountIncludingThis < 2)
                 return false;
             insertChecks();
-            VirtualRegister operand = virtualRegisterForArgument(1, registerOffset);
+            VirtualRegister operand = virtualRegisterForArgumentIncludingThis(1, registerOffset);
             if (enableInt52())
                 setResult(addToGraph(FiatInt52, get(operand)));
             else
@@ -3052,8 +3106,8 @@
                 return false;
 
             insertChecks();
-            Node* map = get(virtualRegisterForArgument(0, registerOffset));
-            Node* key = get(virtualRegisterForArgument(1, registerOffset));
+            Node* map = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
+            Node* key = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
             Node* normalizedKey = addToGraph(NormalizeMapKey, key);
             Node* hash = addToGraph(MapHash, normalizedKey);
             Node* bucket = addToGraph(GetMapBucket, Edge(map, MapObjectUse), Edge(normalizedKey), Edge(hash));
@@ -3068,8 +3122,8 @@
                 return false;
 
             insertChecks();
-            Node* mapOrSet = get(virtualRegisterForArgument(0, registerOffset));
-            Node* key = get(virtualRegisterForArgument(1, registerOffset));
+            Node* mapOrSet = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
+            Node* key = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
             Node* normalizedKey = addToGraph(NormalizeMapKey, key);
             Node* hash = addToGraph(MapHash, normalizedKey);
             UseKind useKind = intrinsic == JSSetHasIntrinsic ? SetObjectUse : MapObjectUse;
@@ -3092,8 +3146,8 @@
                 return false;
 
             insertChecks();
-            Node* base = get(virtualRegisterForArgument(0, registerOffset));
-            Node* key = get(virtualRegisterForArgument(1, registerOffset));
+            Node* base = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
+            Node* key = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
             Node* normalizedKey = addToGraph(NormalizeMapKey, key);
             Node* hash = addToGraph(MapHash, normalizedKey);
             addToGraph(SetAdd, base, normalizedKey, hash);
@@ -3106,9 +3160,9 @@
                 return false;
 
             insertChecks();
-            Node* base = get(virtualRegisterForArgument(0, registerOffset));
-            Node* key = get(virtualRegisterForArgument(1, registerOffset));
-            Node* value = get(virtualRegisterForArgument(2, registerOffset));
+            Node* base = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
+            Node* key = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
+            Node* value = get(virtualRegisterForArgumentIncludingThis(2, registerOffset));
 
             Node* normalizedKey = addToGraph(NormalizeMapKey, key);
             Node* hash = addToGraph(MapHash, normalizedKey);
@@ -3127,7 +3181,7 @@
             ASSERT(argumentCountIncludingThis == 2);
 
             insertChecks();
-            Node* map = get(virtualRegisterForArgument(1, registerOffset));
+            Node* map = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
             UseKind useKind = intrinsic == JSSetBucketHeadIntrinsic ? SetObjectUse : MapObjectUse;
             Node* resultNode = addToGraph(GetMapBucketHead, Edge(map, useKind));
             setResult(resultNode);
@@ -3139,7 +3193,7 @@
             ASSERT(argumentCountIncludingThis == 2);
 
             insertChecks();
-            Node* bucket = get(virtualRegisterForArgument(1, registerOffset));
+            Node* bucket = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
             BucketOwnerType type = intrinsic == JSSetBucketNextIntrinsic ? BucketOwnerType::Set : BucketOwnerType::Map;
             Node* resultNode = addToGraph(GetMapBucketNext, OpInfo(type), bucket);
             setResult(resultNode);
@@ -3151,7 +3205,7 @@
             ASSERT(argumentCountIncludingThis == 2);
 
             insertChecks();
-            Node* bucket = get(virtualRegisterForArgument(1, registerOffset));
+            Node* bucket = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
             BucketOwnerType type = intrinsic == JSSetBucketKeyIntrinsic ? BucketOwnerType::Set : BucketOwnerType::Map;
             Node* resultNode = addToGraph(LoadKeyFromMapBucket, OpInfo(type), OpInfo(prediction), bucket);
             setResult(resultNode);
@@ -3162,7 +3216,7 @@
             ASSERT(argumentCountIncludingThis == 2);
 
             insertChecks();
-            Node* bucket = get(virtualRegisterForArgument(1, registerOffset));
+            Node* bucket = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
             Node* resultNode = addToGraph(LoadValueFromMapBucket, OpInfo(BucketOwnerType::Map), OpInfo(prediction), bucket);
             setResult(resultNode);
             return true;
@@ -3176,8 +3230,8 @@
                 return false;
 
             insertChecks();
-            Node* map = get(virtualRegisterForArgument(0, registerOffset));
-            Node* key = get(virtualRegisterForArgument(1, registerOffset));
+            Node* map = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
+            Node* key = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
             addToGraph(Check, Edge(key, ObjectUse));
             Node* hash = addToGraph(MapHash, key);
             Node* holder = addToGraph(WeakMapGet, Edge(map, WeakMapObjectUse), Edge(key, ObjectUse), Edge(hash, Int32Use));
@@ -3195,8 +3249,8 @@
                 return false;
 
             insertChecks();
-            Node* map = get(virtualRegisterForArgument(0, registerOffset));
-            Node* key = get(virtualRegisterForArgument(1, registerOffset));
+            Node* map = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
+            Node* key = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
             addToGraph(Check, Edge(key, ObjectUse));
             Node* hash = addToGraph(MapHash, key);
             Node* holder = addToGraph(WeakMapGet, Edge(map, WeakMapObjectUse), Edge(key, ObjectUse), Edge(hash, Int32Use));
@@ -3215,8 +3269,8 @@
                 return false;
 
             insertChecks();
-            Node* map = get(virtualRegisterForArgument(0, registerOffset));
-            Node* key = get(virtualRegisterForArgument(1, registerOffset));
+            Node* map = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
+            Node* key = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
             addToGraph(Check, Edge(key, ObjectUse));
             Node* hash = addToGraph(MapHash, key);
             Node* holder = addToGraph(WeakMapGet, Edge(map, WeakSetObjectUse), Edge(key, ObjectUse), Edge(hash, Int32Use));
@@ -3235,8 +3289,8 @@
                 return false;
 
             insertChecks();
-            Node* base = get(virtualRegisterForArgument(0, registerOffset));
-            Node* key = get(virtualRegisterForArgument(1, registerOffset));
+            Node* base = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
+            Node* key = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
             addToGraph(Check, Edge(key, ObjectUse));
             Node* hash = addToGraph(MapHash, key);
             addToGraph(WeakSetAdd, Edge(base, WeakSetObjectUse), Edge(key, ObjectUse), Edge(hash, Int32Use));
@@ -3252,9 +3306,9 @@
                 return false;
 
             insertChecks();
-            Node* base = get(virtualRegisterForArgument(0, registerOffset));
-            Node* key = get(virtualRegisterForArgument(1, registerOffset));
-            Node* value = get(virtualRegisterForArgument(2, registerOffset));
+            Node* base = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
+            Node* key = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
+            Node* value = get(virtualRegisterForArgumentIncludingThis(2, registerOffset));
 
             addToGraph(Check, Edge(key, ObjectUse));
             Node* hash = addToGraph(MapHash, key);
@@ -3272,7 +3326,7 @@
             if (!is64Bit())
                 return false;
             insertChecks();
-            Node* base = get(virtualRegisterForArgument(0, registerOffset));
+            Node* base = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
             setResult(addToGraph(DateGetTime, OpInfo(intrinsic), OpInfo(), base));
             return true;
         }
@@ -3298,7 +3352,7 @@
             if (!is64Bit())
                 return false;
             insertChecks();
-            Node* base = get(virtualRegisterForArgument(0, registerOffset));
+            Node* base = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
             setResult(addToGraph(DateGetInt32OrNaN, OpInfo(intrinsic), OpInfo(prediction), base));
             return true;
         }
@@ -3368,7 +3422,7 @@
                 if (argumentCountIncludingThis < 3)
                     isLittleEndian = FalseTriState;
                 else {
-                    littleEndianChild = get(virtualRegisterForArgument(2, registerOffset));
+                    littleEndianChild = get(virtualRegisterForArgumentIncludingThis(2, registerOffset));
                     if (littleEndianChild->hasConstant()) {
                         JSValue constant = littleEndianChild->constant()->value();
                         if (constant) {
@@ -3387,7 +3441,7 @@
             data.byteSize = byteSize;
 
             setResult(
-                addToGraph(op, OpInfo(data.asQuadWord), OpInfo(prediction), get(virtualRegisterForArgument(0, registerOffset)), get(virtualRegisterForArgument(1, registerOffset)), littleEndianChild));
+                addToGraph(op, OpInfo(data.asQuadWord), OpInfo(prediction), get(virtualRegisterForArgumentIncludingThis(0, registerOffset)), get(virtualRegisterForArgumentIncludingThis(1, registerOffset)), littleEndianChild));
             return true;
         }
 
@@ -3453,7 +3507,7 @@
                 if (argumentCountIncludingThis < 4)
                     isLittleEndian = FalseTriState;
                 else {
-                    littleEndianChild = get(virtualRegisterForArgument(3, registerOffset));
+                    littleEndianChild = get(virtualRegisterForArgumentIncludingThis(3, registerOffset));
                     if (littleEndianChild->hasConstant()) {
                         JSValue constant = littleEndianChild->constant()->value();
                         if (constant) {
@@ -3472,9 +3526,9 @@
             data.byteSize = byteSize;
             data.isFloatingPoint = isFloatingPoint;
 
-            addVarArgChild(get(virtualRegisterForArgument(0, registerOffset)));
-            addVarArgChild(get(virtualRegisterForArgument(1, registerOffset)));
-            addVarArgChild(get(virtualRegisterForArgument(2, registerOffset)));
+            addVarArgChild(get(virtualRegisterForArgumentIncludingThis(0, registerOffset)));
+            addVarArgChild(get(virtualRegisterForArgumentIncludingThis(1, registerOffset)));
+            addVarArgChild(get(virtualRegisterForArgumentIncludingThis(2, registerOffset)));
             addVarArgChild(littleEndianChild);
 
             addToGraph(Node::VarArg, DataViewSet, OpInfo(data.asQuadWord), OpInfo());
@@ -3495,8 +3549,8 @@
                 return false;
 
             insertChecks();
-            Node* object = get(virtualRegisterForArgument(0, registerOffset));
-            Node* key = get(virtualRegisterForArgument(1, registerOffset));
+            Node* object = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
+            Node* key = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
             Node* resultNode = addToGraph(HasOwnProperty, object, key);
             setResult(resultNode);
             return true;
@@ -3510,11 +3564,11 @@
                 return false;
 
             insertChecks();
-            Node* thisString = get(virtualRegisterForArgument(0, registerOffset));
-            Node* start = get(virtualRegisterForArgument(1, registerOffset));
+            Node* thisString = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
+            Node* start = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
             Node* end = nullptr;
             if (argumentCountIncludingThis > 2)
-                end = get(virtualRegisterForArgument(2, registerOffset));
+                end = get(virtualRegisterForArgumentIncludingThis(2, registerOffset));
             Node* resultNode = addToGraph(StringSlice, thisString, start, end);
             setResult(resultNode);
             return true;
@@ -3525,7 +3579,7 @@
                 return false;
 
             insertChecks();
-            Node* thisString = get(virtualRegisterForArgument(0, registerOffset));
+            Node* thisString = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
             Node* resultNode = addToGraph(ToLowerCase, thisString);
             setResult(resultNode);
             return true;
@@ -3536,12 +3590,12 @@
                 return false;
 
             insertChecks();
-            Node* thisNumber = get(virtualRegisterForArgument(0, registerOffset));
+            Node* thisNumber = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
             if (argumentCountIncludingThis == 1) {
                 Node* resultNode = addToGraph(ToString, thisNumber);
                 setResult(resultNode);
             } else {
-                Node* radix = get(virtualRegisterForArgument(1, registerOffset));
+                Node* radix = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
                 Node* resultNode = addToGraph(NumberToStringWithRadix, thisNumber, radix);
                 setResult(resultNode);
             }
@@ -3553,7 +3607,7 @@
                 return false;
 
             insertChecks();
-            Node* input = get(virtualRegisterForArgument(1, registerOffset));
+            Node* input = get(virtualRegisterForArgumentIncludingThis(1, registerOffset));
             Node* resultNode = addToGraph(NumberIsInteger, input);
             setResult(resultNode);
             return true;
@@ -3851,7 +3905,7 @@
 
     insertChecks();
     set(result,
-        addToGraph(NewTypedArray, OpInfo(type), get(virtualRegisterForArgument(1, registerOffset))));
+        addToGraph(NewTypedArray, OpInfo(type), get(virtualRegisterForArgumentIncludingThis(1, registerOffset))));
     return true;
 }
 
@@ -3868,7 +3922,7 @@
         return false;
 
     if (kind == CodeForConstruct) {
-        Node* newTargetNode = get(virtualRegisterForArgument(0, registerOffset));
+        Node* newTargetNode = get(virtualRegisterForArgumentIncludingThis(0, registerOffset));
         // We cannot handle the case where new.target != callee (i.e. a construct from a super call) because we
         // don't know what the prototype of the constructed object will be.
         // FIXME: If we have inlined super calls up to the call site, however, we should be able to figure out the structure. https://bugs.webkit.org/show_bug.cgi?id=152700
@@ -3883,12 +3937,12 @@
         insertChecks();
         if (argumentCountIncludingThis == 2) {
             set(result,
-                addToGraph(NewArrayWithSize, OpInfo(ArrayWithUndecided), get(virtualRegisterForArgument(1, registerOffset))));
+                addToGraph(NewArrayWithSize, OpInfo(ArrayWithUndecided), get(virtualRegisterForArgumentIncludingThis(1, registerOffset))));
             return true;
         }
         
         for (int i = 1; i < argumentCountIncludingThis; ++i)
-            addVarArgChild(get(virtualRegisterForArgument(i, registerOffset)));
+            addVarArgChild(get(virtualRegisterForArgumentIncludingThis(i, registerOffset)));
         set(result,
             addToGraph(Node::VarArg, NewArray, OpInfo(ArrayWithUndecided), OpInfo(argumentCountIncludingThis - 1)));
         return true;
@@ -3902,7 +3956,7 @@
         if (argumentCountIncludingThis <= 1)
             set(result, jsConstant(jsNumber(0)));
         else
-            set(result, addToGraph(ToNumber, OpInfo(0), OpInfo(prediction), get(virtualRegisterForArgument(1, registerOffset))));
+            set(result, addToGraph(ToNumber, OpInfo(0), OpInfo(prediction), get(virtualRegisterForArgumentIncludingThis(1, registerOffset))));
 
         return true;
     }
@@ -3915,7 +3969,7 @@
         if (argumentCountIncludingThis <= 1)
             resultNode = jsConstant(m_vm->smallStrings.emptyString());
         else
-            resultNode = addToGraph(CallStringConstructor, get(virtualRegisterForArgument(1, registerOffset)));
+            resultNode = addToGraph(CallStringConstructor, get(virtualRegisterForArgumentIncludingThis(1, registerOffset)));
         
         if (kind == CodeForConstruct)
             resultNode = addToGraph(NewStringObject, OpInfo(m_graph.registerStructure(function->globalObject()->stringObjectStructure())), resultNode);
@@ -3932,7 +3986,7 @@
         if (argumentCountIncludingThis <= 1)
             resultNode = addToGraph(NewSymbol);
         else
-            resultNode = addToGraph(NewSymbol, addToGraph(ToString, get(virtualRegisterForArgument(1, registerOffset))));
+            resultNode = addToGraph(NewSymbol, addToGraph(ToString, get(virtualRegisterForArgumentIncludingThis(1, registerOffset))));
 
         set(result, resultNode);
         return true;
@@ -3946,7 +4000,7 @@
         if (argumentCountIncludingThis <= 1)
             resultNode = addToGraph(NewObject, OpInfo(m_graph.registerStructure(function->globalObject()->objectStructureForObjectConstructor())));
         else
-            resultNode = addToGraph(CallObjectConstructor, OpInfo(m_graph.freeze(function->globalObject())), OpInfo(prediction), get(virtualRegisterForArgument(1, registerOffset)));
+            resultNode = addToGraph(CallObjectConstructor, OpInfo(m_graph.freeze(function->globalObject())), OpInfo(prediction), get(virtualRegisterForArgumentIncludingThis(1, registerOffset)));
         set(result, resultNode);
         return true;
     }
@@ -4548,7 +4602,7 @@
     //    the dreaded arguments object on the getter, the right things happen. Well, sort of -
     //    since we only really care about 'this' in this case. But we're not going to take that
     //    shortcut.
-    set(virtualRegisterForArgument(0, registerOffset), base, ImmediateNakedSet);
+    set(virtualRegisterForArgumentIncludingThis(0, registerOffset), base, ImmediateNakedSet);
 
     // We've set some locals, but they are not user-visible. It's still OK to exit from here.
     m_exitOK = true;
@@ -4731,8 +4785,8 @@
             m_inlineStackTop->remapOperand(
                 VirtualRegister(registerOffset)).toLocal());
     
-        set(virtualRegisterForArgument(0, registerOffset), base, ImmediateNakedSet);
-        set(virtualRegisterForArgument(1, registerOffset), value, ImmediateNakedSet);
+        set(virtualRegisterForArgumentIncludingThis(0, registerOffset), base, ImmediateNakedSet);
+        set(virtualRegisterForArgumentIncludingThis(1, registerOffset), value, ImmediateNakedSet);
 
         // We've set some locals, but they are not user-visible. It's still OK to exit from here.
         m_exitOK = true;
@@ -4849,7 +4903,7 @@
         m_exitOK = true;
         for (unsigned argument = 0; argument < m_numArguments; ++argument) {
             VariableAccessData* variable = newVariableAccessData(
-                virtualRegisterForArgument(argument));
+                virtualRegisterForArgumentIncludingThis(argument));
             variable->mergeStructureCheckHoistingFailed(
                 m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadCache));
             variable->mergeCheckArrayHoistingFailed(
@@ -5158,7 +5212,7 @@
         case op_new_regexp: {
             auto bytecode = currentInstruction->as<OpNewRegexp>();
             ASSERT(bytecode.m_regexp.isConstant());
-            FrozenValue* frozenRegExp = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_regexp.offset()));
+            FrozenValue* frozenRegExp = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_regexp));
             set(bytecode.m_dst, addToGraph(NewRegexp, OpInfo(frozenRegExp), jsConstant(jsNumber(0))));
             NEXT_OPCODE(op_new_regexp);
         }
@@ -6239,7 +6293,7 @@
 
             RELEASE_ASSERT(!m_currentBlock->size() || (m_graph.compilation() && m_currentBlock->size() == 1 && m_currentBlock->at(0)->op() == CountExecution));
 
-            ValueProfileAndOperandBuffer* buffer = bytecode.metadata(codeBlock).m_buffer;
+            ValueProfileAndVirtualRegisterBuffer* buffer = bytecode.metadata(codeBlock).m_buffer;
 
             if (!buffer) {
                 NEXT_OPCODE(op_catch); // This catch has yet to execute. Note: this load can be racy with the main thread.
@@ -6256,7 +6310,7 @@
             {
                 ConcurrentJSLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
 
-                buffer->forEach([&] (ValueProfileAndOperand& profile) {
+                buffer->forEach([&] (ValueProfileAndVirtualRegister& profile) {
                     VirtualRegister operand(profile.m_operand);
                     SpeculatedType prediction = profile.computeUpdatedPrediction(locker);
                     if (operand.isLocal())
@@ -6284,14 +6338,14 @@
             m_exitOK = false; 
 
             unsigned numberOfLocals = 0;
-            buffer->forEach([&] (ValueProfileAndOperand& profile) {
+            buffer->forEach([&] (ValueProfileAndVirtualRegister& profile) {
                 VirtualRegister operand(profile.m_operand);
                 if (operand.isArgument())
                     return;
                 ASSERT(operand.isLocal());
                 Node* value = addToGraph(ExtractCatchLocal, OpInfo(numberOfLocals), OpInfo(localPredictions[numberOfLocals]));
                 ++numberOfLocals;
-                addToGraph(MovHint, OpInfo(profile.m_operand), value);
+                addToGraph(MovHint, OpInfo(operand), value);
                 localsToSet.uncheckedAppend(std::make_pair(operand, value));
             });
             if (numberOfLocals)
@@ -6322,7 +6376,7 @@
                 BytecodeIndex exitBytecodeIndex = BytecodeIndex(m_currentIndex.offset() + currentInstruction->size());
 
                 for (unsigned argument = 0; argument < argumentPredictions.size(); ++argument) {
-                    VariableAccessData* variable = newVariableAccessData(virtualRegisterForArgument(argument));
+                    VariableAccessData* variable = newVariableAccessData(virtualRegisterForArgumentIncludingThis(argument));
                     variable->predict(argumentPredictions[argument]);
 
                     variable->mergeStructureCheckHoistingFailed(
@@ -6419,7 +6473,7 @@
             
         case op_jneq_ptr: {
             auto bytecode = currentInstruction->as<OpJneqPtr>();
-            FrozenValue* frozenPointer = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_specialPointer.offset()));
+            FrozenValue* frozenPointer = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_specialPointer));
             unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
             Node* child = get(bytecode.m_value);
             if (bytecode.metadata(codeBlock).m_hasJumped) {
@@ -6887,8 +6941,8 @@
         case op_create_lexical_environment: {
             auto bytecode = currentInstruction->as<OpCreateLexicalEnvironment>();
             ASSERT(bytecode.m_symbolTable.isConstant() && bytecode.m_initialValue.isConstant());
-            FrozenValue* symbolTable = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_symbolTable.offset()));
-            FrozenValue* initialValue = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_initialValue.offset()));
+            FrozenValue* symbolTable = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_symbolTable));
+            FrozenValue* initialValue = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_initialValue));
             Node* scope = get(bytecode.m_scope);
             Node* lexicalEnvironment = addToGraph(CreateActivation, OpInfo(symbolTable), OpInfo(initialValue), scope);
             set(bytecode.m_dst, lexicalEnvironment);
@@ -6918,7 +6972,7 @@
             // loads from the scope register later, as that would prevent the DFG from tracking the
             // bytecode-level liveness of the scope register.
             auto bytecode = currentInstruction->as<OpGetScope>();
-            Node* callee = get(VirtualRegister(CallFrameSlot::callee));
+            Node* callee = get(CallFrameSlot::callee);
             Node* result;
             if (JSFunction* function = callee->dynamicCastConstant<JSFunction*>(*m_vm))
                 result = weakJSConstant(function->scope());
@@ -6995,7 +7049,7 @@
             if (inlineCallFrame && !inlineCallFrame->isVarargs()) {
                 int32_t argumentCountIncludingThisWithFixup = inlineCallFrame->argumentsWithFixup.size();
                 if (argumentIndexIncludingThis < argumentCountIncludingThisWithFixup)
-                    argument = get(virtualRegisterForArgument(argumentIndexIncludingThis));
+                    argument = get(virtualRegisterForArgumentIncludingThis(argumentIndexIncludingThis));
                 else
                     argument = addToGraph(JSConstant, OpInfo(m_constantUndefined));
             } else
@@ -7343,8 +7397,10 @@
         // The owner is the machine code block, and we already have a barrier on that when the
         // plan finishes.
         m_inlineCallFrame->baselineCodeBlock.setWithoutWriteBarrier(codeBlock->baselineVersion());
+        m_inlineCallFrame->setTmpOffset((m_caller->m_inlineCallFrame ? m_caller->m_inlineCallFrame->tmpOffset : 0) + m_caller->m_codeBlock->numTmps());
         m_inlineCallFrame->setStackOffset(inlineCallFrameStart.offset() - CallFrame::headerSizeInRegisters);
         m_inlineCallFrame->argumentCountIncludingThis = argumentCountIncludingThis;
+        RELEASE_ASSERT(m_inlineCallFrame->argumentCountIncludingThis == argumentCountIncludingThis);
         if (callee) {
             m_inlineCallFrame->calleeRecovery = ValueRecovery::constant(callee);
             m_inlineCallFrame->isClosureCall = false;
@@ -7706,7 +7762,7 @@
                     Node* node = block->at(nodeIndex);
 
                     if (node->hasVariableAccessData(m_graph))
-                        mapping.operand(node->local()) = node->variableAccessData();
+                        mapping.operand(node->operand()) = node->variableAccessData();
 
                     if (node->op() != ForceOSRExit)
                         continue;
@@ -7726,24 +7782,24 @@
                             RELEASE_ASSERT(successor->predecessors.isEmpty());
                     }
 
-                    auto insertLivenessPreservingOp = [&] (InlineCallFrame* inlineCallFrame, NodeType op, VirtualRegister operand) {
+                    auto insertLivenessPreservingOp = [&] (InlineCallFrame* inlineCallFrame, NodeType op, Operand operand) {
                         VariableAccessData* variable = mapping.operand(operand);
                         if (!variable) {
                             variable = newVariableAccessData(operand);
                             mapping.operand(operand) = variable;
                         }
 
-                        VirtualRegister argument = operand - (inlineCallFrame ? inlineCallFrame->stackOffset : 0);
+                        Operand argument = unmapOperand(inlineCallFrame, operand);
                         if (argument.isArgument() && !argument.isHeader()) {
                             const Vector<ArgumentPosition*>& arguments = m_inlineCallFrameToArgumentPositions.get(inlineCallFrame);
                             arguments[argument.toArgument()]->addVariable(variable);
                         }
                         insertionSet.insertNode(nodeIndex, SpecNone, op, origin, OpInfo(variable));
                     };
-                    auto addFlushDirect = [&] (InlineCallFrame* inlineCallFrame, VirtualRegister operand) {
+                    auto addFlushDirect = [&] (InlineCallFrame* inlineCallFrame, Operand operand) {
                         insertLivenessPreservingOp(inlineCallFrame, Flush, operand);
                     };
-                    auto addPhantomLocalDirect = [&] (InlineCallFrame* inlineCallFrame, VirtualRegister operand) {
+                    auto addPhantomLocalDirect = [&] (InlineCallFrame* inlineCallFrame, Operand operand) {
                         insertLivenessPreservingOp(inlineCallFrame, PhantomLocal, operand);
                     };
                     flushForTerminalImpl(origin.semantic, addFlushDirect, addPhantomLocalDirect);
@@ -7808,6 +7864,7 @@
         ASSERT(block->variablesAtTail.numberOfArguments() == m_graph.block(0)->variablesAtHead.numberOfArguments());
     }
 
+    m_graph.m_tmps = m_numTmps;
     m_graph.m_localVars = m_numLocals;
     m_graph.m_parameterSlots = m_parameterSlots;
 }
diff --git a/Source/JavaScriptCore/dfg/DFGCFAPhase.cpp b/Source/JavaScriptCore/dfg/DFGCFAPhase.cpp
index f9b4597..460805f 100644
--- a/Source/JavaScriptCore/dfg/DFGCFAPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGCFAPhase.cpp
@@ -170,22 +170,22 @@
         bool changed = false;
         const Operands<Optional<JSValue>>& mustHandleValues = m_graph.m_plan.mustHandleValues();
         for (size_t i = mustHandleValues.size(); i--;) {
-            int operand = mustHandleValues.operandForIndex(i);
+            Operand operand = mustHandleValues.operandForIndex(i);
             Optional<JSValue> value = mustHandleValues[i];
             if (!value) {
                 if (m_verbose)
-                    dataLog("   Not live in bytecode: ", VirtualRegister(operand), "\n");
+                    dataLog("   Not live in bytecode: ", operand, "\n");
                 continue;
             }
             Node* node = block->variablesAtHead.operand(operand);
             if (!node) {
                 if (m_verbose)
-                    dataLog("   Not live: ", VirtualRegister(operand), "\n");
+                    dataLog("   Not live: ", operand, "\n");
                 continue;
             }
             
             if (m_verbose)
-                dataLog("   Widening ", VirtualRegister(operand), " with ", value.value(), "\n");
+                dataLog("   Widening ", operand, " with ", value.value(), "\n");
             
             AbstractValue& target = block->valuesAtHead.operand(operand);
             changed |= target.mergeOSREntryValue(m_graph, value.value(), node->variableAccessData(), node);
diff --git a/Source/JavaScriptCore/dfg/DFGCFGSimplificationPhase.cpp b/Source/JavaScriptCore/dfg/DFGCFGSimplificationPhase.cpp
index 7783527..e7f6e76 100644
--- a/Source/JavaScriptCore/dfg/DFGCFGSimplificationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGCFGSimplificationPhase.cpp
@@ -303,7 +303,7 @@
     void jettisonBlock(BasicBlock* block, BasicBlock* jettisonedBlock, NodeOrigin boundaryNodeOrigin)
     {
         for (size_t i = 0; i < jettisonedBlock->variablesAtHead.numberOfArguments(); ++i)
-            keepOperandAlive(block, jettisonedBlock, boundaryNodeOrigin, virtualRegisterForArgument(i));
+            keepOperandAlive(block, jettisonedBlock, boundaryNodeOrigin, virtualRegisterForArgumentIncludingThis(i));
         for (size_t i = 0; i < jettisonedBlock->variablesAtHead.numberOfLocals(); ++i)
             keepOperandAlive(block, jettisonedBlock, boundaryNodeOrigin, virtualRegisterForLocal(i));
         
@@ -354,7 +354,7 @@
             // different path than secondBlock.
             
             for (size_t i = 0; i < jettisonedBlock->variablesAtHead.numberOfArguments(); ++i)
-                keepOperandAlive(firstBlock, jettisonedBlock, boundaryNodeOrigin, virtualRegisterForArgument(i));
+                keepOperandAlive(firstBlock, jettisonedBlock, boundaryNodeOrigin, virtualRegisterForArgumentIncludingThis(i));
             for (size_t i = 0; i < jettisonedBlock->variablesAtHead.numberOfLocals(); ++i)
                 keepOperandAlive(firstBlock, jettisonedBlock, boundaryNodeOrigin, virtualRegisterForLocal(i));
         }
diff --git a/Source/JavaScriptCore/dfg/DFGCPSRethreadingPhase.cpp b/Source/JavaScriptCore/dfg/DFGCPSRethreadingPhase.cpp
index d98fd4a..873fa04 100644
--- a/Source/JavaScriptCore/dfg/DFGCPSRethreadingPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGCPSRethreadingPhase.cpp
@@ -54,8 +54,9 @@
         m_graph.clearReplacements();
         canonicalizeLocalsInBlocks();
         specialCaseArguments();
-        propagatePhis<LocalOperand>();
-        propagatePhis<ArgumentOperand>();
+        propagatePhis<OperandKind::Local>();
+        propagatePhis<OperandKind::Argument>();
+        propagatePhis<OperandKind::Tmp>();
         computeIsFlushed();
         
         m_graph.m_form = ThreadedCPS;
@@ -211,10 +212,20 @@
     void canonicalizeGetLocal(Node* node)
     {
         VariableAccessData* variable = node->variableAccessData();
-        if (variable->local().isArgument())
-            canonicalizeGetLocalFor<ArgumentOperand>(node, variable, variable->local().toArgument());
-        else
-            canonicalizeGetLocalFor<LocalOperand>(node, variable, variable->local().toLocal());
+        switch (variable->operand().kind()) {
+        case OperandKind::Argument: {
+            canonicalizeGetLocalFor<OperandKind::Argument>(node, variable, variable->operand().toArgument());
+            break;
+        }
+        case OperandKind::Local: {
+            canonicalizeGetLocalFor<OperandKind::Local>(node, variable, variable->operand().toLocal());
+            break;
+        }
+        case OperandKind::Tmp: {
+            canonicalizeGetLocalFor<OperandKind::Tmp>(node, variable, variable->operand().value());
+            break;
+        }
+        }
     }
     
     template<NodeType nodeType, OperandKind operandKind>
@@ -229,6 +240,7 @@
             case Flush:
             case PhantomLocal:
             case GetLocal:
+                ASSERT(otherNode->child1().node());
                 otherNode = otherNode->child1().node();
                 break;
             default:
@@ -270,15 +282,25 @@
     void canonicalizeFlushOrPhantomLocal(Node* node)
     {
         VariableAccessData* variable = node->variableAccessData();
-        if (variable->local().isArgument())
-            canonicalizeFlushOrPhantomLocalFor<nodeType, ArgumentOperand>(node, variable, variable->local().toArgument());
-        else
-            canonicalizeFlushOrPhantomLocalFor<nodeType, LocalOperand>(node, variable, variable->local().toLocal());
+        switch (variable->operand().kind()) {
+        case OperandKind::Argument: {
+            canonicalizeFlushOrPhantomLocalFor<nodeType, OperandKind::Argument>(node, variable, variable->operand().toArgument());
+            break;
+        }
+        case OperandKind::Local: {
+            canonicalizeFlushOrPhantomLocalFor<nodeType, OperandKind::Local>(node, variable, variable->operand().toLocal());
+            break;
+        }
+        case OperandKind::Tmp: {
+            canonicalizeFlushOrPhantomLocalFor<nodeType, OperandKind::Tmp>(node, variable, variable->operand().value());
+            break;
+        }
+        }
     }
     
     void canonicalizeSet(Node* node)
     {
-        m_block->variablesAtTail.setOperand(node->local(), node);
+        m_block->variablesAtTail.setOperand(node->operand(), node);
     }
     
     void canonicalizeLocalsInBlock()
@@ -287,8 +309,9 @@
             return;
         ASSERT(m_block->isReachable);
         
-        clearVariables<ArgumentOperand>();
-        clearVariables<LocalOperand>();
+        clearVariables<OperandKind::Argument>();
+        clearVariables<OperandKind::Local>();
+        clearVariables<OperandKind::Tmp>();
         
         // Assumes that all phi references have been removed. Assumes that things that
         // should be live have a non-zero ref count, but doesn't assume that the ref
@@ -388,7 +411,7 @@
     template<OperandKind operandKind>
     void propagatePhis()
     {
-        Vector<PhiStackEntry, 128>& phiStack = operandKind == ArgumentOperand ? m_argumentPhiStack : m_localPhiStack;
+        Vector<PhiStackEntry, 128>& phiStack = phiStackFor<operandKind>();
         
         // Ensure that attempts to use this fail instantly.
         m_block = 0;
@@ -466,9 +489,12 @@
     template<OperandKind operandKind>
     Vector<PhiStackEntry, 128>& phiStackFor()
     {
-        if (operandKind == ArgumentOperand)
-            return m_argumentPhiStack;
-        return m_localPhiStack;
+        switch (operandKind) {
+        case OperandKind::Argument: return m_argumentPhiStack;
+        case OperandKind::Local: return m_localPhiStack;
+        case OperandKind::Tmp: return m_tmpPhiStack;
+        }
+        RELEASE_ASSERT_NOT_REACHED();
     }
     
     void computeIsFlushed()
@@ -521,6 +547,7 @@
     BasicBlock* m_block;
     Vector<PhiStackEntry, 128> m_argumentPhiStack;
     Vector<PhiStackEntry, 128> m_localPhiStack;
+    Vector<PhiStackEntry, 128> m_tmpPhiStack;
     Vector<Node*, 128> m_flushedLocalOpWorklist;
 };
 
diff --git a/Source/JavaScriptCore/dfg/DFGCSEPhase.cpp b/Source/JavaScriptCore/dfg/DFGCSEPhase.cpp
index 2381c6b..4cce174 100644
--- a/Source/JavaScriptCore/dfg/DFGCSEPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGCSEPhase.cpp
@@ -147,8 +147,7 @@
             break;
         case Stack: {
             ASSERT(!heap.payload().isTop());
-            ASSERT(heap.payload().value() == heap.payload().value32());
-            m_abstractHeapStackMap.remove(heap.payload().value32());
+            m_abstractHeapStackMap.remove(heap.payload().value());
             if (clobberConservatively)
                 m_fallbackStackMap.clear();
             else
@@ -172,7 +171,7 @@
                 if (!clobberConservatively)
                     break;
                 if (pair.key.heap().kind() == Stack) {
-                    auto iterator = m_abstractHeapStackMap.find(pair.key.heap().payload().value32());
+                    auto iterator = m_abstractHeapStackMap.find(pair.key.heap().payload().value());
                     if (iterator != m_abstractHeapStackMap.end() && iterator->value->key == pair.key)
                         return false;
                     return true;
@@ -226,8 +225,7 @@
             AbstractHeap abstractHeap = location.heap();
             if (abstractHeap.payload().isTop())
                 return add(m_fallbackStackMap, location, node);
-            ASSERT(abstractHeap.payload().value() == abstractHeap.payload().value32());
-            auto addResult = m_abstractHeapStackMap.add(abstractHeap.payload().value32(), nullptr);
+            auto addResult = m_abstractHeapStackMap.add(abstractHeap.payload().value(), nullptr);
             if (addResult.isNewEntry) {
                 addResult.iterator->value.reset(new ImpureDataSlot {location, node, 0});
                 return nullptr;
@@ -249,8 +247,7 @@
         case SideState:
             RELEASE_ASSERT_NOT_REACHED();
         case Stack: {
-            ASSERT(location.heap().payload().value() == location.heap().payload().value32());
-            auto iterator = m_abstractHeapStackMap.find(location.heap().payload().value32());
+            auto iterator = m_abstractHeapStackMap.find(location.heap().payload().value());
             if (iterator != m_abstractHeapStackMap.end()
                 && iterator->value->key == location)
                 return iterator->value->value;
@@ -298,7 +295,7 @@
     // a duplicate in the past and now only live in m_fallbackStackMap.
     //
     // Obviously, TOP always goes into m_fallbackStackMap since it does not have a unique value.
-    HashMap<int32_t, std::unique_ptr<ImpureDataSlot>, DefaultHash<int32_t>::Hash, WTF::SignedWithZeroKeyHashTraits<int32_t>> m_abstractHeapStackMap;
+    HashMap<int64_t, std::unique_ptr<ImpureDataSlot>, DefaultHash<int64_t>::Hash, WTF::SignedWithZeroKeyHashTraits<int64_t>> m_abstractHeapStackMap;
     Map m_fallbackStackMap;
 
     Map m_heapMap;
diff --git a/Source/JavaScriptCore/dfg/DFGCapabilities.cpp b/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
index b182317..ce5f6aa 100644
--- a/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
+++ b/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
@@ -308,6 +308,8 @@
     case llint_native_construct_trampoline:
     case llint_internal_function_call_trampoline:
     case llint_internal_function_construct_trampoline:
+    case checkpoint_osr_exit_from_inlined_call_trampoline:
+    case checkpoint_osr_exit_trampoline:
     case handleUncaughtException:
     case op_call_return_location:
     case op_construct_return_location:
diff --git a/Source/JavaScriptCore/dfg/DFGClobberize.h b/Source/JavaScriptCore/dfg/DFGClobberize.h
index bfb1d14..eee9bae 100644
--- a/Source/JavaScriptCore/dfg/DFGClobberize.h
+++ b/Source/JavaScriptCore/dfg/DFGClobberize.h
@@ -111,9 +111,9 @@
     // scan would read. That's what this does.
     for (InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame()) {
         if (inlineCallFrame->isClosureCall)
-            read(AbstractHeap(Stack, inlineCallFrame->stackOffset + CallFrameSlot::callee));
+            read(AbstractHeap(Stack, VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::callee)));
         if (inlineCallFrame->isVarargs())
-            read(AbstractHeap(Stack, inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis));
+            read(AbstractHeap(Stack, VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis)));
     }
 
     // We don't want to specifically account which nodes can read from the scope
@@ -441,7 +441,7 @@
         return;
 
     case KillStack:
-        write(AbstractHeap(Stack, node->unlinkedLocal()));
+        write(AbstractHeap(Stack, node->unlinkedOperand()));
         return;
          
     case MovHint:
@@ -497,7 +497,7 @@
         return;
 
     case Flush:
-        read(AbstractHeap(Stack, node->local()));
+        read(AbstractHeap(Stack, node->operand()));
         write(SideState);
         return;
 
@@ -739,7 +739,7 @@
     case CallEval:
         ASSERT(!node->origin.semantic.inlineCallFrame());
         read(AbstractHeap(Stack, graph.m_codeBlock->scopeRegister()));
-        read(AbstractHeap(Stack, virtualRegisterForArgument(0)));
+        read(AbstractHeap(Stack, virtualRegisterForArgumentIncludingThis(0)));
         read(World);
         write(Heap);
         return;
@@ -765,12 +765,12 @@
         return;
         
     case GetCallee:
-        read(AbstractHeap(Stack, CallFrameSlot::callee));
-        def(HeapLocation(StackLoc, AbstractHeap(Stack, CallFrameSlot::callee)), LazyNode(node));
+        read(AbstractHeap(Stack, VirtualRegister(CallFrameSlot::callee)));
+        def(HeapLocation(StackLoc, AbstractHeap(Stack, VirtualRegister(CallFrameSlot::callee))), LazyNode(node));
         return;
 
     case SetCallee:
-        write(AbstractHeap(Stack, CallFrameSlot::callee));
+        write(AbstractHeap(Stack, VirtualRegister(CallFrameSlot::callee)));
         return;
         
     case GetArgumentCountIncludingThis: {
@@ -781,7 +781,7 @@
     }
 
     case SetArgumentCountIncludingThis:
-        write(AbstractHeap(Stack, CallFrameSlot::argumentCountIncludingThis));
+        write(AbstractHeap(Stack, VirtualRegister(CallFrameSlot::argumentCountIncludingThis)));
         return;
 
     case GetRestLength:
@@ -789,36 +789,42 @@
         return;
         
     case GetLocal:
-        read(AbstractHeap(Stack, node->local()));
-        def(HeapLocation(StackLoc, AbstractHeap(Stack, node->local())), LazyNode(node));
+        read(AbstractHeap(Stack, node->operand()));
+        def(HeapLocation(StackLoc, AbstractHeap(Stack, node->operand())), LazyNode(node));
         return;
         
     case SetLocal:
-        write(AbstractHeap(Stack, node->local()));
-        def(HeapLocation(StackLoc, AbstractHeap(Stack, node->local())), LazyNode(node->child1().node()));
+        write(AbstractHeap(Stack, node->operand()));
+        def(HeapLocation(StackLoc, AbstractHeap(Stack, node->operand())), LazyNode(node->child1().node()));
         return;
         
     case GetStack: {
-        AbstractHeap heap(Stack, node->stackAccessData()->local);
+        AbstractHeap heap(Stack, node->stackAccessData()->operand);
         read(heap);
         def(HeapLocation(StackLoc, heap), LazyNode(node));
         return;
     }
         
     case PutStack: {
-        AbstractHeap heap(Stack, node->stackAccessData()->local);
+        AbstractHeap heap(Stack, node->stackAccessData()->operand);
         write(heap);
         def(HeapLocation(StackLoc, heap), LazyNode(node->child1().node()));
         return;
     }
         
+    case VarargsLength: {
+        read(World);
+        write(Heap);
+        return;
+    }
+
     case LoadVarargs: {
         read(World);
         write(Heap);
         LoadVarargsData* data = node->loadVarargsData();
-        write(AbstractHeap(Stack, data->count.offset()));
+        write(AbstractHeap(Stack, data->count));
         for (unsigned i = data->limit; i--;)
-            write(AbstractHeap(Stack, data->start.offset() + static_cast<int>(i)));
+            write(AbstractHeap(Stack, data->start + static_cast<int>(i)));
         return;
     }
         
@@ -827,9 +833,9 @@
         read(Stack);
         
         LoadVarargsData* data = node->loadVarargsData();
-        write(AbstractHeap(Stack, data->count.offset()));
+        write(AbstractHeap(Stack, data->count));
         for (unsigned i = data->limit; i--;)
-            write(AbstractHeap(Stack, data->start.offset() + static_cast<int>(i)));
+            write(AbstractHeap(Stack, data->start + static_cast<int>(i)));
         return;
     }
         
diff --git a/Source/JavaScriptCore/dfg/DFGCombinedLiveness.cpp b/Source/JavaScriptCore/dfg/DFGCombinedLiveness.cpp
index 95866a7..15746c0 100644
--- a/Source/JavaScriptCore/dfg/DFGCombinedLiveness.cpp
+++ b/Source/JavaScriptCore/dfg/DFGCombinedLiveness.cpp
@@ -39,7 +39,7 @@
 {
     graph.forAllLiveInBytecode(
         node->origin.forExit,
-        [&] (VirtualRegister reg) {
+        [&] (Operand reg) {
             availabilityMap.closeStartingWithLocal(
                 reg,
                 [&] (Node* node) -> bool {
diff --git a/Source/JavaScriptCore/dfg/DFGCommonData.cpp b/Source/JavaScriptCore/dfg/DFGCommonData.cpp
index b4fc8d4..0712ccb 100644
--- a/Source/JavaScriptCore/dfg/DFGCommonData.cpp
+++ b/Source/JavaScriptCore/dfg/DFGCommonData.cpp
@@ -56,7 +56,7 @@
         codeOrigins.append(codeOrigin);
     unsigned index = codeOrigins.size() - 1;
     ASSERT(codeOrigins[index] == codeOrigin);
-    return CallSiteIndex(BytecodeIndex(index));
+    return CallSiteIndex(index);
 }
 
 CallSiteIndex CommonData::addUniqueCallSiteIndex(CodeOrigin codeOrigin)
@@ -64,13 +64,13 @@
     codeOrigins.append(codeOrigin);
     unsigned index = codeOrigins.size() - 1;
     ASSERT(codeOrigins[index] == codeOrigin);
-    return CallSiteIndex(BytecodeIndex(index));
+    return CallSiteIndex(index);
 }
 
 CallSiteIndex CommonData::lastCallSite() const
 {
     RELEASE_ASSERT(codeOrigins.size());
-    return CallSiteIndex(BytecodeIndex(codeOrigins.size() - 1));
+    return CallSiteIndex(codeOrigins.size() - 1);
 }
 
 DisposableCallSiteIndex CommonData::addDisposableCallSiteIndex(CodeOrigin codeOrigin)
diff --git a/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp b/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp
index d37d298..cebb3c8 100644
--- a/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp
@@ -383,7 +383,7 @@
                 // GetMyArgumentByVal in such statically-out-of-bounds accesses; we just lose CFA unless
                 // GCSE removes the access entirely.
                 if (inlineCallFrame) {
-                    if (index >= inlineCallFrame->argumentCountIncludingThis - 1)
+                    if (index >= static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1))
                         break;
                 } else {
                     if (index >= m_state.numberOfArguments() - 1)
@@ -401,10 +401,10 @@
                         FlushedJSValue);
                 } else {
                     data = m_graph.m_stackAccessData.add(
-                        virtualRegisterForArgument(index + 1), FlushedJSValue);
+                        virtualRegisterForArgumentIncludingThis(index + 1), FlushedJSValue);
                 }
                 
-                if (inlineCallFrame && !inlineCallFrame->isVarargs() && index < inlineCallFrame->argumentCountIncludingThis - 1) {
+                if (inlineCallFrame && !inlineCallFrame->isVarargs() && index < static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1)) {
                     node->convertToGetStack(data);
                     eliminated = true;
                     break;
diff --git a/Source/JavaScriptCore/dfg/DFGDoesGC.cpp b/Source/JavaScriptCore/dfg/DFGDoesGC.cpp
index a583993..1f30dff 100644
--- a/Source/JavaScriptCore/dfg/DFGDoesGC.cpp
+++ b/Source/JavaScriptCore/dfg/DFGDoesGC.cpp
@@ -297,6 +297,7 @@
     case InByVal:
     case InstanceOf:
     case InstanceOfCustom:
+    case VarargsLength:
     case LoadVarargs:
     case NumberToStringWithRadix:
     case NumberToStringWithValidRadixConstant:
diff --git a/Source/JavaScriptCore/dfg/DFGDriver.cpp b/Source/JavaScriptCore/dfg/DFGDriver.cpp
index 17b3707..45fbc43 100644
--- a/Source/JavaScriptCore/dfg/DFGDriver.cpp
+++ b/Source/JavaScriptCore/dfg/DFGDriver.cpp
@@ -90,7 +90,6 @@
     // Make sure that any stubs that the DFG is going to use are initialized. We want to
     // make sure that all JIT code generation does finalization on the main thread.
     vm.getCTIStub(arityFixupGenerator);
-    vm.getCTIStub(osrExitThunkGenerator);
     vm.getCTIStub(osrExitGenerationThunkGenerator);
     vm.getCTIStub(throwExceptionFromCallSlowPathGenerator);
     vm.getCTIStub(linkCallThunkGenerator);
diff --git a/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp b/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
index 374d526..922d35b 100644
--- a/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
@@ -2439,6 +2439,12 @@
             break;
         }
 
+        case ForwardVarargs:
+        case LoadVarargs: {
+            fixEdge<KnownInt32Use>(node->child1());
+            break;
+        }
+
 #if ASSERT_ENABLED
         // Have these no-op cases here to ensure that nobody forgets to add handlers for new opcodes.
         case SetArgumentDefinitely:
@@ -2471,8 +2477,7 @@
         case ConstructForwardVarargs:
         case TailCallForwardVarargs:
         case TailCallForwardVarargsInlinedCaller:
-        case LoadVarargs:
-        case ForwardVarargs:
+        case VarargsLength:
         case ProfileControlFlow:
         case NewObject:
         case NewPromise:
diff --git a/Source/JavaScriptCore/dfg/DFGForAllKills.h b/Source/JavaScriptCore/dfg/DFGForAllKills.h
index 0bf1c3f..bf4350c 100644
--- a/Source/JavaScriptCore/dfg/DFGForAllKills.h
+++ b/Source/JavaScriptCore/dfg/DFGForAllKills.h
@@ -32,6 +32,10 @@
 
 namespace JSC { namespace DFG {
 
+namespace ForAllKillsInternal {
+constexpr bool verbose = false;
+}
+
 // Utilities for finding the last points where a node is live in DFG SSA. This accounts for liveness due
 // to OSR exit. This is usually used for enumerating over all of the program points where a node is live,
 // by exploring all blocks where the node is live at tail and then exploring all program points where the
@@ -53,13 +57,13 @@
     
     CodeOrigin after = nodeAfter->origin.forExit;
     
-    VirtualRegister alreadyNoted;
+    Operand alreadyNoted;
     // If we MovHint something that is live at the time, then we kill the old value.
     if (nodeAfter->containsMovHint()) {
-        VirtualRegister reg = nodeAfter->unlinkedLocal();
-        if (graph.isLiveInBytecode(reg, after)) {
-            functor(reg);
-            alreadyNoted = reg;
+        Operand operand = nodeAfter->unlinkedOperand();
+        if (graph.isLiveInBytecode(operand, after)) {
+            functor(operand);
+            alreadyNoted = operand;
         }
     }
     
@@ -70,29 +74,48 @@
     // other loop, below.
     auto* beforeInlineCallFrame = before.inlineCallFrame();
     if (beforeInlineCallFrame == after.inlineCallFrame()) {
-        int stackOffset = beforeInlineCallFrame ? beforeInlineCallFrame->stackOffset : 0;
         CodeBlock* codeBlock = graph.baselineCodeBlockFor(beforeInlineCallFrame);
+        if (after.bytecodeIndex().checkpoint()) {
+            ASSERT(before.bytecodeIndex().checkpoint() != after.bytecodeIndex().checkpoint());
+            ASSERT_WITH_MESSAGE(before.bytecodeIndex().offset() == after.bytecodeIndex().offset(), "When the DFG does code motion it should change the forExit origin to match the surrounding bytecodes.");
+
+            auto liveBefore = tmpLivenessForCheckpoint(*codeBlock, before.bytecodeIndex());
+            auto liveAfter = tmpLivenessForCheckpoint(*codeBlock, after.bytecodeIndex());
+            liveAfter.invert();
+            liveBefore.filter(liveAfter);
+
+            liveBefore.forEachSetBit([&] (size_t tmp) {
+                functor(remapOperand(beforeInlineCallFrame, Operand::tmp(tmp)));
+            });
+            // No locals can die at a checkpoint.
+            return;
+        }
+
         FullBytecodeLiveness& fullLiveness = graph.livenessFor(codeBlock);
         const FastBitVector& liveBefore = fullLiveness.getLiveness(before.bytecodeIndex(), LivenessCalculationPoint::BeforeUse);
         const FastBitVector& liveAfter = fullLiveness.getLiveness(after.bytecodeIndex(), LivenessCalculationPoint::BeforeUse);
         
         (liveBefore & ~liveAfter).forEachSetBit(
             [&] (size_t relativeLocal) {
-                functor(virtualRegisterForLocal(relativeLocal) + stackOffset);
+                functor(remapOperand(beforeInlineCallFrame, virtualRegisterForLocal(relativeLocal)));
             });
         return;
     }
-    
+
+    ASSERT_WITH_MESSAGE(!after.bytecodeIndex().checkpoint(), "Transitioning across a checkpoint but before and after don't share an inlineCallFrame.");
+
     // Detect kills the super conservative way: it is killed if it was live before and dead after.
-    BitVector liveAfter = graph.localsLiveInBytecode(after);
-    graph.forAllLocalsLiveInBytecode(
+    BitVector liveAfter = graph.localsAndTmpsLiveInBytecode(after);
+    unsigned numLocals = graph.block(0)->variablesAtHead.numberOfLocals();
+    graph.forAllLocalsAndTmpsLiveInBytecode(
         before,
-        [&] (VirtualRegister reg) {
-            if (reg == alreadyNoted)
+        [&] (Operand operand) {
+            if (operand == alreadyNoted)
                 return;
-            if (liveAfter.get(reg.toLocal()))
+            unsigned offset = operand.isTmp() ? numLocals + operand.value() : operand.toLocal();
+            if (liveAfter.get(offset))
                 return;
-            functor(reg);
+            functor(operand);
         });
 }
     
@@ -105,7 +128,8 @@
     static constexpr unsigned seenInClosureFlag = 1;
     static constexpr unsigned calledFunctorFlag = 2;
     HashMap<Node*, unsigned> flags;
-    
+
+    ASSERT(nodeIndex);
     Node* node = block->at(nodeIndex);
     
     graph.doToChildren(
@@ -120,15 +144,13 @@
             }
         });
 
-    Node* before = nullptr;
-    if (nodeIndex)
-        before = block->at(nodeIndex - 1);
+    Node* before = block->at(nodeIndex - 1);
 
     forAllKilledOperands(
         graph, before, node,
-        [&] (VirtualRegister reg) {
+        [&] (Operand operand) {
             availabilityMap.closeStartingWithLocal(
-                reg,
+                operand,
                 [&] (Node* node) -> bool {
                     return flags.get(node) & seenInClosureFlag;
                 },
@@ -159,6 +181,7 @@
     // Start at the second node, because the functor is expected to only inspect nodes from the start of
     // the block up to nodeIndex (exclusive), so if nodeIndex is zero then the functor has nothing to do.
     for (unsigned nodeIndex = 1; nodeIndex < block->size(); ++nodeIndex) {
+        dataLogLnIf(ForAllKillsInternal::verbose, "local availability at index: ", nodeIndex, " ", localAvailability.m_availability);
         forAllKilledNodesAtNodeIndex(
             graph, localAvailability.m_availability, block, nodeIndex,
             [&] (Node* node) {
diff --git a/Source/JavaScriptCore/dfg/DFGGraph.cpp b/Source/JavaScriptCore/dfg/DFGGraph.cpp
index e51355c..b9f8572 100644
--- a/Source/JavaScriptCore/dfg/DFGGraph.cpp
+++ b/Source/JavaScriptCore/dfg/DFGGraph.cpp
@@ -308,8 +308,8 @@
     if (node->hasVariableAccessData(*this)) {
         VariableAccessData* variableAccessData = node->tryGetVariableAccessData();
         if (variableAccessData) {
-            VirtualRegister operand = variableAccessData->local();
-            out.print(comma, variableAccessData->local(), "(", VariableAccessDataDump(*this, variableAccessData), ")");
+            Operand operand = variableAccessData->operand();
+            out.print(comma, variableAccessData->operand(), "(", VariableAccessDataDump(*this, variableAccessData), ")");
             operand = variableAccessData->machineLocal();
             if (operand.isValid())
                 out.print(comma, "machine:", operand);
@@ -317,13 +317,13 @@
     }
     if (node->hasStackAccessData()) {
         StackAccessData* data = node->stackAccessData();
-        out.print(comma, data->local);
+        out.print(comma, data->operand);
         if (data->machineLocal.isValid())
             out.print(comma, "machine:", data->machineLocal);
         out.print(comma, data->format);
     }
-    if (node->hasUnlinkedLocal()) 
-        out.print(comma, node->unlinkedLocal());
+    if (node->hasUnlinkedOperand())
+        out.print(comma, node->unlinkedOperand());
     if (node->hasVectorLengthHint())
         out.print(comma, "vectorLengthHint = ", node->vectorLengthHint());
     if (node->hasLazyJSValue())
@@ -515,9 +515,10 @@
         out.print(prefix, "  Phi Nodes:");
         for (size_t i = 0; i < block->phis.size(); ++i) {
             Node* phiNode = block->phis[i];
+            ASSERT(phiNode->op() == Phi);
             if (!phiNode->shouldGenerate() && phiNodeDumpMode == DumpLivePhisOnly)
                 continue;
-            out.print(" @", phiNode->index(), "<", phiNode->local(), ",", phiNode->refCount(), ">->(");
+            out.print(" @", phiNode->index(), "<", phiNode->operand(), ",", phiNode->refCount(), ">->(");
             if (phiNode->child1()) {
                 out.print("@", phiNode->child1()->index());
                 if (phiNode->child2()) {
@@ -884,7 +885,7 @@
         bool shouldContinue = true;
         switch (node->op()) {
         case SetLocal: {
-            if (node->local() == variableAccessData->local())
+            if (node->operand() == variableAccessData->operand())
                 shouldContinue = false;
             break;
         }
@@ -893,9 +894,9 @@
             if (node->variableAccessData() != variableAccessData)
                 continue;
             substitute(block, indexInBlock, node, newGetLocal);
-            Node* oldTailNode = block.variablesAtTail.operand(variableAccessData->local());
+            Node* oldTailNode = block.variablesAtTail.operand(variableAccessData->operand());
             if (oldTailNode == node)
-                block.variablesAtTail.operand(variableAccessData->local()) = newGetLocal;
+                block.variablesAtTail.operand(variableAccessData->operand()) = newGetLocal;
             shouldContinue = false;
             break;
         }
@@ -1133,36 +1134,56 @@
     return killsFor(baselineCodeBlockFor(inlineCallFrame));
 }
 
-bool Graph::isLiveInBytecode(VirtualRegister operand, CodeOrigin codeOrigin)
+bool Graph::isLiveInBytecode(Operand operand, CodeOrigin codeOrigin)
 {
     static constexpr bool verbose = false;
     
     if (verbose)
         dataLog("Checking of operand is live: ", operand, "\n");
     bool isCallerOrigin = false;
+
     CodeOrigin* codeOriginPtr = &codeOrigin;
-    for (;;) {
-        VirtualRegister reg = VirtualRegister(
-            operand.offset() - codeOriginPtr->stackOffset());
+    auto* inlineCallFrame = codeOriginPtr->inlineCallFrame();
+    // We need to handle tail callers because we may decide to exit to
+    // the return bytecode following the tail call.
+    for (; codeOriginPtr; codeOriginPtr = inlineCallFrame ? &inlineCallFrame->directCaller : nullptr) {
+        inlineCallFrame = codeOriginPtr->inlineCallFrame();
+        if (operand.isTmp()) {
+            unsigned tmpOffset = inlineCallFrame ? inlineCallFrame->tmpOffset : 0;
+            unsigned operandIndex = static_cast<unsigned>(operand.value());
+
+            ASSERT(operand.value() >= 0);
+            // This tmp should have belonged to someone we inlined.
+            if (operandIndex > tmpOffset + maxNumCheckpointTmps)
+                return false;
+
+            CodeBlock* codeBlock = baselineCodeBlockFor(inlineCallFrame);
+            if (!codeBlock->numTmps() || operandIndex < tmpOffset)
+                continue;
+
+            auto bitMap = tmpLivenessForCheckpoint(*codeBlock, codeOriginPtr->bytecodeIndex());
+            return bitMap.get(operandIndex - tmpOffset);
+        }
+
+        VirtualRegister reg = operand.virtualRegister() - codeOriginPtr->stackOffset();
         
         if (verbose)
             dataLog("reg = ", reg, "\n");
 
-        auto* inlineCallFrame = codeOriginPtr->inlineCallFrame();
-        if (operand.offset() < codeOriginPtr->stackOffset() + CallFrame::headerSizeInRegisters) {
+        if (operand.virtualRegister().offset() < codeOriginPtr->stackOffset() + CallFrame::headerSizeInRegisters) {
             if (reg.isArgument()) {
                 RELEASE_ASSERT(reg.offset() < CallFrame::headerSizeInRegisters);
 
 
                 if (inlineCallFrame->isClosureCall
-                    && reg.offset() == CallFrameSlot::callee) {
+                    && reg == CallFrameSlot::callee) {
                     if (verbose)
                         dataLog("Looks like a callee.\n");
                     return true;
                 }
                 
                 if (inlineCallFrame->isVarargs()
-                    && reg.offset() == CallFrameSlot::argumentCountIncludingThis) {
+                    && reg == CallFrameSlot::argumentCountIncludingThis) {
                     if (verbose)
                         dataLog("Looks like the argument count.\n");
                     return true;
@@ -1176,42 +1197,39 @@
             CodeBlock* codeBlock = baselineCodeBlockFor(inlineCallFrame);
             FullBytecodeLiveness& fullLiveness = livenessFor(codeBlock);
             BytecodeIndex bytecodeIndex = codeOriginPtr->bytecodeIndex();
-            return fullLiveness.operandIsLive(reg.offset(), bytecodeIndex, appropriateLivenessCalculationPoint(*codeOriginPtr, isCallerOrigin));
-        }
-
-        if (!inlineCallFrame) {
-            if (verbose)
-                dataLog("Ran out of stack, returning true.\n");
-            return true;
+            return fullLiveness.virtualRegisterIsLive(reg, bytecodeIndex, appropriateLivenessCalculationPoint(*codeOriginPtr, isCallerOrigin));
         }
 
         // Arguments are always live. This would be redundant if it wasn't for our
         // op_call_varargs inlining.
-        if (reg.isArgument()
+        if (inlineCallFrame && reg.isArgument()
             && static_cast<size_t>(reg.toArgument()) < inlineCallFrame->argumentsWithFixup.size()) {
             if (verbose)
                 dataLog("Argument is live.\n");
             return true;
         }
 
-        // We need to handle tail callers because we may decide to exit to the
-        // the return bytecode following the tail call.
-        codeOriginPtr = &inlineCallFrame->directCaller;
         isCallerOrigin = true;
     }
-    
-    RELEASE_ASSERT_NOT_REACHED();
+
+    if (operand.isTmp())
+        return false;
+
+    if (verbose)
+        dataLog("Ran out of stack, returning true.\n");
+    return true;
 }
 
-BitVector Graph::localsLiveInBytecode(CodeOrigin codeOrigin)
+BitVector Graph::localsAndTmpsLiveInBytecode(CodeOrigin codeOrigin)
 {
     BitVector result;
-    result.ensureSize(block(0)->variablesAtHead.numberOfLocals());
-    forAllLocalsLiveInBytecode(
+    unsigned numLocals = block(0)->variablesAtHead.numberOfLocals();
+    result.ensureSize(numLocals + block(0)->variablesAtHead.numberOfTmps());
+    forAllLocalsAndTmpsLiveInBytecode(
         codeOrigin,
-        [&] (VirtualRegister reg) {
-            ASSERT(reg.isLocal());
-            result.quickSet(reg.toLocal());
+        [&] (Operand operand) {
+            unsigned offset = operand.isTmp() ? numLocals + operand.value() : operand.toLocal();
+            result.quickSet(offset);
         });
     return result;
 }
@@ -1635,8 +1653,8 @@
 
     for (Node* node = operandNode; node;) {
         if (node->accessesStack(*this)) {
-            if (m_form != SSA && node->local().isArgument()) {
-                int argument = node->local().toArgument();
+            if (m_form != SSA && node->operand().isArgument()) {
+                int argument = node->operand().toArgument();
                 Node* argumentNode = m_rootToArguments.find(block(0))->value[argument];
                 // FIXME: We should match SetArgumentDefinitely nodes at other entrypoints as well:
                 // https://bugs.webkit.org/show_bug.cgi?id=175841
@@ -1656,7 +1674,7 @@
                     return MethodOfGettingAValueProfile::fromLazyOperand(
                         profiledBlock,
                         LazyOperandValueProfileKey(
-                            node->origin.semantic.bytecodeIndex(), node->local()));
+                            node->origin.semantic.bytecodeIndex(), node->operand()));
                 }
             }
 
diff --git a/Source/JavaScriptCore/dfg/DFGGraph.h b/Source/JavaScriptCore/dfg/DFGGraph.h
index ec4f22b..947efaa 100644
--- a/Source/JavaScriptCore/dfg/DFGGraph.h
+++ b/Source/JavaScriptCore/dfg/DFGGraph.h
@@ -840,13 +840,13 @@
     // Quickly query if a single local is live at the given point. This is faster than calling
     // forAllLiveInBytecode() if you will only query one local. But, if you want to know all of the
     // locals live, then calling this for each local is much slower than forAllLiveInBytecode().
-    bool isLiveInBytecode(VirtualRegister, CodeOrigin);
+    bool isLiveInBytecode(Operand, CodeOrigin);
     
-    // Quickly get all of the non-argument locals live at the given point. This doesn't give you
+    // Quickly get all of the non-argument locals and tmps live at the given point. This doesn't give you
     // any arguments because those are all presumed live. You can call forAllLiveInBytecode() to
     // also get the arguments. This is much faster than calling isLiveInBytecode() for each local.
     template<typename Functor>
-    void forAllLocalsLiveInBytecode(CodeOrigin codeOrigin, const Functor& functor)
+    void forAllLocalsAndTmpsLiveInBytecode(CodeOrigin codeOrigin, const Functor& functor)
     {
         // Support for not redundantly reporting arguments. Necessary because in case of a varargs
         // call, only the callee knows that arguments are live while in the case of a non-varargs
@@ -881,6 +881,14 @@
                 if (livenessAtBytecode[relativeLocal])
                     functor(reg);
             }
+
+            if (codeOriginPtr->bytecodeIndex().checkpoint()) {
+                ASSERT(codeBlock->numTmps());
+                auto liveTmps = tmpLivenessForCheckpoint(*codeBlock, codeOriginPtr->bytecodeIndex());
+                liveTmps.forEachSetBit([&] (size_t tmp) {
+                    functor(remapOperand(inlineCallFrame, Operand::tmp(tmp)));
+                });
+            }
             
             if (!inlineCallFrame)
                 break;
@@ -904,9 +912,9 @@
         }
     }
     
-    // Get a BitVector of all of the non-argument locals live right now. This is mostly useful if
+    // Get a BitVector of all of the locals and tmps live right now. This is mostly useful if
     // you want to compare two sets of live locals from two different CodeOrigins.
-    BitVector localsLiveInBytecode(CodeOrigin);
+    BitVector localsAndTmpsLiveInBytecode(CodeOrigin);
 
     LivenessCalculationPoint appropriateLivenessCalculationPoint(CodeOrigin origin, bool isCallerOrigin)
     {
@@ -938,16 +946,16 @@
         return LivenessCalculationPoint::BeforeUse;
     }
     
-    // Tells you all of the arguments and locals live at the given CodeOrigin. This is a small
-    // extension to forAllLocalsLiveInBytecode(), since all arguments are always presumed live.
+    // Tells you all of the operands live at the given CodeOrigin. This is a small
+    // extension to forAllLocalsAndTmpsLiveInBytecode(), since all arguments are always presumed live.
     template<typename Functor>
     void forAllLiveInBytecode(CodeOrigin codeOrigin, const Functor& functor)
     {
-        forAllLocalsLiveInBytecode(codeOrigin, functor);
+        forAllLocalsAndTmpsLiveInBytecode(codeOrigin, functor);
         
         // Report all arguments as being live.
         for (unsigned argument = block(0)->variablesAtHead.numberOfArguments(); argument--;)
-            functor(virtualRegisterForArgument(argument));
+            functor(virtualRegisterForArgumentIncludingThis(argument));
     }
     
     BytecodeKills& killsFor(CodeBlock*);
@@ -1114,6 +1122,7 @@
     std::unique_ptr<BackwardsCFG> m_backwardsCFG;
     std::unique_ptr<BackwardsDominators> m_backwardsDominators;
     std::unique_ptr<ControlEquivalenceAnalysis> m_controlEquivalenceAnalysis;
+    unsigned m_tmps;
     unsigned m_localVars;
     unsigned m_nextMachineLocal;
     unsigned m_parameterSlots;
diff --git a/Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.cpp b/Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.cpp
index 99aeffc..202d225 100644
--- a/Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.cpp
+++ b/Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.cpp
@@ -45,8 +45,8 @@
 InPlaceAbstractState::InPlaceAbstractState(Graph& graph)
     : m_graph(graph)
     , m_abstractValues(*graph.m_abstractValuesCache)
-    , m_variables(m_graph.m_codeBlock->numParameters(), graph.m_localVars)
-    , m_block(0)
+    , m_variables(OperandsLike, graph.block(0)->variablesAtHead)
+    , m_block(nullptr)
 {
 }
 
diff --git a/Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.h b/Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.h
index bf5ead1..1865e6a 100644
--- a/Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.h
+++ b/Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.h
@@ -167,13 +167,11 @@
         return fastForward(m_variables[index]);
     }
 
-    AbstractValue& operand(int operand)
+    AbstractValue& operand(Operand operand)
     {
         return variableAt(m_variables.operandIndex(operand));
     }
     
-    AbstractValue& operand(VirtualRegister operand) { return this->operand(operand.offset()); }
-    
     AbstractValue& local(size_t index)
     {
         return variableAt(m_variables.localIndex(index));
diff --git a/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp b/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
index 70eef88..c5544bb 100644
--- a/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
+++ b/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
@@ -85,8 +85,6 @@
         }
     }
     
-    MacroAssemblerCodeRef<JITThunkPtrTag> osrExitThunk = vm().getCTIStub(osrExitThunkGenerator);
-    auto osrExitThunkLabel = CodeLocationLabel<JITThunkPtrTag>(osrExitThunk.code());
     for (unsigned i = 0; i < m_jitCode->osrExit.size(); ++i) {
         OSRExitCompilationInfo& info = m_exitCompilationInfo[i];
         JumpList& failureJumps = info.m_failureJumps;
@@ -97,13 +95,7 @@
 
         jitAssertHasValidCallFrame();
         store32(TrustedImm32(i), &vm().osrExitIndex);
-        if (Options::useProbeOSRExit()) {
-            Jump target = jump();
-            addLinkTask([target, osrExitThunkLabel] (LinkBuffer& linkBuffer) {
-                linkBuffer.link(target, osrExitThunkLabel);
-            });
-        } else
-            info.m_patchableJump = patchableJump();
+        info.m_patchableJump = patchableJump();
     }
 }
 
@@ -595,11 +587,12 @@
             default:
                 break;
             }
-            
-            if (variable->local() != variable->machineLocal()) {
+
+            ASSERT(!variable->operand().isTmp());
+            if (variable->operand().virtualRegister() != variable->machineLocal()) {
                 entry->m_reshufflings.append(
                     OSREntryReshuffling(
-                        variable->local().offset(), variable->machineLocal().offset()));
+                        variable->operand().virtualRegister().offset(), variable->machineLocal().offset()));
             }
         }
     }
diff --git a/Source/JavaScriptCore/dfg/DFGJITCompiler.h b/Source/JavaScriptCore/dfg/DFGJITCompiler.h
index d1bd7da..4880874 100644
--- a/Source/JavaScriptCore/dfg/DFGJITCompiler.h
+++ b/Source/JavaScriptCore/dfg/DFGJITCompiler.h
@@ -135,7 +135,7 @@
 
     void emitStoreCallSiteIndex(CallSiteIndex callSite)
     {
-        store32(TrustedImm32(callSite.bits()), tagFor(static_cast<VirtualRegister>(CallFrameSlot::argumentCountIncludingThis)));
+        store32(TrustedImm32(callSite.bits()), tagFor(CallFrameSlot::argumentCountIncludingThis));
     }
 
     // Add a call out from JIT code, without an exception check.
diff --git a/Source/JavaScriptCore/dfg/DFGLiveCatchVariablePreservationPhase.cpp b/Source/JavaScriptCore/dfg/DFGLiveCatchVariablePreservationPhase.cpp
index 42f68ad..5c35e53 100644
--- a/Source/JavaScriptCore/dfg/DFGLiveCatchVariablePreservationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGLiveCatchVariablePreservationPhase.cpp
@@ -63,7 +63,7 @@
         return true;
     }
 
-    bool isValidFlushLocation(BasicBlock* startingBlock, unsigned index, VirtualRegister operand)
+    bool isValidFlushLocation(BasicBlock* startingBlock, unsigned index, Operand operand)
     {
         // This code is not meant to be fast. We just use it for assertions. If we got liveness wrong,
         // this function would return false for a Flush that we insert.
@@ -81,7 +81,7 @@
         auto flushIsDefinitelyInvalid = [&] (BasicBlock* block, unsigned index) {
             bool allGood = false;
             for (unsigned i = index; i--; ) {
-                if (block->at(i)->accessesStack(m_graph) && block->at(i)->local() == operand) {
+                if (block->at(i)->accessesStack(m_graph) && block->at(i)->operand() == operand) {
                     allGood = true;
                     break;
                 }
@@ -115,8 +115,7 @@
     void handleBlockForTryCatch(BasicBlock* block, InsertionSet& insertionSet)
     {
         HandlerInfo* currentExceptionHandler = nullptr;
-        FastBitVector liveAtCatchHead;
-        liveAtCatchHead.resize(m_graph.block(0)->variablesAtTail.numberOfLocals());
+        Operands<bool> liveAtCatchHead(0, m_graph.block(0)->variablesAtTail.numberOfLocals(), m_graph.block(0)->variablesAtTail.numberOfTmps());
 
         HandlerInfo* cachedHandlerResult;
         CodeOrigin cachedCodeOrigin;
@@ -133,11 +132,11 @@
                 InlineCallFrame* inlineCallFrame = origin.inlineCallFrame();
                 CodeBlock* codeBlock = m_graph.baselineCodeBlockFor(inlineCallFrame);
                 if (HandlerInfo* handler = codeBlock->handlerForBytecodeIndex(bytecodeIndexToCheck)) {
-                    liveAtCatchHead.clearAll();
+                    liveAtCatchHead.fill(false);
 
                     BytecodeIndex catchBytecodeIndex = BytecodeIndex(handler->target);
-                    m_graph.forAllLocalsLiveInBytecode(CodeOrigin(catchBytecodeIndex, inlineCallFrame), [&] (VirtualRegister operand) {
-                        liveAtCatchHead[operand.toLocal()] = true;
+                    m_graph.forAllLocalsAndTmpsLiveInBytecode(CodeOrigin(catchBytecodeIndex, inlineCallFrame), [&] (Operand operand) {
+                        liveAtCatchHead.operand(operand) = true;
                     });
 
                     cachedHandlerResult = handler;
@@ -156,12 +155,12 @@
             return cachedHandlerResult;
         };
 
-        Operands<VariableAccessData*> currentBlockAccessData(block->variablesAtTail.numberOfArguments(), block->variablesAtTail.numberOfLocals(), nullptr);
+        Operands<VariableAccessData*> currentBlockAccessData(OperandsLike, block->variablesAtTail, nullptr);
 
         auto flushEverything = [&] (NodeOrigin origin, unsigned index) {
             RELEASE_ASSERT(currentExceptionHandler);
-            auto flush = [&] (VirtualRegister operand) {
-                if ((operand.isLocal() && liveAtCatchHead[operand.toLocal()]) || operand.isArgument()) {
+            auto flush = [&] (Operand operand) {
+                if (operand.isArgument() || liveAtCatchHead.operand(operand)) {
 
                     ASSERT(isValidFlushLocation(block, index, operand));
 
@@ -178,6 +177,8 @@
 
             for (unsigned local = 0; local < block->variablesAtTail.numberOfLocals(); local++)
                 flush(virtualRegisterForLocal(local));
+            for (unsigned tmp = 0; tmp < block->variablesAtTail.numberOfTmps(); ++tmp)
+                flush(Operand::tmp(tmp));
             flush(VirtualRegister(CallFrame::thisArgumentOffset()));
         };
 
@@ -192,9 +193,8 @@
             }
 
             if (currentExceptionHandler && (node->op() == SetLocal || node->op() == SetArgumentDefinitely || node->op() == SetArgumentMaybe)) {
-                VirtualRegister operand = node->local();
-                if ((operand.isLocal() && liveAtCatchHead[operand.toLocal()]) || operand.isArgument()) {
-
+                Operand operand = node->operand();
+                if (operand.isArgument() || liveAtCatchHead.operand(operand)) {
                     ASSERT(isValidFlushLocation(block, nodeIndex, operand));
 
                     VariableAccessData* variableAccessData = currentBlockAccessData.operand(operand);
@@ -207,7 +207,7 @@
             }
 
             if (node->accessesStack(m_graph))
-                currentBlockAccessData.operand(node->local()) = node->variableAccessData();
+                currentBlockAccessData.operand(node->operand()) = node->variableAccessData();
         }
 
         if (currentExceptionHandler) {
@@ -216,7 +216,7 @@
         }
     }
 
-    VariableAccessData* newVariableAccessData(VirtualRegister operand)
+    VariableAccessData* newVariableAccessData(Operand operand)
     {
         ASSERT(!operand.isConstant());
         
diff --git a/Source/JavaScriptCore/dfg/DFGMovHintRemovalPhase.cpp b/Source/JavaScriptCore/dfg/DFGMovHintRemovalPhase.cpp
index 73a3d75..fff96bf 100644
--- a/Source/JavaScriptCore/dfg/DFGMovHintRemovalPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGMovHintRemovalPhase.cpp
@@ -85,7 +85,7 @@
         m_state.fill(Epoch());
         m_graph.forAllLiveInBytecode(
             block->terminal()->origin.forExit,
-            [&] (VirtualRegister reg) {
+            [&] (Operand reg) {
                 m_state.operand(reg) = currentEpoch;
             });
         
@@ -99,7 +99,7 @@
             Node* node = block->at(nodeIndex);
             
             if (node->op() == MovHint) {
-                Epoch localEpoch = m_state.operand(node->unlinkedLocal());
+                Epoch localEpoch = m_state.operand(node->unlinkedOperand());
                 if (DFGMovHintRemovalPhaseInternal::verbose)
                     dataLog("    At ", node, ": current = ", currentEpoch, ", local = ", localEpoch, "\n");
                 if (!localEpoch || localEpoch == currentEpoch) {
@@ -107,7 +107,7 @@
                     node->child1() = Edge();
                     m_changed = true;
                 }
-                m_state.operand(node->unlinkedLocal()) = Epoch();
+                m_state.operand(node->unlinkedOperand()) = Epoch();
             }
             
             if (mayExit(m_graph, node) != DoesNotExit)
@@ -116,15 +116,15 @@
             if (nodeIndex) {
                 forAllKilledOperands(
                     m_graph, block->at(nodeIndex - 1), node,
-                    [&] (VirtualRegister reg) {
+                    [&] (Operand operand) {
                         // This function is a bit sloppy - it might claim to kill a local even if
                         // it's still live after. We need to protect against that.
-                        if (!!m_state.operand(reg))
+                        if (!!m_state.operand(operand))
                             return;
                         
                         if (DFGMovHintRemovalPhaseInternal::verbose)
-                            dataLog("    Killed operand at ", node, ": ", reg, "\n");
-                        m_state.operand(reg) = currentEpoch;
+                            dataLog("    Killed operand at ", node, ": ", operand, "\n");
+                        m_state.operand(operand) = currentEpoch;
                     });
             }
         }
diff --git a/Source/JavaScriptCore/dfg/DFGNode.h b/Source/JavaScriptCore/dfg/DFGNode.h
index 4bfd3e5..517af10 100644
--- a/Source/JavaScriptCore/dfg/DFGNode.h
+++ b/Source/JavaScriptCore/dfg/DFGNode.h
@@ -251,13 +251,13 @@
     {
     }
     
-    StackAccessData(VirtualRegister local, FlushFormat format)
-        : local(local)
+    StackAccessData(Operand operand, FlushFormat format)
+        : operand(operand)
         , format(format)
     {
     }
     
-    VirtualRegister local;
+    Operand operand;
     VirtualRegister machineLocal;
     FlushFormat format;
     
@@ -915,6 +915,7 @@
         switch (op()) {
         case GetMyArgumentByVal:
         case GetMyArgumentByValOutOfBounds:
+        case VarargsLength:
         case LoadVarargs:
         case ForwardVarargs:
         case CallVarargs:
@@ -936,9 +937,11 @@
         switch (op()) {
         case GetMyArgumentByVal:
         case GetMyArgumentByValOutOfBounds:
+        case VarargsLength:
+            return child1();
         case LoadVarargs:
         case ForwardVarargs:
-            return child1();
+            return child2();
         case CallVarargs:
         case CallForwardVarargs:
         case ConstructVarargs:
@@ -986,9 +989,9 @@
         return m_opInfo.as<VariableAccessData*>()->find();
     }
     
-    VirtualRegister local()
+    Operand operand()
     {
-        return variableAccessData()->local();
+        return variableAccessData()->operand();
     }
     
     VirtualRegister machineLocal()
@@ -996,7 +999,7 @@
         return variableAccessData()->machineLocal();
     }
     
-    bool hasUnlinkedLocal()
+    bool hasUnlinkedOperand()
     {
         switch (op()) {
         case ExtractOSREntryLocal:
@@ -1009,10 +1012,10 @@
         }
     }
     
-    VirtualRegister unlinkedLocal()
+    Operand unlinkedOperand()
     {
-        ASSERT(hasUnlinkedLocal());
-        return VirtualRegister(m_opInfo.as<int32_t>());
+        ASSERT(hasUnlinkedOperand());
+        return Operand::fromBits(m_opInfo.as<uint64_t>());
     }
     
     bool hasStackAccessData()
@@ -1363,7 +1366,7 @@
     
     bool hasLoadVarargsData()
     {
-        return op() == LoadVarargs || op() == ForwardVarargs;
+        return op() == LoadVarargs || op() == ForwardVarargs || op() == VarargsLength;
     }
     
     LoadVarargsData* loadVarargsData()
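The DFGNode.h hunk above switches `unlinkedOperand()` from decoding an `int32_t` `VirtualRegister` to decoding a full `Operand` via `Operand::fromBits(m_opInfo.as<uint64_t>())`, so an operand can now name either a stack slot or a checkpoint temporary. A minimal, hypothetical C++ sketch of such a 64-bit packing (the names, layout, and `Kind` values here are assumptions for illustration, not JSC's actual `Operand` implementation):

```cpp
#include <cstdint>

// Hypothetical sketch: an Operand naming either a stack slot (a
// VirtualRegister offset) or a checkpoint temporary, packed into 64 bits
// so it fits in a Node's opInfo and round-trips through asBits()/fromBits().
class Operand {
public:
    enum class Kind : uint64_t { VirtualRegister = 0, Tmp = 1 };

    static Operand virtualRegister(int32_t offset)
    {
        return Operand(Kind::VirtualRegister, static_cast<uint32_t>(offset));
    }
    static Operand tmp(uint32_t index) { return Operand(Kind::Tmp, index); }

    // Kind in the high 32 bits, payload in the low 32 bits.
    uint64_t asBits() const { return (static_cast<uint64_t>(m_kind) << 32) | m_value; }
    static Operand fromBits(uint64_t bits)
    {
        return Operand(static_cast<Kind>(bits >> 32), static_cast<uint32_t>(bits));
    }

    Kind kind() const { return m_kind; }
    int32_t virtualRegisterOffset() const { return static_cast<int32_t>(m_value); }
    uint32_t tmpIndex() const { return m_value; }

    bool operator==(const Operand& other) const
    {
        return m_kind == other.m_kind && m_value == other.m_value;
    }

private:
    Operand(Kind kind, uint32_t value) : m_kind(kind), m_value(value) { }
    Kind m_kind;
    uint32_t m_value;
};
```

Under this assumed layout, negative virtual-register offsets (locals) survive the round trip because the payload is widened and narrowed through `uint32_t`.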
diff --git a/Source/JavaScriptCore/dfg/DFGNodeType.h b/Source/JavaScriptCore/dfg/DFGNodeType.h
index c047c6e..7c6d1b2 100644
--- a/Source/JavaScriptCore/dfg/DFGNodeType.h
+++ b/Source/JavaScriptCore/dfg/DFGNodeType.h
@@ -74,6 +74,7 @@
     macro(GetLocal, NodeResultJS | NodeMustGenerate) \
     macro(SetLocal, 0) \
     \
+    /* These are used in SSA form to track the stack. */\
     macro(PutStack, NodeMustGenerate) \
     macro(KillStack, NodeMustGenerate) \
     macro(GetStack, NodeResultJS) \
@@ -202,6 +203,7 @@
     macro(GetByValWithThis, NodeResultJS | NodeMustGenerate) \
     macro(GetMyArgumentByVal, NodeResultJS | NodeMustGenerate) \
     macro(GetMyArgumentByValOutOfBounds, NodeResultJS | NodeMustGenerate) \
+    macro(VarargsLength, NodeMustGenerate | NodeResultInt32) \
     macro(LoadVarargs, NodeMustGenerate) \
     macro(ForwardVarargs, NodeMustGenerate) \
     macro(PutByValDirect, NodeMustGenerate | NodeHasVarArgs) \
diff --git a/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp b/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
index 79bc796..f59f589 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
@@ -35,11 +35,9 @@
 #include "JSCInlines.h"
 
 namespace JSC { namespace DFG {
-namespace DFGOSRAvailabilityAnalysisPhaseInternal {
-static constexpr bool verbose = false;
-}
 
 class OSRAvailabilityAnalysisPhase : public Phase {
+    static constexpr bool verbose = false;
 public:
     OSRAvailabilityAnalysisPhase(Graph& graph)
         : Phase(graph, "OSR availability analysis")
@@ -75,8 +73,8 @@
             dataLog("Live: ");
             m_graph.forAllLiveInBytecode(
                 block->at(0)->origin.forExit,
-                [&] (VirtualRegister reg) {
-                    dataLog(reg, " ");
+                [&] (Operand operand) {
+                    dataLog(operand, " ");
                 });
             dataLogLn("");
         };
@@ -91,7 +89,7 @@
                 if (!block)
                     continue;
                 
-                if (DFGOSRAvailabilityAnalysisPhaseInternal::verbose) {
+                if (verbose) {
                     dataLogLn("Before changing Block #", block->index);
                     dumpAvailability(block);
                 }
@@ -106,7 +104,7 @@
                 block->ssa->availabilityAtTail = calculator.m_availability;
                 changed = true;
 
-                if (DFGOSRAvailabilityAnalysisPhaseInternal::verbose) {
+                if (verbose) {
                     dataLogLn("After changing Block #", block->index);
                     dumpAvailability(block);
                 }
@@ -120,7 +118,7 @@
                     BasicBlock* successor = block->successor(successorIndex);
                     successor->ssa->availabilityAtHead.pruneByLiveness(
                         m_graph, successor->at(0)->origin.forExit);
-                    if (DFGOSRAvailabilityAnalysisPhaseInternal::verbose) {
+                    if (verbose) {
                         dataLogLn("After pruning Block #", successor->index);
                         dumpAvailability(successor);
                         dumpBytecodeLivenessAtHead(successor);
@@ -208,28 +206,28 @@
     switch (node->op()) {
     case PutStack: {
         StackAccessData* data = node->stackAccessData();
-        m_availability.m_locals.operand(data->local).setFlush(data->flushedAt());
+        m_availability.m_locals.operand(data->operand).setFlush(data->flushedAt());
         break;
     }
         
     case KillStack: {
-        m_availability.m_locals.operand(node->unlinkedLocal()).setFlush(FlushedAt(ConflictingFlush));
+        m_availability.m_locals.operand(node->unlinkedOperand()).setFlush(FlushedAt(ConflictingFlush));
         break;
     }
 
     case GetStack: {
         StackAccessData* data = node->stackAccessData();
-        m_availability.m_locals.operand(data->local) = Availability(node, data->flushedAt());
+        m_availability.m_locals.operand(data->operand) = Availability(node, data->flushedAt());
         break;
     }
 
     case MovHint: {
-        m_availability.m_locals.operand(node->unlinkedLocal()).setNode(node->child1().node());
+        m_availability.m_locals.operand(node->unlinkedOperand()).setNode(node->child1().node());
         break;
     }
 
     case ZombieHint: {
-        m_availability.m_locals.operand(node->unlinkedLocal()).setNodeUnavailable();
+        m_availability.m_locals.operand(node->unlinkedOperand()).setNodeUnavailable();
         break;
     }
 
@@ -237,20 +235,23 @@
         unsigned entrypointIndex = node->entrypointIndex();
         const Vector<FlushFormat>& argumentFormats = m_graph.m_argumentFormats[entrypointIndex];
         for (unsigned argument = argumentFormats.size(); argument--; ) {
-            FlushedAt flushedAt = FlushedAt(argumentFormats[argument], virtualRegisterForArgument(argument));
+            FlushedAt flushedAt = FlushedAt(argumentFormats[argument], virtualRegisterForArgumentIncludingThis(argument));
             m_availability.m_locals.argument(argument) = Availability(flushedAt);
         }
         break;
     }
-        
+
+    case VarargsLength: {
+        break;
+    }
+
     case LoadVarargs:
     case ForwardVarargs: {
         LoadVarargsData* data = node->loadVarargsData();
-        m_availability.m_locals.operand(data->count) =
-            Availability(FlushedAt(FlushedInt32, data->machineCount));
+        m_availability.m_locals.operand(data->count) = Availability(FlushedAt(FlushedInt32, data->machineCount));
         for (unsigned i = data->limit; i--;) {
-            m_availability.m_locals.operand(VirtualRegister(data->start.offset() + i)) =
-                Availability(FlushedAt(FlushedJSValue, VirtualRegister(data->machineStart.offset() + i)));
+            m_availability.m_locals.operand(data->start + i) =
+                Availability(FlushedAt(FlushedJSValue, data->machineStart + i));
         }
         break;
     }
@@ -272,20 +273,20 @@
         if (inlineCallFrame->isVarargs()) {
             // Record how to read each argument and the argument count.
             Availability argumentCount =
-                m_availability.m_locals.operand(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis);
+                m_availability.m_locals.operand(VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis));
             
             m_availability.m_heap.set(PromotedHeapLocation(ArgumentCountPLoc, node), argumentCount);
         }
         
         if (inlineCallFrame->isClosureCall) {
             Availability callee = m_availability.m_locals.operand(
-                inlineCallFrame->stackOffset + CallFrameSlot::callee);
+                VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::callee));
             m_availability.m_heap.set(PromotedHeapLocation(ArgumentsCalleePLoc, node), callee);
         }
         
-        for (unsigned i = numberOfArgumentsToSkip; i < inlineCallFrame->argumentCountIncludingThis - 1; ++i) {
+        for (unsigned i = numberOfArgumentsToSkip; i < static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1); ++i) {
             Availability argument = m_availability.m_locals.operand(
-                inlineCallFrame->stackOffset + CallFrame::argumentOffset(i));
+                VirtualRegister(inlineCallFrame->stackOffset + CallFrame::argumentOffset(i)));
             
             m_availability.m_heap.set(PromotedHeapLocation(ArgumentPLoc, node, i), argument);
         }
diff --git a/Source/JavaScriptCore/dfg/DFGOSREntry.cpp b/Source/JavaScriptCore/dfg/DFGOSREntry.cpp
index a20d7d3..ac8c6cb 100644
--- a/Source/JavaScriptCore/dfg/DFGOSREntry.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSREntry.cpp
@@ -76,7 +76,7 @@
     CommaPrinter comma;
     for (size_t argumentIndex = m_expectedValues.numberOfArguments(); argumentIndex--;) {
         out.print(comma, "arg", argumentIndex, ":");
-        printOperand(virtualRegisterForArgument(argumentIndex));
+        printOperand(virtualRegisterForArgumentIncludingThis(argumentIndex));
     }
     for (size_t localIndex = 0; localIndex < m_expectedValues.numberOfLocals(); ++localIndex) {
         out.print(comma, "loc", localIndex, ":");
@@ -313,7 +313,7 @@
     
     // 7) Fix the call frame to have the right code block.
     
-    *bitwise_cast<CodeBlock**>(pivot - 1 - CallFrameSlot::codeBlock) = codeBlock;
+    *bitwise_cast<CodeBlock**>(pivot - (CallFrameSlot::codeBlock + 1)) = codeBlock;
     
     dataLogLnIf(Options::verboseOSR(), "    OSR returning data buffer ", RawPointer(scratch));
     return scratch;
@@ -341,7 +341,7 @@
 
     // We're only allowed to OSR enter if we've proven we have compatible argument types.
     for (unsigned argument = 0; argument < catchEntrypoint->argumentFormats.size(); ++argument) {
-        JSValue value = callFrame->uncheckedR(virtualRegisterForArgument(argument)).jsValue();
+        JSValue value = callFrame->uncheckedR(virtualRegisterForArgumentIncludingThis(argument)).jsValue();
         switch (catchEntrypoint->argumentFormats[argument]) {
         case DFG::FlushedInt32:
             if (!value.isInt32())
@@ -372,10 +372,10 @@
 
     auto instruction = baselineCodeBlock->instructions().at(callFrame->bytecodeIndex());
     ASSERT(instruction->is<OpCatch>());
-    ValueProfileAndOperandBuffer* buffer = instruction->as<OpCatch>().metadata(baselineCodeBlock).m_buffer;
+    ValueProfileAndVirtualRegisterBuffer* buffer = instruction->as<OpCatch>().metadata(baselineCodeBlock).m_buffer;
     JSValue* dataBuffer = reinterpret_cast<JSValue*>(dfgCommon->catchOSREntryBuffer->dataBuffer());
     unsigned index = 0;
-    buffer->forEach([&] (ValueProfileAndOperand& profile) {
+    buffer->forEach([&] (ValueProfileAndVirtualRegister& profile) {
         if (!VirtualRegister(profile.m_operand).isLocal())
             return;
         dataBuffer[index] = callFrame->uncheckedR(profile.m_operand).jsValue();
diff --git a/Source/JavaScriptCore/dfg/DFGOSREntrypointCreationPhase.cpp b/Source/JavaScriptCore/dfg/DFGOSREntrypointCreationPhase.cpp
index dc76bcb..6096866 100644
--- a/Source/JavaScriptCore/dfg/DFGOSREntrypointCreationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSREntrypointCreationPhase.cpp
@@ -102,10 +102,10 @@
             VariableAccessData* variable = previousHead->variableAccessData();
             locals[local] = newRoot->appendNode(
                 m_graph, variable->prediction(), ExtractOSREntryLocal, origin,
-                OpInfo(variable->local().offset()));
+                OpInfo(variable->operand().virtualRegister()));
             
             newRoot->appendNode(
-                m_graph, SpecNone, MovHint, origin, OpInfo(variable->local().offset()),
+                m_graph, SpecNone, MovHint, origin, OpInfo(variable->operand().virtualRegister()),
                 Edge(locals[local]));
         }
 
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExit.cpp b/Source/JavaScriptCore/dfg/DFGOSRExit.cpp
index 42cb5e6..92380a7 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExit.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExit.cpp
@@ -30,6 +30,7 @@
 
 #include "AssemblyHelpers.h"
 #include "BytecodeUseDef.h"
+#include "CheckpointOSRExitSideState.h"
 #include "ClonedArguments.h"
 #include "DFGGraph.h"
 #include "DFGMayExit.h"
@@ -47,895 +48,6 @@
 
 namespace JSC { namespace DFG {
 
-// Probe based OSR Exit.
-
-using CPUState = Probe::CPUState;
-using Context = Probe::Context;
-using Frame = Probe::Frame;
-
-static void reifyInlinedCallFrames(Probe::Context&, CodeBlock* baselineCodeBlock, const OSRExitBase&);
-static void adjustAndJumpToTarget(Probe::Context&, VM&, CodeBlock*, CodeBlock* baselineCodeBlock, OSRExit&);
-static void printOSRExit(Context&, uint32_t osrExitIndex, const OSRExit&);
-
-static JSValue jsValueFor(CPUState& cpu, JSValueSource source)
-{
-    if (source.isAddress()) {
-        JSValue result;
-        std::memcpy(&result, cpu.gpr<uint8_t*>(source.base()) + source.offset(), sizeof(JSValue));
-        return result;
-    }
-#if USE(JSVALUE64)
-    return JSValue::decode(cpu.gpr<EncodedJSValue>(source.gpr()));
-#else
-    if (source.hasKnownTag())
-        return JSValue(source.tag(), cpu.gpr<int32_t>(source.payloadGPR()));
-    return JSValue(cpu.gpr<int32_t>(source.tagGPR()), cpu.gpr<int32_t>(source.payloadGPR()));
-#endif
-}
-
-#if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
-
-// Based on AssemblyHelpers::emitRestoreCalleeSavesFor().
-static void restoreCalleeSavesFor(Context& context, CodeBlock* codeBlock)
-{
-    ASSERT(codeBlock);
-
-    const RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
-    RegisterSet dontRestoreRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
-    unsigned registerCount = calleeSaves->size();
-
-    UCPURegister* physicalStackFrame = context.fp<UCPURegister*>();
-    for (unsigned i = 0; i < registerCount; i++) {
-        RegisterAtOffset entry = calleeSaves->at(i);
-        if (dontRestoreRegisters.get(entry.reg()))
-            continue;
-        // The callee saved values come from the original stack, not the recovered stack.
-        // Hence, we read the values directly from the physical stack memory instead of
-        // going through context.stack().
-        ASSERT(!(entry.offset() % sizeof(UCPURegister)));
-        context.gpr(entry.reg().gpr()) = physicalStackFrame[entry.offset() / sizeof(UCPURegister)];
-    }
-}
-
-// Based on AssemblyHelpers::emitSaveCalleeSavesFor().
-static void saveCalleeSavesFor(Context& context, CodeBlock* codeBlock)
-{
-    auto& stack = context.stack();
-    ASSERT(codeBlock);
-
-    const RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
-    RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
-    unsigned registerCount = calleeSaves->size();
-
-    for (unsigned i = 0; i < registerCount; i++) {
-        RegisterAtOffset entry = calleeSaves->at(i);
-        if (dontSaveRegisters.get(entry.reg()))
-            continue;
-        stack.set(context.fp(), entry.offset(), context.gpr<UCPURegister>(entry.reg().gpr()));
-    }
-}
-
-// Based on AssemblyHelpers::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer().
-static void restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(Context& context)
-{
-    VM& vm = *context.arg<VM*>();
-
-    RegisterAtOffsetList* allCalleeSaves = RegisterSet::vmCalleeSaveRegisterOffsets();
-    RegisterSet dontRestoreRegisters = RegisterSet::stackRegisters();
-    unsigned registerCount = allCalleeSaves->size();
-
-    VMEntryRecord* entryRecord = vmEntryRecord(vm.topEntryFrame);
-    UCPURegister* calleeSaveBuffer = reinterpret_cast<UCPURegister*>(entryRecord->calleeSaveRegistersBuffer);
-
-    // Restore all callee saves.
-    for (unsigned i = 0; i < registerCount; i++) {
-        RegisterAtOffset entry = allCalleeSaves->at(i);
-        if (dontRestoreRegisters.get(entry.reg()))
-            continue;
-        size_t uintptrOffset = entry.offset() / sizeof(UCPURegister);
-        if (entry.reg().isGPR())
-            context.gpr(entry.reg().gpr()) = calleeSaveBuffer[uintptrOffset];
-        else {
-#if USE(JSVALUE64)
-            context.fpr(entry.reg().fpr()) = bitwise_cast<double>(calleeSaveBuffer[uintptrOffset]);
-#else
-            // FIXME: <https://webkit.org/b/193275> support callee-saved floating point registers on 32-bit architectures
-            RELEASE_ASSERT_NOT_REACHED();
-#endif
-        }
-    }
-}
-
-// Based on AssemblyHelpers::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer().
-static void copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(Context& context)
-{
-    VM& vm = *context.arg<VM*>();
-    auto& stack = context.stack();
-
-    VMEntryRecord* entryRecord = vmEntryRecord(vm.topEntryFrame);
-    void* calleeSaveBuffer = entryRecord->calleeSaveRegistersBuffer;
-
-    RegisterAtOffsetList* allCalleeSaves = RegisterSet::vmCalleeSaveRegisterOffsets();
-    RegisterSet dontCopyRegisters = RegisterSet::stackRegisters();
-    unsigned registerCount = allCalleeSaves->size();
-
-    for (unsigned i = 0; i < registerCount; i++) {
-        RegisterAtOffset entry = allCalleeSaves->at(i);
-        if (dontCopyRegisters.get(entry.reg()))
-            continue;
-        if (entry.reg().isGPR())
-            stack.set(calleeSaveBuffer, entry.offset(), context.gpr<UCPURegister>(entry.reg().gpr()));
-        else {
-#if USE(JSVALUE64)
-            stack.set(calleeSaveBuffer, entry.offset(), context.fpr<UCPURegister>(entry.reg().fpr()));
-#else
-            // FIXME: <https://webkit.org/b/193275> support callee-saved floating point registers on 32-bit architectures
-            RELEASE_ASSERT_NOT_REACHED();
-#endif
-        }
-    }
-}
-
-// Based on AssemblyHelpers::emitSaveOrCopyCalleeSavesFor().
-static void saveOrCopyCalleeSavesFor(Context& context, CodeBlock* codeBlock, VirtualRegister offsetVirtualRegister, bool wasCalledViaTailCall)
-{
-    Frame frame(context.fp(), context.stack());
-    ASSERT(codeBlock);
-
-    const RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
-    RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
-    unsigned registerCount = calleeSaves->size();
-
-    RegisterSet baselineCalleeSaves = RegisterSet::llintBaselineCalleeSaveRegisters();
-
-    for (unsigned i = 0; i < registerCount; i++) {
-        RegisterAtOffset entry = calleeSaves->at(i);
-        if (dontSaveRegisters.get(entry.reg()))
-            continue;
-
-        uintptr_t savedRegisterValue;
-
-        if (wasCalledViaTailCall && baselineCalleeSaves.get(entry.reg()))
-            savedRegisterValue = frame.get<uintptr_t>(entry.offset());
-        else
-            savedRegisterValue = context.gpr(entry.reg().gpr());
-
-        frame.set(offsetVirtualRegister.offsetInBytes() + entry.offset(), savedRegisterValue);
-    }
-}
-#else // not NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
-
-static void restoreCalleeSavesFor(Context&, CodeBlock*) { }
-static void saveCalleeSavesFor(Context&, CodeBlock*) { }
-static void restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(Context&) { }
-static void copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(Context&) { }
-static void saveOrCopyCalleeSavesFor(Context&, CodeBlock*, VirtualRegister, bool) { }
-
-#endif // NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
-
-static JSCell* createDirectArgumentsDuringExit(Context& context, CodeBlock* codeBlock, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
-{
-    VM& vm = *context.arg<VM*>();
-
-    ASSERT(vm.heap.isDeferred());
-
-    if (inlineCallFrame)
-        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
-
-    unsigned length = argumentCount - 1;
-    unsigned capacity = std::max(length, static_cast<unsigned>(codeBlock->numParameters() - 1));
-    DirectArguments* result = DirectArguments::create(
-        vm, codeBlock->globalObject()->directArgumentsStructure(), length, capacity);
-
-    result->setCallee(vm, callee);
-
-    void* frameBase = context.fp<Register*>() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0);
-    Frame frame(frameBase, context.stack());
-    for (unsigned i = length; i--;)
-        result->setIndexQuickly(vm, i, frame.argument(i));
-
-    return result;
-}
-
-static JSCell* createClonedArgumentsDuringExit(Context& context, CodeBlock* codeBlock, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
-{
-    VM& vm = *context.arg<VM*>();
-
-    ASSERT(vm.heap.isDeferred());
-
-    if (inlineCallFrame)
-        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
-
-    JSGlobalObject* globalObject = codeBlock->globalObject();
-    unsigned length = argumentCount - 1;
-    ClonedArguments* result = ClonedArguments::createEmpty(
-        vm, globalObject->clonedArgumentsStructure(), callee, length);
-
-    void* frameBase = context.fp<Register*>() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0);
-    Frame frame(frameBase, context.stack());
-    for (unsigned i = length; i--;)
-        result->putDirectIndex(globalObject, i, frame.argument(i));
-    return result;
-}
-
-static void emitRestoreArguments(Context& context, CodeBlock* codeBlock, DFG::JITCode* dfgJITCode, const Operands<ValueRecovery>& operands)
-{
-    Frame frame(context.fp(), context.stack());
-
-    HashMap<MinifiedID, int> alreadyAllocatedArguments; // Maps phantom arguments node ID to operand.
-    for (size_t index = 0; index < operands.size(); ++index) {
-        const ValueRecovery& recovery = operands[index];
-        int operand = operands.operandForIndex(index);
-
-        if (recovery.technique() != DirectArgumentsThatWereNotCreated
-            && recovery.technique() != ClonedArgumentsThatWereNotCreated)
-            continue;
-
-        MinifiedID id = recovery.nodeID();
-        auto iter = alreadyAllocatedArguments.find(id);
-        if (iter != alreadyAllocatedArguments.end()) {
-            frame.setOperand(operand, frame.operand(iter->value));
-            continue;
-        }
-
-        InlineCallFrame* inlineCallFrame =
-            dfgJITCode->minifiedDFG.at(id)->inlineCallFrame();
-
-        int stackOffset;
-        if (inlineCallFrame)
-            stackOffset = inlineCallFrame->stackOffset;
-        else
-            stackOffset = 0;
-
-        JSFunction* callee;
-        if (!inlineCallFrame || inlineCallFrame->isClosureCall)
-            callee = jsCast<JSFunction*>(frame.operand(stackOffset + CallFrameSlot::callee).asCell());
-        else
-            callee = jsCast<JSFunction*>(inlineCallFrame->calleeRecovery.constant().asCell());
-
-        int32_t argumentCount;
-        if (!inlineCallFrame || inlineCallFrame->isVarargs())
-            argumentCount = frame.operand<int32_t>(stackOffset + CallFrameSlot::argumentCountIncludingThis, PayloadOffset);
-        else
-            argumentCount = inlineCallFrame->argumentCountIncludingThis;
-
-        JSCell* argumentsObject;
-        switch (recovery.technique()) {
-        case DirectArgumentsThatWereNotCreated:
-            argumentsObject = createDirectArgumentsDuringExit(context, codeBlock, inlineCallFrame, callee, argumentCount);
-            break;
-        case ClonedArgumentsThatWereNotCreated:
-            argumentsObject = createClonedArgumentsDuringExit(context, codeBlock, inlineCallFrame, callee, argumentCount);
-            break;
-        default:
-            RELEASE_ASSERT_NOT_REACHED();
-            break;
-        }
-        frame.setOperand(operand, JSValue(argumentsObject));
-
-        alreadyAllocatedArguments.add(id, operand);
-    }
-}
-
-// The following is a list of extra initializations that need to be done in order
-// of most likely needed (lower enum value) to least likely needed (higher enum value).
-// Each level initialization includes the previous lower enum value (see use of the
-// extraInitializationLevel value below).
-enum class ExtraInitializationLevel {
-    None,
-    SpeculationRecovery,
-    ValueProfileUpdate,
-    ArrayProfileUpdate,
-    Other
-};
-
-void OSRExit::executeOSRExit(Context& context)
-{
-    VM& vm = *context.arg<VM*>();
-    auto scope = DECLARE_THROW_SCOPE(vm);
-
-    CallFrame* callFrame = context.fp<CallFrame*>();
-    ASSERT(&callFrame->deprecatedVM() == &vm);
-    auto& cpu = context.cpu;
-
-    if (validateDFGDoesGC) {
-        // We're about to exit optimized code. So, there's no longer any optimized
-        // code running that expects no GC.
-        vm.heap.setExpectDoesGC(true);
-    }
-
-    if (vm.callFrameForCatch) {
-        callFrame = vm.callFrameForCatch;
-        context.fp() = callFrame;
-    }
-
-    CodeBlock* codeBlock = callFrame->codeBlock();
-    ASSERT(codeBlock);
-    ASSERT(codeBlock->jitType() == JITType::DFGJIT);
-
-    // It's sort of preferable that we don't GC while in here. Anyways, doing so wouldn't
-    // really be profitable.
-    DeferGCForAWhile deferGC(vm.heap);
-
-    uint32_t exitIndex = vm.osrExitIndex;
-    DFG::JITCode* dfgJITCode = codeBlock->jitCode()->dfg();
-    OSRExit& exit = dfgJITCode->osrExit[exitIndex];
-
-    ASSERT(!vm.callFrameForCatch || exit.m_kind == GenericUnwind);
-    EXCEPTION_ASSERT_UNUSED(scope, !!scope.exception() || !exit.isExceptionHandler());
-
-    if (UNLIKELY(!exit.exitState)) {
-        ExtraInitializationLevel extraInitializationLevel = ExtraInitializationLevel::None;
-
-        // We only need to execute this block once for each OSRExit record. The computed
-        // results will be cached in the OSRExitState record for use of the rest of the
-        // exit ramp code.
-
-        CodeBlock* baselineCodeBlock = codeBlock->baselineAlternative();
-        ASSERT(JITCode::isBaselineCode(baselineCodeBlock->jitType()));
-
-        SpeculationRecovery* recovery = nullptr;
-        if (exit.m_recoveryIndex != UINT_MAX) {
-            recovery = &dfgJITCode->speculationRecovery[exit.m_recoveryIndex];
-            extraInitializationLevel = std::max(extraInitializationLevel, ExtraInitializationLevel::SpeculationRecovery);
-        }
-
-        if (UNLIKELY(exit.m_kind == GenericUnwind))
-            extraInitializationLevel = std::max(extraInitializationLevel, ExtraInitializationLevel::Other);
-
-        ArrayProfile* arrayProfile = nullptr;
-        if (!!exit.m_jsValueSource) {
-            if (exit.m_valueProfile)
-                extraInitializationLevel = std::max(extraInitializationLevel, ExtraInitializationLevel::ValueProfileUpdate);
-            if (exit.m_kind == BadCache || exit.m_kind == BadIndexingType) {
-                CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
-                CodeBlock* profiledCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(codeOrigin, baselineCodeBlock);
-                arrayProfile = profiledCodeBlock->getArrayProfile(codeOrigin.bytecodeIndex());
-                if (arrayProfile)
-                    extraInitializationLevel = std::max(extraInitializationLevel, ExtraInitializationLevel::ArrayProfileUpdate);
-            }
-        }
-
-        int32_t activeThreshold = baselineCodeBlock->adjustedCounterValue(Options::thresholdForOptimizeAfterLongWarmUp());
-        double adjustedThreshold = applyMemoryUsageHeuristicsAndConvertToInt(activeThreshold, baselineCodeBlock);
-        ASSERT(adjustedThreshold > 0);
-        adjustedThreshold = BaselineExecutionCounter::clippedThreshold(codeBlock->globalObject(), adjustedThreshold);
-
-        CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock);
-        bool exitToLLInt = Options::forceOSRExitToLLInt() || codeBlockForExit->jitType() == JITType::InterpreterThunk;
-        void* jumpTarget;
-        if (exitToLLInt) {
-            BytecodeIndex bytecodeOffset = exit.m_codeOrigin.bytecodeIndex();
-            const Instruction& currentInstruction = *codeBlockForExit->instructions().at(bytecodeOffset).ptr();
-            MacroAssemblerCodePtr<JSEntryPtrTag> destination = LLInt::getCodePtr<JSEntryPtrTag>(currentInstruction);
-            jumpTarget = destination.executableAddress();    
-        } else {
-            const JITCodeMap& codeMap = codeBlockForExit->jitCodeMap();
-            CodeLocationLabel<JSEntryPtrTag> codeLocation = codeMap.find(exit.m_codeOrigin.bytecodeIndex());
-            ASSERT(codeLocation);
-            jumpTarget = codeLocation.executableAddress();
-        }
-
-        // Compute the value recoveries.
-        Operands<ValueRecovery> operands;
-        Vector<UndefinedOperandSpan> undefinedOperandSpans;
-        dfgJITCode->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, dfgJITCode->minifiedDFG, exit.m_streamIndex, operands, &undefinedOperandSpans);
-        ptrdiff_t stackPointerOffset = -static_cast<ptrdiff_t>(codeBlock->jitCode()->dfgCommon()->requiredRegisterCountForExit) * sizeof(Register);
-
-        exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock, operands, WTFMove(undefinedOperandSpans), recovery, stackPointerOffset, activeThreshold, adjustedThreshold, jumpTarget, arrayProfile, exitToLLInt));
-
-        if (UNLIKELY(vm.m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation)) {
-            Profiler::Database& database = *vm.m_perBytecodeProfiler;
-            Profiler::Compilation* compilation = codeBlock->jitCode()->dfgCommon()->compilation.get();
-
-            Profiler::OSRExit* profilerExit = compilation->addOSRExit(
-                exitIndex, Profiler::OriginStack(database, codeBlock, exit.m_codeOrigin),
-                exit.m_kind, exit.m_kind == UncountableInvalidation);
-            exit.exitState->profilerExit = profilerExit;
-            extraInitializationLevel = std::max(extraInitializationLevel, ExtraInitializationLevel::Other);
-        }
-
-        if (UNLIKELY(Options::printEachOSRExit()))
-            extraInitializationLevel = std::max(extraInitializationLevel, ExtraInitializationLevel::Other);
-
-        exit.exitState->extraInitializationLevel = extraInitializationLevel;
-
-        if (UNLIKELY(Options::verboseOSR() || Options::verboseDFGOSRExit())) {
-            dataLogF("DFG OSR exit #%u (%s, %s) from %s, with operands = %s\n",
-                exitIndex, toCString(exit.m_codeOrigin).data(),
-                exitKindToString(exit.m_kind), toCString(*codeBlock).data(),
-                toCString(ignoringContext<DumpContext>(operands)).data());
-        }
-    }
-
-    OSRExitState& exitState = *exit.exitState.get();
-    CodeBlock* baselineCodeBlock = exitState.baselineCodeBlock;
-    ASSERT(JITCode::isBaselineCode(baselineCodeBlock->jitType()));
-
-    Operands<ValueRecovery>& operands = exitState.operands;
-    Vector<UndefinedOperandSpan>& undefinedOperandSpans = exitState.undefinedOperandSpans;
-
-    context.sp() = context.fp<uint8_t*>() + exitState.stackPointerOffset;
-
-    // The only reason for using this do while loop is so we can break out midway when appropriate.
-    do {
-        auto extraInitializationLevel = static_cast<ExtraInitializationLevel>(exitState.extraInitializationLevel);
-
-        if (extraInitializationLevel == ExtraInitializationLevel::None)
-            break;
-
-        // Begin extra initilization level: SpeculationRecovery
-
-        // We need to do speculation recovery first because array profiling and value profiling
-        // may rely on a value that it recovers. However, that doesn't mean that it is likely
-        // to have a recovery value. So, we'll decorate it as UNLIKELY.
-        SpeculationRecovery* recovery = exitState.recovery;
-        if (UNLIKELY(recovery)) {
-            switch (recovery->type()) {
-            case SpeculativeAdd:
-                cpu.gpr(recovery->dest()) = cpu.gpr<uint32_t>(recovery->dest()) - cpu.gpr<uint32_t>(recovery->src());
-#if USE(JSVALUE64)
-                ASSERT(!(cpu.gpr(recovery->dest()) >> 32));
-                cpu.gpr(recovery->dest()) |= JSValue::NumberTag;
-#endif
-                break;
-
-            case SpeculativeAddSelf:
-                cpu.gpr(recovery->dest()) = static_cast<uint32_t>(cpu.gpr<int32_t>(recovery->dest()) >> 1) ^ 0x80000000U;
-#if USE(JSVALUE64)
-                ASSERT(!(cpu.gpr(recovery->dest()) >> 32));
-                cpu.gpr(recovery->dest()) |= JSValue::NumberTag;
-#endif
-                break;
-
-            case SpeculativeAddImmediate:
-                cpu.gpr(recovery->dest()) = (cpu.gpr<uint32_t>(recovery->dest()) - recovery->immediate());
-#if USE(JSVALUE64)
-                ASSERT(!(cpu.gpr(recovery->dest()) >> 32));
-                cpu.gpr(recovery->dest()) |= JSValue::NumberTag;
-#endif
-                break;
-
-            case BooleanSpeculationCheck:
-#if USE(JSVALUE64)
-                cpu.gpr(recovery->dest()) = cpu.gpr(recovery->dest()) ^ JSValue::ValueFalse;
-#endif
-                break;
-
-            default:
-                break;
-            }
-        }
-        if (extraInitializationLevel <= ExtraInitializationLevel::SpeculationRecovery)
-            break;
-
-        // Begin extra initialization level: ValueProfileUpdate
-        JSValue profiledValue;
-        if (!!exit.m_jsValueSource) {
-            profiledValue = jsValueFor(cpu, exit.m_jsValueSource);
-            if (MethodOfGettingAValueProfile profile = exit.m_valueProfile)
-                profile.reportValue(profiledValue);
-        }
-        if (extraInitializationLevel <= ExtraInitializationLevel::ValueProfileUpdate)
-            break;
-
-        // Begin extra initialization level: ArrayProfileUpdate
-        if (ArrayProfile* arrayProfile = exitState.arrayProfile) {
-            ASSERT(!!exit.m_jsValueSource);
-            ASSERT(exit.m_kind == BadCache || exit.m_kind == BadIndexingType);
-            CodeBlock* profiledCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOriginForExitProfile, baselineCodeBlock);
-            const Instruction* instruction = profiledCodeBlock->instructions().at(exit.m_codeOriginForExitProfile.bytecodeIndex()).ptr();
-            bool doProfile = !instruction->is<OpGetById>() || instruction->as<OpGetById>().metadata(profiledCodeBlock).m_modeMetadata.mode == GetByIdMode::ArrayLength;
-            if (doProfile) {
-                Structure* structure = profiledValue.asCell()->structure(vm);
-                arrayProfile->observeStructure(structure);
-                arrayProfile->observeArrayMode(arrayModesFromStructure(structure));
-            }
-        }
-        if (extraInitializationLevel <= ExtraInitializationLevel::ArrayProfileUpdate)
-            break;
-
-        // Begin extra initialization level: Other
-        if (UNLIKELY(exit.m_kind == GenericUnwind)) {
-            // We are acting as a de facto op_catch because we arrive here from genericUnwind().
-            // So, we must restore our call frame and stack pointer.
-            restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(context);
-            ASSERT(context.fp() == vm.callFrameForCatch);
-        }
-
-        if (exitState.profilerExit)
-            exitState.profilerExit->incCount();
-
-        if (UNLIKELY(Options::printEachOSRExit()))
-            printOSRExit(context, vm.osrExitIndex, exit);
-
-    } while (false); // End extra initialization.
-
-    Frame frame(cpu.fp(), context.stack());
-    ASSERT(!(context.fp<uintptr_t>() & 0x7));
-
-#if USE(JSVALUE64)
-    ASSERT(cpu.gpr<int64_t>(GPRInfo::numberTagRegister) == JSValue::NumberTag);
-    ASSERT(cpu.gpr<int64_t>(GPRInfo::notCellMaskRegister) == JSValue::NotCellMask);
-#endif
-
-    // Do all data format conversions and store the results into the stack.
-    // Note: we need to recover values before restoring callee save registers below
-    // because the recovery may rely on values in some of callee save registers.
-
-    int calleeSaveSpaceAsVirtualRegisters = static_cast<int>(baselineCodeBlock->calleeSaveSpaceAsVirtualRegisters());
-    size_t numberOfOperands = operands.size();
-    size_t numUndefinedOperandSpans = undefinedOperandSpans.size();
-
-    size_t nextUndefinedSpanIndex = 0;
-    size_t nextUndefinedOperandIndex = numberOfOperands;
-    if (numUndefinedOperandSpans)
-        nextUndefinedOperandIndex = undefinedOperandSpans[nextUndefinedSpanIndex].firstIndex;
-
-    JSValue undefined = jsUndefined();
-    for (size_t spanIndex = 0; spanIndex < numUndefinedOperandSpans; ++spanIndex) {
-        auto& span = undefinedOperandSpans[spanIndex];
-        int firstOffset = span.minOffset;
-        int lastOffset = firstOffset + span.numberOfRegisters;
-
-        for (int offset = firstOffset; offset < lastOffset; ++offset)
-            frame.setOperand(offset, undefined);
-    }
-
-    for (size_t index = 0; index < numberOfOperands; ++index) {
-        const ValueRecovery& recovery = operands[index];
-        VirtualRegister reg = operands.virtualRegisterForIndex(index);
-
-        if (UNLIKELY(index == nextUndefinedOperandIndex)) {
-            index += undefinedOperandSpans[nextUndefinedSpanIndex++].numberOfRegisters - 1;
-            if (nextUndefinedSpanIndex < numUndefinedOperandSpans)
-                nextUndefinedOperandIndex = undefinedOperandSpans[nextUndefinedSpanIndex].firstIndex;
-            else
-                nextUndefinedOperandIndex = numberOfOperands;
-            continue;
-        }
-
-        if (reg.isLocal() && reg.toLocal() < calleeSaveSpaceAsVirtualRegisters)
-            continue;
-
-        int operand = reg.offset();
-
-        switch (recovery.technique()) {
-        case DisplacedInJSStack:
-            frame.setOperand(operand, callFrame->r(recovery.virtualRegister()).asanUnsafeJSValue());
-            break;
-
-        case InFPR:
-            frame.setOperand(operand, cpu.fpr<JSValue>(recovery.fpr()));
-            break;
-
-#if USE(JSVALUE64)
-        case InGPR:
-            frame.setOperand(operand, cpu.gpr<JSValue>(recovery.gpr()));
-            break;
-#else
-        case InPair:
-            frame.setOperand(operand, JSValue(cpu.gpr<int32_t>(recovery.tagGPR()), cpu.gpr<int32_t>(recovery.payloadGPR())));
-            break;
-#endif
-
-        case UnboxedCellInGPR:
-            frame.setOperand(operand, JSValue(cpu.gpr<JSCell*>(recovery.gpr())));
-            break;
-
-        case CellDisplacedInJSStack:
-            frame.setOperand(operand, JSValue(callFrame->r(recovery.virtualRegister()).asanUnsafeUnboxedCell()));
-            break;
-
-#if USE(JSVALUE32_64)
-        case UnboxedBooleanInGPR:
-            frame.setOperand(operand, jsBoolean(cpu.gpr<bool>(recovery.gpr())));
-            break;
-#endif
-
-        case BooleanDisplacedInJSStack:
-#if USE(JSVALUE64)
-            frame.setOperand(operand, callFrame->r(recovery.virtualRegister()).asanUnsafeJSValue());
-#else
-            frame.setOperand(operand, jsBoolean(callFrame->r(recovery.virtualRegister()).asanUnsafeJSValue().payload()));
-#endif
-            break;
-
-        case UnboxedInt32InGPR:
-            frame.setOperand(operand, JSValue(cpu.gpr<int32_t>(recovery.gpr())));
-            break;
-
-        case Int32DisplacedInJSStack:
-            frame.setOperand(operand, JSValue(callFrame->r(recovery.virtualRegister()).asanUnsafeUnboxedInt32()));
-            break;
-
-#if USE(JSVALUE64)
-        case UnboxedInt52InGPR:
-            frame.setOperand(operand, JSValue(cpu.gpr<int64_t>(recovery.gpr()) >> JSValue::int52ShiftAmount));
-            break;
-
-        case Int52DisplacedInJSStack:
-            frame.setOperand(operand, JSValue(callFrame->r(recovery.virtualRegister()).asanUnsafeUnboxedInt52()));
-            break;
-
-        case UnboxedStrictInt52InGPR:
-            frame.setOperand(operand, JSValue(cpu.gpr<int64_t>(recovery.gpr())));
-            break;
-
-        case StrictInt52DisplacedInJSStack:
-            frame.setOperand(operand, JSValue(callFrame->r(recovery.virtualRegister()).asanUnsafeUnboxedStrictInt52()));
-            break;
-#endif
-
-        case UnboxedDoubleInFPR:
-            frame.setOperand(operand, JSValue(JSValue::EncodeAsDouble, purifyNaN(cpu.fpr(recovery.fpr()))));
-            break;
-
-        case DoubleDisplacedInJSStack:
-            frame.setOperand(operand, JSValue(JSValue::EncodeAsDouble, purifyNaN(callFrame->r(recovery.virtualRegister()).asanUnsafeUnboxedDouble())));
-            break;
-
-        case Constant:
-            frame.setOperand(operand, recovery.constant());
-            break;
-
-        case DirectArgumentsThatWereNotCreated:
-        case ClonedArgumentsThatWereNotCreated:
-            // Don't do this, yet.
-            break;
-
-        default:
-            RELEASE_ASSERT_NOT_REACHED();
-            break;
-        }
-    }
-
-    // Restore the DFG callee saves and then save the ones the baseline JIT uses.
-    restoreCalleeSavesFor(context, codeBlock);
-    saveCalleeSavesFor(context, baselineCodeBlock);
-
-#if USE(JSVALUE64)
-    cpu.gpr(GPRInfo::numberTagRegister) = static_cast<JSC::UCPURegister>(JSValue::NumberTag);
-    cpu.gpr(GPRInfo::notCellMaskRegister) = static_cast<JSC::UCPURegister>(JSValue::NotCellMask);
-#endif
-
-    if (exit.isExceptionHandler())
-        copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(context);
-
-    // Now that things on the stack are recovered, do the arguments recovery. We assume that arguments
-    // recoveries don't recursively refer to each other. But, we don't assume that they only
-    // refer to certain ranges of locals. Hence we need to do this here, once the stack is sensible.
-    // Note that we also roughly assume that the arguments might still be materialized outside of their
-    // inline call frame scope - but for now the DFG wouldn't do that.
-
-    DFG::emitRestoreArguments(context, codeBlock, dfgJITCode, operands);
-
-    // Adjust the old JIT's execute counter. Since we are exiting OSR, we know
-    // that all new calls into this code will go to the new JIT, so the execute
-    // counter only affects call frames that performed OSR exit and call frames
-    // that were still executing the old JIT at the time of another call frame's
-    // OSR exit. We want to ensure that the following is true:
-    //
-    // (a) Code that performs an OSR exit gets a chance to reenter optimized
-    //     code eventually, since optimized code is faster. But we don't
-    //     want to do such reentry too aggressively (see (c) below).
-    //
-    // (b) If there is code on the call stack that is still running the old
-    //     JIT's code and has never OSR'd, then it should get a chance to
-    //     perform OSR entry despite the fact that we've exited.
-    //
-    // (c) Code that performs an OSR exit should not immediately retry OSR
-    //     entry, since both forms of OSR are expensive. OSR entry is
-    //     particularly expensive.
-    //
-    // (d) Frequent OSR failures, even those that do not result in the code
-    //     running in a hot loop, result in recompilation getting triggered.
-    //
-    // To ensure (c), we'd like to set the execute counter to
-    // counterValueForOptimizeAfterWarmUp(). This seems like it would endanger
-    // (a) and (b), since then every OSR exit would delay the opportunity for
-    // every call frame to perform OSR entry. Essentially, if OSR exit happens
-    // frequently and the function has few loops, then the counter will never
-    // become non-negative and OSR entry will never be triggered. OSR entry
-    // will only happen if a loop gets hot in the old JIT, which does a pretty
-    // good job of ensuring (a) and (b). But that doesn't take care of (d),
-    // since each speculation failure would reset the execute counter.
-    // So we check here if the number of speculation failures is significantly
-    // larger than the number of successes (we want a 90% success rate), and if
-    // there have been a large enough number of failures. If so, we set the
-    // counter to 0; otherwise we set the counter to
-    // counterValueForOptimizeAfterWarmUp().
-
-    if (UNLIKELY(codeBlock->updateOSRExitCounterAndCheckIfNeedToReoptimize(exitState) == CodeBlock::OptimizeAction::ReoptimizeNow))
-        triggerReoptimizationNow(baselineCodeBlock, codeBlock, &exit);
-
-    reifyInlinedCallFrames(context, baselineCodeBlock, exit);
-    adjustAndJumpToTarget(context, vm, codeBlock, baselineCodeBlock, exit);
-}
-
-static void reifyInlinedCallFrames(Context& context, CodeBlock* outermostBaselineCodeBlock, const OSRExitBase& exit)
-{
-    auto& cpu = context.cpu;
-    Frame frame(cpu.fp(), context.stack());
-
-    // FIXME: We shouldn't leave holes on the stack when performing an OSR exit
-    // in presence of inlined tail calls.
-    // https://bugs.webkit.org/show_bug.cgi?id=147511
-    ASSERT(JITCode::isBaselineCode(outermostBaselineCodeBlock->jitType()));
-    frame.setOperand<CodeBlock*>(CallFrameSlot::codeBlock, outermostBaselineCodeBlock);
-
-    const CodeOrigin* codeOrigin;
-    for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame(); codeOrigin = codeOrigin->inlineCallFrame()->getCallerSkippingTailCalls()) {
-        InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame();
-        CodeBlock* baselineCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(*codeOrigin, outermostBaselineCodeBlock);
-        InlineCallFrame::Kind trueCallerCallKind;
-        CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingTailCalls(&trueCallerCallKind);
-        void* callerFrame = cpu.fp();
-
-        bool callerIsLLInt = false;
-
-        if (!trueCaller) {
-            ASSERT(inlineCallFrame->isTail());
-            void* returnPC = frame.get<void*>(CallFrame::returnPCOffset());
-#if CPU(ARM64E)
-            void* oldEntrySP = cpu.fp<uint8_t*>() + sizeof(CallerFrameAndPC);
-            void* newEntrySP = cpu.fp<uint8_t*>() + inlineCallFrame->returnPCOffset() + sizeof(void*);
-            returnPC = retagCodePtr(returnPC, bitwise_cast<PtrTag>(oldEntrySP), bitwise_cast<PtrTag>(newEntrySP));
-#endif
-            frame.set<void*>(inlineCallFrame->returnPCOffset(), returnPC);
-            callerFrame = frame.get<void*>(CallFrame::callerFrameOffset());
-        } else {
-            CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock);
-            BytecodeIndex callBytecodeIndex = trueCaller->bytecodeIndex();
-            void* jumpTarget = callerReturnPC(baselineCodeBlockForCaller, callBytecodeIndex, trueCallerCallKind, callerIsLLInt).untaggedExecutableAddress();
-
-            if (trueCaller->inlineCallFrame())
-                callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame()->stackOffset * sizeof(EncodedJSValue);
-
-#if CPU(ARM64E)
-            void* newEntrySP = cpu.fp<uint8_t*>() + inlineCallFrame->returnPCOffset() + sizeof(void*);
-            jumpTarget = tagCodePtr(jumpTarget, bitwise_cast<PtrTag>(newEntrySP));
-#endif
-            frame.set<void*>(inlineCallFrame->returnPCOffset(), jumpTarget);
-        }
-
-        frame.setOperand<void*>(inlineCallFrame->stackOffset + CallFrameSlot::codeBlock, baselineCodeBlock);
-
-        // Restore the inline call frame's callee save registers.
-        // If this inlined frame is a tail call that will return back to the original caller, we need to
-        // copy the prior contents of the tag registers already saved for the outer frame to this frame.
-        saveOrCopyCalleeSavesFor(context, baselineCodeBlock, VirtualRegister(inlineCallFrame->stackOffset), !trueCaller);
-
-        if (callerIsLLInt) {
-            CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock);
-            frame.set<const void*>(calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::metadataTableGPR).offset, baselineCodeBlockForCaller->metadataTable());
-#if USE(JSVALUE64)
-            frame.set<const void*>(calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::pbGPR).offset, baselineCodeBlockForCaller->instructionsRawPointer());
-#endif
-        }
-
-        if (!inlineCallFrame->isVarargs())
-            frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis, PayloadOffset, inlineCallFrame->argumentCountIncludingThis);
-        ASSERT(callerFrame);
-        frame.set<void*>(inlineCallFrame->callerFrameOffset(), callerFrame);
-#if USE(JSVALUE64)
-        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
-        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis, TagOffset, locationBits);
-        if (!inlineCallFrame->isClosureCall)
-            frame.setOperand(inlineCallFrame->stackOffset + CallFrameSlot::callee, JSValue(inlineCallFrame->calleeConstant()));
-#else // USE(JSVALUE64) // so this is the 32-bit part
-        const Instruction* instruction = baselineCodeBlock->instructions().at(codeOrigin->bytecodeIndex()).ptr();
-        uint32_t locationBits = CallSiteIndex(BytecodeIndex(bitwise_cast<uint32_t>(instruction))).bits();
-        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis, TagOffset, locationBits);
-        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::callee, TagOffset, static_cast<uint32_t>(JSValue::CellTag));
-        if (!inlineCallFrame->isClosureCall)
-            frame.setOperand(inlineCallFrame->stackOffset + CallFrameSlot::callee, PayloadOffset, inlineCallFrame->calleeConstant());
-#endif // USE(JSVALUE64) // ending the #else part, so directly above is the 32-bit part
-    }
-
-    // We don't need to set the top-level code origin if we only did inline tail calls.
-    if (codeOrigin) {
-#if USE(JSVALUE64)
-        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
-#else
-        const Instruction* instruction = outermostBaselineCodeBlock->instructions().at(codeOrigin->bytecodeIndex()).ptr();
-        uint32_t locationBits = CallSiteIndex(BytecodeIndex(bitwise_cast<uint32_t>(instruction))).bits();
-#endif
-        frame.setOperand<uint32_t>(CallFrameSlot::argumentCountIncludingThis, TagOffset, locationBits);
-    }
-}
-
-static void adjustAndJumpToTarget(Context& context, VM& vm, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, OSRExit& exit)
-{
-    OSRExitState* exitState = exit.exitState.get();
-
-    WTF::storeLoadFence(); // The optimizing compiler expects that the OSR exit mechanism will execute this fence.
-    vm.heap.writeBarrier(baselineCodeBlock);
-
-    // We barrier all inlined frames -- and not just the current inline stack --
-    // because we don't know which inlined function owns the value profile that
-    // we'll update when we exit. In the case of "f() { a(); b(); }", if both
-    // a and b are inlined, we might exit inside b due to a bad value loaded
-    // from a.
-    // FIXME: MethodOfGettingAValueProfile should remember which CodeBlock owns
-    // the value profile.
-    InlineCallFrameSet* inlineCallFrames = codeBlock->jitCode()->dfgCommon()->inlineCallFrames.get();
-    if (inlineCallFrames) {
-        for (InlineCallFrame* inlineCallFrame : *inlineCallFrames)
-            vm.heap.writeBarrier(inlineCallFrame->baselineCodeBlock.get());
-    }
-
-    auto* exitInlineCallFrame = exit.m_codeOrigin.inlineCallFrame();
-    if (exitInlineCallFrame)
-        context.fp() = context.fp<uint8_t*>() + exitInlineCallFrame->stackOffset * sizeof(EncodedJSValue);
-
-    void* jumpTarget = exitState->jumpTarget;
-    ASSERT(jumpTarget);
-
-    if (exit.isExceptionHandler()) {
-        // Since we're jumping to op_catch, we need to set callFrameForCatch.
-        vm.callFrameForCatch = context.fp<CallFrame*>();
-    }
-
-    vm.topCallFrame = context.fp<CallFrame*>();
-
-    if (exitState->isJumpToLLInt) {
-        CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock);
-        BytecodeIndex bytecodeIndex = exit.m_codeOrigin.bytecodeIndex();
-        const Instruction& currentInstruction = *codeBlockForExit->instructions().at(bytecodeIndex).ptr();
-
-        context.gpr(LLInt::Registers::metadataTableGPR) = bitwise_cast<uintptr_t>(codeBlockForExit->metadataTable());
-#if USE(JSVALUE64)
-        context.gpr(LLInt::Registers::pbGPR) = bitwise_cast<uintptr_t>(codeBlockForExit->instructionsRawPointer());
-        context.gpr(LLInt::Registers::pcGPR) = static_cast<uintptr_t>(exit.m_codeOrigin.bytecodeIndex().offset());
-#else
-        context.gpr(LLInt::Registers::pcGPR) = bitwise_cast<uintptr_t>(&currentInstruction);
-#endif
-
-        if (exit.isExceptionHandler())
-            vm.targetInterpreterPCForThrow = &currentInstruction;
-    }
-
-    context.pc() = untagCodePtr<JSEntryPtrTag>(jumpTarget);
-}
-
-static void printOSRExit(Context& context, uint32_t osrExitIndex, const OSRExit& exit)
-{
-    CallFrame* callFrame = context.fp<CallFrame*>();
-    CodeBlock* codeBlock = callFrame->codeBlock();
-    CodeBlock* alternative = codeBlock->alternative();
-    ExitKind kind = exit.m_kind;
-    BytecodeIndex bytecodeOffset = exit.m_codeOrigin.bytecodeIndex();
-
-    dataLog("Speculation failure in ", *codeBlock);
-    dataLog(" @ exit #", osrExitIndex, " (", bytecodeOffset, ", ", exitKindToString(kind), ") with ");
-    if (alternative) {
-        dataLog(
-            "executeCounter = ", alternative->jitExecuteCounter(),
-            ", reoptimizationRetryCounter = ", alternative->reoptimizationRetryCounter(),
-            ", optimizationDelayCounter = ", alternative->optimizationDelayCounter());
-    } else
-        dataLog("no alternative code block (i.e. we've been jettisoned)");
-    dataLog(", osrExitCounter = ", codeBlock->osrExitCounter(), "\n");
-    dataLog("    GPRs at time of exit:");
-    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
-        GPRReg gpr = GPRInfo::toRegister(i);
-        dataLog(" ", context.gprName(gpr), ":", RawPointer(context.gpr<void*>(gpr)));
-    }
-    dataLog("\n");
-    dataLog("    FPRs at time of exit:");
-    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
-        FPRReg fpr = FPRInfo::toRegister(i);
-        dataLog(" ", context.fprName(fpr), ":");
-        uint64_t bits = context.fpr<uint64_t>(fpr);
-        double value = context.fpr(fpr);
-        dataLogF("%llx:%lf", static_cast<long long>(bits), value);
-    }
-    dataLog("\n");
-}
-
-// JIT based OSR Exit.
-
 OSRExit::OSRExit(ExitKind kind, JSValueSource jsValueSource, MethodOfGettingAValueProfile valueProfile, SpeculativeJIT* jit, unsigned streamIndex, unsigned recoveryIndex)
     : OSRExitBase(kind, jit->m_origin.forExit, jit->m_origin.semantic, jit->m_origin.wasHoisted)
     , m_jsValueSource(jsValueSource)
@@ -958,15 +70,18 @@
 
 void OSRExit::emitRestoreArguments(CCallHelpers& jit, VM& vm, const Operands<ValueRecovery>& operands)
 {
-    HashMap<MinifiedID, int> alreadyAllocatedArguments; // Maps phantom arguments node ID to operand.
+    HashMap<MinifiedID, VirtualRegister> alreadyAllocatedArguments; // Maps phantom arguments node ID to operand.
     for (size_t index = 0; index < operands.size(); ++index) {
         const ValueRecovery& recovery = operands[index];
-        int operand = operands.operandForIndex(index);
 
         if (recovery.technique() != DirectArgumentsThatWereNotCreated
             && recovery.technique() != ClonedArgumentsThatWereNotCreated)
             continue;
 
+        Operand operand = operands.operandForIndex(index);
+        if (operand.isTmp())
+            continue;
+
         MinifiedID id = recovery.nodeID();
         auto iter = alreadyAllocatedArguments.find(id);
         if (iter != alreadyAllocatedArguments.end()) {
@@ -987,7 +102,7 @@
 
         if (!inlineCallFrame || inlineCallFrame->isClosureCall) {
             jit.loadPtr(
-                AssemblyHelpers::addressFor(stackOffset + CallFrameSlot::callee),
+                AssemblyHelpers::addressFor(VirtualRegister(stackOffset + CallFrameSlot::callee)),
                 GPRInfo::regT0);
         } else {
             jit.move(
@@ -997,7 +112,7 @@
 
         if (!inlineCallFrame || inlineCallFrame->isVarargs()) {
             jit.load32(
-                AssemblyHelpers::payloadFor(stackOffset + CallFrameSlot::argumentCountIncludingThis),
+                AssemblyHelpers::payloadFor(VirtualRegister(stackOffset + CallFrameSlot::argumentCountIncludingThis)),
                 GPRInfo::regT1);
         } else {
             jit.move(
@@ -1023,7 +138,7 @@
         jit.call(GPRInfo::nonArgGPR0, OperationPtrTag);
         jit.storeCell(GPRInfo::returnValueGPR, AssemblyHelpers::addressFor(operand));
 
-        alreadyAllocatedArguments.add(id, operand);
+        alreadyAllocatedArguments.add(id, operand.virtualRegister());
     }
 }
 
@@ -1466,16 +581,79 @@
     if (exit.isExceptionHandler())
         jit.copyCalleeSavesToEntryFrameCalleeSavesBuffer(vm.topEntryFrame);
 
+    if (exit.m_codeOrigin.inlineStackContainsActiveCheckpoint()) {
+        // FIXME: Maybe we shouldn't use a probe but filling all the side state objects is tricky otherwise...
+        Vector<ValueRecovery> values(operands.numberOfTmps());
+        for (size_t i = 0; i < operands.numberOfTmps(); ++i)
+            values[i] = operands.tmp(i);
+
+        VM* vmPtr = &vm;
+        auto* tmpScratch = scratch + operands.tmpIndex(0);
+        jit.probe([=, values = WTFMove(values)] (Probe::Context& context) {
+            auto addSideState = [&] (CallFrame* frame, BytecodeIndex index, size_t tmpOffset) {
+                std::unique_ptr<CheckpointOSRExitSideState> sideState = WTF::makeUnique<CheckpointOSRExitSideState>();
+
+                sideState->bytecodeIndex = index;
+                for (size_t i = 0; i < maxNumCheckpointTmps; ++i) {
+                    auto& recovery = values[i + tmpOffset];
+                    // FIXME: We should do what the FTL does and materialize all the JSValues into the scratch buffer.
+                    switch (recovery.technique()) {
+                    case Constant:
+                        sideState->tmps[i] = recovery.constant();
+                        break;
+
+                    case UnboxedInt32InGPR:
+                    case Int32DisplacedInJSStack: {
+                        sideState->tmps[i] = jsNumber(static_cast<int32_t>(tmpScratch[i + tmpOffset]));
+                        break;
+                    }
+
+                    case BooleanDisplacedInJSStack:
+                    case UnboxedCellInGPR:
+                    case CellDisplacedInJSStack:
+                    case InGPR:
+                    case DisplacedInJSStack: {
+                        sideState->tmps[i] = reinterpret_cast<JSValue*>(tmpScratch)[i + tmpOffset];
+                        break;
+                    }
+                    default: 
+                        RELEASE_ASSERT_NOT_REACHED();
+                        break;
+                    }
+                }
+
+                vmPtr->addCheckpointOSRSideState(frame, WTFMove(sideState));
+            };
+
+            const CodeOrigin* codeOrigin;
+            CallFrame* callFrame = context.gpr<CallFrame*>(GPRInfo::callFrameRegister);
+            for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame(); codeOrigin = codeOrigin->inlineCallFrame()->getCallerSkippingTailCalls()) {
+                BytecodeIndex callBytecodeIndex = codeOrigin->bytecodeIndex();
+                if (!callBytecodeIndex.checkpoint())
+                    continue;
+
+                auto* inlineCallFrame = codeOrigin->inlineCallFrame();
+                addSideState(reinterpret_cast<CallFrame*>(reinterpret_cast<char*>(callFrame) + inlineCallFrame->returnPCOffset() - sizeof(CPURegister)), callBytecodeIndex, inlineCallFrame->tmpOffset);
+            }
+
+            if (!codeOrigin)
+                return;
+
+            if (BytecodeIndex bytecodeIndex = codeOrigin->bytecodeIndex(); bytecodeIndex.checkpoint())
+                addSideState(callFrame, bytecodeIndex, 0);
+        });
+    }
+
     // Do all data format conversions and store the results into the stack.
 
     for (size_t index = 0; index < operands.size(); ++index) {
         const ValueRecovery& recovery = operands[index];
-        VirtualRegister reg = operands.virtualRegisterForIndex(index);
-
-        if (reg.isLocal() && reg.toLocal() < static_cast<int>(jit.baselineCodeBlock()->calleeSaveSpaceAsVirtualRegisters()))
+        Operand operand = operands.operandForIndex(index);
+        if (operand.isTmp())
             continue;
 
-        int operand = reg.offset();
+        if (operand.isLocal() && operand.toLocal() < static_cast<int>(jit.baselineCodeBlock()->calleeSaveSpaceAsVirtualRegisters()))
+            continue;
 
         switch (recovery.technique()) {
         case DisplacedInJSStack:
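
The recovery cases above re-box raw machine values into `JSValue`s; on 64-bit targets an int32 is boxed by OR-ing in `JSValue::NumberTag`, exactly as the speculation-recovery path does with `cpu.gpr(dest) |= JSValue::NumberTag`. A minimal sketch of that encoding (the constant mirrors JSC's JSVALUE64 scheme, but treat it as an illustrative assumption rather than a stable ABI):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of 64-bit JSValue int32 tagging, as used by UnboxedInt32InGPR
// recovery above. NumberTag is assumed to match JSC's JSVALUE64 encoding.
constexpr uint64_t NumberTag = 0xfffe000000000000ull;

uint64_t boxInt32(int32_t value)
{
    // Zero-extend the payload to 64 bits, then OR in the tag.
    return NumberTag | static_cast<uint32_t>(value);
}

int32_t unboxInt32(uint64_t boxed)
{
    assert((boxed & NumberTag) == NumberTag); // must be a boxed int32
    return static_cast<int32_t>(boxed);       // low 32 bits hold the payload
}
```

This is why the recovery asserts `!(cpu.gpr(dest) >> 32)` before tagging: the payload must occupy only the low 32 bits.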
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExit.h b/Source/JavaScriptCore/dfg/DFGOSRExit.h
index a7e5878..9e9bcea 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExit.h
+++ b/Source/JavaScriptCore/dfg/DFGOSRExit.h
@@ -150,8 +150,6 @@
 
     friend void JIT_OPERATION operationCompileOSRExit(CallFrame*);
 
-    static void executeOSRExit(Probe::Context&);
-
     CodeLocationLabel<JSInternalPtrTag> m_patchableJumpLocation;
     MacroAssemblerCodeRef<OSRExitPtrTag> m_code;
 
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExitBase.h b/Source/JavaScriptCore/dfg/DFGOSRExitBase.h
index 310963d..003b618 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExitBase.h
+++ b/Source/JavaScriptCore/dfg/DFGOSRExitBase.h
@@ -70,6 +70,11 @@
         return m_kind == GenericUnwind;
     }
 
+    ALWAYS_INLINE bool isExitingToCheckpointHandler() const
+    {
+        return m_codeOrigin.bytecodeIndex().checkpoint();
+    }
+
 protected:
     void considerAddingAsFrequentExitSite(CodeBlock* profiledCodeBlock, ExitingJITType jitType)
     {
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp b/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
index 7ba5d6e..27e2aa8 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
@@ -29,12 +29,15 @@
 #if ENABLE(DFG_JIT)
 
 #include "Bytecodes.h"
+#include "CheckpointOSRExitSideState.h"
 #include "DFGJITCode.h"
 #include "DFGOperations.h"
 #include "JIT.h"
 #include "JSCJSValueInlines.h"
 #include "JSCInlines.h"
 #include "LLIntData.h"
+#include "LLIntThunks.h"
+#include "ProbeContext.h"
 #include "StructureStubInfo.h"
 
 namespace JSC { namespace DFG {
@@ -142,6 +145,10 @@
 MacroAssemblerCodePtr<JSEntryPtrTag> callerReturnPC(CodeBlock* baselineCodeBlockForCaller, BytecodeIndex callBytecodeIndex, InlineCallFrame::Kind trueCallerCallKind, bool& callerIsLLInt)
 {
     callerIsLLInt = Options::forceOSRExitToLLInt() || baselineCodeBlockForCaller->jitType() == JITType::InterpreterThunk;
+
+    if (callBytecodeIndex.checkpoint())
+        return LLInt::getCodePtr<JSEntryPtrTag>(checkpoint_osr_exit_from_inlined_call_trampoline);
+
     MacroAssemblerCodePtr<JSEntryPtrTag> jumpTarget;
 
     if (callerIsLLInt) {
@@ -237,7 +244,7 @@
     // in presence of inlined tail calls.
     // https://bugs.webkit.org/show_bug.cgi?id=147511
     ASSERT(JITCode::isBaselineCode(jit.baselineCodeBlock()->jitType()));
-    jit.storePtr(AssemblyHelpers::TrustedImmPtr(jit.baselineCodeBlock()), AssemblyHelpers::addressFor((VirtualRegister)CallFrameSlot::codeBlock));
+    jit.storePtr(AssemblyHelpers::TrustedImmPtr(jit.baselineCodeBlock()), AssemblyHelpers::addressFor(CallFrameSlot::codeBlock));
 
     const CodeOrigin* codeOrigin;
     for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame(); codeOrigin = codeOrigin->inlineCallFrame()->getCallerSkippingTailCalls()) {
@@ -304,33 +311,34 @@
         }
 
         if (!inlineCallFrame->isVarargs())
-            jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->argumentCountIncludingThis), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis)));
+            jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->argumentCountIncludingThis), AssemblyHelpers::payloadFor(VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis)));
 #if USE(JSVALUE64)
         jit.storePtr(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
-        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
-        jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis)));
+        uint32_t locationBits = CallSiteIndex(baselineCodeBlock->bytecodeIndexForExit(codeOrigin->bytecodeIndex())).bits();
+        jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor(VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis)));
         if (!inlineCallFrame->isClosureCall)
-            jit.store64(AssemblyHelpers::TrustedImm64(JSValue::encode(JSValue(inlineCallFrame->calleeConstant()))), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::callee)));
+            jit.store64(AssemblyHelpers::TrustedImm64(JSValue::encode(JSValue(inlineCallFrame->calleeConstant()))), AssemblyHelpers::addressFor(VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::callee)));
 #else // USE(JSVALUE64) // so this is the 32-bit part
         jit.storePtr(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
         const Instruction* instruction = baselineCodeBlock->instructions().at(codeOrigin->bytecodeIndex()).ptr();
-        uint32_t locationBits = CallSiteIndex(BytecodeIndex(bitwise_cast<uint32_t>(instruction))).bits();
-        jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis)));
-        jit.store32(AssemblyHelpers::TrustedImm32(JSValue::CellTag), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::callee)));
+        uint32_t locationBits = CallSiteIndex().bits();
+        jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor(VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis)));
+        jit.store32(AssemblyHelpers::TrustedImm32(JSValue::CellTag), AssemblyHelpers::tagFor(VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::callee)));
         if (!inlineCallFrame->isClosureCall)
-            jit.storePtr(AssemblyHelpers::TrustedImmPtr(inlineCallFrame->calleeConstant()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::callee)));
+            jit.storePtr(AssemblyHelpers::TrustedImmPtr(inlineCallFrame->calleeConstant()), AssemblyHelpers::payloadFor(VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::callee)));
 #endif // USE(JSVALUE64) // ending the #else part, so directly above is the 32-bit part
     }
 
     // Don't need to set the toplevel code origin if we only did inline tail calls
     if (codeOrigin) {
 #if USE(JSVALUE64)
-        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
+        uint32_t locationBits = CallSiteIndex(BytecodeIndex(codeOrigin->bytecodeIndex().offset())).bits();
 #else
-        const Instruction* instruction = jit.baselineCodeBlock()->instructions().at(codeOrigin->bytecodeIndex()).ptr();
-        uint32_t locationBits = CallSiteIndex(BytecodeIndex(bitwise_cast<uint32_t>(instruction))).bits();
+        auto bytecodeIndex = jit.baselineCodeBlock()->bytecodeIndexForExit(codeOrigin->bytecodeIndex());
+        const Instruction* instruction = jit.baselineCodeBlock()->instructions().at(bytecodeIndex).ptr();
+        uint32_t locationBits = CallSiteIndex(BytecodeIndex(bitwise_cast<uint32_t>(instruction))).bits();
 #endif
-        jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(CallFrameSlot::argumentCountIncludingThis)));
+        jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor(CallFrameSlot::argumentCountIncludingThis));
     }
 }
 
@@ -385,7 +393,11 @@
     if (exitToLLInt) {
         auto bytecodeIndex = exit.m_codeOrigin.bytecodeIndex();
         const Instruction& currentInstruction = *codeBlockForExit->instructions().at(bytecodeIndex).ptr();
-        MacroAssemblerCodePtr<JSEntryPtrTag> destination = LLInt::getCodePtr<JSEntryPtrTag>(currentInstruction);
+        MacroAssemblerCodePtr<JSEntryPtrTag> destination;
+        if (bytecodeIndex.checkpoint())
+            destination = LLInt::getCodePtr<JSEntryPtrTag>(checkpoint_osr_exit_trampoline);
+        else
+            destination = LLInt::getCodePtr<JSEntryPtrTag>(currentInstruction);
 
         if (exit.isExceptionHandler()) {
             jit.move(CCallHelpers::TrustedImmPtr(&currentInstruction), GPRInfo::regT2);
@@ -401,10 +413,18 @@
 #endif
         jumpTarget = destination.retagged<OSRExitPtrTag>().executableAddress();
     } else {
-        CodeLocationLabel<JSEntryPtrTag> codeLocation = codeBlockForExit->jitCodeMap().find(exit.m_codeOrigin.bytecodeIndex());
-        ASSERT(codeLocation);
+        BytecodeIndex exitIndex = exit.m_codeOrigin.bytecodeIndex();
+        MacroAssemblerCodePtr<JSEntryPtrTag> destination;
+        if (exitIndex.checkpoint())
+            destination = LLInt::getCodePtr<JSEntryPtrTag>(checkpoint_osr_exit_trampoline);
+        else {
+            ASSERT(codeBlockForExit->bytecodeIndexForExit(exitIndex) == exitIndex);
+            destination = codeBlockForExit->jitCodeMap().find(exitIndex);
+        }
 
-        jumpTarget = codeLocation.retagged<OSRExitPtrTag>().executableAddress();
+        ASSERT(destination);
+
+        jumpTarget = destination.retagged<OSRExitPtrTag>().executableAddress();
     }
 
     jit.addPtr(AssemblyHelpers::TrustedImm32(JIT::stackPointerOffsetFor(codeBlockForExit) * sizeof(Register)), GPRInfo::callFrameRegister, AssemblyHelpers::stackPointerRegister);
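The hunks above route any OSR exit whose BytecodeIndex carries a non-zero checkpoint through a dedicated trampoline instead of resuming at the instruction directly. A minimal sketch of the packed-index idea follows; the class name, field widths, and the string return are all hypothetical and only illustrate the dispatch, not JSC's actual BytecodeIndex layout:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical mirror of JSC's BytecodeIndex: a bytecode offset plus a
// small per-instruction checkpoint number packed into one 32-bit word.
class PackedBytecodeIndex {
public:
    static constexpr uint32_t checkpointBits = 2; // assumed width, not JSC's real one
    static constexpr uint32_t checkpointMask = (1u << checkpointBits) - 1;

    explicit PackedBytecodeIndex(uint32_t offset, uint32_t checkpoint = 0)
        : m_bits((offset << checkpointBits) | checkpoint)
    {
        assert(checkpoint <= checkpointMask);
    }

    uint32_t offset() const { return m_bits >> checkpointBits; }
    uint32_t checkpoint() const { return m_bits & checkpointMask; }

private:
    uint32_t m_bits;
};

// Mirrors the dispatch in the diff: a checkpointed exit goes to the
// checkpoint trampoline; otherwise execution resumes at the instruction.
const char* exitDestination(PackedBytecodeIndex index)
{
    return index.checkpoint() ? "checkpoint_osr_exit_trampoline" : "instruction";
}
```

This also explains why the top-level call-site store rebuilds the index as `BytecodeIndex(codeOrigin->bytecodeIndex().offset())`: the checkpoint bits are stripped so the recorded CallSiteIndex names the instruction, not a checkpoint within it.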
diff --git a/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp b/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp
index e25b68a..0ea2315 100644
--- a/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp
@@ -2242,7 +2242,7 @@
             if (m_heap.follow(availability.m_locals[i].node()) != escapee)
                 continue;
 
-            int operand = availability.m_locals.operandForIndex(i);
+            Operand operand = availability.m_locals.operandForIndex(i);
             m_insertionSet.insertNode(
                 nodeIndex, SpecNone, MovHint, origin.takeValidExit(canExit), OpInfo(operand),
                 materialization->defaultEdge());
diff --git a/Source/JavaScriptCore/dfg/DFGOpInfo.h b/Source/JavaScriptCore/dfg/DFGOpInfo.h
index f39c14b..77efcfb 100644
--- a/Source/JavaScriptCore/dfg/DFGOpInfo.h
+++ b/Source/JavaScriptCore/dfg/DFGOpInfo.h
@@ -44,7 +44,7 @@
     explicit OpInfo(IntegralType value)
         : m_value(static_cast<uint64_t>(value)) { }
     explicit OpInfo(RegisteredStructure structure) : m_value(static_cast<uint64_t>(bitwise_cast<uintptr_t>(structure))) { }
-
+    explicit OpInfo(Operand op) : m_value(op.asBits()) { }
 
     template <typename T>
     explicit OpInfo(T* ptr)
diff --git a/Source/JavaScriptCore/dfg/DFGOperations.cpp b/Source/JavaScriptCore/dfg/DFGOperations.cpp
index d208479..f7ac7ce 100644
--- a/Source/JavaScriptCore/dfg/DFGOperations.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOperations.cpp
@@ -2971,17 +2971,18 @@
     return -1;
 }
 
-void JIT_OPERATION operationLoadVarargs(JSGlobalObject* globalObject, int32_t firstElementDest, EncodedJSValue encodedArguments, uint32_t offset, uint32_t length, uint32_t mandatoryMinimum)
+void JIT_OPERATION operationLoadVarargs(JSGlobalObject* globalObject, int32_t firstElementDest, EncodedJSValue encodedArguments, uint32_t offset, uint32_t lengthIncludingThis, uint32_t mandatoryMinimum)
 {
+    VirtualRegister firstElement { firstElementDest };
     VM& vm = globalObject->vm();
     CallFrame* callFrame = DECLARE_CALL_FRAME(vm);
     JITOperationPrologueCallFrameTracer tracer(vm, callFrame);
     JSValue arguments = JSValue::decode(encodedArguments);
     
-    loadVarargs(globalObject, callFrame, VirtualRegister(firstElementDest), arguments, offset, length);
+    loadVarargs(globalObject, bitwise_cast<JSValue*>(&callFrame->r(firstElement)), arguments, offset, lengthIncludingThis - 1);
     
-    for (uint32_t i = length; i < mandatoryMinimum; ++i)
-        callFrame->r(firstElementDest + i) = jsUndefined();
+    for (uint32_t i = lengthIncludingThis - 1; i < mandatoryMinimum; ++i)
+        callFrame->r(firstElement + i) = jsUndefined();
 }
 
 double JIT_OPERATION operationFModOnInts(int32_t a, int32_t b)
diff --git a/Source/JavaScriptCore/dfg/DFGPhantomInsertionPhase.cpp b/Source/JavaScriptCore/dfg/DFGPhantomInsertionPhase.cpp
index cbc2c8b..2b105b2 100644
--- a/Source/JavaScriptCore/dfg/DFGPhantomInsertionPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGPhantomInsertionPhase.cpp
@@ -43,11 +43,8 @@
 
 namespace {
 
-namespace DFGPhantomInsertionPhaseInternal {
-static constexpr bool verbose = false;
-}
-
 class PhantomInsertionPhase : public Phase {
+    static constexpr bool verbose = false;
 public:
     PhantomInsertionPhase(Graph& graph)
         : Phase(graph, "phantom insertion")
@@ -62,7 +59,7 @@
         // SetLocals execute, which is inaccurate. That causes us to insert too few Phantoms.
         DFG_ASSERT(m_graph, nullptr, m_graph.m_refCountState == ExactRefCount);
         
-        if (DFGPhantomInsertionPhaseInternal::verbose) {
+        if (verbose) {
             dataLog("Graph before Phantom insertion:\n");
             m_graph.dump();
         }
@@ -72,7 +69,7 @@
         for (BasicBlock* block : m_graph.blocksInNaturalOrder())
             handleBlock(block);
         
-        if (DFGPhantomInsertionPhaseInternal::verbose) {
+        if (verbose) {
             dataLog("Graph after Phantom insertion:\n");
             m_graph.dump();
         }
@@ -103,22 +100,22 @@
         unsigned lastExitingIndex = 0;
         for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
             Node* node = block->at(nodeIndex);
-            if (DFGPhantomInsertionPhaseInternal::verbose)
+            if (verbose)
                 dataLog("Considering ", node, "\n");
             
             switch (node->op()) {
             case MovHint:
-                m_values.operand(node->unlinkedLocal()) = node->child1().node();
+                m_values.operand(node->unlinkedOperand()) = node->child1().node();
                 break;
                 
             case ZombieHint:
-                m_values.operand(node->unlinkedLocal()) = nullptr;
+                m_values.operand(node->unlinkedOperand()) = nullptr;
                 break;
 
             case GetLocal:
             case SetArgumentDefinitely:
             case SetArgumentMaybe:
-                m_values.operand(node->local()) = nullptr;
+                m_values.operand(node->operand()) = nullptr;
                 break;
                 
             default:
@@ -134,38 +131,42 @@
             m_graph.doToChildren(
                 node,
                 [&] (Edge edge) {
+                    dataLogLnIf(verbose, "Updating epoch for ", edge, " to ", currentEpoch);
                     edge->setEpoch(currentEpoch);
                 });
             
             node->setEpoch(currentEpoch);
 
-            VirtualRegister alreadyKilled;
+            Operand alreadyKilled;
 
-            auto processKilledOperand = [&] (VirtualRegister reg) {
-                if (DFGPhantomInsertionPhaseInternal::verbose)
-                    dataLog("    Killed operand: ", reg, "\n");
+            auto processKilledOperand = [&] (Operand operand) {
+                dataLogLnIf(verbose, "    Killed operand: ", operand);
 
                 // Already handled from SetLocal.
-                if (reg == alreadyKilled)
+                if (operand == alreadyKilled) {
+                    dataLogLnIf(verbose, "    Operand ", operand, " already killed by set local");
                     return;
+                }
                 
-                Node* killedNode = m_values.operand(reg);
-                if (!killedNode)
+                Node* killedNode = m_values.operand(operand);
+                if (!killedNode) {
+                    dataLogLnIf(verbose, "    Operand ", operand, " was not defined in this block.");
                     return;
+                }
 
-                m_values.operand(reg) = nullptr;
+                m_values.operand(operand) = nullptr;
                 
                 // We only need to insert a Phantom if the node hasn't been used since the last
                 // exit, and was born before the last exit.
-                if (killedNode->epoch() == currentEpoch)
+                if (killedNode->epoch() == currentEpoch) {
+                    dataLogLnIf(verbose, "    Operand ", operand, " has current epoch ", currentEpoch);
                     return;
-                
-                if (DFGPhantomInsertionPhaseInternal::verbose) {
-                    dataLog(
-                        "    Inserting Phantom on ", killedNode, " after ",
-                        block->at(lastExitingIndex), "\n");
                 }
                 
+                dataLogLnIf(verbose,
+                    "    Inserting Phantom on ", killedNode, " after ",
+                    block->at(lastExitingIndex));
+                
                 // We have exact ref counts, so creating a new use means that we have to
                 // increment the ref count.
                 killedNode->postfixRef();
@@ -179,14 +180,14 @@
             };
 
             if (node->op() == SetLocal) {
-                VirtualRegister local = node->local();
+                Operand operand = node->operand();
                 if (nodeMayExit) {
                     // If the SetLocal does exit, we need the MovHint of its local
                     // to be live until the SetLocal is done.
-                    processKilledOperand(local);
-                    alreadyKilled = local;
+                    processKilledOperand(operand);
+                    alreadyKilled = operand;
                 }
-                m_values.operand(local) = nullptr;
+                m_values.operand(operand) = nullptr;
             }
 
             forAllKilledOperands(m_graph, node, block->tryAt(nodeIndex + 1), processKilledOperand);
diff --git a/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h b/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
index 128c841..36181a2 100644
--- a/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
+++ b/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
@@ -53,7 +53,7 @@
                 return;
             }
             
-            callIfAppropriate(m_read, VirtualRegister(heap.payload().value32()));
+            callIfAppropriate(m_read, heap.operand());
             return;
         }
         
@@ -68,7 +68,7 @@
         // We expect stack writes to already be precisely characterized by DFG::clobberize().
         if (heap.kind() == Stack) {
             RELEASE_ASSERT(!heap.payload().isTop());
-            callIfAppropriate(m_unconditionalWrite, VirtualRegister(heap.payload().value32()));
+            callIfAppropriate(m_unconditionalWrite, heap.operand());
             return;
         }
         
@@ -87,12 +87,12 @@
         
         RELEASE_ASSERT(location.heap().kind() == Stack);
         
-        m_def(VirtualRegister(location.heap().payload().value32()), node);
+        m_def(location.heap().operand(), node);
     }
     
 private:
     template<typename Functor>
-    void callIfAppropriate(const Functor& functor, VirtualRegister operand)
+    void callIfAppropriate(const Functor& functor, Operand operand)
     {
         if (operand.isLocal() && static_cast<unsigned>(operand.toLocal()) >= m_graph.block(0)->variablesAtHead.numberOfLocals())
             return;
@@ -109,13 +109,13 @@
             if (!inlineCallFrame) {
                 // Read the outermost arguments and argument count.
                 for (unsigned i = numberOfArgumentsToSkip; i < static_cast<unsigned>(m_graph.m_codeBlock->numParameters()); i++)
-                    m_read(virtualRegisterForArgument(i));
+                    m_read(virtualRegisterForArgumentIncludingThis(i));
                 m_read(VirtualRegister(CallFrameSlot::argumentCountIncludingThis));
                 return;
             }
             
             for (unsigned i = numberOfArgumentsToSkip; i < inlineCallFrame->argumentsWithFixup.size(); i++)
-                m_read(VirtualRegister(inlineCallFrame->stackOffset + virtualRegisterForArgument(i).offset()));
+                m_read(VirtualRegister(inlineCallFrame->stackOffset + virtualRegisterForArgumentIncludingThis(i).offset()));
             if (inlineCallFrame->isVarargs())
                 m_read(VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis));
         };
@@ -225,14 +225,14 @@
             unsigned indexIncludingThis = m_node->argumentIndex();
             if (!inlineCallFrame) {
                 if (indexIncludingThis < static_cast<unsigned>(m_graph.m_codeBlock->numParameters()))
-                    m_read(virtualRegisterForArgument(indexIncludingThis));
+                    m_read(virtualRegisterForArgumentIncludingThis(indexIncludingThis));
                 m_read(VirtualRegister(CallFrameSlot::argumentCountIncludingThis));
                 break;
             }
 
             ASSERT_WITH_MESSAGE(inlineCallFrame->isVarargs(), "GetArgument is only used for InlineCallFrame if the call frame is varargs.");
             if (indexIncludingThis < inlineCallFrame->argumentsWithFixup.size())
-                m_read(VirtualRegister(inlineCallFrame->stackOffset + virtualRegisterForArgument(indexIncludingThis).offset()));
+                m_read(VirtualRegister(inlineCallFrame->stackOffset + virtualRegisterForArgumentIncludingThis(indexIncludingThis).offset()));
             m_read(VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis));
             break;
         }
@@ -241,7 +241,7 @@
             // All of the outermost arguments, except this, are read in sloppy mode.
             if (!m_graph.m_codeBlock->isStrictMode()) {
                 for (unsigned i = m_graph.m_codeBlock->numParameters(); i--;)
-                    m_read(virtualRegisterForArgument(i));
+                    m_read(virtualRegisterForArgumentIncludingThis(i));
             }
         
             // The stack header is read.
@@ -252,7 +252,7 @@
             for (InlineCallFrame* inlineCallFrame = m_node->origin.semantic.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->getCallerInlineFrameSkippingTailCalls()) {
                 if (!inlineCallFrame->isStrictMode()) {
                     for (unsigned i = inlineCallFrame->argumentsWithFixup.size(); i--;)
-                        m_read(VirtualRegister(inlineCallFrame->stackOffset + virtualRegisterForArgument(i).offset()));
+                        m_read(VirtualRegister(inlineCallFrame->stackOffset + virtualRegisterForArgumentIncludingThis(i).offset()));
                 }
                 if (inlineCallFrame->isClosureCall)
                     m_read(VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::callee));
diff --git a/Source/JavaScriptCore/dfg/DFGPredictionInjectionPhase.cpp b/Source/JavaScriptCore/dfg/DFGPredictionInjectionPhase.cpp
index 447e24e..5b4f992 100644
--- a/Source/JavaScriptCore/dfg/DFGPredictionInjectionPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGPredictionInjectionPhase.cpp
@@ -72,7 +72,7 @@
                 continue;
             const Operands<Optional<JSValue>>& mustHandleValues = m_graph.m_plan.mustHandleValues();
             for (size_t i = 0; i < mustHandleValues.size(); ++i) {
-                int operand = mustHandleValues.operandForIndex(i);
+                Operand operand = mustHandleValues.operandForIndex(i);
                 Optional<JSValue> value = mustHandleValues[i];
                 if (!value)
                     continue;
diff --git a/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp b/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
index f11ce76..f92ae9d 100644
--- a/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
@@ -1374,6 +1374,7 @@
         case MovHint:
         case ZombieHint:
         case ExitOK:
+        case VarargsLength:
         case LoadVarargs:
         case ForwardVarargs:
         case PutDynamicVar:
diff --git a/Source/JavaScriptCore/dfg/DFGPutStackSinkingPhase.cpp b/Source/JavaScriptCore/dfg/DFGPutStackSinkingPhase.cpp
index 8727760c..c979335 100644
--- a/Source/JavaScriptCore/dfg/DFGPutStackSinkingPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGPutStackSinkingPhase.cpp
@@ -42,11 +42,8 @@
 
 namespace {
 
-namespace DFGPutStackSinkingPhaseInternal {
-static constexpr bool verbose = false;
-}
-
 class PutStackSinkingPhase : public Phase {
+    static constexpr bool verbose = false;
 public:
     PutStackSinkingPhase(Graph& graph)
         : Phase(graph, "PutStack sinking")
@@ -73,7 +70,7 @@
         // the stack. It's not clear to me if this is important or not.
         // https://bugs.webkit.org/show_bug.cgi?id=145296
         
-        if (DFGPutStackSinkingPhaseInternal::verbose) {
+        if (verbose) {
             dataLog("Graph before PutStack sinking:\n");
             m_graph.dump();
         }
@@ -88,11 +85,8 @@
         BlockMap<Operands<bool>> liveAtTail(m_graph);
         
         for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
-            liveAtHead[block] = Operands<bool>(OperandsLike, block->variablesAtHead);
-            liveAtTail[block] = Operands<bool>(OperandsLike, block->variablesAtHead);
-            
-            liveAtHead[block].fill(false);
-            liveAtTail[block].fill(false);
+            liveAtHead[block] = Operands<bool>(OperandsLike, block->variablesAtHead, false);
+            liveAtTail[block] = Operands<bool>(OperandsLike, block->variablesAtHead, false);
         }
         
         bool changed;
@@ -107,33 +101,34 @@
                 Operands<bool> live = liveAtTail[block];
                 for (unsigned nodeIndex = block->size(); nodeIndex--;) {
                     Node* node = block->at(nodeIndex);
-                    if (DFGPutStackSinkingPhaseInternal::verbose)
+                    if (verbose)
                         dataLog("Live at ", node, ": ", live, "\n");
                     
-                    Vector<VirtualRegister, 4> reads;
-                    Vector<VirtualRegister, 4> writes;
-                    auto escapeHandler = [&] (VirtualRegister operand) {
+                    Vector<Operand, 4> reads;
+                    Vector<Operand, 4> writes;
+                    auto escapeHandler = [&] (Operand operand) {
                         if (operand.isHeader())
                             return;
-                        if (DFGPutStackSinkingPhaseInternal::verbose)
+                        if (verbose)
                             dataLog("    ", operand, " is live at ", node, "\n");
                         reads.append(operand);
                     };
 
-                    auto writeHandler = [&] (VirtualRegister operand) {
+                    auto writeHandler = [&] (Operand operand) {
                         if (operand.isHeader())
                             return;
-                        RELEASE_ASSERT(node->op() == PutStack || node->op() == LoadVarargs || node->op() == ForwardVarargs || node->op() == KillStack);
+                        auto op = node->op();
+                        RELEASE_ASSERT(op == PutStack || op == LoadVarargs || op == ForwardVarargs || op == KillStack);
                         writes.append(operand);
                     };
 
                     preciseLocalClobberize(
                         m_graph, node, escapeHandler, writeHandler,
-                        [&] (VirtualRegister, LazyNode) { });
+                        [&] (Operand, LazyNode) { });
 
-                    for (VirtualRegister operand : writes)
+                    for (Operand operand : writes)
                         live.operand(operand) = false;
-                    for (VirtualRegister operand : reads)
+                    for (Operand operand : reads)
                         live.operand(operand) = true;
                 }
                 
@@ -234,7 +229,7 @@
                 Operands<FlushFormat> deferred = deferredAtHead[block];
                 
                 for (Node* node : *block) {
-                    if (DFGPutStackSinkingPhaseInternal::verbose)
+                    if (verbose)
                         dataLog("Deferred at ", node, ":", deferred, "\n");
                     
                     if (node->op() == GetStack) {
@@ -261,7 +256,7 @@
                         // https://bugs.webkit.org/show_bug.cgi?id=150398
 
                         bool isConflicting =
-                            deferred.operand(node->stackAccessData()->local) == ConflictingFlush;
+                            deferred.operand(node->stackAccessData()->operand) == ConflictingFlush;
                         
                         if (validationEnabled())
                             DFG_ASSERT(m_graph, node, !isConflicting);
@@ -275,17 +270,20 @@
                         // from.
                         continue;
                     } else if (node->op() == PutStack) {
-                        VirtualRegister operand = node->stackAccessData()->local;
+                        Operand operand = node->stackAccessData()->operand;
+                        dataLogLnIf(verbose, "Setting flush format for ", node, " at operand ", operand);
                         deferred.operand(operand) = node->stackAccessData()->format;
                         continue;
                     } else if (node->op() == KillStack) {
                         // We don't want to sink a PutStack past a KillStack.
-                        deferred.operand(node->unlinkedLocal()) = ConflictingFlush;
+                        if (verbose)
+                            dataLogLn("Killing stack for ", node->unlinkedOperand());
+                        deferred.operand(node->unlinkedOperand()) = ConflictingFlush;
                         continue;
                     }
                     
-                    auto escapeHandler = [&] (VirtualRegister operand) {
-                        if (DFGPutStackSinkingPhaseInternal::verbose)
+                    auto escapeHandler = [&] (Operand operand) {
+                        if (verbose)
                             dataLog("For ", node, " escaping ", operand, "\n");
                         if (operand.isHeader())
                             return;
@@ -293,16 +291,18 @@
                         deferred.operand(operand) = DeadFlush;
                     };
 
-                    auto writeHandler = [&] (VirtualRegister operand) {
+                    auto writeHandler = [&] (Operand operand) {
+                        ASSERT(!operand.isTmp());
                         if (operand.isHeader())
                             return;
-                        RELEASE_ASSERT(node->op() == LoadVarargs || node->op() == ForwardVarargs);
+                        RELEASE_ASSERT(node->op() == VarargsLength || node->op() == LoadVarargs || node->op() == ForwardVarargs);
+                        dataLogLnIf(verbose, "Writing dead flush for ", node, " at operand ", operand);
                         deferred.operand(operand) = DeadFlush;
                     };
                     
                     preciseLocalClobberize(
                         m_graph, node, escapeHandler, writeHandler,
-                        [&] (VirtualRegister, LazyNode) { });
+                        [&] (Operand, LazyNode) { });
                 }
                 
                 if (deferred == deferredAtTail[block])
@@ -313,13 +313,13 @@
                 
                 for (BasicBlock* successor : block->successors()) {
                     for (size_t i = deferred.size(); i--;) {
-                        if (DFGPutStackSinkingPhaseInternal::verbose)
-                            dataLog("Considering ", VirtualRegister(deferred.operandForIndex(i)), " at ", pointerDump(block), "->", pointerDump(successor), ": ", deferred[i], " and ", deferredAtHead[successor][i], " merges to ");
+                        if (verbose)
+                            dataLog("Considering ", deferred.operandForIndex(i), " at ", pointerDump(block), "->", pointerDump(successor), ": ", deferred[i], " and ", deferredAtHead[successor][i], " merges to ");
 
                         deferredAtHead[successor][i] =
                             merge(deferredAtHead[successor][i], deferred[i]);
                         
-                        if (DFGPutStackSinkingPhaseInternal::verbose)
+                        if (verbose)
                             dataLog(deferredAtHead[successor][i], "\n");
                     }
                 }
@@ -352,10 +352,10 @@
         
         Operands<SSACalculator::Variable*> operandToVariable(
             OperandsLike, m_graph.block(0)->variablesAtHead);
-        Vector<VirtualRegister> indexToOperand;
+        Vector<Operand> indexToOperand;
         for (size_t i = m_graph.block(0)->variablesAtHead.size(); i--;) {
-            VirtualRegister operand(m_graph.block(0)->variablesAtHead.operandForIndex(i));
-            
+            Operand operand = m_graph.block(0)->variablesAtHead.operandForIndex(i);
+
             SSACalculator::Variable* variable = ssaCalculator.newVariable();
             operandToVariable.operand(operand) = variable;
             ASSERT(indexToOperand.size() == variable->index());
@@ -370,12 +370,12 @@
                 case PutStack:
                     putStacksToSink.add(node);
                     ssaCalculator.newDef(
-                        operandToVariable.operand(node->stackAccessData()->local),
+                        operandToVariable.operand(node->stackAccessData()->operand),
                         block, node->child1().node());
                     break;
                 case GetStack:
                     ssaCalculator.newDef(
-                        operandToVariable.operand(node->stackAccessData()->local),
+                        operandToVariable.operand(node->stackAccessData()->operand),
                         block, node);
                     break;
                 default:
@@ -386,7 +386,7 @@
         
         ssaCalculator.computePhis(
             [&] (SSACalculator::Variable* variable, BasicBlock* block) -> Node* {
-                VirtualRegister operand = indexToOperand[variable->index()];
+                Operand operand = indexToOperand[variable->index()];
                 
                 if (!liveAtHead[block].operand(operand))
                     return nullptr;
@@ -397,7 +397,7 @@
                 if (!isConcrete(format))
                     return nullptr;
 
-                if (DFGPutStackSinkingPhaseInternal::verbose)
+                if (verbose)
                     dataLog("Adding Phi for ", operand, " at ", pointerDump(block), "\n");
                 
                 Node* phiNode = m_graph.addNode(SpecHeapTop, Phi, block->at(0)->origin.withInvalidExit());
@@ -411,7 +411,7 @@
             mapping.fill(nullptr);
             
             for (size_t i = mapping.size(); i--;) {
-                VirtualRegister operand(mapping.operandForIndex(i));
+                Operand operand(mapping.operandForIndex(i));
                 
                 SSACalculator::Variable* variable = operandToVariable.operand(operand);
                 SSACalculator::Def* def = ssaCalculator.reachingDefAtHead(block, variable);
@@ -421,15 +421,15 @@
                 mapping.operand(operand) = def->value();
             }
             
-            if (DFGPutStackSinkingPhaseInternal::verbose)
+            if (verbose)
                 dataLog("Mapping at top of ", pointerDump(block), ": ", mapping, "\n");
             
             for (SSACalculator::Def* phiDef : ssaCalculator.phisForBlock(block)) {
-                VirtualRegister operand = indexToOperand[phiDef->variable()->index()];
+                Operand operand = indexToOperand[phiDef->variable()->index()];
                 
                 insertionSet.insert(0, phiDef->value());
                 
-                if (DFGPutStackSinkingPhaseInternal::verbose)
+                if (verbose)
                     dataLog("   Mapping ", operand, " to ", phiDef->value(), "\n");
                 mapping.operand(operand) = phiDef->value();
             }
@@ -437,15 +437,15 @@
             deferred = deferredAtHead[block];
             for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
                 Node* node = block->at(nodeIndex);
-                if (DFGPutStackSinkingPhaseInternal::verbose)
+                if (verbose)
                     dataLog("Deferred at ", node, ":", deferred, "\n");
                 
                 switch (node->op()) {
                 case PutStack: {
                     StackAccessData* data = node->stackAccessData();
-                    VirtualRegister operand = data->local;
+                    Operand operand = data->operand;
                     deferred.operand(operand) = data->format;
-                    if (DFGPutStackSinkingPhaseInternal::verbose)
+                    if (verbose)
                         dataLog("   Mapping ", operand, " to ", node->child1().node(), " at ", node, "\n");
                     mapping.operand(operand) = node->child1().node();
                     break;
@@ -453,16 +453,16 @@
                     
                 case GetStack: {
                     StackAccessData* data = node->stackAccessData();
-                    FlushFormat format = deferred.operand(data->local);
+                    FlushFormat format = deferred.operand(data->operand);
                     if (!isConcrete(format)) {
                         DFG_ASSERT(
                             m_graph, node,
-                            deferred.operand(data->local) != ConflictingFlush, deferred.operand(data->local));
+                            deferred.operand(data->operand) != ConflictingFlush, deferred.operand(data->operand));
                         
                         // This means there is no deferral. No deferral means that the most
                         // authoritative value for this stack slot is what is stored in the stack. So,
                         // keep the GetStack.
-                        mapping.operand(data->local) = node;
+                        mapping.operand(data->operand) = node;
                         break;
                     }
                     
@@ -472,20 +472,20 @@
                     // have stored and get rid of the GetStack.
                     DFG_ASSERT(m_graph, node, format == data->format, format, data->format);
                     
-                    Node* incoming = mapping.operand(data->local);
+                    Node* incoming = mapping.operand(data->operand);
                     node->child1() = incoming->defaultEdge();
                     node->convertToIdentity();
                     break;
                 }
 
                 case KillStack: {
-                    deferred.operand(node->unlinkedLocal()) = ConflictingFlush;
+                    deferred.operand(node->unlinkedOperand()) = ConflictingFlush;
                     break;
                 }
                 
                 default: {
-                    auto escapeHandler = [&] (VirtualRegister operand) {
-                        if (DFGPutStackSinkingPhaseInternal::verbose)
+                    auto escapeHandler = [&] (Operand operand) {
+                        if (verbose)
                             dataLog("For ", node, " escaping ", operand, "\n");
 
                         if (operand.isHeader())
@@ -499,7 +499,7 @@
                         }
                     
                         // Gotta insert a PutStack.
-                        if (DFGPutStackSinkingPhaseInternal::verbose)
+                        if (verbose)
                             dataLog("Inserting a PutStack for ", operand, " at ", node, "\n");
 
                         Node* incoming = mapping.operand(operand);
@@ -513,7 +513,7 @@
                         deferred.operand(operand) = DeadFlush;
                     };
 
-                    auto writeHandler = [&] (VirtualRegister operand) {
+                    auto writeHandler = [&] (Operand operand) {
                         if (operand.isHeader())
                             return;
                         // LoadVarargs and ForwardVarargs are unconditional writes to the stack
@@ -521,13 +521,13 @@
                         // locations they write to. This makes those stack locations dead right 
                         // before a LoadVarargs/ForwardVarargs. This means we should never sink
                         // PutStacks right to this point.
-                        RELEASE_ASSERT(node->op() == LoadVarargs || node->op() == ForwardVarargs);
+                        RELEASE_ASSERT(node->op() == VarargsLength || node->op() == LoadVarargs || node->op() == ForwardVarargs);
                         deferred.operand(operand) = DeadFlush;
                     };
 
                     preciseLocalClobberize(
                         m_graph, node, escapeHandler, writeHandler,
-                        [&] (VirtualRegister, LazyNode) { });
+                        [&] (Operand, LazyNode) { });
                     break;
                 } }
             }
@@ -539,8 +539,8 @@
                 for (SSACalculator::Def* phiDef : ssaCalculator.phisForBlock(successorBlock)) {
                     Node* phiNode = phiDef->value();
                     SSACalculator::Variable* variable = phiDef->variable();
-                    VirtualRegister operand = indexToOperand[variable->index()];
-                    if (DFGPutStackSinkingPhaseInternal::verbose)
+                    Operand operand = indexToOperand[variable->index()];
+                    if (verbose)
                         dataLog("Creating Upsilon for ", operand, " at ", pointerDump(block), "->", pointerDump(successorBlock), "\n");
                     FlushFormat format = deferredAtHead[successorBlock].operand(operand);
                     DFG_ASSERT(m_graph, nullptr, isConcrete(format), format);
@@ -585,7 +585,7 @@
             }
         }
         
-        if (DFGPutStackSinkingPhaseInternal::verbose) {
+        if (verbose) {
             dataLog("Graph after PutStack sinking:\n");
             m_graph.dump();
         }
diff --git a/Source/JavaScriptCore/dfg/DFGSSAConversionPhase.cpp b/Source/JavaScriptCore/dfg/DFGSSAConversionPhase.cpp
index 23d4b94..ca99026 100644
--- a/Source/JavaScriptCore/dfg/DFGSSAConversionPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSSAConversionPhase.cpp
@@ -158,7 +158,7 @@
                     childNode = m_insertionSet.insertNode(
                         nodeIndex, node->variableAccessData()->prediction(),
                         GetStack, node->origin,
-                        OpInfo(m_graph.m_stackAccessData.add(variable->local(), variable->flushFormat())));
+                        OpInfo(m_graph.m_stackAccessData.add(variable->operand(), variable->flushFormat())));
                     if (ASSERT_ENABLED)
                         m_argumentGetters.add(childNode);
                     m_argumentMapping.add(node, childNode);
@@ -178,7 +178,7 @@
                 VariableAccessData* variable = m_variableForSSAIndex[ssaVariable->index()];
                 
                 // Prune by liveness. This doesn't buy us much other than compile times.
-                Node* headNode = block->variablesAtHead.operand(variable->local());
+                Node* headNode = block->variablesAtHead.operand(variable->operand());
                 if (!headNode)
                     return nullptr;
 
@@ -300,7 +300,7 @@
                         ASSERT(!node->replacement());
                     }
                     if (verbose)
-                        dataLog("Mapping: ", VirtualRegister(valueForOperand.operandForIndex(i)), " -> ", node, "\n");
+                        dataLog("Mapping: ", valueForOperand.operandForIndex(i), " -> ", node, "\n");
                     valueForOperand[i] = node;
                 }
             }
@@ -313,11 +313,11 @@
                 VariableAccessData* variable = m_variableForSSAIndex[phiDef->variable()->index()];
                 
                 m_insertionSet.insert(phiInsertionPoint, phiDef->value());
-                valueForOperand.operand(variable->local()) = phiDef->value();
+                valueForOperand.operand(variable->operand()) = phiDef->value();
                 
                 m_insertionSet.insertNode(
                     phiInsertionPoint, SpecNone, MovHint, block->at(0)->origin.withInvalidExit(),
-                    OpInfo(variable->local().offset()), phiDef->value()->defaultEdge());
+                    OpInfo(variable->operand()), phiDef->value()->defaultEdge());
             }
 
             if (block->at(0)->origin.exitOK)
@@ -337,7 +337,7 @@
                 case MovHint: {
                     m_insertionSet.insertNode(
                         nodeIndex, SpecNone, KillStack, node->origin,
-                        OpInfo(node->unlinkedLocal().offset()));
+                        OpInfo(node->unlinkedOperand()));
                     node->origin.exitOK = false; // KillStack clobbers exit.
                     break;
                 }
@@ -349,19 +349,19 @@
                     if (!!(node->flags() & NodeIsFlushed)) {
                         node->convertToPutStack(
                             m_graph.m_stackAccessData.add(
-                                variable->local(), variable->flushFormat()));
+                                variable->operand(), variable->flushFormat()));
                     } else
                         node->remove(m_graph);
                     
                     if (verbose)
-                        dataLog("Mapping: ", variable->local(), " -> ", child, "\n");
-                    valueForOperand.operand(variable->local()) = child;
+                        dataLog("Mapping: ", variable->operand(), " -> ", child, "\n");
+                    valueForOperand.operand(variable->operand()) = child;
                     break;
                 }
                     
                 case GetStack: {
                     ASSERT(m_argumentGetters.contains(node));
-                    valueForOperand.operand(node->stackAccessData()->local) = node;
+                    valueForOperand.operand(node->stackAccessData()->operand) = node;
                     break;
                 }
                     
@@ -371,8 +371,8 @@
                     
                     node->remove(m_graph);
                     if (verbose)
-                        dataLog("Replacing node ", node, " with ", valueForOperand.operand(variable->local()), "\n");
-                    node->setReplacement(valueForOperand.operand(variable->local()));
+                        dataLog("Replacing node ", node, " with ", valueForOperand.operand(variable->operand()), "\n");
+                    node->setReplacement(valueForOperand.operand(variable->operand()));
                     break;
                 }
                     
@@ -385,7 +385,7 @@
                 case PhantomLocal: {
                     ASSERT(node->child1().useKind() == UntypedUse);
                     VariableAccessData* variable = node->variableAccessData();
-                    node->child1() = valueForOperand.operand(variable->local())->defaultEdge();
+                    node->child1() = valueForOperand.operand(variable->operand())->defaultEdge();
                     node->remove(m_graph);
                     break;
                 }
@@ -425,12 +425,12 @@
                     // is not exitOK.
                     UseKind useKind = uncheckedUseKindFor(format);
 
-                    dataLogLnIf(verbose, "Inserting Upsilon for ", variable->local(), " propagating ", valueForOperand.operand(variable->local()), " to ", phiNode);
+                    dataLogLnIf(verbose, "Inserting Upsilon for ", variable->operand(), " propagating ", valueForOperand.operand(variable->operand()), " to ", phiNode);
                     
                     m_insertionSet.insertNode(
                         upsilonInsertionPoint, SpecNone, Upsilon, upsilonOrigin,
                         OpInfo(phiNode), Edge(
-                            valueForOperand.operand(variable->local()),
+                            valueForOperand.operand(variable->operand()),
                             useKind));
                 }
             }
diff --git a/Source/JavaScriptCore/dfg/DFGSafeToExecute.h b/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
index f91ba48..203387b 100644
--- a/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
+++ b/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
@@ -323,6 +323,7 @@
     case TailCallVarargsInlinedCaller:
     case TailCallForwardVarargsInlinedCaller:
     case ConstructVarargs:
+    case VarargsLength:
     case LoadVarargs:
     case CallForwardVarargs:
     case ConstructForwardVarargs:
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
index 934637e..68f523f 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
@@ -1778,7 +1778,7 @@
     Node* child = node->child1().node();
     noticeOSRBirth(child);
     
-    m_stream->appendAndLog(VariableEvent::movHint(MinifiedID(child), node->unlinkedLocal()));
+    m_stream->appendAndLog(VariableEvent::movHint(MinifiedID(child), node->unlinkedOperand()));
 }
 
 void SpeculativeJIT::compileCheckNeutered(Node* node)
@@ -1843,7 +1843,7 @@
     m_state.beginBasicBlock(m_block);
     
     for (size_t i = m_block->variablesAtHead.size(); i--;) {
-        int operand = m_block->variablesAtHead.operandForIndex(i);
+        Operand operand = m_block->variablesAtHead.operandForIndex(i);
         Node* node = m_block->variablesAtHead[i];
         if (!node)
             continue; // No need to record dead SetLocal's.
@@ -1853,11 +1853,8 @@
         if (!node->refCount())
             continue; // No need to record dead SetLocal's.
         format = dataFormatFor(variable->flushFormat());
-        m_stream->appendAndLog(
-            VariableEvent::setLocal(
-                VirtualRegister(operand),
-                variable->machineLocal(),
-                format));
+        DFG_ASSERT(m_jit.graph(), node, !operand.isArgument() || operand.virtualRegister().toArgument() >= 0);
+        m_stream->appendAndLog(VariableEvent::setLocal(operand, variable->machineLocal(), format));
     }
 
     m_origin = NodeOrigin();
@@ -1937,7 +1934,8 @@
         if (format == FlushedJSValue)
             continue;
         
-        VirtualRegister virtualRegister = variableAccessData->local();
+        VirtualRegister virtualRegister = variableAccessData->operand().virtualRegister();
+        ASSERT(virtualRegister.isArgument());
 
         JSValueSource valueSource = JSValueSource(JITCompiler::addressFor(virtualRegister));
         
@@ -7360,54 +7358,55 @@
     noResult(node);
 }
 
-void SpeculativeJIT::compileLoadVarargs(Node* node)
+void SpeculativeJIT::compileVarargsLength(Node* node)
 {
     LoadVarargsData* data = node->loadVarargsData();
 
     JSValueRegs argumentsRegs;
-    {
-        JSValueOperand arguments(this, node->child1());
-        argumentsRegs = arguments.jsValueRegs();
-        flushRegisters();
-    }
+    lock(GPRInfo::returnValueGPR);
+    JSValueOperand arguments(this, node->argumentsChild());
+    argumentsRegs = arguments.jsValueRegs();
+    flushRegisters();
+    unlock(GPRInfo::returnValueGPR);
 
     callOperation(operationSizeOfVarargs, GPRInfo::returnValueGPR, TrustedImmPtr::weakPointer(m_graph, m_graph.globalObjectFor(node->origin.semantic)), argumentsRegs, data->offset);
     m_jit.exceptionCheck();
 
     lock(GPRInfo::returnValueGPR);
-    {
-        JSValueOperand arguments(this, node->child1());
-        argumentsRegs = arguments.jsValueRegs();
-        flushRegisters();
-    }
+    GPRTemporary argCountIncludingThis(this);
+    GPRReg argCountIncludingThisGPR = argCountIncludingThis.gpr();
     unlock(GPRInfo::returnValueGPR);
 
-    // FIXME: There is a chance that we will call an effectful length property twice. This is safe
-    // from the standpoint of the VM's integrity, but it's subtly wrong from a spec compliance
-    // standpoint. The best solution would be one where we can exit *into* the op_call_varargs right
-    // past the sizing.
-    // https://bugs.webkit.org/show_bug.cgi?id=141448
-
-    GPRReg argCountIncludingThisGPR =
-        JITCompiler::selectScratchGPR(GPRInfo::returnValueGPR, argumentsRegs);
-
     m_jit.add32(TrustedImm32(1), GPRInfo::returnValueGPR, argCountIncludingThisGPR);
 
+    int32Result(argCountIncludingThisGPR, node);
+}
+
+void SpeculativeJIT::compileLoadVarargs(Node* node)
+{
+    LoadVarargsData* data = node->loadVarargsData();
+
+    SpeculateStrictInt32Operand argumentCount(this, node->child1());
+    JSValueOperand arguments(this, node->argumentsChild());
+    GPRReg argumentCountIncludingThis = argumentCount.gpr();
+    JSValueRegs argumentsRegs = arguments.jsValueRegs();
+
     speculationCheck(
-        VarargsOverflow, JSValueSource(), Edge(), m_jit.branch32(
-            MacroAssembler::Above,
-            GPRInfo::returnValueGPR,
-            argCountIncludingThisGPR));
+        VarargsOverflow, JSValueSource(), Edge(), m_jit.branchTest32(
+            MacroAssembler::Zero,
+            argumentCountIncludingThis));
 
     speculationCheck(
         VarargsOverflow, JSValueSource(), Edge(), m_jit.branch32(
             MacroAssembler::Above,
-            argCountIncludingThisGPR,
+            argumentCountIncludingThis,
             TrustedImm32(data->limit)));
 
-    m_jit.store32(argCountIncludingThisGPR, JITCompiler::payloadFor(data->machineCount));
+    flushRegisters();
 
-    callOperation(operationLoadVarargs, TrustedImmPtr::weakPointer(m_graph, m_graph.globalObjectFor(node->origin.semantic)), data->machineStart.offset(), argumentsRegs, data->offset, GPRInfo::returnValueGPR, data->mandatoryMinimum);
+    m_jit.store32(argumentCountIncludingThis, JITCompiler::payloadFor(data->machineCount));
+
+    callOperation(operationLoadVarargs, TrustedImmPtr::weakPointer(m_graph, m_graph.globalObjectFor(node->origin.semantic)), data->machineStart.offset(), argumentsRegs, data->offset, argumentCountIncludingThis, data->mandatoryMinimum);
     m_jit.exceptionCheck();
 
     noResult(node);
@@ -7417,17 +7416,19 @@
 {
     LoadVarargsData* data = node->loadVarargsData();
     InlineCallFrame* inlineCallFrame;
-    if (node->child1())
-        inlineCallFrame = node->child1()->origin.semantic.inlineCallFrame();
+    if (node->argumentsChild())
+        inlineCallFrame = node->argumentsChild()->origin.semantic.inlineCallFrame();
     else
         inlineCallFrame = node->origin.semantic.inlineCallFrame();
 
+    SpeculateStrictInt32Operand argumentCount(this, node->child1());
     GPRTemporary length(this);
     JSValueRegsTemporary temp(this);
-    GPRReg lengthGPR = length.gpr();
+    GPRReg argumentCountIncludingThis = argumentCount.gpr();
+    GPRReg lengthGPR = length.gpr();
     JSValueRegs tempRegs = temp.regs();
-        
-    emitGetLength(inlineCallFrame, lengthGPR, /* includeThis = */ true);
+
+    m_jit.move(argumentCountIncludingThis, lengthGPR);
     if (data->offset)
         m_jit.sub32(TrustedImm32(data->offset), lengthGPR);
         
@@ -7586,7 +7587,7 @@
     auto* inlineCallFrame = node->origin.semantic.inlineCallFrame();
     if (inlineCallFrame
         && !inlineCallFrame->isVarargs()) {
-        knownLength = inlineCallFrame->argumentCountIncludingThis - 1;
+        knownLength = static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1);
         lengthIsKnown = true;
     } else {
         knownLength = UINT_MAX;
@@ -12491,7 +12492,7 @@
     if (InlineCallFrame* inlineCallFrame = node->argumentsInlineCallFrame())
         argumentCountRegister = inlineCallFrame->argumentCountRegister;
     else
-        argumentCountRegister = VirtualRegister(CallFrameSlot::argumentCountIncludingThis);
+        argumentCountRegister = CallFrameSlot::argumentCountIncludingThis;
     m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), result.gpr());
     int32Result(result.gpr(), node);
 }
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
index e10b578..8f96182 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
@@ -1370,6 +1370,7 @@
     void compileSetFunctionName(Node*);
     void compileNewRegexp(Node*);
     void compileForwardVarargs(Node*);
+    void compileVarargsLength(Node*);
     void compileLoadVarargs(Node*);
     void compileCreateActivation(Node*);
     void compileCreateDirectArguments(Node*);
@@ -1648,15 +1649,16 @@
     void cageTypedArrayStorage(GPRReg, GPRReg);
     
     void recordSetLocal(
-        VirtualRegister bytecodeReg, VirtualRegister machineReg, DataFormat format)
+        Operand bytecodeReg, VirtualRegister machineReg, DataFormat format)
     {
+        ASSERT(!bytecodeReg.isArgument() || bytecodeReg.virtualRegister().toArgument() >= 0);
         m_stream->appendAndLog(VariableEvent::setLocal(bytecodeReg, machineReg, format));
     }
     
     void recordSetLocal(DataFormat format)
     {
         VariableAccessData* variable = m_currentNode->variableAccessData();
-        recordSetLocal(variable->local(), variable->machineLocal(), format);
+        recordSetLocal(variable->operand(), variable->machineLocal(), format);
     }
 
     GenerationInfo& generationInfoFromVirtualRegister(VirtualRegister virtualRegister)
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
index 72b89a2..d45edc1 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
@@ -1828,7 +1828,7 @@
         break;
 
     case GetLocal: {
-        AbstractValue& value = m_state.operand(node->local());
+        AbstractValue& value = m_state.operand(node->operand());
 
         // If the CFA is tracking this variable and it found that the variable
         // cannot have been assigned, then don't attempt to proceed.
@@ -1912,7 +1912,7 @@
     }
         
     case ZombieHint: {
-        recordSetLocal(m_currentNode->unlinkedLocal(), VirtualRegister(), DataFormatDead);
+        recordSetLocal(m_currentNode->unlinkedOperand(), VirtualRegister(), DataFormatDead);
         noResult(node);
         break;
     }
@@ -3834,6 +3834,11 @@
         emitCall(node);
         break;
 
+    case VarargsLength: {
+        compileVarargsLength(node);
+        break;
+    }
+
     case LoadVarargs: {
         compileLoadVarargs(node);
         break;
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
index 2def8ff..ec7f337 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
@@ -1936,7 +1936,7 @@
         break;
 
     case GetLocal: {
-        AbstractValue& value = m_state.operand(node->local());
+        AbstractValue& value = m_state.operand(node->operand());
 
         // If the CFA is tracking this variable and it found that the variable
         // cannot have been assigned, then don't attempt to proceed.
@@ -2007,7 +2007,7 @@
     }
         
     case ZombieHint: {
-        recordSetLocal(m_currentNode->unlinkedLocal(), VirtualRegister(), DataFormatDead);
+        recordSetLocal(m_currentNode->unlinkedOperand(), VirtualRegister(), DataFormatDead);
         noResult(node);
         break;
     }
@@ -4474,6 +4474,11 @@
         emitCall(node);
         break;
 
+    case VarargsLength: {
+        compileVarargsLength(node);
+        break;
+    }
+
     case LoadVarargs: {
         compileLoadVarargs(node);
         break;
diff --git a/Source/JavaScriptCore/dfg/DFGStackLayoutPhase.cpp b/Source/JavaScriptCore/dfg/DFGStackLayoutPhase.cpp
index 168b3df..273f7d4 100644
--- a/Source/JavaScriptCore/dfg/DFGStackLayoutPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGStackLayoutPhase.cpp
@@ -51,7 +51,7 @@
         // treat a variable as being "used" if there exists an access to it (SetLocal, GetLocal,
         // Flush, PhantomLocal).
         
-        BitVector usedLocals;
+        Operands<bool> usedOperands(0, graph().m_localVars, graph().m_tmps, false);
         
         // Collect those variables that are used from IR.
         bool hasNodesThatNeedFixup = false;
@@ -67,23 +67,22 @@
                 case Flush:
                 case PhantomLocal: {
                     VariableAccessData* variable = node->variableAccessData();
-                    if (variable->local().isArgument())
+                    if (variable->operand().isArgument())
                         break;
-                    usedLocals.set(variable->local().toLocal());
+                    usedOperands.setOperand(variable->operand(), true);
                     break;
                 }
                     
                 case LoadVarargs:
                 case ForwardVarargs: {
                     LoadVarargsData* data = node->loadVarargsData();
-                    if (data->count.isLocal())
-                        usedLocals.set(data->count.toLocal());
+                    usedOperands.setOperand(data->count, true);
                     if (data->start.isLocal()) {
                         // This part really relies on the contiguity of stack layout
                         // assignments.
                         ASSERT(VirtualRegister(data->start.offset() + data->limit - 1).isLocal());
                         for (unsigned i = data->limit; i--;) 
-                            usedLocals.set(VirtualRegister(data->start.offset() + i).toLocal());
+                            usedOperands.setOperand(VirtualRegister(data->start.offset() + i), true);
                     } // the else case shouldn't happen.
                     hasNodesThatNeedFixup = true;
                     break;
@@ -92,9 +91,9 @@
                 case PutStack:
                 case GetStack: {
                     StackAccessData* stack = node->stackAccessData();
-                    if (stack->local.isArgument())
+                    if (stack->operand.isArgument())
                         break;
-                    usedLocals.set(stack->local.toLocal());
+                    usedOperands.setOperand(stack->operand, true);
                     break;
                 }
                     
@@ -108,21 +107,21 @@
             InlineCallFrame* inlineCallFrame = *iter;
             
             if (inlineCallFrame->isVarargs()) {
-                usedLocals.set(VirtualRegister(
-                    CallFrameSlot::argumentCountIncludingThis + inlineCallFrame->stackOffset).toLocal());
+                usedOperands.setOperand(VirtualRegister(
+                    CallFrameSlot::argumentCountIncludingThis + inlineCallFrame->stackOffset), true);
             }
             
             for (unsigned argument = inlineCallFrame->argumentsWithFixup.size(); argument--;) {
-                usedLocals.set(VirtualRegister(
-                    virtualRegisterForArgument(argument).offset() +
-                    inlineCallFrame->stackOffset).toLocal());
+                usedOperands.setOperand(VirtualRegister(
+                    virtualRegisterForArgumentIncludingThis(argument).offset() +
+                    inlineCallFrame->stackOffset), true);
             }
         }
         
-        Vector<unsigned> allocation(usedLocals.size());
+        Vector<unsigned> allocation(usedOperands.size());
         m_graph.m_nextMachineLocal = codeBlock()->calleeSaveSpaceAsVirtualRegisters();
-        for (unsigned i = 0; i < usedLocals.size(); ++i) {
-            if (!usedLocals.get(i)) {
+        for (unsigned i = 0; i < usedOperands.size(); ++i) {
+            if (!usedOperands.getForOperandIndex(i)) {
                 allocation[i] = UINT_MAX;
                 continue;
             }
@@ -135,49 +134,50 @@
             if (!variable->isRoot())
                 continue;
             
-            if (variable->local().isArgument()) {
-                variable->machineLocal() = variable->local();
+            if (variable->operand().isArgument()) {
+                variable->machineLocal() = variable->operand().virtualRegister();
                 continue;
             }
             
-            size_t local = variable->local().toLocal();
-            if (local >= allocation.size())
+            Operand operand = variable->operand();
+            size_t index = usedOperands.operandIndex(operand);
+            if (index >= allocation.size())
                 continue;
             
-            if (allocation[local] == UINT_MAX)
+            if (allocation[index] == UINT_MAX)
                 continue;
             
-            variable->machineLocal() = assign(allocation, variable->local());
+            variable->machineLocal() = assign(usedOperands, allocation, variable->operand());
         }
         
         for (StackAccessData* data : m_graph.m_stackAccessData) {
-            if (!data->local.isLocal()) {
-                data->machineLocal = data->local;
+            if (data->operand.isArgument()) {
+                data->machineLocal = data->operand.virtualRegister();
                 continue;
             }
             
-            if (static_cast<size_t>(data->local.toLocal()) >= allocation.size())
-                continue;
-            if (allocation[data->local.toLocal()] == UINT_MAX)
-                continue;
+            if (data->operand.isLocal()) {
+                if (static_cast<size_t>(data->operand.toLocal()) >= allocation.size())
+                    continue;
+                if (allocation[data->operand.toLocal()] == UINT_MAX)
+                    continue;
+            }
             
-            data->machineLocal = assign(allocation, data->local);
+            data->machineLocal = assign(usedOperands, allocation, data->operand);
         }
         
         if (!m_graph.needsScopeRegister())
             codeBlock()->setScopeRegister(VirtualRegister());
         else
-            codeBlock()->setScopeRegister(assign(allocation, codeBlock()->scopeRegister()));
+            codeBlock()->setScopeRegister(assign(usedOperands, allocation, codeBlock()->scopeRegister()));
 
         for (unsigned i = m_graph.m_inlineVariableData.size(); i--;) {
             InlineVariableData data = m_graph.m_inlineVariableData[i];
             InlineCallFrame* inlineCallFrame = data.inlineCallFrame;
             
-            if (inlineCallFrame->isVarargs()) {
-                inlineCallFrame->argumentCountRegister = assign(
-                    allocation, VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis));
-            }
-            
+            if (inlineCallFrame->isVarargs())
+                inlineCallFrame->argumentCountRegister = assign(usedOperands, allocation, VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis));
+
             for (unsigned argument = inlineCallFrame->argumentsWithFixup.size(); argument--;) {
                 ArgumentPosition& position = m_graph.m_argumentPositions[
                     data.argumentPositionStart + argument];
@@ -215,8 +215,8 @@
                     case LoadVarargs:
                     case ForwardVarargs: {
                         LoadVarargsData* data = node->loadVarargsData();
-                        data->machineCount = assign(allocation, data->count);
-                        data->machineStart = assign(allocation, data->start);
+                        data->machineCount = assign(usedOperands, allocation, data->count);
+                        data->machineStart = assign(usedOperands, allocation, data->start);
                         break;
                     }
                         
@@ -231,17 +231,16 @@
     }
 
 private:
-    VirtualRegister assign(const Vector<unsigned>& allocation, VirtualRegister src)
+    VirtualRegister assign(const Operands<bool>& usedOperands, const Vector<unsigned>& allocation, Operand operand)
     {
-        VirtualRegister result = src;
-        if (result.isLocal()) {
-            unsigned myAllocation = allocation[result.toLocal()];
-            if (myAllocation == UINT_MAX)
-                result = VirtualRegister();
-            else
-                result = virtualRegisterForLocal(myAllocation);
-        }
-        return result;
+        if (operand.isArgument())
+            return operand.virtualRegister();
+
+        size_t operandIndex = usedOperands.operandIndex(operand);
+        unsigned myAllocation = allocation[operandIndex];
+        if (myAllocation == UINT_MAX)
+            return VirtualRegister();
+        return virtualRegisterForLocal(myAllocation);
     }
 };
 
diff --git a/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp b/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
index 18e67e7..b3e41d8 100644
--- a/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
@@ -285,17 +285,17 @@
             }
             
             Node* setLocal = nullptr;
-            VirtualRegister local = m_node->local();
+            Operand operand = m_node->operand();
             
             for (unsigned i = m_nodeIndex; i--;) {
                 Node* node = m_block->at(i);
 
-                if (node->op() == SetLocal && node->local() == local) {
+                if (node->op() == SetLocal && node->operand() == operand) {
                     setLocal = node;
                     break;
                 }
 
-                if (accessesOverlap(m_graph, node, AbstractHeap(Stack, local)))
+                if (accessesOverlap(m_graph, node, AbstractHeap(Stack, operand)))
                     break;
 
             }
diff --git a/Source/JavaScriptCore/dfg/DFGThunks.cpp b/Source/JavaScriptCore/dfg/DFGThunks.cpp
index 6acafbf..f1b1ff6 100644
--- a/Source/JavaScriptCore/dfg/DFGThunks.cpp
+++ b/Source/JavaScriptCore/dfg/DFGThunks.cpp
@@ -40,14 +40,6 @@
 
 namespace JSC { namespace DFG {
 
-MacroAssemblerCodeRef<JITThunkPtrTag> osrExitThunkGenerator(VM& vm)
-{
-    CCallHelpers jit(nullptr);
-    jit.probe(OSRExit::executeOSRExit, &vm);
-    LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID);
-    return FINALIZE_CODE(patchBuffer, JITThunkPtrTag, "DFG OSR exit thunk");
-}
-
 MacroAssemblerCodeRef<JITThunkPtrTag> osrExitGenerationThunkGenerator(VM& vm)
 {
     CCallHelpers jit(nullptr);
diff --git a/Source/JavaScriptCore/dfg/DFGThunks.h b/Source/JavaScriptCore/dfg/DFGThunks.h
index 05a50a5..8b5c3f0 100644
--- a/Source/JavaScriptCore/dfg/DFGThunks.h
+++ b/Source/JavaScriptCore/dfg/DFGThunks.h
@@ -35,7 +35,6 @@
 
 namespace DFG {
 
-MacroAssemblerCodeRef<JITThunkPtrTag> osrExitThunkGenerator(VM&);
 MacroAssemblerCodeRef<JITThunkPtrTag> osrExitGenerationThunkGenerator(VM&);
 MacroAssemblerCodeRef<JITThunkPtrTag> osrEntryThunkGenerator(VM&);
 
diff --git a/Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp b/Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp
index 0e25d7d..f5b9ed2 100644
--- a/Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp
@@ -146,9 +146,9 @@
                     if (iter->value.m_structure) {
                         auto checkOp = CheckStructure;
                         if (SpecCellCheck & SpecEmpty) {
-                            VirtualRegister local = node->variableAccessData()->local();
+                            VirtualRegister local = node->variableAccessData()->operand().virtualRegister();
                             auto* inlineCallFrame = node->origin.semantic.inlineCallFrame();
-                            if ((local - (inlineCallFrame ? inlineCallFrame->stackOffset : 0)) == virtualRegisterForArgument(0)) {
+                            if ((local - (inlineCallFrame ? inlineCallFrame->stackOffset : 0)) == virtualRegisterForArgumentIncludingThis(0)) {
                                 // |this| can be the TDZ value. The call entrypoint won't have |this| as TDZ,
                                 // but a catch or a loop OSR entry may have |this| be TDZ.
                                 checkOp = CheckStructureOrEmpty;
@@ -168,8 +168,8 @@
                     } else
                         RELEASE_ASSERT_NOT_REACHED();
 
-                    if (block->variablesAtTail.operand(variable->local()) == node)
-                        block->variablesAtTail.operand(variable->local()) = getLocal;
+                    if (block->variablesAtTail.operand(variable->operand()) == node)
+                        block->variablesAtTail.operand(variable->operand()) = getLocal;
                     
                     m_graph.substituteGetLocal(*block, indexInBlock, variable, getLocal);
                     
@@ -448,7 +448,7 @@
                 continue;
             const Operands<Optional<JSValue>>& mustHandleValues = m_graph.m_plan.mustHandleValues();
             for (size_t i = 0; i < mustHandleValues.size(); ++i) {
-                int operand = mustHandleValues.operandForIndex(i);
+                Operand operand = mustHandleValues.operandForIndex(i);
                 Node* node = block->variablesAtHead.operand(operand);
                 if (!node)
                     continue;
diff --git a/Source/JavaScriptCore/dfg/DFGValidate.cpp b/Source/JavaScriptCore/dfg/DFGValidate.cpp
index 7ea9348..e941887 100644
--- a/Source/JavaScriptCore/dfg/DFGValidate.cpp
+++ b/Source/JavaScriptCore/dfg/DFGValidate.cpp
@@ -483,7 +483,7 @@
                 Node* node = block->phis[i];
                 ASSERT(phisInThisBlock.contains(node));
                 VALIDATE((node), node->op() == Phi);
-                VirtualRegister local = node->local();
+                Operand operand = node->operand();
                 for (unsigned j = 0; j < m_graph.numChildren(node); ++j) {
                     // Phi children in LoadStore form are invalid.
                     if (m_graph.m_form == LoadStore && block->isPhiIndex(i))
@@ -519,10 +519,10 @@
                     for (unsigned k = 0; k < block->predecessors.size(); ++k) {
                         BasicBlock* prevBlock = block->predecessors[k];
                         VALIDATE((block->predecessors[k]), prevBlock);
-                        Node* prevNode = prevBlock->variablesAtTail.operand(local);
+                        Node* prevNode = prevBlock->variablesAtTail.operand(operand);
                         // If we have a Phi that is not referring to *this* block then all predecessors
                         // must have that local available.
-                        VALIDATE((local, block, block->predecessors[k]), prevNode);
+                        VALIDATE((operand, block, block->predecessors[k]), prevNode);
                         switch (prevNode->op()) {
                         case GetLocal:
                         case Flush:
@@ -533,11 +533,11 @@
                             break;
                         }
                         if (node->shouldGenerate()) {
-                            VALIDATE((local, block->predecessors[k], prevNode),
+                            VALIDATE((operand, block->predecessors[k], prevNode),
                                      prevNode->shouldGenerate());
                         }
                         VALIDATE(
-                            (local, block->predecessors[k], prevNode),
+                            (operand, block->predecessors[k], prevNode),
                             prevNode->op() == SetLocal
                             || prevNode->op() == SetArgumentDefinitely
                             || prevNode->op() == SetArgumentMaybe
@@ -547,24 +547,20 @@
                             break;
                         }
                         // At this point it cannot refer into this block.
-                        VALIDATE((local, block->predecessors[k], prevNode), !prevBlock->isInBlock(edge.node()));
+                        VALIDATE((operand, block->predecessors[k], prevNode), !prevBlock->isInBlock(edge.node()));
                     }
                     
                     VALIDATE((node, edge), found);
                 }
             }
             
-            Operands<size_t> getLocalPositions(
-                block->variablesAtHead.numberOfArguments(),
-                block->variablesAtHead.numberOfLocals());
-            Operands<size_t> setLocalPositions(
-                block->variablesAtHead.numberOfArguments(),
-                block->variablesAtHead.numberOfLocals());
+            Operands<size_t> getLocalPositions(OperandsLike, block->variablesAtHead);
+            Operands<size_t> setLocalPositions(OperandsLike, block->variablesAtHead);
             
             for (size_t i = 0; i < block->variablesAtHead.numberOfArguments(); ++i) {
-                VALIDATE((virtualRegisterForArgument(i), block), !block->variablesAtHead.argument(i) || block->variablesAtHead.argument(i)->accessesStack(m_graph));
+                VALIDATE((virtualRegisterForArgumentIncludingThis(i), block), !block->variablesAtHead.argument(i) || block->variablesAtHead.argument(i)->accessesStack(m_graph));
                 if (m_graph.m_form == ThreadedCPS)
-                    VALIDATE((virtualRegisterForArgument(i), block), !block->variablesAtTail.argument(i) || block->variablesAtTail.argument(i)->accessesStack(m_graph));
+                    VALIDATE((virtualRegisterForArgumentIncludingThis(i), block), !block->variablesAtTail.argument(i) || block->variablesAtTail.argument(i)->accessesStack(m_graph));
                 
                 getLocalPositions.argument(i) = notSet;
                 setLocalPositions.argument(i) = notSet;
@@ -668,24 +664,24 @@
                     if (!m_myRefCounts.get(node))
                         break;
                     if (m_graph.m_form == ThreadedCPS) {
-                        VALIDATE((node, block), getLocalPositions.operand(node->local()) == notSet);
+                        VALIDATE((node, block), getLocalPositions.operand(node->operand()) == notSet);
                         VALIDATE((node, block), !!node->child1());
                         VALIDATE((node, block), node->child1()->op() == SetArgumentDefinitely || node->child1()->op() == Phi);
                     }
-                    getLocalPositions.operand(node->local()) = i;
+                    getLocalPositions.operand(node->operand()) = i;
                     break;
                 case SetLocal:
                     // Only record the first SetLocal. There may be multiple SetLocals
                     // because of flushing.
-                    if (setLocalPositions.operand(node->local()) != notSet)
+                    if (setLocalPositions.operand(node->operand()) != notSet)
                         break;
-                    setLocalPositions.operand(node->local()) = i;
+                    setLocalPositions.operand(node->operand()) = i;
                     break;
                 case SetArgumentDefinitely:
                     // This acts like a reset. It's ok to have a second GetLocal for a local in the same
                     // block if we had a SetArgumentDefinitely for that local.
-                    getLocalPositions.operand(node->local()) = notSet;
-                    setLocalPositions.operand(node->local()) = notSet;
+                    getLocalPositions.operand(node->operand()) = notSet;
+                    setLocalPositions.operand(node->operand()) = notSet;
                     break;
                 case SetArgumentMaybe:
                     break;
@@ -711,7 +707,7 @@
             
             for (size_t i = 0; i < block->variablesAtHead.numberOfArguments(); ++i) {
                 checkOperand(
-                    block, getLocalPositions, setLocalPositions, virtualRegisterForArgument(i));
+                    block, getLocalPositions, setLocalPositions, virtualRegisterForArgumentIncludingThis(i));
             }
             for (size_t i = 0; i < block->variablesAtHead.numberOfLocals(); ++i) {
                 checkOperand(
@@ -968,26 +964,26 @@
         dataLog(node, " -> ", edge);
     }
     
-    void reportValidationContext(VirtualRegister local, BasicBlock* block)
+    void reportValidationContext(Operand operand, BasicBlock* block)
     {
         if (!block) {
-            dataLog(local, " in null Block ");
+            dataLog(operand, " in null Block ");
             return;
         }
 
-        dataLog(local, " in Block ", *block);
+        dataLog(operand, " in Block ", *block);
     }
     
     void reportValidationContext(
-        VirtualRegister local, BasicBlock* sourceBlock, BasicBlock* destinationBlock)
+        Operand operand, BasicBlock* sourceBlock, BasicBlock* destinationBlock)
     {
-        dataLog(local, " in Block ", *sourceBlock, " -> ", *destinationBlock);
+        dataLog(operand, " in Block ", *sourceBlock, " -> ", *destinationBlock);
     }
     
     void reportValidationContext(
-        VirtualRegister local, BasicBlock* sourceBlock, Node* prevNode)
+        Operand operand, BasicBlock* sourceBlock, Node* prevNode)
     {
-        dataLog(prevNode, " for ", local, " in Block ", *sourceBlock);
+        dataLog(prevNode, " for ", operand, " in Block ", *sourceBlock);
     }
     
     void reportValidationContext(Node* node, BasicBlock* block)
diff --git a/Source/JavaScriptCore/dfg/DFGVarargsForwardingPhase.cpp b/Source/JavaScriptCore/dfg/DFGVarargsForwardingPhase.cpp
index 03e8920..74ea618 100644
--- a/Source/JavaScriptCore/dfg/DFGVarargsForwardingPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGVarargsForwardingPhase.cpp
@@ -41,11 +41,9 @@
 
 namespace {
 
-namespace DFGVarargsForwardingPhaseInternal {
-static constexpr bool verbose = false;
-}
 
 class VarargsForwardingPhase : public Phase {
+    static constexpr bool verbose = false;
 public:
     VarargsForwardingPhase(Graph& graph)
         : Phase(graph, "varargs forwarding")
@@ -56,7 +54,7 @@
     {
         DFG_ASSERT(m_graph, nullptr, m_graph.m_form != SSA);
         
-        if (DFGVarargsForwardingPhaseInternal::verbose) {
+        if (verbose) {
             dataLog("Graph before varargs forwarding:\n");
             m_graph.dump();
         }
@@ -88,7 +86,7 @@
         // We expect calls into this function to be rare. So, this is written in a simple O(n) manner.
         
         Node* candidate = block->at(candidateNodeIndex);
-        if (DFGVarargsForwardingPhaseInternal::verbose)
+        if (verbose)
             dataLog("Handling candidate ", candidate, "\n");
         
         // We eliminate GetButterfly over CreateClonedArguments if the butterfly is only
@@ -105,7 +103,7 @@
 
             auto defaultEscape = [&] {
                 if (m_graph.uses(node, candidate)) {
-                    if (DFGVarargsForwardingPhaseInternal::verbose)
+                    if (verbose)
                         dataLog("    Escape at ", node, "\n");
                     return true;
                 }
@@ -117,9 +115,10 @@
             case MovHint:
                 if (node->child1() != candidate)
                     break;
+                ASSERT_WITH_MESSAGE(!node->unlinkedOperand().isTmp(), "We don't currently support a tmp referring to an arguments object.");
                 lastUserIndex = nodeIndex;
-                if (!relevantLocals.contains(node->unlinkedLocal()))
-                    relevantLocals.append(node->unlinkedLocal());
+                if (!relevantLocals.contains(node->unlinkedOperand().virtualRegister()))
+                    relevantLocals.append(node->unlinkedOperand().virtualRegister());
                 break;
                 
             case CheckVarargs:
@@ -140,13 +139,14 @@
                         sawEscape = true;
                     });
                 if (sawEscape) {
-                    if (DFGVarargsForwardingPhaseInternal::verbose)
+                    if (verbose)
                         dataLog("    Escape at ", node, "\n");
                     return;
                 }
                 break;
             }
                 
+            case VarargsLength:
             case LoadVarargs:
                 if (m_graph.uses(node, candidate))
                     lastUserIndex = nodeIndex;
@@ -157,7 +157,7 @@
             case TailCallVarargs:
             case TailCallVarargsInlinedCaller:
                 if (node->child1() == candidate || node->child2() == candidate) {
-                    if (DFGVarargsForwardingPhaseInternal::verbose)
+                    if (verbose)
                         dataLog("    Escape at ", node, "\n");
                     return;
                 }
@@ -167,7 +167,7 @@
                 
             case SetLocal:
                 if (node->child1() == candidate && node->variableAccessData()->isLoadedFrom()) {
-                    if (DFGVarargsForwardingPhaseInternal::verbose)
+                    if (verbose)
                         dataLog("    Escape at ", node, "\n");
                     return;
                 }
@@ -226,7 +226,7 @@
             if (!validGetByOffset) {
                 for (Node* butterfly : candidateButterflies) {
                     if (m_graph.uses(node, butterfly)) {
-                        if (DFGVarargsForwardingPhaseInternal::verbose)
+                        if (verbose)
                             dataLog("    Butterfly escaped at ", node, "\n");
                         return;
                     }
@@ -235,11 +235,11 @@
 
             forAllKilledOperands(
                 m_graph, node, block->tryAt(nodeIndex + 1),
-                [&] (VirtualRegister reg) {
-                    if (DFGVarargsForwardingPhaseInternal::verbose)
-                        dataLog("    Killing ", reg, " while we are interested in ", listDump(relevantLocals), "\n");
+                [&] (Operand operand) {
+                    if (verbose)
+                        dataLog("    Killing ", operand, " while we are interested in ", listDump(relevantLocals), "\n");
                     for (unsigned i = 0; i < relevantLocals.size(); ++i) {
-                        if (relevantLocals[i] == reg) {
+                        if (operand == relevantLocals[i]) {
                             relevantLocals[i--] = relevantLocals.last();
                             relevantLocals.removeLast();
                             lastUserIndex = nodeIndex;
@@ -247,7 +247,7 @@
                     }
                 });
         }
-        if (DFGVarargsForwardingPhaseInternal::verbose)
+        if (verbose)
             dataLog("Selected lastUserIndex = ", lastUserIndex, ", ", block->at(lastUserIndex), "\n");
         
         // We're still in business. Determine if between the candidate and the last user there is any
@@ -263,16 +263,16 @@
             case MovHint:
             case ZombieHint:
             case KillStack:
-                if (argumentsInvolveStackSlot(candidate, node->unlinkedLocal())) {
-                    if (DFGVarargsForwardingPhaseInternal::verbose)
+                if (argumentsInvolveStackSlot(candidate, node->unlinkedOperand())) {
+                    if (verbose)
                         dataLog("    Interference at ", node, "\n");
                     return;
                 }
                 break;
                 
             case PutStack:
-                if (argumentsInvolveStackSlot(candidate, node->stackAccessData()->local)) {
-                    if (DFGVarargsForwardingPhaseInternal::verbose)
+                if (argumentsInvolveStackSlot(candidate, node->stackAccessData()->operand)) {
+                    if (verbose)
                         dataLog("    Interference at ", node, "\n");
                     return;
                 }
@@ -280,8 +280,8 @@
                 
             case SetLocal:
             case Flush:
-                if (argumentsInvolveStackSlot(candidate, node->local())) {
-                    if (DFGVarargsForwardingPhaseInternal::verbose)
+                if (argumentsInvolveStackSlot(candidate, node->operand())) {
+                    if (verbose)
                         dataLog("    Interference at ", node, "\n");
                     return;
                 }
@@ -297,13 +297,12 @@
                             return;
                         }
                         ASSERT(!heap.payload().isTop());
-                        VirtualRegister reg(heap.payload().value32());
-                        if (argumentsInvolveStackSlot(candidate, reg))
+                        if (argumentsInvolveStackSlot(candidate, heap.operand()))
                             doesInterfere = true;
                     },
                     NoOpClobberize());
                 if (doesInterfere) {
-                    if (DFGVarargsForwardingPhaseInternal::verbose)
+                    if (verbose)
                         dataLog("    Interference at ", node, "\n");
                     return;
                 }
@@ -311,7 +310,7 @@
         }
         
         // We can make this work.
-        if (DFGVarargsForwardingPhaseInternal::verbose)
+        if (verbose)
             dataLog("    Will do forwarding!\n");
         m_changed = true;
         
@@ -341,8 +340,16 @@
                 // We don't need to change anything with these.
                 break;
                 
+            case VarargsLength: {
+                if (node->argumentsChild() != candidate)
+                    break;
+
+                node->convertToIdentityOn(emitCodeToGetArgumentsArrayLength(insertionSet, candidate, nodeIndex, node->origin, /* addThis = */ true));
+                break;
+            }
+
             case LoadVarargs:
-                if (node->child1() != candidate)
+                if (node->argumentsChild() != candidate)
                     break;
                 node->setOpAndDefaultFlags(ForwardVarargs);
                 break;
diff --git a/Source/JavaScriptCore/dfg/DFGVariableAccessData.cpp b/Source/JavaScriptCore/dfg/DFGVariableAccessData.cpp
index 8ebffb4..2ac2ee3 100644
--- a/Source/JavaScriptCore/dfg/DFGVariableAccessData.cpp
+++ b/Source/JavaScriptCore/dfg/DFGVariableAccessData.cpp
@@ -31,8 +31,7 @@
 namespace JSC { namespace DFG {
 
 VariableAccessData::VariableAccessData()
-    : m_local(static_cast<VirtualRegister>(std::numeric_limits<int>::min()))
-    , m_prediction(SpecNone)
+    : m_prediction(SpecNone)
     , m_argumentAwarePrediction(SpecNone)
     , m_flags(0)
     , m_shouldNeverUnbox(false)
@@ -45,10 +44,10 @@
     clearVotes();
 }
 
-VariableAccessData::VariableAccessData(VirtualRegister local)
-    : m_local(local)
-    , m_prediction(SpecNone)
+VariableAccessData::VariableAccessData(Operand operand)
+    : m_prediction(SpecNone)
     , m_argumentAwarePrediction(SpecNone)
+    , m_operand(operand)
     , m_flags(0)
     , m_shouldNeverUnbox(false)
     , m_structureCheckHoistingFailed(false)
@@ -87,7 +86,7 @@
 {
     // We don't support this facility for arguments, yet.
     // FIXME: make this work for arguments.
-    if (local().isArgument())
+    if (operand().isArgument())
         return false;
         
     // If the variable is not a number prediction, then this doesn't
@@ -121,7 +120,7 @@
 {
     ASSERT(isRoot());
         
-    if (local().isArgument() || shouldNeverUnbox()
+    if (operand().isArgument() || shouldNeverUnbox()
         || (flags() & NodeBytecodeUsesAsArrayIndex))
         return DFG::mergeDoubleFormatState(m_doubleFormatState, NotUsingDoubleFormat);
     
@@ -176,7 +175,7 @@
         return false;
     
     // We punt for machine arguments.
-    if (m_local.isArgument())
+    if (operand().isArgument())
         return false;
     
     // The argument-aware prediction -- which merges all of an (inlined or machine)
diff --git a/Source/JavaScriptCore/dfg/DFGVariableAccessData.h b/Source/JavaScriptCore/dfg/DFGVariableAccessData.h
index 1d3892f..be2fb54 100644
--- a/Source/JavaScriptCore/dfg/DFGVariableAccessData.h
+++ b/Source/JavaScriptCore/dfg/DFGVariableAccessData.h
@@ -48,12 +48,12 @@
     WTF_MAKE_NONCOPYABLE(VariableAccessData);
 public:
     VariableAccessData();
-    VariableAccessData(VirtualRegister local);
+    VariableAccessData(Operand);
     
-    VirtualRegister local()
+    Operand operand()
     {
-        ASSERT(m_local == find()->m_local);
-        return m_local;
+        ASSERT(m_operand == find()->m_operand);
+        return m_operand;
     }
     
     VirtualRegister& machineLocal()
@@ -208,10 +208,10 @@
     // putting them here simplifies the code, and we don't expect DFG space
     // usage for variable access nodes do be significant.
 
-    VirtualRegister m_local;
-    VirtualRegister m_machineLocal;
     SpeculatedType m_prediction;
     SpeculatedType m_argumentAwarePrediction;
+    Operand m_operand;
+    VirtualRegister m_machineLocal;
     NodeFlags m_flags;
 
     bool m_shouldNeverUnbox;
diff --git a/Source/JavaScriptCore/dfg/DFGVariableEvent.cpp b/Source/JavaScriptCore/dfg/DFGVariableEvent.cpp
index d3b66cb..9ce0f67 100644
--- a/Source/JavaScriptCore/dfg/DFGVariableEvent.cpp
+++ b/Source/JavaScriptCore/dfg/DFGVariableEvent.cpp
@@ -60,11 +60,11 @@
         out.print("Death(", id(), ")");
         break;
     case MovHintEvent:
-        out.print("MovHint(", id(), ", ", bytecodeRegister(), ")");
+        out.print("MovHint(", id(), ", ", operand(), ")");
         break;
     case SetLocalEvent:
         out.print(
-            "SetLocal(machine:", machineRegister(), " -> bytecode:", bytecodeRegister(),
+            "SetLocal(machine:", machineRegister(), " -> bytecode:", operand(),
             ", ", dataFormatToString(dataFormat()), ")");
         break;
     default:
diff --git a/Source/JavaScriptCore/dfg/DFGVariableEvent.h b/Source/JavaScriptCore/dfg/DFGVariableEvent.h
index ce2cac0..7c9922c 100644
--- a/Source/JavaScriptCore/dfg/DFGVariableEvent.h
+++ b/Source/JavaScriptCore/dfg/DFGVariableEvent.h
@@ -69,6 +69,10 @@
 };
 
 union VariableRepresentation {
+    VariableRepresentation() 
+        : operand() 
+    { }
+
     MacroAssembler::RegisterID gpr;
     MacroAssembler::FPRegisterID fpr;
 #if USE(JSVALUE32_64)
@@ -77,7 +81,7 @@
         MacroAssembler::RegisterID payloadGPR;
     } pair;
 #endif
-    int32_t virtualReg;
+    Operand operand;
 };
 
 class VariableEvent {
@@ -163,7 +167,7 @@
         WhichType which;
         which.id = id.bits();
         VariableRepresentation representation;
-        representation.virtualReg = virtualRegister.offset();
+        representation.operand = virtualRegister;
         event.m_kind = kind;
         event.m_dataFormat = format;
         event.m_which = WTFMove(which);
@@ -182,13 +186,13 @@
     }
     
     static VariableEvent setLocal(
-        VirtualRegister bytecodeReg, VirtualRegister machineReg, DataFormat format)
+        Operand bytecodeOperand, VirtualRegister machineReg, DataFormat format)
     {
         VariableEvent event;
         WhichType which;
         which.virtualReg = machineReg.offset();
         VariableRepresentation representation;
-        representation.virtualReg = bytecodeReg.offset();
+        representation.operand = bytecodeOperand;
         event.m_kind = SetLocalEvent;
         event.m_dataFormat = format;
         event.m_which = WTFMove(which);
@@ -196,13 +200,13 @@
         return event;
     }
     
-    static VariableEvent movHint(MinifiedID id, VirtualRegister bytecodeReg)
+    static VariableEvent movHint(MinifiedID id, Operand bytecodeReg)
     {
         VariableEvent event;
         WhichType which;
         which.id = id.bits();
         VariableRepresentation representation;
-        representation.virtualReg = bytecodeReg.offset();
+        representation.operand = bytecodeReg;
         event.m_kind = MovHintEvent;
         event.m_which = WTFMove(which);
         event.m_representation = WTFMove(representation);
@@ -266,13 +270,13 @@
     VirtualRegister spillRegister() const
     {
         ASSERT(m_kind == BirthToSpill || m_kind == Spill);
-        return VirtualRegister(m_representation.get().virtualReg);
+        return m_representation.get().operand.virtualRegister();
     }
-    
-    VirtualRegister bytecodeRegister() const
+
+    Operand operand() const
     {
         ASSERT(m_kind == SetLocalEvent || m_kind == MovHintEvent);
-        return VirtualRegister(m_representation.get().virtualReg);
+        return m_representation.get().operand;
     }
     
     VirtualRegister machineRegister() const
diff --git a/Source/JavaScriptCore/dfg/DFGVariableEventStream.cpp b/Source/JavaScriptCore/dfg/DFGVariableEventStream.cpp
index 4b26cb0..ef70874 100644
--- a/Source/JavaScriptCore/dfg/DFGVariableEventStream.cpp
+++ b/Source/JavaScriptCore/dfg/DFGVariableEventStream.cpp
@@ -43,7 +43,7 @@
 {
     dataLogF("seq#%u:", static_cast<unsigned>(size()));
     event.dump(WTF::dataFile());
-    dataLogF(" ");
+    dataLogLn(" ");
 }
 
 namespace {
@@ -120,10 +120,12 @@
     CodeBlock* codeBlock, CodeOrigin codeOrigin, MinifiedGraph& graph,
     unsigned index, Operands<ValueRecovery>& valueRecoveries, Vector<UndefinedOperandSpan>* undefinedOperandSpans) const
 {
+    constexpr bool verbose = false;
     ASSERT(codeBlock->jitType() == JITType::DFGJIT);
     CodeBlock* baselineCodeBlock = codeBlock->baselineVersion();
 
     unsigned numVariables;
+    unsigned numTmps;
     static constexpr unsigned invalidIndex = std::numeric_limits<unsigned>::max();
     unsigned firstUndefined = invalidIndex;
     bool firstUndefinedIsArgument = false;
@@ -131,37 +133,42 @@
     auto flushUndefinedOperandSpan = [&] (unsigned i) {
         if (firstUndefined == invalidIndex)
             return;
-        int firstOffset = valueRecoveries.virtualRegisterForIndex(firstUndefined).offset();
-        int lastOffset = valueRecoveries.virtualRegisterForIndex(i - 1).offset();
+        int firstOffset = valueRecoveries.operandForIndex(firstUndefined).virtualRegister().offset();
+        int lastOffset = valueRecoveries.operandForIndex(i - 1).virtualRegister().offset();
         int minOffset = std::min(firstOffset, lastOffset);
         undefinedOperandSpans->append({ firstUndefined, minOffset, i - firstUndefined });
         firstUndefined = invalidIndex;
     };
     auto recordUndefinedOperand = [&] (unsigned i) {
         // We want to separate the span of arguments from the span of locals even if they have adjacent operands indexes.
-        if (firstUndefined != invalidIndex && firstUndefinedIsArgument != valueRecoveries.isArgument(i))
+        if (firstUndefined != invalidIndex && firstUndefinedIsArgument != valueRecoveries.operandForIndex(i).isArgument())
             flushUndefinedOperandSpan(i);
 
         if (firstUndefined == invalidIndex) {
             firstUndefined = i;
-            firstUndefinedIsArgument = valueRecoveries.isArgument(i);
+            firstUndefinedIsArgument = valueRecoveries.operandForIndex(i).isArgument();
         }
     };
 
     auto* inlineCallFrame = codeOrigin.inlineCallFrame();
-    if (inlineCallFrame)
-        numVariables = baselineCodeBlockForInlineCallFrame(inlineCallFrame)->numCalleeLocals() + VirtualRegister(inlineCallFrame->stackOffset).toLocal() + 1;
-    else
+    if (inlineCallFrame) {
+        CodeBlock* codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
+        numVariables = codeBlock->numCalleeLocals() + VirtualRegister(inlineCallFrame->stackOffset).toLocal() + 1;
+        numTmps = codeBlock->numTmps() + inlineCallFrame->tmpOffset;
+    } else {
         numVariables = baselineCodeBlock->numCalleeLocals();
+        numTmps = baselineCodeBlock->numTmps();
+    }
     
     // Crazy special case: if we're at index == 0 then this must be an argument check
     // failure, in which case all variables are already set up. The recoveries should
     // reflect this.
     if (!index) {
-        valueRecoveries = Operands<ValueRecovery>(codeBlock->numParameters(), numVariables);
+        // We don't include tmps here because they can't be used yet.
+        valueRecoveries = Operands<ValueRecovery>(codeBlock->numParameters(), numVariables, 0);
         for (size_t i = 0; i < valueRecoveries.size(); ++i) {
             valueRecoveries[i] = ValueRecovery::displacedInJSStack(
-                VirtualRegister(valueRecoveries.operandForIndex(i)), DataFormatJS);
+                valueRecoveries.operandForIndex(i).virtualRegister(), DataFormatJS);
         }
         return numVariables;
     }
@@ -172,12 +179,13 @@
         startIndex--;
     
     // Step 2: Create a mock-up of the DFG's state and execute the events.
-    Operands<ValueSource> operandSources(codeBlock->numParameters(), numVariables);
+    Operands<ValueSource> operandSources(codeBlock->numParameters(), numVariables, numTmps);
     for (unsigned i = operandSources.size(); i--;)
         operandSources[i] = ValueSource(SourceIsDead);
     HashMap<MinifiedID, MinifiedGenerationInfo> generationInfos;
     for (unsigned i = startIndex; i < index; ++i) {
         const VariableEvent& event = at(i);
+        dataLogLnIf(verbose, "Processing event ", event);
         switch (event.kind()) {
         case Reset:
             // nothing to do.
@@ -199,21 +207,23 @@
             break;
         }
         case MovHintEvent:
-            if (operandSources.hasOperand(event.bytecodeRegister()))
-                operandSources.setOperand(event.bytecodeRegister(), ValueSource(event.id()));
+            if (operandSources.hasOperand(event.operand()))
+                operandSources.setOperand(event.operand(), ValueSource(event.id()));
             break;
         case SetLocalEvent:
-            if (operandSources.hasOperand(event.bytecodeRegister()))
-                operandSources.setOperand(event.bytecodeRegister(), ValueSource::forDataFormat(event.machineRegister(), event.dataFormat()));
+            if (operandSources.hasOperand(event.operand()))
+                operandSources.setOperand(event.operand(), ValueSource::forDataFormat(event.machineRegister(), event.dataFormat()));
             break;
         default:
             RELEASE_ASSERT_NOT_REACHED();
             break;
         }
     }
+
+    dataLogLnIf(verbose, "Operand sources: ", operandSources);
     
     // Step 3: Compute value recoveries!
-    valueRecoveries = Operands<ValueRecovery>(codeBlock->numParameters(), numVariables);
+    valueRecoveries = Operands<ValueRecovery>(OperandsLike, operandSources);
     for (unsigned i = 0; i < operandSources.size(); ++i) {
         ValueSource& source = operandSources[i];
         if (source.isTriviallyRecoverable()) {
@@ -231,6 +241,7 @@
         MinifiedNode* node = graph.at(source.id());
         MinifiedGenerationInfo info = generationInfos.get(source.id());
         if (!info.alive) {
+            dataLogLnIf(verbose, "Operand ", valueRecoveries.operandForIndex(i), " is dead.");
             valueRecoveries[i] = ValueRecovery::constant(jsUndefined());
             if (style == ReconstructionStyle::Separated)
                 recordUndefinedOperand(i);
@@ -238,6 +249,7 @@
         }
 
         if (tryToSetConstantRecovery(valueRecoveries[i], node)) {
+            dataLogLnIf(verbose, "Operand ", valueRecoveries.operandForIndex(i), " is constant.");
             if (style == ReconstructionStyle::Separated) {
                 if (node->hasConstant() && node->constant() == jsUndefined())
                     recordUndefinedOperand(i);
@@ -267,7 +279,7 @@
         }
         
         valueRecoveries[i] =
-            ValueRecovery::displacedInJSStack(static_cast<VirtualRegister>(info.u.virtualReg), info.format);
+            ValueRecovery::displacedInJSStack(info.u.operand.virtualRegister(), info.format);
     }
     if (style == ReconstructionStyle::Separated)
         flushUndefinedOperandSpan(operandSources.size());
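The `flushUndefinedOperandSpan` / `recordUndefinedOperand` pair above coalesces runs of dead operands into `{firstIndex, minOffset, count}` spans instead of reporting them one by one. A minimal stand-alone sketch of that coalescing pattern (names here are illustrative, not JSC's):

```cpp
#include <climits>
#include <vector>

// Simplified sketch of the span-coalescing logic in
// VariableEventStream::reconstruct: consecutive "undefined" operand
// indexes are merged into {firstIndex, count} spans.
struct UndefinedSpan {
    unsigned firstIndex;
    unsigned count;
};

std::vector<UndefinedSpan> coalesceUndefined(const std::vector<bool>& isUndefined)
{
    constexpr unsigned invalidIndex = UINT_MAX;
    unsigned firstUndefined = invalidIndex;
    std::vector<UndefinedSpan> spans;

    auto flush = [&](unsigned i) {
        if (firstUndefined == invalidIndex)
            return;
        spans.push_back({ firstUndefined, i - firstUndefined });
        firstUndefined = invalidIndex;
    };

    for (unsigned i = 0; i < isUndefined.size(); ++i) {
        if (isUndefined[i]) {
            if (firstUndefined == invalidIndex)
                firstUndefined = i; // start a new span
        } else
            flush(i); // a live operand ends the current span
    }
    flush(static_cast<unsigned>(isUndefined.size())); // span running to the end
    return spans;
}
```

The real code additionally splits a span when it crosses the argument/local boundary, which is what the `firstUndefinedIsArgument` check above handles.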
diff --git a/Source/JavaScriptCore/dfg/DFGVariableEventStream.h b/Source/JavaScriptCore/dfg/DFGVariableEventStream.h
index 0157300..e9e43ab 100644
--- a/Source/JavaScriptCore/dfg/DFGVariableEventStream.h
+++ b/Source/JavaScriptCore/dfg/DFGVariableEventStream.h
@@ -42,9 +42,12 @@
 };
 
 class VariableEventStream : public Vector<VariableEvent> {
+    static constexpr bool verbose = false;
 public:
     void appendAndLog(const VariableEvent& event)
     {
+        if (verbose)
+            logEvent(event);
         append(event);
     }
     
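The `static constexpr bool verbose = false` guard added to `VariableEventStream` is a zero-cost logging idiom: because the flag is a compile-time constant, the guarded call is dead code when it is false. A toy equivalent of `dataLogLnIf` under that assumption (iostream stands in for WTF's data log):

```cpp
#include <iostream>

// Compile-time-guarded logging: when `verbose` is false, the branch is
// folded away and the log call compiles to nothing.
static constexpr bool verbose = false;

template<typename... Args>
void logIf(bool condition, Args&&... args)
{
    if (!condition)
        return;
    (std::cout << ... << args) << '\n';
}

int appendAndLog(int event)
{
    logIf(verbose, "Processing event ", event); // no-op unless verbose
    return event; // stand-in for the Vector::append in the real class
}
```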
diff --git a/Source/JavaScriptCore/ftl/FTLCapabilities.cpp b/Source/JavaScriptCore/ftl/FTLCapabilities.cpp
index 4e7a3c1..b953cfe 100644
--- a/Source/JavaScriptCore/ftl/FTLCapabilities.cpp
+++ b/Source/JavaScriptCore/ftl/FTLCapabilities.cpp
@@ -193,6 +193,7 @@
     case TailCallForwardVarargs:
     case TailCallForwardVarargsInlinedCaller:
     case ConstructForwardVarargs:
+    case VarargsLength:
     case LoadVarargs:
     case ValueToInt32:
     case Branch:
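The new `VarargsLength` node exists because an arguments object's `length` can be an effectful getter: it must be observed exactly once, and splitting the read into its own node (a checkpoint) lets a deoptimization resume after the read instead of re-running it. A hedged C++ sketch of the compute-once/reuse shape (all names illustrative):

```cpp
#include <functional>

// Sketch of the VarargsLength / LoadVarargs split: the effectful length
// read happens once, its result is cached, and the later "load" step
// only consults the cache.
struct VarargsPlan {
    int lengthIncludingThis; // cached result of the single length read
};

VarargsPlan varargsLength(const std::function<int()>& effectfulLength, int offset)
{
    int length = effectfulLength() - offset; // the one and only read
    if (length < 0)
        length = 0;
    return { length + 1 };
}

int loadVarargs(const VarargsPlan& plan)
{
    // Uses the cached count; never invokes the getter again.
    return plan.lengthIncludingThis;
}
```

The `apply-osr-exit-should-get-length-once*.js` tests in the head exercise exactly this: a `length` getter whose call count is asserted across OSR exits.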
diff --git a/Source/JavaScriptCore/ftl/FTLForOSREntryJITCode.cpp b/Source/JavaScriptCore/ftl/FTLForOSREntryJITCode.cpp
index 3080dc3..c004fac 100644
--- a/Source/JavaScriptCore/ftl/FTLForOSREntryJITCode.cpp
+++ b/Source/JavaScriptCore/ftl/FTLForOSREntryJITCode.cpp
@@ -31,8 +31,7 @@
 namespace JSC { namespace FTL {
 
 ForOSREntryJITCode::ForOSREntryJITCode()
-    : m_bytecodeIndex(UINT_MAX)
-    , m_entryFailureCount(0)
+    : m_entryFailureCount(0)
 {
 }
 
diff --git a/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp b/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
index 4f2cc82..58390d3 100644
--- a/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
+++ b/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
@@ -209,7 +209,7 @@
                         jit.clearStackFrame(GPRInfo::callFrameRegister, CCallHelpers::stackPointerRegister, GPRInfo::regT0, code.frameSize());
 
                     jit.emitSave(code.calleeSaveRegisterAtOffsetList());
-                    jit.emitPutToCallFrameHeader(codeBlock, CallFrameSlot::codeBlock);
+                    jit.emitPutToCallFrameHeader(codeBlock, VirtualRegister(CallFrameSlot::codeBlock));
                 });
 
             for (unsigned catchEntrypointIndex : m_graph.m_entrypointIndexToCatchBytecodeIndex.keys()) {
@@ -275,7 +275,7 @@
         
         // We don't want the CodeBlock to have a weak pointer to itself because
         // that would cause it to always get collected.
-        m_out.storePtr(m_out.constIntPtr(bitwise_cast<intptr_t>(codeBlock())), addressFor(CallFrameSlot::codeBlock));
+        m_out.storePtr(m_out.constIntPtr(bitwise_cast<intptr_t>(codeBlock())), addressFor(VirtualRegister(CallFrameSlot::codeBlock)));
 
         // Stack Overflow Check.
         unsigned exitFrameSize = m_graph.requiredRegisterCountForExit() * sizeof(Register);
@@ -356,15 +356,15 @@
 
             // Check Arguments.
             availabilityMap().clear();
-            availabilityMap().m_locals = Operands<Availability>(codeBlock()->numParameters(), 0);
+            availabilityMap().m_locals = Operands<Availability>(codeBlock()->numParameters(), 0, 0);
             for (unsigned i = codeBlock()->numParameters(); i--;) {
                 availabilityMap().m_locals.argument(i) =
-                    Availability(FlushedAt(FlushedJSValue, virtualRegisterForArgument(i)));
+                    Availability(FlushedAt(FlushedJSValue, virtualRegisterForArgumentIncludingThis(i)));
             }
 
             for (unsigned i = codeBlock()->numParameters(); i--;) {
                 MethodOfGettingAValueProfile profile(&m_graph.m_profiledBlock->valueProfileForArgument(i));
-                VirtualRegister operand = virtualRegisterForArgument(i);
+                VirtualRegister operand = virtualRegisterForArgumentIncludingThis(i);
                 LValue jsValue = m_out.load64(addressFor(operand));
                 
                 switch (m_graph.m_argumentFormats[0][i]) {
@@ -1291,6 +1291,9 @@
         case CallEval:
             compileCallEval();
             break;
+        case VarargsLength:
+            compileVarargsLength();
+            break;
         case LoadVarargs:
             compileLoadVarargs();
             break;
@@ -1967,7 +1970,7 @@
     {
         EncodedJSValue* buffer = static_cast<EncodedJSValue*>(
             m_ftlState.jitCode->ftlForOSREntry()->entryBuffer()->dataBuffer());
-        setJSValue(m_out.load64(m_out.absolute(buffer + m_node->unlinkedLocal().toLocal())));
+        setJSValue(m_out.load64(m_out.absolute(buffer + m_node->unlinkedOperand().virtualRegister().toLocal())));
     }
 
     void compileExtractCatchLocal()
@@ -1986,7 +1989,7 @@
     void compileGetStack()
     {
         StackAccessData* data = m_node->stackAccessData();
-        AbstractValue& value = m_state.operand(data->local);
+        AbstractValue& value = m_state.operand(data->operand);
         
         DFG_ASSERT(m_graph, m_node, isConcrete(data->format), data->format);
         
@@ -4748,7 +4751,7 @@
             if (inlineCallFrame->argumentCountIncludingThis > 1)
                 base = addressFor(inlineCallFrame->argumentsWithFixup[0].virtualRegister());
         } else
-            base = addressFor(virtualRegisterForArgument(0));
+            base = addressFor(virtualRegisterForArgumentIncludingThis(0));
         
         LValue result;
         if (base) {
@@ -7981,13 +7984,13 @@
     
     void compileGetCallee()
     {
-        setJSValue(m_out.loadPtr(addressFor(CallFrameSlot::callee)));
+        setJSValue(m_out.loadPtr(addressFor(VirtualRegister(CallFrameSlot::callee))));
     }
 
     void compileSetCallee()
     {
         auto callee = lowCell(m_node->child1());
-        m_out.storePtr(callee, payloadFor(CallFrameSlot::callee));
+        m_out.storePtr(callee, payloadFor(VirtualRegister(CallFrameSlot::callee)));
     }
     
     void compileGetArgumentCountIncludingThis()
@@ -8002,7 +8005,7 @@
 
     void compileSetArgumentCountIncludingThis()
     {
-        m_out.store32(m_out.constInt32(m_node->argumentCountIncludingThis()), payloadFor(CallFrameSlot::argumentCountIncludingThis));
+        m_out.store32(m_out.constInt32(m_node->argumentCountIncludingThis()), payloadFor(VirtualRegister(CallFrameSlot::argumentCountIncludingThis)));
     }
     
     void compileGetScope()
@@ -8526,7 +8529,7 @@
         addArgument(jsCallee, VirtualRegister(CallFrameSlot::callee), 0);
         addArgument(m_out.constInt32(numArgs), VirtualRegister(CallFrameSlot::argumentCountIncludingThis), PayloadOffset);
         for (unsigned i = 0; i < numArgs; ++i)
-            addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0);
+            addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgumentIncludingThis(i), 0);
 
         PatchpointValue* patchpoint = m_out.patchpoint(Int64);
         patchpoint->appendVector(arguments);
@@ -8636,9 +8639,9 @@
             addArgument(jsCallee, VirtualRegister(CallFrameSlot::callee), 0);
             addArgument(m_out.constInt32(numPassedArgs), VirtualRegister(CallFrameSlot::argumentCountIncludingThis), PayloadOffset);
             for (unsigned i = 0; i < numPassedArgs; ++i)
-                addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0);
+                addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgumentIncludingThis(i), 0);
             for (unsigned i = numPassedArgs; i < numAllocatedArgs; ++i)
-                addArgument(m_out.constInt64(JSValue::encode(jsUndefined())), virtualRegisterForArgument(i), 0);
+                addArgument(m_out.constInt64(JSValue::encode(jsUndefined())), virtualRegisterForArgumentIncludingThis(i), 0);
         } else {
             for (unsigned i = 0; i < numPassedArgs; ++i)
                 arguments.append(ConstrainedValue(lowJSValue(m_graph.varArgChild(node, 1 + i)), ValueRep::WarmAny));
@@ -9534,7 +9537,7 @@
         addArgument(jsCallee, VirtualRegister(CallFrameSlot::callee), 0);
         addArgument(m_out.constInt32(numArgs), VirtualRegister(CallFrameSlot::argumentCountIncludingThis), PayloadOffset);
         for (unsigned i = 0; i < numArgs; ++i)
-            addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0);
+            addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgumentIncludingThis(i), 0);
         
         PatchpointValue* patchpoint = m_out.patchpoint(Int64);
         patchpoint->appendVector(arguments);
@@ -9597,51 +9600,51 @@
         setJSValue(patchpoint);
     }
     
+    void compileVarargsLength()
+    {
+        JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node->origin.semantic);
+        LoadVarargsData* data = m_node->loadVarargsData();
+        LValue jsArguments = lowJSValue(m_node->argumentsChild());
+
+        LValue length = vmCall(
+            Int32, operationSizeOfVarargs, weakPointer(globalObject), jsArguments,
+            m_out.constInt32(data->offset));
+
+        LValue lengthIncludingThis = m_out.add(length, m_out.int32One);
+
+        setInt32(lengthIncludingThis);
+    }
+
     void compileLoadVarargs()
     {
         JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node->origin.semantic);
         LoadVarargsData* data = m_node->loadVarargsData();
-        LValue jsArguments = lowJSValue(m_node->child1());
+        LValue jsArguments = lowJSValue(m_node->argumentsChild());
+        LValue lengthIncludingThis = lowInt32(m_node->child1());
         
-        LValue length = vmCall(
-            Int32, operationSizeOfVarargs, weakPointer(globalObject), jsArguments,
-            m_out.constInt32(data->offset));
-        
-        // FIXME: There is a chance that we will call an effectful length property twice. This is safe
-        // from the standpoint of the VM's integrity, but it's subtly wrong from a spec compliance
-        // standpoint. The best solution would be one where we can exit *into* the op_call_varargs right
-        // past the sizing.
-        // https://bugs.webkit.org/show_bug.cgi?id=141448
-        
-        LValue lengthIncludingThis = m_out.add(length, m_out.int32One);
-
         speculate(
             VarargsOverflow, noValue(), nullptr,
-            m_out.above(length, lengthIncludingThis));
+            m_out.bitOr(m_out.isZero32(lengthIncludingThis), m_out.above(lengthIncludingThis, m_out.constInt32(data->limit))));
 
-        speculate(
-            VarargsOverflow, noValue(), nullptr,
-            m_out.above(lengthIncludingThis, m_out.constInt32(data->limit)));
-        
         m_out.store32(lengthIncludingThis, payloadFor(data->machineCount));
         
-        // FIXME: This computation is rather silly. If operationLaodVarargs just took a pointer instead
+        // FIXME: This computation is rather silly. If operationLoadVarargs just took a pointer instead
         // of a VirtualRegister, we wouldn't have to do this.
         // https://bugs.webkit.org/show_bug.cgi?id=141660
         LValue machineStart = m_out.lShr(
-            m_out.sub(addressFor(data->machineStart.offset()).value(), m_callFrame),
+            m_out.sub(addressFor(data->machineStart).value(), m_callFrame),
             m_out.constIntPtr(3));
         
         vmCall(
             Void, operationLoadVarargs, weakPointer(globalObject),
             m_out.castToInt32(machineStart), jsArguments, m_out.constInt32(data->offset),
-            length, m_out.constInt32(data->mandatoryMinimum));
+            lengthIncludingThis, m_out.constInt32(data->mandatoryMinimum));
     }
     
     void compileForwardVarargs()
     {
-        if (m_node->child1()) {
-            Node* arguments = m_node->child1().node();
+        if (m_node->argumentsChild()) {
+            Node* arguments = m_node->argumentsChild().node();
             if (arguments->op() == PhantomNewArrayWithSpread || arguments->op() == PhantomNewArrayBuffer || arguments->op() == PhantomSpread) {
                 compileForwardVarargsWithSpread();
                 return;
@@ -9650,50 +9653,21 @@
 
         LoadVarargsData* data = m_node->loadVarargsData();
         InlineCallFrame* inlineCallFrame;
-        if (m_node->child1())
-            inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame();
+        if (m_node->argumentsChild())
+            inlineCallFrame = m_node->argumentsChild()->origin.semantic.inlineCallFrame();
         else
             inlineCallFrame = m_node->origin.semantic.inlineCallFrame();
 
-        LValue length = nullptr; 
-        LValue lengthIncludingThis = nullptr;
-        ArgumentsLength argumentsLength = getArgumentsLength(inlineCallFrame);
-        if (argumentsLength.isKnown) {
-            unsigned knownLength = argumentsLength.known;
-            if (knownLength >= data->offset)
-                knownLength = knownLength - data->offset;
-            else
-                knownLength = 0;
-            length = m_out.constInt32(knownLength);
-            lengthIncludingThis = m_out.constInt32(knownLength + 1);
-        } else {
-            // We need to perform the same logical operation as the code above, but through dynamic operations.
-            if (!data->offset)
-                length = argumentsLength.value;
-            else {
-                LBasicBlock isLarger = m_out.newBlock();
-                LBasicBlock continuation = m_out.newBlock();
+        unsigned numberOfArgumentsToSkip = data->offset;
+        LValue lengthIncludingThis = lowInt32(m_node->child1());
 
-                ValueFromBlock smallerOrEqualLengthResult = m_out.anchor(m_out.constInt32(0));
-                m_out.branch(
-                    m_out.above(argumentsLength.value, m_out.constInt32(data->offset)), unsure(isLarger), unsure(continuation));
-                LBasicBlock lastNext = m_out.appendTo(isLarger, continuation);
-                ValueFromBlock largerLengthResult = m_out.anchor(m_out.sub(argumentsLength.value, m_out.constInt32(data->offset)));
-                m_out.jump(continuation);
-
-                m_out.appendTo(continuation, lastNext);
-                length = m_out.phi(Int32, smallerOrEqualLengthResult, largerLengthResult);
-            }
-            lengthIncludingThis = m_out.add(length, m_out.constInt32(1));
-        }
-
+        LValue length = m_out.sub(lengthIncludingThis, m_out.int32One);
         speculate(
             VarargsOverflow, noValue(), nullptr,
             m_out.above(lengthIncludingThis, m_out.constInt32(data->limit)));
         
         m_out.store32(lengthIncludingThis, payloadFor(data->machineCount));
         
-        unsigned numberOfArgumentsToSkip = data->offset;
         LValue sourceStart = getArgumentsStart(inlineCallFrame, numberOfArgumentsToSkip);
         LValue targetStart = addressFor(data->machineStart).value();
 
@@ -9752,65 +9726,20 @@
         // We need to perform the same logical operation as the code above, but through dynamic operations.
         if (!numberOfArgumentsToSkip)
             return argumentsLength.value;
+        
+        RELEASE_ASSERT(numberOfArgumentsToSkip < static_cast<unsigned>(INT32_MIN));
 
-        LBasicBlock isLarger = m_out.newBlock();
-        LBasicBlock continuation = m_out.newBlock();
+        LValue fixedLength = m_out.sub(argumentsLength.value, m_out.constInt32(numberOfArgumentsToSkip));
 
-        ValueFromBlock smallerOrEqualLengthResult = m_out.anchor(m_out.constInt32(0));
-        m_out.branch(
-            m_out.above(argumentsLength.value, m_out.constInt32(numberOfArgumentsToSkip)), unsure(isLarger), unsure(continuation));
-        LBasicBlock lastNext = m_out.appendTo(isLarger, continuation);
-        ValueFromBlock largerLengthResult = m_out.anchor(m_out.sub(argumentsLength.value, m_out.constInt32(numberOfArgumentsToSkip)));
-        m_out.jump(continuation);
-
-        m_out.appendTo(continuation, lastNext);
-        return m_out.phi(Int32, smallerOrEqualLengthResult, largerLengthResult);
+        return m_out.select(m_out.greaterThanOrEqual(fixedLength, m_out.int32Zero), fixedLength, m_out.int32Zero, SelectPredictability::Predictable);
     }
 
     void compileForwardVarargsWithSpread()
     {
-        HashMap<InlineCallFrame*, LValue, WTF::DefaultHash<InlineCallFrame*>::Hash, WTF::NullableHashTraits<InlineCallFrame*>> cachedSpreadLengths;
-
-        Node* arguments = m_node->child1().node();
+        Node* arguments = m_node->argumentsChild().node();
         RELEASE_ASSERT(arguments->op() == PhantomNewArrayWithSpread || arguments->op() == PhantomNewArrayBuffer || arguments->op() == PhantomSpread);
 
-        unsigned numberOfStaticArguments = 0;
-        Vector<LValue, 2> spreadLengths;
-
-        auto collectArgumentCount = recursableLambda([&](auto self, Node* target) -> void {
-            if (target->op() == PhantomSpread) {
-                self(target->child1().node());
-                return;
-            }
-
-            if (target->op() == PhantomNewArrayWithSpread) {
-                BitVector* bitVector = target->bitVector();
-                for (unsigned i = 0; i < target->numChildren(); i++) {
-                    if (bitVector->get(i))
-                        self(m_graph.varArgChild(target, i).node());
-                    else
-                        ++numberOfStaticArguments;
-                }
-                return;
-            }
-
-            if (target->op() == PhantomNewArrayBuffer) {
-                numberOfStaticArguments += target->castOperand<JSImmutableButterfly*>()->length();
-                return;
-            }
-
-            ASSERT(target->op() == PhantomCreateRest);
-            InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame();
-            unsigned numberOfArgumentsToSkip = target->numberOfArgumentsToSkip();
-            spreadLengths.append(cachedSpreadLengths.ensure(inlineCallFrame, [&] () {
-                return this->getSpreadLengthFromInlineCallFrame(inlineCallFrame, numberOfArgumentsToSkip);
-            }).iterator->value);
-        });
-
-        collectArgumentCount(arguments);
-        LValue lengthIncludingThis = m_out.constInt32(1 + numberOfStaticArguments);
-        for (LValue length : spreadLengths)
-            lengthIncludingThis = m_out.add(lengthIncludingThis, length);
+        LValue lengthIncludingThis = lowInt32(m_node->child1());
 
         LoadVarargsData* data = m_node->loadVarargsData();
         speculate(
@@ -9821,7 +9750,7 @@
 
         LValue targetStart = addressFor(data->machineStart).value();
 
-        auto forwardSpread = recursableLambda([this, &cachedSpreadLengths, &targetStart](auto self, Node* target, LValue storeIndex) -> LValue {
+        auto forwardSpread = recursableLambda([this, &targetStart](auto self, Node* target, LValue storeIndex) -> LValue {
             if (target->op() == PhantomSpread)
                 return self(target->child1().node(), storeIndex);
 
@@ -9853,8 +9782,9 @@
             RELEASE_ASSERT(target->op() == PhantomCreateRest);
             InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame();
 
-            LValue sourceStart = this->getArgumentsStart(inlineCallFrame, target->numberOfArgumentsToSkip());
-            LValue spreadLength = m_out.zeroExtPtr(cachedSpreadLengths.get(inlineCallFrame));
+            auto numberOfArgumentsToSkip = target->numberOfArgumentsToSkip();
+            LValue sourceStart = this->getArgumentsStart(inlineCallFrame, numberOfArgumentsToSkip);
+            LValue spreadLength = m_out.zeroExtPtr(getSpreadLengthFromInlineCallFrame(inlineCallFrame, numberOfArgumentsToSkip));
 
             LBasicBlock loop = m_out.newBlock();
             LBasicBlock continuation = m_out.newBlock();
@@ -12366,8 +12296,8 @@
         LValue scope = lowCell(m_node->child1());
 
         m_out.storePtr(m_callFrame, packet, m_heaps.ShadowChicken_Packet_frame);
-        m_out.storePtr(m_out.loadPtr(addressFor(0)), packet, m_heaps.ShadowChicken_Packet_callerFrame);
-        m_out.storePtr(m_out.loadPtr(payloadFor(CallFrameSlot::callee)), packet, m_heaps.ShadowChicken_Packet_callee);
+        m_out.storePtr(m_out.loadPtr(addressFor(VirtualRegister(0))), packet, m_heaps.ShadowChicken_Packet_callerFrame);
+        m_out.storePtr(m_out.loadPtr(payloadFor(VirtualRegister(CallFrameSlot::callee))), packet, m_heaps.ShadowChicken_Packet_callee);
         m_out.storePtr(scope, packet, m_heaps.ShadowChicken_Packet_scope);
     }
     
@@ -12428,7 +12358,7 @@
         ArgumentsLength length;
 
         if (inlineCallFrame && !inlineCallFrame->isVarargs()) {
-            length.known = inlineCallFrame->argumentCountIncludingThis - 1;
+            length.known = static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1);
             length.isKnown = true;
             length.value = m_out.constInt32(length.known);
         } else {
@@ -12458,7 +12388,7 @@
                 return m_out.loadPtr(addressFor(frame->calleeRecovery.virtualRegister()));
             return weakPointer(frame->calleeRecovery.constant().asCell());
         }
-        return m_out.loadPtr(addressFor(CallFrameSlot::callee));
+        return m_out.loadPtr(addressFor(VirtualRegister(CallFrameSlot::callee)));
     }
     
     LValue getArgumentsStart(InlineCallFrame* inlineCallFrame, unsigned offset = 0)
@@ -17723,7 +17653,7 @@
         CallSiteIndex callSiteIndex = m_ftlState.jitCode->common.addCodeOrigin(codeOrigin);
         m_out.store32(
             m_out.constInt32(callSiteIndex.bits()),
-            tagFor(CallFrameSlot::argumentCountIncludingThis));
+            tagFor(VirtualRegister(CallFrameSlot::argumentCountIncludingThis)));
 #if !USE(BUILTIN_FRAME_ADDRESS) || ASSERT_ENABLED
         m_out.storePtr(m_callFrame, m_out.absolute(&vm().topCallFrame));
 #endif
@@ -17831,7 +17761,8 @@
         return &m_ftlState.jitCode->osrExitDescriptors.alloc(
             lowValue.format(), profile,
             availabilityMap().m_locals.numberOfArguments(),
-            availabilityMap().m_locals.numberOfLocals());
+            availabilityMap().m_locals.numberOfLocals(),
+            availabilityMap().m_locals.numberOfTmps());
     }
 
     void appendOSRExit(
@@ -17936,15 +17867,15 @@
             });
         
         for (unsigned i = 0; i < exitDescriptor->m_values.size(); ++i) {
-            int operand = exitDescriptor->m_values.operandForIndex(i);
+            Operand operand = exitDescriptor->m_values.operandForIndex(i);
             
             Availability availability = availabilityMap.m_locals[i];
             
             if (Options::validateFTLOSRExitLiveness()
                 && m_graph.m_plan.mode() != FTLForOSREntryMode) {
 
-                if (availability.isDead() && m_graph.isLiveInBytecode(VirtualRegister(operand), exitOrigin))
-                    DFG_CRASH(m_graph, m_node, toCString("Live bytecode local not available: operand = ", VirtualRegister(operand), ", availability = ", availability, ", origin = ", exitOrigin).data());
+                if (availability.isDead() && m_graph.isLiveInBytecode(operand, exitOrigin))
+                    DFG_CRASH(m_graph, m_node, toCString("Live bytecode local not available: operand = ", operand, ", availability = ", availability, ", origin = ", exitOrigin).data());
             }
             ExitValue exitValue = exitValueForAvailability(arguments, map, availability);
             if (exitValue.hasIndexInStackmapLocations())
@@ -18244,39 +18175,39 @@
         return m_out.alreadyRegisteredWeakPointer(m_graph, structure.get());
     }
     
-    TypedPointer addressFor(LValue base, int operand, ptrdiff_t offset = 0)
+    TypedPointer addressFor(LValue base, Operand operand, ptrdiff_t offset = 0)
     {
-        return m_out.address(base, m_heaps.variables[operand], offset);
+        return m_out.address(base, m_heaps.variables[operand.virtualRegister().offset()], offset);
     }
-    TypedPointer payloadFor(LValue base, int operand)
+    TypedPointer payloadFor(LValue base, Operand operand)
     {
         return addressFor(base, operand, PayloadOffset);
     }
-    TypedPointer tagFor(LValue base, int operand)
+    TypedPointer tagFor(LValue base, Operand operand)
     {
         return addressFor(base, operand, TagOffset);
     }
-    TypedPointer addressFor(int operand, ptrdiff_t offset = 0)
+    TypedPointer addressFor(Operand operand, ptrdiff_t offset = 0)
     {
-        return addressFor(VirtualRegister(operand), offset);
+        return addressFor(operand.virtualRegister(), offset);
     }
     TypedPointer addressFor(VirtualRegister operand, ptrdiff_t offset = 0)
     {
         if (operand.isLocal())
-            return addressFor(m_captured, operand.offset(), offset);
-        return addressFor(m_callFrame, operand.offset(), offset);
+            return addressFor(m_captured, operand, offset);
+        return addressFor(m_callFrame, operand, offset);
     }
-    TypedPointer payloadFor(int operand)
+    TypedPointer payloadFor(Operand operand)
     {
-        return payloadFor(VirtualRegister(operand));
+        return payloadFor(operand.virtualRegister());
     }
     TypedPointer payloadFor(VirtualRegister operand)
     {
         return addressFor(operand, PayloadOffset);
     }
-    TypedPointer tagFor(int operand)
+    TypedPointer tagFor(Operand operand)
     {
-        return tagFor(VirtualRegister(operand));
+        return tagFor(operand.virtualRegister());
     }
     TypedPointer tagFor(VirtualRegister operand)
     {
diff --git a/Source/JavaScriptCore/ftl/FTLOSREntry.cpp b/Source/JavaScriptCore/ftl/FTLOSREntry.cpp
index afbf6ed..05caed9 100644
--- a/Source/JavaScriptCore/ftl/FTLOSREntry.cpp
+++ b/Source/JavaScriptCore/ftl/FTLOSREntry.cpp
@@ -71,7 +71,7 @@
     dataLogLnIf(Options::verboseOSR(), "    Values at entry: ", values);
     
     for (int argument = values.numberOfArguments(); argument--;) {
-        JSValue valueOnStack = callFrame->r(virtualRegisterForArgument(argument).offset()).asanUnsafeJSValue();
+        JSValue valueOnStack = callFrame->r(virtualRegisterForArgumentIncludingThis(argument)).asanUnsafeJSValue();
         Optional<JSValue> reconstructedValue = values.argument(argument);
         if ((reconstructedValue && valueOnStack == reconstructedValue.value()) || !argument)
             continue;
diff --git a/Source/JavaScriptCore/ftl/FTLOSRExit.cpp b/Source/JavaScriptCore/ftl/FTLOSRExit.cpp
index 09958d2..04c8579 100644
--- a/Source/JavaScriptCore/ftl/FTLOSRExit.cpp
+++ b/Source/JavaScriptCore/ftl/FTLOSRExit.cpp
@@ -47,10 +47,10 @@
 
 OSRExitDescriptor::OSRExitDescriptor(
     DataFormat profileDataFormat, MethodOfGettingAValueProfile valueProfile,
-    unsigned numberOfArguments, unsigned numberOfLocals)
+    unsigned numberOfArguments, unsigned numberOfLocals, unsigned numberOfTmps)
     : m_profileDataFormat(profileDataFormat)
     , m_valueProfile(valueProfile)
-    , m_values(numberOfArguments, numberOfLocals)
+    , m_values(numberOfArguments, numberOfLocals, numberOfTmps)
 {
 }
 
@@ -117,5 +117,3 @@
 } } // namespace JSC::FTL
 
 #endif // ENABLE(FTL_JIT)
-
-
diff --git a/Source/JavaScriptCore/ftl/FTLOSRExit.h b/Source/JavaScriptCore/ftl/FTLOSRExit.h
index da87ccf..fd7f9f0 100644
--- a/Source/JavaScriptCore/ftl/FTLOSRExit.h
+++ b/Source/JavaScriptCore/ftl/FTLOSRExit.h
@@ -69,7 +69,7 @@
 struct OSRExitDescriptor {
     OSRExitDescriptor(
         DataFormat profileDataFormat, MethodOfGettingAValueProfile,
-        unsigned numberOfArguments, unsigned numberOfLocals);
+        unsigned numberOfArguments, unsigned numberOfLocals, unsigned numberOfTmps);
 
     // The first argument to the exit call may be a value we wish to profile.
     // If that's the case, the format will be not Invalid and we'll have a
diff --git a/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp b/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp
index 664d3b9..21715d3 100644
--- a/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp
+++ b/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp
@@ -29,6 +29,7 @@
 #if ENABLE(FTL_JIT)
 
 #include "BytecodeStructs.h"
+#include "CheckpointOSRExitSideState.h"
 #include "DFGOSRExitCompilerCommon.h"
 #include "FTLExitArgumentForOperand.h"
 #include "FTLJITCode.h"
@@ -37,10 +38,11 @@
 #include "FTLOperations.h"
 #include "FTLState.h"
 #include "FTLSaveRestore.h"
+#include "JSCInlines.h"
 #include "LinkBuffer.h"
 #include "MaxFrameExtentForSlowPathCall.h"
 #include "OperandsInlines.h"
-#include "JSCInlines.h"
+#include "ProbeContext.h"
 
 namespace JSC { namespace FTL {
 
@@ -474,16 +476,52 @@
 
     size_t baselineVirtualRegistersForCalleeSaves = baselineCodeBlock->calleeSaveSpaceAsVirtualRegisters();
 
+    if (exit.m_codeOrigin.inlineStackContainsActiveCheckpoint()) {
+        JSValue* tmpScratch = reinterpret_cast<JSValue*>(scratch + exit.m_descriptor->m_values.tmpIndex(0));
+        VM* vmPtr = &vm;
+        jit.probe([=] (Probe::Context& context) {
+            auto addSideState = [&] (CallFrame* frame, BytecodeIndex index, size_t tmpOffset) {
+                std::unique_ptr<CheckpointOSRExitSideState> sideState = WTF::makeUnique<CheckpointOSRExitSideState>();
+
+                sideState->bytecodeIndex = index;
+                for (size_t i = 0; i < maxNumCheckpointTmps; ++i)
+                    sideState->tmps[i] = tmpScratch[i + tmpOffset];
+
+                vmPtr->addCheckpointOSRSideState(frame, WTFMove(sideState));
+            };
+
+            const CodeOrigin* codeOrigin;
+            CallFrame* callFrame = context.gpr<CallFrame*>(GPRInfo::callFrameRegister);
+            for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame(); codeOrigin = codeOrigin->inlineCallFrame()->getCallerSkippingTailCalls()) {
+                BytecodeIndex callBytecodeIndex = codeOrigin->bytecodeIndex();
+                if (!callBytecodeIndex.checkpoint())
+                    continue;
+
+                auto* inlineCallFrame = codeOrigin->inlineCallFrame();
+                addSideState(reinterpret_cast<CallFrame*>(reinterpret_cast<char*>(callFrame) + inlineCallFrame->returnPCOffset() - sizeof(CPURegister)), callBytecodeIndex, inlineCallFrame->tmpOffset);
+            }
+
+            if (!codeOrigin)
+                return;
+
+            if (BytecodeIndex bytecodeIndex = codeOrigin->bytecodeIndex(); bytecodeIndex.checkpoint())
+                addSideState(callFrame, bytecodeIndex, 0);
+        });
+    }
+
     // Now get state out of the scratch buffer and place it back into the stack. The values are
     // already reboxed so we just move them.
     for (unsigned index = exit.m_descriptor->m_values.size(); index--;) {
-        VirtualRegister reg = exit.m_descriptor->m_values.virtualRegisterForIndex(index);
+        Operand operand = exit.m_descriptor->m_values.operandForIndex(index);
 
-        if (reg.isLocal() && reg.toLocal() < static_cast<int>(baselineVirtualRegistersForCalleeSaves))
+        if (operand.isTmp())
+            continue;
+
+        if (operand.isLocal() && operand.toLocal() < static_cast<int>(baselineVirtualRegistersForCalleeSaves))
             continue;
 
         jit.load64(scratch + index, GPRInfo::regT0);
-        jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(reg));
+        jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand.virtualRegister()));
     }
     
     handleExitCounts(vm, jit, exit);
diff --git a/Source/JavaScriptCore/ftl/FTLOperations.cpp b/Source/JavaScriptCore/ftl/FTLOperations.cpp
index 447c00a..0069a3a 100644
--- a/Source/JavaScriptCore/ftl/FTLOperations.cpp
+++ b/Source/JavaScriptCore/ftl/FTLOperations.cpp
@@ -541,7 +541,7 @@
             // We also cannot allocate a new butterfly from compilation threads since it's invalid to allocate cells from
             // a compilation thread.
             WTF::storeStoreFence();
-            codeBlock->constantRegister(newArrayBuffer.m_immutableButterfly.offset()).set(vm, codeBlock, immutableButterfly);
+            codeBlock->constantRegister(newArrayBuffer.m_immutableButterfly).set(vm, codeBlock, immutableButterfly);
             WTF::storeStoreFence();
         }
 
diff --git a/Source/JavaScriptCore/ftl/FTLOutput.cpp b/Source/JavaScriptCore/ftl/FTLOutput.cpp
index 74933dc..d57fb1e 100644
--- a/Source/JavaScriptCore/ftl/FTLOutput.cpp
+++ b/Source/JavaScriptCore/ftl/FTLOutput.cpp
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2019 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -655,15 +655,37 @@
     return m_block->appendNew<B3::Value>(m_proc, B3::NotEqual, origin(), value, int64Zero);
 }
 
-LValue Output::select(LValue value, LValue taken, LValue notTaken)
+LValue Output::select(LValue value, LValue left, LValue right, SelectPredictability predictability)
 {
     if (value->hasInt32()) {
         if (value->asInt32())
-            return taken;
+            return left;
         else
-            return notTaken;
+            return right;
     }
-    return m_block->appendNew<B3::Value>(m_proc, B3::Select, origin(), value, taken, notTaken);
+
+    if (predictability == SelectPredictability::NotPredictable)
+        return m_block->appendNew<B3::Value>(m_proc, B3::Select, origin(), value, left, right);
+
+    LBasicBlock continuation = newBlock();
+    LBasicBlock leftTakenBlock = newBlock();
+    LBasicBlock rightTakenBlock = newBlock();
+
+    m_block->appendNewControlValue(
+        m_proc, B3::Branch, origin(), value,
+        FrequentedBlock(leftTakenBlock, predictability != SelectPredictability::RightLikely ? FrequencyClass::Normal : FrequencyClass::Rare),
+        FrequentedBlock(rightTakenBlock, predictability != SelectPredictability::LeftLikely ? FrequencyClass::Normal : FrequencyClass::Rare));
+
+    LValue phi = continuation->appendNew<B3::Value>(m_proc, B3::Phi, left->type(), origin());
+
+    leftTakenBlock->appendNew<B3::UpsilonValue>(m_proc, origin(), left, phi);
+    leftTakenBlock->appendNewControlValue(m_proc, B3::Jump, origin(), B3::FrequentedBlock(continuation));
+
+    rightTakenBlock->appendNew<B3::UpsilonValue>(m_proc, origin(), right, phi);
+    rightTakenBlock->appendNewControlValue(m_proc, B3::Jump, origin(), B3::FrequentedBlock(continuation));
+
+    m_block = continuation;
+    return phi;
 }
 
 LValue Output::atomicXchgAdd(LValue operand, TypedPointer pointer, Width width)
diff --git a/Source/JavaScriptCore/ftl/FTLOutput.h b/Source/JavaScriptCore/ftl/FTLOutput.h
index c56b4ef..4c02b34 100644
--- a/Source/JavaScriptCore/ftl/FTLOutput.h
+++ b/Source/JavaScriptCore/ftl/FTLOutput.h
@@ -39,6 +39,7 @@
 #include "FTLAbbreviatedTypes.h"
 #include "FTLAbstractHeapRepository.h"
 #include "FTLCommonValues.h"
+#include "FTLSelectPredictability.h"
 #include "FTLState.h"
 #include "FTLSwitchCase.h"
 #include "FTLTypedPointer.h"
@@ -365,7 +366,7 @@
     LValue testIsZeroPtr(LValue value, LValue mask) { return isNull(bitAnd(value, mask)); }
     LValue testNonZeroPtr(LValue value, LValue mask) { return notNull(bitAnd(value, mask)); }
 
-    LValue select(LValue value, LValue taken, LValue notTaken);
+    LValue select(LValue value, LValue taken, LValue notTaken, SelectPredictability = SelectPredictability::NotPredictable);
     
     // These are relaxed atomics by default. Use AbstractHeapRepository::decorateFencedAccess() with a
     // non-null heap to make them seq_cst fenced.
diff --git a/Source/JavaScriptCore/ftl/FTLSelectPredictability.h b/Source/JavaScriptCore/ftl/FTLSelectPredictability.h
new file mode 100644
index 0000000..7602acf
--- /dev/null
+++ b/Source/JavaScriptCore/ftl/FTLSelectPredictability.h
@@ -0,0 +1,44 @@
+/*
+ * Copyright (C) 2019 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#if ENABLE(FTL_JIT)
+
+namespace JSC::FTL {
+
+enum class SelectPredictability : uint8_t {
+    // Use this when we expect it to be very unlikely the branch predictor will be able to guess which side of the select will be chosen. This tells B3 to try to emit the select as a conditional move, which (usually) is not speculated by the CPU.
+    NotPredictable,
+    // Use this when it's possible a branch predictor will do a good job guessing the selected value but we don't know a priori which side is more likely.
+    Predictable,
+    // Use these when we are very sure one of the two sides is substantially more likely than the other.
+    LeftLikely,
+    RightLikely,
+};
+
+} // namespace JSC::FTL
+
+#endif // ENABLE(FTL_JIT)
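The distinction the `SelectPredictability` comments draw, a conditional move versus a predicted branch, can be illustrated outside of B3. The following is a hypothetical C++ sketch, not JSC code: `selectBranchless` models the `NotPredictable` lowering (a cmov/csel-style data dependency with roughly constant cost), while `selectBranchy` models the `Predictable`/`LeftLikely`/`RightLikely` lowerings, which are cheap exactly when the branch predictor guesses right.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch (not JSC code) of the two lowering strategies that
// Output::select chooses between based on SelectPredictability.

// Branchless select: a mask-based data dependency, analogous to the
// cmov/csel code B3 emits for a Select. Cost is constant; the CPU does
// not (usually) speculate past it.
inline int64_t selectBranchless(bool condition, int64_t left, int64_t right)
{
    int64_t mask = -static_cast<int64_t>(condition); // all ones if condition, else zero
    return (left & mask) | (right & ~mask);
}

// Branchy select: analogous to the Branch + Upsilon/Phi diamond built
// above. Nearly free when predicted, expensive when mispredicted.
inline int64_t selectBranchy(bool condition, int64_t left, int64_t right)
{
    return condition ? left : right;
}
```

Both forms compute the same value; the choice only affects how the cost is distributed between predictable and unpredictable conditions.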
diff --git a/Source/JavaScriptCore/ftl/FTLSlowPathCall.h b/Source/JavaScriptCore/ftl/FTLSlowPathCall.h
index fe2a4ec..70d288e 100644
--- a/Source/JavaScriptCore/ftl/FTLSlowPathCall.h
+++ b/Source/JavaScriptCore/ftl/FTLSlowPathCall.h
@@ -100,7 +100,7 @@
     if (callSiteIndex) {
         jit.store32(
             CCallHelpers::TrustedImm32(callSiteIndex.bits()),
-            CCallHelpers::tagFor(CallFrameSlot::argumentCountIncludingThis));
+            CCallHelpers::tagFor(VirtualRegister(CallFrameSlot::argumentCountIncludingThis)));
     }
     return callOperation(vm, usedRegisters, jit, exceptionTarget, function, resultGPR, arguments...);
 }
diff --git a/Source/JavaScriptCore/generator/Checkpoints.rb b/Source/JavaScriptCore/generator/Checkpoints.rb
new file mode 100644
index 0000000..96a5f52
--- /dev/null
+++ b/Source/JavaScriptCore/generator/Checkpoints.rb
@@ -0,0 +1,64 @@
+# Copyright (C) 2019 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+class Checkpoint
+    attr_reader :uses
+    attr_reader :defs
+    attr_reader :name
+    attr_reader :index
+
+    def initialize(name, index, tmps)
+        @name = name
+        @index = index
+        @uses = tmps.filter { |(tmp, useDef)| useDef == :use }
+        @defs = tmps.filter { |(tmp, useDef)| useDef == :define }
+    end
+
+    def liveness(namespace)
+        <<-EOF
+    case #{namespace}::#{name}:
+EOF
+    end
+end
+
+class Checkpoints
+    def initialize(opcode, tmps, checkpoints)
+        @opcode = opcode
+        @tmps = tmps
+        @checkpoints = checkpoints.map.with_index { |(name, tmps), index| Checkpoint.new(name, index, tmps) }
+    end
+
+    def liveness
+        cases = @checkpoints.map do |checkpoint|
+            checkpoint.liveness(@opcode.capitalized_name)
+        end.join
+        <<-EOF
+BitMap<maxNumTmps> tmpLivenessFor#{@opcode.capitalized_name}(uint8_t checkpoint)
+{
+    switch (checkpoint) {
+#{cases}
+    }
+}
+EOF
+    end
+end
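For reference, here is a standalone Ruby sketch of the C++ that the heredoc templates in `Checkpoints.rb` are shaped to emit. It is hypothetical: the opcode name and checkpoint labels are made up, and only the `<<-EOF` heredoc technique and the `tmpLivenessFor…` switch shape mirror the generator above.

```ruby
# Hypothetical sketch of the generator's output, using the same <<-EOF
# heredoc technique as Checkpoints.rb. Opcode and checkpoint names are
# illustrative only.

def liveness_case(namespace, name)
    <<-EOF
    case #{namespace}::#{name}:
EOF
end

def liveness_function(opcode_name, checkpoint_names)
    cases = checkpoint_names.map { |name| liveness_case(opcode_name, name) }.join
    <<-EOF
BitMap<maxNumTmps> tmpLivenessFor#{opcode_name}(uint8_t checkpoint)
{
    switch (checkpoint) {
#{cases}    }
}
EOF
end

puts liveness_function("OpCallVarargs", ["determiningArgCount", "makeCall"])
```

Each checkpoint contributes one `case` label, so the generated function can dispatch on the `uint8_t` checkpoint index carried in a `BytecodeIndex`.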
diff --git a/Source/JavaScriptCore/generator/Opcode.rb b/Source/JavaScriptCore/generator/Opcode.rb
index 684b300..82374b7 100644
--- a/Source/JavaScriptCore/generator/Opcode.rb
+++ b/Source/JavaScriptCore/generator/Opcode.rb
@@ -30,6 +30,7 @@
     attr_reader :args
     attr_reader :metadata
     attr_reader :extras
+    attr_reader :checkpoints
 
     module Size
         Narrow = "OpcodeSize::Narrow"
@@ -45,12 +46,14 @@
         tid
     end
 
-    def initialize(section, name, extras, args, metadata, metadata_initializers)
+    def initialize(section, name, extras, args, metadata, metadata_initializers, tmps, checkpoints)
         @section = section
         @name = name
         @extras = extras || {}
         @metadata = Metadata.new metadata, metadata_initializers
         @args = args.map.with_index { |(arg_name, type), index| Argument.new arg_name, type, index } unless args.nil?
+        @tmps = tmps
+        @checkpoints = checkpoints.map { |(checkpoint, _)| checkpoint } unless checkpoints.nil?
     end
 
     def create_id!
@@ -127,6 +130,8 @@
         <<-EOF
 struct #{capitalized_name} : public Instruction {
     #{opcodeID}
+    #{temps}
+    #{checkpointValues}
 
 #{emitter}
 #{dumper}
@@ -141,6 +146,18 @@
         "static constexpr #{opcodeIDType} opcodeID = #{name};"
     end
 
+    def checkpointValues
+        return if @checkpoints.nil?
+
+        ["enum Checkpoints : uint8_t {"].concat(checkpoints.map{ |checkpoint| "        #{checkpoint}," }).concat(["        numberOfCheckpoints,", "    };"]).join("\n")
+    end
+
+    def temps
+        return if @tmps.nil?
+
+        ["enum Tmps : uint8_t {"].concat(@tmps.map {|(tmp, type)| "        #{tmp},"}).push("    };").join("\n")
+    end
+
     def emitter
         op_wide16 = Argument.new(wide16, opcodeIDType, 0)
         op_wide32 = Argument.new(wide32, opcodeIDType, 0)
@@ -209,6 +226,7 @@
     template<OpcodeSize __size, bool recordOpcode, typename BytecodeGenerator>
     static bool emitImpl(BytecodeGenerator* gen#{typed_args}#{metadata_param})
     {
+        #{!@checkpoints.nil? ? "gen->setUsesCheckpoints();" : ""}
         if (__size == OpcodeSize::Wide16)
             gen->alignWideOpcode16();
         else if (__size == OpcodeSize::Wide32)
diff --git a/Source/JavaScriptCore/generator/Section.rb b/Source/JavaScriptCore/generator/Section.rb
index 204ac75..66855ba 100644
--- a/Source/JavaScriptCore/generator/Section.rb
+++ b/Source/JavaScriptCore/generator/Section.rb
@@ -43,7 +43,7 @@
   end
 
   def create_opcode(name, config)
-      Opcode.new(self, name, config[:extras], config[:args], config[:metadata], config[:metadata_initializers])
+      Opcode.new(self, name, config[:extras], config[:args], config[:metadata], config[:metadata_initializers], config[:tmps], config[:checkpoints])
   end
 
   def add_opcode_group(name, opcodes, config)
diff --git a/Source/JavaScriptCore/heap/Heap.cpp b/Source/JavaScriptCore/heap/Heap.cpp
index e3e836e..74f82a5 100644
--- a/Source/JavaScriptCore/heap/Heap.cpp
+++ b/Source/JavaScriptCore/heap/Heap.cpp
@@ -713,6 +713,7 @@
     if (!VM::canUseJIT())
         return;
     m_vm.gatherScratchBufferRoots(roots);
+    m_vm.scanSideState(roots);
 #else
     UNUSED_PARAM(roots);
 #endif
diff --git a/Source/JavaScriptCore/interpreter/CallFrame.cpp b/Source/JavaScriptCore/interpreter/CallFrame.cpp
index 24101c0..2c775f9 100644
--- a/Source/JavaScriptCore/interpreter/CallFrame.cpp
+++ b/Source/JavaScriptCore/interpreter/CallFrame.cpp
@@ -90,22 +90,22 @@
 
 unsigned CallFrame::callSiteAsRawBits() const
 {
-    return this[CallFrameSlot::argumentCountIncludingThis].tag();
+    return this[static_cast<int>(CallFrameSlot::argumentCountIncludingThis)].tag();
 }
 
 SUPPRESS_ASAN unsigned CallFrame::unsafeCallSiteAsRawBits() const
 {
-    return this[CallFrameSlot::argumentCountIncludingThis].unsafeTag();
+    return this[static_cast<int>(CallFrameSlot::argumentCountIncludingThis)].unsafeTag();
 }
 
 CallSiteIndex CallFrame::callSiteIndex() const
 {
-    return CallSiteIndex(BytecodeIndex(callSiteAsRawBits()));
+    return CallSiteIndex::fromBits(callSiteAsRawBits());
 }
 
 SUPPRESS_ASAN CallSiteIndex CallFrame::unsafeCallSiteIndex() const
 {
-    return CallSiteIndex(BytecodeIndex(unsafeCallSiteAsRawBits()));
+    return CallSiteIndex::fromBits(unsafeCallSiteAsRawBits());
 }
 
 const Instruction* CallFrame::currentVPC() const
@@ -117,7 +117,8 @@
 void CallFrame::setCurrentVPC(const Instruction* vpc)
 {
     CallSiteIndex callSite(codeBlock()->bytecodeIndex(vpc));
-    this[CallFrameSlot::argumentCountIncludingThis].tag() = static_cast<int32_t>(callSite.bits());
+    this[static_cast<int>(CallFrameSlot::argumentCountIncludingThis)].tag() = callSite.bits();
+    ASSERT(currentVPC() == vpc);
 }
 
 unsigned CallFrame::callSiteBitsAsBytecodeOffset() const
@@ -144,7 +145,7 @@
     }
 #endif
     ASSERT(callSiteBitsAreBytecodeOffset());
-    return BytecodeIndex(callSiteBitsAsBytecodeOffset());
+    return callSiteIndex().bytecodeIndex();
 }
 
 CodeOrigin CallFrame::codeOrigin()
@@ -158,7 +159,7 @@
         return codeBlock()->codeOrigin(index);
     }
 #endif
-    return CodeOrigin(BytecodeIndex(callSiteBitsAsBytecodeOffset()));
+    return CodeOrigin(callSiteIndex().bytecodeIndex());
 }
 
 Register* CallFrame::topOfFrameInternal()
diff --git a/Source/JavaScriptCore/interpreter/CallFrame.h b/Source/JavaScriptCore/interpreter/CallFrame.h
index 8abddb8..7d56fc7 100644
--- a/Source/JavaScriptCore/interpreter/CallFrame.h
+++ b/Source/JavaScriptCore/interpreter/CallFrame.h
@@ -29,6 +29,7 @@
 #include "StackVisitor.h"
 #include "VM.h"
 #include "VMEntryRecord.h"
+#include <wtf/EnumClassOperatorOverloads.h>
 
 namespace JSC  {
 
@@ -46,18 +47,24 @@
         CallSiteIndex() = default;
         
         explicit CallSiteIndex(BytecodeIndex bytecodeIndex)
-            : m_bytecodeIndex(bytecodeIndex)
+            : m_bits(bytecodeIndex.offset())
+        { 
+            ASSERT(!bytecodeIndex.checkpoint());
+        }
+        explicit CallSiteIndex(uint32_t bits)
+            : m_bits(bits)
         { }
 
-        explicit operator bool() const { return !!m_bytecodeIndex; }
-        bool operator==(const CallSiteIndex& other) const { return m_bytecodeIndex == other.m_bytecodeIndex; }
+        explicit operator bool() const { return !!m_bits; }
+        bool operator==(const CallSiteIndex& other) const { return m_bits == other.m_bits; }
 
-        uint32_t bits() const { return m_bytecodeIndex.asBits(); }
+        uint32_t bits() const { return m_bits; }
+        static CallSiteIndex fromBits(uint32_t bits) { return CallSiteIndex(bits); }
 
-        BytecodeIndex bytecodeIndex() const { return m_bytecodeIndex; }
+        BytecodeIndex bytecodeIndex() const { return BytecodeIndex(bits()); }
 
     private:
-        BytecodeIndex m_bytecodeIndex;
+        uint32_t m_bits { BytecodeIndex().offset() };
     };
 
     class DisposableCallSiteIndex : public CallSiteIndex {
@@ -65,7 +72,7 @@
         DisposableCallSiteIndex() = default;
 
         explicit DisposableCallSiteIndex(uint32_t bits)
-            : CallSiteIndex(BytecodeIndex::fromBits(bits))
+            : CallSiteIndex(bits)
         {
         }
 
@@ -78,24 +85,27 @@
     // arm64_32 expects caller frame and return pc to use 8 bytes 
     struct CallerFrameAndPC {
         alignas(CPURegister) CallFrame* callerFrame;
-        alignas(CPURegister) const Instruction* returnPC;
+        alignas(CPURegister) void* returnPC;
         static constexpr int sizeInRegisters = 2 * sizeof(CPURegister) / sizeof(Register);
     };
     static_assert(CallerFrameAndPC::sizeInRegisters == sizeof(CallerFrameAndPC) / sizeof(Register), "CallerFrameAndPC::sizeInRegisters is incorrect.");
 
-    struct CallFrameSlot {
-        static constexpr int codeBlock = CallerFrameAndPC::sizeInRegisters;
-        static constexpr int callee = codeBlock + 1;
-        static constexpr int argumentCountIncludingThis = callee + 1;
-        static constexpr int thisArgument = argumentCountIncludingThis + 1;
-        static constexpr int firstArgument = thisArgument + 1;
+    enum class CallFrameSlot : int {
+        codeBlock = CallerFrameAndPC::sizeInRegisters,
+        callee = codeBlock + 1,
+        argumentCountIncludingThis = callee + 1,
+        thisArgument = argumentCountIncludingThis + 1,
+        firstArgument = thisArgument + 1,
     };
 
+    OVERLOAD_MATH_OPERATORS_FOR_ENUM_CLASS_WITH_INTEGRALS(CallFrameSlot)
+    OVERLOAD_RELATIONAL_OPERATORS_FOR_ENUM_CLASS_WITH_INTEGRALS(CallFrameSlot)
+
     // Represents the current state of script execution.
     // Passed as the first argument to most functions.
     class CallFrame : private Register {
     public:
-        static constexpr int headerSizeInRegisters = CallFrameSlot::argumentCountIncludingThis + 1;
+        static constexpr int headerSizeInRegisters = static_cast<int>(CallFrameSlot::argumentCountIncludingThis) + 1;
 
         // This function should only be called in very specific circumstances
         // when you've guaranteed the callee can't be a Wasm callee, and can
@@ -106,10 +116,10 @@
         // to see if it's a cell, and if it's not, we throw an exception.
         inline JSValue guaranteedJSValueCallee() const;
         inline JSObject* jsCallee() const;
-        CalleeBits callee() const { return CalleeBits(this[CallFrameSlot::callee].pointer()); }
-        SUPPRESS_ASAN CalleeBits unsafeCallee() const { return CalleeBits(this[CallFrameSlot::callee].asanUnsafePointer()); }
+        CalleeBits callee() const { return CalleeBits(this[static_cast<int>(CallFrameSlot::callee)].pointer()); }
+        SUPPRESS_ASAN CalleeBits unsafeCallee() const { return CalleeBits(this[static_cast<int>(CallFrameSlot::callee)].asanUnsafePointer()); }
         CodeBlock* codeBlock() const;
-        CodeBlock** addressOfCodeBlock() const { return bitwise_cast<CodeBlock**>(this + CallFrameSlot::codeBlock); }
+        CodeBlock** addressOfCodeBlock() const { return bitwise_cast<CodeBlock**>(this + static_cast<int>(CallFrameSlot::codeBlock)); }
         inline SUPPRESS_ASAN CodeBlock* unsafeCodeBlock() const;
         inline JSScope* scope(int scopeRegisterOffset) const;
 
@@ -182,15 +192,13 @@
         static void initDeprecatedCallFrameForDebugger(CallFrame* globalExec, JSCallee* globalCallee);
 
         // Read a register from the codeframe (or constant from the CodeBlock).
-        Register& r(int);
         Register& r(VirtualRegister);
-        // Read a register for a non-constant
-        Register& uncheckedR(int);
+        // Read a register for a known non-constant
         Register& uncheckedR(VirtualRegister);
 
         // Access to arguments as passed. (After capture, arguments may move to a different location.)
         size_t argumentCount() const { return argumentCountIncludingThis() - 1; }
-        size_t argumentCountIncludingThis() const { return this[CallFrameSlot::argumentCountIncludingThis].payload(); }
+        size_t argumentCountIncludingThis() const { return this[static_cast<int>(CallFrameSlot::argumentCountIncludingThis)].payload(); }
         static int argumentOffset(int argument) { return (CallFrameSlot::firstArgument + argument); }
         static int argumentOffsetIncludingThis(int argument) { return (CallFrameSlot::thisArgument + argument); }
 
@@ -239,7 +247,7 @@
 
         JSValue argumentAfterCapture(size_t argument);
 
-        static int offsetFor(size_t argumentCountIncludingThis) { return argumentCountIncludingThis + CallFrameSlot::thisArgument - 1; }
+        static int offsetFor(size_t argumentCountIncludingThis) { return CallFrameSlot::thisArgument + argumentCountIncludingThis - 1; }
 
         static CallFrame* noCaller() { return nullptr; }
         bool isDeprecatedCallFrameForDebugger() const
@@ -251,10 +259,10 @@
         bool isStackOverflowFrame() const;
         bool isWasmFrame() const;
 
-        void setArgumentCountIncludingThis(int count) { static_cast<Register*>(this)[CallFrameSlot::argumentCountIncludingThis].payload() = count; }
+        void setArgumentCountIncludingThis(int count) { static_cast<Register*>(this)[static_cast<int>(CallFrameSlot::argumentCountIncludingThis)].payload() = count; }
         inline void setCallee(JSObject*);
         inline void setCodeBlock(CodeBlock*);
-        void setReturnPC(void* value) { callerFrameAndPC().returnPC = reinterpret_cast<const Instruction*>(value); }
+        void setReturnPC(void* value) { callerFrameAndPC().returnPC = value; }
 
         String friendlyFunctionName();
 
diff --git a/Source/JavaScriptCore/interpreter/CallFrameInlines.h b/Source/JavaScriptCore/interpreter/CallFrameInlines.h
index a97deca..ee7ec2a 100644
--- a/Source/JavaScriptCore/interpreter/CallFrameInlines.h
+++ b/Source/JavaScriptCore/interpreter/CallFrameInlines.h
@@ -32,50 +32,39 @@
 
 namespace JSC {
 
-inline Register& CallFrame::r(int index)
-{
-    CodeBlock* codeBlock = this->codeBlock();
-    if (codeBlock->isConstantRegisterIndex(index))
-        return *reinterpret_cast<Register*>(&codeBlock->constantRegister(index));
-    return this[index];
-}
-
 inline Register& CallFrame::r(VirtualRegister reg)
 {
-    return r(reg.offset());
-}
-
-inline Register& CallFrame::uncheckedR(int index)
-{
-    RELEASE_ASSERT(index < FirstConstantRegisterIndex);
-    return this[index];
+    if (reg.isConstant())
+        return *reinterpret_cast<Register*>(&this->codeBlock()->constantRegister(reg));
+    return this[reg.offset()];
 }
 
 inline Register& CallFrame::uncheckedR(VirtualRegister reg)
 {
-    return uncheckedR(reg.offset());
+    ASSERT(!reg.isConstant());
+    return this[reg.offset()];
 }
 
 inline JSValue CallFrame::guaranteedJSValueCallee() const
 {
     ASSERT(!callee().isWasm());
-    return this[CallFrameSlot::callee].jsValue();
+    return this[static_cast<int>(CallFrameSlot::callee)].jsValue();
 }
 
 inline JSObject* CallFrame::jsCallee() const
 {
     ASSERT(!callee().isWasm());
-    return this[CallFrameSlot::callee].object();
+    return this[static_cast<int>(CallFrameSlot::callee)].object();
 }
 
 inline CodeBlock* CallFrame::codeBlock() const
 {
-    return this[CallFrameSlot::codeBlock].Register::codeBlock();
+    return this[static_cast<int>(CallFrameSlot::codeBlock)].Register::codeBlock();
 }
 
 inline SUPPRESS_ASAN CodeBlock* CallFrame::unsafeCodeBlock() const
 {
-    return this[CallFrameSlot::codeBlock].Register::asanUnsafeCodeBlock();
+    return this[static_cast<int>(CallFrameSlot::codeBlock)].Register::asanUnsafeCodeBlock();
 }
 
 inline JSGlobalObject* CallFrame::lexicalGlobalObject(VM& vm) const
@@ -102,12 +91,12 @@
 
 inline void CallFrame::setCallee(JSObject* callee)
 {
-    static_cast<Register*>(this)[CallFrameSlot::callee] = callee;
+    static_cast<Register*>(this)[static_cast<int>(CallFrameSlot::callee)] = callee;
 }
 
 inline void CallFrame::setCodeBlock(CodeBlock* codeBlock)
 {
-    static_cast<Register*>(this)[CallFrameSlot::codeBlock] = codeBlock;
+    static_cast<Register*>(this)[static_cast<int>(CallFrameSlot::codeBlock)] = codeBlock;
 }
 
 inline void CallFrame::setScope(int scopeRegisterOffset, JSScope* scope)
diff --git a/Source/JavaScriptCore/interpreter/CheckpointOSRExitSideState.h b/Source/JavaScriptCore/interpreter/CheckpointOSRExitSideState.h
new file mode 100644
index 0000000..89389d8
--- /dev/null
+++ b/Source/JavaScriptCore/interpreter/CheckpointOSRExitSideState.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright (C) 2019 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "BytecodeIndex.h"
+#include "Operands.h"
+
+namespace JSC {
+
+struct CheckpointOSRExitSideState {
+    WTF_MAKE_FAST_ALLOCATED;
+public:
+
+    BytecodeIndex bytecodeIndex;
+    JSValue tmps[maxNumCheckpointTmps] { };
+};
+
+}
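The lifecycle of this side state, written by the FTL OSR exit probe and consumed by the checkpoint resume path, can be sketched with a simplified stand-in for the VM's bookkeeping. This is a hypothetical model assuming map-like `addCheckpointOSRSideState` semantics keyed on the call frame; it is not JSC's actual `VM` implementation, and `SideState`/`SideStateRegistry` are made-up names.

```cpp
#include <cassert>
#include <map>
#include <memory>

// Hypothetical model (not JSC's VM) of checkpoint side-state bookkeeping:
// the OSR exit handler stores one record per exiting call frame, and the
// checkpoint resume path takes ownership of it exactly once.
struct SideState {
    unsigned bytecodeIndex { 0 };
    long tmps[4] { }; // stands in for JSValue tmps[maxNumCheckpointTmps]
};

class SideStateRegistry {
public:
    // Analogous in spirit to vm.addCheckpointOSRSideState(frame, WTFMove(state)).
    void add(void* frame, std::unique_ptr<SideState> state)
    {
        m_states[frame] = std::move(state);
    }

    // The consumer removes the record, so each record is observed exactly once.
    std::unique_ptr<SideState> take(void* frame)
    {
        auto it = m_states.find(frame);
        if (it == m_states.end())
            return nullptr;
        std::unique_ptr<SideState> state = std::move(it->second);
        m_states.erase(it);
        return state;
    }

private:
    // Keyed by frame pointer, mirroring how the probe above tags each
    // inline frame's side state with its materialized CallFrame*.
    std::map<void*, std::unique_ptr<SideState>> m_states;
};
```

Keying on the frame pointer is what lets one exit register side state for several inlined frames (as the probe's loop over the inline stack does) and have each frame pick up only its own record.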
diff --git a/Source/JavaScriptCore/interpreter/Interpreter.cpp b/Source/JavaScriptCore/interpreter/Interpreter.cpp
index 570b0af..3406586 100644
--- a/Source/JavaScriptCore/interpreter/Interpreter.cpp
+++ b/Source/JavaScriptCore/interpreter/Interpreter.cpp
@@ -34,6 +34,7 @@
 #include "Bytecodes.h"
 #include "CallFrameClosure.h"
 #include "CatchScope.h"
+#include "CheckpointOSRExitSideState.h"
 #include "CodeBlock.h"
 #include "CodeCache.h"
 #include "DirectArguments.h"
@@ -123,7 +124,7 @@
     CallFrame* callerFrame = callFrame->callerFrame();
     CallSiteIndex callerCallSiteIndex = callerFrame->callSiteIndex();
     CodeBlock* callerCodeBlock = callerFrame->codeBlock();
-    JSScope* callerScopeChain = callerFrame->uncheckedR(callerCodeBlock->scopeRegister().offset()).Register::scope();
+    JSScope* callerScopeChain = callerFrame->uncheckedR(callerCodeBlock->scopeRegister()).Register::scope();
     UnlinkedCodeBlock* callerUnlinkedCodeBlock = callerCodeBlock->unlinkedCodeBlock();
 
     bool isArrowFunctionContext = callerUnlinkedCodeBlock->isArrowFunction() || callerUnlinkedCodeBlock->isArrowFunctionContext();
@@ -215,6 +216,9 @@
     }
     RETURN_IF_EXCEPTION(scope, 0);
     
+    if (length > maxArguments)
+        throwStackOverflowError(globalObject, scope);
+
     if (length >= firstVarArgOffset)
         length -= firstVarArgOffset;
     else
@@ -251,7 +255,7 @@
     return length;
 }
 
-void loadVarargs(JSGlobalObject* globalObject, CallFrame* callFrame, VirtualRegister firstElementDest, JSValue arguments, uint32_t offset, uint32_t length)
+void loadVarargs(JSGlobalObject* globalObject, JSValue* firstElementDest, JSValue arguments, uint32_t offset, uint32_t length)
 {
     if (UNLIKELY(!arguments.isCell()) || !length)
         return;
@@ -263,31 +267,31 @@
     switch (cell->type()) {
     case DirectArgumentsType:
         scope.release();
-        jsCast<DirectArguments*>(cell)->copyToArguments(globalObject, callFrame, firstElementDest, offset, length);
+        jsCast<DirectArguments*>(cell)->copyToArguments(globalObject, firstElementDest, offset, length);
         return;
     case ScopedArgumentsType:
         scope.release();
-        jsCast<ScopedArguments*>(cell)->copyToArguments(globalObject, callFrame, firstElementDest, offset, length);
+        jsCast<ScopedArguments*>(cell)->copyToArguments(globalObject, firstElementDest, offset, length);
         return;
     case JSImmutableButterflyType:
         scope.release();
-        jsCast<JSImmutableButterfly*>(cell)->copyToArguments(globalObject, callFrame, firstElementDest, offset, length);
+        jsCast<JSImmutableButterfly*>(cell)->copyToArguments(globalObject, firstElementDest, offset, length);
         return; 
     default: {
         ASSERT(arguments.isObject());
         JSObject* object = jsCast<JSObject*>(cell);
         if (isJSArray(object)) {
             scope.release();
-            jsCast<JSArray*>(object)->copyToArguments(globalObject, callFrame, firstElementDest, offset, length);
+            jsCast<JSArray*>(object)->copyToArguments(globalObject, firstElementDest, offset, length);
             return;
         }
         unsigned i;
         for (i = 0; i < length && object->canGetIndexQuickly(i + offset); ++i)
-            callFrame->r(firstElementDest + i) = object->getIndexQuickly(i + offset);
+            firstElementDest[i] = object->getIndexQuickly(i + offset);
         for (; i < length; ++i) {
             JSValue value = object->get(globalObject, i + offset);
             RETURN_IF_EXCEPTION(scope, void());
-            callFrame->r(firstElementDest + i) = value;
+            firstElementDest[i] = value;
         }
         return;
     } }
@@ -299,8 +303,7 @@
     
     loadVarargs(
         globalObject,
-        callFrame,
-        calleeFrameOffset + CallFrame::argumentOffset(0),
+        bitwise_cast<JSValue*>(&callFrame->r(calleeFrameOffset + CallFrame::argumentOffset(0))),
         arguments, offset, length);
     
     newCallFrame->setArgumentCountIncludingThis(length + 1);
@@ -540,8 +543,13 @@
         m_codeBlock = visitor->codeBlock();
 
         m_handler = nullptr;
-        if (!m_isTermination) {
-            if (m_codeBlock) {
+        if (m_codeBlock) {
+            // FIXME: We should support exception handling in checkpoints.
+#if ENABLE(DFG_JIT)
+            if (removeCodePtrTag(m_returnPC) == LLInt::getCodePtr<NoPtrTag>(checkpoint_osr_exit_from_inlined_call_trampoline).executableAddress())
+                m_codeBlock->vm().findCheckpointOSRSideState(m_callFrame);
+#endif
+            if (!m_isTermination) {
                 m_handler = findExceptionHandler(visitor, m_codeBlock, RequiredHandler::AnyHandler);
                 if (m_handler)
                     return StackVisitor::Done;
@@ -563,6 +571,7 @@
         if (shouldStopUnwinding)
             return StackVisitor::Done;
 
+        m_returnPC = m_callFrame->returnPC().value();
         return StackVisitor::Continue;
     }
 
@@ -599,6 +608,7 @@
     bool m_isTermination;
     CodeBlock*& m_codeBlock;
     HandlerInfo*& m_handler;
+    mutable const void* m_returnPC { nullptr };
 };
 
 NEVER_INLINE HandlerInfo* Interpreter::unwind(VM& vm, CallFrame*& callFrame, Exception* exception)
@@ -868,7 +878,7 @@
     }
 
     VMEntryScope entryScope(vm, globalObject);
-    if (UNLIKELY(!vm.isSafeToRecurseSoft()))
+    if (UNLIKELY(!vm.isSafeToRecurseSoft() || args.size() > maxArguments))
         return checkedReturn(throwStackOverflowError(globalObject, throwScope));
 
     if (isJSCall) {
@@ -937,7 +947,7 @@
     }
 
     VMEntryScope entryScope(vm, globalObject);
-    if (UNLIKELY(!vm.isSafeToRecurseSoft())) {
+    if (UNLIKELY(!vm.isSafeToRecurseSoft() || args.size() > maxArguments)) {
         throwStackOverflowError(globalObject, throwScope);
         return nullptr;
     }
diff --git a/Source/JavaScriptCore/interpreter/Interpreter.h b/Source/JavaScriptCore/interpreter/Interpreter.h
index 1811297..e83f489 100644
--- a/Source/JavaScriptCore/interpreter/Interpreter.h
+++ b/Source/JavaScriptCore/interpreter/Interpreter.h
@@ -171,7 +171,7 @@
     static constexpr unsigned maxArguments = 0x10000;
     unsigned sizeFrameForVarargs(JSGlobalObject*, CallFrame*, VM&, JSValue arguments, unsigned numUsedStackSlots, uint32_t firstVarArgOffset);
     unsigned sizeFrameForForwardArguments(JSGlobalObject*, CallFrame*, VM&, unsigned numUsedStackSlots);
-    void loadVarargs(JSGlobalObject*, CallFrame* execCaller, VirtualRegister firstElementDest, JSValue source, uint32_t offset, uint32_t length);
+    void loadVarargs(JSGlobalObject*, JSValue* firstElementDest, JSValue source, uint32_t offset, uint32_t length);
     void setupVarargsFrame(JSGlobalObject*, CallFrame* execCaller, CallFrame* execCallee, JSValue arguments, uint32_t firstVarArgOffset, uint32_t length);
     void setupVarargsFrameAndSetThis(JSGlobalObject*, CallFrame* execCaller, CallFrame* execCallee, JSValue thisValue, JSValue arguments, uint32_t firstVarArgOffset, uint32_t length);
     void setupForwardArgumentsFrame(JSGlobalObject*, CallFrame* execCaller, CallFrame* execCallee, uint32_t length);
diff --git a/Source/JavaScriptCore/interpreter/StackVisitor.cpp b/Source/JavaScriptCore/interpreter/StackVisitor.cpp
index aa7a877..855b856 100644
--- a/Source/JavaScriptCore/interpreter/StackVisitor.cpp
+++ b/Source/JavaScriptCore/interpreter/StackVisitor.cpp
@@ -209,7 +209,7 @@
         m_frame.m_callFrame = callFrame;
         m_frame.m_inlineCallFrame = inlineCallFrame;
         if (inlineCallFrame->argumentCountRegister.isValid())
-            m_frame.m_argumentCountIncludingThis = callFrame->r(inlineCallFrame->argumentCountRegister.offset()).unboxedInt32();
+            m_frame.m_argumentCountIncludingThis = callFrame->r(inlineCallFrame->argumentCountRegister).unboxedInt32();
         else
             m_frame.m_argumentCountIncludingThis = inlineCallFrame->argumentCountIncludingThis;
         m_frame.m_codeBlock = inlineCallFrame->baselineCodeBlock.get();
diff --git a/Source/JavaScriptCore/jit/AssemblyHelpers.h b/Source/JavaScriptCore/jit/AssemblyHelpers.h
index 2820cb1..69342de 100644
--- a/Source/JavaScriptCore/jit/AssemblyHelpers.h
+++ b/Source/JavaScriptCore/jit/AssemblyHelpers.h
@@ -647,28 +647,28 @@
     }
 #endif
 
-    void emitGetFromCallFrameHeaderPtr(int entry, GPRReg to, GPRReg from = GPRInfo::callFrameRegister)
+    void emitGetFromCallFrameHeaderPtr(VirtualRegister entry, GPRReg to, GPRReg from = GPRInfo::callFrameRegister)
     {
-        loadPtr(Address(from, entry * sizeof(Register)), to);
+        loadPtr(Address(from, entry.offset() * sizeof(Register)), to);
     }
-    void emitGetFromCallFrameHeader32(int entry, GPRReg to, GPRReg from = GPRInfo::callFrameRegister)
+    void emitGetFromCallFrameHeader32(VirtualRegister entry, GPRReg to, GPRReg from = GPRInfo::callFrameRegister)
     {
-        load32(Address(from, entry * sizeof(Register)), to);
+        load32(Address(from, entry.offset() * sizeof(Register)), to);
     }
 #if USE(JSVALUE64)
-    void emitGetFromCallFrameHeader64(int entry, GPRReg to, GPRReg from = GPRInfo::callFrameRegister)
+    void emitGetFromCallFrameHeader64(VirtualRegister entry, GPRReg to, GPRReg from = GPRInfo::callFrameRegister)
     {
-        load64(Address(from, entry * sizeof(Register)), to);
+        load64(Address(from, entry.offset() * sizeof(Register)), to);
     }
 #endif // USE(JSVALUE64)
-    void emitPutToCallFrameHeader(GPRReg from, int entry)
+    void emitPutToCallFrameHeader(GPRReg from, VirtualRegister entry)
     {
-        storePtr(from, Address(GPRInfo::callFrameRegister, entry * sizeof(Register)));
+        storePtr(from, Address(GPRInfo::callFrameRegister, entry.offset() * sizeof(Register)));
     }
 
-    void emitPutToCallFrameHeader(void* value, int entry)
+    void emitPutToCallFrameHeader(void* value, VirtualRegister entry)
     {
-        storePtr(TrustedImmPtr(value), Address(GPRInfo::callFrameRegister, entry * sizeof(Register)));
+        storePtr(TrustedImmPtr(value), Address(GPRInfo::callFrameRegister, entry.offset() * sizeof(Register)));
     }
 
     void emitGetCallerFrameFromCallFrameHeaderPtr(RegisterID to)
@@ -696,19 +696,19 @@
     // caller's frame pointer. On some platforms, the callee is responsible for pushing the
     // "link register" containing the return address in the function prologue.
 #if USE(JSVALUE64)
-    void emitPutToCallFrameHeaderBeforePrologue(GPRReg from, int entry)
+    void emitPutToCallFrameHeaderBeforePrologue(GPRReg from, VirtualRegister entry)
     {
-        storePtr(from, Address(stackPointerRegister, entry * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta()));
+        storePtr(from, Address(stackPointerRegister, entry.offset() * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta()));
     }
 #else
-    void emitPutPayloadToCallFrameHeaderBeforePrologue(GPRReg from, int entry)
+    void emitPutPayloadToCallFrameHeaderBeforePrologue(GPRReg from, VirtualRegister entry)
     {
-        storePtr(from, Address(stackPointerRegister, entry * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta() + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload)));
+        storePtr(from, Address(stackPointerRegister, entry.offset() * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta() + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload)));
     }
 
-    void emitPutTagToCallFrameHeaderBeforePrologue(TrustedImm32 tag, int entry)
+    void emitPutTagToCallFrameHeaderBeforePrologue(TrustedImm32 tag, VirtualRegister entry)
     {
-        storePtr(tag, Address(stackPointerRegister, entry * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta() + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag)));
+        storePtr(tag, Address(stackPointerRegister, entry.offset() * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta() + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag)));
     }
 #endif
     
@@ -1148,9 +1148,10 @@
         ASSERT(virtualRegister.isValid());
         return Address(GPRInfo::callFrameRegister, virtualRegister.offset() * sizeof(Register));
     }
-    static Address addressFor(int operand)
+    static Address addressFor(Operand operand)
     {
-        return addressFor(static_cast<VirtualRegister>(operand));
+        ASSERT(!operand.isTmp());
+        return addressFor(operand.virtualRegister());
     }
 
     static Address tagFor(VirtualRegister virtualRegister, GPRReg baseGPR)
@@ -1163,9 +1164,10 @@
         ASSERT(virtualRegister.isValid());
         return Address(GPRInfo::callFrameRegister, virtualRegister.offset() * sizeof(Register) + TagOffset);
     }
-    static Address tagFor(int operand)
+    static Address tagFor(Operand operand)
     {
-        return tagFor(static_cast<VirtualRegister>(operand));
+        ASSERT(!operand.isTmp());
+        return tagFor(operand.virtualRegister());
     }
 
     static Address payloadFor(VirtualRegister virtualRegister, GPRReg baseGPR)
@@ -1178,30 +1180,31 @@
         ASSERT(virtualRegister.isValid());
         return Address(GPRInfo::callFrameRegister, virtualRegister.offset() * sizeof(Register) + PayloadOffset);
     }
-    static Address payloadFor(int operand)
+    static Address payloadFor(Operand operand)
     {
-        return payloadFor(static_cast<VirtualRegister>(operand));
+        ASSERT(!operand.isTmp());
+        return payloadFor(operand.virtualRegister());
     }
 
     // Access to our fixed callee CallFrame.
-    static Address calleeFrameSlot(int slot)
+    static Address calleeFrameSlot(VirtualRegister slot)
     {
-        ASSERT(slot >= CallerFrameAndPC::sizeInRegisters);
-        return Address(stackPointerRegister, sizeof(Register) * (slot - CallerFrameAndPC::sizeInRegisters));
+        ASSERT(slot.offset() >= CallerFrameAndPC::sizeInRegisters);
+        return Address(stackPointerRegister, sizeof(Register) * (slot - CallerFrameAndPC::sizeInRegisters).offset());
     }
 
     // Access to our fixed callee CallFrame.
     static Address calleeArgumentSlot(int argument)
     {
-        return calleeFrameSlot(virtualRegisterForArgument(argument).offset());
+        return calleeFrameSlot(virtualRegisterForArgumentIncludingThis(argument));
     }
 
-    static Address calleeFrameTagSlot(int slot)
+    static Address calleeFrameTagSlot(VirtualRegister slot)
     {
         return calleeFrameSlot(slot).withOffset(TagOffset);
     }
 
-    static Address calleeFramePayloadSlot(int slot)
+    static Address calleeFramePayloadSlot(VirtualRegister slot)
     {
         return calleeFrameSlot(slot).withOffset(PayloadOffset);
     }
@@ -1218,7 +1221,7 @@
 
     static Address calleeFrameCallerFrame()
     {
-        return calleeFrameSlot(0).withOffset(CallFrame::callerFrameOffset());
+        return calleeFrameSlot(VirtualRegister(0)).withOffset(CallFrame::callerFrameOffset());
     }
 
     static GPRReg selectScratchGPR(RegisterSet preserved)
@@ -1527,7 +1530,7 @@
     {
         ASSERT(!inlineCallFrame || inlineCallFrame->isVarargs());
         if (!inlineCallFrame)
-            return VirtualRegister(CallFrameSlot::argumentCountIncludingThis);
+            return CallFrameSlot::argumentCountIncludingThis;
         return inlineCallFrame->argumentCountRegister;
     }
 
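(Illustrative note, not part of the patch.) The AssemblyHelpers.h hunks widen `addressFor`/`tagFor`/`payloadFor` from `int` to `Operand`, asserting `!operand.isTmp()`: with checkpoints, an operand may now name either a stack virtual register or a checkpoint tmp, and only the former has a frame address. A simplified sketch of that tagged type (names and layout invented, not JSC's actual Operand):

```cpp
#include <cassert>

struct VirtualRegister {
    int off;
    int offset() const { return off; }
};

// Simplified Operand: either a stack VirtualRegister or a checkpoint tmp index.
class Operand {
public:
    static Operand reg(int offset) { return Operand(false, offset); }
    static Operand tmp(int index) { return Operand(true, index); }

    bool isTmp() const { return m_isTmp; }
    VirtualRegister virtualRegister() const
    {
        assert(!m_isTmp);
        return { m_value };
    }

private:
    Operand(bool isTmp, int value) : m_isTmp(isTmp), m_value(value) {}
    bool m_isTmp;
    int m_value;
};

constexpr int registerSize = 8; // sizeof(Register) stand-in

// Frame-relative byte offset; only meaningful for the register case, hence
// the assertion mirroring ASSERT(!operand.isTmp()) in the patch.
int addressOffsetFor(Operand operand)
{
    assert(!operand.isTmp()); // tmps live in side state, not on the stack
    return operand.virtualRegister().offset() * registerSize;
}
```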
diff --git a/Source/JavaScriptCore/jit/CallFrameShuffler.cpp b/Source/JavaScriptCore/jit/CallFrameShuffler.cpp
index 5dfb2f1..4d2d3db 100644
--- a/Source/JavaScriptCore/jit/CallFrameShuffler.cpp
+++ b/Source/JavaScriptCore/jit/CallFrameShuffler.cpp
@@ -61,11 +61,11 @@
 #endif
 
     ASSERT(!data.callee.isInJSStack() || data.callee.virtualRegister().isLocal());
-    addNew(VirtualRegister(CallFrameSlot::callee), data.callee);
+    addNew(CallFrameSlot::callee, data.callee);
 
     for (size_t i = 0; i < data.args.size(); ++i) {
         ASSERT(!data.args[i].isInJSStack() || data.args[i].virtualRegister().isLocal());
-        addNew(virtualRegisterForArgument(i), data.args[i]);
+        addNew(virtualRegisterForArgumentIncludingThis(i), data.args[i]);
     }
 
 #if USE(JSVALUE64)
diff --git a/Source/JavaScriptCore/jit/CallFrameShuffler.h b/Source/JavaScriptCore/jit/CallFrameShuffler.h
index 62439bd..fefcb90 100644
--- a/Source/JavaScriptCore/jit/CallFrameShuffler.h
+++ b/Source/JavaScriptCore/jit/CallFrameShuffler.h
@@ -105,7 +105,7 @@
         data.callee = getNew(VirtualRegister { CallFrameSlot::callee })->recovery();
         data.args.resize(argCount());
         for (size_t i = 0; i < argCount(); ++i)
-            data.args[i] = getNew(virtualRegisterForArgument(i))->recovery();
+            data.args[i] = getNew(virtualRegisterForArgumentIncludingThis(i))->recovery();
         for (Reg reg = Reg::first(); reg <= Reg::last(); reg = reg.next()) {
             CachedRecovery* cachedRecovery { m_newRegisters[reg] };
             if (!cachedRecovery)
@@ -128,7 +128,7 @@
     {
         ASSERT(isUndecided());
         ASSERT(!getNew(jsValueRegs));
-        CachedRecovery* cachedRecovery { getNew(VirtualRegister(CallFrameSlot::callee)) };
+        CachedRecovery* cachedRecovery { getNew(CallFrameSlot::callee) };
         ASSERT(cachedRecovery);
         addNew(jsValueRegs, cachedRecovery->recovery());
     }
@@ -141,7 +141,7 @@
     void assumeCalleeIsCell()
     {
 #if USE(JSVALUE32_64)
-        CachedRecovery& calleeCachedRecovery = *getNew(VirtualRegister(CallFrameSlot::callee));
+        CachedRecovery& calleeCachedRecovery = *getNew(CallFrameSlot::callee);
         switch (calleeCachedRecovery.recovery().technique()) {
         case InPair:
             updateRecovery(
diff --git a/Source/JavaScriptCore/jit/JIT.h b/Source/JavaScriptCore/jit/JIT.h
index 5656675..f6f09b9 100644
--- a/Source/JavaScriptCore/jit/JIT.h
+++ b/Source/JavaScriptCore/jit/JIT.h
@@ -331,16 +331,16 @@
         void compileOpStrictEqJump(const Instruction*, CompileOpStrictEqType);
         enum class CompileOpEqType { Eq, NEq };
         void compileOpEqJumpSlow(Vector<SlowCaseEntry>::iterator&, CompileOpEqType, int jumpTarget);
-        bool isOperandConstantDouble(int src);
+        bool isOperandConstantDouble(VirtualRegister);
         
-        void emitLoadDouble(int index, FPRegisterID value);
-        void emitLoadInt32ToDouble(int index, FPRegisterID value);
+        void emitLoadDouble(VirtualRegister, FPRegisterID value);
+        void emitLoadInt32ToDouble(VirtualRegister, FPRegisterID value);
 
         enum WriteBarrierMode { UnconditionalWriteBarrier, ShouldFilterBase, ShouldFilterValue, ShouldFilterBaseAndValue };
         // value register in write barrier is used before any scratch registers
         // so may safely be the same as either of the scratch registers.
-        void emitWriteBarrier(unsigned owner, unsigned value, WriteBarrierMode);
-        void emitWriteBarrier(JSCell* owner, unsigned value, WriteBarrierMode);
+        void emitWriteBarrier(VirtualRegister owner, VirtualRegister value, WriteBarrierMode);
+        void emitWriteBarrier(JSCell* owner, VirtualRegister value, WriteBarrierMode);
         void emitWriteBarrier(JSCell* owner);
 
         // This assumes that the value to profile is in regT0 and that regT3 is available for
@@ -404,32 +404,32 @@
 
         enum FinalObjectMode { MayBeFinal, KnownNotFinal };
 
-        void emitGetVirtualRegister(int src, JSValueRegs dst);
-        void emitPutVirtualRegister(int dst, JSValueRegs src);
+        void emitGetVirtualRegister(VirtualRegister src, JSValueRegs dst);
+        void emitPutVirtualRegister(VirtualRegister dst, JSValueRegs src);
 
-        int32_t getOperandConstantInt(int src);
-        double getOperandConstantDouble(int src);
+        int32_t getOperandConstantInt(VirtualRegister src);
+        double getOperandConstantDouble(VirtualRegister src);
 
 #if USE(JSVALUE32_64)
-        bool getOperandConstantInt(int op1, int op2, int& op, int32_t& constant);
+        bool getOperandConstantInt(VirtualRegister op1, VirtualRegister op2, int& op, int32_t& constant);
 
-        void emitLoadTag(int index, RegisterID tag);
-        void emitLoadPayload(int index, RegisterID payload);
+        void emitLoadTag(VirtualRegister, RegisterID tag);
+        void emitLoadPayload(VirtualRegister, RegisterID payload);
 
         void emitLoad(const JSValue& v, RegisterID tag, RegisterID payload);
-        void emitLoad(int index, RegisterID tag, RegisterID payload, RegisterID base = callFrameRegister);
-        void emitLoad2(int index1, RegisterID tag1, RegisterID payload1, int index2, RegisterID tag2, RegisterID payload2);
+        void emitLoad(VirtualRegister, RegisterID tag, RegisterID payload, RegisterID base = callFrameRegister);
+        void emitLoad2(VirtualRegister, RegisterID tag1, RegisterID payload1, VirtualRegister, RegisterID tag2, RegisterID payload2);
 
-        void emitStore(int index, RegisterID tag, RegisterID payload, RegisterID base = callFrameRegister);
-        void emitStore(int index, const JSValue constant, RegisterID base = callFrameRegister);
-        void emitStoreInt32(int index, RegisterID payload, bool indexIsInt32 = false);
-        void emitStoreInt32(int index, TrustedImm32 payload, bool indexIsInt32 = false);
-        void emitStoreCell(int index, RegisterID payload, bool indexIsCell = false);
-        void emitStoreBool(int index, RegisterID payload, bool indexIsBool = false);
-        void emitStoreDouble(int index, FPRegisterID value);
+        void emitStore(VirtualRegister, RegisterID tag, RegisterID payload, RegisterID base = callFrameRegister);
+        void emitStore(VirtualRegister, const JSValue constant, RegisterID base = callFrameRegister);
+        void emitStoreInt32(VirtualRegister, RegisterID payload, bool indexIsInt32 = false);
+        void emitStoreInt32(VirtualRegister, TrustedImm32 payload, bool indexIsInt32 = false);
+        void emitStoreCell(VirtualRegister, RegisterID payload, bool indexIsCell = false);
+        void emitStoreBool(VirtualRegister, RegisterID payload, bool indexIsBool = false);
+        void emitStoreDouble(VirtualRegister, FPRegisterID value);
 
-        void emitJumpSlowCaseIfNotJSCell(int virtualRegisterIndex);
-        void emitJumpSlowCaseIfNotJSCell(int virtualRegisterIndex, RegisterID tag);
+        void emitJumpSlowCaseIfNotJSCell(VirtualRegister);
+        void emitJumpSlowCaseIfNotJSCell(VirtualRegister, RegisterID tag);
 
         void compileGetByIdHotPath(const Identifier*);
 
@@ -438,17 +438,10 @@
         void emitBinaryDoubleOp(const Instruction *, OperandTypes, JumpList& notInt32Op1, JumpList& notInt32Op2, bool op1IsInRegisters = true, bool op2IsInRegisters = true);
 
 #else // USE(JSVALUE32_64)
-        void emitGetVirtualRegister(int src, RegisterID dst);
         void emitGetVirtualRegister(VirtualRegister src, RegisterID dst);
-        void emitGetVirtualRegisters(int src1, RegisterID dst1, int src2, RegisterID dst2);
         void emitGetVirtualRegisters(VirtualRegister src1, RegisterID dst1, VirtualRegister src2, RegisterID dst2);
-        void emitPutVirtualRegister(int dst, RegisterID from = regT0);
         void emitPutVirtualRegister(VirtualRegister dst, RegisterID from = regT0);
-        void emitStoreCell(int dst, RegisterID payload, bool /* only used in JSValue32_64 */ = false)
-        {
-            emitPutVirtualRegister(dst, payload);
-        }
-        void emitStoreCell(VirtualRegister dst, RegisterID payload)
+        void emitStoreCell(VirtualRegister dst, RegisterID payload, bool /* only used in JSValue32_64 */ = false)
         {
             emitPutVirtualRegister(dst, payload);
         }
@@ -456,29 +449,29 @@
         Jump emitJumpIfBothJSCells(RegisterID, RegisterID, RegisterID);
         void emitJumpSlowCaseIfJSCell(RegisterID);
         void emitJumpSlowCaseIfNotJSCell(RegisterID);
-        void emitJumpSlowCaseIfNotJSCell(RegisterID, int VReg);
+        void emitJumpSlowCaseIfNotJSCell(RegisterID, VirtualRegister);
         Jump emitJumpIfNotInt(RegisterID, RegisterID, RegisterID scratch);
         PatchableJump emitPatchableJumpIfNotInt(RegisterID);
         void emitJumpSlowCaseIfNotInt(RegisterID);
         void emitJumpSlowCaseIfNotNumber(RegisterID);
         void emitJumpSlowCaseIfNotInt(RegisterID, RegisterID, RegisterID scratch);
 
-        void compileGetByIdHotPath(int baseVReg, const Identifier*);
+        void compileGetByIdHotPath(VirtualRegister baseReg, const Identifier*);
 
 #endif // USE(JSVALUE32_64)
 
         template<typename Op>
         void emit_compareAndJump(const Instruction*, RelationalCondition);
-        void emit_compareAndJumpImpl(int op1, int op2, unsigned target, RelationalCondition);
+        void emit_compareAndJumpImpl(VirtualRegister op1, VirtualRegister op2, unsigned target, RelationalCondition);
         template<typename Op>
         void emit_compareUnsigned(const Instruction*, RelationalCondition);
-        void emit_compareUnsignedImpl(int dst, int op1, int op2, RelationalCondition);
+        void emit_compareUnsignedImpl(VirtualRegister dst, VirtualRegister op1, VirtualRegister op2, RelationalCondition);
         template<typename Op>
         void emit_compareUnsignedAndJump(const Instruction*, RelationalCondition);
-        void emit_compareUnsignedAndJumpImpl(int op1, int op2, unsigned target, RelationalCondition);
+        void emit_compareUnsignedAndJumpImpl(VirtualRegister op1, VirtualRegister op2, unsigned target, RelationalCondition);
         template<typename Op>
         void emit_compareAndJumpSlow(const Instruction*, DoubleCondition, size_t (JIT_OPERATION *operation)(JSGlobalObject*, EncodedJSValue, EncodedJSValue), bool invert, Vector<SlowCaseEntry>::iterator&);
-        void emit_compareAndJumpSlowImpl(int op1, int op2, unsigned target, size_t instructionSize, DoubleCondition, size_t (JIT_OPERATION *operation)(JSGlobalObject*, EncodedJSValue, EncodedJSValue), bool invert, Vector<SlowCaseEntry>::iterator&);
+        void emit_compareAndJumpSlowImpl(VirtualRegister op1, VirtualRegister op2, unsigned target, size_t instructionSize, DoubleCondition, size_t (JIT_OPERATION *operation)(JSGlobalObject*, EncodedJSValue, EncodedJSValue), bool invert, Vector<SlowCaseEntry>::iterator&);
         
         void assertStackPointerOffset();
 
@@ -686,8 +679,8 @@
         template<typename Op>
         void emitNewFuncExprCommon(const Instruction*);
         void emitVarInjectionCheck(bool needsVarInjectionChecks);
-        void emitResolveClosure(int dst, int scope, bool needsVarInjectionChecks, unsigned depth);
-        void emitLoadWithStructureCheck(int scope, Structure** structureSlot);
+        void emitResolveClosure(VirtualRegister dst, VirtualRegister scope, bool needsVarInjectionChecks, unsigned depth);
+        void emitLoadWithStructureCheck(VirtualRegister scope, Structure** structureSlot);
 #if USE(JSVALUE64)
         void emitGetVarFromPointer(JSValue* operand, GPRReg);
         void emitGetVarFromIndirectPointer(JSValue** operand, GPRReg);
@@ -695,20 +688,20 @@
         void emitGetVarFromIndirectPointer(JSValue** operand, GPRReg tag, GPRReg payload);
         void emitGetVarFromPointer(JSValue* operand, GPRReg tag, GPRReg payload);
 #endif
-        void emitGetClosureVar(int scope, uintptr_t operand);
+        void emitGetClosureVar(VirtualRegister scope, uintptr_t operand);
         void emitNotifyWrite(WatchpointSet*);
         void emitNotifyWrite(GPRReg pointerToSet);
-        void emitPutGlobalVariable(JSValue* operand, int value, WatchpointSet*);
-        void emitPutGlobalVariableIndirect(JSValue** addressOfOperand, int value, WatchpointSet**);
-        void emitPutClosureVar(int scope, uintptr_t operand, int value, WatchpointSet*);
+        void emitPutGlobalVariable(JSValue* operand, VirtualRegister value, WatchpointSet*);
+        void emitPutGlobalVariableIndirect(JSValue** addressOfOperand, VirtualRegister value, WatchpointSet**);
+        void emitPutClosureVar(VirtualRegister scope, uintptr_t operand, VirtualRegister value, WatchpointSet*);
 
-        void emitInitRegister(int dst);
+        void emitInitRegister(VirtualRegister);
 
-        void emitPutIntToCallFrameHeader(RegisterID from, int entry);
+        void emitPutIntToCallFrameHeader(RegisterID from, VirtualRegister);
 
-        JSValue getConstantOperand(int src);
-        bool isOperandConstantInt(int src);
-        bool isOperandConstantChar(int src);
+        JSValue getConstantOperand(VirtualRegister);
+        bool isOperandConstantInt(VirtualRegister);
+        bool isOperandConstantChar(VirtualRegister);
 
         template <typename Op, typename Generator, typename ProfiledFunction, typename NonProfiledFunction>
         void emitMathICFast(JITUnaryMathIC<Generator>*, const Instruction*, ProfiledFunction, NonProfiledFunction);
@@ -735,7 +728,7 @@
             ASSERT(!iter->from.isSet());
             ++iter;
         }
-        void linkSlowCaseIfNotJSCell(Vector<SlowCaseEntry>::iterator&, int virtualRegisterIndex);
+        void linkSlowCaseIfNotJSCell(Vector<SlowCaseEntry>::iterator&, VirtualRegister);
         void linkAllSlowCasesForBytecodeIndex(Vector<SlowCaseEntry>& slowCases,
             Vector<SlowCaseEntry>::iterator&, BytecodeIndex bytecodeOffset);
 
@@ -755,13 +748,13 @@
         MacroAssembler::Call appendCallWithExceptionCheckAndSlowPathReturnType(const FunctionPtr<CFunctionPtrTag>);
 #endif
         MacroAssembler::Call appendCallWithCallFrameRollbackOnException(const FunctionPtr<CFunctionPtrTag>);
-        MacroAssembler::Call appendCallWithExceptionCheckSetJSValueResult(const FunctionPtr<CFunctionPtrTag>, int);
+        MacroAssembler::Call appendCallWithExceptionCheckSetJSValueResult(const FunctionPtr<CFunctionPtrTag>, VirtualRegister result);
         template<typename Metadata>
-        MacroAssembler::Call appendCallWithExceptionCheckSetJSValueResultWithProfile(Metadata&, const FunctionPtr<CFunctionPtrTag>, int);
+        MacroAssembler::Call appendCallWithExceptionCheckSetJSValueResultWithProfile(Metadata&, const FunctionPtr<CFunctionPtrTag>, VirtualRegister result);
         
         template<typename OperationType, typename... Args>
         std::enable_if_t<FunctionTraits<OperationType>::hasResult, MacroAssembler::Call>
-        callOperation(OperationType operation, int result, Args... args)
+        callOperation(OperationType operation, VirtualRegister result, Args... args)
         {
             setupArguments<OperationType>(args...);
             return appendCallWithExceptionCheckSetJSValueResult(operation, result);
@@ -805,7 +798,7 @@
 
         template<typename Metadata, typename OperationType, typename... Args>
         std::enable_if_t<FunctionTraits<OperationType>::hasResult, MacroAssembler::Call>
-        callOperationWithProfile(Metadata& metadata, OperationType operation, int result, Args... args)
+        callOperationWithProfile(Metadata& metadata, OperationType operation, VirtualRegister result, Args... args)
         {
             setupArguments<OperationType>(args...);
             return appendCallWithExceptionCheckSetJSValueResultWithProfile(metadata, operation, result);
@@ -865,7 +858,7 @@
 #endif
 
 #ifndef NDEBUG
-        void printBytecodeOperandTypes(int src1, int src2);
+        void printBytecodeOperandTypes(VirtualRegister src1, VirtualRegister src2);
 #endif
 
 #if ENABLE(SAMPLING_FLAGS)
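(Illustrative note, not part of the patch.) A large share of the JIT.h hunks above replace raw `int` operand parameters with `VirtualRegister`. Beyond readability, an explicit-constructor wrapper turns unit confusion (register offset vs. argument index vs. constant index) into a compile error rather than a silent reinterpretation. A minimal sketch of the idea, not JSC's actual class:

```cpp
#include <cassert>

class VirtualRegister {
public:
    // `explicit` blocks implicit int -> VirtualRegister conversion, so call
    // sites written against the old int-typed signatures fail to compile.
    explicit VirtualRegister(int offset) : m_offset(offset) {}
    int offset() const { return m_offset; }
    bool operator==(VirtualRegister other) const { return m_offset == other.m_offset; }

private:
    int m_offset;
};

// Stub with the new-style signature, for illustration only.
bool isOperandConstantInt(VirtualRegister) { return false; }

// isOperandConstantInt(7);                  // would no longer compile
// isOperandConstantInt(VirtualRegister(7)); // intent is explicit
```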
diff --git a/Source/JavaScriptCore/jit/JITArithmetic.cpp b/Source/JavaScriptCore/jit/JITArithmetic.cpp
index 3458744..3a32c32 100644
--- a/Source/JavaScriptCore/jit/JITArithmetic.cpp
+++ b/Source/JavaScriptCore/jit/JITArithmetic.cpp
@@ -158,8 +158,8 @@
 void JIT::emit_op_unsigned(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpUnsigned>();
-    int result = bytecode.m_dst.offset();
-    int op1 = bytecode.m_operand.offset();
+    VirtualRegister result = bytecode.m_dst;
+    VirtualRegister op1 = bytecode.m_operand;
     
     emitGetVirtualRegister(op1, regT0);
     emitJumpSlowCaseIfNotInt(regT0);
@@ -172,13 +172,13 @@
 void JIT::emit_compareAndJump(const Instruction* instruction, RelationalCondition condition)
 {
     auto bytecode = instruction->as<Op>();
-    int op1 = bytecode.m_lhs.offset();
-    int op2 = bytecode.m_rhs.offset();
+    VirtualRegister op1 = bytecode.m_lhs;
+    VirtualRegister op2 = bytecode.m_rhs;
     unsigned target = jumpTarget(instruction, bytecode.m_targetLabel);
     emit_compareAndJumpImpl(op1, op2, target, condition);
 }
 
-void JIT::emit_compareAndJumpImpl(int op1, int op2, unsigned target, RelationalCondition condition)
+void JIT::emit_compareAndJumpImpl(VirtualRegister op1, VirtualRegister op2, unsigned target, RelationalCondition condition)
 {
     // We generate inline code for the following cases in the fast path:
     // - int immediate to constant int immediate
@@ -230,13 +230,13 @@
 void JIT::emit_compareUnsignedAndJump(const Instruction* instruction, RelationalCondition condition)
 {
     auto bytecode = instruction->as<Op>();
-    int op1 = bytecode.m_lhs.offset();
-    int op2 = bytecode.m_rhs.offset();
+    VirtualRegister op1 = bytecode.m_lhs;
+    VirtualRegister op2 = bytecode.m_rhs;
     unsigned target = jumpTarget(instruction, bytecode.m_targetLabel);
     emit_compareUnsignedAndJumpImpl(op1, op2, target, condition);
 }
 
-void JIT::emit_compareUnsignedAndJumpImpl(int op1, int op2, unsigned target, RelationalCondition condition)
+void JIT::emit_compareUnsignedAndJumpImpl(VirtualRegister op1, VirtualRegister op2, unsigned target, RelationalCondition condition)
 {
     if (isOperandConstantInt(op2)) {
         emitGetVirtualRegister(op1, regT0);
@@ -256,13 +256,13 @@
 void JIT::emit_compareUnsigned(const Instruction* instruction, RelationalCondition condition)
 {
     auto bytecode = instruction->as<Op>();
-    int dst = bytecode.m_dst.offset();
-    int op1 = bytecode.m_lhs.offset();
-    int op2 = bytecode.m_rhs.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister op1 = bytecode.m_lhs;
+    VirtualRegister op2 = bytecode.m_rhs;
     emit_compareUnsignedImpl(dst, op1, op2, condition);
 }
 
-void JIT::emit_compareUnsignedImpl(int dst, int op1, int op2, RelationalCondition condition)
+void JIT::emit_compareUnsignedImpl(VirtualRegister dst, VirtualRegister op1, VirtualRegister op2, RelationalCondition condition)
 {
     if (isOperandConstantInt(op2)) {
         emitGetVirtualRegister(op1, regT0);
@@ -284,13 +284,13 @@
 void JIT::emit_compareAndJumpSlow(const Instruction* instruction, DoubleCondition condition, size_t (JIT_OPERATION *operation)(JSGlobalObject*, EncodedJSValue, EncodedJSValue), bool invert, Vector<SlowCaseEntry>::iterator& iter)
 {
     auto bytecode = instruction->as<Op>();
-    int op1 = bytecode.m_lhs.offset();
-    int op2 = bytecode.m_rhs.offset();
+    VirtualRegister op1 = bytecode.m_lhs;
+    VirtualRegister op2 = bytecode.m_rhs;
     unsigned target = jumpTarget(instruction, bytecode.m_targetLabel);
     emit_compareAndJumpSlowImpl(op1, op2, target, instruction->size(), condition, operation, invert, iter);
 }
 
-void JIT::emit_compareAndJumpSlowImpl(int op1, int op2, unsigned target, size_t instructionSize, DoubleCondition condition, size_t (JIT_OPERATION *operation)(JSGlobalObject*, EncodedJSValue, EncodedJSValue), bool invert, Vector<SlowCaseEntry>::iterator& iter)
+void JIT::emit_compareAndJumpSlowImpl(VirtualRegister op1, VirtualRegister op2, unsigned target, size_t instructionSize, DoubleCondition condition, size_t (JIT_OPERATION *operation)(JSGlobalObject*, EncodedJSValue, EncodedJSValue), bool invert, Vector<SlowCaseEntry>::iterator& iter)
 {
 
     // We generate inline code for the following cases in the slow path:
@@ -387,7 +387,7 @@
 void JIT::emit_op_inc(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpInc>();
-    int srcDst = bytecode.m_srcDst.offset();
+    VirtualRegister srcDst = bytecode.m_srcDst;
 
     emitGetVirtualRegister(srcDst, regT0);
     emitJumpSlowCaseIfNotInt(regT0);
@@ -399,7 +399,7 @@
 void JIT::emit_op_dec(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpDec>();
-    int srcDst = bytecode.m_srcDst.offset();
+    VirtualRegister srcDst = bytecode.m_srcDst;
 
     emitGetVirtualRegister(srcDst, regT0);
     emitJumpSlowCaseIfNotInt(regT0);
@@ -415,9 +415,9 @@
 void JIT::emit_op_mod(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpMod>();
-    int result = bytecode.m_dst.offset();
-    int op1 = bytecode.m_lhs.offset();
-    int op2 = bytecode.m_rhs.offset();
+    VirtualRegister result = bytecode.m_dst;
+    VirtualRegister op1 = bytecode.m_lhs;
+    VirtualRegister op2 = bytecode.m_rhs;
 
     // Make sure registers are correct for x86 IDIV instructions.
     ASSERT(regT0 == X86Registers::eax);
@@ -491,9 +491,9 @@
 void JIT::emitBitBinaryOpFastPath(const Instruction* currentInstruction, ProfilingPolicy profilingPolicy)
 {
     auto bytecode = currentInstruction->as<Op>();
-    int result = bytecode.m_dst.offset();
-    int op1 = bytecode.m_lhs.offset();
-    int op2 = bytecode.m_rhs.offset();
+    VirtualRegister result = bytecode.m_dst;
+    VirtualRegister op1 = bytecode.m_lhs;
+    VirtualRegister op2 = bytecode.m_rhs;
 
 #if USE(JSVALUE64)
     JSValueRegs leftRegs = JSValueRegs(regT0);
@@ -538,8 +538,8 @@
 void JIT::emit_op_bitnot(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpBitnot>();
-    int result = bytecode.m_dst.offset();
-    int op1 = bytecode.m_operand.offset();
+    VirtualRegister result = bytecode.m_dst;
+    VirtualRegister op1 = bytecode.m_operand;
 
 #if USE(JSVALUE64)
     JSValueRegs leftRegs = JSValueRegs(regT0);
@@ -599,9 +599,9 @@
 void JIT::emitRightShiftFastPath(const Instruction* currentInstruction, JITRightShiftGenerator::ShiftType snippetShiftType)
 {
     auto bytecode = currentInstruction->as<Op>();
-    int result = bytecode.m_dst.offset();
-    int op1 = bytecode.m_lhs.offset();
-    int op2 = bytecode.m_rhs.offset();
+    VirtualRegister result = bytecode.m_dst;
+    VirtualRegister op1 = bytecode.m_lhs;
+    VirtualRegister op2 = bytecode.m_rhs;
 
 #if USE(JSVALUE64)
     JSValueRegs leftRegs = JSValueRegs(regT0);
@@ -674,8 +674,8 @@
 void JIT::emitMathICFast(JITUnaryMathIC<Generator>* mathIC, const Instruction* currentInstruction, ProfiledFunction profiledFunction, NonProfiledFunction nonProfiledFunction)
 {
     auto bytecode = currentInstruction->as<Op>();
-    int result = bytecode.m_dst.offset();
-    int operand = bytecode.m_operand.offset();
+    VirtualRegister result = bytecode.m_dst;
+    VirtualRegister operand = bytecode.m_operand;
 
 #if USE(JSVALUE64)
     // ArithNegate benefits from using the same register as src and dst.
@@ -724,9 +724,9 @@
 void JIT::emitMathICFast(JITBinaryMathIC<Generator>* mathIC, const Instruction* currentInstruction, ProfiledFunction profiledFunction, NonProfiledFunction nonProfiledFunction)
 {
     auto bytecode = currentInstruction->as<Op>();
-    int result = bytecode.m_dst.offset();
-    int op1 = bytecode.m_lhs.offset();
-    int op2 = bytecode.m_rhs.offset();
+    VirtualRegister result = bytecode.m_dst;
+    VirtualRegister op1 = bytecode.m_lhs;
+    VirtualRegister op2 = bytecode.m_rhs;
 
 #if USE(JSVALUE64)
     JSValueRegs leftRegs = JSValueRegs(regT1);
@@ -799,7 +799,7 @@
     mathICGenerationState.slowPathStart = label();
 
     auto bytecode = currentInstruction->as<Op>();
-    int result = bytecode.m_dst.offset();
+    VirtualRegister result = bytecode.m_dst;
 
 #if USE(JSVALUE64)
     JSValueRegs srcRegs = JSValueRegs(regT1);
@@ -845,9 +845,9 @@
     mathICGenerationState.slowPathStart = label();
 
     auto bytecode = currentInstruction->as<Op>();
-    int result = bytecode.m_dst.offset();
-    int op1 = bytecode.m_lhs.offset();
-    int op2 = bytecode.m_rhs.offset();
+    VirtualRegister result = bytecode.m_dst;
+    VirtualRegister op1 = bytecode.m_lhs;
+    VirtualRegister op2 = bytecode.m_rhs;
 
 #if USE(JSVALUE64)
     JSValueRegs leftRegs = JSValueRegs(regT1);
@@ -906,9 +906,9 @@
 void JIT::emit_op_div(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpDiv>();
-    int result = bytecode.m_dst.offset();
-    int op1 = bytecode.m_lhs.offset();
-    int op2 = bytecode.m_rhs.offset();
+    VirtualRegister result = bytecode.m_dst;
+    VirtualRegister op1 = bytecode.m_lhs;
+    VirtualRegister op2 = bytecode.m_rhs;
 
 #if USE(JSVALUE64)
     JSValueRegs leftRegs = JSValueRegs(regT0);
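The hunks above all follow one pattern: raw `int` stack offsets become the typed `VirtualRegister`, so checks like `m_codeBlock->isConstantRegisterIndex(src)` collapse into `src.isConstant()`. A minimal sketch of that wrapper's semantics, assuming JSC's convention that locals use negative offsets, arguments non-negative ones, and constants live past a large pool threshold (the threshold value and member names here are illustrative, not the engine's exact definitions):

```cpp
#include <cassert>

namespace sketch {

// Assumption: illustrative stand-in for the real constant-pool boundary.
constexpr int firstConstantRegisterIndex = 0x40000000;

class VirtualRegister {
public:
    explicit VirtualRegister(int offset) : m_offset(offset) { }

    int offset() const { return m_offset; }
    // Constants are addressed past a large threshold, so a plain comparison
    // replaces the old CodeBlock::isConstantRegisterIndex(int) query.
    bool isConstant() const { return m_offset >= firstConstantRegisterIndex; }
    // Locals grow downward from the call frame; arguments sit above it.
    bool isLocal() const { return m_offset < 0; }
    bool isArgument() const { return m_offset >= 0 && !isConstant(); }

private:
    int m_offset;
};

} // namespace sketch
```

The type makes call sites like `emit_op_mod` self-documenting and lets the compiler reject mixing plain offsets with registers, which is the point of the migration.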
diff --git a/Source/JavaScriptCore/jit/JITCall.cpp b/Source/JavaScriptCore/jit/JITCall.cpp
index c3f8aa0..b2bfce6 100644
--- a/Source/JavaScriptCore/jit/JITCall.cpp
+++ b/Source/JavaScriptCore/jit/JITCall.cpp
@@ -51,7 +51,7 @@
 void JIT::emitPutCallResult(const Op& bytecode)
 {
     emitValueProfilingSite(bytecode.metadata(m_codeBlock));
-    emitPutVirtualRegister(bytecode.m_dst.offset());
+    emitPutVirtualRegister(bytecode.m_dst);
 }
 
 template<typename Op>
@@ -66,7 +66,7 @@
     int registerOffset = -static_cast<int>(bytecode.m_argv);
 
     if (Op::opcodeID == op_call && shouldEmitProfiling()) {
-        emitGetVirtualRegister(registerOffset + CallFrame::argumentOffsetIncludingThis(0), regT0);
+        emitGetVirtualRegister(VirtualRegister(registerOffset + CallFrame::argumentOffsetIncludingThis(0)), regT0);
         Jump done = branchIfNotCell(regT0);
         load32(Address(regT0, JSCell::structureIDOffset()), regT0);
         store32(regT0, metadata.m_callLinkInfo.m_arrayProfile.addressOfLastSeenStructureID());
@@ -85,9 +85,9 @@
 , void>
 JIT::compileSetupFrame(const Op& bytecode, CallLinkInfo* info)
 {
-    int thisValue = bytecode.m_thisValue.offset();
-    int arguments = bytecode.m_arguments.offset();
-    int firstFreeRegister = bytecode.m_firstFree.offset();
+    VirtualRegister thisValue = bytecode.m_thisValue;
+    VirtualRegister arguments = bytecode.m_arguments;
+    int firstFreeRegister = bytecode.m_firstFree.offset(); // FIXME: Why is this a virtual register if we never use it as one...
     int firstVarArgOffset = bytecode.m_firstVarArg;
 
     emitGetVirtualRegister(arguments, regT1);
@@ -189,7 +189,7 @@
     for (unsigned i = 0; i < bytecode.m_argc; ++i) {
         shuffleData.args[i] =
             ValueRecovery::displacedInJSStack(
-                virtualRegisterForArgument(i) - bytecode.m_argv,
+                virtualRegisterForArgumentIncludingThis(i) - bytecode.m_argv,
                 DataFormatJS);
     }
     shuffleData.callee =
@@ -206,7 +206,7 @@
 {
     OpcodeID opcodeID = Op::opcodeID;
     auto bytecode = instruction->as<Op>();
-    int callee = bytecode.m_callee.offset();
+    VirtualRegister callee = bytecode.m_callee;
 
     /* Caller always:
         - Updates callFrameRegister to callee callFrame.
diff --git a/Source/JavaScriptCore/jit/JITExceptions.cpp b/Source/JavaScriptCore/jit/JITExceptions.cpp
index 1610dce..922f1fb 100644
--- a/Source/JavaScriptCore/jit/JITExceptions.cpp
+++ b/Source/JavaScriptCore/jit/JITExceptions.cpp
@@ -50,7 +50,7 @@
     if (UNLIKELY(Options::breakOnThrow())) {
         CodeBlock* codeBlock = topJSCallFrame->codeBlock();
         dataLog("In call frame ", RawPointer(topJSCallFrame), " for code block ", codeBlock, "\n");
-        CRASH();
+        WTFBreakpointTrap();
     }
     
     if (auto* shadowChicken = vm.shadowChicken())
diff --git a/Source/JavaScriptCore/jit/JITInlines.h b/Source/JavaScriptCore/jit/JITInlines.h
index fce8bb7..9c11734 100644
--- a/Source/JavaScriptCore/jit/JITInlines.h
+++ b/Source/JavaScriptCore/jit/JITInlines.h
@@ -48,19 +48,20 @@
     return MacroAssembler::JumpList();
 }
 
-ALWAYS_INLINE bool JIT::isOperandConstantDouble(int src)
+ALWAYS_INLINE bool JIT::isOperandConstantDouble(VirtualRegister src)
 {
-    return m_codeBlock->isConstantRegisterIndex(src) && getConstantOperand(src).isDouble();
+    return src.isConstant() && getConstantOperand(src).isDouble();
 }
 
-ALWAYS_INLINE JSValue JIT::getConstantOperand(int src)
+ALWAYS_INLINE JSValue JIT::getConstantOperand(VirtualRegister src)
 {
-    ASSERT(m_codeBlock->isConstantRegisterIndex(src));
+    ASSERT(src.isConstant());
     return m_codeBlock->getConstant(src);
 }
 
-ALWAYS_INLINE void JIT::emitPutIntToCallFrameHeader(RegisterID from, int entry)
+ALWAYS_INLINE void JIT::emitPutIntToCallFrameHeader(RegisterID from, VirtualRegister entry)
 {
+    ASSERT(entry.isHeader());
 #if USE(JSVALUE32_64)
     store32(TrustedImm32(JSValue::Int32Tag), tagFor(entry));
     store32(from, payloadFor(entry));
@@ -143,7 +144,7 @@
     return call;
 }
 
-ALWAYS_INLINE MacroAssembler::Call JIT::appendCallWithExceptionCheckSetJSValueResult(const FunctionPtr<CFunctionPtrTag> function, int dst)
+ALWAYS_INLINE MacroAssembler::Call JIT::appendCallWithExceptionCheckSetJSValueResult(const FunctionPtr<CFunctionPtrTag> function, VirtualRegister dst)
 {
     MacroAssembler::Call call = appendCallWithExceptionCheck(function);
 #if USE(JSVALUE64)
@@ -155,7 +156,7 @@
 }
 
 template<typename Metadata>
-ALWAYS_INLINE MacroAssembler::Call JIT::appendCallWithExceptionCheckSetJSValueResultWithProfile(Metadata& metadata, const FunctionPtr<CFunctionPtrTag> function, int dst)
+ALWAYS_INLINE MacroAssembler::Call JIT::appendCallWithExceptionCheckSetJSValueResultWithProfile(Metadata& metadata, const FunctionPtr<CFunctionPtrTag> function, VirtualRegister dst)
 {
     MacroAssembler::Call call = appendCallWithExceptionCheck(function);
     emitValueProfilingSite(metadata);
@@ -167,9 +168,9 @@
     return call;
 }
 
-ALWAYS_INLINE void JIT::linkSlowCaseIfNotJSCell(Vector<SlowCaseEntry>::iterator& iter, int vReg)
+ALWAYS_INLINE void JIT::linkSlowCaseIfNotJSCell(Vector<SlowCaseEntry>::iterator& iter, VirtualRegister reg)
 {
-    if (!m_codeBlock->isKnownNotImmediate(vReg))
+    if (!m_codeBlock->isKnownNotImmediate(reg))
         linkSlowCase(iter);
 }
 
@@ -284,9 +285,9 @@
 #endif
 #endif
 
-ALWAYS_INLINE bool JIT::isOperandConstantChar(int src)
+ALWAYS_INLINE bool JIT::isOperandConstantChar(VirtualRegister src)
 {
-    return m_codeBlock->isConstantRegisterIndex(src) && getConstantOperand(src).isString() && asString(getConstantOperand(src).asCell())->length() == 1;
+    return src.isConstant() && getConstantOperand(src).isString() && asString(getConstantOperand(src).asCell())->length() == 1;
 }
 
 inline void JIT::emitValueProfilingSite(ValueProfile& valueProfile)
@@ -363,26 +364,26 @@
     return JITContiguous;
 }
 
-ALWAYS_INLINE int32_t JIT::getOperandConstantInt(int src)
+ALWAYS_INLINE int32_t JIT::getOperandConstantInt(VirtualRegister src)
 {
     return getConstantOperand(src).asInt32();
 }
 
-ALWAYS_INLINE double JIT::getOperandConstantDouble(int src)
+ALWAYS_INLINE double JIT::getOperandConstantDouble(VirtualRegister src)
 {
     return getConstantOperand(src).asDouble();
 }
 
-ALWAYS_INLINE void JIT::emitInitRegister(int dst)
+ALWAYS_INLINE void JIT::emitInitRegister(VirtualRegister dst)
 {
     storeTrustedValue(jsUndefined(), addressFor(dst));
 }
 
 #if USE(JSVALUE32_64)
 
-inline void JIT::emitLoadTag(int index, RegisterID tag)
+inline void JIT::emitLoadTag(VirtualRegister reg, RegisterID tag)
 {
-    if (m_codeBlock->isConstantRegisterIndex(index)) {
+    if (reg.isConstant()) {
-        move(Imm32(getConstantOperand(index).tag()), tag);
+        move(Imm32(getConstantOperand(reg).tag()), tag);
         return;
     }
@@ -390,9 +391,9 @@
-    load32(tagFor(index), tag);
+    load32(tagFor(reg), tag);
 }
 
-inline void JIT::emitLoadPayload(int index, RegisterID payload)
+inline void JIT::emitLoadPayload(VirtualRegister reg, RegisterID payload)
 {
-    if (m_codeBlock->isConstantRegisterIndex(index)) {
+    if (reg.isConstant()) {
-        move(Imm32(getConstantOperand(index).payload()), payload);
+        move(Imm32(getConstantOperand(reg).payload()), payload);
         return;
     }
@@ -406,136 +407,133 @@
     move(Imm32(v.tag()), tag);
 }
 
-ALWAYS_INLINE void JIT::emitGetVirtualRegister(int src, JSValueRegs dst)
+ALWAYS_INLINE void JIT::emitGet(VirtualRegister src, JSValueRegs dst)
 {
     emitLoad(src, dst.tagGPR(), dst.payloadGPR());
 }
 
-ALWAYS_INLINE void JIT::emitPutVirtualRegister(int dst, JSValueRegs from)
+ALWAYS_INLINE void JIT::emitPutVirtualRegister(VirtualRegister dst, JSValueRegs from)
 {
     emitStore(dst, from.tagGPR(), from.payloadGPR());
 }
 
-inline void JIT::emitLoad(int index, RegisterID tag, RegisterID payload, RegisterID base)
+inline void JIT::emitLoad(VirtualRegister reg, RegisterID tag, RegisterID payload, RegisterID base)
 {
     RELEASE_ASSERT(tag != payload);
 
     if (base == callFrameRegister) {
         RELEASE_ASSERT(payload != base);
-        emitLoadPayload(index, payload);
-        emitLoadTag(index, tag);
+        emitLoadPayload(reg, payload);
+        emitLoadTag(reg, tag);
         return;
     }
 
-    VirtualRegister target { index };
     if (payload == base) { // avoid stomping base
-        load32(tagFor(target, base), tag);
-        load32(payloadFor(target, base), payload);
+        load32(tagFor(reg, base), tag);
+        load32(payloadFor(reg, base), payload);
         return;
     }
 
-    load32(payloadFor(target, base), payload);
-    load32(tagFor(target, base), tag);
+    load32(payloadFor(reg, base), payload);
+    load32(tagFor(reg, base), tag);
 }
 
-inline void JIT::emitLoad2(int index1, RegisterID tag1, RegisterID payload1, int index2, RegisterID tag2, RegisterID payload2)
+inline void JIT::emitLoad2(VirtualRegister reg1, RegisterID tag1, RegisterID payload1, VirtualRegister reg2, RegisterID tag2, RegisterID payload2)
 {
-    emitLoad(index2, tag2, payload2);
-    emitLoad(index1, tag1, payload1);
+    emitLoad(reg2, tag2, payload2);
+    emitLoad(reg1, tag1, payload1);
 }
 
-inline void JIT::emitLoadDouble(int index, FPRegisterID value)
+inline void JIT::emitLoadDouble(VirtualRegister reg, FPRegisterID value)
 {
-    if (m_codeBlock->isConstantRegisterIndex(index)) {
-        WriteBarrier<Unknown>& inConstantPool = m_codeBlock->constantRegister(index);
+    if (reg.isConstant()) {
+        WriteBarrier<Unknown>& inConstantPool = m_codeBlock->constantRegister(reg);
         loadDouble(TrustedImmPtr(&inConstantPool), value);
     } else
-        loadDouble(addressFor(index), value);
+        loadDouble(addressFor(reg), value);
 }
 
-inline void JIT::emitLoadInt32ToDouble(int index, FPRegisterID value)
+inline void JIT::emitLoadInt32ToDouble(VirtualRegister reg, FPRegisterID value)
 {
-    if (m_codeBlock->isConstantRegisterIndex(index)) {
-        WriteBarrier<Unknown>& inConstantPool = m_codeBlock->constantRegister(index);
+    if (reg.isConstant()) {
+        WriteBarrier<Unknown>& inConstantPool = m_codeBlock->constantRegister(reg);
         char* bytePointer = reinterpret_cast<char*>(&inConstantPool);
         convertInt32ToDouble(AbsoluteAddress(bytePointer + OBJECT_OFFSETOF(JSValue, u.asBits.payload)), value);
     } else
-        convertInt32ToDouble(payloadFor(index), value);
+        convertInt32ToDouble(payloadFor(reg), value);
 }
 
-inline void JIT::emitStore(int index, RegisterID tag, RegisterID payload, RegisterID base)
+inline void JIT::emitStore(VirtualRegister reg, RegisterID tag, RegisterID payload, RegisterID base)
 {
-    VirtualRegister target { index };
-    store32(payload, payloadFor(target, base));
-    store32(tag, tagFor(target, base));
+    store32(payload, payloadFor(reg, base));
+    store32(tag, tagFor(reg, base));
 }
 
-inline void JIT::emitStoreInt32(int index, RegisterID payload, bool indexIsInt32)
+inline void JIT::emitStoreInt32(VirtualRegister reg, RegisterID payload, bool indexIsInt32)
 {
-    store32(payload, payloadFor(index));
+    store32(payload, payloadFor(reg));
     if (!indexIsInt32)
-        store32(TrustedImm32(JSValue::Int32Tag), tagFor(index));
+        store32(TrustedImm32(JSValue::Int32Tag), tagFor(reg));
 }
 
-inline void JIT::emitStoreInt32(int index, TrustedImm32 payload, bool indexIsInt32)
+inline void JIT::emitStoreInt32(VirtualRegister reg, TrustedImm32 payload, bool indexIsInt32)
 {
-    store32(payload, payloadFor(index));
+    store32(payload, payloadFor(reg));
     if (!indexIsInt32)
-        store32(TrustedImm32(JSValue::Int32Tag), tagFor(index));
+        store32(TrustedImm32(JSValue::Int32Tag), tagFor(reg));
 }
 
-inline void JIT::emitStoreCell(int index, RegisterID payload, bool indexIsCell)
+inline void JIT::emitStoreCell(VirtualRegister reg, RegisterID payload, bool indexIsCell)
 {
-    store32(payload, payloadFor(index));
+    store32(payload, payloadFor(reg));
     if (!indexIsCell)
-        store32(TrustedImm32(JSValue::CellTag), tagFor(index));
+        store32(TrustedImm32(JSValue::CellTag), tagFor(reg));
 }
 
-inline void JIT::emitStoreBool(int index, RegisterID payload, bool indexIsBool)
+inline void JIT::emitStoreBool(VirtualRegister reg, RegisterID payload, bool indexIsBool)
 {
-    store32(payload, payloadFor(index));
+    store32(payload, payloadFor(reg));
     if (!indexIsBool)
-        store32(TrustedImm32(JSValue::BooleanTag), tagFor(index));
+        store32(TrustedImm32(JSValue::BooleanTag), tagFor(reg));
 }
 
-inline void JIT::emitStoreDouble(int index, FPRegisterID value)
+inline void JIT::emitStoreDouble(VirtualRegister reg, FPRegisterID value)
 {
-    storeDouble(value, addressFor(index));
+    storeDouble(value, addressFor(reg));
 }
 
-inline void JIT::emitStore(int index, const JSValue constant, RegisterID base)
+inline void JIT::emitStore(VirtualRegister reg, const JSValue constant, RegisterID base)
 {
-    VirtualRegister target { index };
-    store32(Imm32(constant.payload()), payloadFor(target, base));
-    store32(Imm32(constant.tag()), tagFor(target, base));
+    store32(Imm32(constant.payload()), payloadFor(reg, base));
+    store32(Imm32(constant.tag()), tagFor(reg, base));
 }
 
-inline void JIT::emitJumpSlowCaseIfNotJSCell(int virtualRegisterIndex)
+inline void JIT::emitJumpSlowCaseIfNotJSCell(VirtualRegister reg)
 {
-    if (!m_codeBlock->isKnownNotImmediate(virtualRegisterIndex)) {
+    if (!m_codeBlock->isKnownNotImmediate(reg)) {
-        if (m_codeBlock->isConstantRegisterIndex(virtualRegisterIndex))
+        if (reg.isConstant())
             addSlowCase(jump());
         else
-            addSlowCase(emitJumpIfNotJSCell(virtualRegisterIndex));
+            addSlowCase(emitJumpIfNotJSCell(reg));
     }
 }
 
-inline void JIT::emitJumpSlowCaseIfNotJSCell(int virtualRegisterIndex, RegisterID tag)
+inline void JIT::emitJumpSlowCaseIfNotJSCell(VirtualRegister reg, RegisterID tag)
 {
-    if (!m_codeBlock->isKnownNotImmediate(virtualRegisterIndex)) {
+    if (!m_codeBlock->isKnownNotImmediate(reg)) {
-        if (m_codeBlock->isConstantRegisterIndex(virtualRegisterIndex))
+        if (reg.isConstant())
             addSlowCase(jump());
         else
             addSlowCase(branchIfNotCell(tag));
     }
 }
 
-ALWAYS_INLINE bool JIT::isOperandConstantInt(int src)
+ALWAYS_INLINE bool JIT::isOperandConstantInt(VirtualRegister src)
 {
-    return m_codeBlock->isConstantRegisterIndex(src) && getConstantOperand(src).isInt32();
+    return src.isConstant() && getConstantOperand(src).isInt32();
 }
 
-ALWAYS_INLINE bool JIT::getOperandConstantInt(int op1, int op2, int& op, int32_t& constant)
+ALWAYS_INLINE bool JIT::getOperandConstantInt(VirtualRegister op1, VirtualRegister op2, int& op, int32_t& constant)
 {
     if (isOperandConstantInt(op1)) {
         constant = getConstantOperand(op1).asInt32();
@@ -555,11 +553,11 @@
 #else // USE(JSVALUE32_64)
 
 // get arg puts an arg from the SF register array into a h/w register
-ALWAYS_INLINE void JIT::emitGetVirtualRegister(int src, RegisterID dst)
+ALWAYS_INLINE void JIT::emitGetVirtualRegister(VirtualRegister src, RegisterID dst)
 {
     ASSERT(m_bytecodeIndex); // This method should only be called during hot/cold path generation, so that m_bytecodeIndex is set.
 
-    if (m_codeBlock->isConstantRegisterIndex(src)) {
+    if (src.isConstant()) {
         JSValue value = m_codeBlock->getConstant(src);
         if (!value.isNumber())
             move(TrustedImm64(JSValue::encode(value)), dst);
@@ -571,45 +569,30 @@
     load64(addressFor(src), dst);
 }
 
-ALWAYS_INLINE void JIT::emitGetVirtualRegister(int src, JSValueRegs dst)
+ALWAYS_INLINE void JIT::emitGetVirtualRegister(VirtualRegister src, JSValueRegs dst)
 {
     emitGetVirtualRegister(src, dst.payloadGPR());
 }
 
-ALWAYS_INLINE void JIT::emitGetVirtualRegister(VirtualRegister src, RegisterID dst)
-{
-    emitGetVirtualRegister(src.offset(), dst);
-}
-
-ALWAYS_INLINE void JIT::emitGetVirtualRegisters(int src1, RegisterID dst1, int src2, RegisterID dst2)
+ALWAYS_INLINE void JIT::emitGetVirtualRegisters(VirtualRegister src1, RegisterID dst1, VirtualRegister src2, RegisterID dst2)
 {
     emitGetVirtualRegister(src1, dst1);
     emitGetVirtualRegister(src2, dst2);
 }
 
-ALWAYS_INLINE void JIT::emitGetVirtualRegisters(VirtualRegister src1, RegisterID dst1, VirtualRegister src2, RegisterID dst2)
+ALWAYS_INLINE bool JIT::isOperandConstantInt(VirtualRegister src)
 {
-    emitGetVirtualRegisters(src1.offset(), dst1, src2.offset(), dst2);
-}
-
-ALWAYS_INLINE bool JIT::isOperandConstantInt(int src)
-{
-    return m_codeBlock->isConstantRegisterIndex(src) && getConstantOperand(src).isInt32();
-}
-
-ALWAYS_INLINE void JIT::emitPutVirtualRegister(int dst, RegisterID from)
-{
-    store64(from, addressFor(dst));
-}
-
-ALWAYS_INLINE void JIT::emitPutVirtualRegister(int dst, JSValueRegs from)
-{
-    emitPutVirtualRegister(dst, from.payloadGPR());
+    return src.isConstant() && getConstantOperand(src).isInt32();
 }
 
 ALWAYS_INLINE void JIT::emitPutVirtualRegister(VirtualRegister dst, RegisterID from)
 {
-    emitPutVirtualRegister(dst.offset(), from);
+    store64(from, addressFor(dst));
+}
+
+ALWAYS_INLINE void JIT::emitPutVirtualRegister(VirtualRegister dst, JSValueRegs from)
+{
+    emitPutVirtualRegister(dst, from.payloadGPR());
 }
 
 ALWAYS_INLINE JIT::Jump JIT::emitJumpIfBothJSCells(RegisterID reg1, RegisterID reg2, RegisterID scratch)
@@ -629,28 +612,28 @@
     addSlowCase(branchIfNotCell(reg));
 }
 
-ALWAYS_INLINE void JIT::emitJumpSlowCaseIfNotJSCell(RegisterID reg, int vReg)
+ALWAYS_INLINE void JIT::emitJumpSlowCaseIfNotJSCell(RegisterID reg, VirtualRegister vReg)
 {
     if (!m_codeBlock->isKnownNotImmediate(vReg))
         emitJumpSlowCaseIfNotJSCell(reg);
 }
 
-inline void JIT::emitLoadDouble(int index, FPRegisterID value)
+inline void JIT::emitLoadDouble(VirtualRegister reg, FPRegisterID value)
 {
-    if (m_codeBlock->isConstantRegisterIndex(index)) {
-        WriteBarrier<Unknown>& inConstantPool = m_codeBlock->constantRegister(index);
+    if (reg.isConstant()) {
+        WriteBarrier<Unknown>& inConstantPool = m_codeBlock->constantRegister(reg);
         loadDouble(TrustedImmPtr(&inConstantPool), value);
     } else
-        loadDouble(addressFor(index), value);
+        loadDouble(addressFor(reg), value);
 }
 
-inline void JIT::emitLoadInt32ToDouble(int index, FPRegisterID value)
+inline void JIT::emitLoadInt32ToDouble(VirtualRegister reg, FPRegisterID value)
 {
-    if (m_codeBlock->isConstantRegisterIndex(index)) {
-        ASSERT(isOperandConstantInt(index));
-        convertInt32ToDouble(Imm32(getConstantOperand(index).asInt32()), value);
+    if (reg.isConstant()) {
+        ASSERT(isOperandConstantInt(reg));
+        convertInt32ToDouble(Imm32(getConstantOperand(reg).asInt32()), value);
     } else
-        convertInt32ToDouble(addressFor(index), value);
+        convertInt32ToDouble(addressFor(reg), value);
 }
 
 ALWAYS_INLINE JIT::PatchableJump JIT::emitPatchableJumpIfNotInt(RegisterID reg)
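The `emitLoad` rewrite above keeps its "avoid stomping base" branch: when the payload destination register is the same machine register as the base, the tag must be loaded first, because the payload write destroys the slot index the second load needs. A simulation of that hazard with plain ints standing in for registers (the function name is hypothetical; only the ordering logic mirrors the JIT code):

```cpp
#include <array>
#include <cassert>

struct Slot { int tag; int payload; };
using Frame = std::array<Slot, 4>;

// base, tag and payload model machine registers; passing by reference makes
// aliasing observable, as when JSValueRegs assigns payload to the base GPR.
void emitLoadSketch(const Frame& frame, int& base, int& tag, int& payload)
{
    if (&payload == &base) {
        // payload aliases base: read the tag while base still holds the
        // slot index; the payload write may then clobber base safely.
        tag = frame[base].tag;
        payload = frame[base].payload;
        return;
    }
    payload = frame[base].payload;
    tag = frame[base].tag;
}
```

Loading in the wrong order under aliasing would fetch the tag from whatever slot the freshly loaded payload happens to index, which is exactly the bug the branch prevents.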
diff --git a/Source/JavaScriptCore/jit/JITOpcodes.cpp b/Source/JavaScriptCore/jit/JITOpcodes.cpp
index 88268d9..7f358c1 100644
--- a/Source/JavaScriptCore/jit/JITOpcodes.cpp
+++ b/Source/JavaScriptCore/jit/JITOpcodes.cpp
@@ -56,10 +56,10 @@
 void JIT::emit_op_mov(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpMov>();
-    int dst = bytecode.m_dst.offset();
-    int src = bytecode.m_src.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister src = bytecode.m_src;
 
-    if (m_codeBlock->isConstantRegisterIndex(src)) {
+    if (src.isConstant()) {
         JSValue value = m_codeBlock->getConstant(src);
         if (!value.isNumber())
             store64(TrustedImm64(JSValue::encode(value)), addressFor(dst));
@@ -77,7 +77,7 @@
 {
     auto bytecode = currentInstruction->as<OpEnd>();
     RELEASE_ASSERT(returnValueGPR != callFrameRegister);
-    emitGetVirtualRegister(bytecode.m_value.offset(), returnValueGPR);
+    emitGetVirtualRegister(bytecode.m_value, returnValueGPR);
     emitRestoreCalleeSaves();
     emitFunctionEpilogue();
     ret();
@@ -110,7 +110,7 @@
         emitAllocateJSObject(resultReg, JITAllocator::constant(allocator), allocatorReg, TrustedImmPtr(structure), butterfly, scratchReg, slowCases);
         emitInitializeInlineStorage(resultReg, structure->inlineCapacity());
         addSlowCase(slowCases);
-        emitPutVirtualRegister(bytecode.m_dst.offset());
+        emitPutVirtualRegister(bytecode.m_dst);
     }
 }
 
@@ -120,7 +120,7 @@
 
     auto bytecode = currentInstruction->as<OpNewObject>();
     auto& metadata = bytecode.metadata(m_codeBlock);
-    int dst = bytecode.m_dst.offset();
+    VirtualRegister dst = bytecode.m_dst;
     Structure* structure = metadata.m_objectAllocationProfile.structure();
     callOperation(operationNewObject, TrustedImmPtr(&vm()), structure);
     emitStoreCell(dst, returnValueGPR);
@@ -129,9 +129,9 @@
 void JIT::emit_op_overrides_has_instance(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpOverridesHasInstance>();
-    int dst = bytecode.m_dst.offset();
-    int constructor = bytecode.m_constructor.offset();
-    int hasInstanceValue = bytecode.m_hasInstanceValue.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister constructor = bytecode.m_constructor;
+    VirtualRegister hasInstanceValue = bytecode.m_hasInstanceValue;
 
     emitGetVirtualRegister(hasInstanceValue, regT0);
 
@@ -155,9 +155,9 @@
 void JIT::emit_op_instanceof(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpInstanceof>();
-    int dst = bytecode.m_dst.offset();
-    int value = bytecode.m_value.offset();
-    int proto = bytecode.m_prototype.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister value = bytecode.m_value;
+    VirtualRegister proto = bytecode.m_prototype;
 
     // Load the operands (baseVal, proto, and value respectively) into registers.
     // We use regT0 for baseVal since we will be done with this first, and we can then use it for the result.
@@ -186,7 +186,7 @@
     linkAllSlowCases(iter);
     
     auto bytecode = currentInstruction->as<OpInstanceof>();
-    int resultVReg = bytecode.m_dst.offset();
+    VirtualRegister resultVReg = bytecode.m_dst;
     
     JITInstanceOfGenerator& gen = m_instanceOfs[m_instanceOfIndex++];
     
@@ -204,8 +204,8 @@
 void JIT::emit_op_is_empty(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpIsEmpty>();
-    int dst = bytecode.m_dst.offset();
-    int value = bytecode.m_operand.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister value = bytecode.m_operand;
 
     emitGetVirtualRegister(value, regT0);
     compare64(Equal, regT0, TrustedImm32(JSValue::encode(JSValue())), regT0);
@@ -217,8 +217,8 @@
 void JIT::emit_op_is_undefined(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpIsUndefined>();
-    int dst = bytecode.m_dst.offset();
-    int value = bytecode.m_operand.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister value = bytecode.m_operand;
     
     emitGetVirtualRegister(value, regT0);
     Jump isCell = branchIfCell(regT0);
@@ -246,8 +246,8 @@
 void JIT::emit_op_is_undefined_or_null(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpIsUndefinedOrNull>();
-    int dst = bytecode.m_dst.offset();
-    int value = bytecode.m_operand.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister value = bytecode.m_operand;
 
     emitGetVirtualRegister(value, regT0);
 
@@ -261,8 +261,8 @@
 void JIT::emit_op_is_boolean(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpIsBoolean>();
-    int dst = bytecode.m_dst.offset();
-    int value = bytecode.m_operand.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister value = bytecode.m_operand;
     
     emitGetVirtualRegister(value, regT0);
     xor64(TrustedImm32(JSValue::ValueFalse), regT0);
@@ -274,8 +274,8 @@
 void JIT::emit_op_is_number(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpIsNumber>();
-    int dst = bytecode.m_dst.offset();
-    int value = bytecode.m_operand.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister value = bytecode.m_operand;
     
     emitGetVirtualRegister(value, regT0);
     test64(NonZero, regT0, numberTagRegister, regT0);
@@ -286,8 +286,8 @@
 void JIT::emit_op_is_cell_with_type(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpIsCellWithType>();
-    int dst = bytecode.m_dst.offset();
-    int value = bytecode.m_operand.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister value = bytecode.m_operand;
     int type = bytecode.m_type;
 
     emitGetVirtualRegister(value, regT0);
@@ -307,8 +307,8 @@
 void JIT::emit_op_is_object(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpIsObject>();
-    int dst = bytecode.m_dst.offset();
-    int value = bytecode.m_operand.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister value = bytecode.m_operand;
 
     emitGetVirtualRegister(value, regT0);
     Jump isNotCell = branchIfNotCell(regT0);
@@ -332,7 +332,7 @@
 
     // Return the result in %eax.
     auto bytecode = currentInstruction->as<OpRet>();
-    emitGetVirtualRegister(bytecode.m_value.offset(), returnValueGPR);
+    emitGetVirtualRegister(bytecode.m_value, returnValueGPR);
 
     checkStackPointerAlignment();
     emitRestoreCalleeSaves();
@@ -343,8 +343,8 @@
 void JIT::emit_op_to_primitive(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpToPrimitive>();
-    int dst = bytecode.m_dst.offset();
-    int src = bytecode.m_src.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister src = bytecode.m_src;
 
     emitGetVirtualRegister(src, regT0);
     
@@ -377,15 +377,15 @@
 void JIT::emit_op_set_function_name(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpSetFunctionName>();
-    emitGetVirtualRegister(bytecode.m_function.offset(), regT0);
-    emitGetVirtualRegister(bytecode.m_name.offset(), regT1);
+    emitGetVirtualRegister(bytecode.m_function, regT0);
+    emitGetVirtualRegister(bytecode.m_name, regT1);
     callOperation(operationSetFunctionName, TrustedImmPtr(m_codeBlock->globalObject()), regT0, regT1);
 }
 
 void JIT::emit_op_not(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpNot>();
-    emitGetVirtualRegister(bytecode.m_operand.offset(), regT0);
+    emitGetVirtualRegister(bytecode.m_operand, regT0);
 
     // Invert against JSValue(false); if the value was tagged as a boolean, then all bits will be
     // clear other than the low bit (which will be 0 or 1 for false or true inputs respectively).
@@ -394,7 +394,7 @@
     addSlowCase(branchTestPtr(NonZero, regT0, TrustedImm32(static_cast<int32_t>(~1))));
     xor64(TrustedImm32(JSValue::ValueTrue), regT0);
 
-    emitPutVirtualRegister(bytecode.m_dst.offset());
+    emitPutVirtualRegister(bytecode.m_dst);
 }
 
 void JIT::emit_op_jfalse(const Instruction* currentInstruction)
@@ -407,14 +407,14 @@
     GPRReg scratch2 = regT2;
     bool shouldCheckMasqueradesAsUndefined = true;
 
-    emitGetVirtualRegister(bytecode.m_condition.offset(), value);
+    emitGetVirtualRegister(bytecode.m_condition, value);
     addJump(branchIfFalsey(vm(), JSValueRegs(value), scratch1, scratch2, fpRegT0, fpRegT1, shouldCheckMasqueradesAsUndefined, m_codeBlock->globalObject()), target);
 }
 
 void JIT::emit_op_jeq_null(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpJeqNull>();
-    int src = bytecode.m_value.offset();
+    VirtualRegister src = bytecode.m_value;
     unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
 
     emitGetVirtualRegister(src, regT0);
@@ -438,7 +438,7 @@
 void JIT::emit_op_jneq_null(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpJneqNull>();
-    int src = bytecode.m_value.offset();
+    VirtualRegister src = bytecode.m_value;
     unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
 
     emitGetVirtualRegister(src, regT0);
@@ -462,7 +462,7 @@
 void JIT::emit_op_jundefined_or_null(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpJundefinedOrNull>();
-    int value = bytecode.m_value.offset();
+    VirtualRegister value = bytecode.m_value;
     unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
 
     emitGetVirtualRegister(value, regT0);
@@ -474,7 +474,7 @@
 void JIT::emit_op_jnundefined_or_null(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpJnundefinedOrNull>();
-    int value = bytecode.m_value.offset();
+    VirtualRegister value = bytecode.m_value;
     unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
 
     emitGetVirtualRegister(value, regT0);
@@ -487,8 +487,8 @@
 {
     auto bytecode = currentInstruction->as<OpJneqPtr>();
     auto& metadata = bytecode.metadata(m_codeBlock);
-    int src = bytecode.m_value.offset();
-    JSValue specialPointer = getConstantOperand(bytecode.m_specialPointer.offset());
+    VirtualRegister src = bytecode.m_value;
+    JSValue specialPointer = getConstantOperand(bytecode.m_specialPointer);
     ASSERT(specialPointer.isCell());
     unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
     
@@ -502,18 +502,18 @@
 void JIT::emit_op_eq(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpEq>();
-    emitGetVirtualRegisters(bytecode.m_lhs.offset(), regT0, bytecode.m_rhs.offset(), regT1);
+    emitGetVirtualRegisters(bytecode.m_lhs, regT0, bytecode.m_rhs, regT1);
     emitJumpSlowCaseIfNotInt(regT0, regT1, regT2);
     compare32(Equal, regT1, regT0, regT0);
     boxBoolean(regT0, JSValueRegs { regT0 });
-    emitPutVirtualRegister(bytecode.m_dst.offset());
+    emitPutVirtualRegister(bytecode.m_dst);
 }
 
 void JIT::emit_op_jeq(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpJeq>();
     unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
-    emitGetVirtualRegisters(bytecode.m_lhs.offset(), regT0, bytecode.m_rhs.offset(), regT1);
+    emitGetVirtualRegisters(bytecode.m_lhs, regT0, bytecode.m_rhs, regT1);
     emitJumpSlowCaseIfNotInt(regT0, regT1, regT2);
     addJump(branch32(Equal, regT0, regT1), target);
 }
@@ -527,26 +527,26 @@
     GPRReg scratch1 = regT1;
     GPRReg scratch2 = regT2;
     bool shouldCheckMasqueradesAsUndefined = true;
-    emitGetVirtualRegister(bytecode.m_condition.offset(), value);
+    emitGetVirtualRegister(bytecode.m_condition, value);
     addJump(branchIfTruthy(vm(), JSValueRegs(value), scratch1, scratch2, fpRegT0, fpRegT1, shouldCheckMasqueradesAsUndefined, m_codeBlock->globalObject()), target);
 }
 
 void JIT::emit_op_neq(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpNeq>();
-    emitGetVirtualRegisters(bytecode.m_lhs.offset(), regT0, bytecode.m_rhs.offset(), regT1);
+    emitGetVirtualRegisters(bytecode.m_lhs, regT0, bytecode.m_rhs, regT1);
     emitJumpSlowCaseIfNotInt(regT0, regT1, regT2);
     compare32(NotEqual, regT1, regT0, regT0);
     boxBoolean(regT0, JSValueRegs { regT0 });
 
-    emitPutVirtualRegister(bytecode.m_dst.offset());
+    emitPutVirtualRegister(bytecode.m_dst);
 }
 
 void JIT::emit_op_jneq(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpJneq>();
     unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
-    emitGetVirtualRegisters(bytecode.m_lhs.offset(), regT0, bytecode.m_rhs.offset(), regT1);
+    emitGetVirtualRegisters(bytecode.m_lhs, regT0, bytecode.m_rhs, regT1);
     emitJumpSlowCaseIfNotInt(regT0, regT1, regT2);
     addJump(branch32(NotEqual, regT0, regT1), target);
 }
@@ -556,7 +556,7 @@
     auto bytecode = currentInstruction->as<OpThrow>();
     ASSERT(regT0 == returnValueGPR);
     copyCalleeSavesToEntryFrameCalleeSavesBuffer(vm().topEntryFrame);
-    emitGetVirtualRegister(bytecode.m_value.offset(), regT0);
+    emitGetVirtualRegister(bytecode.m_value, regT0);
     callOperationNoExceptionCheck(operationThrow, TrustedImmPtr(m_codeBlock->globalObject()), regT0);
     jumpToExceptionHandler(vm());
 }
@@ -565,9 +565,9 @@
 void JIT::compileOpStrictEq(const Instruction* currentInstruction, CompileOpStrictEqType type)
 {
     auto bytecode = currentInstruction->as<Op>();
-    int dst = bytecode.m_dst.offset();
-    int src1 = bytecode.m_lhs.offset();
-    int src2 = bytecode.m_rhs.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister src1 = bytecode.m_lhs;
+    VirtualRegister src2 = bytecode.m_rhs;
 
     emitGetVirtualRegisters(src1, regT0, src2, regT1);
     
@@ -609,8 +609,8 @@
 {
     auto bytecode = currentInstruction->as<Op>();
     int target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
-    int src1 = bytecode.m_lhs.offset();
-    int src2 = bytecode.m_rhs.offset();
+    VirtualRegister src1 = bytecode.m_lhs;
+    VirtualRegister src2 = bytecode.m_rhs;
 
     emitGetVirtualRegisters(src1, regT0, src2, regT1);
 
@@ -667,8 +667,8 @@
 void JIT::emit_op_to_number(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpToNumber>();
-    int dstVReg = bytecode.m_dst.offset();
-    int srcVReg = bytecode.m_operand.offset();
+    VirtualRegister dstVReg = bytecode.m_dst;
+    VirtualRegister srcVReg = bytecode.m_operand;
     emitGetVirtualRegister(srcVReg, regT0);
     
     addSlowCase(branchIfNotNumber(regT0));
@@ -681,8 +681,8 @@
 void JIT::emit_op_to_numeric(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpToNumeric>();
-    int dstVReg = bytecode.m_dst.offset();
-    int srcVReg = bytecode.m_operand.offset();
+    VirtualRegister dstVReg = bytecode.m_dst;
+    VirtualRegister srcVReg = bytecode.m_operand;
     emitGetVirtualRegister(srcVReg, regT0);
 
     Jump isNotCell = branchIfNotCell(regT0);
@@ -701,20 +701,20 @@
 void JIT::emit_op_to_string(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpToString>();
-    int srcVReg = bytecode.m_operand.offset();
+    VirtualRegister srcVReg = bytecode.m_operand;
     emitGetVirtualRegister(srcVReg, regT0);
 
     addSlowCase(branchIfNotCell(regT0));
     addSlowCase(branchIfNotString(regT0));
 
-    emitPutVirtualRegister(bytecode.m_dst.offset());
+    emitPutVirtualRegister(bytecode.m_dst);
 }
 
 void JIT::emit_op_to_object(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpToObject>();
-    int dstVReg = bytecode.m_dst.offset();
-    int srcVReg = bytecode.m_operand.offset();
+    VirtualRegister dstVReg = bytecode.m_dst;
+    VirtualRegister srcVReg = bytecode.m_operand;
     emitGetVirtualRegister(srcVReg, regT0);
 
     addSlowCase(branchIfNotCell(regT0));
@@ -745,10 +745,10 @@
     move(TrustedImmPtr(m_vm), regT3);
     load64(Address(regT3, VM::exceptionOffset()), regT0);
     store64(TrustedImm64(JSValue::encode(JSValue())), Address(regT3, VM::exceptionOffset()));
-    emitPutVirtualRegister(bytecode.m_exception.offset());
+    emitPutVirtualRegister(bytecode.m_exception);
 
     load64(Address(regT0, Exception::valueOffset()), regT0);
-    emitPutVirtualRegister(bytecode.m_thrownValue.offset());
+    emitPutVirtualRegister(bytecode.m_thrownValue);
 
 #if ENABLE(DFG_JIT)
     // FIXME: consider inline caching the process of doing OSR entry, including
@@ -756,7 +756,7 @@
     // https://bugs.webkit.org/show_bug.cgi?id=175598
 
     auto& metadata = bytecode.metadata(m_codeBlock);
-    ValueProfileAndOperandBuffer* buffer = metadata.m_buffer;
+    ValueProfileAndVirtualRegisterBuffer* buffer = metadata.m_buffer;
     if (buffer || !shouldEmitProfiling())
         callOperation(operationTryOSREnterAtCatch, &vm(), m_bytecodeIndex.asBits());
     else
@@ -766,7 +766,7 @@
     farJump(returnValueGPR, ExceptionHandlerPtrTag);
     skipOSREntry.link(this);
     if (buffer && shouldEmitProfiling()) {
-        buffer->forEach([&] (ValueProfileAndOperand& profile) {
+        buffer->forEach([&] (ValueProfileAndVirtualRegister& profile) {
             JSValueRegs regs(regT0);
             emitGetVirtualRegister(profile.m_operand, regs);
             emitValueProfilingSite(static_cast<ValueProfile&>(profile));
@@ -783,10 +783,10 @@
 void JIT::emit_op_get_parent_scope(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpGetParentScope>();
-    int currentScope = bytecode.m_scope.offset();
+    VirtualRegister currentScope = bytecode.m_scope;
     emitGetVirtualRegister(currentScope, regT0);
     loadPtr(Address(regT0, JSScope::offsetOfNext()), regT0);
-    emitStoreCell(bytecode.m_dst.offset(), regT0);
+    emitStoreCell(bytecode.m_dst, regT0);
 }
 
 void JIT::emit_op_switch_imm(const Instruction* currentInstruction)
@@ -794,7 +794,7 @@
     auto bytecode = currentInstruction->as<OpSwitchImm>();
     size_t tableIndex = bytecode.m_tableIndex;
     unsigned defaultOffset = jumpTarget(currentInstruction, bytecode.m_defaultOffset);
-    unsigned scrutinee = bytecode.m_scrutinee.offset();
+    VirtualRegister scrutinee = bytecode.m_scrutinee;
 
     // create jump table for switch destinations, track this switch statement.
     SimpleJumpTable* jumpTable = &m_codeBlock->switchJumpTable(tableIndex);
@@ -811,7 +811,7 @@
     auto bytecode = currentInstruction->as<OpSwitchChar>();
     size_t tableIndex = bytecode.m_tableIndex;
     unsigned defaultOffset = jumpTarget(currentInstruction, bytecode.m_defaultOffset);
-    unsigned scrutinee = bytecode.m_scrutinee.offset();
+    VirtualRegister scrutinee = bytecode.m_scrutinee;
 
     // create jump table for switch destinations, track this switch statement.
     SimpleJumpTable* jumpTable = &m_codeBlock->switchJumpTable(tableIndex);
@@ -828,7 +828,7 @@
     auto bytecode = currentInstruction->as<OpSwitchString>();
     size_t tableIndex = bytecode.m_tableIndex;
     unsigned defaultOffset = jumpTarget(currentInstruction, bytecode.m_defaultOffset);
-    unsigned scrutinee = bytecode.m_scrutinee.offset();
+    VirtualRegister scrutinee = bytecode.m_scrutinee;
 
     // create jump table for switch destinations, track this switch statement.
     StringJumpTable* jumpTable = &m_codeBlock->stringSwitchJumpTable(tableIndex);
@@ -851,8 +851,8 @@
 void JIT::emit_op_eq_null(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpEqNull>();
-    int dst = bytecode.m_dst.offset();
-    int src1 = bytecode.m_operand.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister src1 = bytecode.m_operand;
 
     emitGetVirtualRegister(src1, regT0);
     Jump isImmediate = branchIfNotCell(regT0);
@@ -884,8 +884,8 @@
 void JIT::emit_op_neq_null(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpNeqNull>();
-    int dst = bytecode.m_dst.offset();
-    int src1 = bytecode.m_operand.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister src1 = bytecode.m_operand;
 
     emitGetVirtualRegister(src1, regT0);
     Jump isImmediate = branchIfNotCell(regT0);
@@ -920,7 +920,7 @@
     // object lifetime and increasing GC pressure.
     size_t count = m_codeBlock->numVars();
     for (size_t j = CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters(); j < count; ++j)
-        emitInitRegister(virtualRegisterForLocal(j).offset());
+        emitInitRegister(virtualRegisterForLocal(j));
 
     emitWriteBarrier(m_codeBlock);
 
@@ -930,7 +930,7 @@
 void JIT::emit_op_get_scope(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpGetScope>();
-    int dst = bytecode.m_dst.offset();
+    VirtualRegister dst = bytecode.m_dst;
     emitGetFromCallFrameHeaderPtr(CallFrameSlot::callee, regT0);
     loadPtr(Address(regT0, JSFunction::offsetOfScopeChain()), regT0);
     emitStoreCell(dst, regT0);
@@ -941,7 +941,7 @@
     auto bytecode = currentInstruction->as<OpToThis>();
     auto& metadata = bytecode.metadata(m_codeBlock);
     StructureID* cachedStructureID = &metadata.m_cachedStructureID;
-    emitGetVirtualRegister(bytecode.m_srcDst.offset(), regT1);
+    emitGetVirtualRegister(bytecode.m_srcDst, regT1);
 
     emitJumpSlowCaseIfNotJSCell(regT1);
 
@@ -954,7 +954,7 @@
 {
     auto bytecode = currentInstruction->as<OpCreateThis>();
     auto& metadata = bytecode.metadata(m_codeBlock);
-    int callee = bytecode.m_callee.offset();
+    VirtualRegister callee = bytecode.m_callee;
     WriteBarrierBase<JSCell>* cachedFunction = &metadata.m_cachedCallee;
     RegisterID calleeReg = regT0;
     RegisterID rareDataReg = regT4;
@@ -982,13 +982,13 @@
     load8(Address(structureReg, Structure::inlineCapacityOffset()), scratchReg);
     emitInitializeInlineStorage(resultReg, scratchReg);
     addSlowCase(slowCases);
-    emitPutVirtualRegister(bytecode.m_dst.offset());
+    emitPutVirtualRegister(bytecode.m_dst);
 }
 
 void JIT::emit_op_check_tdz(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpCheckTdz>();
-    emitGetVirtualRegister(bytecode.m_targetVirtualRegister.offset(), regT0);
+    emitGetVirtualRegister(bytecode.m_targetVirtualRegister, regT0);
     addSlowCase(branchIfEmpty(regT0));
 }
 
@@ -1002,7 +1002,7 @@
     auto bytecode = currentInstruction->as<OpEq>();
     callOperation(operationCompareEq, TrustedImmPtr(m_codeBlock->globalObject()), regT0, regT1);
     boxBoolean(returnValueGPR, JSValueRegs { returnValueGPR });
-    emitPutVirtualRegister(bytecode.m_dst.offset(), returnValueGPR);
+    emitPutVirtualRegister(bytecode.m_dst, returnValueGPR);
 }
 
 void JIT::emitSlow_op_neq(const Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
@@ -1013,7 +1013,7 @@
     callOperation(operationCompareEq, TrustedImmPtr(m_codeBlock->globalObject()), regT0, regT1);
     xor32(TrustedImm32(0x1), regT0);
     boxBoolean(returnValueGPR, JSValueRegs { returnValueGPR });
-    emitPutVirtualRegister(bytecode.m_dst.offset(), returnValueGPR);
+    emitPutVirtualRegister(bytecode.m_dst, returnValueGPR);
 }
 
 void JIT::emitSlow_op_jeq(const Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
@@ -1041,10 +1041,10 @@
     linkAllSlowCases(iter);
 
     auto bytecode = currentInstruction->as<OpInstanceofCustom>();
-    int dst = bytecode.m_dst.offset();
-    int value = bytecode.m_value.offset();
-    int constructor = bytecode.m_constructor.offset();
-    int hasInstanceValue = bytecode.m_hasInstanceValue.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister value = bytecode.m_value;
+    VirtualRegister constructor = bytecode.m_constructor;
+    VirtualRegister hasInstanceValue = bytecode.m_hasInstanceValue;
 
     emitGetVirtualRegister(value, regT0);
     emitGetVirtualRegister(constructor, regT1);
@@ -1121,8 +1121,8 @@
 void JIT::emit_op_new_regexp(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpNewRegexp>();
-    int dst = bytecode.m_dst.offset();
-    int regexp = bytecode.m_regexp.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister regexp = bytecode.m_regexp;
     callOperation(operationNewRegexp, TrustedImmPtr(m_codeBlock->globalObject()), jsCast<RegExp*>(m_codeBlock->getConstant(regexp)));
     emitStoreCell(dst, returnValueGPR);
 }
@@ -1132,12 +1132,12 @@
 {
     Jump lazyJump;
     auto bytecode = currentInstruction->as<Op>();
-    int dst = bytecode.m_dst.offset();
+    VirtualRegister dst = bytecode.m_dst;
 
 #if USE(JSVALUE64)
-    emitGetVirtualRegister(bytecode.m_scope.offset(), regT0);
+    emitGetVirtualRegister(bytecode.m_scope, regT0);
 #else
-    emitLoadPayload(bytecode.m_scope.offset(), regT0);
+    emitLoadPayload(bytecode.m_scope, regT0);
 #endif
     FunctionExecutable* funcExec = m_codeBlock->functionDecl(bytecode.m_functionDecl);
 
@@ -1178,11 +1178,11 @@
 void JIT::emitNewFuncExprCommon(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<Op>();
-    int dst = bytecode.m_dst.offset();
+    VirtualRegister dst = bytecode.m_dst;
 #if USE(JSVALUE64)
-    emitGetVirtualRegister(bytecode.m_scope.offset(), regT0);
+    emitGetVirtualRegister(bytecode.m_scope, regT0);
 #else
-    emitLoadPayload(bytecode.m_scope.offset(), regT0);
+    emitLoadPayload(bytecode.m_scope, regT0);
 #endif
 
     FunctionExecutable* function = m_codeBlock->functionExpr(bytecode.m_functionDecl);
@@ -1224,10 +1224,10 @@
 {
     auto bytecode = currentInstruction->as<OpNewArray>();
     auto& metadata = bytecode.metadata(m_codeBlock);
-    int dst = bytecode.m_dst.offset();
-    int valuesIndex = bytecode.m_argv.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister valuesStart = bytecode.m_argv;
     int size = bytecode.m_argc;
-    addPtr(TrustedImm32(valuesIndex * sizeof(Register)), callFrameRegister, regT0);
+    addPtr(TrustedImm32(valuesStart.offset() * sizeof(Register)), callFrameRegister, regT0);
     callOperation(operationNewArrayWithProfile, dst, TrustedImmPtr(m_codeBlock->globalObject()),
         &metadata.m_arrayAllocationProfile, regT0, size);
 }
@@ -1236,8 +1236,8 @@
 {
     auto bytecode = currentInstruction->as<OpNewArrayWithSize>();
     auto& metadata = bytecode.metadata(m_codeBlock);
-    int dst = bytecode.m_dst.offset();
-    int sizeIndex = bytecode.m_length.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister sizeIndex = bytecode.m_length;
 #if USE(JSVALUE64)
     emitGetVirtualRegister(sizeIndex, regT0);
     callOperation(operationNewArrayWithSizeAndProfile, dst, TrustedImmPtr(m_codeBlock->globalObject()),
@@ -1253,9 +1253,9 @@
 void JIT::emit_op_has_structure_property(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpHasStructureProperty>();
-    int dst = bytecode.m_dst.offset();
-    int base = bytecode.m_base.offset();
-    int enumerator = bytecode.m_enumerator.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister base = bytecode.m_base;
+    VirtualRegister enumerator = bytecode.m_enumerator;
 
     emitGetVirtualRegister(base, regT0);
     emitGetVirtualRegister(enumerator, regT1);
@@ -1299,9 +1299,9 @@
 {
     auto bytecode = currentInstruction->as<OpHasIndexedProperty>();
     auto& metadata = bytecode.metadata(m_codeBlock);
-    int dst = bytecode.m_dst.offset();
-    int base = bytecode.m_base.offset();
-    int property = bytecode.m_property.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister base = bytecode.m_base;
+    VirtualRegister property = bytecode.m_property;
     ArrayProfile* profile = &metadata.m_arrayProfile;
     ByValInfo* byValInfo = m_codeBlock->addByValInfo();
     
@@ -1347,9 +1347,9 @@
     linkAllSlowCases(iter);
 
     auto bytecode = currentInstruction->as<OpHasIndexedProperty>();
-    int dst = bytecode.m_dst.offset();
-    int base = bytecode.m_base.offset();
-    int property = bytecode.m_property.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister base = bytecode.m_base;
+    VirtualRegister property = bytecode.m_property;
     ByValInfo* byValInfo = m_byValCompilationInfo[m_byValInstructionIndex].byValInfo;
 
     Label slowPath = label();
@@ -1366,10 +1366,10 @@
 void JIT::emit_op_get_direct_pname(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpGetDirectPname>();
-    int dst = bytecode.m_dst.offset();
-    int base = bytecode.m_base.offset();
-    int index = bytecode.m_index.offset();
-    int enumerator = bytecode.m_enumerator.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister base = bytecode.m_base;
+    VirtualRegister index = bytecode.m_index;
+    VirtualRegister enumerator = bytecode.m_enumerator;
 
     // Check that base is a cell
     emitGetVirtualRegister(base, regT0);
@@ -1407,9 +1407,9 @@
 void JIT::emit_op_enumerator_structure_pname(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpEnumeratorStructurePname>();
-    int dst = bytecode.m_dst.offset();
-    int enumerator = bytecode.m_enumerator.offset();
-    int index = bytecode.m_index.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister enumerator = bytecode.m_enumerator;
+    VirtualRegister index = bytecode.m_index;
 
     emitGetVirtualRegister(index, regT0);
     emitGetVirtualRegister(enumerator, regT1);
@@ -1431,9 +1431,9 @@
 void JIT::emit_op_enumerator_generic_pname(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpEnumeratorGenericPname>();
-    int dst = bytecode.m_dst.offset();
-    int enumerator = bytecode.m_enumerator.offset();
-    int index = bytecode.m_index.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister enumerator = bytecode.m_enumerator;
+    VirtualRegister index = bytecode.m_index;
 
     emitGetVirtualRegister(index, regT0);
     emitGetVirtualRegister(enumerator, regT1);
@@ -1457,7 +1457,7 @@
     auto bytecode = currentInstruction->as<OpProfileType>();
     auto& metadata = bytecode.metadata(m_codeBlock);
     TypeLocation* cachedTypeLocation = metadata.m_typeLocation;
-    int valueToProfile = bytecode.m_targetVirtualRegister.offset();
+    VirtualRegister valueToProfile = bytecode.m_targetVirtualRegister;
 
     emitGetVirtualRegister(valueToProfile, regT0);
 
@@ -1526,7 +1526,7 @@
     GPRReg scratch1Reg = nonArgGPR0; // This must be a non-argument register.
     GPRReg scratch2Reg = regT2;
     ensureShadowChickenPacket(vm(), shadowPacketReg, scratch1Reg, scratch2Reg);
-    emitGetVirtualRegister(bytecode.m_scope.offset(), regT3);
+    emitGetVirtualRegister(bytecode.m_scope, regT3);
     logShadowChickenProloguePacket(shadowPacketReg, scratch1Reg, regT3);
 }
 
@@ -1540,8 +1540,8 @@
     GPRReg scratch1Reg = nonArgGPR0; // This must be a non-argument register.
     GPRReg scratch2Reg = regT2;
     ensureShadowChickenPacket(vm(), shadowPacketReg, scratch1Reg, scratch2Reg);
-    emitGetVirtualRegister(bytecode.m_thisValue.offset(), regT2);
-    emitGetVirtualRegister(bytecode.m_scope.offset(), regT3);
+    emitGetVirtualRegister(bytecode.m_thisValue, regT2);
+    emitGetVirtualRegister(bytecode.m_scope, regT3);
     logShadowChickenTailPacket(shadowPacketReg, JSValueRegs(regT2), regT3, m_codeBlock, CallSiteIndex(m_bytecodeIndex));
 }
 
@@ -1562,7 +1562,7 @@
 void JIT::emit_op_argument_count(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpArgumentCount>();
-    int dst = bytecode.m_dst.offset();
+    VirtualRegister dst = bytecode.m_dst;
     load32(payloadFor(CallFrameSlot::argumentCountIncludingThis), regT0);
     sub32(TrustedImm32(1), regT0);
     JSValueRegs result = JSValueRegs::withTwoAvailableRegs(regT0, regT1);
@@ -1573,7 +1573,7 @@
 void JIT::emit_op_get_rest_length(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpGetRestLength>();
-    int dst = bytecode.m_dst.offset();
+    VirtualRegister dst = bytecode.m_dst;
     unsigned numParamsToSkip = bytecode.m_numParametersToSkip;
     load32(payloadFor(CallFrameSlot::argumentCountIncludingThis), regT0);
     sub32(TrustedImm32(1), regT0);
@@ -1603,7 +1603,7 @@
 void JIT::emit_op_get_argument(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpGetArgument>();
-    int dst = bytecode.m_dst.offset();
+    VirtualRegister dst = bytecode.m_dst;
     int index = bytecode.m_index;
 #if USE(JSVALUE64)
     JSValueRegs resultRegs(regT0);
@@ -1613,7 +1613,7 @@
 
     load32(payloadFor(CallFrameSlot::argumentCountIncludingThis), regT2);
     Jump argumentOutOfBounds = branch32(LessThanOrEqual, regT2, TrustedImm32(index));
-    loadValue(addressFor(CallFrameSlot::thisArgument + index), resultRegs);
+    loadValue(addressFor(VirtualRegister(CallFrameSlot::thisArgument + index)), resultRegs);
     Jump done = jump();
 
     argumentOutOfBounds.link(this);
diff --git a/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp b/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
index c1bb657..30286cd 100644
--- a/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
+++ b/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
@@ -950,7 +950,7 @@
     // https://bugs.webkit.org/show_bug.cgi?id=175598
 
     auto& metadata = bytecode.metadata(m_codeBlock);
-    ValueProfileAndOperandBuffer* buffer = metadata.m_buffer;
+    ValueProfileAndVirtualRegisterBuffer* buffer = metadata.m_buffer;
     if (buffer || !shouldEmitProfiling())
         callOperation(operationTryOSREnterAtCatch, &vm(), m_bytecodeIndex.asBits());
     else
@@ -960,7 +960,7 @@
     farJump(returnValueGPR, NoPtrTag);
     skipOSREntry.link(this);
     if (buffer && shouldEmitProfiling()) {
-        buffer->forEach([&] (ValueProfileAndOperand& profile) {
+        buffer->forEach([&] (ValueProfileAndVirtualRegister& profile) {
             JSValueRegs regs(regT1, regT0);
             emitGetVirtualRegister(profile.m_operand, regs);
             emitValueProfilingSite(static_cast<ValueProfile&>(profile));
diff --git a/Source/JavaScriptCore/jit/JITOperations.cpp b/Source/JavaScriptCore/jit/JITOperations.cpp
index dba1013..af1facd 100644
--- a/Source/JavaScriptCore/jit/JITOperations.cpp
+++ b/Source/JavaScriptCore/jit/JITOperations.cpp
@@ -1646,18 +1646,18 @@
 
         dataLogLnIf(Options::verboseOSR(), "Triggering optimized compilation of ", *codeBlock);
 
-        unsigned numVarsWithValues;
+        unsigned numVarsWithValues = 0;
         if (bytecodeIndex)
             numVarsWithValues = codeBlock->numCalleeLocals();
-        else
-            numVarsWithValues = 0;
-        Operands<Optional<JSValue>> mustHandleValues(codeBlock->numParameters(), numVarsWithValues);
+
+        Operands<Optional<JSValue>> mustHandleValues(codeBlock->numParameters(), numVarsWithValues, 0);
         int localsUsedForCalleeSaves = static_cast<int>(CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters());
         for (size_t i = 0; i < mustHandleValues.size(); ++i) {
-            int operand = mustHandleValues.operandForIndex(i);
-            if (operandIsLocal(operand) && VirtualRegister(operand).toLocal() < localsUsedForCalleeSaves)
+            Operand operand = mustHandleValues.operandForIndex(i);
+
+            if (operand.isLocal() && operand.toLocal() < localsUsedForCalleeSaves)
                 continue;
-            mustHandleValues[i] = callFrame->uncheckedR(operand).jsValue();
+            mustHandleValues[i] = callFrame->uncheckedR(operand.virtualRegister()).jsValue();
         }
 
         CodeBlock* replacementCodeBlock = codeBlock->newReplacement();
@@ -1768,7 +1768,7 @@
     codeBlock->ensureCatchLivenessIsComputedForBytecodeIndex(bytecodeIndex);
     auto bytecode = codeBlock->instructions().at(bytecodeIndex)->as<OpCatch>();
     auto& metadata = bytecode.metadata(codeBlock);
-    metadata.m_buffer->forEach([&] (ValueProfileAndOperand& profile) {
+    metadata.m_buffer->forEach([&] (ValueProfileAndVirtualRegister& profile) {
         profile.m_buckets[0] = JSValue::encode(callFrame->uncheckedR(profile.m_operand).jsValue());
     });
 
@@ -1892,8 +1892,8 @@
     CallFrame* callFrame = DECLARE_CALL_FRAME(vm);
     JITOperationPrologueCallFrameTracer tracer(vm, callFrame);
 
-    JSScope* scope = callFrame->uncheckedR(scopeReg).Register::scope();
-    callFrame->uncheckedR(scopeReg) = scope->next();
+    auto& scopeSlot = callFrame->uncheckedR(VirtualRegister(scopeReg));
+    scopeSlot = scopeSlot.Register::scope()->next();
 }
 
 int32_t JIT_OPERATION operationInstanceOfCustom(JSGlobalObject* globalObject, EncodedJSValue encodedValue, JSObject* constructor, EncodedJSValue encodedHasInstance)
@@ -2401,7 +2401,7 @@
 
     auto bytecode = pc->as<OpGetFromScope>();
     const Identifier& ident = codeBlock->identifier(bytecode.m_var);
-    JSObject* scope = jsCast<JSObject*>(callFrame->uncheckedR(bytecode.m_scope.offset()).jsValue());
+    JSObject* scope = jsCast<JSObject*>(callFrame->uncheckedR(bytecode.m_scope).jsValue());
     GetPutInfo& getPutInfo = bytecode.metadata(codeBlock).m_getPutInfo;
 
     // ModuleVar is always converted to ClosureVar for get_from_scope.
@@ -2444,8 +2444,8 @@
     auto& metadata = bytecode.metadata(codeBlock);
 
     const Identifier& ident = codeBlock->identifier(bytecode.m_var);
-    JSObject* scope = jsCast<JSObject*>(callFrame->uncheckedR(bytecode.m_scope.offset()).jsValue());
-    JSValue value = callFrame->r(bytecode.m_value.offset()).jsValue();
+    JSObject* scope = jsCast<JSObject*>(callFrame->uncheckedR(bytecode.m_scope).jsValue());
+    JSValue value = callFrame->r(bytecode.m_value).jsValue();
     GetPutInfo& getPutInfo = metadata.m_getPutInfo;
 
     // ModuleVar does not keep the scope register value alive in DFG.
diff --git a/Source/JavaScriptCore/jit/JITPropertyAccess.cpp b/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
index c655db8..abf52f71 100644
--- a/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
+++ b/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
@@ -57,9 +57,9 @@
 {
     auto bytecode = currentInstruction->as<OpGetByVal>();
     auto& metadata = bytecode.metadata(m_codeBlock);
-    int dst = bytecode.m_dst.offset();
-    int base = bytecode.m_base.offset();
-    int property = bytecode.m_property.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister base = bytecode.m_base;
+    VirtualRegister property = bytecode.m_property;
     ArrayProfile* profile = &metadata.m_arrayProfile;
 
     emitGetVirtualRegister(base, regT0);
@@ -93,7 +93,7 @@
 {
     if (hasAnySlowCases(iter)) {
         auto bytecode = currentInstruction->as<OpGetByVal>();
-        int dst = bytecode.m_dst.offset();
+        VirtualRegister dst = bytecode.m_dst;
         auto& metadata = bytecode.metadata(m_codeBlock);
         ArrayProfile* profile = &metadata.m_arrayProfile;
 
@@ -117,8 +117,8 @@
 {
     auto bytecode = currentInstruction->as<Op>();
     auto& metadata = bytecode.metadata(m_codeBlock);
-    int base = bytecode.m_base.offset();
-    int property = bytecode.m_property.offset();
+    VirtualRegister base = bytecode.m_base;
+    VirtualRegister property = bytecode.m_property;
     ArrayProfile* profile = &metadata.m_arrayProfile;
     ByValInfo* byValInfo = m_codeBlock->addByValInfo();
 
@@ -177,7 +177,7 @@
 JIT::JumpList JIT::emitGenericContiguousPutByVal(Op bytecode, PatchableJump& badType, IndexingType indexingShape)
 {
     auto& metadata = bytecode.metadata(m_codeBlock);
-    int value = bytecode.m_value.offset();
+    VirtualRegister value = bytecode.m_value;
     ArrayProfile* profile = &metadata.m_arrayProfile;
     
     JumpList slowCases;
@@ -208,7 +208,7 @@
     }
     case ContiguousShape:
         store64(regT3, BaseIndex(regT2, regT1, TimesEight));
-        emitWriteBarrier(bytecode.m_base.offset(), value, ShouldFilterValue);
+        emitWriteBarrier(bytecode.m_base, value, ShouldFilterValue);
         break;
     default:
         CRASH();
@@ -235,7 +235,7 @@
 JIT::JumpList JIT::emitArrayStoragePutByVal(Op bytecode, PatchableJump& badType)
 {
     auto& metadata = bytecode.metadata(m_codeBlock);
-    int value = bytecode.m_value.offset();
+    VirtualRegister value = bytecode.m_value;
     ArrayProfile* profile = &metadata.m_arrayProfile;
     
     JumpList slowCases;
@@ -249,7 +249,7 @@
     Label storeResult(this);
     emitGetVirtualRegister(value, regT3);
     store64(regT3, BaseIndex(regT2, regT1, TimesEight, ArrayStorage::vectorOffset()));
-    emitWriteBarrier(bytecode.m_base.offset(), value, ShouldFilterValue);
+    emitWriteBarrier(bytecode.m_base, value, ShouldFilterValue);
     Jump end = jump();
     
     empty.link(this);
@@ -274,8 +274,8 @@
     // property: regT1
     // scratch: regT2
 
-    int base = bytecode.m_base.offset();
-    int value = bytecode.m_value.offset();
+    VirtualRegister base = bytecode.m_base;
+    VirtualRegister value = bytecode.m_value;
 
     slowCases.append(branchIfNotCell(regT1));
     emitByValIdentifierCheck(byValInfo, regT1, regT1, propertyName, slowCases);
@@ -304,14 +304,14 @@
 void JIT::emitSlow_op_put_by_val(const Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
 {
     bool isDirect = currentInstruction->opcodeID() == op_put_by_val_direct;
-    int base;
-    int property;
-    int value;
+    VirtualRegister base;
+    VirtualRegister property;
+    VirtualRegister value;
 
     auto load = [&](auto bytecode) {
-        base = bytecode.m_base.offset();
-        property = bytecode.m_property.offset();
-        value = bytecode.m_value.offset();
+        base = bytecode.m_base;
+        property = bytecode.m_property;
+        value = bytecode.m_value;
     };
 
     if (isDirect)
@@ -337,36 +337,36 @@
 void JIT::emit_op_put_getter_by_id(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpPutGetterById>();
-    emitGetVirtualRegister(bytecode.m_base.offset(), regT0);
+    emitGetVirtualRegister(bytecode.m_base, regT0);
     int32_t options = bytecode.m_attributes;
-    emitGetVirtualRegister(bytecode.m_accessor.offset(), regT1);
+    emitGetVirtualRegister(bytecode.m_accessor, regT1);
     callOperation(operationPutGetterById, TrustedImmPtr(m_codeBlock->globalObject()), regT0, m_codeBlock->identifier(bytecode.m_property).impl(), options, regT1);
 }
 
 void JIT::emit_op_put_setter_by_id(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpPutSetterById>();
-    emitGetVirtualRegister(bytecode.m_base.offset(), regT0);
+    emitGetVirtualRegister(bytecode.m_base, regT0);
     int32_t options = bytecode.m_attributes;
-    emitGetVirtualRegister(bytecode.m_accessor.offset(), regT1);
+    emitGetVirtualRegister(bytecode.m_accessor, regT1);
     callOperation(operationPutSetterById, TrustedImmPtr(m_codeBlock->globalObject()), regT0, m_codeBlock->identifier(bytecode.m_property).impl(), options, regT1);
 }
 
 void JIT::emit_op_put_getter_setter_by_id(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpPutGetterSetterById>();
-    emitGetVirtualRegister(bytecode.m_base.offset(), regT0);
+    emitGetVirtualRegister(bytecode.m_base, regT0);
     int32_t attribute = bytecode.m_attributes;
-    emitGetVirtualRegister(bytecode.m_getter.offset(), regT1);
-    emitGetVirtualRegister(bytecode.m_setter.offset(), regT2);
+    emitGetVirtualRegister(bytecode.m_getter, regT1);
+    emitGetVirtualRegister(bytecode.m_setter, regT2);
     callOperation(operationPutGetterSetter, TrustedImmPtr(m_codeBlock->globalObject()), regT0, m_codeBlock->identifier(bytecode.m_property).impl(), attribute, regT1, regT2);
 }
 
 void JIT::emit_op_put_getter_by_val(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpPutGetterByVal>();
-    emitGetVirtualRegister(bytecode.m_base.offset(), regT0);
-    emitGetVirtualRegister(bytecode.m_property.offset(), regT1);
+    emitGetVirtualRegister(bytecode.m_base, regT0);
+    emitGetVirtualRegister(bytecode.m_property, regT1);
     int32_t attributes = bytecode.m_attributes;
     emitGetVirtualRegister(bytecode.m_accessor, regT2);
     callOperation(operationPutGetterByVal, TrustedImmPtr(m_codeBlock->globalObject()), regT0, regT1, attributes, regT2);
@@ -375,18 +375,18 @@
 void JIT::emit_op_put_setter_by_val(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpPutSetterByVal>();
-    emitGetVirtualRegister(bytecode.m_base.offset(), regT0);
-    emitGetVirtualRegister(bytecode.m_property.offset(), regT1);
+    emitGetVirtualRegister(bytecode.m_base, regT0);
+    emitGetVirtualRegister(bytecode.m_property, regT1);
     int32_t attributes = bytecode.m_attributes;
-    emitGetVirtualRegister(bytecode.m_accessor.offset(), regT2);
+    emitGetVirtualRegister(bytecode.m_accessor, regT2);
     callOperation(operationPutSetterByVal, TrustedImmPtr(m_codeBlock->globalObject()), regT0, regT1, attributes, regT2);
 }
 
 void JIT::emit_op_del_by_id(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpDelById>();
-    int dst = bytecode.m_dst.offset();
-    int base = bytecode.m_base.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister base = bytecode.m_base;
     int property = bytecode.m_property;
     emitGetVirtualRegister(base, regT0);
     callOperation(operationDeleteByIdJSResult, dst, TrustedImmPtr(m_codeBlock->globalObject()), regT0, m_codeBlock->identifier(property).impl());
@@ -395,9 +395,9 @@
 void JIT::emit_op_del_by_val(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpDelByVal>();
-    int dst = bytecode.m_dst.offset();
-    int base = bytecode.m_base.offset();
-    int property = bytecode.m_property.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister base = bytecode.m_base;
+    VirtualRegister property = bytecode.m_property;
     emitGetVirtualRegister(base, regT0);
     emitGetVirtualRegister(property, regT1);
     callOperation(operationDeleteByValJSResult, dst, TrustedImmPtr(m_codeBlock->globalObject()), regT0, regT1);
@@ -406,8 +406,8 @@
 void JIT::emit_op_try_get_by_id(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpTryGetById>();
-    int resultVReg = bytecode.m_dst.offset();
-    int baseVReg = bytecode.m_base.offset();
+    VirtualRegister resultVReg = bytecode.m_dst;
+    VirtualRegister baseVReg = bytecode.m_base;
     const Identifier* ident = &(m_codeBlock->identifier(bytecode.m_property));
 
     emitGetVirtualRegister(baseVReg, regT0);
@@ -430,7 +430,7 @@
     linkAllSlowCases(iter);
 
     auto bytecode = currentInstruction->as<OpTryGetById>();
-    int resultVReg = bytecode.m_dst.offset();
+    VirtualRegister resultVReg = bytecode.m_dst;
     const Identifier* ident = &(m_codeBlock->identifier(bytecode.m_property));
 
     JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++];
@@ -445,8 +445,8 @@
 void JIT::emit_op_get_by_id_direct(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpGetByIdDirect>();
-    int resultVReg = bytecode.m_dst.offset();
-    int baseVReg = bytecode.m_base.offset();
+    VirtualRegister resultVReg = bytecode.m_dst;
+    VirtualRegister baseVReg = bytecode.m_base;
     const Identifier* ident = &(m_codeBlock->identifier(bytecode.m_property));
 
     emitGetVirtualRegister(baseVReg, regT0);
@@ -469,7 +469,7 @@
     linkAllSlowCases(iter);
 
     auto bytecode = currentInstruction->as<OpGetByIdDirect>();
-    int resultVReg = bytecode.m_dst.offset();
+    VirtualRegister resultVReg = bytecode.m_dst;
     const Identifier* ident = &(m_codeBlock->identifier(bytecode.m_property));
 
     JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++];
@@ -485,8 +485,8 @@
 {
     auto bytecode = currentInstruction->as<OpGetById>();
     auto& metadata = bytecode.metadata(m_codeBlock);
-    int resultVReg = bytecode.m_dst.offset();
-    int baseVReg = bytecode.m_base.offset();
+    VirtualRegister resultVReg = bytecode.m_dst;
+    VirtualRegister baseVReg = bytecode.m_base;
     const Identifier* ident = &(m_codeBlock->identifier(bytecode.m_property));
 
     emitGetVirtualRegister(baseVReg, regT0);
@@ -513,9 +513,9 @@
 void JIT::emit_op_get_by_id_with_this(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpGetByIdWithThis>();
-    int resultVReg = bytecode.m_dst.offset();
-    int baseVReg = bytecode.m_base.offset();
-    int thisVReg = bytecode.m_thisValue.offset();
+    VirtualRegister resultVReg = bytecode.m_dst;
+    VirtualRegister baseVReg = bytecode.m_base;
+    VirtualRegister thisVReg = bytecode.m_thisValue;
     const Identifier* ident = &(m_codeBlock->identifier(bytecode.m_property));
 
     emitGetVirtualRegister(baseVReg, regT0);
@@ -539,7 +539,7 @@
     linkAllSlowCases(iter);
 
     auto bytecode = currentInstruction->as<OpGetById>();
-    int resultVReg = bytecode.m_dst.offset();
+    VirtualRegister resultVReg = bytecode.m_dst;
     const Identifier* ident = &(m_codeBlock->identifier(bytecode.m_property));
 
     JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++];
@@ -556,7 +556,7 @@
     linkAllSlowCases(iter);
 
     auto bytecode = currentInstruction->as<OpGetByIdWithThis>();
-    int resultVReg = bytecode.m_dst.offset();
+    VirtualRegister resultVReg = bytecode.m_dst;
     const Identifier* ident = &(m_codeBlock->identifier(bytecode.m_property));
 
     JITGetByIdWithThisGenerator& gen = m_getByIdsWithThis[m_getByIdWithThisIndex++];
@@ -571,8 +571,8 @@
 void JIT::emit_op_put_by_id(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpPutById>();
-    int baseVReg = bytecode.m_base.offset();
-    int valueVReg = bytecode.m_value.offset();
+    VirtualRegister baseVReg = bytecode.m_base;
+    VirtualRegister valueVReg = bytecode.m_value;
     bool direct = !!(bytecode.m_flags & PutByIdIsDirect);
 
     // In order to be able to patch both the Structure, and the object offset, we store one pointer,
@@ -615,8 +615,8 @@
 void JIT::emit_op_in_by_id(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpInById>();
-    int resultVReg = bytecode.m_dst.offset();
-    int baseVReg = bytecode.m_base.offset();
+    VirtualRegister resultVReg = bytecode.m_dst;
+    VirtualRegister baseVReg = bytecode.m_base;
     const Identifier* ident = &(m_codeBlock->identifier(bytecode.m_property));
 
     emitGetVirtualRegister(baseVReg, regT0);
@@ -638,7 +638,7 @@
     linkAllSlowCases(iter);
 
     auto bytecode = currentInstruction->as<OpInById>();
-    int resultVReg = bytecode.m_dst.offset();
+    VirtualRegister resultVReg = bytecode.m_dst;
     const Identifier* ident = &(m_codeBlock->identifier(bytecode.m_property));
 
     JITInByIdGenerator& gen = m_inByIds[m_inByIdIndex++];
@@ -657,7 +657,7 @@
     addSlowCase(branch8(Equal, AbsoluteAddress(m_codeBlock->globalObject()->varInjectionWatchpoint()->addressOfState()), TrustedImm32(IsInvalidated)));
 }
 
-void JIT::emitResolveClosure(int dst, int scope, bool needsVarInjectionChecks, unsigned depth)
+void JIT::emitResolveClosure(VirtualRegister dst, VirtualRegister scope, bool needsVarInjectionChecks, unsigned depth)
 {
     emitVarInjectionCheck(needsVarInjectionChecks);
     emitGetVirtualRegister(scope, regT0);
@@ -670,8 +670,8 @@
 {
     auto bytecode = currentInstruction->as<OpResolveScope>();
     auto& metadata = bytecode.metadata(m_codeBlock);
-    int dst = bytecode.m_dst.offset();
-    int scope = bytecode.m_scope.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister scope = bytecode.m_scope;
     ResolveType resolveType = metadata.m_resolveType;
     unsigned depth = metadata.m_localScopeDepth;
 
@@ -770,7 +770,7 @@
     }
 }
 
-void JIT::emitLoadWithStructureCheck(int scope, Structure** structureSlot)
+void JIT::emitLoadWithStructureCheck(VirtualRegister scope, Structure** structureSlot)
 {
     loadPtr(structureSlot, regT1);
     emitGetVirtualRegister(scope, regT0);
@@ -790,7 +790,7 @@
     loadPtr(reg, reg);
 }
 
-void JIT::emitGetClosureVar(int scope, uintptr_t operand)
+void JIT::emitGetClosureVar(VirtualRegister scope, uintptr_t operand)
 {
     emitGetVirtualRegister(scope, regT0);
     loadPtr(Address(regT0, JSLexicalEnvironment::offsetOfVariables() + operand * sizeof(Register)), regT0);
@@ -800,8 +800,8 @@
 {
     auto bytecode = currentInstruction->as<OpGetFromScope>();
     auto& metadata = bytecode.metadata(m_codeBlock);
-    int dst = bytecode.m_dst.offset();
-    int scope = bytecode.m_scope.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister scope = bytecode.m_scope;
     ResolveType resolveType = metadata.m_getPutInfo.resolveType();
     Structure** structureSlot = metadata.m_structure.slot();
     uintptr_t* operandSlot = reinterpret_cast<uintptr_t*>(&metadata.m_operand);
@@ -919,17 +919,17 @@
     linkAllSlowCases(iter);
 
     auto bytecode = currentInstruction->as<OpGetFromScope>();
-    int dst = bytecode.m_dst.offset();
+    VirtualRegister dst = bytecode.m_dst;
     callOperationWithProfile(bytecode.metadata(m_codeBlock), operationGetFromScope, dst, TrustedImmPtr(m_codeBlock->globalObject()), currentInstruction);
 }
 
-void JIT::emitPutGlobalVariable(JSValue* operand, int value, WatchpointSet* set)
+void JIT::emitPutGlobalVariable(JSValue* operand, VirtualRegister value, WatchpointSet* set)
 {
     emitGetVirtualRegister(value, regT0);
     emitNotifyWrite(set);
     storePtr(regT0, operand);
 }
-void JIT::emitPutGlobalVariableIndirect(JSValue** addressOfOperand, int value, WatchpointSet** indirectWatchpointSet)
+void JIT::emitPutGlobalVariableIndirect(JSValue** addressOfOperand, VirtualRegister value, WatchpointSet** indirectWatchpointSet)
 {
     emitGetVirtualRegister(value, regT0);
     loadPtr(indirectWatchpointSet, regT1);
@@ -938,7 +938,7 @@
     storePtr(regT0, regT1);
 }
 
-void JIT::emitPutClosureVar(int scope, uintptr_t operand, int value, WatchpointSet* set)
+void JIT::emitPutClosureVar(VirtualRegister scope, uintptr_t operand, VirtualRegister value, WatchpointSet* set)
 {
     emitGetVirtualRegister(value, regT1);
     emitGetVirtualRegister(scope, regT0);
@@ -950,8 +950,8 @@
 {
     auto bytecode = currentInstruction->as<OpPutToScope>();
     auto& metadata = bytecode.metadata(m_codeBlock);
-    int scope = bytecode.m_scope.offset();
-    int value = bytecode.m_value.offset();
+    VirtualRegister scope = bytecode.m_scope;
+    VirtualRegister value = bytecode.m_value;
     GetPutInfo getPutInfo = copiedGetPutInfo(bytecode);
     ResolveType resolveType = getPutInfo.resolveType();
     Structure** structureSlot = metadata.m_structure.slot();
@@ -1086,8 +1086,8 @@
 void JIT::emit_op_get_from_arguments(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpGetFromArguments>();
-    int dst = bytecode.m_dst.offset();
-    int arguments = bytecode.m_arguments.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister arguments = bytecode.m_arguments;
     int index = bytecode.m_index;
     
     emitGetVirtualRegister(arguments, regT0);
@@ -1099,9 +1099,9 @@
 void JIT::emit_op_put_to_arguments(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpPutToArguments>();
-    int arguments = bytecode.m_arguments.offset();
+    VirtualRegister arguments = bytecode.m_arguments;
     int index = bytecode.m_index;
-    int value = bytecode.m_value.offset();
+    VirtualRegister value = bytecode.m_value;
     
     emitGetVirtualRegister(arguments, regT0);
     emitGetVirtualRegister(value, regT1);
@@ -1110,7 +1110,7 @@
     emitWriteBarrier(arguments, value, ShouldFilterValue);
 }
 
-void JIT::emitWriteBarrier(unsigned owner, unsigned value, WriteBarrierMode mode)
+void JIT::emitWriteBarrier(VirtualRegister owner, VirtualRegister value, WriteBarrierMode mode)
 {
     Jump valueNotCell;
     if (mode == ShouldFilterValue || mode == ShouldFilterBaseAndValue) {
@@ -1133,7 +1133,7 @@
         valueNotCell.link(this);
 }
 
-void JIT::emitWriteBarrier(JSCell* owner, unsigned value, WriteBarrierMode mode)
+void JIT::emitWriteBarrier(JSCell* owner, VirtualRegister value, WriteBarrierMode mode)
 {
     emitGetVirtualRegister(value, regT0);
     Jump valueNotCell;
@@ -1150,8 +1150,8 @@
 {
     auto bytecode = currentInstruction->as<OpGetInternalField>();
     auto& metadata = bytecode.metadata(m_codeBlock);
-    int dst = bytecode.m_dst.offset();
-    int base = bytecode.m_base.offset();
+    VirtualRegister dst = bytecode.m_dst;
+    VirtualRegister base = bytecode.m_base;
     unsigned index = bytecode.m_index;
 
     emitGetVirtualRegister(base, regT1);
@@ -1164,8 +1164,8 @@
 void JIT::emit_op_put_internal_field(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpPutInternalField>();
-    int base = bytecode.m_base.offset();
-    int value = bytecode.m_value.offset();
+    VirtualRegister base = bytecode.m_base;
+    VirtualRegister value = bytecode.m_value;
     unsigned index = bytecode.m_index;
 
     emitGetVirtualRegister(base, regT0);
@@ -1176,7 +1176,7 @@
 
 #else // USE(JSVALUE64)
 
-void JIT::emitWriteBarrier(unsigned owner, unsigned value, WriteBarrierMode mode)
+void JIT::emitWriteBarrier(VirtualRegister owner, VirtualRegister value, WriteBarrierMode mode)
 {
     Jump valueNotCell;
     if (mode == ShouldFilterValue || mode == ShouldFilterBaseAndValue) {
@@ -1199,7 +1199,7 @@
         valueNotCell.link(this);
 }
 
-void JIT::emitWriteBarrier(JSCell* owner, unsigned value, WriteBarrierMode mode)
+void JIT::emitWriteBarrier(JSCell* owner, VirtualRegister value, WriteBarrierMode mode)
 {
     Jump valueNotCell;
     if (mode == ShouldFilterValue) {
@@ -1425,7 +1425,7 @@
     ArrayProfile* profile = &metadata.m_arrayProfile;
     ASSERT(isInt(type));
     
-    int value = bytecode.m_value.offset();
+    VirtualRegister value = bytecode.m_value;
 
 #if USE(JSVALUE64)
     RegisterID base = regT0;
@@ -1501,7 +1501,7 @@
     ArrayProfile* profile = &metadata.m_arrayProfile;
     ASSERT(isFloat(type));
     
-    int value = bytecode.m_value.offset();
+    VirtualRegister value = bytecode.m_value;
 
 #if USE(JSVALUE64)
     RegisterID base = regT0;
diff --git a/Source/JavaScriptCore/jit/JSInterfaceJIT.h b/Source/JavaScriptCore/jit/JSInterfaceJIT.h
index 2cf9198..582713e 100644
--- a/Source/JavaScriptCore/jit/JSInterfaceJIT.h
+++ b/Source/JavaScriptCore/jit/JSInterfaceJIT.h
@@ -45,18 +45,18 @@
         {
         }
 
-        inline Jump emitLoadJSCell(unsigned virtualRegisterIndex, RegisterID payload);
-        inline Jump emitLoadInt32(unsigned virtualRegisterIndex, RegisterID dst);
-        inline Jump emitLoadDouble(unsigned virtualRegisterIndex, FPRegisterID dst, RegisterID scratch);
+        inline Jump emitLoadJSCell(VirtualRegister, RegisterID payload);
+        inline Jump emitLoadInt32(VirtualRegister, RegisterID dst);
+        inline Jump emitLoadDouble(VirtualRegister, FPRegisterID dst, RegisterID scratch);
 
 #if USE(JSVALUE32_64)
-        inline Jump emitJumpIfNotJSCell(unsigned virtualRegisterIndex);
+        inline Jump emitJumpIfNotJSCell(VirtualRegister);
 #endif
 
-        void emitGetFromCallFrameHeaderPtr(int entry, RegisterID to, RegisterID from = callFrameRegister);
-        void emitPutToCallFrameHeader(RegisterID from, int entry);
-        void emitPutToCallFrameHeader(void* value, int entry);
-        void emitPutCellToCallFrameHeader(RegisterID from, int entry);
+        void emitGetFromCallFrameHeaderPtr(VirtualRegister entry, RegisterID to, RegisterID from = callFrameRegister);
+        void emitPutToCallFrameHeader(RegisterID from, VirtualRegister entry);
+        void emitPutToCallFrameHeader(void* value, VirtualRegister entry);
+        void emitPutCellToCallFrameHeader(RegisterID from, VirtualRegister entry);
 
         VM* vm() const { return m_vm; }
 
@@ -64,36 +64,36 @@
     };
 
 #if USE(JSVALUE32_64)
-    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadJSCell(unsigned virtualRegisterIndex, RegisterID payload)
+    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadJSCell(VirtualRegister virtualRegister, RegisterID payload)
     {
-        loadPtr(payloadFor(virtualRegisterIndex), payload);
-        return emitJumpIfNotJSCell(virtualRegisterIndex);
+        loadPtr(payloadFor(virtualRegister), payload);
+        return emitJumpIfNotJSCell(virtualRegister);
     }
 
-    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitJumpIfNotJSCell(unsigned virtualRegisterIndex)
+    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitJumpIfNotJSCell(VirtualRegister virtualRegister)
     {
-        ASSERT(static_cast<int>(virtualRegisterIndex) < FirstConstantRegisterIndex);
-        return branch32(NotEqual, tagFor(virtualRegisterIndex), TrustedImm32(JSValue::CellTag));
+        ASSERT(virtualRegister < FirstConstantRegisterIndex);
+        return branch32(NotEqual, tagFor(virtualRegister), TrustedImm32(JSValue::CellTag));
     }
     
-    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadInt32(unsigned virtualRegisterIndex, RegisterID dst)
+    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadInt32(VirtualRegister virtualRegister, RegisterID dst)
     {
-        ASSERT(static_cast<int>(virtualRegisterIndex) < FirstConstantRegisterIndex);
-        loadPtr(payloadFor(virtualRegisterIndex), dst);
-        return branch32(NotEqual, tagFor(static_cast<int>(virtualRegisterIndex)), TrustedImm32(JSValue::Int32Tag));
+        ASSERT(virtualRegister < FirstConstantRegisterIndex);
+        loadPtr(payloadFor(virtualRegister), dst);
+        return branch32(NotEqual, tagFor(virtualRegister), TrustedImm32(JSValue::Int32Tag));
     }
 
-    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadDouble(unsigned virtualRegisterIndex, FPRegisterID dst, RegisterID scratch)
+    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadDouble(VirtualRegister virtualRegister, FPRegisterID dst, RegisterID scratch)
     {
-        ASSERT(static_cast<int>(virtualRegisterIndex) < FirstConstantRegisterIndex);
-        loadPtr(tagFor(virtualRegisterIndex), scratch);
+        ASSERT(virtualRegister < FirstConstantRegisterIndex);
+        loadPtr(tagFor(virtualRegister), scratch);
         Jump isDouble = branch32(Below, scratch, TrustedImm32(JSValue::LowestTag));
         Jump notInt = branch32(NotEqual, scratch, TrustedImm32(JSValue::Int32Tag));
-        loadPtr(payloadFor(virtualRegisterIndex), scratch);
+        loadPtr(payloadFor(virtualRegister), scratch);
         convertInt32ToDouble(scratch, dst);
         Jump done = jump();
         isDouble.link(this);
-        loadDouble(addressFor(virtualRegisterIndex), dst);
+        loadDouble(addressFor(virtualRegister), dst);
         done.link(this);
         return notInt;
     }
@@ -101,23 +101,23 @@
 #endif
 
 #if USE(JSVALUE64)
-    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadJSCell(unsigned virtualRegisterIndex, RegisterID dst)
+    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadJSCell(VirtualRegister virtualRegister, RegisterID dst)
     {
-        load64(addressFor(virtualRegisterIndex), dst);
+        load64(addressFor(virtualRegister), dst);
         return branchIfNotCell(dst);
     }
     
-    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadInt32(unsigned virtualRegisterIndex, RegisterID dst)
+    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadInt32(VirtualRegister virtualRegister, RegisterID dst)
     {
-        load64(addressFor(virtualRegisterIndex), dst);
+        load64(addressFor(virtualRegister), dst);
         Jump notInt32 = branchIfNotInt32(dst);
         zeroExtend32ToPtr(dst, dst);
         return notInt32;
     }
 
-    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadDouble(unsigned virtualRegisterIndex, FPRegisterID dst, RegisterID scratch)
+    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadDouble(VirtualRegister virtualRegister, FPRegisterID dst, RegisterID scratch)
     {
-        load64(addressFor(virtualRegisterIndex), scratch);
+        load64(addressFor(virtualRegister), scratch);
         Jump notNumber = branchIfNotNumber(scratch);
         Jump notInt = branchIfNotInt32(scratch);
         convertInt32ToDouble(scratch, dst);
@@ -129,12 +129,12 @@
     }
 #endif
 
-    ALWAYS_INLINE void JSInterfaceJIT::emitGetFromCallFrameHeaderPtr(int entry, RegisterID to, RegisterID from)
+    ALWAYS_INLINE void JSInterfaceJIT::emitGetFromCallFrameHeaderPtr(VirtualRegister entry, RegisterID to, RegisterID from)
     {
-        loadPtr(Address(from, entry * sizeof(Register)), to);
+        loadPtr(Address(from, entry.offset() * sizeof(Register)), to);
     }
 
-    ALWAYS_INLINE void JSInterfaceJIT::emitPutToCallFrameHeader(RegisterID from, int entry)
+    ALWAYS_INLINE void JSInterfaceJIT::emitPutToCallFrameHeader(RegisterID from, VirtualRegister entry)
     {
 #if USE(JSVALUE32_64)
         storePtr(from, payloadFor(entry));
@@ -143,12 +143,12 @@
 #endif
     }
 
-    ALWAYS_INLINE void JSInterfaceJIT::emitPutToCallFrameHeader(void* value, int entry)
+    ALWAYS_INLINE void JSInterfaceJIT::emitPutToCallFrameHeader(void* value, VirtualRegister entry)
     {
         storePtr(TrustedImmPtr(value), addressFor(entry));
     }
 
-    ALWAYS_INLINE void JSInterfaceJIT::emitPutCellToCallFrameHeader(RegisterID from, int entry)
+    ALWAYS_INLINE void JSInterfaceJIT::emitPutCellToCallFrameHeader(RegisterID from, VirtualRegister entry)
     {
 #if USE(JSVALUE32_64)
         store32(TrustedImm32(JSValue::CellTag), tagFor(entry));
diff --git a/Source/JavaScriptCore/jit/SetupVarargsFrame.cpp b/Source/JavaScriptCore/jit/SetupVarargsFrame.cpp
index 0952591..49f54c5 100644
--- a/Source/JavaScriptCore/jit/SetupVarargsFrame.cpp
+++ b/Source/JavaScriptCore/jit/SetupVarargsFrame.cpp
@@ -128,7 +128,7 @@
             firstArgumentReg = VirtualRegister(0);
     } else {
         argumentCountRecovery = ValueRecovery::displacedInJSStack(
-            VirtualRegister(CallFrameSlot::argumentCountIncludingThis), DataFormatInt32);
+            CallFrameSlot::argumentCountIncludingThis, DataFormatInt32);
         firstArgumentReg = VirtualRegister(CallFrame::argumentOffset(0));
     }
     emitSetupVarargsFrameFastCase(vm, jit, numUsedSlotsGPR, scratchGPR1, scratchGPR2, scratchGPR3, argumentCountRecovery, firstArgumentReg, firstVarArgOffset, slowCase);
diff --git a/Source/JavaScriptCore/jit/SpecializedThunkJIT.h b/Source/JavaScriptCore/jit/SpecializedThunkJIT.h
index a11622e..4a39567 100644
--- a/Source/JavaScriptCore/jit/SpecializedThunkJIT.h
+++ b/Source/JavaScriptCore/jit/SpecializedThunkJIT.h
@@ -55,13 +55,13 @@
         
         void loadDoubleArgument(int argument, FPRegisterID dst, RegisterID scratch)
         {
-            unsigned src = CallFrame::argumentOffset(argument);
+            VirtualRegister src = virtualRegisterForArgumentIncludingThis(argument + 1);
             m_failures.append(emitLoadDouble(src, dst, scratch));
         }
         
         void loadCellArgument(int argument, RegisterID dst)
         {
-            unsigned src = CallFrame::argumentOffset(argument);
+            VirtualRegister src = virtualRegisterForArgumentIncludingThis(argument + 1);
             m_failures.append(emitLoadJSCell(src, dst));
         }
         
@@ -73,7 +73,7 @@
         
         void loadInt32Argument(int argument, RegisterID dst, Jump& failTarget)
         {
-            unsigned src = CallFrame::argumentOffset(argument);
+            VirtualRegister src = virtualRegisterForArgumentIncludingThis(argument + 1);
             failTarget = emitLoadInt32(src, dst);
         }
         
diff --git a/Source/JavaScriptCore/jit/ThunkGenerators.cpp b/Source/JavaScriptCore/jit/ThunkGenerators.cpp
index b8146d2..bf19d02 100644
--- a/Source/JavaScriptCore/jit/ThunkGenerators.cpp
+++ b/Source/JavaScriptCore/jit/ThunkGenerators.cpp
@@ -1036,8 +1036,8 @@
         return MacroAssemblerCodeRef<JITThunkPtrTag>::createSelfManagedCodeRef(vm.jitStubs->ctiNativeCall(vm));
 
 #if USE(JSVALUE64)
-    unsigned virtualRegisterIndex = CallFrame::argumentOffset(0);
-    jit.load64(AssemblyHelpers::addressFor(virtualRegisterIndex), GPRInfo::regT0);
+    VirtualRegister virtualRegister = CallFrameSlot::firstArgument;
+    jit.load64(AssemblyHelpers::addressFor(virtualRegister), GPRInfo::regT0);
     auto notInteger = jit.branchIfNotInt32(GPRInfo::regT0);
 
     // Abs Int32.
@@ -1217,7 +1217,7 @@
     CCallHelpers::Label loop = jit.label();
     jit.sub32(CCallHelpers::TrustedImm32(1), GPRInfo::regT3);
     jit.sub32(CCallHelpers::TrustedImm32(1), GPRInfo::regT1);
-    jit.loadValue(CCallHelpers::addressFor(virtualRegisterForArgument(1)).indexedBy(GPRInfo::regT3, CCallHelpers::TimesEight), valueRegs);
+    jit.loadValue(CCallHelpers::addressFor(virtualRegisterForArgumentIncludingThis(1)).indexedBy(GPRInfo::regT3, CCallHelpers::TimesEight), valueRegs);
     jit.storeValue(valueRegs, CCallHelpers::calleeArgumentSlot(1).indexedBy(GPRInfo::regT1, CCallHelpers::TimesEight));
     jit.branchTest32(CCallHelpers::NonZero, GPRInfo::regT3).linkTo(loop, &jit);
     
diff --git a/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp b/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
index 83867d1..4bfdd7b 100644
--- a/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
+++ b/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
@@ -29,6 +29,7 @@
 #include "ArrayConstructor.h"
 #include "BytecodeGenerator.h"
 #include "CallFrame.h"
+#include "CheckpointOSRExitSideState.h"
 #include "CommonSlowPaths.h"
 #include "Error.h"
 #include "ErrorHandlingScope.h"
@@ -98,8 +99,8 @@
     LLINT_BEGIN_NO_SET_PC();                    \
     LLINT_SET_PC_FOR_STUBS()
 
-inline JSValue getNonConstantOperand(CallFrame* callFrame, const VirtualRegister& operand) { return callFrame->uncheckedR(operand.offset()).jsValue(); }
-inline JSValue getOperand(CallFrame* callFrame, const VirtualRegister& operand) { return callFrame->r(operand.offset()).jsValue(); }
+inline JSValue getNonConstantOperand(CallFrame* callFrame, VirtualRegister operand) { return callFrame->uncheckedR(operand).jsValue(); }
+inline JSValue getOperand(CallFrame* callFrame, VirtualRegister operand) { return callFrame->r(operand).jsValue(); }
 
 #define LLINT_RETURN_TWO(first, second) do {       \
         return encodeResult(first, second);        \
@@ -1563,7 +1564,7 @@
     CallFrame* calleeFrame = callFrame - bytecode.m_argv;
     
     calleeFrame->setArgumentCountIncludingThis(bytecode.m_argc);
-    calleeFrame->uncheckedR(CallFrameSlot::callee) = calleeAsValue;
+    calleeFrame->uncheckedR(VirtualRegister(CallFrameSlot::callee)) = calleeAsValue;
     calleeFrame->setCallerFrame(callFrame);
     
     auto& metadata = bytecode.metadata(codeBlock);
@@ -1681,7 +1682,7 @@
         setupForwardArgumentsFrameAndSetThis(globalObject, callFrame, calleeFrame, getOperand(callFrame, bytecode.m_thisValue), vm.varargsLength);
 
     calleeFrame->setCallerFrame(callFrame);
-    calleeFrame->uncheckedR(CallFrameSlot::callee) = calleeAsValue;
+    calleeFrame->uncheckedR(VirtualRegister(CallFrameSlot::callee)) = calleeAsValue;
     callFrame->setCurrentVPC(pc);
 
     RELEASE_AND_RETURN(throwScope, setUpCall(calleeFrame, kind, calleeAsValue));
@@ -1717,7 +1718,7 @@
     
     calleeFrame->setArgumentCountIncludingThis(bytecode.m_argc);
     calleeFrame->setCallerFrame(callFrame);
-    calleeFrame->uncheckedR(CallFrameSlot::callee) = calleeAsValue;
+    calleeFrame->uncheckedR(VirtualRegister(CallFrameSlot::callee)) = calleeAsValue;
     calleeFrame->setReturnPC(returnPoint.executableAddress());
     calleeFrame->setCodeBlock(nullptr);
     callFrame->setCurrentVPC(pc);
@@ -1918,7 +1919,7 @@
 
     auto bytecode = pc->as<OpCatch>();
     auto& metadata = bytecode.metadata(codeBlock);
-    metadata.m_buffer->forEach([&] (ValueProfileAndOperand& profile) {
+    metadata.m_buffer->forEach([&] (ValueProfileAndVirtualRegister& profile) {
         profile.m_buckets[0] = JSValue::encode(callFrame->uncheckedR(profile.m_operand).jsValue());
     });
 
@@ -1949,11 +1950,124 @@
     LLINT_END_IMPL();
 }
 
+template<typename Opcode>
+static void handleVarargsCheckpoint(VM& vm, CallFrame* callFrame, JSGlobalObject* globalObject, const Opcode& bytecode, CheckpointOSRExitSideState& sideState)
+{
+    auto scope = DECLARE_THROW_SCOPE(vm);
+    unsigned argumentCountIncludingThis = sideState.tmps[Opcode::argCountIncludingThis].asUInt32();
+    unsigned firstVarArg = bytecode.m_firstVarArg;
+
+    MarkedArgumentBuffer args;
+    args.fill(argumentCountIncludingThis - 1, [&] (JSValue* buffer) {
+        loadVarargs(globalObject, buffer, callFrame->r(bytecode.m_arguments).jsValue(), firstVarArg, argumentCountIncludingThis - 1);
+    });
+    if (args.hasOverflowed()) {
+        throwStackOverflowError(globalObject, scope);
+        return;
+    }
+
+    RETURN_IF_EXCEPTION(scope, void());
+
+    JSValue result;
+    if (Opcode::opcodeID != op_construct_varargs)
+        result = call(globalObject, getOperand(callFrame, bytecode.m_callee), getOperand(callFrame, bytecode.m_thisValue), args, "");
+    else
+        result = construct(globalObject, getOperand(callFrame, bytecode.m_callee), getOperand(callFrame, bytecode.m_thisValue), args, "");
+
+    RETURN_IF_EXCEPTION(scope, void());
+    callFrame->uncheckedR(bytecode.m_dst) = result;
+}
+
+inline SlowPathReturnType dispatchToNextInstruction(CodeBlock* codeBlock, InstructionStream::Ref pc)
+{
+    RELEASE_ASSERT(!codeBlock->vm().exceptionForInspection());
+    if (Options::forceOSRExitToLLInt() || codeBlock->jitType() == JITType::InterpreterThunk) {
+        const Instruction* nextPC = pc.next().ptr();
+        auto nextBytecode = LLInt::getCodePtr<JSEntryPtrTag>(*nextPC);
+        return encodeResult(nextPC, nextBytecode.executableAddress());
+    }
+
+#if ENABLE(JIT)
+    ASSERT(codeBlock->jitType() == JITType::BaselineJIT);
+    BytecodeIndex nextBytecodeIndex = pc.next().index();
+    auto nextBytecode = codeBlock->jitCodeMap().find(nextBytecodeIndex);
+    return encodeResult(nullptr, nextBytecode.executableAddress());
+#endif
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
+extern "C" SlowPathReturnType slow_path_checkpoint_osr_exit_from_inlined_call(CallFrame* callFrame, EncodedJSValue result)
+{
+    // Since all our call checkpoints currently do is move the result into the destination, we can just do that here and return.
+    CodeBlock* codeBlock = callFrame->codeBlock();
+    VM& vm = codeBlock->vm();
+    SlowPathFrameTracer tracer(vm, callFrame);
+
+    std::unique_ptr<CheckpointOSRExitSideState> sideState = vm.findCheckpointOSRSideState(callFrame);
+    BytecodeIndex bytecodeIndex = sideState->bytecodeIndex;
+    auto pc = codeBlock->instructions().at(bytecodeIndex);
+
+    auto opcode = pc->opcodeID();
+    switch (opcode) {
+    case op_call_varargs: {
+        callFrame->uncheckedR(pc->as<OpCallVarargs>().m_dst) = JSValue::decode(result);
+        break;
+    }
+    case op_construct_varargs: {
+        callFrame->uncheckedR(pc->as<OpConstructVarargs>().m_dst) = JSValue::decode(result);
+        break;
+    }
+    // op_tail_call_varargs should never return if the thing it was calling was inlined.
+    default:
+        RELEASE_ASSERT_NOT_REACHED();
+        break;
+    }
+
+    return dispatchToNextInstruction(codeBlock, pc);
+}
+
+extern "C" SlowPathReturnType slow_path_checkpoint_osr_exit(CallFrame* callFrame, EncodedJSValue /* needed for cCall2 in CLoop */)
+{
+    CodeBlock* codeBlock = callFrame->codeBlock();
+    VM& vm = codeBlock->vm();
+    SlowPathFrameTracer tracer(vm, callFrame);
+    auto scope = DECLARE_THROW_SCOPE(vm);
+
+    JSGlobalObject* globalObject = codeBlock->globalObject();
+
+    std::unique_ptr<CheckpointOSRExitSideState> sideState = vm.findCheckpointOSRSideState(callFrame);
+    BytecodeIndex bytecodeIndex = sideState->bytecodeIndex;
+    ASSERT(bytecodeIndex.checkpoint());
+
+    auto pc = codeBlock->instructions().at(bytecodeIndex);
+
+    auto opcode = pc->opcodeID();
+    switch (opcode) {
+    case op_call_varargs:
+        handleVarargsCheckpoint(vm, callFrame, globalObject, pc->as<OpCallVarargs>(), *sideState.get());
+        break;
+    case op_construct_varargs:
+        handleVarargsCheckpoint(vm, callFrame, globalObject, pc->as<OpConstructVarargs>(), *sideState.get());
+        break;
+    case op_tail_call_varargs:
+        ASSERT_WITH_MESSAGE(pc.next()->opcodeID() == op_ret || pc.next()->opcodeID() == op_jmp, "We strongly assume all tail calls are followed by an op_ret (or sometimes a jmp to a ret).");
+        handleVarargsCheckpoint(vm, callFrame, globalObject, pc->as<OpTailCallVarargs>(), *sideState.get());
+        break;
+
+    default:
+        RELEASE_ASSERT_NOT_REACHED();
+        break;
+    }
+    if (UNLIKELY(scope.exception()))
+        return encodeResult(returnToThrow(vm), 0);
+
+    return dispatchToNextInstruction(codeBlock, pc);
+}
+
 extern "C" SlowPathReturnType llint_throw_stack_overflow_error(VM* vm, ProtoCallFrame* protoFrame)
 {
     CallFrame* callFrame = vm->topCallFrame;
     auto scope = DECLARE_THROW_SCOPE(*vm);
-
     JSGlobalObject* globalObject = nullptr;
     if (callFrame)
         globalObject = callFrame->lexicalGlobalObject(*vm);
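The new `slow_path_checkpoint_osr_exit` replays a varargs call from the saved side state: it rematerializes the spread arguments from `firstVarArg` onward, performs the call, and writes the result into the destination register. A minimal sketch of that flow, using plain `int`s and a fake frame as stand-ins for JSValue and the real call frame (names here are hypothetical, not JSC API):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Stand-in for a call frame's virtual registers.
struct FakeFrame {
    std::vector<int> registers;
};

// Sketch of handleVarargsCheckpoint for op_call_varargs: gather the spread
// arguments starting at firstVarArg (like loadVarargs), invoke the callee,
// then store the result at dst (like callFrame->uncheckedR(bytecode.m_dst)).
int replayVarargsCall(FakeFrame& frame,
                      const std::vector<int>& spreadSource,
                      unsigned firstVarArg,
                      const std::function<int(const std::vector<int>&)>& callee,
                      unsigned dst)
{
    std::vector<int> args(spreadSource.begin() + firstVarArg, spreadSource.end());
    int result = callee(args);
    frame.registers.at(dst) = result;
    return result;
}
```

The real slow path additionally handles stack-overflow and exception checks between each of these steps, as the diff above shows.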
diff --git a/Source/JavaScriptCore/llint/LLIntSlowPaths.h b/Source/JavaScriptCore/llint/LLIntSlowPaths.h
index 2ff28b0..7c75db3 100644
--- a/Source/JavaScriptCore/llint/LLIntSlowPaths.h
+++ b/Source/JavaScriptCore/llint/LLIntSlowPaths.h
@@ -136,6 +136,8 @@
 LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_super_sampler_end);
 LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_out_of_line_jump_target);
 extern "C" SlowPathReturnType llint_throw_stack_overflow_error(VM*, ProtoCallFrame*) WTF_INTERNAL;
+extern "C" SlowPathReturnType slow_path_checkpoint_osr_exit(CallFrame* callFrame, EncodedJSValue unused) WTF_INTERNAL;
+extern "C" SlowPathReturnType slow_path_checkpoint_osr_exit_from_inlined_call(CallFrame* callFrame, EncodedJSValue callResult) WTF_INTERNAL;
 #if ENABLE(C_LOOP)
 extern "C" SlowPathReturnType llint_stack_check_at_vm_entry(VM*, Register*) WTF_INTERNAL;
 #endif
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
index a73259b..4f7e387 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
@@ -1118,7 +1118,7 @@
     defineOSRExitReturnLabel(opName, size)
 
     restoreStackPointerAfterCall()
-    loadi ArgumentCountIncludingThis + TagOffset[cfr], PC
+    loadi LLIntReturnPC[cfr], PC
 end
 
 macro arrayProfile(offset, cellAndIndexingType, metadata, scratch)
@@ -1869,13 +1869,17 @@
 callOp(construct, OpConstruct, prepareForRegularCall, macro (getu, metadata) end)
 
 
-macro doCallVarargs(opcodeName, size, opcodeStruct, dispatch, frameSlowPath, slowPath, prepareCall)
-    callSlowPath(frameSlowPath)
+macro branchIfException(exceptionTarget)
     loadp CodeBlock[cfr], t3
     loadp CodeBlock::m_vm[t3], t3
     btpz VM::m_exception[t3], .noException
-    jmp _llint_throw_from_slow_path_trampoline
-.noException:
+    jmp exceptionTarget
+.noException:
+end
+
+macro doCallVarargs(opcodeName, size, opcodeStruct, dispatch, frameSlowPath, slowPath, prepareCall)
+    callSlowPath(frameSlowPath)
+    branchIfException(_llint_throw_from_slow_path_trampoline)
     # calleeFrame in r1
     if JSVALUE64
         move r1, sp
@@ -2030,6 +2034,40 @@
 end)
 
 
+op(checkpoint_osr_exit_from_inlined_call_trampoline, macro ()
+    if JSVALUE64 and not (C_LOOP or C_LOOP_WIN)
+        restoreStackPointerAfterCall()
+
+        # Make sure we move r0 to a1 first since r0 might be the same as a0, for instance, on ARM.
+        move r0, a1
+        move cfr, a0
+        # We don't call saveStateForCCall() because we are going to use the bytecodeIndex from our side state.
+        cCall2(_slow_path_checkpoint_osr_exit_from_inlined_call)
+        restoreStateAfterCCall()
+        branchIfException(_llint_throw_from_slow_path_trampoline)
+        jmp r1, JSEntryPtrTag
+    else
+        notSupported()
+    end
+end)
+
+op(checkpoint_osr_exit_trampoline, macro ()
+    # FIXME: We can probably dispatch to the checkpoint handler directly, but this was easier
+    # and probably doesn't matter for performance.
+    if JSVALUE64 and not (C_LOOP or C_LOOP_WIN)
+        restoreStackPointerAfterCall()
+
+        move cfr, a0
+        # We don't call saveStateForCCall() because we are going to use the bytecodeIndex from our side state.
+        cCall2(_slow_path_checkpoint_osr_exit)
+        restoreStateAfterCCall()
+        branchIfException(_llint_throw_from_slow_path_trampoline)
+        jmp r1, JSEntryPtrTag
+    else
+        notSupported()
+    end
+end)
+
 # Lastly, make sure that we can link even though we don't support all opcodes.
 # These opcodes should never arise when using LLInt or either JIT. We assert
 # as much.
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
index 93c2902..07c2ee9 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
@@ -142,7 +142,7 @@
 macro callSlowPath(slowPath)
     prepareStateForCCall()
     move cfr, a0
     move PC, a1
     cCall2(slowPath)
     restoreStateAfterCCall()
 end
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
index 0f7e6c3..b6ee99e 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
@@ -23,6 +23,13 @@
 
 
 # Utilities.
+macro storePC()
+    storei PC, LLIntReturnPC[cfr]
+end
+
+macro loadPC()
+    loadi LLIntReturnPC[cfr], PC
+end
 
 macro getuOperandNarrow(opcodeStruct, fieldName, dst)
     loadb constexpr %opcodeStruct%_%fieldName%_index + OpcodeIDNarrowSize[PB, PC, 1], dst
@@ -74,7 +81,7 @@
 
 # After calling, calling bytecode is claiming input registers are not used.
 macro dispatchAfterCall(size, opcodeStruct, dispatch)
-    loadi ArgumentCountIncludingThis + TagOffset[cfr], PC
+    loadPC()
     loadp CodeBlock[cfr], PB
     loadp CodeBlock::m_instructionsRawPointer[PB], PB
     get(size, opcodeStruct, m_dst, t1)
@@ -372,7 +379,7 @@
 
 # Call a slow path for call opcodes.
 macro callCallSlowPath(slowPath, action)
-    storei PC, ArgumentCountIncludingThis + TagOffset[cfr]
+    storePC()
     prepareStateForCCall()
     move cfr, a0
     move PC, a1
@@ -381,20 +388,20 @@
 end
 
 macro callTrapHandler(throwHandler)
-    storei PC, ArgumentCountIncludingThis + TagOffset[cfr]
+    storePC()
     prepareStateForCCall()
     move cfr, a0
     move PC, a1
     cCall2(_llint_slow_path_handle_traps)
     btpnz r0, throwHandler
-    loadi ArgumentCountIncludingThis + TagOffset[cfr], PC
+    loadPC()
 end
 
 macro checkSwitchToJITForLoop()
     checkSwitchToJIT(
         1,
         macro()
-            storei PC, ArgumentCountIncludingThis + TagOffset[cfr]
+            storePC()
             prepareStateForCCall()
             move cfr, a0
             move PC, a1
@@ -403,7 +410,7 @@
             move r1, sp
             jmp r0, JSEntryPtrTag
         .recover:
-            loadi ArgumentCountIncludingThis + TagOffset[cfr], PC
+            loadPC()
         end)
 end
 
@@ -2047,7 +2054,7 @@
         addp cfr, t3
         storeq t2, Callee[t3]
         getu(size, opcodeStruct, m_argc, t2)
-        storei PC, ArgumentCountIncludingThis + TagOffset[cfr]
+        storePC()
         storei t2, ArgumentCountIncludingThis + PayloadOffset[t3]
         move t3, sp
         prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag)
diff --git a/Source/JavaScriptCore/runtime/ArgList.h b/Source/JavaScriptCore/runtime/ArgList.h
index fada3d6..949b9b0 100644
--- a/Source/JavaScriptCore/runtime/ArgList.h
+++ b/Source/JavaScriptCore/runtime/ArgList.h
@@ -133,6 +133,17 @@
 
     void overflowCheckNotNeeded() { clearNeedsOverflowCheck(); }
 
+    template<typename Functor>
+    void fill(size_t count, const Functor& func)
+    {
+        ASSERT(!m_size);
+        ensureCapacity(count);
+        if (Base::hasOverflowed())
+            return;
+        m_size = count;
+        func(reinterpret_cast<JSValue*>(&slotFor(0)));
+    }
+
 private:
     void expandCapacity();
     void expandCapacity(int newCapacity);
diff --git a/Source/JavaScriptCore/runtime/CachedTypes.cpp b/Source/JavaScriptCore/runtime/CachedTypes.cpp
index b3c73cf..a82661a 100644
--- a/Source/JavaScriptCore/runtime/CachedTypes.cpp
+++ b/Source/JavaScriptCore/runtime/CachedTypes.cpp
@@ -1794,6 +1794,7 @@
     unsigned needsClassFieldInitializer() const { return m_needsClassFieldInitializer; }
     unsigned evalContextType() const { return m_evalContextType; }
     unsigned hasTailCalls() const { return m_hasTailCalls; }
+    unsigned hasCheckpoints() const { return m_hasCheckpoints; }
     unsigned lineCount() const { return m_lineCount; }
     unsigned endColumn() const { return m_endColumn; }
 
@@ -1827,6 +1828,7 @@
     unsigned m_evalContextType : 2;
     unsigned m_hasTailCalls : 1;
     unsigned m_codeType : 2;
+    unsigned m_hasCheckpoints : 1;
 
     CodeFeatures m_features;
     SourceParseMode m_parseMode;
@@ -2025,6 +2027,7 @@
 
     , m_didOptimize(static_cast<unsigned>(MixedTriState))
     , m_age(0)
+    , m_hasCheckpoints(cachedCodeBlock.hasCheckpoints())
 
     , m_features(cachedCodeBlock.features())
     , m_parseMode(cachedCodeBlock.parseMode())
@@ -2215,6 +2218,7 @@
     m_parseMode = codeBlock.m_parseMode;
     m_codeGenerationMode = codeBlock.m_codeGenerationMode;
     m_codeType = codeBlock.m_codeType;
+    m_hasCheckpoints = codeBlock.m_hasCheckpoints;
 
     m_metadata.encode(encoder, codeBlock.m_metadata.get());
     m_rareData.encode(encoder, codeBlock.m_rareData.get());
diff --git a/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp b/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
index 875b302..2a33bd3 100644
--- a/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
+++ b/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
@@ -98,8 +98,8 @@
     BEGIN_NO_SET_PC();                    \
     SET_PC_FOR_STUBS()
 
-#define GET(operand) (callFrame->uncheckedR(operand.offset()))
-#define GET_C(operand) (callFrame->r(operand.offset()))
+#define GET(operand) (callFrame->uncheckedR(operand))
+#define GET_C(operand) (callFrame->r(operand))
 
 #define RETURN_TWO(first, second) do {       \
         return encodeResult(first, second);        \
@@ -1159,8 +1159,7 @@
 {
     BEGIN();
     auto bytecode = pc->as<OpCreateLexicalEnvironment>();
-    int scopeReg = bytecode.m_scope.offset();
-    JSScope* currentScope = callFrame->uncheckedR(scopeReg).Register::scope();
+    JSScope* currentScope = callFrame->uncheckedR(bytecode.m_scope).Register::scope();
     SymbolTable* symbolTable = jsCast<SymbolTable*>(GET_C(bytecode.m_symbolTable).jsValue());
     JSValue initialValue = GET_C(bytecode.m_initialValue).jsValue();
     ASSERT(initialValue == jsUndefined() || initialValue == jsTDZValue());
@@ -1175,8 +1174,7 @@
     JSObject* newScope = GET_C(bytecode.m_newScope).jsValue().toObject(globalObject);
     CHECK_EXCEPTION();
 
-    int scopeReg = bytecode.m_currentScope.offset();
-    JSScope* currentScope = callFrame->uncheckedR(scopeReg).Register::scope();
+    JSScope* currentScope = callFrame->uncheckedR(bytecode.m_currentScope).Register::scope();
     RETURN(JSWithScope::create(vm, globalObject, currentScope, newScope));
 }
 
@@ -1185,7 +1183,7 @@
     BEGIN();
     auto bytecode = pc->as<OpResolveScopeForHoistingFuncDeclInEval>();
     const Identifier& ident = codeBlock->identifier(bytecode.m_property);
-    JSScope* scope = callFrame->uncheckedR(bytecode.m_scope.offset()).Register::scope();
+    JSScope* scope = callFrame->uncheckedR(bytecode.m_scope).Register::scope();
     JSValue resolvedScope = JSScope::resolveScopeForHoistingFuncDeclInEval(globalObject, scope, ident);
 
     CHECK_EXCEPTION();
@@ -1199,7 +1197,7 @@
     auto bytecode = pc->as<OpResolveScope>();
     auto& metadata = bytecode.metadata(codeBlock);
     const Identifier& ident = codeBlock->identifier(bytecode.m_var);
-    JSScope* scope = callFrame->uncheckedR(bytecode.m_scope.offset()).Register::scope();
+    JSScope* scope = callFrame->uncheckedR(bytecode.m_scope).Register::scope();
     JSObject* resolvedScope = JSScope::resolve(globalObject, scope, ident);
     // Proxy can throw an error here, e.g. Proxy in with statement's @unscopables.
     CHECK_EXCEPTION();
@@ -1444,7 +1442,7 @@
 {
     BEGIN();
     auto bytecode = pc->as<OpNewArrayBuffer>();
-    ASSERT(codeBlock->isConstantRegisterIndex(bytecode.m_immutableButterfly.offset()));
+    ASSERT(bytecode.m_immutableButterfly.isConstant());
     JSImmutableButterfly* immutableButterfly = bitwise_cast<JSImmutableButterfly*>(GET_C(bytecode.m_immutableButterfly).jsValue().asCell());
     auto& profile = bytecode.metadata(codeBlock).m_arrayAllocationProfile;
 
@@ -1463,7 +1461,7 @@
         // We also cannot allocate a new butterfly from compilation threads since it's invalid to allocate cells from
         // a compilation thread.
         WTF::storeStoreFence();
-        codeBlock->constantRegister(bytecode.m_immutableButterfly.offset()).set(vm, codeBlock, immutableButterfly);
+        codeBlock->constantRegister(bytecode.m_immutableButterfly).set(vm, codeBlock, immutableButterfly);
         WTF::storeStoreFence();
     }
 
diff --git a/Source/JavaScriptCore/runtime/ConstructData.cpp b/Source/JavaScriptCore/runtime/ConstructData.cpp
index 4bd192a..47b2474 100644
--- a/Source/JavaScriptCore/runtime/ConstructData.cpp
+++ b/Source/JavaScriptCore/runtime/ConstructData.cpp
@@ -36,6 +36,11 @@
 
 JSObject* construct(JSGlobalObject* globalObject, JSValue constructorObject, const ArgList& args, const char* errorMessage)
 {
+    return construct(globalObject, constructorObject, constructorObject, args, errorMessage);
+}
+
+JSObject* construct(JSGlobalObject* globalObject, JSValue constructorObject, JSValue newTarget, const ArgList& args, const char* errorMessage)
+{
     VM& vm = globalObject->vm();
     auto scope = DECLARE_THROW_SCOPE(vm);
 
@@ -46,10 +51,9 @@
         return nullptr;
     }
 
-    RELEASE_AND_RETURN(scope, construct(globalObject, constructorObject, constructType, constructData, args, constructorObject));
+    RELEASE_AND_RETURN(scope, construct(globalObject, constructorObject, constructType, constructData, args, newTarget));
 }
 
-
 JSObject* construct(JSGlobalObject* globalObject, JSValue constructorObject, ConstructType constructType, const ConstructData& constructData, const ArgList& args, JSValue newTarget)
 {
     VM& vm = globalObject->vm();
diff --git a/Source/JavaScriptCore/runtime/ConstructData.h b/Source/JavaScriptCore/runtime/ConstructData.h
index d0ae00b..fb61974 100644
--- a/Source/JavaScriptCore/runtime/ConstructData.h
+++ b/Source/JavaScriptCore/runtime/ConstructData.h
@@ -60,6 +60,8 @@
 
 // Convenience wrapper so you don't need to deal with CallData and CallType unless you are going to use them.
 JS_EXPORT_PRIVATE JSObject* construct(JSGlobalObject*, JSValue functionObject, const ArgList&, const char* errorMessage);
+JS_EXPORT_PRIVATE JSObject* construct(JSGlobalObject*, JSValue functionObject, JSValue newTarget, const ArgList&, const char* errorMessage);
+
 JS_EXPORT_PRIVATE JSObject* construct(JSGlobalObject*, JSValue constructor, ConstructType, const ConstructData&, const ArgList&, JSValue newTarget);
 
 ALWAYS_INLINE JSObject* construct(JSGlobalObject* globalObject, JSValue constructorObject, ConstructType constructType, const ConstructData& constructData, const ArgList& args)
diff --git a/Source/JavaScriptCore/runtime/DirectArguments.cpp b/Source/JavaScriptCore/runtime/DirectArguments.cpp
index a6795da..4b3461d 100644
--- a/Source/JavaScriptCore/runtime/DirectArguments.cpp
+++ b/Source/JavaScriptCore/runtime/DirectArguments.cpp
@@ -137,20 +137,19 @@
     m_mappedArguments.at(index, internalLength()) = true;
 }
 
-void DirectArguments::copyToArguments(JSGlobalObject* globalObject, CallFrame* callFrame, VirtualRegister firstElementDest, unsigned offset, unsigned length)
+void DirectArguments::copyToArguments(JSGlobalObject* globalObject, JSValue* firstElementDest, unsigned offset, unsigned length)
 {
     if (!m_mappedArguments) {
         unsigned limit = std::min(length + offset, m_length);
         unsigned i;
-        VirtualRegister start = firstElementDest - offset;
         for (i = offset; i < limit; ++i)
-            callFrame->r(start + i) = storage()[i].get();
+            firstElementDest[i - offset] = storage()[i].get();
         for (; i < length; ++i)
-            callFrame->r(start + i) = get(globalObject, i);
+            firstElementDest[i - offset] = get(globalObject, i);
         return;
     }
 
-    GenericArguments::copyToArguments(globalObject, callFrame, firstElementDest, offset, length);
+    GenericArguments::copyToArguments(globalObject, firstElementDest, offset, length);
 }
 
 unsigned DirectArguments::mappedArgumentsSize()
diff --git a/Source/JavaScriptCore/runtime/DirectArguments.h b/Source/JavaScriptCore/runtime/DirectArguments.h
index 0a6b1a0..8d9642a 100644
--- a/Source/JavaScriptCore/runtime/DirectArguments.h
+++ b/Source/JavaScriptCore/runtime/DirectArguments.h
@@ -144,7 +144,7 @@
         return GenericArguments<DirectArguments>::isModifiedArgumentDescriptor(index, m_length);
     }
 
-    void copyToArguments(JSGlobalObject*, CallFrame*, VirtualRegister firstElementDest, unsigned offset, unsigned length);
+    void copyToArguments(JSGlobalObject*, JSValue* firstElementDest, unsigned offset, unsigned length);
 
     DECLARE_INFO;
     
diff --git a/Source/JavaScriptCore/runtime/GenericArguments.h b/Source/JavaScriptCore/runtime/GenericArguments.h
index b31b70a..074c216 100644
--- a/Source/JavaScriptCore/runtime/GenericArguments.h
+++ b/Source/JavaScriptCore/runtime/GenericArguments.h
@@ -59,7 +59,7 @@
     void setModifiedArgumentDescriptor(VM&, unsigned index, unsigned length);
     bool isModifiedArgumentDescriptor(unsigned index, unsigned length);
 
-    void copyToArguments(JSGlobalObject*, CallFrame*, VirtualRegister firstElementDest, unsigned offset, unsigned length);
+    void copyToArguments(JSGlobalObject*, JSValue* firstElementDest, unsigned offset, unsigned length);
 
     using ModifiedArgumentsPtr = CagedBarrierPtr<Gigacage::Primitive, bool>;
     ModifiedArgumentsPtr m_modifiedArgumentsDescriptor;
diff --git a/Source/JavaScriptCore/runtime/GenericArgumentsInlines.h b/Source/JavaScriptCore/runtime/GenericArgumentsInlines.h
index 2182a62..e0aabf5 100644
--- a/Source/JavaScriptCore/runtime/GenericArgumentsInlines.h
+++ b/Source/JavaScriptCore/runtime/GenericArgumentsInlines.h
@@ -297,7 +297,7 @@
 }
 
 template<typename Type>
-void GenericArguments<Type>::copyToArguments(JSGlobalObject* globalObject, CallFrame* callFrame, VirtualRegister firstElementDest, unsigned offset, unsigned length)
+void GenericArguments<Type>::copyToArguments(JSGlobalObject* globalObject, JSValue* firstElementDest, unsigned offset, unsigned length)
 {
     VM& vm = globalObject->vm();
     auto scope = DECLARE_THROW_SCOPE(vm);
@@ -305,9 +305,9 @@
     Type* thisObject = static_cast<Type*>(this);
     for (unsigned i = 0; i < length; ++i) {
         if (thisObject->isMappedArgument(i + offset))
-            callFrame->r(firstElementDest + i) = thisObject->getIndexQuickly(i + offset);
+            firstElementDest[i] = thisObject->getIndexQuickly(i + offset);
         else {
-            callFrame->r(firstElementDest + i) = get(globalObject, i + offset);
+            firstElementDest[i] = get(globalObject, i + offset);
             RETURN_IF_EXCEPTION(scope, void());
         }
     }
diff --git a/Source/JavaScriptCore/runtime/JSArray.cpp b/Source/JavaScriptCore/runtime/JSArray.cpp
index 44c7e52..6d07591 100644
--- a/Source/JavaScriptCore/runtime/JSArray.cpp
+++ b/Source/JavaScriptCore/runtime/JSArray.cpp
@@ -1230,7 +1230,7 @@
         args.append(get(globalObject, i));
 }
 
-void JSArray::copyToArguments(JSGlobalObject* globalObject, CallFrame* callFrame, VirtualRegister firstElementDest, unsigned offset, unsigned length)
+void JSArray::copyToArguments(JSGlobalObject* globalObject, JSValue* firstElementDest, unsigned offset, unsigned length)
 {
     VM& vm = globalObject->vm();
     auto scope = DECLARE_THROW_SCOPE(vm);
@@ -1269,7 +1269,7 @@
             double v = butterfly->contiguousDouble().at(this, i);
             if (v != v)
                 break;
-            callFrame->r(firstElementDest + i - offset) = JSValue(JSValue::EncodeAsDouble, v);
+            firstElementDest[i - offset] = JSValue(JSValue::EncodeAsDouble, v);
         }
         break;
     }
@@ -1294,11 +1294,11 @@
         WriteBarrier<Unknown>& v = vector[i];
         if (!v)
             break;
-        callFrame->r(firstElementDest + i - offset) = v.get();
+        firstElementDest[i - offset] = v.get();
     }
     
     for (; i < length; ++i) {
-        callFrame->r(firstElementDest + i - offset) = get(globalObject, i);
+        firstElementDest[i - offset] = get(globalObject, i);
         RETURN_IF_EXCEPTION(scope, void());
     }
 }
diff --git a/Source/JavaScriptCore/runtime/JSArray.h b/Source/JavaScriptCore/runtime/JSArray.h
index 317949c..31165b5 100644
--- a/Source/JavaScriptCore/runtime/JSArray.h
+++ b/Source/JavaScriptCore/runtime/JSArray.h
@@ -163,7 +163,7 @@
     }
 
     JS_EXPORT_PRIVATE void fillArgList(JSGlobalObject*, MarkedArgumentBuffer&);
-    JS_EXPORT_PRIVATE void copyToArguments(JSGlobalObject*, CallFrame*, VirtualRegister firstElementDest, unsigned offset, unsigned length);
+    JS_EXPORT_PRIVATE void copyToArguments(JSGlobalObject*, JSValue* firstElementDest, unsigned offset, unsigned length);
 
     JS_EXPORT_PRIVATE bool isIteratorProtocolFastAndNonObservable();
 
diff --git a/Source/JavaScriptCore/runtime/JSImmutableButterfly.cpp b/Source/JavaScriptCore/runtime/JSImmutableButterfly.cpp
index e32e949..e490975 100644
--- a/Source/JavaScriptCore/runtime/JSImmutableButterfly.cpp
+++ b/Source/JavaScriptCore/runtime/JSImmutableButterfly.cpp
@@ -46,13 +46,13 @@
     visitor.appendValuesHidden(butterfly->contiguous().data(), butterfly->publicLength());
 }
 
-void JSImmutableButterfly::copyToArguments(JSGlobalObject*, CallFrame* callFrame, VirtualRegister firstElementDest, unsigned offset, unsigned length)
+void JSImmutableButterfly::copyToArguments(JSGlobalObject*, JSValue* firstElementDest, unsigned offset, unsigned length)
 {
     for (unsigned i = 0; i < length; ++i) {
         if ((i + offset) < publicLength())
-            callFrame->r(firstElementDest + i) = get(i + offset);
+            firstElementDest[i] = get(i + offset);
         else
-            callFrame->r(firstElementDest + i) = jsUndefined();
+            firstElementDest[i] = jsUndefined();
     }
 }
 
diff --git a/Source/JavaScriptCore/runtime/JSImmutableButterfly.h b/Source/JavaScriptCore/runtime/JSImmutableButterfly.h
index acb6848..07e1a8a 100644
--- a/Source/JavaScriptCore/runtime/JSImmutableButterfly.h
+++ b/Source/JavaScriptCore/runtime/JSImmutableButterfly.h
@@ -147,7 +147,7 @@
 
     static void visitChildren(JSCell*, SlotVisitor&);
 
-    void copyToArguments(JSGlobalObject*, CallFrame*, VirtualRegister firstElementDest, unsigned offset, unsigned length);
+    void copyToArguments(JSGlobalObject*, JSValue* firstElementDest, unsigned offset, unsigned length);
 
     template<typename, SubspaceAccess>
     static CompleteSubspace* subspaceFor(VM& vm)
diff --git a/Source/JavaScriptCore/runtime/JSLock.cpp b/Source/JavaScriptCore/runtime/JSLock.cpp
index a6eb0ea..16aa369 100644
--- a/Source/JavaScriptCore/runtime/JSLock.cpp
+++ b/Source/JavaScriptCore/runtime/JSLock.cpp
@@ -204,6 +204,7 @@
 {   
     RefPtr<VM> vm = m_vm;
     if (vm) {
+        RELEASE_ASSERT_WITH_MESSAGE(!vm->hasCheckpointOSRSideState(), "Releasing JSLock but pending checkpoint side state still available");
         vm->drainMicrotasks();
 
         if (!vm->topCallFrame)
diff --git a/Source/JavaScriptCore/runtime/ModuleProgramExecutable.cpp b/Source/JavaScriptCore/runtime/ModuleProgramExecutable.cpp
index db05254..596808c 100644
--- a/Source/JavaScriptCore/runtime/ModuleProgramExecutable.cpp
+++ b/Source/JavaScriptCore/runtime/ModuleProgramExecutable.cpp
@@ -74,7 +74,7 @@
 
     executable->m_unlinkedModuleProgramCodeBlock.set(globalObject->vm(), executable, unlinkedModuleProgramCode);
 
-    executable->m_moduleEnvironmentSymbolTable.set(globalObject->vm(), executable, jsCast<SymbolTable*>(unlinkedModuleProgramCode->constantRegister(unlinkedModuleProgramCode->moduleEnvironmentSymbolTableConstantRegisterOffset()).get())->cloneScopePart(globalObject->vm()));
+    executable->m_moduleEnvironmentSymbolTable.set(globalObject->vm(), executable, jsCast<SymbolTable*>(unlinkedModuleProgramCode->constantRegister(VirtualRegister(unlinkedModuleProgramCode->moduleEnvironmentSymbolTableConstantRegisterOffset())).get())->cloneScopePart(globalObject->vm()));
 
     return executable;
 }
diff --git a/Source/JavaScriptCore/runtime/Options.cpp b/Source/JavaScriptCore/runtime/Options.cpp
index 176f99d..74ccdeb 100644
--- a/Source/JavaScriptCore/runtime/Options.cpp
+++ b/Source/JavaScriptCore/runtime/Options.cpp
@@ -509,11 +509,9 @@
     if (Options::softReservedZoneSize() < Options::reservedZoneSize() + minimumReservedZoneSize)
         Options::softReservedZoneSize() = Options::reservedZoneSize() + minimumReservedZoneSize;
 
-#if USE(JSVALUE32_64)
     // FIXME: Make probe OSR exit work on 32-bit:
     // https://bugs.webkit.org/show_bug.cgi?id=177956
     Options::useProbeOSRExit() = false;
-#endif
 
     if (!Options::useCodeCache())
         Options::diskCachePath() = nullptr;
diff --git a/Source/JavaScriptCore/runtime/ScopedArguments.cpp b/Source/JavaScriptCore/runtime/ScopedArguments.cpp
index 081d978..3c740d4 100644
--- a/Source/JavaScriptCore/runtime/ScopedArguments.cpp
+++ b/Source/JavaScriptCore/runtime/ScopedArguments.cpp
@@ -148,9 +148,9 @@
         storage()[i - namedLength].clear();
 }
 
-void ScopedArguments::copyToArguments(JSGlobalObject* globalObject, CallFrame* callFrame, VirtualRegister firstElementDest, unsigned offset, unsigned length)
+void ScopedArguments::copyToArguments(JSGlobalObject* globalObject, JSValue* firstElementDest, unsigned offset, unsigned length)
 {
-    GenericArguments::copyToArguments(globalObject, callFrame, firstElementDest, offset, length);
+    GenericArguments::copyToArguments(globalObject, firstElementDest, offset, length);
 }
 
 } // namespace JSC
diff --git a/Source/JavaScriptCore/runtime/ScopedArguments.h b/Source/JavaScriptCore/runtime/ScopedArguments.h
index f85e47d..e58cb22 100644
--- a/Source/JavaScriptCore/runtime/ScopedArguments.h
+++ b/Source/JavaScriptCore/runtime/ScopedArguments.h
@@ -142,7 +142,7 @@
         return GenericArguments<ScopedArguments>::isModifiedArgumentDescriptor(index, m_table->length());
     }
 
-    void copyToArguments(JSGlobalObject*, CallFrame*, VirtualRegister firstElementDest, unsigned offset, unsigned length);
+    void copyToArguments(JSGlobalObject*, JSValue* firstElementDest, unsigned offset, unsigned length);
 
     DECLARE_INFO;
     
diff --git a/Source/JavaScriptCore/runtime/VM.cpp b/Source/JavaScriptCore/runtime/VM.cpp
index 8803a71..3423a2e 100644
--- a/Source/JavaScriptCore/runtime/VM.cpp
+++ b/Source/JavaScriptCore/runtime/VM.cpp
@@ -34,6 +34,7 @@
 #include "BooleanObject.h"
 #include "BuiltinExecutables.h"
 #include "BytecodeIntrinsicRegistry.h"
+#include "CheckpointOSRExitSideState.h"
 #include "ClonedArguments.h"
 #include "CodeBlock.h"
 #include "CodeCache.h"
@@ -1062,8 +1063,31 @@
         }
     }
 }
+
+void VM::scanSideState(ConservativeRoots& roots) const
+{
+    ASSERT(heap.worldIsStopped());
+    for (const auto& iter : m_checkpointSideState) {
+        static_assert(sizeof(iter.value->tmps) / sizeof(JSValue) == maxNumCheckpointTmps);
+        roots.add(iter.value->tmps, iter.value->tmps + maxNumCheckpointTmps);
+    }
+}
 #endif
 
+void VM::addCheckpointOSRSideState(CallFrame* callFrame, std::unique_ptr<CheckpointOSRExitSideState>&& payload)
+{
+    ASSERT(currentThreadIsHoldingAPILock());
+    auto addResult = m_checkpointSideState.add(callFrame, WTFMove(payload));
+    ASSERT_UNUSED(addResult, addResult.isNewEntry);
+}
+
+std::unique_ptr<CheckpointOSRExitSideState> VM::findCheckpointOSRSideState(CallFrame* callFrame)
+{
+    ASSERT(currentThreadIsHoldingAPILock());
+    // Take ownership: the entry is removed, so side state is consumed exactly once.
+    return m_checkpointSideState.take(callFrame);
+}
+
 void logSanitizeStack(VM& vm)
 {
     if (Options::verboseSanitizeStack() && vm.topCallFrame) {
diff --git a/Source/JavaScriptCore/runtime/VM.h b/Source/JavaScriptCore/runtime/VM.h
index f771d092..7da32a7 100644
--- a/Source/JavaScriptCore/runtime/VM.h
+++ b/Source/JavaScriptCore/runtime/VM.h
@@ -103,10 +103,12 @@
 class BuiltinExecutables;
 class BytecodeIntrinsicRegistry;
 class CallFrame;
+struct CheckpointOSRExitSideState;
 class CodeBlock;
 class CodeCache;
 class CommonIdentifiers;
 class CompactVariableMap;
+class ConservativeRoots;
 class CustomGetterSetter;
 class DOMAttributeGetterSetter;
 class DateInstance;
@@ -958,6 +960,11 @@
 
     void gatherScratchBufferRoots(ConservativeRoots&);
 
+    void addCheckpointOSRSideState(CallFrame*, std::unique_ptr<CheckpointOSRExitSideState>&&);
+    std::unique_ptr<CheckpointOSRExitSideState> findCheckpointOSRSideState(CallFrame*);
+    bool hasCheckpointOSRSideState() const { return m_checkpointSideState.size(); }
+    void scanSideState(ConservativeRoots&) const;
+
     VMEntryScope* entryScope;
 
     JSObject* stringRecursionCheckFirstObject { nullptr };
@@ -1208,6 +1215,7 @@
     Lock m_scratchBufferLock;
     Vector<ScratchBuffer*> m_scratchBuffers;
     size_t m_sizeOfLastScratchBuffer { 0 };
+    HashMap<CallFrame*, std::unique_ptr<CheckpointOSRExitSideState>> m_checkpointSideState;
     InlineWatchpointSet m_primitiveGigacageEnabled;
     FunctionHasExecutedCache m_functionHasExecutedCache;
     std::unique_ptr<ControlFlowProfiler> m_controlFlowProfiler;
diff --git a/Source/JavaScriptCore/tools/VMInspector.cpp b/Source/JavaScriptCore/tools/VMInspector.cpp
index 81c9f1f..e148400 100644
--- a/Source/JavaScriptCore/tools/VMInspector.cpp
+++ b/Source/JavaScriptCore/tools/VMInspector.cpp
@@ -397,8 +397,8 @@
     const Register* it;
     const Register* end;
 
-    it = callFrame->registers() + CallFrameSlot::thisArgument + callFrame->argumentCount();
-    end = callFrame->registers() + CallFrameSlot::thisArgument - 1;
+    it = callFrame->registers() + (CallFrameSlot::thisArgument + callFrame->argumentCount());
+    end = callFrame->registers() + (CallFrameSlot::thisArgument - 1);
     while (it > end) {
         JSValue v = it->jsValue();
         int registerNumber = it - callFrame->registers();
diff --git a/Source/JavaScriptCore/wasm/WasmFunctionCodeBlock.h b/Source/JavaScriptCore/wasm/WasmFunctionCodeBlock.h
index f688a7c..2f4b1c7 100644
--- a/Source/JavaScriptCore/wasm/WasmFunctionCodeBlock.h
+++ b/Source/JavaScriptCore/wasm/WasmFunctionCodeBlock.h
@@ -72,11 +72,11 @@
     const Vector<uint64_t>& constants() const { return m_constants; }
     const InstructionStream& instructions() const { return *m_instructions; }
 
-    ALWAYS_INLINE uint64_t getConstant(int index) const { return m_constants[index - FirstConstantRegisterIndex]; }
-    ALWAYS_INLINE Type getConstantType(int index) const
+    ALWAYS_INLINE uint64_t getConstant(VirtualRegister reg) const { return m_constants[reg.toConstantIndex()]; }
+    ALWAYS_INLINE Type getConstantType(VirtualRegister reg) const
     {
         ASSERT(Options::dumpGeneratedWasmBytecodes());
-        return m_constantTypes[index - FirstConstantRegisterIndex];
+        return m_constantTypes[reg.toConstantIndex()];
     }
 
     void setInstructions(std::unique_ptr<InstructionStream>);
diff --git a/Source/JavaScriptCore/wasm/WasmLLIntGenerator.cpp b/Source/JavaScriptCore/wasm/WasmLLIntGenerator.cpp
index dd1a0f7..fc038f5 100644
--- a/Source/JavaScriptCore/wasm/WasmLLIntGenerator.cpp
+++ b/Source/JavaScriptCore/wasm/WasmLLIntGenerator.cpp
@@ -242,6 +242,9 @@
 
     void setParser(FunctionParser<LLIntGenerator>* parser) { m_parser = parser; };
 
+    // We need this for autogenerated templates used by JS bytecodes.
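No insert here; see note below.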
+    void setUsesCheckpoints() const { UNREACHABLE_FOR_PLATFORM(); }
+
     void dump(const ControlStack&, const Stack*) { }
 
 private:
@@ -628,14 +631,14 @@
             if (gprIndex < maxGPRIndex)
                 m_results.append(virtualRegisterForLocal(numberOfLLIntCalleeSaveRegisters + gprIndex++));
             else
-                m_results.append(virtualRegisterForArgument(stackIndex++));
+                m_results.append(virtualRegisterForArgumentIncludingThis(stackIndex++));
             break;
         case Type::F32:
         case Type::F64:
             if (fprIndex < maxFPRIndex)
                 m_results.append(virtualRegisterForLocal(numberOfLLIntCalleeSaveRegisters + fprIndex++));
             else
-                m_results.append(virtualRegisterForArgument(stackIndex++));
+                m_results.append(virtualRegisterForArgumentIncludingThis(stackIndex++));
             break;
         case Void:
         case Func:
@@ -670,7 +673,7 @@
         if (count < max)
             m_normalizedArguments[index] = registerArguments[count++];
         else
-            m_normalizedArguments[index] = virtualRegisterForArgument(stackIndex++);
+            m_normalizedArguments[index] = virtualRegisterForArgumentIncludingThis(stackIndex++);
     };
 
     for (uint32_t i = 0; i < signature.argumentCount(); i++) {
diff --git a/Source/JavaScriptCore/wasm/WasmOperations.cpp b/Source/JavaScriptCore/wasm/WasmOperations.cpp
index dbfb500..56ca547 100644
--- a/Source/JavaScriptCore/wasm/WasmOperations.cpp
+++ b/Source/JavaScriptCore/wasm/WasmOperations.cpp
@@ -744,7 +744,7 @@
     // for exceptions first load callFrameForCatch info call frame register before jumping
     // to the exception handler. If we did this, we could remove this terrible hack.
     // https://bugs.webkit.org/show_bug.cgi?id=170440
-    bitwise_cast<uint64_t*>(callFrame)[CallFrameSlot::callee] = bitwise_cast<uint64_t>(instance->module());
+    bitwise_cast<uint64_t*>(callFrame)[static_cast<int>(CallFrameSlot::callee)] = bitwise_cast<uint64_t>(instance->module());
     return vm.targetMachinePCForThrow;
 }
 
diff --git a/Source/JavaScriptCore/wasm/WasmSlowPaths.cpp b/Source/JavaScriptCore/wasm/WasmSlowPaths.cpp
index 927a698..54f636a 100644
--- a/Source/JavaScriptCore/wasm/WasmSlowPaths.cpp
+++ b/Source/JavaScriptCore/wasm/WasmSlowPaths.cpp
@@ -76,7 +76,7 @@
 
 #define READ(virtualRegister) \
     (virtualRegister.isConstant() \
-        ? JSValue::decode(CODE_BLOCK()->getConstant(virtualRegister.offset())) \
+        ? JSValue::decode(CODE_BLOCK()->getConstant(virtualRegister)) \
         : callFrame->r(virtualRegister))
 
 inline bool jitCompileAndSetHeuristics(Wasm::LLIntCallee* callee, Wasm::FunctionCodeBlock* codeBlock, Wasm::Instance* instance)
diff --git a/Source/WTF/ChangeLog b/Source/WTF/ChangeLog
index e0e07e6..a8b1a46 100644
--- a/Source/WTF/ChangeLog
+++ b/Source/WTF/ChangeLog
@@ -1,3 +1,29 @@
+2020-01-16  Keith Miller  <keith_miller@apple.com>
+
+        Reland bytecode checkpoints since bugs have been fixed
+        https://bugs.webkit.org/show_bug.cgi?id=206361
+
+        Unreviewed, reland.
+
+        The watch bugs have been fixed by https://trac.webkit.org/changeset/254674
+
+        * WTF.xcodeproj/project.pbxproj:
+        * wtf/Bitmap.h:
+        (WTF::Bitmap<bitmapSize, WordType>::invert):
+        (WTF::Bitmap<bitmapSize, WordType>::operator[]):
+        (WTF::Bitmap<bitmapSize, WordType>::operator[] const):
+        * wtf/CMakeLists.txt:
+        * wtf/EnumClassOperatorOverloads.h: Added.
+        * wtf/FastBitVector.h:
+        (WTF::FastBitReference::operator bool const):
+        (WTF::FastBitReference::operator|=):
+        (WTF::FastBitReference::operator&=):
+        (WTF::FastBitVector::fill):
+        (WTF::FastBitVector::grow):
+        * wtf/UnalignedAccess.h:
+        (WTF::unalignedLoad):
+        (WTF::unalignedStore):
+
 2020-01-16  Tim Horton  <timothy_horton@apple.com>
 
         Fix the build after r254701
diff --git a/Source/WTF/WTF.xcodeproj/project.pbxproj b/Source/WTF/WTF.xcodeproj/project.pbxproj
index 76d880a..635bded 100644
--- a/Source/WTF/WTF.xcodeproj/project.pbxproj
+++ b/Source/WTF/WTF.xcodeproj/project.pbxproj
@@ -393,6 +393,7 @@
 		5311BD571EA7E1A100525281 /* Signals.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Signals.h; sourceTree = "<group>"; };
 		5311BD591EA81A9600525281 /* ThreadMessage.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ThreadMessage.h; sourceTree = "<group>"; };
 		5311BD5B1EA822F900525281 /* ThreadMessage.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ThreadMessage.cpp; sourceTree = "<group>"; };
+		5338EBA423AB04D100382662 /* EnumClassOperatorOverloads.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = EnumClassOperatorOverloads.h; sourceTree = "<group>"; };
 		53534F291EC0E10E00141B2F /* MachExceptions.defs */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.mig; path = MachExceptions.defs; sourceTree = "<group>"; };
 		539EB0621D55284200C82EF7 /* LEBDecoder.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LEBDecoder.h; sourceTree = "<group>"; };
 		53EC253C1E95AD30000831B9 /* PriorityQueue.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = PriorityQueue.h; sourceTree = "<group>"; };
@@ -988,6 +989,7 @@
 				A8A47298151A825A004123FF /* dtoa.h */,
 				FE05FAE61FDB214300093230 /* DumbPtrTraits.h */,
 				AD653DA82006B6C200D820D7 /* DumbValueTraits.h */,
+				5338EBA423AB04D100382662 /* EnumClassOperatorOverloads.h */,
 				1AEA88E11D6BBCF400E5AD64 /* EnumTraits.h */,
 				AD7C434A1DD2A4A70026888B /* Expected.h */,
 				A8A4729F151A825A004123FF /* ExportMacros.h */,
diff --git a/Source/WTF/wtf/Bitmap.h b/Source/WTF/wtf/Bitmap.h
index 4c6f5d4..b46da9e 100644
--- a/Source/WTF/wtf/Bitmap.h
+++ b/Source/WTF/wtf/Bitmap.h
@@ -51,6 +51,7 @@
     size_t nextPossiblyUnset(size_t) const;
     void clear(size_t);
     void clearAll();
+    void invert();
     int64_t findRunOfZeros(size_t runLength) const;
     size_t count(size_t start = 0) const;
     size_t isEmpty() const;
@@ -102,6 +103,12 @@
             return !(*this == other);
         }
 
+        iterator& operator=(bool value)
+        {
+            m_bitmap->set(m_index, value);
+            return *this;
+        }
+
     private:
         const Bitmap* m_bitmap;
         size_t m_index;
@@ -111,6 +118,9 @@
     iterator begin() const { return iterator(*this, findBit(0, true)); }
     iterator end() const { return iterator(*this, bitmapSize); }
     
+    iterator operator[](size_t);
+    const iterator operator[](size_t) const;
+
     void mergeAndClear(Bitmap&);
     void setAndClear(Bitmap&);
     
@@ -225,6 +235,13 @@
 }
 
 template<size_t bitmapSize, typename WordType>
+inline void Bitmap<bitmapSize, WordType>::invert()
+{
+    for (size_t i = 0; i < words; ++i)
+        bits[i] = ~bits[i];
+}
+
+template<size_t bitmapSize, typename WordType>
 inline size_t Bitmap<bitmapSize, WordType>::nextPossiblyUnset(size_t start) const
 {
     if (!~bits[start / wordSize])
@@ -410,6 +427,19 @@
 }
 
 template<size_t bitmapSize, typename WordType>
+inline auto Bitmap<bitmapSize, WordType>::operator[](size_t index) -> iterator
+{
+    ASSERT(index < size());
+    return iterator(*this, index);
+}
+
+template<size_t bitmapSize, typename WordType>
+inline auto Bitmap<bitmapSize, WordType>::operator[](size_t index) const -> const iterator
+{
+    return (*const_cast<Bitmap<bitmapSize, WordType>*>(this))[index];
+}
+
+template<size_t bitmapSize, typename WordType>
 inline unsigned Bitmap<bitmapSize, WordType>::hash() const
 {
     unsigned result = 0;
diff --git a/Source/WTF/wtf/CMakeLists.txt b/Source/WTF/wtf/CMakeLists.txt
index 9664262..230427a 100644
--- a/Source/WTF/wtf/CMakeLists.txt
+++ b/Source/WTF/wtf/CMakeLists.txt
@@ -56,6 +56,7 @@
     DoublyLinkedList.h
     DumbPtrTraits.h
     DumbValueTraits.h
+    EnumClassOperatorOverloads.h
     EnumTraits.h
     Expected.h
     ExportMacros.h
diff --git a/Source/WTF/wtf/EnumClassOperatorOverloads.h b/Source/WTF/wtf/EnumClassOperatorOverloads.h
new file mode 100644
index 0000000..357ba2f
--- /dev/null
+++ b/Source/WTF/wtf/EnumClassOperatorOverloads.h
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) 2019 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include <type_traits>
+
+#define OVERLOAD_OPERATOR_FOR_ENUM_CLASS_WHEN(enumName, op, enableExpression) \
+    template<typename T> \
+    constexpr auto operator op(enumName enumEntry, T value) -> std::enable_if_t<(enableExpression), T> \
+    { \
+        return static_cast<T>(enumEntry) op value; \
+    } \
+    \
+    template<typename T> \
+    constexpr auto operator op(T value, enumName enumEntry) -> std::enable_if_t<(enableExpression), T> \
+    { \
+        return value op static_cast<T>(enumEntry); \
+    } \
+
+#define OVERLOAD_MATH_OPERATORS_FOR_ENUM_CLASS_WHEN(enumName, enableExpression) \
+    OVERLOAD_OPERATOR_FOR_ENUM_CLASS_WHEN(enumName, +, enableExpression) \
+    OVERLOAD_OPERATOR_FOR_ENUM_CLASS_WHEN(enumName, -, enableExpression) \
+    OVERLOAD_OPERATOR_FOR_ENUM_CLASS_WHEN(enumName, *, enableExpression) \
+    OVERLOAD_OPERATOR_FOR_ENUM_CLASS_WHEN(enumName, /, enableExpression) \
+    OVERLOAD_OPERATOR_FOR_ENUM_CLASS_WHEN(enumName, %, enableExpression) \
+
+#define OVERLOAD_MATH_OPERATORS_FOR_ENUM_CLASS_WITH_INTEGRALS(enumName) OVERLOAD_MATH_OPERATORS_FOR_ENUM_CLASS_WHEN(enumName, std::is_integral_v<T>)
+
+#define OVERLOAD_RELATIONAL_OPERATORS_FOR_ENUM_CLASS_WHEN(enumName, enableExpression) \
+    OVERLOAD_OPERATOR_FOR_ENUM_CLASS_WHEN(enumName, ==, enableExpression) \
+    OVERLOAD_OPERATOR_FOR_ENUM_CLASS_WHEN(enumName, !=, enableExpression) \
+    OVERLOAD_OPERATOR_FOR_ENUM_CLASS_WHEN(enumName, <, enableExpression) \
+    OVERLOAD_OPERATOR_FOR_ENUM_CLASS_WHEN(enumName, <=, enableExpression) \
+    OVERLOAD_OPERATOR_FOR_ENUM_CLASS_WHEN(enumName, >, enableExpression) \
+    OVERLOAD_OPERATOR_FOR_ENUM_CLASS_WHEN(enumName, >=, enableExpression) \
+
+#define OVERLOAD_RELATIONAL_OPERATORS_FOR_ENUM_CLASS_WITH_INTEGRALS(enumName) OVERLOAD_RELATIONAL_OPERATORS_FOR_ENUM_CLASS_WHEN(enumName, std::is_integral_v<T>)
+
diff --git a/Source/WTF/wtf/FastBitVector.h b/Source/WTF/wtf/FastBitVector.h
index 733c65c..96dcce1 100644
--- a/Source/WTF/wtf/FastBitVector.h
+++ b/Source/WTF/wtf/FastBitVector.h
@@ -452,7 +452,7 @@
     {
     }
 
-    explicit operator bool() const
+    operator bool() const
     {
         return !!(*m_word & m_mask);
     }
@@ -466,6 +466,9 @@
         return *this;
     }
 
+    FastBitReference& operator|=(bool value) { return value ? *this = value : *this; }
+    FastBitReference& operator&=(bool value) { return value ? *this : *this = value; }
+
 private:
     uint32_t* m_word { nullptr };
     uint32_t m_mask { 0 };
@@ -511,7 +514,11 @@
     {
         m_words.clearAll();
     }
-    
+
+    // These let FastBitVector satisfy the Vector<bool>-style interface expected by templated callers.
+    void fill(bool value) { value ? setAll() : clearAll(); }
+    void grow(size_t newSize) { resize(newSize); }
+
     WTF_EXPORT_PRIVATE void clearRange(size_t begin, size_t end);
 
     // Returns true if the contents of this bitvector changed.
@@ -591,4 +598,5 @@
 
 } // namespace WTF
 
+using WTF::FastBitReference;
 using WTF::FastBitVector;
diff --git a/Source/WTF/wtf/UnalignedAccess.h b/Source/WTF/wtf/UnalignedAccess.h
index 5a4e9ef..3a35b38 100644
--- a/Source/WTF/wtf/UnalignedAccess.h
+++ b/Source/WTF/wtf/UnalignedAccess.h
@@ -34,7 +34,7 @@
 template<typename Type>
 inline Type unalignedLoad(const void* pointer)
 {
-    static_assert(std::is_trivial<Type>::value, "");
+    static_assert(std::is_trivially_copyable<Type>::value, "");
     Type result { };
     memcpy(&result, pointer, sizeof(Type));
     return result;
@@ -43,7 +43,7 @@
 template<typename Type>
 inline void unalignedStore(void* pointer, Type value)
 {
-    static_assert(std::is_trivial<Type>::value, "");
+    static_assert(std::is_trivially_copyable<Type>::value, "");
     memcpy(pointer, &value, sizeof(Type));
 }
 
diff --git a/Tools/ChangeLog b/Tools/ChangeLog
index 52183f1..7033fa4 100644
--- a/Tools/ChangeLog
+++ b/Tools/ChangeLog
@@ -1,3 +1,14 @@
+2020-01-16  Keith Miller  <keith_miller@apple.com>
+
+        Reland bytecode checkpoints since bugs have been fixed
+        https://bugs.webkit.org/show_bug.cgi?id=206361
+
+        Unreviewed, reland.
+
+        The watch bugs have been fixed by https://trac.webkit.org/changeset/254674
+
+        * Scripts/run-jsc-stress-tests:
+
 2020-01-16  Fujii Hironori  <Hironori.Fujii@sony.com>
 
         Unreviewed, rolling out r254678.
diff --git a/Tools/Scripts/run-jsc-stress-tests b/Tools/Scripts/run-jsc-stress-tests
index 1289607..09e4f17 100755
--- a/Tools/Scripts/run-jsc-stress-tests
+++ b/Tools/Scripts/run-jsc-stress-tests
@@ -498,7 +498,6 @@
 B3O1_OPTIONS = ["--defaultB3OptLevel=1"]
 B3O0_OPTIONS = ["--maxDFGNodesInBasicBlockForPreciseAnalysis=100", "--defaultB3OptLevel=0"]
 FTL_OPTIONS = ["--useFTLJIT=true"]
-PROBE_OSR_EXIT_OPTION = ["--useProbeOSRExit=true"]
 FORCE_LLINT_EXIT_OPTIONS = ["--forceOSRExitToLLInt=true"]
 
 require_relative "webkitruby/jsc-stress-test-writer-#{$testWriter}"
@@ -713,7 +712,7 @@
 end
 
 def runFTLNoCJITValidate(*optionalTestSpecificOptions)
-    run("ftl-no-cjit-validate-sampling-profiler", "--validateGraph=true", "--useSamplingProfiler=true", "--airForceIRCAllocator=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + PROBE_OSR_EXIT_OPTION + optionalTestSpecificOptions))
+    run("ftl-no-cjit-validate-sampling-profiler", "--validateGraph=true", "--useSamplingProfiler=true", "--airForceIRCAllocator=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + optionalTestSpecificOptions))
 end
 
 def runFTLNoCJITNoPutStackValidate(*optionalTestSpecificOptions)
@@ -729,7 +728,7 @@
 end
 
 def runDFGEager(*optionalTestSpecificOptions)
-    run("dfg-eager", *(EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS + PROBE_OSR_EXIT_OPTION + FORCE_LLINT_EXIT_OPTIONS + optionalTestSpecificOptions))
+    run("dfg-eager", *(EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS + FORCE_LLINT_EXIT_OPTIONS + optionalTestSpecificOptions))
 end
 
 def runDFGEagerNoCJITValidate(*optionalTestSpecificOptions)