Add support for Callee-Saves registers
https://bugs.webkit.org/show_bug.cgi?id=148666
Reviewed by Filip Pizlo.
We save platform callee save registers right below the call frame header,
in the location(s) starting with VirtualRegister 0. This local space is
allocated in the bytecode compiler. This space is the maximum space
needed for the callee registers that the LLInt and baseline JIT use,
rounded up to a stack aligned number of VirtualRegisters.
The LLInt explicitly saves and restores the registers in the macros
preserveCalleeSavesUsedByLLInt and restoreCalleeSavesUsedByLLInt.
The JITs save and restore the callee save registers listed in
m_calleeSaveRegisters in the code block.
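The rounding described above can be sketched as follows. The function name mirrors the ChangeLog entries, but the body and the `stackAlignmentRegisters` value (2 VirtualRegisters per 16-byte alignment unit on a 64-bit platform) are illustrative assumptions, not the JSC implementation:

```cpp
#include <cstddef>

// Assumed value: 16-byte stack alignment / 8-byte VirtualRegister.
constexpr size_t stackAlignmentRegisters = 2;

// Round the callee save slot count up to a stack-aligned number of
// VirtualRegisters so the local space below the call frame header
// keeps the stack aligned.
constexpr size_t roundCalleeSaveSpaceAsVirtualRegisters(size_t numCalleeSaves)
{
    return (numCalleeSaves + stackAlignmentRegisters - 1)
        & ~(stackAlignmentRegisters - 1);
}
```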
Added handling of callee save register restoration to exception handling.
The basic flow is: when an exception is thrown or one is recognized to
have been generated in C++ code, we save the current state of all
callee save registers to VM::calleeSaveRegistersBuffer. As we unwind
looking for the corresponding catch, we copy the callee saves from call
frames to the same VM::calleeSaveRegistersBuffer. This is done for all
call frames on the stack up to but not including the call frame that has
the corresponding catch block. When we process the catch, we restore
the callee save registers with the contents of VM::calleeSaveRegistersBuffer.
If there isn't a catch, then handleUncaughtException will restore callee
saves before it returns back to the calling C++.
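A rough model of that unwind flow, with a hypothetical `Frame` type and a plain map standing in for real call frames and `VM::calleeSaveRegistersBuffer`:

```cpp
#include <cstdint>
#include <map>
#include <vector>

using RegisterID = int;

struct Frame {
    std::map<RegisterID, uint64_t> savedCalleeSaves; // callee saves this frame spilled
    bool hasCatchHandler;
};

// Walk frames from innermost outward, copying each frame's spilled callee
// saves into the VM-wide buffer, stopping before the frame that owns the
// catch handler; the catch then restores registers from that buffer.
void unwindCopyingCalleeSaves(const std::vector<Frame>& stack,
                              std::map<RegisterID, uint64_t>& vmBuffer)
{
    for (const Frame& frame : stack) {
        if (frame.hasCatchHandler)
            return; // this frame's catch restores from vmBuffer
        for (const auto& [reg, value] : frame.savedCalleeSaves)
            vmBuffer[reg] = value; // outer frames hold the older contents
    }
}
```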
Eliminated callee saves registers as free registers for various thunk
generators as the callee saves may not have been saved by the function
calling the thunk.
Added code to transition callee saves from one VM's format to another
as part of OSR entry and OSR exit.
Cleaned up the static RegisterSets, including adding one for LLInt and
baseline JIT callee saves and one to be used to allocate local registers
not including the callee saves or other special registers.
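The idea of deriving an allocatable set by subtracting callee saves and special registers can be sketched with a plain bitset; the concrete register assignments below are made up for illustration and are not the per-platform JSC sets:

```cpp
#include <bitset>

constexpr int numRegisters = 32;
using RegisterSet = std::bitset<numRegisters>;

// Hypothetical sets: callee saves in bits 8-15, special registers in bits 0-2.
RegisterSet calleeSaveRegisters() { return RegisterSet("00000000000000001111111100000000"); }
RegisterSet specialRegisters()    { return RegisterSet("00000000000000000000000000000111"); }

// Registers free for local allocation: everything minus callee saves
// and special registers.
RegisterSet allocatableRegisters()
{
    RegisterSet all;
    all.set(); // start from every register
    return all & ~calleeSaveRegisters() & ~specialRegisters();
}
```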
Moved ftl/FTLRegisterAtOffset.{cpp,h} to jit/RegisterAtOffset.{cpp,h}.
Factored out the vector of RegisterAtOffsets in ftl/FTLUnwindInfo.{cpp,h}
into a new class in jit/RegisterAtOffsetList.{cpp,h}.
Eliminated UnwindInfo and changed UnwindInfo::parse() into a standalone
function named parseUnwindInfo. That standalone function now returns
the callee saves RegisterAtOffsetList. This is stored in the CodeBlock
and used instead of UnwindInfo.
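A minimal sketch of the RegisterAtOffset/RegisterAtOffsetList shape; the method names follow the ChangeLog entries, while the field choices and bodies here are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct RegisterAtOffset {
    int reg;          // platform register number (illustrative representation)
    std::ptrdiff_t offset; // byte offset of the saved register in the frame
    bool operator<(const RegisterAtOffset& other) const { return reg < other.reg; }
};

class RegisterAtOffsetList {
public:
    void append(RegisterAtOffset entry) { m_entries.push_back(entry); }
    void sort() { std::sort(m_entries.begin(), m_entries.end()); }
    std::size_t size() const { return m_entries.size(); }
    const RegisterAtOffset& at(std::size_t i) const { return m_entries[i]; }

    // Find the saved location for a register, or nullptr if it wasn't saved.
    const RegisterAtOffset* find(int reg) const
    {
        for (const auto& entry : m_entries) {
            if (entry.reg == reg)
                return &entry;
        }
        return nullptr;
    }

private:
    std::vector<RegisterAtOffset> m_entries;
};
```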
Turned off register preservation thunks for outgoing calls from FTL
generated code. They'll be removed in a subsequent patch.
Changed specialized thunks to save and restore the contents of
tagTypeNumberRegister and tagMaskRegister as they can be called by FTL
compiled functions. We materialize those tag registers for the thunk's
use and then restore the prior contents on function exit.
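The save/materialize/restore pattern can be modeled with plain variables standing in for the machine registers. The tag constants below follow the usual 64-bit JSValue encoding, but the types and function are an illustrative sketch, not the emitted thunk code:

```cpp
#include <cstdint>

// 64-bit JSValue encoding constants (TagMask = TagTypeNumber | TagBitTypeOther).
constexpr int64_t TagTypeNumber = static_cast<int64_t>(0xffff000000000000ull);
constexpr int64_t TagMask = TagTypeNumber | 0x2;

// Hypothetical stand-in for the two tag registers.
struct ThunkContext {
    int64_t tagTypeNumberRegister;
    int64_t tagMaskRegister;
};

// Save the caller's register contents, materialize the tag constants for
// the thunk body, then restore the prior contents on exit.
template<typename Body>
void runThunkWithTagRegisters(ThunkContext& ctx, Body body)
{
    int64_t savedTagTypeNumber = ctx.tagTypeNumberRegister;
    int64_t savedTagMask = ctx.tagMaskRegister;
    ctx.tagTypeNumberRegister = TagTypeNumber;
    ctx.tagMaskRegister = TagMask;
    body(ctx);
    ctx.tagTypeNumberRegister = savedTagTypeNumber;
    ctx.tagMaskRegister = savedTagMask;
}
```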
Also removed the arity check fail return thunk since it is now the
caller's responsibility to restore the stack pointer.
Removed saving of callee save registers and materialization of special
tag registers for 64 bit platforms from vmEntryToJavaScript and
vmEntryToNative.
* CMakeLists.txt:
* JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
* JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters:
* JavaScriptCore.xcodeproj/project.pbxproj:
* ftl/FTLJITCode.h:
* ftl/FTLRegisterAtOffset.cpp: Removed.
* ftl/FTLRegisterAtOffset.h: Removed.
* ftl/FTLUnwindInfo.cpp:
(JSC::FTL::parseUnwindInfo):
(JSC::FTL::UnwindInfo::UnwindInfo): Deleted.
(JSC::FTL::UnwindInfo::~UnwindInfo): Deleted.
(JSC::FTL::UnwindInfo::parse): Deleted.
(JSC::FTL::UnwindInfo::dump): Deleted.
(JSC::FTL::UnwindInfo::find): Deleted.
(JSC::FTL::UnwindInfo::indexOf): Deleted.
* ftl/FTLUnwindInfo.h:
(JSC::RegisterAtOffset::dump):
* jit/RegisterAtOffset.cpp: Added.
* jit/RegisterAtOffset.h: Added.
(JSC::RegisterAtOffset::RegisterAtOffset):
(JSC::RegisterAtOffset::operator!):
(JSC::RegisterAtOffset::reg):
(JSC::RegisterAtOffset::offset):
(JSC::RegisterAtOffset::offsetAsIndex):
(JSC::RegisterAtOffset::operator==):
(JSC::RegisterAtOffset::operator<):
(JSC::RegisterAtOffset::getReg):
* jit/RegisterAtOffsetList.cpp: Added.
(JSC::RegisterAtOffsetList::RegisterAtOffsetList):
(JSC::RegisterAtOffsetList::sort):
(JSC::RegisterAtOffsetList::dump):
(JSC::RegisterAtOffsetList::find):
(JSC::RegisterAtOffsetList::indexOf):
* jit/RegisterAtOffsetList.h: Added.
(JSC::RegisterAtOffsetList::clear):
(JSC::RegisterAtOffsetList::size):
(JSC::RegisterAtOffsetList::at):
(JSC::RegisterAtOffsetList::append):
Moved and refactored the use of FTLRegisterAtOffset to RegisterAtOffset.
Added RegisterAtOffset and RegisterAtOffsetList to build configurations.
Removed the FTLRegisterAtOffset files.
* bytecode/CallLinkInfo.h:
(JSC::CallLinkInfo::setUpCallFromFTL):
Turned off FTL register preservation thunks.
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::CodeBlock):
(JSC::CodeBlock::setCalleeSaveRegisters):
(JSC::roundCalleeSaveSpaceAsVirtualRegisters):
(JSC::CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters):
(JSC::CodeBlock::calleeSaveSpaceAsVirtualRegisters):
* bytecode/CodeBlock.h:
(JSC::CodeBlock::numberOfLLIntBaselineCalleeSaveRegisters):
(JSC::CodeBlock::calleeSaveRegisters):
(JSC::CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters):
(JSC::CodeBlock::optimizeAfterWarmUp):
(JSC::CodeBlock::numberOfDFGCompiles):
Methods to manage a set of callee save registers. Also to allocate the appropriate
number of VirtualRegisters for callee saves.
* bytecompiler/BytecodeGenerator.cpp:
(JSC::BytecodeGenerator::BytecodeGenerator):
(JSC::BytecodeGenerator::allocateCalleeSaveSpace):
* bytecompiler/BytecodeGenerator.h:
Allocate the appropriate number of VirtualRegisters for callee saves needed by LLInt or baseline JIT.
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::compileEntry):
(JSC::DFG::JITCompiler::compileSetupRegistersForEntry):
(JSC::DFG::JITCompiler::compileBody):
(JSC::DFG::JITCompiler::compileExceptionHandlers):
(JSC::DFG::JITCompiler::compile):
(JSC::DFG::JITCompiler::compileFunction):
* dfg/DFGJITCompiler.h:
* interpreter/Interpreter.cpp:
(JSC::UnwindFunctor::operator()):
(JSC::UnwindFunctor::copyCalleeSavesToVMCalleeSavesBuffer):
* dfg/DFGPlan.cpp:
(JSC::DFG::Plan::compileInThreadImpl):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::usedRegisters):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGStackLayoutPhase.cpp:
(JSC::DFG::StackLayoutPhase::run):
* ftl/FTLCompile.cpp:
(JSC::FTL::fixFunctionBasedOnStackMaps):
(JSC::FTL::compile):
* ftl/FTLLink.cpp:
(JSC::FTL::link):
* ftl/FTLOSRExitCompiler.cpp:
(JSC::FTL::compileStub):
* ftl/FTLThunks.cpp:
(JSC::FTL::osrExitGenerationThunkGenerator):
* jit/ArityCheckFailReturnThunks.cpp: Removed.
* jit/ArityCheckFailReturnThunks.h: Removed.
* jit/JIT.cpp:
(JSC::JIT::emitEnterOptimizationCheck):
(JSC::JIT::privateCompile):
(JSC::JIT::privateCompileExceptionHandlers):
* jit/JITCall32_64.cpp:
(JSC::JIT::emit_op_ret):
* jit/JITExceptions.cpp:
(JSC::genericUnwind):
* jit/JITExceptions.h:
* jit/JITOpcodes.cpp:
(JSC::JIT::emit_op_end):
(JSC::JIT::emit_op_ret):
(JSC::JIT::emit_op_throw):
(JSC::JIT::emit_op_catch):
(JSC::JIT::emit_op_enter):
(JSC::JIT::emitSlow_op_loop_hint):
* jit/JITOpcodes32_64.cpp:
(JSC::JIT::emit_op_end):
(JSC::JIT::emit_op_throw):
(JSC::JIT::emit_op_catch):
* jit/JITOperations.cpp:
* jit/Repatch.cpp:
(JSC::generateByIdStub):
* jit/ThunkGenerators.cpp:
* llint/LLIntData.cpp:
(JSC::LLInt::Data::performAssertions):
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::LLINT_SLOW_PATH_DECL):
* llint/LowLevelInterpreter.asm:
* llint/LowLevelInterpreter32_64.asm:
* llint/LowLevelInterpreter64.asm:
(JSC::throwExceptionFromCallSlowPathGenerator):
(JSC::arityFixupGenerator):
* runtime/CommonSlowPaths.cpp:
(JSC::setupArityCheckData):
* runtime/CommonSlowPaths.h:
(JSC::CommonSlowPaths::arityCheckFor):
Emit code to save and restore callee save registers and materialize tagTypeNumberRegister
and tagMaskRegister.
Handle callee saves when tiering up.
Copy callee saves register contents to VM::calleeSaveRegistersBuffer at beginning of
exception processing.
Process callee save registers in frames when unwinding from an exception.
Restore callee saves register contents from VM::calleeSaveRegistersBuffer on catch.
Use appropriate register set to make sure we don't allocate a callee save register when
compiling a thunk.
Helper to populate tagTypeNumberRegister and tagMaskRegister with the appropriate
constants.
Removed arity fixup return thunks.
* dfg/DFGOSREntry.cpp:
(JSC::DFG::prepareOSREntry):
* dfg/DFGOSRExitCompiler32_64.cpp:
(JSC::DFG::OSRExitCompiler::compileExit):
* dfg/DFGOSRExitCompiler64.cpp:
(JSC::DFG::OSRExitCompiler::compileExit):
* dfg/DFGOSRExitCompilerCommon.cpp:
(JSC::DFG::reifyInlinedCallFrames):
(JSC::DFG::adjustAndJumpToTarget):
Restore callee saves from the DFG and save the appropriate ones for the baseline JIT.
Materialize the tag registers on 64 bit platforms.
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::emitSaveCalleeSavesFor):
(JSC::AssemblyHelpers::emitRestoreCalleeSavesFor):
(JSC::AssemblyHelpers::emitSaveCalleeSaves):
(JSC::AssemblyHelpers::emitRestoreCalleeSaves):
(JSC::AssemblyHelpers::copyCalleeSavesToVMCalleeSavesBuffer):
(JSC::AssemblyHelpers::restoreCalleeSavesFromVMCalleeSavesBuffer):
(JSC::AssemblyHelpers::copyCalleeSavesFromFrameOrRegisterToVMCalleeSavesBuffer):
(JSC::AssemblyHelpers::emitMaterializeTagCheckRegisters):
New helpers to save and restore callee saves as well as materialize the tag register
contents.
* jit/FPRInfo.h:
* jit/GPRInfo.h:
(JSC::GPRInfo::toRegister):
Updated to include FP callee save registers. Added the number of callee save
registers and cleaned up register aliases that collide with callee save registers.
* jit/JITPropertyAccess.cpp:
(JSC::JIT::emitGetByValWithCachedId):
(JSC::JIT::emitPutByValWithCachedId):
(JSC::JIT::emit_op_get_by_id):
(JSC::JIT::emit_op_put_by_id):
* jit/JITPropertyAccess32_64.cpp:
(JSC::JIT::emitGetByValWithCachedId):
(JSC::JIT::emitPutByValWithCachedId):
(JSC::JIT::emit_op_get_by_id):
(JSC::JIT::emit_op_put_by_id):
Use the new stubUnavailableRegisters register set to limit what registers are available for
temporaries.
* jit/RegisterSet.cpp:
(JSC::RegisterSet::stubUnavailableRegisters):
(JSC::RegisterSet::calleeSaveRegisters):
(JSC::RegisterSet::llintBaselineCalleeSaveRegisters):
(JSC::RegisterSet::dfgCalleeSaveRegisters):
(JSC::RegisterSet::ftlCalleeSaveRegisters):
* jit/RegisterSet.h:
New register sets with the callee saves used by various tiers as well as one listing registers
not available to stub code.
* jit/SpecializedThunkJIT.h:
(JSC::SpecializedThunkJIT::SpecializedThunkJIT):
(JSC::SpecializedThunkJIT::loadDoubleArgument):
(JSC::SpecializedThunkJIT::returnJSValue):
(JSC::SpecializedThunkJIT::returnDouble):
(JSC::SpecializedThunkJIT::returnInt32):
(JSC::SpecializedThunkJIT::returnJSCell):
(JSC::SpecializedThunkJIT::callDoubleToDoublePreservingReturn):
(JSC::SpecializedThunkJIT::emitSaveThenMaterializeTagRegisters):
(JSC::SpecializedThunkJIT::emitRestoreSavedTagRegisters):
(JSC::SpecializedThunkJIT::tagReturnAsInt32):
* jit/ThunkGenerators.cpp:
(JSC::nativeForGenerator):
Changed to save and restore existing tag register contents as they may contain other values.
After saving the existing values, we materialize the tag constants.
* jit/TempRegisterSet.h:
(JSC::TempRegisterSet::getFPRByIndex):
(JSC::TempRegisterSet::getFreeFPR):
(JSC::TempRegisterSet::setByIndex):
* offlineasm/arm64.rb:
* offlineasm/registers.rb:
Added methods for floating point registers to support callee save FP registers.
* jit/JITArithmetic32_64.cpp:
(JSC::JIT::emit_op_mod):
Removed an unnecessary #if CPU(X86_64) check from this 32 bit only file.
* offlineasm/x86.rb:
Fixed Windows callee saves naming.
* runtime/VM.cpp:
(JSC::VM::VM):
* runtime/VM.h:
(JSC::VM::calleeSaveRegistersBufferOffset):
(JSC::VM::getAllCalleeSaveRegistersMap):
Provide a RegisterSaveMap that has all registers that might be saved. Added a callee save buffer to be
used for OSR exit and for exception processing in a future patch.
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@189575 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/Source/JavaScriptCore/CMakeLists.txt b/Source/JavaScriptCore/CMakeLists.txt
index 728c068..169bb58 100644
--- a/Source/JavaScriptCore/CMakeLists.txt
+++ b/Source/JavaScriptCore/CMakeLists.txt
@@ -358,7 +358,6 @@
interpreter/StackVisitor.cpp
jit/AccessorCallJITStubRoutine.cpp
- jit/ArityCheckFailReturnThunks.cpp
jit/AssemblyHelpers.cpp
jit/BinarySwitch.cpp
jit/ExecutableAllocationFuzz.cpp
@@ -386,6 +385,8 @@
jit/JITToDFGDeferredCompilationCallback.cpp
jit/PolymorphicCallStubRoutine.cpp
jit/Reg.cpp
+ jit/RegisterAtOffset.cpp
+ jit/RegisterAtOffsetList.cpp
jit/RegisterPreservationWrapperGenerator.cpp
jit/RegisterSet.cpp
jit/Repatch.cpp
@@ -904,7 +905,6 @@
ftl/FTLOperations.cpp
ftl/FTLOutput.cpp
ftl/FTLRecoveryOpcode.cpp
- ftl/FTLRegisterAtOffset.cpp
ftl/FTLSaveRestore.cpp
ftl/FTLSlowPathCall.cpp
ftl/FTLSlowPathCallKey.cpp
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index 8de6bb6..80058ee 100644
--- a/Source/JavaScriptCore/ChangeLog
+++ b/Source/JavaScriptCore/ChangeLog
@@ -1,3 +1,307 @@
+2015-09-10 Michael Saboff <msaboff@apple.com>
+
+ Add support for Callee-Saves registers
+ https://bugs.webkit.org/show_bug.cgi?id=148666
+
+ Reviewed by Filip Pizlo.
+
+ We save platform callee save registers right below the call frame header,
+ in the location(s) starting with VirtualRegister 0. This local space is
+ allocated in the bytecode compiler. This space is the maximum space
+ needed for the callee registers that the LLInt and baseline JIT use,
+ rounded up to a stack aligned number of VirtualRegisters.
+ The LLInt explicitly saves and restores the registers in the macros
+ preserveCalleeSavesUsedByLLInt and restoreCalleeSavesUsedByLLInt.
+ The JITs save and restore the callee save registers listed in
+ m_calleeSaveRegisters in the code block.
+
+ Added handling of callee save register restoration to exception handling.
+ The basic flow is: when an exception is thrown or one is recognized to
+ have been generated in C++ code, we save the current state of all
+ callee save registers to VM::calleeSaveRegistersBuffer. As we unwind
+ looking for the corresponding catch, we copy the callee saves from call
+ frames to the same VM::calleeSaveRegistersBuffer. This is done for all
+ call frames on the stack up to but not including the call frame that has
+ the corresponding catch block. When we process the catch, we restore
+ the callee save registers with the contents of VM::calleeSaveRegistersBuffer.
+ If there isn't a catch, then handleUncaughtException will restore callee
+ saves before it returns back to the calling C++.
+
+ Eliminated callee saves registers as free registers for various thunk
+ generators as the callee saves may not have been saved by the function
+ calling the thunk.
+
+ Added code to transition callee saves from one VM's format to another
+ as part of OSR entry and OSR exit.
+
+ Cleaned up the static RegisterSets, including adding one for LLInt and
+ baseline JIT callee saves and one to be used to allocate local registers
+ not including the callee saves or other special registers.
+
+ Moved ftl/FTLRegisterAtOffset.{cpp,h} to jit/RegisterAtOffset.{cpp,h}.
+ Factored out the vector of RegisterAtOffsets in ftl/FTLUnwindInfo.{cpp,h}
+ into a new class in jit/RegisterAtOffsetList.{cpp,h}.
+ Eliminated UnwindInfo and changed UnwindInfo::parse() into a standalone
+ function named parseUnwindInfo. That standalone function now returns
+ the callee saves RegisterAtOffsetList. This is stored in the CodeBlock
+ and used instead of UnwindInfo.
+
+ Turned off register preservation thunks for outgoing calls from FTL
+ generated code. They'll be removed in a subsequent patch.
+
+ Changed specialized thunks to save and restore the contents of
+ tagTypeNumberRegister and tagMaskRegister as they can be called by FTL
+ compiled functions. We materialize those tag registers for the thunk's
+ use and then restore the prior contents on function exit.
+
+ Also removed the arity check fail return thunk since it is now the
+ caller's responsibility to restore the stack pointer.
+
+ Removed saving of callee save registers and materialization of special
+ tag registers for 64 bit platforms from vmEntryToJavaScript and
+ vmEntryToNative.
+
+ * CMakeLists.txt:
+ * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
+ * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters:
+ * JavaScriptCore.xcodeproj/project.pbxproj:
+ * ftl/FTLJITCode.h:
+ * ftl/FTLRegisterAtOffset.cpp: Removed.
+ * ftl/FTLRegisterAtOffset.h: Removed.
+ * ftl/FTLUnwindInfo.cpp:
+ (JSC::FTL::parseUnwindInfo):
+ (JSC::FTL::UnwindInfo::UnwindInfo): Deleted.
+ (JSC::FTL::UnwindInfo::~UnwindInfo): Deleted.
+ (JSC::FTL::UnwindInfo::parse): Deleted.
+ (JSC::FTL::UnwindInfo::dump): Deleted.
+ (JSC::FTL::UnwindInfo::find): Deleted.
+ (JSC::FTL::UnwindInfo::indexOf): Deleted.
+ * ftl/FTLUnwindInfo.h:
+ (JSC::RegisterAtOffset::dump):
+ * jit/RegisterAtOffset.cpp: Added.
+ * jit/RegisterAtOffset.h: Added.
+ (JSC::RegisterAtOffset::RegisterAtOffset):
+ (JSC::RegisterAtOffset::operator!):
+ (JSC::RegisterAtOffset::reg):
+ (JSC::RegisterAtOffset::offset):
+ (JSC::RegisterAtOffset::offsetAsIndex):
+ (JSC::RegisterAtOffset::operator==):
+ (JSC::RegisterAtOffset::operator<):
+ (JSC::RegisterAtOffset::getReg):
+ * jit/RegisterAtOffsetList.cpp: Added.
+ (JSC::RegisterAtOffsetList::RegisterAtOffsetList):
+ (JSC::RegisterAtOffsetList::sort):
+ (JSC::RegisterAtOffsetList::dump):
+ (JSC::RegisterAtOffsetList::find):
+ (JSC::RegisterAtOffsetList::indexOf):
+ * jit/RegisterAtOffsetList.h: Added.
+ (JSC::RegisterAtOffsetList::clear):
+ (JSC::RegisterAtOffsetList::size):
+ (JSC::RegisterAtOffsetList::at):
+ (JSC::RegisterAtOffsetList::append):
+ Moved and refactored the use of FTLRegisterAtOffset to RegisterAtOffset.
+ Added RegisterAtOffset and RegisterAtOffsetList to build configurations.
+ Removed the FTLRegisterAtOffset files.
+
+ * bytecode/CallLinkInfo.h:
+ (JSC::CallLinkInfo::setUpCallFromFTL):
+ Turned off FTL register preservation thunks.
+
+ * bytecode/CodeBlock.cpp:
+ (JSC::CodeBlock::CodeBlock):
+ (JSC::CodeBlock::setCalleeSaveRegisters):
+ (JSC::roundCalleeSaveSpaceAsVirtualRegisters):
+ (JSC::CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters):
+ (JSC::CodeBlock::calleeSaveSpaceAsVirtualRegisters):
+ * bytecode/CodeBlock.h:
+ (JSC::CodeBlock::numberOfLLIntBaselineCalleeSaveRegisters):
+ (JSC::CodeBlock::calleeSaveRegisters):
+ (JSC::CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters):
+ (JSC::CodeBlock::optimizeAfterWarmUp):
+ (JSC::CodeBlock::numberOfDFGCompiles):
+ Methods to manage a set of callee save registers. Also to allocate the appropriate
+ number of VirtualRegisters for callee saves.
+
+ * bytecompiler/BytecodeGenerator.cpp:
+ (JSC::BytecodeGenerator::BytecodeGenerator):
+ (JSC::BytecodeGenerator::allocateCalleeSaveSpace):
+ * bytecompiler/BytecodeGenerator.h:
+ Allocate the appropriate number of VirtualRegisters for callee saves needed by LLInt or baseline JIT.
+
+ * dfg/DFGJITCompiler.cpp:
+ (JSC::DFG::JITCompiler::compileEntry):
+ (JSC::DFG::JITCompiler::compileSetupRegistersForEntry):
+ (JSC::DFG::JITCompiler::compileBody):
+ (JSC::DFG::JITCompiler::compileExceptionHandlers):
+ (JSC::DFG::JITCompiler::compile):
+ (JSC::DFG::JITCompiler::compileFunction):
+ * dfg/DFGJITCompiler.h:
+ * interpreter/Interpreter.cpp:
+ (JSC::UnwindFunctor::operator()):
+ (JSC::UnwindFunctor::copyCalleeSavesToVMCalleeSavesBuffer):
+ * dfg/DFGPlan.cpp:
+ (JSC::DFG::Plan::compileInThreadImpl):
+ * dfg/DFGSpeculativeJIT.cpp:
+ (JSC::DFG::SpeculativeJIT::usedRegisters):
+ * dfg/DFGSpeculativeJIT32_64.cpp:
+ (JSC::DFG::SpeculativeJIT::compile):
+ * dfg/DFGSpeculativeJIT64.cpp:
+ (JSC::DFG::SpeculativeJIT::compile):
+ * dfg/DFGStackLayoutPhase.cpp:
+ (JSC::DFG::StackLayoutPhase::run):
+ * ftl/FTLCompile.cpp:
+ (JSC::FTL::fixFunctionBasedOnStackMaps):
+ (JSC::FTL::compile):
+ * ftl/FTLLink.cpp:
+ (JSC::FTL::link):
+ * ftl/FTLOSRExitCompiler.cpp:
+ (JSC::FTL::compileStub):
+ * ftl/FTLThunks.cpp:
+ (JSC::FTL::osrExitGenerationThunkGenerator):
+ * jit/ArityCheckFailReturnThunks.cpp: Removed.
+ * jit/ArityCheckFailReturnThunks.h: Removed.
+ * jit/JIT.cpp:
+ (JSC::JIT::emitEnterOptimizationCheck):
+ (JSC::JIT::privateCompile):
+ (JSC::JIT::privateCompileExceptionHandlers):
+ * jit/JITCall32_64.cpp:
+ (JSC::JIT::emit_op_ret):
+ * jit/JITExceptions.cpp:
+ (JSC::genericUnwind):
+ * jit/JITExceptions.h:
+ * jit/JITOpcodes.cpp:
+ (JSC::JIT::emit_op_end):
+ (JSC::JIT::emit_op_ret):
+ (JSC::JIT::emit_op_throw):
+ (JSC::JIT::emit_op_catch):
+ (JSC::JIT::emit_op_enter):
+ (JSC::JIT::emitSlow_op_loop_hint):
+ * jit/JITOpcodes32_64.cpp:
+ (JSC::JIT::emit_op_end):
+ (JSC::JIT::emit_op_throw):
+ (JSC::JIT::emit_op_catch):
+ * jit/JITOperations.cpp:
+ * jit/Repatch.cpp:
+ (JSC::generateByIdStub):
+ * jit/ThunkGenerators.cpp:
+ * llint/LLIntData.cpp:
+ (JSC::LLInt::Data::performAssertions):
+ * llint/LLIntSlowPaths.cpp:
+ (JSC::LLInt::LLINT_SLOW_PATH_DECL):
+ * llint/LowLevelInterpreter.asm:
+ * llint/LowLevelInterpreter32_64.asm:
+ * llint/LowLevelInterpreter64.asm:
+ (JSC::throwExceptionFromCallSlowPathGenerator):
+ (JSC::arityFixupGenerator):
+ * runtime/CommonSlowPaths.cpp:
+ (JSC::setupArityCheckData):
+ * runtime/CommonSlowPaths.h:
+ (JSC::CommonSlowPaths::arityCheckFor):
+ Emit code to save and restore callee save registers and materialize tagTypeNumberRegister
+ and tagMaskRegister.
+ Handle callee saves when tiering up.
+ Copy callee saves register contents to VM::calleeSaveRegistersBuffer at beginning of
+ exception processing.
+ Process callee save registers in frames when unwinding from an exception.
+ Restore callee saves register contents from VM::calleeSaveRegistersBuffer on catch.
+ Use appropriate register set to make sure we don't allocate a callee save register when
+ compiling a thunk.
+ Helper to populate tagTypeNumberRegister and tagMaskRegister with the appropriate
+ constants.
+ Removed arity fixup return thunks.
+
+ * dfg/DFGOSREntry.cpp:
+ (JSC::DFG::prepareOSREntry):
+ * dfg/DFGOSRExitCompiler32_64.cpp:
+ (JSC::DFG::OSRExitCompiler::compileExit):
+ * dfg/DFGOSRExitCompiler64.cpp:
+ (JSC::DFG::OSRExitCompiler::compileExit):
+ * dfg/DFGOSRExitCompilerCommon.cpp:
+ (JSC::DFG::reifyInlinedCallFrames):
+ (JSC::DFG::adjustAndJumpToTarget):
+ Restore callee saves from the DFG and save the appropriate ones for the baseline JIT.
+ Materialize the tag registers on 64 bit platforms.
+
+ * jit/AssemblyHelpers.h:
+ (JSC::AssemblyHelpers::emitSaveCalleeSavesFor):
+ (JSC::AssemblyHelpers::emitRestoreCalleeSavesFor):
+ (JSC::AssemblyHelpers::emitSaveCalleeSaves):
+ (JSC::AssemblyHelpers::emitRestoreCalleeSaves):
+ (JSC::AssemblyHelpers::copyCalleeSavesToVMCalleeSavesBuffer):
+ (JSC::AssemblyHelpers::restoreCalleeSavesFromVMCalleeSavesBuffer):
+ (JSC::AssemblyHelpers::copyCalleeSavesFromFrameOrRegisterToVMCalleeSavesBuffer):
+ (JSC::AssemblyHelpers::emitMaterializeTagCheckRegisters):
+ New helpers to save and restore callee saves as well as materialize the tag register
+ contents.
+
+ * jit/FPRInfo.h:
+ * jit/GPRInfo.h:
+ (JSC::GPRInfo::toRegister):
+ Updated to include FP callee save registers. Added the number of callee save
+ registers and cleaned up register aliases that collide with callee save registers.
+
+ * jit/JITPropertyAccess.cpp:
+ (JSC::JIT::emitGetByValWithCachedId):
+ (JSC::JIT::emitPutByValWithCachedId):
+ (JSC::JIT::emit_op_get_by_id):
+ (JSC::JIT::emit_op_put_by_id):
+ * jit/JITPropertyAccess32_64.cpp:
+ (JSC::JIT::emitGetByValWithCachedId):
+ (JSC::JIT::emitPutByValWithCachedId):
+ (JSC::JIT::emit_op_get_by_id):
+ (JSC::JIT::emit_op_put_by_id):
+ Use the new stubUnavailableRegisters register set to limit what registers are available for
+ temporaries.
+
+ * jit/RegisterSet.cpp:
+ (JSC::RegisterSet::stubUnavailableRegisters):
+ (JSC::RegisterSet::calleeSaveRegisters):
+ (JSC::RegisterSet::llintBaselineCalleeSaveRegisters):
+ (JSC::RegisterSet::dfgCalleeSaveRegisters):
+ (JSC::RegisterSet::ftlCalleeSaveRegisters):
+ * jit/RegisterSet.h:
+ New register sets with the callee saves used by various tiers as well as one listing registers
+ not available to stub code.
+
+ * jit/SpecializedThunkJIT.h:
+ (JSC::SpecializedThunkJIT::SpecializedThunkJIT):
+ (JSC::SpecializedThunkJIT::loadDoubleArgument):
+ (JSC::SpecializedThunkJIT::returnJSValue):
+ (JSC::SpecializedThunkJIT::returnDouble):
+ (JSC::SpecializedThunkJIT::returnInt32):
+ (JSC::SpecializedThunkJIT::returnJSCell):
+ (JSC::SpecializedThunkJIT::callDoubleToDoublePreservingReturn):
+ (JSC::SpecializedThunkJIT::emitSaveThenMaterializeTagRegisters):
+ (JSC::SpecializedThunkJIT::emitRestoreSavedTagRegisters):
+ (JSC::SpecializedThunkJIT::tagReturnAsInt32):
+ * jit/ThunkGenerators.cpp:
+ (JSC::nativeForGenerator):
+ Changed to save and restore existing tag register contents as they may contain other values.
+ After saving the existing values, we materialize the tag constants.
+
+ * jit/TempRegisterSet.h:
+ (JSC::TempRegisterSet::getFPRByIndex):
+ (JSC::TempRegisterSet::getFreeFPR):
+ (JSC::TempRegisterSet::setByIndex):
+ * offlineasm/arm64.rb:
+ * offlineasm/registers.rb:
+ Added methods for floating point registers to support callee save FP registers.
+
+ * jit/JITArithmetic32_64.cpp:
+ (JSC::JIT::emit_op_mod):
+ Removed an unnecessary #if CPU(X86_64) check from this 32 bit only file.
+
+ * offlineasm/x86.rb:
+ Fixed Windows callee saves naming.
+
+ * runtime/VM.cpp:
+ (JSC::VM::VM):
+ * runtime/VM.h:
+ (JSC::VM::calleeSaveRegistersBufferOffset):
+ (JSC::VM::getAllCalleeSaveRegistersMap):
+ Provide a RegisterSaveMap that has all registers that might be saved. Added a callee save buffer to be
+ used for OSR exit and for exception processing in a future patch.
+
2015-09-10 Yusuke Suzuki <utatane.tea@gmail.com>
ModuleProgramExecutable should provide CodeBlock to ScriptExecutable::forEachCodeBlock
diff --git a/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj b/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj
index b5156e6..5ea08eb 100644
--- a/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj
+++ b/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj
@@ -545,7 +545,6 @@
<ClCompile Include="..\ftl\FTLOperations.cpp" />
<ClCompile Include="..\ftl\FTLOutput.cpp" />
<ClCompile Include="..\ftl\FTLRecoveryOpcode.cpp" />
- <ClCompile Include="..\ftl\FTLRegisterAtOffset.cpp" />
<ClCompile Include="..\ftl\FTLSaveRestore.cpp" />
<ClCompile Include="..\ftl\FTLSlowPathCall.cpp" />
<ClCompile Include="..\ftl\FTLSlowPathCallKey.cpp" />
@@ -619,7 +618,6 @@
<ClCompile Include="..\interpreter\ProtoCallFrame.cpp" />
<ClCompile Include="..\interpreter\StackVisitor.cpp" />
<ClCompile Include="..\jit\AccessorCallJITStubRoutine.cpp" />
- <ClCompile Include="..\jit\ArityCheckFailReturnThunks.cpp" />
<ClCompile Include="..\jit\AssemblyHelpers.cpp" />
<ClCompile Include="..\jit\BinarySwitch.cpp" />
<ClCompile Include="..\jit\ExecutableAllocationFuzz.cpp" />
@@ -649,6 +647,8 @@
<ClCompile Include="..\jit\SetupVarargsFrame.cpp" />
<ClCompile Include="..\jit\PolymorphicCallStubRoutine.cpp" />
<ClCompile Include="..\jit\Reg.cpp" />
+ <ClCompile Include="..\jit\RegisterAtOffset.cpp" />
+ <ClCompile Include="..\jit\RegisterAtOffsetList.cpp" />
<ClCompile Include="..\jit\RegisterPreservationWrapperGenerator.cpp" />
<ClCompile Include="..\jit\RegisterSet.cpp" />
<ClCompile Include="..\jit\Repatch.cpp" />
@@ -1297,7 +1297,6 @@
<ClInclude Include="..\ftl\FTLOperations.h" />
<ClInclude Include="..\ftl\FTLOutput.h" />
<ClInclude Include="..\ftl\FTLRecoveryOpcode.h" />
- <ClInclude Include="..\ftl\FTLRegisterAtOffset.h" />
<ClInclude Include="..\ftl\FTLSaveRestore.h" />
<ClInclude Include="..\ftl\FTLSlowPathCall.h" />
<ClInclude Include="..\ftl\FTLSlowPathCallKey.h" />
@@ -1419,7 +1418,6 @@
<ClInclude Include="..\interpreter\Register.h" />
<ClInclude Include="..\interpreter\StackVisitor.h" />
<ClInclude Include="..\jit\AccessorCallJITStubRoutine.h" />
- <ClInclude Include="..\jit\ArityCheckFailReturnThunks.h" />
<ClInclude Include="..\jit\AssemblyHelpers.h" />
<ClInclude Include="..\jit\BinarySwitch.h" />
<ClInclude Include="..\jit\CCallHelpers.h" />
@@ -1450,6 +1448,8 @@
<ClInclude Include="..\jit\SetupVarargsFrame.h" />
<ClInclude Include="..\jit\PolymorphicCallStubRoutine.h" />
<ClInclude Include="..\jit\Reg.h" />
+ <ClInclude Include="..\jit\RegisterAtOffset.h" />
+ <ClInclude Include="..\jit\RegisterAtOffsetList.h" />
<ClInclude Include="..\jit\RegisterMap.h" />
<ClInclude Include="..\jit\RegisterPreservationWrapperGenerator.h" />
<ClInclude Include="..\jit\RegisterSet.h" />
diff --git a/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters b/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters
index 5b5a8b4..f5f8544 100644
--- a/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters
+++ b/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters
@@ -1491,9 +1491,6 @@
<ClCompile Include="$(ConfigurationBuildDir)\obj$(PlatformArchitecture)\$(ProjectName)\DerivedSources\InspectorProtocolObjects.cpp">
<Filter>Derived Sources</Filter>
</ClCompile>
- <ClCompile Include="..\jit\ArityCheckFailReturnThunks.cpp">
- <Filter>jit</Filter>
- </ClCompile>
<ClCompile Include="..\jit\RegisterPreservationWrapperGenerator.cpp">
<Filter>jit</Filter>
</ClCompile>
@@ -4037,9 +4034,6 @@
<Filter>runtime</Filter>
</ClInclude>
<ClInclude Include="$(ConfigurationBuildDir)\obj$(PlatformArchitecture)\$(ProjectName)\DerivedSources\JSDataViewPrototype.lut.h" />
- <ClInclude Include="..\jit\ArityCheckFailReturnThunks.h">
- <Filter>jit</Filter>
- </ClInclude>
<ClInclude Include="..\jit\RegisterPreservationWrapperGenerator.h">
<Filter>jit</Filter>
</ClInclude>
diff --git a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
index 1bbfa49..4480bab 100644
--- a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
+++ b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
@@ -382,12 +382,8 @@
0F6B1CBA1861244C00845D97 /* RegisterPreservationMode.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6B1CB81861244C00845D97 /* RegisterPreservationMode.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F6B1CBD1861246A00845D97 /* RegisterPreservationWrapperGenerator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6B1CBB1861246A00845D97 /* RegisterPreservationWrapperGenerator.cpp */; };
0F6B1CBE1861246A00845D97 /* RegisterPreservationWrapperGenerator.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6B1CBC1861246A00845D97 /* RegisterPreservationWrapperGenerator.h */; settings = {ATTRIBUTES = (Private, ); }; };
- 0F6B1CC31862C47800845D97 /* FTLRegisterAtOffset.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6B1CBF1862C47800845D97 /* FTLRegisterAtOffset.cpp */; };
- 0F6B1CC41862C47800845D97 /* FTLRegisterAtOffset.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6B1CC01862C47800845D97 /* FTLRegisterAtOffset.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F6B1CC51862C47800845D97 /* FTLUnwindInfo.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6B1CC11862C47800845D97 /* FTLUnwindInfo.cpp */; };
0F6B1CC61862C47800845D97 /* FTLUnwindInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6B1CC21862C47800845D97 /* FTLUnwindInfo.h */; settings = {ATTRIBUTES = (Private, ); }; };
- 0F6B1CC918641DF800845D97 /* ArityCheckFailReturnThunks.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6B1CC718641DF800845D97 /* ArityCheckFailReturnThunks.cpp */; };
- 0F6B1CCA18641DF800845D97 /* ArityCheckFailReturnThunks.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6B1CC818641DF800845D97 /* ArityCheckFailReturnThunks.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F6C73501AC9F99F00BE1682 /* VariableWriteFireDetail.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6C734E1AC9F99F00BE1682 /* VariableWriteFireDetail.cpp */; };
0F6C73511AC9F99F00BE1682 /* VariableWriteFireDetail.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6C734F1AC9F99F00BE1682 /* VariableWriteFireDetail.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F6E845A19030BEF00562741 /* DFGVariableAccessData.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6E845919030BEF00562741 /* DFGVariableAccessData.cpp */; };
@@ -987,6 +983,8 @@
6511230714046B0A002B101D /* testRegExp.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 651122E5140469BA002B101D /* testRegExp.cpp */; };
6514F21918B3E1670098FF8B /* Bytecodes.h in Headers */ = {isa = PBXBuildFile; fileRef = 6514F21718B3E1670098FF8B /* Bytecodes.h */; settings = {ATTRIBUTES = (Private, ); }; };
65303D641447B9E100D3F904 /* ParserTokens.h in Headers */ = {isa = PBXBuildFile; fileRef = 65303D631447B9E100D3F904 /* ParserTokens.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 6540C7A01B82E1C3000F6B79 /* RegisterAtOffset.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 6540C79E1B82D9CE000F6B79 /* RegisterAtOffset.cpp */; };
+ 6540C7A11B82E1C3000F6B79 /* RegisterAtOffsetList.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 6540C79C1B82D99D000F6B79 /* RegisterAtOffsetList.cpp */; };
6546F5211A32B313006F07D5 /* NullGetterFunction.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 6546F51F1A32A59C006F07D5 /* NullGetterFunction.cpp */; };
65525FC51A6DD801007B5495 /* NullSetterFunction.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65525FC31A6DD3B3007B5495 /* NullSetterFunction.cpp */; };
6553A33117A1F1EE008CF6F3 /* CommonSlowPathsExceptions.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 6553A32F17A1F1EE008CF6F3 /* CommonSlowPathsExceptions.cpp */; };
@@ -2216,12 +2214,8 @@
0F6B1CB81861244C00845D97 /* RegisterPreservationMode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegisterPreservationMode.h; sourceTree = "<group>"; };
0F6B1CBB1861246A00845D97 /* RegisterPreservationWrapperGenerator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = RegisterPreservationWrapperGenerator.cpp; sourceTree = "<group>"; };
0F6B1CBC1861246A00845D97 /* RegisterPreservationWrapperGenerator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegisterPreservationWrapperGenerator.h; sourceTree = "<group>"; };
- 0F6B1CBF1862C47800845D97 /* FTLRegisterAtOffset.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLRegisterAtOffset.cpp; path = ftl/FTLRegisterAtOffset.cpp; sourceTree = "<group>"; };
- 0F6B1CC01862C47800845D97 /* FTLRegisterAtOffset.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLRegisterAtOffset.h; path = ftl/FTLRegisterAtOffset.h; sourceTree = "<group>"; };
0F6B1CC11862C47800845D97 /* FTLUnwindInfo.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLUnwindInfo.cpp; path = ftl/FTLUnwindInfo.cpp; sourceTree = "<group>"; };
0F6B1CC21862C47800845D97 /* FTLUnwindInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLUnwindInfo.h; path = ftl/FTLUnwindInfo.h; sourceTree = "<group>"; };
- 0F6B1CC718641DF800845D97 /* ArityCheckFailReturnThunks.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ArityCheckFailReturnThunks.cpp; sourceTree = "<group>"; };
- 0F6B1CC818641DF800845D97 /* ArityCheckFailReturnThunks.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ArityCheckFailReturnThunks.h; sourceTree = "<group>"; };
0F6C734E1AC9F99F00BE1682 /* VariableWriteFireDetail.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = VariableWriteFireDetail.cpp; sourceTree = "<group>"; };
0F6C734F1AC9F99F00BE1682 /* VariableWriteFireDetail.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = VariableWriteFireDetail.h; sourceTree = "<group>"; };
0F6E845919030BEF00562741 /* DFGVariableAccessData.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGVariableAccessData.cpp; path = dfg/DFGVariableAccessData.cpp; sourceTree = "<group>"; };
@@ -2801,6 +2795,10 @@
652A3A231651C69700A80AFE /* A64DOpcode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = A64DOpcode.h; path = disassembler/ARM64/A64DOpcode.h; sourceTree = "<group>"; };
65303D631447B9E100D3F904 /* ParserTokens.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ParserTokens.h; sourceTree = "<group>"; };
65400C100A69BAF200509887 /* PropertyNameArray.h */ = {isa = PBXFileReference; fileEncoding = 30; lastKnownFileType = sourcecode.c.h; path = PropertyNameArray.h; sourceTree = "<group>"; };
+ 6540C79C1B82D99D000F6B79 /* RegisterAtOffsetList.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = RegisterAtOffsetList.cpp; sourceTree = "<group>"; };
+ 6540C79D1B82D99D000F6B79 /* RegisterAtOffsetList.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegisterAtOffsetList.h; sourceTree = "<group>"; };
+ 6540C79E1B82D9CE000F6B79 /* RegisterAtOffset.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = RegisterAtOffset.cpp; sourceTree = "<group>"; };
+ 6540C79F1B82D9CE000F6B79 /* RegisterAtOffset.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegisterAtOffset.h; sourceTree = "<group>"; };
6546F51F1A32A59C006F07D5 /* NullGetterFunction.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; lineEnding = 0; path = NullGetterFunction.cpp; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.cpp; };
6546F5201A32A59C006F07D5 /* NullGetterFunction.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = NullGetterFunction.h; sourceTree = "<group>"; };
65525FC31A6DD3B3007B5495 /* NullSetterFunction.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = NullSetterFunction.cpp; sourceTree = "<group>"; };
@@ -3972,8 +3970,6 @@
0FEA0A06170513DB00BB722C /* FTLOutput.h */,
0F485325187DFDEC0083B687 /* FTLRecoveryOpcode.cpp */,
0F485326187DFDEC0083B687 /* FTLRecoveryOpcode.h */,
- 0F6B1CBF1862C47800845D97 /* FTLRegisterAtOffset.cpp */,
- 0F6B1CC01862C47800845D97 /* FTLRegisterAtOffset.h */,
0FCEFAA91804C13E00472CE4 /* FTLSaveRestore.cpp */,
0FCEFAAA1804C13E00472CE4 /* FTLSaveRestore.h */,
0F25F1AA181635F300522F39 /* FTLSlowPathCall.cpp */,
@@ -4104,8 +4100,6 @@
0FF054F81AC35B4400E5BE57 /* ExecutableAllocationFuzz.h */,
0F7576D018E1FEE9002EF4CD /* AccessorCallJITStubRoutine.cpp */,
0F7576D118E1FEE9002EF4CD /* AccessorCallJITStubRoutine.h */,
- 0F6B1CC718641DF800845D97 /* ArityCheckFailReturnThunks.cpp */,
- 0F6B1CC818641DF800845D97 /* ArityCheckFailReturnThunks.h */,
0F24E53B17EA9F5900ABB217 /* AssemblyHelpers.cpp */,
0F24E53C17EA9F5900ABB217 /* AssemblyHelpers.h */,
0F64B26F1A784BAF006E4E66 /* BinarySwitch.cpp */,
@@ -4158,12 +4152,14 @@
0FC712E117CD878F008CC93C /* JITToDFGDeferredCompilationCallback.h */,
A76F54A213B28AAB00EF2BCE /* JITWriteBarrier.h */,
A76C51741182748D00715B05 /* JSInterfaceJIT.h */,
- 0FEE98421A89227500754E93 /* SetupVarargsFrame.cpp */,
- 0FEE98401A8865B600754E93 /* SetupVarargsFrame.h */,
0FE834151A6EF97B00D04847 /* PolymorphicCallStubRoutine.cpp */,
0FE834161A6EF97B00D04847 /* PolymorphicCallStubRoutine.h */,
0FA7A8E918B413C80052371D /* Reg.cpp */,
0FA7A8EA18B413C80052371D /* Reg.h */,
+ 6540C79E1B82D9CE000F6B79 /* RegisterAtOffset.cpp */,
+ 6540C79F1B82D9CE000F6B79 /* RegisterAtOffset.h */,
+ 6540C79C1B82D99D000F6B79 /* RegisterAtOffsetList.cpp */,
+ 6540C79D1B82D99D000F6B79 /* RegisterAtOffsetList.h */,
623A37EB1B87A7BD00754209 /* RegisterMap.h */,
0F6B1CBB1861246A00845D97 /* RegisterPreservationWrapperGenerator.cpp */,
0F6B1CBC1861246A00845D97 /* RegisterPreservationWrapperGenerator.h */,
@@ -4173,6 +4169,8 @@
0F24E54A17EE274900ABB217 /* Repatch.h */,
0FA7A8ED18CE4FD80052371D /* ScratchRegisterAllocator.cpp */,
0F24E54B17EE274900ABB217 /* ScratchRegisterAllocator.h */,
+ 0FEE98421A89227500754E93 /* SetupVarargsFrame.cpp */,
+ 0FEE98401A8865B600754E93 /* SetupVarargsFrame.h */,
A709F2EF17A0AC0400512E98 /* SlowPathCall.h */,
A7386551118697B400540279 /* SpecializedThunkJIT.h */,
A7FF647A18C52E8500B55307 /* SpillRegistersMode.h */,
@@ -5925,7 +5923,6 @@
BC18C3E50E16F5CD00B34460 /* APICast.h in Headers */,
BCF605140E203EF800B9A64D /* ArgList.h in Headers */,
2A88067919107D5500CB0BBB /* DFGFunctionWhitelist.h in Headers */,
- 0F6B1CCA18641DF800845D97 /* ArityCheckFailReturnThunks.h in Headers */,
0F6B1CB91861244C00845D97 /* ArityCheckMode.h in Headers */,
A1A009C11831A26E00CF8711 /* ARM64Assembler.h in Headers */,
0F898F321B27689F0083A33C /* DFGIntegerRangeOptimizationPhase.h in Headers */,
@@ -6325,7 +6322,6 @@
9E72940B190F0514001A91B5 /* BundlePath.h in Headers */,
0F48532A187DFDEC0083B687 /* FTLRecoveryOpcode.h in Headers */,
E3794E761B77EB97005543AE /* ModuleAnalyzer.h in Headers */,
- 0F6B1CC41862C47800845D97 /* FTLRegisterAtOffset.h in Headers */,
0FCEFAAC1804C13E00472CE4 /* FTLSaveRestore.h in Headers */,
0F25F1B2181635F300522F39 /* FTLSlowPathCall.h in Headers */,
E354622B1B6065D100545386 /* ConstructAbility.h in Headers */,
@@ -7392,7 +7388,6 @@
0FE050151AA9091100D33B33 /* DirectArgumentsOffset.cpp in Sources */,
0F55F0F414D1063900AC7649 /* AbstractPC.cpp in Sources */,
147F39BD107EC37600427A48 /* ArgList.cpp in Sources */,
- 0F6B1CC918641DF800845D97 /* ArityCheckFailReturnThunks.cpp in Sources */,
797E07A91B8FCFB9008400BA /* JSGlobalLexicalEnvironment.cpp in Sources */,
E3794E751B77EB97005543AE /* ModuleAnalyzer.cpp in Sources */,
0F743BAA16B88249009F9277 /* ARM64Disassembler.cpp in Sources */,
@@ -7655,7 +7650,6 @@
8B0F424C1ABD6DE2003917EA /* JSArrowFunction.cpp in Sources */,
0FEA0A2A1709629600BB722C /* FTLOutput.cpp in Sources */,
0F485329187DFDEC0083B687 /* FTLRecoveryOpcode.cpp in Sources */,
- 0F6B1CC31862C47800845D97 /* FTLRegisterAtOffset.cpp in Sources */,
0FCEFAAB1804C13E00472CE4 /* FTLSaveRestore.cpp in Sources */,
0F25F1B1181635F300522F39 /* FTLSlowPathCall.cpp in Sources */,
0F25F1B3181635F300522F39 /* FTLSlowPathCallKey.cpp in Sources */,
@@ -7926,6 +7920,8 @@
0F3E01AA19D353A500F61B7F /* DFGPrePostNumbering.cpp in Sources */,
0FF60AC316740F8800029779 /* ReduceWhitespace.cpp in Sources */,
0FA7A8EB18B413C80052371D /* Reg.cpp in Sources */,
+ 6540C7A11B82E1C3000F6B79 /* RegisterAtOffsetList.cpp in Sources */,
+ 6540C7A01B82E1C3000F6B79 /* RegisterAtOffset.cpp in Sources */,
14280841107EC0930013E7B2 /* RegExp.cpp in Sources */,
A1712B3B11C7B212007A5315 /* RegExpCache.cpp in Sources */,
0FE0502C1AA9095600D33B33 /* VarOffset.cpp in Sources */,
diff --git a/Source/JavaScriptCore/bytecode/CallLinkInfo.h b/Source/JavaScriptCore/bytecode/CallLinkInfo.h
index 55b033b..5f5c6b4 100644
--- a/Source/JavaScriptCore/bytecode/CallLinkInfo.h
+++ b/Source/JavaScriptCore/bytecode/CallLinkInfo.h
@@ -117,7 +117,7 @@
CodeLocationNearCall callReturnLocation, CodeLocationDataLabelPtr hotPathBegin,
CodeLocationNearCall hotPathOther, unsigned calleeGPR)
{
- m_registerPreservationMode = static_cast<unsigned>(MustPreserveRegisters);
+ m_registerPreservationMode = static_cast<unsigned>(RegisterPreservationNotRequired);
m_callType = callType;
m_codeOrigin = codeOrigin;
m_callReturnLocation = callReturnLocation;
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.cpp b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
index 6143113..a43d64c 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
@@ -69,6 +69,10 @@
#include <wtf/StringPrintStream.h>
#include <wtf/text/UniquedStringImpl.h>
+#if ENABLE(JIT)
+#include "RegisterAtOffsetList.h"
+#endif
+
#if ENABLE(DFG_JIT)
#include "DFGOperations.h"
#endif
@@ -1887,6 +1891,10 @@
if (size_t size = unlinkedCodeBlock->numberOfObjectAllocationProfiles())
m_objectAllocationProfiles.resizeToFit(size);
+#if ENABLE(JIT)
+ setCalleeSaveRegisters(RegisterSet::llintBaselineCalleeSaveRegisters());
+#endif
+
// Copy and translate the UnlinkedInstructions
unsigned instructionCount = unlinkedCodeBlock->instructions().count();
UnlinkedInstructionStream::Reader instructionReader(unlinkedCodeBlock->instructions());
@@ -3329,6 +3337,33 @@
}
#if ENABLE(JIT)
+void CodeBlock::setCalleeSaveRegisters(RegisterSet calleeSaveRegisters)
+{
+ m_calleeSaveRegisters = std::make_unique<RegisterAtOffsetList>(calleeSaveRegisters);
+}
+
+void CodeBlock::setCalleeSaveRegisters(std::unique_ptr<RegisterAtOffsetList> registerAtOffsetList)
+{
+ m_calleeSaveRegisters = WTF::move(registerAtOffsetList);
+}
+
+static size_t roundCalleeSaveSpaceAsVirtualRegisters(size_t calleeSaveRegisters)
+{
+ static const unsigned cpuRegisterSize = sizeof(void*);
+ return (WTF::roundUpToMultipleOf(sizeof(Register), calleeSaveRegisters * cpuRegisterSize) / sizeof(Register));
+}
+
+size_t CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters()
+{
+ return roundCalleeSaveSpaceAsVirtualRegisters(numberOfLLIntBaselineCalleeSaveRegisters());
+}
+
+size_t CodeBlock::calleeSaveSpaceAsVirtualRegisters()
+{
+ return roundCalleeSaveSpaceAsVirtualRegisters(m_calleeSaveRegisters->size());
+}
+
void CodeBlock::countReoptimization()
{
m_reoptimizationRetryCounter++;
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.h b/Source/JavaScriptCore/bytecode/CodeBlock.h
index a51d0cd..4da4896 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.h
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.h
@@ -80,6 +80,7 @@
class ExecState;
class LLIntOffsetsExtractor;
+class RegisterAtOffsetList;
class TypeLocation;
class JSModuleEnvironment;
@@ -731,6 +732,10 @@
JS_EXPORT_PRIVATE unsigned reoptimizationRetryCounter() const;
void countReoptimization();
#if ENABLE(JIT)
+ static unsigned numberOfLLIntBaselineCalleeSaveRegisters() { return RegisterSet::llintBaselineCalleeSaveRegisters().numberOfSetRegisters(); }
+ static size_t llintBaselineCalleeSaveSpaceAsVirtualRegisters();
+ size_t calleeSaveSpaceAsVirtualRegisters();
+
unsigned numberOfDFGCompiles();
int32_t codeTypeThresholdMultiplier() const;
@@ -816,7 +821,14 @@
uint32_t exitCountThresholdForReoptimizationFromLoop();
bool shouldReoptimizeNow();
bool shouldReoptimizeFromLoopNow();
+
+ void setCalleeSaveRegisters(RegisterSet);
+ void setCalleeSaveRegisters(std::unique_ptr<RegisterAtOffsetList>);
+
+ RegisterAtOffsetList* calleeSaveRegisters() const { return m_calleeSaveRegisters.get(); }
#else // No JIT
+ static unsigned numberOfLLIntBaselineCalleeSaveRegisters() { return 0; }
+    static size_t llintBaselineCalleeSaveSpaceAsVirtualRegisters() { return 0; }
void optimizeAfterWarmUp() { }
unsigned numberOfDFGCompiles() { return 0; }
#endif
@@ -855,6 +867,7 @@
// FIXME: Make these remaining members private.
+ int m_numLocalRegistersForCalleeSaves;
int m_numCalleeRegisters;
int m_numVars;
bool m_isConstructor : 1;
@@ -1015,6 +1028,7 @@
SentinelLinkedList<LLIntCallLinkInfo, BasicRawSentinelNode<LLIntCallLinkInfo>> m_incomingLLIntCalls;
RefPtr<JITCode> m_jitCode;
#if ENABLE(JIT)
+ std::unique_ptr<RegisterAtOffsetList> m_calleeSaveRegisters;
Bag<StructureStubInfo> m_stubInfos;
Bag<ByValInfo> m_byValInfos;
Bag<CallLinkInfo> m_callLinkInfos;
diff --git a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
index 437353f..a0b012b 100644
--- a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
+++ b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
@@ -158,6 +158,8 @@
for (auto& constantRegister : m_linkTimeConstantRegisters)
constantRegister = nullptr;
+ allocateCalleeSaveSpace();
+
m_codeBlock->setNumParameters(1); // Allocate space for "this"
emitOpcode(op_enter);
@@ -198,6 +200,8 @@
if (m_isBuiltinFunction)
m_shouldEmitDebugHooks = false;
+
+ allocateCalleeSaveSpace();
SymbolTable* functionSymbolTable = SymbolTable::create(*m_vm);
functionSymbolTable->setUsesNonStrictEval(m_usesNonStrictEval);
@@ -493,6 +497,8 @@
for (auto& constantRegister : m_linkTimeConstantRegisters)
constantRegister = nullptr;
+ allocateCalleeSaveSpace();
+
m_codeBlock->setNumParameters(1);
emitOpcode(op_enter);
@@ -537,6 +543,8 @@
if (m_isBuiltinFunction)
m_shouldEmitDebugHooks = false;
+ allocateCalleeSaveSpace();
+
SymbolTable* moduleEnvironmentSymbolTable = SymbolTable::create(*m_vm);
moduleEnvironmentSymbolTable->setUsesNonStrictEval(m_usesNonStrictEval);
moduleEnvironmentSymbolTable->setScopeType(SymbolTable::ScopeType::LexicalScope);
@@ -3064,6 +3072,17 @@
return LabelScopePtr::null();
}
+void BytecodeGenerator::allocateCalleeSaveSpace()
+{
+ size_t virtualRegisterCountForCalleeSaves = CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters();
+
+ for (size_t i = 0; i < virtualRegisterCountForCalleeSaves; i++) {
+ RegisterID* localRegister = addVar();
+ localRegister->ref();
+ m_localRegistersForCalleeSaveRegisters.append(localRegister);
+ }
+}
+
void BytecodeGenerator::allocateAndEmitScope()
{
m_scopeRegister = addVar();
diff --git a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h
index 769dc39..dc31f7a 100644
--- a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h
+++ b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h
@@ -697,6 +697,7 @@
ALWAYS_INLINE void rewindBinaryOp();
ALWAYS_INLINE void rewindUnaryOp();
+ void allocateCalleeSaveSpace();
void allocateAndEmitScope();
RegisterID* emitLoadArrowFunctionThis(RegisterID*);
void emitComplexPopScopes(RegisterID*, ControlFlowContext* topScope, ControlFlowContext* bottomScope);
@@ -810,6 +811,7 @@
RegisterID* m_newTargetRegister { nullptr };
RegisterID* m_linkTimeConstantRegisters[LinkTimeConstantCount];
+ SegmentedVector<RegisterID*, 16> m_localRegistersForCalleeSaveRegisters;
SegmentedVector<RegisterID, 32> m_constantPoolRegisters;
SegmentedVector<RegisterID, 32> m_calleeRegisters;
SegmentedVector<RegisterID, 32> m_parameters;
diff --git a/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp b/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
index 1718df6..2f65ae7 100644
--- a/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
+++ b/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
@@ -28,7 +28,6 @@
#if ENABLE(DFG_JIT)
-#include "ArityCheckFailReturnThunks.h"
#include "CodeBlock.h"
#include "DFGFailedFinalizer.h"
#include "DFGInlineCacheWrapperInlines.h"
@@ -102,7 +101,12 @@
// both normal return code and when jumping to an exception handler).
emitFunctionPrologue();
emitPutImmediateToCallFrameHeader(m_codeBlock, JSStack::CodeBlock);
- jitAssertTagsInPlace();
+}
+
+void JITCompiler::compileSetupRegistersForEntry()
+{
+ emitSaveCalleeSaves();
+ emitMaterializeTagCheckRegisters();
}
void JITCompiler::compileBody()
@@ -119,6 +123,8 @@
if (!m_exceptionChecksWithCallFrameRollback.empty()) {
m_exceptionChecksWithCallFrameRollback.link(this);
+ copyCalleeSavesToVMCalleeSavesBuffer();
+
// lookupExceptionHandlerFromCallerFrame is passed two arguments, the VM and the exec (the CallFrame*).
move(TrustedImmPtr(vm()), GPRInfo::argumentGPR0);
move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
@@ -137,6 +143,8 @@
if (!m_exceptionChecks.empty()) {
m_exceptionChecks.link(this);
+ copyCalleeSavesToVMCalleeSavesBuffer();
+
// lookupExceptionHandler is passed two arguments, the VM and the exec (the CallFrame*).
move(TrustedImmPtr(vm()), GPRInfo::argumentGPR0);
move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
@@ -294,6 +302,7 @@
addPtr(TrustedImm32(m_graph.stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, stackPointerRegister);
checkStackPointerAlignment();
+ compileSetupRegistersForEntry();
compileBody();
setEndOfMainPath();
@@ -358,6 +367,8 @@
addPtr(TrustedImm32(m_graph.stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, stackPointerRegister);
checkStackPointerAlignment();
+ compileSetupRegistersForEntry();
+
// === Function body code generation ===
m_speculative = std::make_unique<SpeculativeJIT>(*this);
compileBody();
@@ -397,11 +408,6 @@
addPtr(TrustedImm32(maxFrameExtentForSlowPathCall), stackPointerRegister);
branchTest32(Zero, GPRInfo::returnValueGPR).linkTo(fromArityCheck, this);
emitStoreCodeOrigin(CodeOrigin(0));
- GPRReg thunkReg = GPRInfo::argumentGPR1;
- CodeLocationLabel* arityThunkLabels =
- m_vm->arityCheckFailReturnThunks->returnPCsFor(*m_vm, m_codeBlock->numParameters());
- move(TrustedImmPtr(arityThunkLabels), thunkReg);
- loadPtr(BaseIndex(thunkReg, GPRInfo::returnValueGPR, timesPtr()), thunkReg);
move(GPRInfo::returnValueGPR, GPRInfo::argumentGPR0);
m_callArityFixup = call();
jump(fromArityCheck);
diff --git a/Source/JavaScriptCore/dfg/DFGJITCompiler.h b/Source/JavaScriptCore/dfg/DFGJITCompiler.h
index e9dca25..7db7426 100644
--- a/Source/JavaScriptCore/dfg/DFGJITCompiler.h
+++ b/Source/JavaScriptCore/dfg/DFGJITCompiler.h
@@ -265,6 +265,7 @@
// Internal implementation to compile.
void compileEntry();
+ void compileSetupRegistersForEntry();
void compileBody();
void link(LinkBuffer&);
diff --git a/Source/JavaScriptCore/dfg/DFGOSREntry.cpp b/Source/JavaScriptCore/dfg/DFGOSREntry.cpp
index d85765f..48a4f01 100644
--- a/Source/JavaScriptCore/dfg/DFGOSREntry.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSREntry.cpp
@@ -308,8 +308,25 @@
continue;
pivot[i] = JSValue();
}
+
+ // 6) Copy our callee saves to buffer.
+#if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+ RegisterAtOffsetList* registerSaveLocations = codeBlock->calleeSaveRegisters();
+ RegisterAtOffsetList* allCalleeSaves = vm->getAllCalleeSaveRegisterOffsets();
+ RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
+
+ unsigned registerCount = registerSaveLocations->size();
+ for (unsigned i = 0; i < registerCount; i++) {
+ RegisterAtOffset currentEntry = registerSaveLocations->at(i);
+ if (dontSaveRegisters.get(currentEntry.reg()))
+ continue;
+ RegisterAtOffset* vmCalleeSavesEntry = allCalleeSaves->find(currentEntry.reg());
+
+ *(bitwise_cast<intptr_t*>(pivot - 1) - currentEntry.offsetAsIndex()) = vm->calleeSaveRegistersBuffer[vmCalleeSavesEntry->offsetAsIndex()];
+ }
+#endif
- // 6) Fix the call frame to have the right code block.
+ // 7) Fix the call frame to have the right code block.
*bitwise_cast<CodeBlock**>(pivot - 1 - JSStack::CodeBlock) = codeBlock;
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExitCompiler32_64.cpp b/Source/JavaScriptCore/dfg/DFGOSRExitCompiler32_64.cpp
index e1b72cf..3486883 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExitCompiler32_64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExitCompiler32_64.cpp
@@ -244,12 +244,21 @@
-m_jit.codeBlock()->jitCode()->dfgCommon()->requiredRegisterCountForExit * sizeof(Register)),
CCallHelpers::framePointerRegister, CCallHelpers::stackPointerRegister);
+ // Restore the DFG callee saves and then save the ones the baseline JIT uses.
+ m_jit.emitRestoreCalleeSaves();
+ m_jit.emitSaveCalleeSavesFor(m_jit.baselineCodeBlock());
+
// Do all data format conversions and store the results into the stack.
for (size_t index = 0; index < operands.size(); ++index) {
const ValueRecovery& recovery = operands[index];
- int operand = operands.operandForIndex(index);
-
+ VirtualRegister reg = operands.virtualRegisterForIndex(index);
+
+ if (reg.isLocal() && reg.toLocal() < static_cast<int>(m_jit.baselineCodeBlock()->calleeSaveSpaceAsVirtualRegisters()))
+ continue;
+
+ int operand = reg.offset();
+
switch (recovery.technique()) {
case InPair:
case DisplacedInJSStack:
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExitCompiler64.cpp b/Source/JavaScriptCore/dfg/DFGOSRExitCompiler64.cpp
index f547554..b225879 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExitCompiler64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExitCompiler64.cpp
@@ -202,7 +202,7 @@
}
// And voila, all GPRs are free to reuse.
-
+
// Save all state from FPRs into the scratch buffer.
for (size_t index = 0; index < operands.size(); ++index) {
@@ -254,12 +254,24 @@
-m_jit.codeBlock()->jitCode()->dfgCommon()->requiredRegisterCountForExit * sizeof(Register)),
CCallHelpers::framePointerRegister, CCallHelpers::stackPointerRegister);
+ // Restore the DFG callee saves and then save the ones the baseline JIT uses.
+ m_jit.emitRestoreCalleeSaves();
+ m_jit.emitSaveCalleeSavesFor(m_jit.baselineCodeBlock());
+
+ // The tag registers are needed to materialize recoveries below.
+ m_jit.emitMaterializeTagCheckRegisters();
+
// Do all data format conversions and store the results into the stack.
for (size_t index = 0; index < operands.size(); ++index) {
const ValueRecovery& recovery = operands[index];
- int operand = operands.operandForIndex(index);
-
+ VirtualRegister reg = operands.virtualRegisterForIndex(index);
+
+ if (reg.isLocal() && reg.toLocal() < static_cast<int>(m_jit.baselineCodeBlock()->calleeSaveSpaceAsVirtualRegisters()))
+ continue;
+
+ int operand = reg.offset();
+
switch (recovery.technique()) {
case InGPR:
case UnboxedCellInGPR:
@@ -320,7 +332,7 @@
break;
}
}
-
+
// Now that things on the stack are recovered, do the arguments recovery. We assume that arguments
// recoveries don't recursively refer to each other. But, we don't try to assume that they only
// refer to certain ranges of locals. Hence why we need to do this here, once the stack is sensible.
@@ -370,7 +382,7 @@
// Reify inlined call frames.
reifyInlinedCallFrames(m_jit, exit);
-
+
// And finish.
adjustAndJumpToTarget(m_jit, exit);
}
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp b/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
index 2c16a45..b28cffa 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
@@ -197,6 +197,7 @@
jit.storePtr(AssemblyHelpers::TrustedImmPtr(trueReturnPC), AssemblyHelpers::addressFor(inlineCallFrame->stackOffset + virtualRegisterForArgument(inlineCallFrame->arguments.size()).offset()));
jit.storePtr(AssemblyHelpers::TrustedImmPtr(baselineCodeBlock), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::CodeBlock)));
+ jit.emitSaveCalleeSavesFor(baselineCodeBlock, static_cast<VirtualRegister>(inlineCallFrame->stackOffset));
if (!inlineCallFrame->isVarargs())
jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->arguments.size()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
#if USE(JSVALUE64)
@@ -250,14 +251,14 @@
void adjustAndJumpToTarget(CCallHelpers& jit, const OSRExitBase& exit)
{
#if ENABLE(GGC)
- jit.move(AssemblyHelpers::TrustedImmPtr(jit.codeBlock()->ownerExecutable()), GPRInfo::nonArgGPR0);
- osrWriteBarrier(jit, GPRInfo::nonArgGPR0, GPRInfo::nonArgGPR1);
+ jit.move(AssemblyHelpers::TrustedImmPtr(jit.codeBlock()->ownerExecutable()), GPRInfo::argumentGPR1);
+ osrWriteBarrier(jit, GPRInfo::argumentGPR1, GPRInfo::nonArgGPR0);
InlineCallFrameSet* inlineCallFrames = jit.codeBlock()->jitCode()->dfgCommon()->inlineCallFrames.get();
if (inlineCallFrames) {
for (InlineCallFrame* inlineCallFrame : *inlineCallFrames) {
ScriptExecutable* ownerExecutable = inlineCallFrame->executable.get();
- jit.move(AssemblyHelpers::TrustedImmPtr(ownerExecutable), GPRInfo::nonArgGPR0);
- osrWriteBarrier(jit, GPRInfo::nonArgGPR0, GPRInfo::nonArgGPR1);
+ jit.move(AssemblyHelpers::TrustedImmPtr(ownerExecutable), GPRInfo::argumentGPR1);
+ osrWriteBarrier(jit, GPRInfo::argumentGPR1, GPRInfo::nonArgGPR0);
}
}
#endif
@@ -277,8 +278,6 @@
jit.addPtr(AssemblyHelpers::TrustedImm32(JIT::stackPointerOffsetFor(baselineCodeBlock) * sizeof(Register)), GPRInfo::callFrameRegister, AssemblyHelpers::stackPointerRegister);
- jit.jitAssertTagsInPlace();
-
jit.move(AssemblyHelpers::TrustedImmPtr(jumpTarget), GPRInfo::regT2);
jit.jump(GPRInfo::regT2);
}
diff --git a/Source/JavaScriptCore/dfg/DFGPlan.cpp b/Source/JavaScriptCore/dfg/DFGPlan.cpp
index d0d5360..ab14b5a 100644
--- a/Source/JavaScriptCore/dfg/DFGPlan.cpp
+++ b/Source/JavaScriptCore/dfg/DFGPlan.cpp
@@ -242,6 +242,8 @@
finalizer = std::make_unique<FailedFinalizer>(*this);
return FailPath;
}
+
+ codeBlock->setCalleeSaveRegisters(RegisterSet::dfgCalleeSaveRegisters());
// By this point the DFG bytecode parser will have potentially mutated various tables
// in the CodeBlock. This is a good time to perform an early shrink, which is more
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
index 76cee35..2076d05 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
@@ -311,7 +311,7 @@
result.set(fpr);
}
- result.merge(RegisterSet::specialRegisters());
+ result.merge(RegisterSet::stubUnavailableRegisters());
return result;
}
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
index 267f073..daf62f5 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
@@ -3131,6 +3131,7 @@
}
}
+ m_jit.emitRestoreCalleeSaves();
m_jit.emitFunctionEpilogue();
m_jit.ret();
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
index 0c0bb18..11ec7a5 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
@@ -3190,6 +3190,7 @@
JSValueOperand op1(this, node->child1());
m_jit.move(op1.gpr(), GPRInfo::returnValueGPR);
+ m_jit.emitRestoreCalleeSaves();
m_jit.emitFunctionEpilogue();
m_jit.ret();
@@ -4790,6 +4791,7 @@
TrustedImm32(m_stream->size()));
appendCallSetResult(triggerOSREntryNow, tempGPR);
MacroAssembler::Jump dontEnter = m_jit.branchTestPtr(MacroAssembler::Zero, tempGPR);
+ m_jit.emitRestoreCalleeSaves();
m_jit.jump(tempGPR);
dontEnter.link(&m_jit);
silentFillAllRegisters(tempGPR);
diff --git a/Source/JavaScriptCore/dfg/DFGStackLayoutPhase.cpp b/Source/JavaScriptCore/dfg/DFGStackLayoutPhase.cpp
index 0713720..526a0f5 100644
--- a/Source/JavaScriptCore/dfg/DFGStackLayoutPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGStackLayoutPhase.cpp
@@ -129,7 +129,7 @@
}
Vector<unsigned> allocation(usedLocals.size());
- m_graph.m_nextMachineLocal = 0;
+ m_graph.m_nextMachineLocal = codeBlock()->calleeSaveSpaceAsVirtualRegisters();
for (unsigned i = 0; i < usedLocals.size(); ++i) {
if (!usedLocals.get(i)) {
allocation[i] = UINT_MAX;
diff --git a/Source/JavaScriptCore/ftl/FTLCompile.cpp b/Source/JavaScriptCore/ftl/FTLCompile.cpp
index 6acbcd3..30abdf0 100644
--- a/Source/JavaScriptCore/ftl/FTLCompile.cpp
+++ b/Source/JavaScriptCore/ftl/FTLCompile.cpp
@@ -329,7 +329,7 @@
static void fixFunctionBasedOnStackMaps(
State& state, CodeBlock* codeBlock, JITCode* jitCode, GeneratedFunction generatedFunction,
- StackMaps::RecordMap& recordMap, bool didSeeUnwindInfo)
+ StackMaps::RecordMap& recordMap)
{
Graph& graph = state.graph;
VM& vm = graph.m_vm;
@@ -365,12 +365,14 @@
// At this point it's perfectly fair to just blow away all state and restore the
// JS JIT view of the universe.
+ checkJIT.copyCalleeSavesToVMCalleeSavesBuffer();
checkJIT.move(MacroAssembler::TrustedImmPtr(&vm), GPRInfo::argumentGPR0);
checkJIT.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
MacroAssembler::Call callLookupExceptionHandler = checkJIT.call();
checkJIT.jumpToExceptionHandler();
stackOverflowException = checkJIT.label();
+ checkJIT.copyCalleeSavesToVMCalleeSavesBuffer();
checkJIT.move(MacroAssembler::TrustedImmPtr(&vm), GPRInfo::argumentGPR0);
checkJIT.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
MacroAssembler::Call callLookupExceptionHandlerFromCallerFrame = checkJIT.call();
@@ -392,7 +394,6 @@
exitThunkGenerator.emitThunks();
if (exitThunkGenerator.didThings()) {
RELEASE_ASSERT(state.finalizer->osrExit.size());
- RELEASE_ASSERT(didSeeUnwindInfo);
auto linkBuffer = std::make_unique<LinkBuffer>(
vm, exitThunkGenerator, codeBlock, JITCompilationCanFail);
@@ -814,16 +815,14 @@
}
}
- bool didSeeUnwindInfo = state.jitCode->unwindInfo.parse(
+ std::unique_ptr<RegisterAtOffsetList> registerOffsets = parseUnwindInfo(
state.unwindDataSection, state.unwindDataSectionSize,
state.generatedFunction);
if (shouldShowDisassembly()) {
dataLog("Unwind info for ", CodeBlockWithJITType(state.graph.m_codeBlock, JITCode::FTLJIT), ":\n");
- if (didSeeUnwindInfo)
- dataLog(" ", state.jitCode->unwindInfo, "\n");
- else
- dataLog(" <no unwind info>\n");
+ dataLog(" ", *registerOffsets, "\n");
}
+ state.graph.m_codeBlock->setCalleeSaveRegisters(WTF::move(registerOffsets));
if (state.stackmapsSection && state.stackmapsSection->size()) {
if (shouldShowDisassembly()) {
@@ -846,7 +845,7 @@
StackMaps::RecordMap recordMap = state.jitCode->stackmaps.computeRecordMap();
fixFunctionBasedOnStackMaps(
state, state.graph.m_codeBlock, state.jitCode.get(), state.generatedFunction,
- recordMap, didSeeUnwindInfo);
+ recordMap);
if (state.allocationFailed)
return;
diff --git a/Source/JavaScriptCore/ftl/FTLJITCode.h b/Source/JavaScriptCore/ftl/FTLJITCode.h
index 6a4d152..b077033 100644
--- a/Source/JavaScriptCore/ftl/FTLJITCode.h
+++ b/Source/JavaScriptCore/ftl/FTLJITCode.h
@@ -84,7 +84,6 @@
DFG::CommonData common;
SegmentedVector<OSRExit, 8> osrExit;
StackMaps stackmaps;
- UnwindInfo unwindInfo;
private:
Vector<RefPtr<DataSection>> m_dataSections;
diff --git a/Source/JavaScriptCore/ftl/FTLLink.cpp b/Source/JavaScriptCore/ftl/FTLLink.cpp
index fa934c3..80e24a3 100644
--- a/Source/JavaScriptCore/ftl/FTLLink.cpp
+++ b/Source/JavaScriptCore/ftl/FTLLink.cpp
@@ -28,7 +28,6 @@
#if ENABLE(FTL_JIT)
-#include "ArityCheckFailReturnThunks.h"
#include "CCallHelpers.h"
#include "CodeBlockWithJITType.h"
#include "DFGCommon.h"
@@ -54,9 +53,7 @@
// LLVM will create its own jump tables as needed.
codeBlock->clearSwitchJumpTables();
- // FIXME: Need to know the real frame register count.
- // https://bugs.webkit.org/show_bug.cgi?id=125727
- state.jitCode->common.frameRegisterCount = 1000;
+ state.jitCode->common.frameRegisterCount = state.jitCode->stackmaps.stackSizeForLocals() / sizeof(void*);
state.jitCode->common.requiredRegisterCountForExit = graph.requiredRegisterCountForExit();
@@ -169,10 +166,6 @@
jit.emitFunctionEpilogue();
mainPathJumps.append(jit.branchTest32(CCallHelpers::Zero, GPRInfo::argumentGPR0));
jit.emitFunctionPrologue();
- CodeLocationLabel* arityThunkLabels =
- vm.arityCheckFailReturnThunks->returnPCsFor(vm, codeBlock->numParameters());
- jit.move(CCallHelpers::TrustedImmPtr(arityThunkLabels), GPRInfo::argumentGPR1);
- jit.loadPtr(CCallHelpers::BaseIndex(GPRInfo::argumentGPR1, GPRInfo::argumentGPR0, CCallHelpers::timesPtr()), GPRInfo::argumentGPR1);
CCallHelpers::Call callArityFixup = jit.call();
jit.emitFunctionEpilogue();
mainPathJumps.append(jit.jump());
diff --git a/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp b/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp
index 57900e6..1b19051 100644
--- a/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp
+++ b/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp
@@ -211,7 +211,7 @@
sizeof(EncodedJSValue) * (
exit.m_values.size() + numMaterializations + maxMaterializationNumArguments) +
requiredScratchMemorySizeInBytes() +
- jitCode->unwindInfo.m_registers.size() * sizeof(uint64_t));
+ codeBlock->calleeSaveRegisters()->size() * sizeof(uint64_t));
EncodedJSValue* scratch = scratchBuffer ? static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer()) : 0;
EncodedJSValue* materializationPointers = scratch + exit.m_values.size();
EncodedJSValue* materializationArguments = materializationPointers + numMaterializations;
@@ -384,8 +384,8 @@
// Before we start messing with the frame, we need to set aside any registers that the
// FTL code was preserving.
- for (unsigned i = jitCode->unwindInfo.m_registers.size(); i--;) {
- RegisterAtOffset entry = jitCode->unwindInfo.m_registers[i];
+ for (unsigned i = codeBlock->calleeSaveRegisters()->size(); i--;) {
+ RegisterAtOffset entry = codeBlock->calleeSaveRegisters()->at(i);
jit.load64(
MacroAssembler::Address(MacroAssembler::framePointerRegister, entry.offset()),
GPRInfo::regT0);
@@ -432,10 +432,12 @@
jit.add32(GPRInfo::regT3, GPRInfo::regT2);
arityIntact.link(&jit);
+ CodeBlock* baselineCodeBlock = jit.baselineCodeBlockFor(exit.m_codeOrigin);
+
// First set up SP so that our data doesn't get clobbered by signals.
unsigned conservativeStackDelta =
registerPreservationOffset() +
- exit.m_values.numberOfLocals() * sizeof(Register) +
+ (exit.m_values.numberOfLocals() + baselineCodeBlock->calleeSaveSpaceAsVirtualRegisters()) * sizeof(Register) +
maxFrameExtentForSlowPathCall;
conservativeStackDelta = WTF::roundUpToMultipleOf(
stackAlignmentBytes(), conservativeStackDelta);
@@ -457,67 +459,59 @@
jit.store64(GPRInfo::regT0, GPRInfo::regT1);
jit.addPtr(MacroAssembler::TrustedImm32(sizeof(Register)), GPRInfo::regT1);
jit.branchTest32(MacroAssembler::NonZero, GPRInfo::regT2).linkTo(loop, &jit);
-
- // At this point regT1 points to where we would save our registers. Save them here.
- ptrdiff_t currentOffset = 0;
- for (Reg reg = Reg::first(); reg <= Reg::last(); reg = reg.next()) {
- if (!toSave.get(reg))
- continue;
- currentOffset += sizeof(Register);
- unsigned unwindIndex = jitCode->unwindInfo.indexOf(reg);
- if (unwindIndex == UINT_MAX) {
- // The FTL compilation didn't preserve this register. This means that it also
- // didn't use the register. So its value at the beginning of OSR exit should be
- // preserved by the thunk. Luckily, we saved all registers into the register
- // scratch buffer, so we can restore them from there.
- jit.load64(registerScratch + offsetOfReg(reg), GPRInfo::regT0);
- } else {
- // The FTL compilation preserved the register. Its new value is therefore
- // irrelevant, but we can get the value that was preserved by using the unwind
- // data. We've already copied all unwind-able preserved registers into the unwind
- // scratch buffer, so we can get it from there.
- jit.load64(unwindScratch + unwindIndex, GPRInfo::regT0);
- }
- jit.store64(GPRInfo::regT0, AssemblyHelpers::Address(GPRInfo::regT1, currentOffset));
- }
-
- // We need to make sure that we return into the register restoration thunk. This works
- // differently depending on whether or not we had arity issues.
- MacroAssembler::Jump arityIntactForReturnPC = jit.branch32(
- MacroAssembler::GreaterThanOrEqual,
- CCallHelpers::payloadFor(JSStack::ArgumentCount),
- MacroAssembler::TrustedImm32(codeBlock->numParameters()));
-
- // The return PC in the call frame header points at exactly the right arity restoration
- // thunk. We don't want to change that. But the arity restoration thunk's frame has a
- // return PC and we want to reroute that to our register restoration thunk. The arity
- restoration's return PC is just below regT1, and the register restoration's return PC
- // is right at regT1.
- jit.loadPtr(MacroAssembler::Address(GPRInfo::regT1, -static_cast<ptrdiff_t>(sizeof(Register))), GPRInfo::regT0);
- jit.storePtr(GPRInfo::regT0, GPRInfo::regT1);
- jit.storePtr(
- MacroAssembler::TrustedImmPtr(vm->getCTIStub(registerRestorationThunkGenerator).code().executableAddress()),
- MacroAssembler::Address(GPRInfo::regT1, -static_cast<ptrdiff_t>(sizeof(Register))));
-
- MacroAssembler::Jump arityReturnPCReady = jit.jump();
- arityIntactForReturnPC.link(&jit);
-
- jit.loadPtr(MacroAssembler::Address(MacroAssembler::framePointerRegister, CallFrame::returnPCOffset()), GPRInfo::regT0);
- jit.storePtr(GPRInfo::regT0, GPRInfo::regT1);
- jit.storePtr(
- MacroAssembler::TrustedImmPtr(vm->getCTIStub(registerRestorationThunkGenerator).code().executableAddress()),
- MacroAssembler::Address(MacroAssembler::framePointerRegister, CallFrame::returnPCOffset()));
-
- arityReturnPCReady.link(&jit);
-
+ RegisterAtOffsetList* baselineCalleeSaves = baselineCodeBlock->calleeSaveRegisters();
+
+ for (Reg reg = Reg::first(); reg <= Reg::last(); reg = reg.next()) {
+ if (!toSave.get(reg) || !reg.isGPR())
+ continue;
+ unsigned unwindIndex = codeBlock->calleeSaveRegisters()->indexOf(reg);
+ RegisterAtOffset* baselineRegisterOffset = baselineCalleeSaves->find(reg);
+
+ if (reg.isGPR()) {
+ GPRReg regToLoad = baselineRegisterOffset ? GPRInfo::regT0 : reg.gpr();
+
+ if (unwindIndex == UINT_MAX) {
+ // The FTL compilation didn't preserve this register. This means that it also
+ // didn't use the register. So its value at the beginning of OSR exit should be
+ // preserved by the thunk. Luckily, we saved all registers into the register
+ // scratch buffer, so we can restore them from there.
+ jit.load64(registerScratch + offsetOfReg(reg), regToLoad);
+ } else {
+ // The FTL compilation preserved the register. Its new value is therefore
+ // irrelevant, but we can get the value that was preserved by using the unwind
+ // data. We've already copied all unwind-able preserved registers into the unwind
+ // scratch buffer, so we can get it from there.
+ jit.load64(unwindScratch + unwindIndex, regToLoad);
+ }
+
+ if (baselineRegisterOffset)
+ jit.store64(regToLoad, MacroAssembler::Address(MacroAssembler::framePointerRegister, baselineRegisterOffset->offset()));
+ } else {
+ FPRReg fpRegToLoad = baselineRegisterOffset ? FPRInfo::fpRegT0 : reg.fpr();
+
+ if (unwindIndex == UINT_MAX)
+ jit.loadDouble(MacroAssembler::TrustedImmPtr(registerScratch + offsetOfReg(reg)), fpRegToLoad);
+ else
+ jit.loadDouble(MacroAssembler::TrustedImmPtr(unwindScratch + unwindIndex), fpRegToLoad);
+
+ if (baselineRegisterOffset)
+ jit.storeDouble(fpRegToLoad, MacroAssembler::Address(MacroAssembler::framePointerRegister, baselineRegisterOffset->offset()));
+ }
+ }
+
+ size_t baselineVirtualRegistersForCalleeSaves = baselineCodeBlock->calleeSaveSpaceAsVirtualRegisters();
+
// Now get state out of the scratch buffer and place it back into the stack. The values are
// already reboxed so we just move them.
for (unsigned index = exit.m_values.size(); index--;) {
- int operand = exit.m_values.operandForIndex(index);
-
+ VirtualRegister reg = exit.m_values.virtualRegisterForIndex(index);
+
+ if (reg.isLocal() && reg.toLocal() < static_cast<int>(baselineVirtualRegistersForCalleeSaves))
+ continue;
+
jit.load64(scratch + index, GPRInfo::regT0);
- jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(static_cast<VirtualRegister>(operand)));
+ jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(reg));
}
handleExitCounts(jit, exit);
diff --git a/Source/JavaScriptCore/ftl/FTLThunks.cpp b/Source/JavaScriptCore/ftl/FTLThunks.cpp
index f2198ad..2792583 100644
--- a/Source/JavaScriptCore/ftl/FTLThunks.cpp
+++ b/Source/JavaScriptCore/ftl/FTLThunks.cpp
@@ -66,8 +66,8 @@
saveAllRegisters(jit, buffer);
// Tell GC mark phase how much of the scratch buffer is active during call.
- jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->activeLengthPtr()), GPRInfo::nonArgGPR1);
- jit.storePtr(MacroAssembler::TrustedImmPtr(requiredScratchMemorySizeInBytes()), GPRInfo::nonArgGPR1);
+ jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->activeLengthPtr()), GPRInfo::nonArgGPR0);
+ jit.storePtr(MacroAssembler::TrustedImmPtr(requiredScratchMemorySizeInBytes()), GPRInfo::nonArgGPR0);
jit.loadPtr(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
jit.peek(
diff --git a/Source/JavaScriptCore/ftl/FTLUnwindInfo.cpp b/Source/JavaScriptCore/ftl/FTLUnwindInfo.cpp
index 2867a72..6429dd6 100644
--- a/Source/JavaScriptCore/ftl/FTLUnwindInfo.cpp
+++ b/Source/JavaScriptCore/ftl/FTLUnwindInfo.cpp
@@ -94,6 +94,9 @@
#include "config.h"
#include "FTLUnwindInfo.h"
+#include "CodeBlock.h"
+#include "RegisterAtOffsetList.h"
+
#if ENABLE(FTL_JIT)
#if OS(DARWIN)
@@ -103,10 +106,6 @@
namespace JSC { namespace FTL {
-UnwindInfo::UnwindInfo() { }
-UnwindInfo::~UnwindInfo() { }
-
-
namespace {
#if OS(DARWIN)
struct CompactUnwind {
@@ -654,12 +653,11 @@
#endif
} // anonymous namespace
-bool UnwindInfo::parse(void* section, size_t size, GeneratedFunction generatedFunction)
+std::unique_ptr<RegisterAtOffsetList> parseUnwindInfo(void* section, size_t size, GeneratedFunction generatedFunction)
{
- m_registers.clear();
RELEASE_ASSERT(!!section);
- if (!section)
- return false;
+
+ std::unique_ptr<RegisterAtOffsetList> registerOffsets = std::make_unique<RegisterAtOffsetList>();
#if OS(DARWIN)
RELEASE_ASSERT(size >= sizeof(CompactUnwind));
@@ -689,27 +687,27 @@
break;
case UNWIND_X86_64_REG_RBX:
- m_registers.append(RegisterAtOffset(X86Registers::ebx, offset));
+ registerOffsets->append(RegisterAtOffset(X86Registers::ebx, offset));
break;
case UNWIND_X86_64_REG_R12:
- m_registers.append(RegisterAtOffset(X86Registers::r12, offset));
+ registerOffsets->append(RegisterAtOffset(X86Registers::r12, offset));
break;
case UNWIND_X86_64_REG_R13:
- m_registers.append(RegisterAtOffset(X86Registers::r13, offset));
+ registerOffsets->append(RegisterAtOffset(X86Registers::r13, offset));
break;
case UNWIND_X86_64_REG_R14:
- m_registers.append(RegisterAtOffset(X86Registers::r14, offset));
+ registerOffsets->append(RegisterAtOffset(X86Registers::r14, offset));
break;
case UNWIND_X86_64_REG_R15:
- m_registers.append(RegisterAtOffset(X86Registers::r15, offset));
+ registerOffsets->append(RegisterAtOffset(X86Registers::r15, offset));
break;
case UNWIND_X86_64_REG_RBP:
- m_registers.append(RegisterAtOffset(X86Registers::ebp, offset));
+ registerOffsets->append(RegisterAtOffset(X86Registers::ebp, offset));
break;
default:
@@ -721,44 +719,44 @@
#elif CPU(ARM64)
RELEASE_ASSERT((encoding & UNWIND_ARM64_MODE_MASK) == UNWIND_ARM64_MODE_FRAME);
- m_registers.append(RegisterAtOffset(ARM64Registers::fp, 0));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::fp, 0));
int32_t offset = 0;
if (encoding & UNWIND_ARM64_FRAME_X19_X20_PAIR) {
- m_registers.append(RegisterAtOffset(ARM64Registers::x19, offset -= 8));
- m_registers.append(RegisterAtOffset(ARM64Registers::x20, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x19, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x20, offset -= 8));
}
if (encoding & UNWIND_ARM64_FRAME_X21_X22_PAIR) {
- m_registers.append(RegisterAtOffset(ARM64Registers::x21, offset -= 8));
- m_registers.append(RegisterAtOffset(ARM64Registers::x22, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x21, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x22, offset -= 8));
}
if (encoding & UNWIND_ARM64_FRAME_X23_X24_PAIR) {
- m_registers.append(RegisterAtOffset(ARM64Registers::x23, offset -= 8));
- m_registers.append(RegisterAtOffset(ARM64Registers::x24, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x23, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x24, offset -= 8));
}
if (encoding & UNWIND_ARM64_FRAME_X25_X26_PAIR) {
- m_registers.append(RegisterAtOffset(ARM64Registers::x25, offset -= 8));
- m_registers.append(RegisterAtOffset(ARM64Registers::x26, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x25, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x26, offset -= 8));
}
if (encoding & UNWIND_ARM64_FRAME_X27_X28_PAIR) {
- m_registers.append(RegisterAtOffset(ARM64Registers::x27, offset -= 8));
- m_registers.append(RegisterAtOffset(ARM64Registers::x28, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x27, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x28, offset -= 8));
}
if (encoding & UNWIND_ARM64_FRAME_D8_D9_PAIR) {
- m_registers.append(RegisterAtOffset(ARM64Registers::q8, offset -= 8));
- m_registers.append(RegisterAtOffset(ARM64Registers::q9, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q8, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q9, offset -= 8));
}
if (encoding & UNWIND_ARM64_FRAME_D10_D11_PAIR) {
- m_registers.append(RegisterAtOffset(ARM64Registers::q10, offset -= 8));
- m_registers.append(RegisterAtOffset(ARM64Registers::q11, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q10, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q11, offset -= 8));
}
if (encoding & UNWIND_ARM64_FRAME_D12_D13_PAIR) {
- m_registers.append(RegisterAtOffset(ARM64Registers::q12, offset -= 8));
- m_registers.append(RegisterAtOffset(ARM64Registers::q13, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q12, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q13, offset -= 8));
}
if (encoding & UNWIND_ARM64_FRAME_D14_D15_PAIR) {
- m_registers.append(RegisterAtOffset(ARM64Registers::q14, offset -= 8));
- m_registers.append(RegisterAtOffset(ARM64Registers::q15, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q14, offset -= 8));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q15, offset -= 8));
}
#else
#error "Unrecognized architecture"
@@ -782,22 +780,22 @@
if (prolog.savedRegisters[i].saved) {
switch (i) {
case UNW_X86_64_rbx:
- m_registers.append(RegisterAtOffset(X86Registers::ebx, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(X86Registers::ebx, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_X86_64_r12:
- m_registers.append(RegisterAtOffset(X86Registers::r12, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(X86Registers::r12, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_X86_64_r13:
- m_registers.append(RegisterAtOffset(X86Registers::r13, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(X86Registers::r13, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_X86_64_r14:
- m_registers.append(RegisterAtOffset(X86Registers::r14, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(X86Registers::r14, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_X86_64_r15:
- m_registers.append(RegisterAtOffset(X86Registers::r15, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(X86Registers::r15, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_X86_64_rbp:
- m_registers.append(RegisterAtOffset(X86Registers::ebp, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(X86Registers::ebp, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case DW_X86_64_RET_addr:
break;
@@ -816,196 +814,196 @@
if (prolog.savedRegisters[i].saved) {
switch (i) {
case UNW_ARM64_x0:
- m_registers.append(RegisterAtOffset(ARM64Registers::x0, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x0, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x1:
- m_registers.append(RegisterAtOffset(ARM64Registers::x1, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x1, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x2:
- m_registers.append(RegisterAtOffset(ARM64Registers::x2, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x2, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x3:
- m_registers.append(RegisterAtOffset(ARM64Registers::x3, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x3, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x4:
- m_registers.append(RegisterAtOffset(ARM64Registers::x4, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x4, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x5:
- m_registers.append(RegisterAtOffset(ARM64Registers::x5, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x5, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x6:
- m_registers.append(RegisterAtOffset(ARM64Registers::x6, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x6, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x7:
- m_registers.append(RegisterAtOffset(ARM64Registers::x7, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x7, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x8:
- m_registers.append(RegisterAtOffset(ARM64Registers::x8, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x8, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x9:
- m_registers.append(RegisterAtOffset(ARM64Registers::x9, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x9, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x10:
- m_registers.append(RegisterAtOffset(ARM64Registers::x10, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x10, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x11:
- m_registers.append(RegisterAtOffset(ARM64Registers::x11, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x11, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x12:
- m_registers.append(RegisterAtOffset(ARM64Registers::x12, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x12, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x13:
- m_registers.append(RegisterAtOffset(ARM64Registers::x13, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x13, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x14:
- m_registers.append(RegisterAtOffset(ARM64Registers::x14, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x14, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x15:
- m_registers.append(RegisterAtOffset(ARM64Registers::x15, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x15, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x16:
- m_registers.append(RegisterAtOffset(ARM64Registers::x16, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x16, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x17:
- m_registers.append(RegisterAtOffset(ARM64Registers::x17, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x17, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x18:
- m_registers.append(RegisterAtOffset(ARM64Registers::x18, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x18, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x19:
- m_registers.append(RegisterAtOffset(ARM64Registers::x19, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x19, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x20:
- m_registers.append(RegisterAtOffset(ARM64Registers::x20, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x20, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x21:
- m_registers.append(RegisterAtOffset(ARM64Registers::x21, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x21, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x22:
- m_registers.append(RegisterAtOffset(ARM64Registers::x22, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x22, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x23:
- m_registers.append(RegisterAtOffset(ARM64Registers::x23, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x23, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x24:
- m_registers.append(RegisterAtOffset(ARM64Registers::x24, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x24, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x25:
- m_registers.append(RegisterAtOffset(ARM64Registers::x25, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x25, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x26:
- m_registers.append(RegisterAtOffset(ARM64Registers::x26, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x26, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x27:
- m_registers.append(RegisterAtOffset(ARM64Registers::x27, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x27, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x28:
- m_registers.append(RegisterAtOffset(ARM64Registers::x28, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x28, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_fp:
- m_registers.append(RegisterAtOffset(ARM64Registers::fp, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::fp, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_x30:
- m_registers.append(RegisterAtOffset(ARM64Registers::x30, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::x30, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_sp:
- m_registers.append(RegisterAtOffset(ARM64Registers::sp, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::sp, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v0:
- m_registers.append(RegisterAtOffset(ARM64Registers::q0, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q0, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v1:
- m_registers.append(RegisterAtOffset(ARM64Registers::q1, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q1, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v2:
- m_registers.append(RegisterAtOffset(ARM64Registers::q2, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q2, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v3:
- m_registers.append(RegisterAtOffset(ARM64Registers::q3, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q3, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v4:
- m_registers.append(RegisterAtOffset(ARM64Registers::q4, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q4, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v5:
- m_registers.append(RegisterAtOffset(ARM64Registers::q5, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q5, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v6:
- m_registers.append(RegisterAtOffset(ARM64Registers::q6, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q6, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v7:
- m_registers.append(RegisterAtOffset(ARM64Registers::q7, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q7, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v8:
- m_registers.append(RegisterAtOffset(ARM64Registers::q8, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q8, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v9:
- m_registers.append(RegisterAtOffset(ARM64Registers::q9, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q9, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v10:
- m_registers.append(RegisterAtOffset(ARM64Registers::q10, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q10, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v11:
- m_registers.append(RegisterAtOffset(ARM64Registers::q11, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q11, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v12:
- m_registers.append(RegisterAtOffset(ARM64Registers::q12, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q12, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v13:
- m_registers.append(RegisterAtOffset(ARM64Registers::q13, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q13, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v14:
- m_registers.append(RegisterAtOffset(ARM64Registers::q14, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q14, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v15:
- m_registers.append(RegisterAtOffset(ARM64Registers::q15, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q15, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v16:
- m_registers.append(RegisterAtOffset(ARM64Registers::q16, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q16, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v17:
- m_registers.append(RegisterAtOffset(ARM64Registers::q17, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q17, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v18:
- m_registers.append(RegisterAtOffset(ARM64Registers::q18, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q18, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v19:
- m_registers.append(RegisterAtOffset(ARM64Registers::q19, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q19, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v20:
- m_registers.append(RegisterAtOffset(ARM64Registers::q20, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q20, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v21:
- m_registers.append(RegisterAtOffset(ARM64Registers::q21, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q21, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v22:
- m_registers.append(RegisterAtOffset(ARM64Registers::q22, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q22, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v23:
- m_registers.append(RegisterAtOffset(ARM64Registers::q23, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q23, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v24:
- m_registers.append(RegisterAtOffset(ARM64Registers::q24, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q24, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v25:
- m_registers.append(RegisterAtOffset(ARM64Registers::q25, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q25, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v26:
- m_registers.append(RegisterAtOffset(ARM64Registers::q26, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q26, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v27:
- m_registers.append(RegisterAtOffset(ARM64Registers::q27, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q27, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v28:
- m_registers.append(RegisterAtOffset(ARM64Registers::q28, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q28, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v29:
- m_registers.append(RegisterAtOffset(ARM64Registers::q29, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q29, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v30:
- m_registers.append(RegisterAtOffset(ARM64Registers::q30, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q30, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
case UNW_ARM64_v31:
- m_registers.append(RegisterAtOffset(ARM64Registers::q31, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
+ registerOffsets->append(RegisterAtOffset(ARM64Registers::q31, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));
break;
default:
RELEASE_ASSERT_NOT_REACHED(); // non-standard register being saved in prolog
@@ -1017,25 +1015,8 @@
#endif
#endif
- std::sort(m_registers.begin(), m_registers.end());
- return true;
-}
-
-void UnwindInfo::dump(PrintStream& out) const
-{
- out.print(listDump(m_registers));
-}
-
-RegisterAtOffset* UnwindInfo::find(Reg reg) const
-{
- return tryBinarySearch<RegisterAtOffset, Reg>(m_registers, m_registers.size(), reg, RegisterAtOffset::getReg);
-}
-
-unsigned UnwindInfo::indexOf(Reg reg) const
-{
- if (RegisterAtOffset* pointer = find(reg))
- return pointer - m_registers.begin();
- return UINT_MAX;
+ registerOffsets->sort();
+ return WTF::move(registerOffsets);
}
} } // namespace JSC::FTL
diff --git a/Source/JavaScriptCore/ftl/FTLUnwindInfo.h b/Source/JavaScriptCore/ftl/FTLUnwindInfo.h
index 6d2bf63..372f2f1 100644
--- a/Source/JavaScriptCore/ftl/FTLUnwindInfo.h
+++ b/Source/JavaScriptCore/ftl/FTLUnwindInfo.h
@@ -31,23 +31,11 @@
#if ENABLE(FTL_JIT)
#include "FTLGeneratedFunction.h"
-#include "FTLRegisterAtOffset.h"
+class RegisterAtOffsetList;
namespace JSC { namespace FTL {
-struct UnwindInfo {
- UnwindInfo();
- ~UnwindInfo();
-
- bool parse(void*, size_t, GeneratedFunction);
-
- void dump(PrintStream&) const;
-
- RegisterAtOffset* find(Reg) const;
- unsigned indexOf(Reg) const; // Returns UINT_MAX if not found.
-
- Vector<RegisterAtOffset> m_registers;
-};
+std::unique_ptr<RegisterAtOffsetList> parseUnwindInfo(void*, size_t, GeneratedFunction);
} } // namespace JSC::FTL
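The header change above replaces the `UnwindInfo` struct (which owned a member vector and exposed `find`/`indexOf`) with a free `parseUnwindInfo` function returning an owned, pre-sorted list. The ownership-and-sort pattern can be sketched as follows; `RegAtOffset` and `parseOffsets` are illustrative stand-ins, not the actual JSC types:

```cpp
#include <algorithm>
#include <memory>
#include <vector>

// Sketch of the factory-style API: parsing builds a list, sorts it by
// register so later lookups can binary-search, and hands ownership to
// the caller via std::unique_ptr instead of a long-lived member vector.
struct RegAtOffset {
    int reg;
    int offset;
    bool operator<(const RegAtOffset& other) const { return reg < other.reg; }
};

std::unique_ptr<std::vector<RegAtOffset>> parseOffsets(std::vector<RegAtOffset> raw)
{
    auto list = std::make_unique<std::vector<RegAtOffset>>(std::move(raw));
    std::sort(list->begin(), list->end()); // sorted by register for binary search
    return list; // ownership moves to the caller, mirroring WTF::move(registerOffsets)
}
```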
diff --git a/Source/JavaScriptCore/interpreter/Interpreter.cpp b/Source/JavaScriptCore/interpreter/Interpreter.cpp
index 1f6773b..2b3e0e0 100644
--- a/Source/JavaScriptCore/interpreter/Interpreter.cpp
+++ b/Source/JavaScriptCore/interpreter/Interpreter.cpp
@@ -650,15 +650,55 @@
if (!unwindCallFrame(visitor)) {
if (LegacyProfiler* profiler = vm.enabledProfiler())
profiler->exceptionUnwind(m_callFrame);
+
+ copyCalleeSavesToVMCalleeSavesBuffer(visitor);
+
return StackVisitor::Done;
}
} else
return StackVisitor::Done;
+ copyCalleeSavesToVMCalleeSavesBuffer(visitor);
+
return StackVisitor::Continue;
}
private:
+ void copyCalleeSavesToVMCalleeSavesBuffer(StackVisitor& visitor)
+ {
+#if ENABLE(JIT) && NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+
+ if (!visitor->isJSFrame())
+ return;
+
+#if ENABLE(DFG_JIT)
+ if (visitor->inlineCallFrame())
+ return;
+#endif
+ RegisterAtOffsetList* currentCalleeSaves = m_codeBlock ? m_codeBlock->calleeSaveRegisters() : nullptr;
+
+ if (!currentCalleeSaves)
+ return;
+
+ VM& vm = m_callFrame->vm();
+ RegisterAtOffsetList* allCalleeSaves = vm.getAllCalleeSaveRegisterOffsets();
+ RegisterSet dontCopyRegisters = RegisterSet::stackRegisters();
+ intptr_t* frame = reinterpret_cast<intptr_t*>(m_callFrame->registers());
+
+ unsigned registerCount = currentCalleeSaves->size();
+ for (unsigned i = 0; i < registerCount; i++) {
+ RegisterAtOffset currentEntry = currentCalleeSaves->at(i);
+ if (dontCopyRegisters.get(currentEntry.reg()))
+ continue;
+ RegisterAtOffset* vmCalleeSavesEntry = allCalleeSaves->find(currentEntry.reg());
+
+ vm.calleeSaveRegistersBuffer[vmCalleeSavesEntry->offsetAsIndex()] = *(frame + currentEntry.offsetAsIndex());
+ }
+#else
+ UNUSED_PARAM(visitor);
+#endif
+ }
+
CallFrame*& m_callFrame;
bool m_isTermination;
CodeBlock*& m_codeBlock;
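The Interpreter hunk above implements the unwind-time copy the ChangeLog describes: as each frame between the throw site and the handler is unwound, its saved callee-save values are merged into `VM::calleeSaveRegistersBuffer`, with shallower frames overwriting deeper ones. A minimal C++ model of that merge semantics (the types here are hypothetical stand-ins, not the actual JSC classes):

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Each frame records which callee-save registers it spilled and their
// saved values; the VM keeps one canonical buffer keyed by register.
// Frames are visited deepest-first during unwinding, so shallower frames
// overwrite deeper ones, leaving the values the handler's frame expects.
struct FrameSaves {
    std::map<int, std::int64_t> savedByReg; // register id -> saved value
};

std::map<int, std::int64_t> mergeCalleeSaves(const std::vector<FrameSaves>& frames)
{
    std::map<int, std::int64_t> vmBuffer;
    for (const FrameSaves& frame : frames) // deepest frame first
        for (auto& [reg, value] : frame.savedByReg)
            vmBuffer[reg] = value; // later (shallower) frames win
    return vmBuffer;
}
```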
diff --git a/Source/JavaScriptCore/jit/ArityCheckFailReturnThunks.cpp b/Source/JavaScriptCore/jit/ArityCheckFailReturnThunks.cpp
deleted file mode 100644
index d522b81..0000000
--- a/Source/JavaScriptCore/jit/ArityCheckFailReturnThunks.cpp
+++ /dev/null
@@ -1,135 +0,0 @@
-/*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include "config.h"
-#include "ArityCheckFailReturnThunks.h"
-
-#if ENABLE(JIT)
-
-#include "AssemblyHelpers.h"
-#include "LinkBuffer.h"
-#include "JSCInlines.h"
-#include "StackAlignment.h"
-
-namespace JSC {
-
-ArityCheckFailReturnThunks::ArityCheckFailReturnThunks()
- : m_nextSize(0)
-{
-}
-
-ArityCheckFailReturnThunks::~ArityCheckFailReturnThunks() { }
-
-CodeLocationLabel* ArityCheckFailReturnThunks::returnPCsFor(
- VM& vm, unsigned numExpectedArgumentsIncludingThis)
-{
- ASSERT(numExpectedArgumentsIncludingThis >= 1);
-
- numExpectedArgumentsIncludingThis = WTF::roundUpToMultipleOf(
- stackAlignmentRegisters(), numExpectedArgumentsIncludingThis);
-
- {
- ConcurrentJITLocker locker(m_lock);
- if (numExpectedArgumentsIncludingThis < m_nextSize)
- return m_returnPCArrays.last().get();
- }
-
- ASSERT(!isCompilationThread());
-
- numExpectedArgumentsIncludingThis = std::max(numExpectedArgumentsIncludingThis, m_nextSize * 2);
-
- AssemblyHelpers jit(&vm, 0);
-
- Vector<AssemblyHelpers::Label> labels;
-
- for (unsigned size = m_nextSize; size <= numExpectedArgumentsIncludingThis; size += stackAlignmentRegisters()) {
- labels.append(jit.label());
-
- jit.load32(
- AssemblyHelpers::Address(
- AssemblyHelpers::stackPointerRegister,
- (JSStack::ArgumentCount - JSStack::CallerFrameAndPCSize) * sizeof(Register) +
- PayloadOffset),
- GPRInfo::regT4);
- jit.add32(
- AssemblyHelpers::TrustedImm32(
- JSStack::CallFrameHeaderSize - JSStack::CallerFrameAndPCSize + size - 1),
- GPRInfo::regT4, GPRInfo::regT2);
- jit.lshift32(AssemblyHelpers::TrustedImm32(3), GPRInfo::regT2);
- jit.addPtr(AssemblyHelpers::stackPointerRegister, GPRInfo::regT2);
- jit.loadPtr(GPRInfo::regT2, GPRInfo::regT2);
-
- jit.addPtr(
- AssemblyHelpers::TrustedImm32(size * sizeof(Register)),
- AssemblyHelpers::stackPointerRegister);
-
- // Thunks like ours want to use the return PC to figure out where things
- // were saved. So, we pay it forward.
- jit.store32(
- GPRInfo::regT4,
- AssemblyHelpers::Address(
- AssemblyHelpers::stackPointerRegister,
- (JSStack::ArgumentCount - JSStack::CallerFrameAndPCSize) * sizeof(Register) +
- PayloadOffset));
-
- jit.jump(GPRInfo::regT2);
- }
-
- // Sadly, we cannot fail here because the LLInt may need us.
- LinkBuffer linkBuffer(vm, jit, GLOBAL_THUNK_ID, JITCompilationMustSucceed);
-
- unsigned returnPCsSize = numExpectedArgumentsIncludingThis / stackAlignmentRegisters() + 1;
- std::unique_ptr<CodeLocationLabel[]> returnPCs =
- std::make_unique<CodeLocationLabel[]>(returnPCsSize);
- for (unsigned size = 0; size <= numExpectedArgumentsIncludingThis; size += stackAlignmentRegisters()) {
- unsigned index = size / stackAlignmentRegisters();
- RELEASE_ASSERT(index < returnPCsSize);
- if (size < m_nextSize)
- returnPCs[index] = m_returnPCArrays.last()[index];
- else
- returnPCs[index] = linkBuffer.locationOf(labels[(size - m_nextSize) / stackAlignmentRegisters()]);
- }
-
- CodeLocationLabel* result = returnPCs.get();
-
- {
- ConcurrentJITLocker locker(m_lock);
- m_returnPCArrays.append(WTF::move(returnPCs));
- m_refs.append(FINALIZE_CODE(linkBuffer, ("Arity check fail return thunks for up to numArgs = %u", numExpectedArgumentsIncludingThis)));
- m_nextSize = numExpectedArgumentsIncludingThis + stackAlignmentRegisters();
- }
-
- return result;
-}
-
-CodeLocationLabel ArityCheckFailReturnThunks::returnPCFor(VM& vm, unsigned slotsToAdd)
-{
- return returnPCsFor(vm, slotsToAdd)[slotsToAdd / stackAlignmentRegisters()];
-}
-
-} // namespace JSC
-
-#endif // ENABLE(JIT)
-
diff --git a/Source/JavaScriptCore/jit/ArityCheckFailReturnThunks.h b/Source/JavaScriptCore/jit/ArityCheckFailReturnThunks.h
deleted file mode 100644
index b2d0341..0000000
--- a/Source/JavaScriptCore/jit/ArityCheckFailReturnThunks.h
+++ /dev/null
@@ -1,59 +0,0 @@
-/*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef ArityCheckFailReturnThunks_h
-#define ArityCheckFailReturnThunks_h
-
-#if ENABLE(JIT)
-
-#include "CodeLocation.h"
-#include "ConcurrentJITLock.h"
-#include <wtf/HashMap.h>
-
-namespace JSC {
-
-class ArityCheckFailReturnThunks {
-public:
- ArityCheckFailReturnThunks();
- ~ArityCheckFailReturnThunks();
-
- // Returns a pointer to an array of return labels indexed by missingArgs.
- CodeLocationLabel* returnPCsFor(VM&, unsigned numExpectedArgumentsIncludingThis);
-
- CodeLocationLabel returnPCFor(VM&, unsigned slotsToAdd);
-
-private:
- Vector<std::unique_ptr<CodeLocationLabel[]>> m_returnPCArrays;
- unsigned m_nextSize;
- Vector<MacroAssemblerCodeRef> m_refs;
- ConcurrentJITLock m_lock;
-};
-
-} // namespace JSC
-
-#endif // ENABLE(JIT)
-
-#endif // ArityCheckFailReturnThunks_h
-
diff --git a/Source/JavaScriptCore/jit/AssemblyHelpers.h b/Source/JavaScriptCore/jit/AssemblyHelpers.h
index d7e06aa..e1d8a22 100644
--- a/Source/JavaScriptCore/jit/AssemblyHelpers.h
+++ b/Source/JavaScriptCore/jit/AssemblyHelpers.h
@@ -34,6 +34,8 @@
#include "InlineCallFrame.h"
#include "JITCode.h"
#include "MacroAssembler.h"
+#include "RegisterAtOffsetList.h"
+#include "RegisterSet.h"
#include "TypeofType.h"
#include "VM.h"
@@ -175,6 +177,159 @@
#endif
}
+ void emitSaveCalleeSavesFor(CodeBlock* codeBlock, VirtualRegister offsetVirtualRegister = static_cast<VirtualRegister>(0))
+ {
+ ASSERT(codeBlock);
+
+ RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
+ RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
+ unsigned registerCount = calleeSaves->size();
+
+ for (unsigned i = 0; i < registerCount; i++) {
+ RegisterAtOffset entry = calleeSaves->at(i);
+ if (dontSaveRegisters.get(entry.reg()))
+ continue;
+ storePtr(entry.reg().gpr(), Address(framePointerRegister, offsetVirtualRegister.offsetInBytes() + entry.offset()));
+ }
+ }
+
+ void emitRestoreCalleeSavesFor(CodeBlock* codeBlock)
+ {
+ ASSERT(codeBlock);
+
+ RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
+ RegisterSet dontRestoreRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
+ unsigned registerCount = calleeSaves->size();
+
+ for (unsigned i = 0; i < registerCount; i++) {
+ RegisterAtOffset entry = calleeSaves->at(i);
+ if (dontRestoreRegisters.get(entry.reg()))
+ continue;
+ loadPtr(Address(framePointerRegister, entry.offset()), entry.reg().gpr());
+ }
+ }
+
+ void emitSaveCalleeSaves()
+ {
+ emitSaveCalleeSavesFor(codeBlock());
+ }
+
+ void emitRestoreCalleeSaves()
+ {
+ emitRestoreCalleeSavesFor(codeBlock());
+ }
+
+ void copyCalleeSavesToVMCalleeSavesBuffer(const TempRegisterSet& usedRegisters = { RegisterSet::stubUnavailableRegisters() })
+ {
+#if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+ GPRReg temp1 = usedRegisters.getFreeGPR(0);
+
+ move(TrustedImmPtr(m_vm->calleeSaveRegistersBuffer), temp1);
+
+ RegisterAtOffsetList* allCalleeSaves = m_vm->getAllCalleeSaveRegisterOffsets();
+ RegisterSet dontCopyRegisters = RegisterSet::stackRegisters();
+ unsigned registerCount = allCalleeSaves->size();
+
+ for (unsigned i = 0; i < registerCount; i++) {
+ RegisterAtOffset entry = allCalleeSaves->at(i);
+ if (dontCopyRegisters.get(entry.reg()))
+ continue;
+ if (entry.reg().isGPR())
+ storePtr(entry.reg().gpr(), Address(temp1, entry.offset()));
+ else
+ storeDouble(entry.reg().fpr(), Address(temp1, entry.offset()));
+ }
+#else
+ UNUSED_PARAM(usedRegisters);
+#endif
+ }
+
+ void restoreCalleeSavesFromVMCalleeSavesBuffer(const TempRegisterSet& usedRegisters = { RegisterSet::stubUnavailableRegisters() })
+ {
+#if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+ GPRReg temp1 = usedRegisters.getFreeGPR(0);
+
+ move(TrustedImmPtr(m_vm->calleeSaveRegistersBuffer), temp1);
+
+ RegisterAtOffsetList* allCalleeSaves = m_vm->getAllCalleeSaveRegisterOffsets();
+ RegisterSet dontRestoreRegisters = RegisterSet::stackRegisters();
+ unsigned registerCount = allCalleeSaves->size();
+
+ for (unsigned i = 0; i < registerCount; i++) {
+ RegisterAtOffset entry = allCalleeSaves->at(i);
+ if (dontRestoreRegisters.get(entry.reg()))
+ continue;
+ if (entry.reg().isGPR())
+ loadPtr(Address(temp1, entry.offset()), entry.reg().gpr());
+ else
+ loadDouble(Address(temp1, entry.offset()), entry.reg().fpr());
+ }
+#else
+ UNUSED_PARAM(usedRegisters);
+#endif
+ }
+
+ void copyCalleeSavesFromFrameOrRegisterToVMCalleeSavesBuffer(const TempRegisterSet& usedRegisters = { RegisterSet::stubUnavailableRegisters() })
+ {
+#if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+ GPRReg temp1 = usedRegisters.getFreeGPR(0);
+ GPRReg temp2 = usedRegisters.getFreeGPR(1);
+ FPRReg fpTemp = usedRegisters.getFreeFPR();
+ ASSERT(temp2 != InvalidGPRReg);
+
+ ASSERT(codeBlock());
+
+ // Copy callee saves that were spilled to the stack, and those still live in registers, into the VM callee save buffer
+ move(TrustedImmPtr(m_vm->calleeSaveRegistersBuffer), temp1);
+
+ RegisterAtOffsetList* allCalleeSaves = m_vm->getAllCalleeSaveRegisterOffsets();
+ RegisterAtOffsetList* currentCalleeSaves = codeBlock()->calleeSaveRegisters();
+ RegisterSet dontCopyRegisters = RegisterSet::stackRegisters();
+ unsigned registerCount = allCalleeSaves->size();
+
+ for (unsigned i = 0; i < registerCount; i++) {
+ RegisterAtOffset vmEntry = allCalleeSaves->at(i);
+ if (dontCopyRegisters.get(vmEntry.reg()))
+ continue;
+ RegisterAtOffset* currentFrameEntry = currentCalleeSaves->find(vmEntry.reg());
+
+ if (vmEntry.reg().isGPR()) {
+ GPRReg regToStore;
+ if (currentFrameEntry) {
+ // Load calleeSave from stack into temp register
+ regToStore = temp2;
+ loadPtr(Address(framePointerRegister, currentFrameEntry->offset()), regToStore);
+ } else
+ // Just store callee save directly
+ regToStore = vmEntry.reg().gpr();
+
+ storePtr(regToStore, Address(temp1, vmEntry.offset()));
+ } else {
+ FPRReg fpRegToStore;
+ if (currentFrameEntry) {
+ // Load calleeSave from stack into temp register
+ fpRegToStore = fpTemp;
+ loadDouble(Address(framePointerRegister, currentFrameEntry->offset()), fpRegToStore);
+ } else
+ // Just store callee save directly
+ fpRegToStore = vmEntry.reg().fpr();
+
+ storeDouble(fpRegToStore, Address(temp1, vmEntry.offset()));
+ }
+ }
+#else
+ UNUSED_PARAM(usedRegisters);
+#endif
+ }
+
+ void emitMaterializeTagCheckRegisters()
+ {
+#if USE(JSVALUE64)
+ move(MacroAssembler::TrustedImm64(TagTypeNumber), GPRInfo::tagTypeNumberRegister);
+ orPtr(MacroAssembler::TrustedImm32(TagBitTypeOther), GPRInfo::tagTypeNumberRegister, GPRInfo::tagMaskRegister);
+#endif
+ }
+
#if CPU(X86_64) || CPU(X86)
static size_t prologueStackPointerDelta()
{
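The `copyCalleeSavesFromFrameOrRegisterToVMCalleeSavesBuffer` helper above makes a per-register choice: a callee save the current code block spilled at entry is read back from its frame slot, while one it never saved is still live and is stored directly from the register. That decision can be sketched with hypothetical stand-in types:

```cpp
#include <map>

// Model of the per-register choice (types are illustrative, not JSC's):
// prefer the value the frame spilled at entry when one exists; otherwise
// the register still holds the caller's value, so use it directly.
using Reg = int;

std::map<Reg, long> buildVMBuffer(
    const std::map<Reg, long>& liveRegisters, // current machine register values
    const std::map<Reg, long>& frameSpills)   // values the frame saved at entry
{
    std::map<Reg, long> vmBuffer;
    for (auto& [reg, liveValue] : liveRegisters) {
        auto spilled = frameSpills.find(reg);
        vmBuffer[reg] = (spilled != frameSpills.end()) ? spilled->second : liveValue;
    }
    return vmBuffer;
}
```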
diff --git a/Source/JavaScriptCore/jit/FPRInfo.h b/Source/JavaScriptCore/jit/FPRInfo.h
index 0062b71..d0980a6 100644
--- a/Source/JavaScriptCore/jit/FPRInfo.h
+++ b/Source/JavaScriptCore/jit/FPRInfo.h
@@ -208,6 +208,14 @@
static const FPRReg fpRegT20 = ARM64Registers::q28;
static const FPRReg fpRegT21 = ARM64Registers::q29;
static const FPRReg fpRegT22 = ARM64Registers::q30;
+ static const FPRReg fpRegCS0 = ARM64Registers::q8;
+ static const FPRReg fpRegCS1 = ARM64Registers::q9;
+ static const FPRReg fpRegCS2 = ARM64Registers::q10;
+ static const FPRReg fpRegCS3 = ARM64Registers::q11;
+ static const FPRReg fpRegCS4 = ARM64Registers::q12;
+ static const FPRReg fpRegCS5 = ARM64Registers::q13;
+ static const FPRReg fpRegCS6 = ARM64Registers::q14;
+ static const FPRReg fpRegCS7 = ARM64Registers::q15;
static const FPRReg argumentFPR0 = ARM64Registers::q0; // fpRegT0
static const FPRReg argumentFPR1 = ARM64Registers::q1; // fpRegT1
diff --git a/Source/JavaScriptCore/jit/GPRInfo.h b/Source/JavaScriptCore/jit/GPRInfo.h
index c708805..4cfdc68 100644
--- a/Source/JavaScriptCore/jit/GPRInfo.h
+++ b/Source/JavaScriptCore/jit/GPRInfo.h
@@ -315,6 +315,7 @@
#if CPU(X86)
#define NUMBER_OF_ARGUMENT_REGISTERS 0u
+#define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u
class GPRInfo {
public:
@@ -336,7 +337,6 @@
static const GPRReg argumentGPR2 = X86Registers::eax; // regT0
static const GPRReg argumentGPR3 = X86Registers::ebx; // regT3
static const GPRReg nonArgGPR0 = X86Registers::esi; // regT4
- static const GPRReg nonArgGPR1 = X86Registers::edi; // regT5
static const GPRReg returnValueGPR = X86Registers::eax; // regT0
static const GPRReg returnValueGPR2 = X86Registers::edx; // regT1
static const GPRReg nonPreservedNonReturnGPR = X86Registers::ecx;
@@ -382,8 +382,10 @@
#if CPU(X86_64)
#if !OS(WINDOWS)
#define NUMBER_OF_ARGUMENT_REGISTERS 6u
+#define NUMBER_OF_CALLEE_SAVES_REGISTERS 5u
#else
#define NUMBER_OF_ARGUMENT_REGISTERS 4u
+#define NUMBER_OF_CALLEE_SAVES_REGISTERS 7u
#endif
class GPRInfo {
@@ -445,7 +447,6 @@
static const GPRReg argumentGPR3 = X86Registers::r9; // regT3
#endif
static const GPRReg nonArgGPR0 = X86Registers::r10; // regT5 (regT4 on Windows)
- static const GPRReg nonArgGPR1 = X86Registers::ebx; // Callee save
static const GPRReg returnValueGPR = X86Registers::eax; // regT0
static const GPRReg returnValueGPR2 = X86Registers::edx; // regT1 or regT2
static const GPRReg nonPreservedNonReturnGPR = X86Registers::r10; // regT5 (regT4 on Windows)
@@ -506,6 +507,7 @@
#if CPU(ARM)
#define NUMBER_OF_ARGUMENT_REGISTERS 4u
+#define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u
class GPRInfo {
public:
@@ -536,7 +538,6 @@
static const GPRReg argumentGPR3 = ARMRegisters::r3; // regT3
static const GPRReg nonArgGPR0 = ARMRegisters::r4; // regT8
static const GPRReg nonArgGPR1 = ARMRegisters::r8; // regT4
- static const GPRReg nonArgGPR2 = ARMRegisters::r9; // regT5
static const GPRReg returnValueGPR = ARMRegisters::r0; // regT0
static const GPRReg returnValueGPR2 = ARMRegisters::r1; // regT1
static const GPRReg nonPreservedNonReturnGPR = ARMRegisters::r5;
@@ -589,6 +590,8 @@
#if CPU(ARM64)
#define NUMBER_OF_ARGUMENT_REGISTERS 8u
+// Callee saves include x19..x28 and the FP registers q8..q15
+#define NUMBER_OF_CALLEE_SAVES_REGISTERS 18u
class GPRInfo {
public:
@@ -617,9 +620,16 @@
static const GPRReg regT13 = ARM64Registers::x13;
static const GPRReg regT14 = ARM64Registers::x14;
static const GPRReg regT15 = ARM64Registers::x15;
- static const GPRReg regCS0 = ARM64Registers::x26; // Used by LLInt only
- static const GPRReg regCS1 = ARM64Registers::x27; // tagTypeNumber
- static const GPRReg regCS2 = ARM64Registers::x28; // tagMask
+ static const GPRReg regCS0 = ARM64Registers::x19; // Used by FTL only
+ static const GPRReg regCS1 = ARM64Registers::x20; // Used by FTL only
+ static const GPRReg regCS2 = ARM64Registers::x21; // Used by FTL only
+ static const GPRReg regCS3 = ARM64Registers::x22; // Used by FTL only
+ static const GPRReg regCS4 = ARM64Registers::x23; // Used by FTL only
+ static const GPRReg regCS5 = ARM64Registers::x24; // Used by FTL only
+ static const GPRReg regCS6 = ARM64Registers::x25; // Used by FTL only
+ static const GPRReg regCS7 = ARM64Registers::x26;
+ static const GPRReg regCS8 = ARM64Registers::x27; // tagTypeNumber
+ static const GPRReg regCS9 = ARM64Registers::x28; // tagMask
// These constants provide the names for the general purpose argument & return value registers.
static const GPRReg argumentGPR0 = ARM64Registers::x0; // regT0
static const GPRReg argumentGPR1 = ARM64Registers::x1; // regT1
@@ -637,7 +647,7 @@
static const GPRReg nonPreservedNonArgumentGPR = ARM64Registers::x8;
static const GPRReg patchpointScratchRegister = ARM64Registers::ip0;
- // GPRReg mapping is direct, the machine regsiter numbers can
+ // GPRReg mapping is direct, the machine register numbers can
// be used directly as indices into the GPR RegisterBank.
COMPILE_ASSERT(ARM64Registers::q0 == 0, q0_is_0);
COMPILE_ASSERT(ARM64Registers::q1 == 1, q1_is_1);
@@ -692,6 +702,7 @@
#if CPU(MIPS)
#define NUMBER_OF_ARGUMENT_REGISTERS 4u
+#define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u
class GPRInfo {
public:
@@ -719,7 +730,6 @@
static const GPRReg argumentGPR2 = MIPSRegisters::a2;
static const GPRReg argumentGPR3 = MIPSRegisters::a3;
static const GPRReg nonArgGPR0 = regT0;
- static const GPRReg nonArgGPR1 = regT1;
static const GPRReg returnValueGPR = regT0;
static const GPRReg returnValueGPR2 = regT1;
static const GPRReg nonPreservedNonReturnGPR = regT2;
@@ -764,6 +774,7 @@
#if CPU(SH4)
#define NUMBER_OF_ARGUMENT_REGISTERS 4u
+#define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u
class GPRInfo {
public:
@@ -793,7 +804,6 @@
static const GPRReg argumentGPR2 = SH4Registers::r6; // regT2
static const GPRReg argumentGPR3 = SH4Registers::r7; // regT3
static const GPRReg nonArgGPR0 = regT4;
- static const GPRReg nonArgGPR1 = regT5;
static const GPRReg returnValueGPR = regT0;
static const GPRReg returnValueGPR2 = regT1;
static const GPRReg nonPreservedNonReturnGPR = regT2;
diff --git a/Source/JavaScriptCore/jit/JIT.cpp b/Source/JavaScriptCore/jit/JIT.cpp
index 6dc4faa..b1ea289 100644
--- a/Source/JavaScriptCore/jit/JIT.cpp
+++ b/Source/JavaScriptCore/jit/JIT.cpp
@@ -29,7 +29,6 @@
#include "JIT.h"
-#include "ArityCheckFailReturnThunks.h"
#include "CodeBlock.h"
#include "CodeBlockWithJITType.h"
#include "DFGCapabilities.h"
@@ -85,6 +84,9 @@
skipOptimize.append(branchAdd32(Signed, TrustedImm32(Options::executionCounterIncrementForEntry()), AbsoluteAddress(m_codeBlock->addressOfJITExecuteCounter())));
ASSERT(!m_bytecodeOffset);
+
+ copyCalleeSavesFromFrameOrRegisterToVMCalleeSavesBuffer();
+
callOperation(operationOptimize, m_bytecodeOffset);
skipOptimize.append(branchTestPtr(Zero, returnValueGPR));
move(returnValueGPR2, stackPointerRegister);
@@ -492,6 +494,8 @@
break;
}
+ m_codeBlock->setCalleeSaveRegisters(RegisterSet::llintBaselineCalleeSaveRegisters()); // This may be redundant, since the code block is probably already set to this value.
+
// This ensures that we have the most up to date type information when performing typecheck optimizations for op_profile_type.
if (m_vm->typeProfiler())
m_vm->typeProfilerLog()->processLogEntries(ASCIILiteral("Preparing for JIT compilation."));
@@ -549,6 +553,9 @@
move(regT1, stackPointerRegister);
checkStackPointerAlignment();
+ emitSaveCalleeSaves();
+ emitMaterializeTagCheckRegisters();
+
privateCompileMainPass();
privateCompileLinkPass();
privateCompileSlowCases();
@@ -580,11 +587,6 @@
if (maxFrameExtentForSlowPathCall)
addPtr(TrustedImm32(maxFrameExtentForSlowPathCall), stackPointerRegister);
branchTest32(Zero, returnValueGPR).linkTo(beginLabel, this);
- GPRReg thunkReg = GPRInfo::argumentGPR1;
- CodeLocationLabel* failThunkLabels =
- m_vm->arityCheckFailReturnThunks->returnPCsFor(*m_vm, m_codeBlock->numParameters());
- move(TrustedImmPtr(failThunkLabels), thunkReg);
- loadPtr(BaseIndex(thunkReg, returnValueGPR, timesPtr()), thunkReg);
move(returnValueGPR, GPRInfo::argumentGPR0);
emitNakedCall(m_vm->getCTIStub(arityFixupGenerator).code());
@@ -722,6 +724,8 @@
if (!m_exceptionChecksWithCallFrameRollback.empty()) {
m_exceptionChecksWithCallFrameRollback.link(this);
+ copyCalleeSavesToVMCalleeSavesBuffer();
+
// lookupExceptionHandlerFromCallerFrame is passed two arguments, the VM and the exec (the CallFrame*).
move(TrustedImmPtr(vm()), GPRInfo::argumentGPR0);
@@ -739,6 +743,8 @@
if (!m_exceptionChecks.empty()) {
m_exceptionChecks.link(this);
+ copyCalleeSavesToVMCalleeSavesBuffer();
+
// lookupExceptionHandler is passed two arguments, the VM and the exec (the CallFrame*).
move(TrustedImmPtr(vm()), GPRInfo::argumentGPR0);
move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
diff --git a/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp b/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp
index 30b42d1..ad0cd3e 100644
--- a/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp
+++ b/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp
@@ -1069,7 +1069,7 @@
void JIT::emit_op_mod(Instruction* currentInstruction)
{
-#if CPU(X86) || CPU(X86_64)
+#if CPU(X86)
int dst = currentInstruction[1].u.operand;
int op1 = currentInstruction[2].u.operand;
int op2 = currentInstruction[3].u.operand;
diff --git a/Source/JavaScriptCore/jit/JITCall32_64.cpp b/Source/JavaScriptCore/jit/JITCall32_64.cpp
index faf4b9b..89c3bac 100644
--- a/Source/JavaScriptCore/jit/JITCall32_64.cpp
+++ b/Source/JavaScriptCore/jit/JITCall32_64.cpp
@@ -59,6 +59,7 @@
emitLoad(dst, regT1, regT0);
checkStackPointerAlignment();
+ emitRestoreCalleeSaves();
emitFunctionEpilogue();
ret();
}
diff --git a/Source/JavaScriptCore/jit/JITOpcodes.cpp b/Source/JavaScriptCore/jit/JITOpcodes.cpp
index a2953e9..80ec34e 100644
--- a/Source/JavaScriptCore/jit/JITOpcodes.cpp
+++ b/Source/JavaScriptCore/jit/JITOpcodes.cpp
@@ -70,6 +70,7 @@
{
RELEASE_ASSERT(returnValueGPR != callFrameRegister);
emitGetVirtualRegister(currentInstruction[1].u.operand, returnValueGPR);
+ emitRestoreCalleeSaves();
emitFunctionEpilogue();
ret();
}
@@ -255,6 +256,7 @@
emitGetVirtualRegister(currentInstruction[1].u.operand, returnValueGPR);
checkStackPointerAlignment();
+ emitRestoreCalleeSaves();
emitFunctionEpilogue();
ret();
}
@@ -419,6 +421,7 @@
void JIT::emit_op_throw(Instruction* currentInstruction)
{
ASSERT(regT0 == returnValueGPR);
+ copyCalleeSavesToVMCalleeSavesBuffer();
emitGetVirtualRegister(currentInstruction[1].u.operand, regT0);
callOperationNoExceptionCheck(operationThrow, regT0);
jumpToExceptionHandler();
@@ -494,11 +497,8 @@
void JIT::emit_op_catch(Instruction* currentInstruction)
{
- // Gotta restore the tag registers. We could be throwing from FTL, which may
- // clobber them.
- move(TrustedImm64(TagTypeNumber), tagTypeNumberRegister);
- move(TrustedImm64(TagMask), tagMaskRegister);
-
+ restoreCalleeSavesFromVMCalleeSavesBuffer();
+
move(TrustedImmPtr(m_vm), regT3);
load64(Address(regT3, VM::callFrameForThrowOffset()), callFrameRegister);
@@ -656,7 +656,7 @@
// registers to zap stale pointers, to avoid unnecessarily prolonging
// object lifetime and increasing GC pressure.
size_t count = m_codeBlock->m_numVars;
- for (size_t j = 0; j < count; ++j)
+ for (size_t j = CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters(); j < count; ++j)
emitInitRegister(virtualRegisterForLocal(j).offset());
emitWriteBarrier(m_codeBlock->ownerExecutable());
@@ -922,7 +922,9 @@
// Emit the slow path for the JIT optimization check:
if (canBeOptimized()) {
linkSlowCase(iter);
-
+
+ copyCalleeSavesFromFrameOrRegisterToVMCalleeSavesBuffer();
+
callOperation(operationOptimize, m_bytecodeOffset);
Jump noOptimizedEntry = branchTestPtr(Zero, returnValueGPR);
if (!ASSERT_DISABLED) {
diff --git a/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp b/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
index fd79478..2508b7d 100644
--- a/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
+++ b/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
@@ -148,6 +148,7 @@
{
ASSERT(returnValueGPR != callFrameRegister);
emitLoad(currentInstruction[1].u.operand, regT1, returnValueGPR);
+ emitRestoreCalleeSaves();
emitFunctionEpilogue();
ret();
}
@@ -741,6 +742,7 @@
void JIT::emit_op_throw(Instruction* currentInstruction)
{
ASSERT(regT0 == returnValueGPR);
+ copyCalleeSavesToVMCalleeSavesBuffer();
emitLoad(currentInstruction[1].u.operand, regT1, regT0);
callOperationNoExceptionCheck(operationThrow, regT1, regT0);
jumpToExceptionHandler();
@@ -800,6 +802,8 @@
void JIT::emit_op_catch(Instruction* currentInstruction)
{
+ restoreCalleeSavesFromVMCalleeSavesBuffer();
+
move(TrustedImmPtr(m_vm), regT3);
// operationThrow returns the callFrame for the handler.
load32(Address(regT3, VM::callFrameForThrowOffset()), callFrameRegister);
diff --git a/Source/JavaScriptCore/jit/JITOperations.cpp b/Source/JavaScriptCore/jit/JITOperations.cpp
index b859e9e..1649158 100644
--- a/Source/JavaScriptCore/jit/JITOperations.cpp
+++ b/Source/JavaScriptCore/jit/JITOperations.cpp
@@ -1334,8 +1334,11 @@
else
numVarsWithValues = 0;
Operands<JSValue> mustHandleValues(codeBlock->numParameters(), numVarsWithValues);
+ int localsUsedForCalleeSaves = static_cast<int>(CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters());
for (size_t i = 0; i < mustHandleValues.size(); ++i) {
int operand = mustHandleValues.operandForIndex(i);
+ if (operandIsLocal(operand) && VirtualRegister(operand).toLocal() < localsUsedForCalleeSaves)
+ continue;
mustHandleValues[i] = exec->uncheckedR(operand).jsValue();
}
diff --git a/Source/JavaScriptCore/jit/JITPropertyAccess.cpp b/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
index ff8553d..3319ec4 100644
--- a/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
+++ b/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
@@ -213,7 +213,7 @@
emitIdentifierCheck(regT1, regT3, propertyName, slowCases);
JITGetByIdGenerator gen(
- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::specialRegisters(),
+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
JSValueRegs(regT0), JSValueRegs(regT0), DontSpill);
gen.generateFastPath(*this);
@@ -446,7 +446,7 @@
emitGetVirtualRegisters(base, regT0, value, regT1);
JITPutByIdGenerator gen(
- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::specialRegisters(),
+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
JSValueRegs(regT0), JSValueRegs(regT1), regT2, DontSpill, m_codeBlock->ecmaMode(), putKind);
gen.generateFastPath(*this);
doneCases.append(jump());
@@ -574,7 +574,7 @@
emitArrayProfilingSiteForBytecodeIndexWithCell(regT0, regT1, m_bytecodeOffset);
JITGetByIdGenerator gen(
- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::specialRegisters(),
+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
JSValueRegs(regT0), JSValueRegs(regT0), DontSpill);
gen.generateFastPath(*this);
addSlowCase(gen.slowPathJump());
@@ -621,7 +621,7 @@
emitJumpSlowCaseIfNotJSCell(regT0, baseVReg);
JITPutByIdGenerator gen(
- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::specialRegisters(),
+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
JSValueRegs(regT0), JSValueRegs(regT1), regT2, DontSpill, m_codeBlock->ecmaMode(),
direct ? Direct : NotDirect);
diff --git a/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp b/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp
index b6dec21..ea3395c 100644
--- a/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp
+++ b/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp
@@ -282,7 +282,7 @@
emitIdentifierCheck(regT2, regT4, propertyName, slowCases);
JITGetByIdGenerator gen(
- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::specialRegisters(),
+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), DontSpill);
gen.generateFastPath(*this);
@@ -494,7 +494,7 @@
emitLoad(value, regT3, regT2);
JITPutByIdGenerator gen(
- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::specialRegisters(),
+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
JSValueRegs::payloadOnly(regT0), JSValueRegs(regT3, regT2), regT1, DontSpill, m_codeBlock->ecmaMode(), putKind);
gen.generateFastPath(*this);
doneCases.append(jump());
@@ -587,7 +587,7 @@
emitArrayProfilingSiteForBytecodeIndexWithCell(regT0, regT2, m_bytecodeOffset);
JITGetByIdGenerator gen(
- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::specialRegisters(),
+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), DontSpill);
gen.generateFastPath(*this);
addSlowCase(gen.slowPathJump());
@@ -632,7 +632,7 @@
emitJumpSlowCaseIfNotJSCell(base, regT1);
JITPutByIdGenerator gen(
- m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::specialRegisters(),
+ m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
JSValueRegs::payloadOnly(regT0), JSValueRegs(regT3, regT2),
regT1, DontSpill, m_codeBlock->ecmaMode(), direct ? Direct : NotDirect);
diff --git a/Source/JavaScriptCore/ftl/FTLRegisterAtOffset.cpp b/Source/JavaScriptCore/jit/RegisterAtOffset.cpp
similarity index 87%
rename from Source/JavaScriptCore/ftl/FTLRegisterAtOffset.cpp
rename to Source/JavaScriptCore/jit/RegisterAtOffset.cpp
index 4605722..be93604 100644
--- a/Source/JavaScriptCore/ftl/FTLRegisterAtOffset.cpp
+++ b/Source/JavaScriptCore/jit/RegisterAtOffset.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -24,18 +24,18 @@
*/
#include "config.h"
-#include "FTLRegisterAtOffset.h"
+#include "RegisterAtOffset.h"
-#if ENABLE(FTL_JIT)
+#if ENABLE(JIT)
-namespace JSC { namespace FTL {
+namespace JSC {
void RegisterAtOffset::dump(PrintStream& out) const
{
out.print(reg(), " at ", offset());
}
-} } // namespace JSC::FTL
+} // namespace JSC
-#endif // ENABLE(FTL_JIT)
+#endif // ENABLE(JIT)
diff --git a/Source/JavaScriptCore/ftl/FTLRegisterAtOffset.h b/Source/JavaScriptCore/jit/RegisterAtOffset.h
similarity index 88%
rename from Source/JavaScriptCore/ftl/FTLRegisterAtOffset.h
rename to Source/JavaScriptCore/jit/RegisterAtOffset.h
index 277338d..1122559 100644
--- a/Source/JavaScriptCore/ftl/FTLRegisterAtOffset.h
+++ b/Source/JavaScriptCore/jit/RegisterAtOffset.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -23,15 +23,15 @@
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
-#ifndef FTLRegisterAtOffset_h
-#define FTLRegisterAtOffset_h
+#ifndef RegisterAtOffset_h
+#define RegisterAtOffset_h
-#if ENABLE(FTL_JIT)
+#if ENABLE(JIT)
#include "Reg.h"
#include <wtf/PrintStream.h>
-namespace JSC { namespace FTL {
+namespace JSC {
class RegisterAtOffset {
public:
@@ -50,6 +50,7 @@
Reg reg() const { return m_reg; }
ptrdiff_t offset() const { return m_offset; }
+ int offsetAsIndex() const { return offset() / sizeof(void*); }
bool operator==(const RegisterAtOffset& other) const
{
@@ -72,9 +73,9 @@
ptrdiff_t m_offset;
};
-} } // namespace JSC::FTL
+} // namespace JSC
-#endif // ENABLE(FTL_JIT)
+#endif // ENABLE(JIT)
-#endif // FTLRegisterAtOffset_h
+#endif // RegisterAtOffset_h
diff --git a/Source/JavaScriptCore/jit/RegisterAtOffsetList.cpp b/Source/JavaScriptCore/jit/RegisterAtOffsetList.cpp
new file mode 100644
index 0000000..872cd0a
--- /dev/null
+++ b/Source/JavaScriptCore/jit/RegisterAtOffsetList.cpp
@@ -0,0 +1,80 @@
+/*
+ * Copyright (C) 2015 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "RegisterAtOffsetList.h"
+
+#if ENABLE(JIT)
+
+#include <wtf/ListDump.h>
+
+namespace JSC {
+
+RegisterAtOffsetList::RegisterAtOffsetList() { }
+
+RegisterAtOffsetList::RegisterAtOffsetList(RegisterSet registerSet, OffsetBaseType offsetBaseType)
+{
+ size_t numberOfRegisters = registerSet.numberOfSetRegisters();
+ ptrdiff_t offset = 0;
+
+ if (offsetBaseType == FramePointerBased)
+ offset = -(static_cast<ptrdiff_t>(numberOfRegisters) * sizeof(void*));
+
+ for (Reg reg = Reg::first(); reg <= Reg::last(); reg = reg.next()) {
+ if (registerSet.get(reg)) {
+ append(RegisterAtOffset(reg, offset));
+ offset += sizeof(void*);
+ }
+ }
+
+ sort();
+}
+
+void RegisterAtOffsetList::sort()
+{
+ std::sort(m_registers.begin(), m_registers.end());
+}
+
+void RegisterAtOffsetList::dump(PrintStream& out) const
+{
+ out.print(listDump(m_registers));
+}
+
+RegisterAtOffset* RegisterAtOffsetList::find(Reg reg) const
+{
+ return tryBinarySearch<RegisterAtOffset, Reg>(m_registers, m_registers.size(), reg, RegisterAtOffset::getReg);
+}
+
+unsigned RegisterAtOffsetList::indexOf(Reg reg) const
+{
+ if (RegisterAtOffset* pointer = find(reg))
+ return pointer - m_registers.begin();
+ return UINT_MAX;
+}
+
+} // namespace JSC
+
+#endif // ENABLE(JIT)
+
diff --git a/Source/JavaScriptCore/jit/RegisterAtOffsetList.h b/Source/JavaScriptCore/jit/RegisterAtOffsetList.h
new file mode 100644
index 0000000..19f6ce9
--- /dev/null
+++ b/Source/JavaScriptCore/jit/RegisterAtOffsetList.h
@@ -0,0 +1,78 @@
+/*
+ * Copyright (C) 2015 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef RegisterAtOffsetList_h
+#define RegisterAtOffsetList_h
+
+#if ENABLE(JIT)
+
+#include "RegisterAtOffset.h"
+#include "RegisterSet.h"
+
+namespace JSC {
+
+class RegisterAtOffsetList {
+public:
+ enum OffsetBaseType { FramePointerBased, ZeroBased };
+
+ RegisterAtOffsetList();
+ RegisterAtOffsetList(RegisterSet, OffsetBaseType = FramePointerBased);
+
+ void dump(PrintStream&) const;
+
+ void clear()
+ {
+ m_registers.clear();
+ }
+
+ size_t size()
+ {
+ return m_registers.size();
+ }
+
+ RegisterAtOffset& at(size_t index)
+ {
+ return m_registers.at(index);
+ }
+
+ void append(RegisterAtOffset registerAtOffset)
+ {
+ m_registers.append(registerAtOffset);
+ }
+
+ void sort();
+ RegisterAtOffset* find(Reg) const;
+ unsigned indexOf(Reg) const; // Returns UINT_MAX if not found.
+
+private:
+ Vector<RegisterAtOffset> m_registers;
+};
+
+} // namespace JSC
+
+#endif // ENABLE(JIT)
+
+#endif // RegisterAtOffsetList_h
+
diff --git a/Source/JavaScriptCore/jit/RegisterSet.cpp b/Source/JavaScriptCore/jit/RegisterSet.cpp
index 6302e26..a0d0dbe 100644
--- a/Source/JavaScriptCore/jit/RegisterSet.cpp
+++ b/Source/JavaScriptCore/jit/RegisterSet.cpp
@@ -66,6 +66,11 @@
stackRegisters(), reservedHardwareRegisters(), runtimeRegisters());
}
+RegisterSet RegisterSet::stubUnavailableRegisters()
+{
+ return RegisterSet(specialRegisters(), vmCalleeSaveRegisters());
+}
+
RegisterSet RegisterSet::calleeSaveRegisters()
{
RegisterSet result;
@@ -122,6 +127,153 @@
return result;
}
+RegisterSet RegisterSet::vmCalleeSaveRegisters()
+{
+ RegisterSet result;
+#if CPU(X86_64)
+ result.set(GPRInfo::regCS0);
+ result.set(GPRInfo::regCS1);
+ result.set(GPRInfo::regCS2);
+ result.set(GPRInfo::regCS3);
+ result.set(GPRInfo::regCS4);
+#if OS(WINDOWS)
+ result.set(GPRInfo::regCS5);
+ result.set(GPRInfo::regCS6);
+#endif
+#elif CPU(ARM64)
+ result.set(GPRInfo::regCS0);
+ result.set(GPRInfo::regCS1);
+ result.set(GPRInfo::regCS2);
+ result.set(GPRInfo::regCS3);
+ result.set(GPRInfo::regCS4);
+ result.set(GPRInfo::regCS5);
+ result.set(GPRInfo::regCS6);
+ result.set(GPRInfo::regCS7);
+ result.set(GPRInfo::regCS8);
+ result.set(GPRInfo::regCS9);
+ result.set(FPRInfo::fpRegCS0);
+ result.set(FPRInfo::fpRegCS1);
+ result.set(FPRInfo::fpRegCS2);
+ result.set(FPRInfo::fpRegCS3);
+ result.set(FPRInfo::fpRegCS4);
+ result.set(FPRInfo::fpRegCS5);
+ result.set(FPRInfo::fpRegCS6);
+ result.set(FPRInfo::fpRegCS7);
+#endif
+ return result;
+}
+
+RegisterSet RegisterSet::llintBaselineCalleeSaveRegisters()
+{
+ RegisterSet result;
+#if CPU(X86)
+#elif CPU(X86_64)
+#if !OS(WINDOWS)
+ result.set(GPRInfo::regCS2);
+ ASSERT(GPRInfo::regCS3 == GPRInfo::tagTypeNumberRegister);
+ ASSERT(GPRInfo::regCS4 == GPRInfo::tagMaskRegister);
+ result.set(GPRInfo::regCS3);
+ result.set(GPRInfo::regCS4);
+#else
+ result.set(GPRInfo::regCS4);
+ ASSERT(GPRInfo::regCS5 == GPRInfo::tagTypeNumberRegister);
+ ASSERT(GPRInfo::regCS6 == GPRInfo::tagMaskRegister);
+ result.set(GPRInfo::regCS5);
+ result.set(GPRInfo::regCS6);
+#endif
+#elif CPU(ARM_THUMB2)
+#elif CPU(ARM_TRADITIONAL)
+#elif CPU(ARM64)
+ result.set(GPRInfo::regCS7);
+ ASSERT(GPRInfo::regCS8 == GPRInfo::tagTypeNumberRegister);
+ ASSERT(GPRInfo::regCS9 == GPRInfo::tagMaskRegister);
+ result.set(GPRInfo::regCS8);
+ result.set(GPRInfo::regCS9);
+#elif CPU(MIPS)
+#elif CPU(SH4)
+#else
+ UNREACHABLE_FOR_PLATFORM();
+#endif
+ return result;
+}
+
+RegisterSet RegisterSet::dfgCalleeSaveRegisters()
+{
+ RegisterSet result;
+#if CPU(X86)
+#elif CPU(X86_64)
+ result.set(GPRInfo::regCS0);
+ result.set(GPRInfo::regCS1);
+ result.set(GPRInfo::regCS2);
+#if !OS(WINDOWS)
+ ASSERT(GPRInfo::regCS3 == GPRInfo::tagTypeNumberRegister);
+ ASSERT(GPRInfo::regCS4 == GPRInfo::tagMaskRegister);
+ result.set(GPRInfo::regCS3);
+ result.set(GPRInfo::regCS4);
+#else
+ result.set(GPRInfo::regCS3);
+ result.set(GPRInfo::regCS4);
+ ASSERT(GPRInfo::regCS5 == GPRInfo::tagTypeNumberRegister);
+ ASSERT(GPRInfo::regCS6 == GPRInfo::tagMaskRegister);
+ result.set(GPRInfo::regCS5);
+ result.set(GPRInfo::regCS6);
+#endif
+#elif CPU(ARM_THUMB2)
+#elif CPU(ARM_TRADITIONAL)
+#elif CPU(ARM64)
+ ASSERT(GPRInfo::regCS8 == GPRInfo::tagTypeNumberRegister);
+ ASSERT(GPRInfo::regCS9 == GPRInfo::tagMaskRegister);
+ result.set(GPRInfo::regCS8);
+ result.set(GPRInfo::regCS9);
+#elif CPU(MIPS)
+#elif CPU(SH4)
+#else
+ UNREACHABLE_FOR_PLATFORM();
+#endif
+ return result;
+}
+
+RegisterSet RegisterSet::ftlCalleeSaveRegisters()
+{
+ RegisterSet result;
+#if ENABLE(FTL_JIT)
+#if CPU(X86_64) && !OS(WINDOWS)
+ result.set(GPRInfo::regCS0);
+ result.set(GPRInfo::regCS1);
+ result.set(GPRInfo::regCS2);
+ ASSERT(GPRInfo::regCS3 == GPRInfo::tagTypeNumberRegister);
+ ASSERT(GPRInfo::regCS4 == GPRInfo::tagMaskRegister);
+ result.set(GPRInfo::regCS3);
+ result.set(GPRInfo::regCS4);
+#elif CPU(ARM64)
+ // LLVM might save and use all ARM64 callee saves specified in the ABI.
+ result.set(GPRInfo::regCS0);
+ result.set(GPRInfo::regCS1);
+ result.set(GPRInfo::regCS2);
+ result.set(GPRInfo::regCS3);
+ result.set(GPRInfo::regCS4);
+ result.set(GPRInfo::regCS5);
+ result.set(GPRInfo::regCS6);
+ result.set(GPRInfo::regCS7);
+ ASSERT(GPRInfo::regCS8 == GPRInfo::tagTypeNumberRegister);
+ ASSERT(GPRInfo::regCS9 == GPRInfo::tagMaskRegister);
+ result.set(GPRInfo::regCS8);
+ result.set(GPRInfo::regCS9);
+ result.set(FPRInfo::fpRegCS0);
+ result.set(FPRInfo::fpRegCS1);
+ result.set(FPRInfo::fpRegCS2);
+ result.set(FPRInfo::fpRegCS3);
+ result.set(FPRInfo::fpRegCS4);
+ result.set(FPRInfo::fpRegCS5);
+ result.set(FPRInfo::fpRegCS6);
+ result.set(FPRInfo::fpRegCS7);
+#else
+ UNREACHABLE_FOR_PLATFORM();
+#endif
+#endif
+ return result;
+}
+
RegisterSet RegisterSet::allGPRs()
{
RegisterSet result;
diff --git a/Source/JavaScriptCore/jit/RegisterSet.h b/Source/JavaScriptCore/jit/RegisterSet.h
index 44bfecd..36630d1 100644
--- a/Source/JavaScriptCore/jit/RegisterSet.h
+++ b/Source/JavaScriptCore/jit/RegisterSet.h
@@ -50,6 +50,11 @@
static RegisterSet runtimeRegisters();
static RegisterSet specialRegisters(); // The union of stack, reserved hardware, and runtime registers.
static RegisterSet calleeSaveRegisters();
+ static RegisterSet vmCalleeSaveRegisters(); // Callee save registers that might be saved and used by any tier.
+ static RegisterSet llintBaselineCalleeSaveRegisters(); // Registers saved and used by the LLInt.
+ static RegisterSet dfgCalleeSaveRegisters(); // Registers saved and used by the DFG JIT.
+ static RegisterSet ftlCalleeSaveRegisters(); // Registers that might be saved and used by the FTL JIT.
+ static RegisterSet stubUnavailableRegisters(); // The union of callee saves and special registers.
static RegisterSet allGPRs();
static RegisterSet allFPRs();
static RegisterSet allRegisters();
diff --git a/Source/JavaScriptCore/jit/Repatch.cpp b/Source/JavaScriptCore/jit/Repatch.cpp
index 15f53e2..13c0c08 100644
--- a/Source/JavaScriptCore/jit/Repatch.cpp
+++ b/Source/JavaScriptCore/jit/Repatch.cpp
@@ -555,7 +555,9 @@
if (kind == CallCustomGetter)
stubJit.setupResults(valueRegs);
MacroAssembler::Jump noException = stubJit.emitExceptionCheck(CCallHelpers::InvertedExceptionCheck);
-
+
+ stubJit.copyCalleeSavesToVMCalleeSavesBuffer();
+
stubJit.setupArguments(CCallHelpers::TrustedImmPtr(vm), GPRInfo::callFrameRegister);
handlerCall = stubJit.call();
stubJit.jumpToExceptionHandler();
diff --git a/Source/JavaScriptCore/jit/SpecializedThunkJIT.h b/Source/JavaScriptCore/jit/SpecializedThunkJIT.h
index 3c1da0e..6a2da6d 100644
--- a/Source/JavaScriptCore/jit/SpecializedThunkJIT.h
+++ b/Source/JavaScriptCore/jit/SpecializedThunkJIT.h
@@ -44,6 +44,7 @@
: JSInterfaceJIT(vm)
{
emitFunctionPrologue();
+ emitSaveThenMaterializeTagRegisters();
// Check that we have the expected number of arguments
m_failures.append(branch32(NotEqual, payloadFor(JSStack::ArgumentCount), TrustedImm32(expectedArgCount + 1)));
}
@@ -52,6 +53,7 @@
: JSInterfaceJIT(vm)
{
emitFunctionPrologue();
+ emitSaveThenMaterializeTagRegisters();
}
void loadDoubleArgument(int argument, FPRegisterID dst, RegisterID scratch)
@@ -105,6 +107,8 @@
{
if (src != regT0)
move(src, regT0);
+
+ emitRestoreSavedTagRegisters();
emitFunctionEpilogue();
ret();
}
@@ -113,6 +117,7 @@
{
ASSERT_UNUSED(payload, payload == regT0);
ASSERT_UNUSED(tag, tag == regT1);
+ emitRestoreSavedTagRegisters();
emitFunctionEpilogue();
ret();
}
@@ -137,6 +142,7 @@
lowNonZero.link(this);
highNonZero.link(this);
#endif
+ emitRestoreSavedTagRegisters();
emitFunctionEpilogue();
ret();
}
@@ -146,6 +152,7 @@
if (src != regT0)
move(src, regT0);
tagReturnAsInt32();
+ emitRestoreSavedTagRegisters();
emitFunctionEpilogue();
ret();
}
@@ -155,6 +162,7 @@
if (src != regT0)
move(src, regT0);
tagReturnAsJSCell();
+ emitRestoreSavedTagRegisters();
emitFunctionEpilogue();
ret();
}
@@ -185,7 +193,31 @@
}
private:
+ void emitSaveThenMaterializeTagRegisters()
+ {
+#if USE(JSVALUE64)
+#if CPU(ARM64)
+ pushPair(tagTypeNumberRegister, tagMaskRegister);
+#else
+ push(tagTypeNumberRegister);
+ push(tagMaskRegister);
+#endif
+ emitMaterializeTagCheckRegisters();
+#endif
+ }
+ void emitRestoreSavedTagRegisters()
+ {
+#if USE(JSVALUE64)
+#if CPU(ARM64)
+ popPair(tagTypeNumberRegister, tagMaskRegister);
+#else
+ pop(tagMaskRegister);
+ pop(tagTypeNumberRegister);
+#endif
+#endif
+ }
+
void tagReturnAsInt32()
{
#if USE(JSVALUE64)
diff --git a/Source/JavaScriptCore/jit/TempRegisterSet.h b/Source/JavaScriptCore/jit/TempRegisterSet.h
index 0b2edf9..4c21024 100644
--- a/Source/JavaScriptCore/jit/TempRegisterSet.h
+++ b/Source/JavaScriptCore/jit/TempRegisterSet.h
@@ -115,6 +115,16 @@
return getBit(GPRInfo::numberOfRegisters + index);
}
+ // Return the index'th free FPR.
+ FPRReg getFreeFPR(unsigned index = 0) const
+ {
+ for (unsigned i = FPRInfo::numberOfRegisters; i--;) {
+ if (!getFPRByIndex(i) && !index--)
+ return FPRInfo::toRegister(i);
+ }
+ return InvalidFPRReg;
+ }
+
template<typename BankInfo>
void setByIndex(unsigned index)
{
diff --git a/Source/JavaScriptCore/jit/ThunkGenerators.cpp b/Source/JavaScriptCore/jit/ThunkGenerators.cpp
index ab67a7f..cb648bb 100644
--- a/Source/JavaScriptCore/jit/ThunkGenerators.cpp
+++ b/Source/JavaScriptCore/jit/ThunkGenerators.cpp
@@ -66,6 +66,8 @@
// even though we won't use it.
jit.preserveReturnAddressAfterCall(GPRInfo::nonPreservedNonReturnGPR);
+ jit.copyCalleeSavesToVMCalleeSavesBuffer();
+
jit.setupArguments(CCallHelpers::TrustedImmPtr(vm), GPRInfo::callFrameRegister);
jit.move(CCallHelpers::TrustedImmPtr(bitwise_cast<void*>(lookupExceptionHandler)), GPRInfo::nonArgGPR0);
emitPointerValidation(jit, GPRInfo::nonArgGPR0);
@@ -209,6 +211,18 @@
if (entryType == EnterViaCall)
jit.emitFunctionPrologue();
+#if USE(JSVALUE64)
+ else if (entryType == EnterViaJump) {
+ // We're coming from a specialized thunk that has saved the prior tag registers' contents.
+ // Restore them now.
+#if CPU(ARM64)
+ jit.popPair(JSInterfaceJIT::tagTypeNumberRegister, JSInterfaceJIT::tagMaskRegister);
+#else
+ jit.pop(JSInterfaceJIT::tagMaskRegister);
+ jit.pop(JSInterfaceJIT::tagTypeNumberRegister);
+#endif
+ }
+#endif
jit.emitPutImmediateToCallFrameHeader(0, JSStack::CodeBlock);
jit.storePtr(JSInterfaceJIT::callFrameRegister, &vm->topCallFrame);
@@ -306,6 +320,7 @@
// Handle an exception
exceptionHandler.link(&jit);
+ jit.copyCalleeSavesToVMCalleeSavesBuffer();
jit.storePtr(JSInterfaceJIT::callFrameRegister, &vm->topCallFrame);
#if CPU(X86) && USE(JSVALUE32_64)
@@ -391,13 +406,6 @@
jit.addPtr(extraTemp, JSInterfaceJIT::callFrameRegister);
jit.addPtr(extraTemp, JSInterfaceJIT::stackPointerRegister);
- // Save the original return PC.
- jit.loadPtr(JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister, CallFrame::returnPCOffset()), extraTemp);
- jit.storePtr(extraTemp, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight));
-
- // Install the new return PC.
- jit.storePtr(GPRInfo::argumentGPR1, JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister, CallFrame::returnPCOffset()));
-
# if CPU(X86_64)
jit.push(JSInterfaceJIT::regT4);
# endif
@@ -439,13 +447,6 @@
jit.addPtr(JSInterfaceJIT::regT5, JSInterfaceJIT::callFrameRegister);
jit.addPtr(JSInterfaceJIT::regT5, JSInterfaceJIT::stackPointerRegister);
- // Save the original return PC.
- jit.loadPtr(JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister, CallFrame::returnPCOffset()), GPRInfo::regT5);
- jit.storePtr(GPRInfo::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight));
-
- // Install the new return PC.
- jit.storePtr(GPRInfo::argumentGPR1, JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister, CallFrame::returnPCOffset()));
-
# if CPU(X86)
jit.push(JSInterfaceJIT::regT4);
# endif
diff --git a/Source/JavaScriptCore/llint/LLIntData.cpp b/Source/JavaScriptCore/llint/LLIntData.cpp
index 8e73edf..0ed3b0b 100644
--- a/Source/JavaScriptCore/llint/LLIntData.cpp
+++ b/Source/JavaScriptCore/llint/LLIntData.cpp
@@ -26,6 +26,7 @@
#include "config.h"
#include "LLIntData.h"
#include "BytecodeConventions.h"
+#include "CodeBlock.h"
#include "CodeType.h"
#include "Instruction.h"
#include "JSScope.h"
@@ -131,6 +132,15 @@
#elif CPU(X86_64) && OS(WINDOWS)
ASSERT(maxFrameExtentForSlowPathCall == 64);
#endif
+
+#if !ENABLE(JIT) || USE(JSVALUE32_64)
+ ASSERT(!CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters());
+#elif (CPU(X86_64) && !OS(WINDOWS)) || CPU(ARM64)
+ ASSERT(CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters() == 3);
+#elif (CPU(X86_64) && OS(WINDOWS))
+ ASSERT(CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters() == 3);
+#endif
+
ASSERT(StringType == 6);
ASSERT(ObjectType == 21);
ASSERT(FinalObjectType == 22);
diff --git a/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp b/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
index f07b418..07fa48e 100644
--- a/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
+++ b/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
@@ -485,8 +485,8 @@
vm.topCallFrame = exec;
ErrorHandlingScope errorScope(vm);
- CommonSlowPaths::interpreterThrowInCaller(exec, createStackOverflowError(exec));
- pc = returnToThrowForThrownException(exec);
+ vm.throwException(exec, createStackOverflowError(exec));
+ pc = returnToThrow(exec);
LLINT_RETURN_TWO(pc, exec);
}
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
index 1904820..849ca64 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
@@ -107,10 +107,9 @@
# - t4 and t5 are never argument registers, t3 can only be a3, t1 can only be
# a1; but t0 and t2 can be either a0 or a2.
#
-# - On 64 bits, csr0, csr1, csr2 and optionally csr3, csr4, csr5 and csr6
-# are available as callee-save registers.
-# csr0 is used to store the PC base, while the last two csr registers are used
-# to store special tag values. Don't use them for anything else.
+# - On 64 bits, there are callee-save registers named csr0, csr1, ... csrN.
+# The last three csr registers are used to store the PC base and
+# two special tag values. Don't use them for anything else.
#
# Additional platform-specific details (you shouldn't rely on this remaining
# true):
@@ -218,6 +217,15 @@
const maxFrameExtentForSlowPathCall = 64
end
+if X86_64 or X86_64_WIN or ARM64
+ const CalleeSaveSpaceAsVirtualRegisters = 3
+else
+ const CalleeSaveSpaceAsVirtualRegisters = 0
+end
+
+const CalleeSaveSpaceStackAligned = (CalleeSaveSpaceAsVirtualRegisters * SlotSize + StackAlignment - 1) & ~StackAlignmentMask
+
+
# Watchpoint states
const ClearWatchpoint = 0
const IsWatched = 1
@@ -231,17 +239,20 @@
# - C calls are still given the Instruction* rather than the PC index.
# This requires an add before the call, and a sub after.
const PC = t4
- const PB = csr0
if ARM64
- const tagTypeNumber = csr1
- const tagMask = csr2
+ const PB = csr7
+ const tagTypeNumber = csr8
+ const tagMask = csr9
elsif X86_64
+ const PB = csr2
const tagTypeNumber = csr3
const tagMask = csr4
elsif X86_64_WIN
+ const PB = csr4
const tagTypeNumber = csr5
const tagMask = csr6
elsif C_LOOP
+ const PB = csr0
const tagTypeNumber = csr1
const tagMask = csr2
end
@@ -398,18 +409,14 @@
end
end
-if C_LOOP
+if C_LOOP or ARM64 or X86_64 or X86_64_WIN
const CalleeSaveRegisterCount = 0
elsif ARM or ARMv7_TRADITIONAL or ARMv7
const CalleeSaveRegisterCount = 7
-elsif ARM64
- const CalleeSaveRegisterCount = 10
-elsif SH4 or X86_64 or MIPS
+elsif SH4 or MIPS
const CalleeSaveRegisterCount = 5
elsif X86 or X86_WIN
const CalleeSaveRegisterCount = 3
-elsif X86_64_WIN
- const CalleeSaveRegisterCount = 7
end
const CalleeRegisterSaveSize = CalleeSaveRegisterCount * PtrSize
@@ -419,17 +426,11 @@
const VMEntryTotalFrameSize = (CalleeRegisterSaveSize + sizeof VMEntryRecord + StackAlignment - 1) & ~StackAlignmentMask
macro pushCalleeSaves()
- if C_LOOP
+ if C_LOOP or ARM64 or X86_64 or X86_64_WIN
elsif ARM or ARMv7_TRADITIONAL
emit "push {r4-r10}"
elsif ARMv7
emit "push {r4-r6, r8-r11}"
- elsif ARM64
- emit "stp x20, x19, [sp, #-16]!"
- emit "stp x22, x21, [sp, #-16]!"
- emit "stp x24, x23, [sp, #-16]!"
- emit "stp x26, x25, [sp, #-16]!"
- emit "stp x28, x27, [sp, #-16]!"
elsif MIPS
emit "addiu $sp, $sp, -20"
emit "sw $20, 16($sp)"
@@ -451,35 +452,15 @@
emit "push esi"
emit "push edi"
emit "push ebx"
- elsif X86_64
- emit "push %r12"
- emit "push %r13"
- emit "push %r14"
- emit "push %r15"
- emit "push %rbx"
- elsif X86_64_WIN
- emit "push r12"
- emit "push r13"
- emit "push r14"
- emit "push r15"
- emit "push rbx"
- emit "push rdi"
- emit "push rsi"
end
end
macro popCalleeSaves()
- if C_LOOP
+ if C_LOOP or ARM64 or X86_64 or X86_64_WIN
elsif ARM or ARMv7_TRADITIONAL
emit "pop {r4-r10}"
elsif ARMv7
emit "pop {r4-r6, r8-r11}"
- elsif ARM64
- emit "ldp x28, x27, [sp], #16"
- emit "ldp x26, x25, [sp], #16"
- emit "ldp x24, x23, [sp], #16"
- emit "ldp x22, x21, [sp], #16"
- emit "ldp x20, x19, [sp], #16"
elsif MIPS
emit "lw $16, 0($sp)"
emit "lw $17, 4($sp)"
@@ -501,20 +482,6 @@
emit "pop ebx"
emit "pop edi"
emit "pop esi"
- elsif X86_64
- emit "pop %rbx"
- emit "pop %r15"
- emit "pop %r14"
- emit "pop %r13"
- emit "pop %r12"
- elsif X86_64_WIN
- emit "pop rsi"
- emit "pop rdi"
- emit "pop rbx"
- emit "pop r15"
- emit "pop r14"
- emit "pop r13"
- emit "pop r12"
end
end
@@ -544,6 +511,131 @@
end
end
+macro preserveCalleeSavesUsedByLLInt()
+ subp CalleeSaveSpaceStackAligned, sp
+ if C_LOOP
+ elsif ARM or ARMv7_TRADITIONAL
+ elsif ARMv7
+ elsif ARM64
+ emit "stp x27, x28, [fp, #-16]"
+ emit "stp xzr, x26, [fp, #-32]"
+ elsif MIPS
+ elsif SH4
+ elsif X86
+ elsif X86_WIN
+ elsif X86_64
+ storep csr4, -8[cfr]
+ storep csr3, -16[cfr]
+ storep csr2, -24[cfr]
+ elsif X86_64_WIN
+ storep csr6, -8[cfr]
+ storep csr5, -16[cfr]
+ storep csr4, -24[cfr]
+ end
+end
+
+macro restoreCalleeSavesUsedByLLInt()
+ if C_LOOP
+ elsif ARM or ARMv7_TRADITIONAL
+ elsif ARMv7
+ elsif ARM64
+ emit "ldp xzr, x26, [fp, #-32]"
+ emit "ldp x27, x28, [fp, #-16]"
+ elsif MIPS
+ elsif SH4
+ elsif X86
+ elsif X86_WIN
+ elsif X86_64
+ loadp -24[cfr], csr2
+ loadp -16[cfr], csr3
+ loadp -8[cfr], csr4
+ elsif X86_64_WIN
+ loadp -24[cfr], csr4
+ loadp -16[cfr], csr5
+ loadp -8[cfr], csr6
+ end
+end
+
+macro copyCalleeSavesToVMCalleeSavesBuffer(vm, temp)
+ if ARM64 or X86_64 or X86_64_WIN
+ leap VM::calleeSaveRegistersBuffer[vm], temp
+ if ARM64
+ storep csr0, [temp]
+ storep csr1, 8[temp]
+ storep csr2, 16[temp]
+ storep csr3, 24[temp]
+ storep csr4, 32[temp]
+ storep csr5, 40[temp]
+ storep csr6, 48[temp]
+ storep csr7, 56[temp]
+ storep csr8, 64[temp]
+ storep csr9, 72[temp]
+ stored csfr0, 80[temp]
+ stored csfr1, 88[temp]
+ stored csfr2, 96[temp]
+ stored csfr3, 104[temp]
+ stored csfr4, 112[temp]
+ stored csfr5, 120[temp]
+ stored csfr6, 128[temp]
+ stored csfr7, 136[temp]
+ elsif X86_64
+ storep csr0, [temp]
+ storep csr1, 8[temp]
+ storep csr2, 16[temp]
+ storep csr3, 24[temp]
+ storep csr4, 32[temp]
+ elsif X86_64_WIN
+ storep csr0, [temp]
+ storep csr1, 8[temp]
+ storep csr2, 16[temp]
+ storep csr3, 24[temp]
+ storep csr4, 32[temp]
+ storep csr5, 40[temp]
+ storep csr6, 48[temp]
+ end
+ end
+end
+
+macro restoreCalleeSavesFromVMCalleeSavesBuffer(vm, temp)
+ if ARM64 or X86_64 or X86_64_WIN
+ leap VM::calleeSaveRegistersBuffer[vm], temp
+ if ARM64
+ loadp [temp], csr0
+ loadp 8[temp], csr1
+ loadp 16[temp], csr2
+ loadp 24[temp], csr3
+ loadp 32[temp], csr4
+ loadp 40[temp], csr5
+ loadp 48[temp], csr6
+ loadp 56[temp], csr7
+ loadp 64[temp], csr8
+ loadp 72[temp], csr9
+ loadd 80[temp], csfr0
+ loadd 88[temp], csfr1
+ loadd 96[temp], csfr2
+ loadd 104[temp], csfr3
+ loadd 112[temp], csfr4
+ loadd 120[temp], csfr5
+ loadd 128[temp], csfr6
+ loadd 136[temp], csfr7
+ elsif X86_64
+ loadp [temp], csr0
+ loadp 8[temp], csr1
+ loadp 16[temp], csr2
+ loadp 24[temp], csr3
+ loadp 32[temp], csr4
+ elsif X86_64_WIN
+ loadp [temp], csr0
+ loadp 8[temp], csr1
+ loadp 16[temp], csr2
+ loadp 24[temp], csr3
+ loadp 32[temp], csr4
+ loadp 40[temp], csr5
+ loadp 48[temp], csr6
+ end
+ end
+end
+
macro preserveReturnAddressAfterCall(destinationRegister)
if C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or ARM64 or MIPS or SH4
# In C_LOOP case, we're only preserving the bytecode vPC.
@@ -555,17 +647,6 @@
end
end
-macro restoreReturnAddressBeforeReturn(sourceRegister)
- if C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or ARM64 or MIPS or SH4
- # In C_LOOP case, we're only restoring the bytecode vPC.
- move sourceRegister, lr
- elsif X86 or X86_WIN or X86_64 or X86_64_WIN
- push sourceRegister
- else
- error
- end
-end
-
macro functionPrologue()
if X86 or X86_WIN or X86_64 or X86_64_WIN
push cfr
@@ -763,6 +844,8 @@
codeBlockSetter(t1)
+ preserveCalleeSavesUsedByLLInt()
+
# Set up the PC.
if JSVALUE64
loadp CodeBlock::m_instructions[t1], PB
@@ -778,7 +861,8 @@
bpbeq VM::m_jsStackLimit[t2], t0, .stackHeightOK
# Stack height check failed - need to call a slow_path.
- subp maxFrameExtentForSlowPathCall, sp # Set up temporary stack pointer for call
+ # Set up temporary stack pointer for call including callee saves
+ subp maxFrameExtentForSlowPathCall, sp
callSlowPath(_llint_stack_check)
bpeq r1, 0, .stackHeightOKGetCodeBlock
move r1, cfr
@@ -793,6 +877,11 @@
.stackHeightOK:
move t0, sp
+
+ if JSVALUE64
+ move TagTypeNumber, tagTypeNumber
+ addp TagBitTypeOther, tagTypeNumber, tagMask
+ end
end
# Expects that CodeBlock is in t1, which is what prologue() leaves behind.
@@ -848,6 +937,7 @@
end
macro doReturn()
+ restoreCalleeSavesUsedByLLInt()
restoreCallerPCAndCFR()
ret
end
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
index b7f2d8b..99fc1d9 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
@@ -302,6 +302,7 @@
loadp Callee + PayloadOffset[cfr], t3
andp MarkedBlockMask, t3
loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
+ restoreCalleeSavesFromVMCalleeSavesBuffer(t3, t0)
loadp VM::callFrameForThrow[t3], cfr
loadp CallerFrame[cfr], cfr
@@ -591,7 +592,6 @@
btpz t3, .proceedInline
loadp CommonSlowPaths::ArityCheckData::paddedStackSpace[r1], a0
- loadp CommonSlowPaths::ArityCheckData::returnPC[r1], a1
call t3
if ASSERT_ENABLED
loadp ReturnPC[cfr], t0
@@ -1878,6 +1878,7 @@
loadp Callee + PayloadOffset[cfr], t3
andp MarkedBlockMask, t3
loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
+ restoreCalleeSavesFromVMCalleeSavesBuffer(t3, t0)
loadp VM::callFrameForThrow[t3], cfr
restoreStackPointerAfterCall()
@@ -1916,6 +1917,7 @@
loadp Callee[cfr], t1
andp MarkedBlockMask, t1
loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t1], t1
+ copyCalleeSavesToVMCalleeSavesBuffer(t1, t2)
jmp VM::targetMachinePCForThrow[t1]
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
index bdb64e5..564ebb0 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
@@ -215,9 +215,6 @@
end
storep cfr, VM::topVMEntryFrame[vm]
- move TagTypeNumber, tagTypeNumber
- addp TagBitTypeOther, tagTypeNumber, tagMask
-
checkStackPointerAlignment(extraTempReg, 0xbad0dc02)
makeCall(entry, t3)
@@ -277,6 +274,7 @@
loadp Callee[cfr], t3
andp MarkedBlockMask, t3
loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
+ restoreCalleeSavesFromVMCalleeSavesBuffer(t3, t0)
loadp VM::callFrameForThrow[t3], cfr
loadp CallerFrame[cfr], cfr
@@ -509,20 +507,6 @@
jmp _llint_throw_from_slow_path_trampoline
.noError:
- # r1 points to ArityCheckData.
- loadp CommonSlowPaths::ArityCheckData::thunkToCall[r1], t3
- btpz t3, .proceedInline
-
- loadp CommonSlowPaths::ArityCheckData::paddedStackSpace[r1], a0
- loadp CommonSlowPaths::ArityCheckData::returnPC[r1], a1
- call t3
- if ASSERT_ENABLED
- loadp ReturnPC[cfr], t0
- loadp [t0], t0
- end
- jmp .continue
-
-.proceedInline:
loadi CommonSlowPaths::ArityCheckData::paddedStackSpace[r1], t1
btiz t1, .continue
@@ -530,8 +514,9 @@
lshiftp 1, t1
negq t1
move cfr, t3
+ subp CalleeSaveSpaceAsVirtualRegisters * 8, t3
loadi PayloadOffset + ArgumentCount[cfr], t2
- addi CallFrameHeaderSlots, t2
+ addi CallFrameHeaderSlots + CalleeSaveSpaceAsVirtualRegisters, t2
.copyLoop:
loadq [t3], t0
storeq t0, [t3, t1, 8]
@@ -574,12 +559,15 @@
checkStackPointerAlignment(t2, 0xdead00e1)
loadp CodeBlock[cfr], t2 // t2<CodeBlock> = cfr.CodeBlock
loadi CodeBlock::m_numVars[t2], t2 // t2<size_t> = t2<CodeBlock>.m_numVars
+ subq CalleeSaveSpaceAsVirtualRegisters, t2
+ move cfr, t1
+ subq CalleeSaveSpaceAsVirtualRegisters * 8, t1
btiz t2, .opEnterDone
move ValueUndefined, t0
negi t2
sxi2q t2, t2
.opEnterLoop:
- storeq t0, [cfr, t2, 8]
+ storeq t0, [t1, t2, 8]
addq 1, t2
btqnz t2, .opEnterLoop
.opEnterDone:
@@ -1761,11 +1749,6 @@
_llint_op_catch:
- # Gotta restore the tag registers. We could be throwing from FTL, which may
- # clobber them.
- move TagTypeNumber, tagTypeNumber
- move TagMask, tagMask
-
# This is where we end up from the JIT's throw trampoline (because the
# machine code return address will be set to _llint_op_catch), and from
# the interpreter's throw trampoline (see _llint_throw_trampoline).
@@ -1774,6 +1757,7 @@
loadp Callee[cfr], t3
andp MarkedBlockMask, t3
loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
+ restoreCalleeSavesFromVMCalleeSavesBuffer(t3, t0)
loadp VM::callFrameForThrow[t3], cfr
restoreStackPointerAfterCall()
@@ -1806,6 +1790,11 @@
_llint_throw_from_slow_path_trampoline:
+ loadp Callee[cfr], t1
+ andp MarkedBlockMask, t1
+ loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t1], t1
+ copyCalleeSavesToVMCalleeSavesBuffer(t1, t2)
+
callSlowPath(_llint_slow_path_handle_exception)
# When throwing from the interpreter (i.e. throwing from LLIntSlowPaths), so
diff --git a/Source/JavaScriptCore/offlineasm/arm64.rb b/Source/JavaScriptCore/offlineasm/arm64.rb
index 1110622..227a029 100644
--- a/Source/JavaScriptCore/offlineasm/arm64.rb
+++ b/Source/JavaScriptCore/offlineasm/arm64.rb
@@ -61,6 +61,14 @@
# q3 => ft3, fa3
# q4 => ft4 (unused in baseline)
# q5 => ft5 (unused in baseline)
+# q8 => csfr0 (Only the lower 64 bits)
+# q9 => csfr1 (Only the lower 64 bits)
+# q10 => csfr2 (Only the lower 64 bits)
+# q11 => csfr3 (Only the lower 64 bits)
+# q12 => csfr4 (Only the lower 64 bits)
+# q13 => csfr5 (Only the lower 64 bits)
+# q14 => csfr6 (Only the lower 64 bits)
+# q15 => csfr7 (Only the lower 64 bits)
# q31 => scratch
def arm64GPRName(name, kind)
@@ -116,10 +124,24 @@
when 'cfr'
arm64GPRName('x29', kind)
when 'csr0'
- arm64GPRName('x26', kind)
+ arm64GPRName('x19', kind)
when 'csr1'
- arm64GPRName('x27', kind)
+ arm64GPRName('x20', kind)
when 'csr2'
+ arm64GPRName('x21', kind)
+ when 'csr3'
+ arm64GPRName('x22', kind)
+ when 'csr4'
+ arm64GPRName('x23', kind)
+ when 'csr5'
+ arm64GPRName('x24', kind)
+ when 'csr6'
+ arm64GPRName('x25', kind)
+ when 'csr7'
+ arm64GPRName('x26', kind)
+ when 'csr8'
+ arm64GPRName('x27', kind)
+ when 'csr9'
arm64GPRName('x28', kind)
when 'sp'
'sp'
@@ -146,6 +168,22 @@
arm64FPRName('q4', kind)
when 'ft5'
arm64FPRName('q5', kind)
+ when 'csfr0'
+ arm64FPRName('q8', kind)
+ when 'csfr1'
+ arm64FPRName('q9', kind)
+ when 'csfr2'
+ arm64FPRName('q10', kind)
+ when 'csfr3'
+ arm64FPRName('q11', kind)
+ when 'csfr4'
+ arm64FPRName('q12', kind)
+ when 'csfr5'
+ arm64FPRName('q13', kind)
+ when 'csfr6'
+ arm64FPRName('q14', kind)
+ when 'csfr7'
+ arm64FPRName('q15', kind)
else "Bad register name #{@name} at #{codeOriginString}"
end
end
diff --git a/Source/JavaScriptCore/offlineasm/registers.rb b/Source/JavaScriptCore/offlineasm/registers.rb
index a4a075c..b6ed36d 100644
--- a/Source/JavaScriptCore/offlineasm/registers.rb
+++ b/Source/JavaScriptCore/offlineasm/registers.rb
@@ -48,7 +48,10 @@
"csr3",
"csr4",
"csr5",
- "csr6"
+ "csr6",
+ "csr7",
+ "csr8",
+ "csr9"
]
FPRS =
@@ -63,6 +66,14 @@
"fa1",
"fa2",
"fa3",
+ "csfr0",
+ "csfr1",
+ "csfr2",
+ "csfr3",
+ "csfr4",
+ "csfr5",
+ "csfr6",
+ "csfr7",
"fr"
]
diff --git a/Source/JavaScriptCore/offlineasm/x86.rb b/Source/JavaScriptCore/offlineasm/x86.rb
index f2ee96c..c6e1717 100644
--- a/Source/JavaScriptCore/offlineasm/x86.rb
+++ b/Source/JavaScriptCore/offlineasm/x86.rb
@@ -290,14 +290,13 @@
when "csr0"
"ebx"
when "csr1"
- "r12"
+ isWin ? "esi" : "r12"
when "csr2"
- "r13"
+ isWin ? "edi" : "r13"
when "csr3"
- isWin ? "esi" : "r14"
+ isWin ? "r12" : "r14"
when "csr4"
- isWin ? "edi" : "r15"
- "r15"
+ isWin ? "r13" : "r15"
when "csr5"
raise "cannot use register #{name} on X86-64" unless isWin
"r14"
diff --git a/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp b/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
index e4861ba..e29f13a 100644
--- a/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
+++ b/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
@@ -25,7 +25,6 @@
#include "config.h"
#include "CommonSlowPaths.h"
-#include "ArityCheckFailReturnThunks.h"
#include "ArrayConstructor.h"
#include "CallFrame.h"
#include "ClonedArguments.h"
@@ -166,15 +165,11 @@
CommonSlowPaths::ArityCheckData* result = vm.arityCheckData.get();
result->paddedStackSpace = slotsToAdd;
#if ENABLE(JIT)
- if (vm.canUseJIT()) {
+ if (vm.canUseJIT())
result->thunkToCall = vm.getCTIStub(arityFixupGenerator).code().executableAddress();
- result->returnPC = vm.arityCheckFailReturnThunks->returnPCFor(vm, slotsToAdd * stackAlignmentRegisters()).executableAddress();
- } else
+ else
#endif
- {
result->thunkToCall = 0;
- result->returnPC = 0;
- }
return result;
}
diff --git a/Source/JavaScriptCore/runtime/CommonSlowPaths.h b/Source/JavaScriptCore/runtime/CommonSlowPaths.h
index 7f165c3..92af43b 100644
--- a/Source/JavaScriptCore/runtime/CommonSlowPaths.h
+++ b/Source/JavaScriptCore/runtime/CommonSlowPaths.h
@@ -49,7 +49,6 @@
struct ArityCheckData {
unsigned paddedStackSpace;
void* thunkToCall;
- void* returnPC;
};
ALWAYS_INLINE int arityCheckFor(ExecState* exec, JSStack* stack, CodeSpecializationKind kind)
diff --git a/Source/JavaScriptCore/runtime/VM.cpp b/Source/JavaScriptCore/runtime/VM.cpp
index 827bb36..2db36e8 100644
--- a/Source/JavaScriptCore/runtime/VM.cpp
+++ b/Source/JavaScriptCore/runtime/VM.cpp
@@ -30,7 +30,6 @@
#include "VM.h"
#include "ArgList.h"
-#include "ArityCheckFailReturnThunks.h"
#include "ArrayBufferNeuteringWatchpoint.h"
#include "BuiltinExecutables.h"
#include "CodeBlock.h"
@@ -77,6 +76,7 @@
#include "PropertyMapHashTable.h"
#include "RegExpCache.h"
#include "RegExpObject.h"
+#include "RegisterAtOffsetList.h"
#include "RuntimeType.h"
#include "SimpleTypedArrayController.h"
#include "SourceProviderCache.h"
@@ -253,7 +253,7 @@
#if ENABLE(JIT)
jitStubs = std::make_unique<JITThunks>();
- arityCheckFailReturnThunks = std::make_unique<ArityCheckFailReturnThunks>();
+ allCalleeSaveRegisterOffsets = std::make_unique<RegisterAtOffsetList>(RegisterSet::vmCalleeSaveRegisters(), RegisterAtOffsetList::ZeroBased);
#endif
arityCheckData = std::make_unique<CommonSlowPaths::ArityCheckData>();
diff --git a/Source/JavaScriptCore/runtime/VM.h b/Source/JavaScriptCore/runtime/VM.h
index 2eb13d7..81884e5 100644
--- a/Source/JavaScriptCore/runtime/VM.h
+++ b/Source/JavaScriptCore/runtime/VM.h
@@ -33,6 +33,9 @@
#include "DateInstanceCache.h"
#include "ExecutableAllocator.h"
#include "FunctionHasExecutedCache.h"
+#if ENABLE(JIT)
+#include "GPRInfo.h"
+#endif
#include "Heap.h"
#include "Intrinsic.h"
#include "JITThunks.h"
@@ -72,7 +75,6 @@
namespace JSC {
-class ArityCheckFailReturnThunks;
class BuiltinExecutables;
class CodeBlock;
class CodeCache;
@@ -90,6 +92,7 @@
class LegacyProfiler;
class NativeExecutable;
class RegExpCache;
+class RegisterAtOffsetList;
class ScriptExecutable;
class SourceProvider;
class SourceProviderCache;
@@ -357,14 +360,26 @@
SourceProviderCacheMap sourceProviderCacheMap;
Interpreter* interpreter;
#if ENABLE(JIT)
+#if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+ intptr_t calleeSaveRegistersBuffer[NUMBER_OF_CALLEE_SAVES_REGISTERS];
+
+ static ptrdiff_t calleeSaveRegistersBufferOffset()
+ {
+ return OBJECT_OFFSETOF(VM, calleeSaveRegistersBuffer);
+ }
+#endif // NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+
std::unique_ptr<JITThunks> jitStubs;
MacroAssemblerCodeRef getCTIStub(ThunkGenerator generator)
{
return jitStubs->ctiStub(this, generator);
}
NativeExecutable* getHostFunction(NativeFunction, Intrinsic);
+
+ std::unique_ptr<RegisterAtOffsetList> allCalleeSaveRegisterOffsets;
+
+ RegisterAtOffsetList* getAllCalleeSaveRegisterOffsets() { return allCalleeSaveRegisterOffsets.get(); }
- std::unique_ptr<ArityCheckFailReturnThunks> arityCheckFailReturnThunks;
#endif // ENABLE(JIT)
std::unique_ptr<CommonSlowPaths::ArityCheckData> arityCheckData;
#if ENABLE(FTL_JIT)