[JSC] Implement op_wide16 / op_wide32 and introduce 16bit version bytecode
https://bugs.webkit.org/show_bug.cgi?id=197979
Reviewed by Filip Pizlo.
JSTests:
* stress/16bit-code.js: Added.
(shouldBe):
* stress/32bit-code.js: Added.
(shouldBe):
Source/JavaScriptCore:
This patch introduces a 16-bit bytecode width. Previously, we had two bytecode widths, 8-bit and 32-bit. However,
in Gmail we found that many bytecodes become 32-bit because their operands do not fit in 8 bits: 8 bits is very small,
and a large function easily emits many 32-bit bytecodes because of large VirtualRegister numbers etc. But those operands
almost always fit in 16 bits. With a 16-bit version of each bytecode, we can turn most of the current 32-bit bytecodes
into 16-bit ones and save memory.
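To make the intent concrete, here is a minimal standalone sketch (plain C++, not JSC code; the real checks live in
bytecode/Fits.h and the generated emitters) of how the smallest sufficient width is chosen for a signed operand:

    #include <cstdint>
    #include <cstdio>
    #include <limits>

    enum class OpcodeSize { Narrow = 1, Wide16 = 2, Wide32 = 4 };

    // Illustrative stand-in for the range checks Fits<> performs:
    // does a signed operand fit in the storage type of the given width?
    static bool fits(int32_t operand, OpcodeSize size)
    {
        switch (size) {
        case OpcodeSize::Narrow:
            return operand >= std::numeric_limits<int8_t>::min() && operand <= std::numeric_limits<int8_t>::max();
        case OpcodeSize::Wide16:
            return operand >= std::numeric_limits<int16_t>::min() && operand <= std::numeric_limits<int16_t>::max();
        default:
            return true;
        }
    }

    // The emitters try the smallest width first and widen only when an operand does not fit.
    static OpcodeSize smallestSizeFor(int32_t operand)
    {
        if (fits(operand, OpcodeSize::Narrow))
            return OpcodeSize::Narrow;
        if (fits(operand, OpcodeSize::Wide16))
            return OpcodeSize::Wide16;
        return OpcodeSize::Wide32;
    }

    int main()
    {
        std::printf("%d %d %d\n",
            static_cast<int>(smallestSizeFor(100)),     // 1 (Narrow)
            static_cast<int>(smallestSizeFor(1000)),    // 2 (Wide16)
            static_cast<int>(smallestSizeFor(100000))); // 4 (Wide32)
    }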
We rename op_wide to op_wide32 and introduce op_wide16. The mechanism is the same as for the old op_wide: when we
encounter op_wide16, the following bytecode data is 16-bit, and LLInt executes the 16-bit version of the bytecode.
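As a minimal standalone sketch of the decoding side (opcode values are made up for illustration; the real IDs are
generated from BytecodeList.rb), the one-byte prefix is what tells the interpreter how wide the rest of the instruction
is, mirroring Instruction::sizeShiftAmount() added in this patch:

    #include <cstdint>
    #include <cstdio>

    enum : uint8_t { op_wide16 = 0, op_wide32 = 1, op_add = 2 }; // hypothetical IDs for illustration

    // The first byte of an instruction tells us whether the rest of the
    // instruction (opcode + operands) is 8-bit (shift 0), 16-bit (shift 1),
    // or 32-bit (shift 2); the real opcode follows the one-byte prefix.
    static int sizeShiftAmount(const uint8_t* instruction)
    {
        if (instruction[0] == op_wide32)
            return 2;
        if (instruction[0] == op_wide16)
            return 1;
        return 0;
    }

    int main()
    {
        const uint8_t narrow[] = { op_add, 1, 2, 3 };                         // 8-bit opcode + operands
        const uint8_t wide16[] = { op_wide16, op_add, 0, 1, 0, 2, 0, 3, 0 };  // prefix + 16-bit fields (little-endian)
        std::printf("%d %d\n", sizeShiftAmount(narrow), sizeShiftAmount(wide16)); // prints "0 1"
    }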
We also disable the op_wide16 feature in the Windows CLoop, which is used by the AppleWin port. When the code size of
CLoop::execute grows, MSVC starts generating the CLoop::execute function with a very large stack allocation. Even
before introducing this 16-bit bytecode, CLoop::execute on AppleWin required almost 100KB of stack; after introducing
it, that grows to 160KB. While the semantics of the function are compiled correctly, such a large stack allocation is
not strictly necessary, and it leads to stack overflows quite easily: tests fail on the AppleWin port because it starts
throwing stack-overflow RangeErrors in various places.
For now, this patch simply disables the op_wide16 feature for AppleWin so that CLoop::execute keeps its roughly 100KB
stack allocation, since this patch is not focused on fixing AppleWin's CLoop issue. We introduce a new LLInt backend
type, "C_LOOP_WIN", which does not generate the wide16 version of the code, to reduce the code size of CLoop::execute.
In the future, we should investigate whether this MSVC issue is fixed in Visual Studio 2019, or consider always
enabling the ASM LLInt for Windows.
This patch reduces memory use on Gmail by at least 7MB.
* CMakeLists.txt:
* bytecode/BytecodeConventions.h:
* bytecode/BytecodeDumper.cpp:
(JSC::BytecodeDumper<Block>::dumpBlock):
* bytecode/BytecodeList.rb:
* bytecode/BytecodeRewriter.h:
(JSC::BytecodeRewriter::Fragment::align):
* bytecode/BytecodeUseDef.h:
(JSC::computeUsesForBytecodeOffset):
(JSC::computeDefsForBytecodeOffset):
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::finishCreation):
* bytecode/CodeBlock.h:
(JSC::CodeBlock::metadataTable const):
* bytecode/Fits.h:
* bytecode/Instruction.h:
(JSC::Instruction::opcodeID const):
(JSC::Instruction::isWide16 const):
(JSC::Instruction::isWide32 const):
(JSC::Instruction::hasMetadata const):
(JSC::Instruction::sizeShiftAmount const):
(JSC::Instruction::size const):
(JSC::Instruction::wide16 const):
(JSC::Instruction::wide32 const):
(JSC::Instruction::isWide const): Deleted.
(JSC::Instruction::wide const): Deleted.
* bytecode/InstructionStream.h:
(JSC::InstructionStreamWriter::write):
* bytecode/Opcode.h:
* bytecode/OpcodeSize.h:
* bytecompiler/BytecodeGenerator.cpp:
(JSC::BytecodeGenerator::alignWideOpcode16):
(JSC::BytecodeGenerator::alignWideOpcode32):
(JSC::BytecodeGenerator::emitGetByVal): Previously, we always emitted 32-bit op_get_by_val for bytecodes in a `for-in` context
because its operand can be replaced with a different VirtualRegister later. But if we know a priori that the replacement
VirtualRegister fits in 8 bits / 16 bits, we should not emit the 32-bit version. We expose OpXXX::checkWithoutMetadataID to check
whether we could potentially compact the bytecode for the given operands.
(JSC::BytecodeGenerator::emitYieldPoint):
(JSC::StructureForInContext::finalize):
(JSC::BytecodeGenerator::alignWideOpcode): Deleted.
* bytecompiler/BytecodeGenerator.h:
(JSC::BytecodeGenerator::write):
* dfg/DFGCapabilities.cpp:
(JSC::DFG::capabilityLevel):
* generator/Argument.rb:
* generator/DSL.rb:
* generator/Metadata.rb:
* generator/Opcode.rb: A little bit odd, but checkImpl's arguments must be references. We rely on BoundLabel being modified once
during this check phase, and the modified BoundLabel is then used when emitting the code. If checkImpl copied the passed BoundLabel,
that modification would be discarded inside checkImpl and code generation would break.
* generator/Section.rb:
* jit/JITExceptions.cpp:
(JSC::genericUnwind):
* llint/LLIntData.cpp:
(JSC::LLInt::initialize):
* llint/LLIntData.h:
(JSC::LLInt::opcodeMapWide16):
(JSC::LLInt::opcodeMapWide32):
(JSC::LLInt::getOpcodeWide16):
(JSC::LLInt::getOpcodeWide32):
(JSC::LLInt::getWide16CodePtr):
(JSC::LLInt::getWide32CodePtr):
(JSC::LLInt::opcodeMapWide): Deleted.
(JSC::LLInt::getOpcodeWide): Deleted.
(JSC::LLInt::getWideCodePtr): Deleted.
* llint/LLIntOfflineAsmConfig.h:
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::LLINT_SLOW_PATH_DECL):
* llint/LLIntSlowPaths.h:
* llint/LowLevelInterpreter.asm:
* llint/LowLevelInterpreter.cpp:
(JSC::CLoop::execute):
* llint/LowLevelInterpreter32_64.asm:
* llint/LowLevelInterpreter64.asm:
* offlineasm/arm.rb:
* offlineasm/arm64.rb:
* offlineasm/asm.rb:
* offlineasm/backends.rb:
* offlineasm/cloop.rb:
* offlineasm/instructions.rb:
* offlineasm/mips.rb:
* offlineasm/x86.rb: Load operations with sign extension should also carry the size they extend to. For example, loadbs is
converted to loadbsi for 32-bit sign extension (and loadbsq for 64-bit sign extension). We use loadbsq / loadhsq when loading
VirtualRegister information in LowLevelInterpreter64 since the values are used for pointer arithmetic at machine register width.
* parser/ResultType.h:
(JSC::OperandTypes::OperandTypes):
(JSC::OperandTypes::first const):
(JSC::OperandTypes::second const):
(JSC::OperandTypes::bits):
(JSC::OperandTypes::fromBits):
(): Deleted.
(JSC::OperandTypes::toInt): Deleted.
(JSC::OperandTypes::fromInt): Deleted.
We shrink the storage of OperandTypes from unsigned to uint16_t, which guarantees that OperandTypes always fits in a 16-bit bytecode; a small packing sketch follows this file list.
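The narrow packing mentioned above works roughly as in this sketch (illustrative 4-bit tag values, not the real
ResultType bit patterns); the Wide16 path of Fits<OperandTypes> simply stores the 16-bit value verbatim:

    #include <cassert>
    #include <cstdint>

    static constexpr unsigned typeWidth = 4;
    static constexpr unsigned maxType = (1u << typeWidth) - 1;

    // Pack a pair of 4-bit type tags into a single byte for the narrow encoding.
    static uint8_t packNarrow(uint8_t first, uint8_t second)
    {
        assert(first <= maxType && second <= maxType);
        return static_cast<uint8_t>((first << typeWidth) | second);
    }

    static void unpackNarrow(uint8_t packed, uint8_t& first, uint8_t& second)
    {
        first = packed >> typeWidth;
        second = packed & maxType;
    }

    int main()
    {
        uint8_t first, second;
        unpackNarrow(packNarrow(3, 5), first, second);
        assert(first == 3 && second == 5);
        return 0;
    }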
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@245906 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/JSTests/ChangeLog b/JSTests/ChangeLog
index 0cb1914..4aad595 100644
--- a/JSTests/ChangeLog
+++ b/JSTests/ChangeLog
@@ -1,3 +1,15 @@
+2019-05-30 Tadeu Zagallo <tzagallo@apple.com> and Yusuke Suzuki <ysuzuki@apple.com>
+
+ [JSC] Implement op_wide16 / op_wide32 and introduce 16bit version bytecode
+ https://bugs.webkit.org/show_bug.cgi?id=197979
+
+ Reviewed by Filip Pizlo.
+
+ * stress/16bit-code.js: Added.
+ (shouldBe):
+ * stress/32bit-code.js: Added.
+ (shouldBe):
+
2019-05-30 Justin Michaud <justin_michaud@apple.com>
oss-fuzz: jsc: Issue 15016: jsc: Abrt in JSC::Wasm::AirIRGenerator::addLocal (15016)
diff --git a/JSTests/stress/16bit-code.js b/JSTests/stress/16bit-code.js
new file mode 100644
index 0000000..70a39ee
--- /dev/null
+++ b/JSTests/stress/16bit-code.js
@@ -0,0 +1,7 @@
+function shouldBe(actual, expected) {
+ if (actual !== expected)
+ throw new Error('bad value: ' + actual);
+}
+
+var f = new Function(`obj`, `return 0 ${"+ obj.i".repeat(1000)}`);
+shouldBe(f({ i: 42 }), 42000);
diff --git a/JSTests/stress/32bit-code.js b/JSTests/stress/32bit-code.js
new file mode 100644
index 0000000..928d08c
--- /dev/null
+++ b/JSTests/stress/32bit-code.js
@@ -0,0 +1,13 @@
+function shouldBe(actual, expected) {
+ if (actual !== expected)
+ throw new Error('bad value: ' + actual);
+}
+
+var string = ``;
+string += `var res = 0;\n`;
+for (var i = 0; i < 5e4; ++i) {
+ string += `res += ${i};\n`
+}
+string += `return res;`
+var f = new Function(string);
+shouldBe(f(), 1249975000);
diff --git a/Source/JavaScriptCore/CMakeLists.txt b/Source/JavaScriptCore/CMakeLists.txt
index e716c71..838f345 100644
--- a/Source/JavaScriptCore/CMakeLists.txt
+++ b/Source/JavaScriptCore/CMakeLists.txt
@@ -236,7 +236,7 @@
)
if (WIN32)
- set(OFFLINE_ASM_BACKEND "X86_WIN, X86_64_WIN, C_LOOP")
+ set(OFFLINE_ASM_BACKEND "X86_WIN, X86_64_WIN, C_LOOP_WIN")
else ()
if (WTF_CPU_X86)
set(OFFLINE_ASM_BACKEND "X86")
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index b994f82..a40f272 100644
--- a/Source/JavaScriptCore/ChangeLog
+++ b/Source/JavaScriptCore/ChangeLog
@@ -1,3 +1,131 @@
+2019-05-30 Tadeu Zagallo <tzagallo@apple.com> and Yusuke Suzuki <ysuzuki@apple.com>
+
+ [JSC] Implement op_wide16 / op_wide32 and introduce 16bit version bytecode
+ https://bugs.webkit.org/show_bug.cgi?id=197979
+
+ Reviewed by Filip Pizlo.
+
+ This patch introduces a 16-bit bytecode width. Previously, we had two bytecode widths, 8-bit and 32-bit. However,
+ in Gmail we found that many bytecodes become 32-bit because their operands do not fit in 8 bits: 8 bits is very small,
+ and a large function easily emits many 32-bit bytecodes because of large VirtualRegister numbers etc. But those operands
+ almost always fit in 16 bits. With a 16-bit version of each bytecode, we can turn most of the current 32-bit bytecodes
+ into 16-bit ones and save memory.
+
+ We rename op_wide to op_wide32 and introduce op_wide16. The mechanism is the same as for the old op_wide: when we
+ encounter op_wide16, the following bytecode data is 16-bit, and LLInt executes the 16-bit version of the bytecode.
+
+ We also disable the op_wide16 feature in the Windows CLoop, which is used by the AppleWin port. When the code size of
+ CLoop::execute grows, MSVC starts generating the CLoop::execute function with a very large stack allocation. Even
+ before introducing this 16-bit bytecode, CLoop::execute on AppleWin required almost 100KB of stack; after introducing
+ it, that grows to 160KB. While the semantics of the function are compiled correctly, such a large stack allocation is
+ not strictly necessary, and it leads to stack overflows quite easily: tests fail on the AppleWin port because it starts
+ throwing stack-overflow RangeErrors in various places.
+ For now, this patch simply disables the op_wide16 feature for AppleWin so that CLoop::execute keeps its roughly 100KB
+ stack allocation, since this patch is not focused on fixing AppleWin's CLoop issue. We introduce a new LLInt backend
+ type, "C_LOOP_WIN", which does not generate the wide16 version of the code, to reduce the code size of CLoop::execute.
+ In the future, we should investigate whether this MSVC issue is fixed in Visual Studio 2019, or consider always
+ enabling the ASM LLInt for Windows.
+
+ This patch reduces memory use on Gmail by at least 7MB.
+
+ * CMakeLists.txt:
+ * bytecode/BytecodeConventions.h:
+ * bytecode/BytecodeDumper.cpp:
+ (JSC::BytecodeDumper<Block>::dumpBlock):
+ * bytecode/BytecodeList.rb:
+ * bytecode/BytecodeRewriter.h:
+ (JSC::BytecodeRewriter::Fragment::align):
+ * bytecode/BytecodeUseDef.h:
+ (JSC::computeUsesForBytecodeOffset):
+ (JSC::computeDefsForBytecodeOffset):
+ * bytecode/CodeBlock.cpp:
+ (JSC::CodeBlock::finishCreation):
+ * bytecode/CodeBlock.h:
+ (JSC::CodeBlock::metadataTable const):
+ * bytecode/Fits.h:
+ * bytecode/Instruction.h:
+ (JSC::Instruction::opcodeID const):
+ (JSC::Instruction::isWide16 const):
+ (JSC::Instruction::isWide32 const):
+ (JSC::Instruction::hasMetadata const):
+ (JSC::Instruction::sizeShiftAmount const):
+ (JSC::Instruction::size const):
+ (JSC::Instruction::wide16 const):
+ (JSC::Instruction::wide32 const):
+ (JSC::Instruction::isWide const): Deleted.
+ (JSC::Instruction::wide const): Deleted.
+ * bytecode/InstructionStream.h:
+ (JSC::InstructionStreamWriter::write):
+ * bytecode/Opcode.h:
+ * bytecode/OpcodeSize.h:
+ * bytecompiler/BytecodeGenerator.cpp:
+ (JSC::BytecodeGenerator::alignWideOpcode16):
+ (JSC::BytecodeGenerator::alignWideOpcode32):
+ (JSC::BytecodeGenerator::emitGetByVal): Previously, we always emitted 32-bit op_get_by_val for bytecodes in a `for-in` context
+ because its operand can be replaced with a different VirtualRegister later. But if we know a priori that the replacement
+ VirtualRegister fits in 8 bits / 16 bits, we should not emit the 32-bit version. We expose OpXXX::checkWithoutMetadataID to check
+ whether we could potentially compact the bytecode for the given operands.
+
+ (JSC::BytecodeGenerator::emitYieldPoint):
+ (JSC::StructureForInContext::finalize):
+ (JSC::BytecodeGenerator::alignWideOpcode): Deleted.
+ * bytecompiler/BytecodeGenerator.h:
+ (JSC::BytecodeGenerator::write):
+ * dfg/DFGCapabilities.cpp:
+ (JSC::DFG::capabilityLevel):
+ * generator/Argument.rb:
+ * generator/DSL.rb:
+ * generator/Metadata.rb:
+ * generator/Opcode.rb: A little bit odd, but checkImpl's arguments must be references. We rely on BoundLabel being modified once
+ during this check phase, and the modified BoundLabel is then used when emitting the code. If checkImpl copied the passed BoundLabel,
+ that modification would be discarded inside checkImpl and code generation would break.
+
+ * generator/Section.rb:
+ * jit/JITExceptions.cpp:
+ (JSC::genericUnwind):
+ * llint/LLIntData.cpp:
+ (JSC::LLInt::initialize):
+ * llint/LLIntData.h:
+ (JSC::LLInt::opcodeMapWide16):
+ (JSC::LLInt::opcodeMapWide32):
+ (JSC::LLInt::getOpcodeWide16):
+ (JSC::LLInt::getOpcodeWide32):
+ (JSC::LLInt::getWide16CodePtr):
+ (JSC::LLInt::getWide32CodePtr):
+ (JSC::LLInt::opcodeMapWide): Deleted.
+ (JSC::LLInt::getOpcodeWide): Deleted.
+ (JSC::LLInt::getWideCodePtr): Deleted.
+ * llint/LLIntOfflineAsmConfig.h:
+ * llint/LLIntSlowPaths.cpp:
+ (JSC::LLInt::LLINT_SLOW_PATH_DECL):
+ * llint/LLIntSlowPaths.h:
+ * llint/LowLevelInterpreter.asm:
+ * llint/LowLevelInterpreter.cpp:
+ (JSC::CLoop::execute):
+ * llint/LowLevelInterpreter32_64.asm:
+ * llint/LowLevelInterpreter64.asm:
+ * offlineasm/arm.rb:
+ * offlineasm/arm64.rb:
+ * offlineasm/asm.rb:
+ * offlineasm/backends.rb:
+ * offlineasm/cloop.rb:
+ * offlineasm/instructions.rb:
+ * offlineasm/mips.rb:
+ * offlineasm/x86.rb: Load operations with sign extension should also carry the size they extend to. For example, loadbs is
+ converted to loadbsi for 32-bit sign extension (and loadbsq for 64-bit sign extension). We use loadbsq / loadhsq when loading
+ VirtualRegister information in LowLevelInterpreter64 since the values are used for pointer arithmetic at machine register width.
+
+ * parser/ResultType.h:
+ (JSC::OperandTypes::OperandTypes):
+ (JSC::OperandTypes::first const):
+ (JSC::OperandTypes::second const):
+ (JSC::OperandTypes::bits):
+ (JSC::OperandTypes::fromBits):
+ (): Deleted.
+ (JSC::OperandTypes::toInt): Deleted.
+ (JSC::OperandTypes::fromInt): Deleted.
+ We shrink the storage of OperandTypes from unsigned to uint16_t, which guarantees that OperandTypes always fits in a 16-bit bytecode.
+
2019-05-30 Justin Michaud <justin_michaud@apple.com>
oss-fuzz: jsc: Issue 15016: jsc: Abrt in JSC::Wasm::AirIRGenerator::addLocal (15016)
diff --git a/Source/JavaScriptCore/bytecode/BytecodeConventions.h b/Source/JavaScriptCore/bytecode/BytecodeConventions.h
index 7781378..a6bdd12 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeConventions.h
+++ b/Source/JavaScriptCore/bytecode/BytecodeConventions.h
@@ -29,4 +29,8 @@
// 0x80000000-0xFFFFFFFF Negative indices from the CallFrame pointer are entries in the call frame.
// 0x00000000-0x3FFFFFFF Forwards indices from the CallFrame pointer are local vars and temporaries with the function's callframe.
// 0x40000000-0x7FFFFFFF Positive indices from 0x40000000 specify entries in the constant pool on the CodeBlock.
-static const int FirstConstantRegisterIndex = 0x40000000;
+static constexpr int FirstConstantRegisterIndex = 0x40000000;
+
+static constexpr int FirstConstantRegisterIndex8 = 16;
+static constexpr int FirstConstantRegisterIndex16 = 64;
+static constexpr int FirstConstantRegisterIndex32 = FirstConstantRegisterIndex;
diff --git a/Source/JavaScriptCore/bytecode/BytecodeDumper.cpp b/Source/JavaScriptCore/bytecode/BytecodeDumper.cpp
index 721d390..371472c 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeDumper.cpp
+++ b/Source/JavaScriptCore/bytecode/BytecodeDumper.cpp
@@ -193,22 +193,26 @@
void BytecodeDumper<Block>::dumpBlock(Block* block, const InstructionStream& instructions, PrintStream& out, const ICStatusMap& statusMap)
{
size_t instructionCount = 0;
- size_t wideInstructionCount = 0;
+ size_t wide16InstructionCount = 0;
+ size_t wide32InstructionCount = 0;
size_t instructionWithMetadataCount = 0;
for (const auto& instruction : instructions) {
- if (instruction->isWide())
- ++wideInstructionCount;
- if (instruction->opcodeID() < NUMBER_OF_BYTECODE_WITH_METADATA)
+ if (instruction->isWide16())
+ ++wide16InstructionCount;
+ else if (instruction->isWide32())
+ ++wide32InstructionCount;
+ if (instruction->hasMetadata())
++instructionWithMetadataCount;
++instructionCount;
}
out.print(*block);
out.printf(
- ": %lu instructions (%lu wide instructions, %lu instructions with metadata); %lu bytes (%lu metadata bytes); %d parameter(s); %d callee register(s); %d variable(s)",
+ ": %lu instructions (%lu 16-bit instructions, %lu 32-bit instructions, %lu instructions with metadata); %lu bytes (%lu metadata bytes); %d parameter(s); %d callee register(s); %d variable(s)",
static_cast<unsigned long>(instructionCount),
- static_cast<unsigned long>(wideInstructionCount),
+ static_cast<unsigned long>(wide16InstructionCount),
+ static_cast<unsigned long>(wide32InstructionCount),
static_cast<unsigned long>(instructionWithMetadataCount),
static_cast<unsigned long>(instructions.sizeInBytes() + block->metadataSizeInBytes()),
static_cast<unsigned long>(block->metadataSizeInBytes()),
diff --git a/Source/JavaScriptCore/bytecode/BytecodeList.rb b/Source/JavaScriptCore/bytecode/BytecodeList.rb
index cdee569..ea1bbe2 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeList.rb
+++ b/Source/JavaScriptCore/bytecode/BytecodeList.rb
@@ -82,7 +82,8 @@
asm_prefix: "llint_",
op_prefix: "op_"
-op :wide
+op :wide16
+op :wide32
op :enter
@@ -1140,6 +1141,17 @@
op :llint_cloop_did_return_from_js_21
op :llint_cloop_did_return_from_js_22
op :llint_cloop_did_return_from_js_23
+op :llint_cloop_did_return_from_js_24
+op :llint_cloop_did_return_from_js_25
+op :llint_cloop_did_return_from_js_26
+op :llint_cloop_did_return_from_js_27
+op :llint_cloop_did_return_from_js_28
+op :llint_cloop_did_return_from_js_29
+op :llint_cloop_did_return_from_js_30
+op :llint_cloop_did_return_from_js_31
+op :llint_cloop_did_return_from_js_32
+op :llint_cloop_did_return_from_js_33
+op :llint_cloop_did_return_from_js_34
end_section :CLoopHelpers
diff --git a/Source/JavaScriptCore/bytecode/BytecodeRewriter.h b/Source/JavaScriptCore/bytecode/BytecodeRewriter.h
index 367eaa9..e261654 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeRewriter.h
+++ b/Source/JavaScriptCore/bytecode/BytecodeRewriter.h
@@ -161,7 +161,7 @@
{
#if CPU(NEEDS_ALIGNED_ACCESS)
m_bytecodeGenerator.withWriter(m_writer, [&] {
- while (m_bytecodeGenerator.instructions().size() % OpcodeSize::Wide)
+ while (m_bytecodeGenerator.instructions().size() % OpcodeSize::Wide32)
OpNop::emit<OpcodeSize::Narrow>(&m_bytecodeGenerator);
});
#endif
diff --git a/Source/JavaScriptCore/bytecode/BytecodeUseDef.h b/Source/JavaScriptCore/bytecode/BytecodeUseDef.h
index 5718b5b..4962c76 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeUseDef.h
+++ b/Source/JavaScriptCore/bytecode/BytecodeUseDef.h
@@ -68,7 +68,8 @@
};
switch (opcodeID) {
- case op_wide:
+ case op_wide16:
+ case op_wide32:
RELEASE_ASSERT_NOT_REACHED();
// No uses.
@@ -289,7 +290,8 @@
void computeDefsForBytecodeOffset(Block* codeBlock, OpcodeID opcodeID, const Instruction* instruction, const Functor& functor)
{
switch (opcodeID) {
- case op_wide:
+ case op_wide16:
+ case op_wide32:
RELEASE_ASSERT_NOT_REACHED();
// These don't define anything.
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.cpp b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
index 18ba96a..ade2ace 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
@@ -445,9 +445,14 @@
const UnlinkedHandlerInfo& unlinkedHandler = unlinkedCodeBlock->exceptionHandler(i);
HandlerInfo& handler = m_rareData->m_exceptionHandlers[i];
#if ENABLE(JIT)
- MacroAssemblerCodePtr<BytecodePtrTag> codePtr = instructions().at(unlinkedHandler.target)->isWide()
- ? LLInt::getWideCodePtr<BytecodePtrTag>(op_catch)
- : LLInt::getCodePtr<BytecodePtrTag>(op_catch);
+ auto instruction = instructions().at(unlinkedHandler.target);
+ MacroAssemblerCodePtr<BytecodePtrTag> codePtr;
+ if (instruction->isWide32())
+ codePtr = LLInt::getWide32CodePtr<BytecodePtrTag>(op_catch);
+ else if (instruction->isWide16())
+ codePtr = LLInt::getWide16CodePtr<BytecodePtrTag>(op_catch);
+ else
+ codePtr = LLInt::getCodePtr<BytecodePtrTag>(op_catch);
handler.initialize(unlinkedHandler, CodeLocationLabel<ExceptionHandlerPtrTag>(codePtr.retagged<ExceptionHandlerPtrTag>()));
#else
handler.initialize(unlinkedHandler);
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.h b/Source/JavaScriptCore/bytecode/CodeBlock.h
index 98dbf64..4e62bde 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.h
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.h
@@ -145,6 +145,8 @@
void dumpAssumingJITType(PrintStream&, JITType) const;
JS_EXPORT_PRIVATE void dump(PrintStream&) const;
+ MetadataTable* metadataTable() const { return m_metadata.get(); }
+
int numParameters() const { return m_numParameters; }
void setNumParameters(int newValue);
diff --git a/Source/JavaScriptCore/bytecode/Fits.h b/Source/JavaScriptCore/bytecode/Fits.h
index 24d7757..9ab7713 100644
--- a/Source/JavaScriptCore/bytecode/Fits.h
+++ b/Source/JavaScriptCore/bytecode/Fits.h
@@ -51,123 +51,127 @@
// Implicit conversion for types of the same size
template<typename T, OpcodeSize size>
struct Fits<T, size, std::enable_if_t<sizeof(T) == size, std::true_type>> {
+ using TargetType = typename TypeBySize<size>::unsignedType;
+
static bool check(T) { return true; }
- static typename TypeBySize<size>::type convert(T t) { return bitwise_cast<typename TypeBySize<size>::type>(t); }
+ static TargetType convert(T t) { return bitwise_cast<TargetType>(t); }
- template<class T1 = T, OpcodeSize size1 = size, typename = std::enable_if_t<!std::is_same<T1, typename TypeBySize<size1>::type>::value, std::true_type>>
- static T1 convert(typename TypeBySize<size1>::type t) { return bitwise_cast<T1>(t); }
+ template<class T1 = T, OpcodeSize size1 = size, typename = std::enable_if_t<!std::is_same<T1, TargetType>::value, std::true_type>>
+ static T1 convert(TargetType t) { return bitwise_cast<T1>(t); }
};
template<typename T, OpcodeSize size>
-struct Fits<T, size, std::enable_if_t<sizeof(T) < size, std::true_type>> {
- static bool check(T) { return true; }
+struct Fits<T, size, std::enable_if_t<std::is_integral<T>::value && sizeof(T) != size && !std::is_same<bool, T>::value, std::true_type>> {
+ using TargetType = std::conditional_t<std::is_unsigned<T>::value, typename TypeBySize<size>::unsignedType, typename TypeBySize<size>::signedType>;
- static typename TypeBySize<size>::type convert(T t) { return static_cast<typename TypeBySize<size>::type>(t); }
+ static bool check(T t)
+ {
+ return t >= std::numeric_limits<TargetType>::min() && t <= std::numeric_limits<TargetType>::max();
+ }
- template<class T1 = T, OpcodeSize size1 = size, typename = std::enable_if_t<!std::is_same<T1, typename TypeBySize<size1>::type>::value, std::true_type>>
- static T1 convert(typename TypeBySize<size1>::type t) { return static_cast<T1>(t); }
+ static TargetType convert(T t)
+ {
+ ASSERT(check(t));
+ return static_cast<TargetType>(t);
+ }
+
+ template<class T1 = T, OpcodeSize size1 = size, typename TargetType1 = TargetType, typename = std::enable_if_t<!std::is_same<T1, TargetType1>::value, std::true_type>>
+ static T1 convert(TargetType1 t) { return static_cast<T1>(t); }
};
-template<>
-struct Fits<uint32_t, OpcodeSize::Narrow> {
- static bool check(unsigned u) { return u <= UINT8_MAX; }
+template<OpcodeSize size>
+struct Fits<bool, size, std::enable_if_t<size != sizeof(bool), std::true_type>> : public Fits<uint8_t, size> {
+ using Base = Fits<uint8_t, size>;
- static uint8_t convert(unsigned u)
+ static bool check(bool e) { return Base::check(static_cast<uint8_t>(e)); }
+
+ static typename Base::TargetType convert(bool e)
{
- ASSERT(check(u));
- return static_cast<uint8_t>(u);
+ return Base::convert(static_cast<uint8_t>(e));
}
- static unsigned convert(uint8_t u)
+
+ static bool convert(typename Base::TargetType e)
{
- return u;
+ return Base::convert(e);
}
};
+template<OpcodeSize size>
+struct FirstConstant;
+
template<>
-struct Fits<int, OpcodeSize::Narrow> {
- static bool check(int i)
- {
- return i >= INT8_MIN && i <= INT8_MAX;
- }
-
- static uint8_t convert(int i)
- {
- ASSERT(check(i));
- return static_cast<uint8_t>(i);
- }
-
- static int convert(uint8_t i)
- {
- return static_cast<int8_t>(i);
- }
+struct FirstConstant<OpcodeSize::Narrow> {
+ static constexpr int index = FirstConstantRegisterIndex8;
};
template<>
-struct Fits<VirtualRegister, OpcodeSize::Narrow> {
+struct FirstConstant<OpcodeSize::Wide16> {
+ static constexpr int index = FirstConstantRegisterIndex16;
+};
+
+template<OpcodeSize size>
+struct Fits<VirtualRegister, size, std::enable_if_t<size != OpcodeSize::Wide32, std::true_type>> {
+ // Narrow:
// -128..-1 local variables
// 0..15 arguments
// 16..127 constants
- static constexpr int s_firstConstantIndex = 16;
+ //
+ // Wide16:
+ // -2**15..-1 local variables
+ // 0..63 arguments
+ // 64..2**15-1 constants
+
+ using TargetType = typename TypeBySize<size>::signedType;
+
+ static constexpr int s_firstConstantIndex = FirstConstant<size>::index;
static bool check(VirtualRegister r)
{
if (r.isConstant())
- return (s_firstConstantIndex + r.toConstantIndex()) <= INT8_MAX;
- return r.offset() >= INT8_MIN && r.offset() < s_firstConstantIndex;
+ return (s_firstConstantIndex + r.toConstantIndex()) <= std::numeric_limits<TargetType>::max();
+ return r.offset() >= std::numeric_limits<TargetType>::min() && r.offset() < s_firstConstantIndex;
}
- static uint8_t convert(VirtualRegister r)
+ static TargetType convert(VirtualRegister r)
{
ASSERT(check(r));
if (r.isConstant())
- return static_cast<int8_t>(s_firstConstantIndex + r.toConstantIndex());
- return static_cast<int8_t>(r.offset());
+ return static_cast<TargetType>(s_firstConstantIndex + r.toConstantIndex());
+ return static_cast<TargetType>(r.offset());
}
- static VirtualRegister convert(uint8_t u)
+ static VirtualRegister convert(TargetType u)
{
- int i = static_cast<int>(static_cast<int8_t>(u));
+ int i = static_cast<int>(static_cast<TargetType>(u));
if (i >= s_firstConstantIndex)
return VirtualRegister { (i - s_firstConstantIndex) + FirstConstantRegisterIndex };
return VirtualRegister { i };
}
};
-template<>
-struct Fits<SymbolTableOrScopeDepth, OpcodeSize::Narrow> {
- static bool check(SymbolTableOrScopeDepth u)
+template<OpcodeSize size>
+struct Fits<SymbolTableOrScopeDepth, size, std::enable_if_t<size != OpcodeSize::Wide32, std::true_type>> : public Fits<unsigned, size> {
+ static_assert(sizeof(SymbolTableOrScopeDepth) == sizeof(unsigned));
+ using TargetType = typename TypeBySize<size>::unsignedType;
+ using Base = Fits<unsigned, size>;
+
+ static bool check(SymbolTableOrScopeDepth u) { return Base::check(u.raw()); }
+
+ static TargetType convert(SymbolTableOrScopeDepth u)
{
- return u.raw() <= UINT8_MAX;
+ return Base::convert(u.raw());
}
- static uint8_t convert(SymbolTableOrScopeDepth u)
+ static SymbolTableOrScopeDepth convert(TargetType u)
{
- ASSERT(check(u));
- return static_cast<uint8_t>(u.raw());
- }
-
- static SymbolTableOrScopeDepth convert(uint8_t u)
- {
- return SymbolTableOrScopeDepth::raw(u);
+ return SymbolTableOrScopeDepth::raw(Base::convert(u));
}
};
-template<>
-struct Fits<Special::Pointer, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> {
- using Base = Fits<int, OpcodeSize::Narrow>;
- static bool check(Special::Pointer sp) { return Base::check(static_cast<int>(sp)); }
- static uint8_t convert(Special::Pointer sp)
- {
- return Base::convert(static_cast<int>(sp));
- }
- static Special::Pointer convert(uint8_t sp)
- {
- return static_cast<Special::Pointer>(Base::convert(sp));
- }
-};
+template<OpcodeSize size>
+struct Fits<GetPutInfo, size, std::enable_if_t<size != OpcodeSize::Wide32, std::true_type>> {
+ using TargetType = typename TypeBySize<size>::unsignedType;
-template<>
-struct Fits<GetPutInfo, OpcodeSize::Narrow> {
// 13 Resolve Types
// 3 Initialization Modes
// 2 Resolve Modes
@@ -197,7 +201,7 @@
return resolveType < s_resolveTypeMax && initializationMode < s_initializationModeMax && resolveMode < s_resolveModeMax;
}
- static uint8_t convert(GetPutInfo gpi)
+ static TargetType convert(GetPutInfo gpi)
{
ASSERT(check(gpi));
auto resolveType = static_cast<uint8_t>(gpi.resolveType());
@@ -206,7 +210,7 @@
return (resolveType << 3) | (initializationMode << 1) | resolveMode;
}
- static GetPutInfo convert(uint8_t gpi)
+ static GetPutInfo convert(TargetType gpi)
{
auto resolveType = static_cast<ResolveType>((gpi & s_resolveTypeBits) >> 3);
auto initializationMode = static_cast<InitializationMode>((gpi & s_initializationModeBits) >> 1);
@@ -215,108 +219,79 @@
}
};
-template<>
-struct Fits<DebugHookType, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> {
- using Base = Fits<int, OpcodeSize::Narrow>;
- static bool check(DebugHookType dht) { return Base::check(static_cast<int>(dht)); }
- static uint8_t convert(DebugHookType dht)
- {
- return Base::convert(static_cast<int>(dht));
- }
- static DebugHookType convert(uint8_t dht)
- {
- return static_cast<DebugHookType>(Base::convert(dht));
- }
-};
+template<typename E, OpcodeSize size>
+struct Fits<E, size, std::enable_if_t<sizeof(E) != size && std::is_enum<E>::value, std::true_type>> : public Fits<std::underlying_type_t<E>, size> {
+ using Base = Fits<std::underlying_type_t<E>, size>;
-template<>
-struct Fits<ProfileTypeBytecodeFlag, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> {
- using Base = Fits<int, OpcodeSize::Narrow>;
- static bool check(ProfileTypeBytecodeFlag ptbf) { return Base::check(static_cast<int>(ptbf)); }
- static uint8_t convert(ProfileTypeBytecodeFlag ptbf)
- {
- return Base::convert(static_cast<int>(ptbf));
- }
- static ProfileTypeBytecodeFlag convert(uint8_t ptbf)
- {
- return static_cast<ProfileTypeBytecodeFlag>(Base::convert(ptbf));
- }
-};
+ static bool check(E e) { return Base::check(static_cast<std::underlying_type_t<E>>(e)); }
-template<>
-struct Fits<ResolveType, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> {
- using Base = Fits<int, OpcodeSize::Narrow>;
- static bool check(ResolveType rt) { return Base::check(static_cast<int>(rt)); }
- static uint8_t convert(ResolveType rt)
+ static typename Base::TargetType convert(E e)
{
- return Base::convert(static_cast<int>(rt));
+ return Base::convert(static_cast<std::underlying_type_t<E>>(e));
}
- static ResolveType convert(uint8_t rt)
+ static E convert(typename Base::TargetType e)
{
- return static_cast<ResolveType>(Base::convert(rt));
- }
-};
-
-template<>
-struct Fits<OperandTypes, OpcodeSize::Narrow> {
- // a pair of (ResultType::Type, ResultType::Type) - try to fit each type into 4 bits
- // additionally, encode unknown types as 0 rather than the | of all types
- static constexpr int s_maxType = 0x10;
-
- static bool check(OperandTypes types)
- {
- auto first = types.first().bits();
- auto second = types.second().bits();
- if (first == ResultType::unknownType().bits())
- first = 0;
- if (second == ResultType::unknownType().bits())
- second = 0;
- return first < s_maxType && second < s_maxType;
- }
-
- static uint8_t convert(OperandTypes types)
- {
- ASSERT(check(types));
- auto first = types.first().bits();
- auto second = types.second().bits();
- if (first == ResultType::unknownType().bits())
- first = 0;
- if (second == ResultType::unknownType().bits())
- second = 0;
- return (first << 4) | second;
- }
-
- static OperandTypes convert(uint8_t types)
- {
- auto first = (types & (0xf << 4)) >> 4;
- auto second = (types & 0xf);
- if (!first)
- first = ResultType::unknownType().bits();
- if (!second)
- second = ResultType::unknownType().bits();
- return OperandTypes(ResultType(first), ResultType(second));
- }
-};
-
-template<>
-struct Fits<PutByIdFlags, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> {
- // only ever encoded in the bytecode stream as 0 or 1, so the trivial encoding should be good enough
- using Base = Fits<int, OpcodeSize::Narrow>;
- static bool check(PutByIdFlags flags) { return Base::check(static_cast<int>(flags)); }
- static uint8_t convert(PutByIdFlags flags)
- {
- return Base::convert(static_cast<int>(flags));
- }
-
- static PutByIdFlags convert(uint8_t flags)
- {
- return static_cast<PutByIdFlags>(Base::convert(flags));
+ return static_cast<E>(Base::convert(e));
}
};
template<OpcodeSize size>
-struct Fits<BoundLabel, size> : Fits<int, size> {
+struct Fits<OperandTypes, size, std::enable_if_t<sizeof(OperandTypes) != size, std::true_type>> {
+ static_assert(sizeof(OperandTypes) == sizeof(uint16_t));
+ using TargetType = typename TypeBySize<size>::unsignedType;
+
+ // a pair of (ResultType::Type, ResultType::Type) - try to fit each type into 4 bits
+ // additionally, encode unknown types as 0 rather than the | of all types
+ static constexpr unsigned typeWidth = 4;
+ static constexpr unsigned maxType = (1 << typeWidth) - 1;
+
+ static bool check(OperandTypes types)
+ {
+ if (size == OpcodeSize::Narrow) {
+ auto first = types.first().bits();
+ auto second = types.second().bits();
+ if (first == ResultType::unknownType().bits())
+ first = 0;
+ if (second == ResultType::unknownType().bits())
+ second = 0;
+ return first <= maxType && second <= maxType;
+ }
+ return true;
+ }
+
+ static TargetType convert(OperandTypes types)
+ {
+ if (size == OpcodeSize::Narrow) {
+ ASSERT(check(types));
+ auto first = types.first().bits();
+ auto second = types.second().bits();
+ if (first == ResultType::unknownType().bits())
+ first = 0;
+ if (second == ResultType::unknownType().bits())
+ second = 0;
+ return (first << typeWidth) | second;
+ }
+ return static_cast<TargetType>(types.bits());
+ }
+
+ static OperandTypes convert(TargetType types)
+ {
+ if (size == OpcodeSize::Narrow) {
+ auto first = types >> typeWidth;
+ auto second = types & maxType;
+ if (!first)
+ first = ResultType::unknownType().bits();
+ if (!second)
+ second = ResultType::unknownType().bits();
+ return OperandTypes(ResultType(first), ResultType(second));
+ }
+ return OperandTypes::fromBits(static_cast<uint16_t>(types));
+ }
+};
+
+template<OpcodeSize size>
+struct Fits<BoundLabel, size> : public Fits<int, size> {
// This is a bit hacky: we need to delay computing jump targets, since we
// might have to emit `nop`s to align the instructions stream. Additionally,
// we have to compute the target before we start writing to the instruction
@@ -330,12 +305,12 @@
return Base::check(label.saveTarget());
}
- static typename TypeBySize<size>::type convert(BoundLabel& label)
+ static typename Base::TargetType convert(BoundLabel& label)
{
return Base::convert(label.commitTarget());
}
- static BoundLabel convert(typename TypeBySize<size>::type target)
+ static BoundLabel convert(typename Base::TargetType target)
{
return BoundLabel(Base::convert(target));
}
diff --git a/Source/JavaScriptCore/bytecode/Instruction.h b/Source/JavaScriptCore/bytecode/Instruction.h
index fb278e9..651ce8f 100644
--- a/Source/JavaScriptCore/bytecode/Instruction.h
+++ b/Source/JavaScriptCore/bytecode/Instruction.h
@@ -45,14 +45,16 @@
OpcodeID opcodeID() const { return static_cast<OpcodeID>(m_opcode); }
private:
- typename TypeBySize<Width>::type m_opcode;
+ typename TypeBySize<Width>::unsignedType m_opcode;
};
public:
OpcodeID opcodeID() const
{
- if (isWide())
- return wide()->opcodeID();
+ if (isWide32())
+ return wide32()->opcodeID();
+ if (isWide16())
+ return wide16()->opcodeID();
return narrow()->opcodeID();
}
@@ -61,16 +63,35 @@
return opcodeNames[opcodeID()];
}
- bool isWide() const
+ bool isWide16() const
{
- return narrow()->opcodeID() == op_wide;
+ return narrow()->opcodeID() == op_wide16;
+ }
+
+ bool isWide32() const
+ {
+ return narrow()->opcodeID() == op_wide32;
+ }
+
+ bool hasMetadata() const
+ {
+ return opcodeID() < NUMBER_OF_BYTECODE_WITH_METADATA;
+ }
+
+ int sizeShiftAmount() const
+ {
+ if (isWide32())
+ return 2;
+ if (isWide16())
+ return 1;
+ return 0;
}
size_t size() const
{
- auto wide = isWide();
- auto padding = wide ? 1 : 0;
- auto size = wide ? 4 : 1;
+ auto sizeShiftAmount = this->sizeShiftAmount();
+ auto padding = sizeShiftAmount ? 1 : 0;
+ auto size = 1 << sizeShiftAmount;
return opcodeLengths[opcodeID()] * size + padding;
}
@@ -106,11 +127,18 @@
return reinterpret_cast<const Impl<OpcodeSize::Narrow>*>(this);
}
- const Impl<OpcodeSize::Wide>* wide() const
+ const Impl<OpcodeSize::Wide16>* wide16() const
{
- ASSERT(isWide());
- return reinterpret_cast<const Impl<OpcodeSize::Wide>*>(bitwise_cast<uintptr_t>(this) + 1);
+ ASSERT(isWide16());
+ return reinterpret_cast<const Impl<OpcodeSize::Wide16>*>(bitwise_cast<uintptr_t>(this) + 1);
+ }
+
+ const Impl<OpcodeSize::Wide32>* wide32() const
+ {
+
+ ASSERT(isWide32());
+ return reinterpret_cast<const Impl<OpcodeSize::Wide32>*>(bitwise_cast<uintptr_t>(this) + 1);
}
};
diff --git a/Source/JavaScriptCore/bytecode/InstructionStream.h b/Source/JavaScriptCore/bytecode/InstructionStream.h
index ce9607b..99b5a5a 100644
--- a/Source/JavaScriptCore/bytecode/InstructionStream.h
+++ b/Source/JavaScriptCore/bytecode/InstructionStream.h
@@ -210,6 +210,20 @@
m_position++;
}
}
+
+ void write(uint16_t h)
+ {
+ ASSERT(!m_finalized);
+ uint8_t bytes[2];
+ std::memcpy(bytes, &h, sizeof(h));
+
+ // Though not always obvious, we don't have to invert the order of the
+ // bytes written here for CPU(BIG_ENDIAN). This is because the incoming
+ // value is already ordered in big endian on CPU(BIG_ENDIAN) platforms.
+ write(bytes[0]);
+ write(bytes[1]);
+ }
+
void write(uint32_t i)
{
ASSERT(!m_finalized);
diff --git a/Source/JavaScriptCore/bytecode/Opcode.h b/Source/JavaScriptCore/bytecode/Opcode.h
index 4427dd9..c921dd8 100644
--- a/Source/JavaScriptCore/bytecode/Opcode.h
+++ b/Source/JavaScriptCore/bytecode/Opcode.h
@@ -66,8 +66,12 @@
#if ENABLE(C_LOOP) && !HAVE(COMPUTED_GOTO)
-#define OPCODE_ID_ENUM(opcode, length) opcode##_wide = numOpcodeIDs + opcode,
- enum OpcodeIDWide : unsigned { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) };
+#define OPCODE_ID_ENUM(opcode, length) opcode##_wide16 = numOpcodeIDs + opcode,
+ enum OpcodeIDWide16 : unsigned { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) };
+#undef OPCODE_ID_ENUM
+
+#define OPCODE_ID_ENUM(opcode, length) opcode##_wide32 = numOpcodeIDs * 2 + opcode,
+ enum OpcodeIDWide32 : unsigned { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) };
#undef OPCODE_ID_ENUM
#endif
diff --git a/Source/JavaScriptCore/bytecode/OpcodeSize.h b/Source/JavaScriptCore/bytecode/OpcodeSize.h
index 98943f3..24b162b 100644
--- a/Source/JavaScriptCore/bytecode/OpcodeSize.h
+++ b/Source/JavaScriptCore/bytecode/OpcodeSize.h
@@ -29,7 +29,8 @@
enum OpcodeSize {
Narrow = 1,
- Wide = 4,
+ Wide16 = 2,
+ Wide32 = 4,
};
template<OpcodeSize>
@@ -37,12 +38,20 @@
template<>
struct TypeBySize<OpcodeSize::Narrow> {
- using type = uint8_t;
+ using signedType = int8_t;
+ using unsignedType = uint8_t;
};
template<>
-struct TypeBySize<OpcodeSize::Wide> {
- using type = uint32_t;
+struct TypeBySize<OpcodeSize::Wide16> {
+ using signedType = int16_t;
+ using unsignedType = uint16_t;
+};
+
+template<>
+struct TypeBySize<OpcodeSize::Wide32> {
+ using signedType = int32_t;
+ using unsignedType = uint32_t;
};
template<OpcodeSize>
@@ -54,7 +63,12 @@
};
template<>
-struct PaddingBySize<OpcodeSize::Wide> {
+struct PaddingBySize<OpcodeSize::Wide16> {
+ static constexpr uint8_t value = 1;
+};
+
+template<>
+struct PaddingBySize<OpcodeSize::Wide32> {
static constexpr uint8_t value = 1;
};
diff --git a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
index 667dac0..abe4166 100644
--- a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
+++ b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
@@ -1339,10 +1339,18 @@
m_lastOpcodeID = opcodeID;
}
-void BytecodeGenerator::alignWideOpcode()
+void BytecodeGenerator::alignWideOpcode16()
{
#if CPU(NEEDS_ALIGNED_ACCESS)
- while ((m_writer.position() + 1) % OpcodeSize::Wide)
+ while ((m_writer.position() + 1) % OpcodeSize::Wide16)
+ OpNop::emit<OpcodeSize::Narrow>(this);
+#endif
+}
+
+void BytecodeGenerator::alignWideOpcode32()
+{
+#if CPU(NEEDS_ALIGNED_ACCESS)
+ while ((m_writer.position() + 1) % OpcodeSize::Wide32)
OpNop::emit<OpcodeSize::Narrow>(this);
#endif
}
@@ -2721,13 +2729,20 @@
if (context.isIndexedForInContext()) {
auto& indexedContext = context.asIndexedForInContext();
- OpGetByVal::emit<OpcodeSize::Wide>(this, kill(dst), base, indexedContext.index());
+ kill(dst);
+ if (OpGetByVal::checkWithoutMetadataID<OpcodeSize::Narrow>(this, dst, base, property))
+ OpGetByVal::emitWithSmallestSizeRequirement<OpcodeSize::Narrow>(this, dst, base, indexedContext.index());
+ else if (OpGetByVal::checkWithoutMetadataID<OpcodeSize::Wide16>(this, dst, base, property))
+ OpGetByVal::emitWithSmallestSizeRequirement<OpcodeSize::Wide16>(this, dst, base, indexedContext.index());
+ else
+ OpGetByVal::emit<OpcodeSize::Wide32>(this, dst, base, indexedContext.index());
indexedContext.addGetInst(m_lastInstruction.offset(), property->index());
return dst;
}
+ // We cannot do the above optimization here since OpGetDirectPname => OpGetByVal conversion involves different metadata ID allocation.
StructureForInContext& structureContext = context.asStructureForInContext();
- OpGetDirectPname::emit<OpcodeSize::Wide>(this, kill(dst), base, property, structureContext.index(), structureContext.enumerator());
+ OpGetDirectPname::emit<OpcodeSize::Wide32>(this, kill(dst), base, property, structureContext.index(), structureContext.enumerator());
structureContext.addGetInst(m_lastInstruction.offset(), property->index());
return dst;
@@ -4480,7 +4495,7 @@
#if CPU(NEEDS_ALIGNED_ACCESS)
// conservatively align for the bytecode rewriter: it will delete this yield and
// append a fragment, so we make sure that the start of the fragments is aligned
- while (m_writer.position() % OpcodeSize::Wide)
+ while (m_writer.position() % OpcodeSize::Wide32)
OpNop::emit<OpcodeSize::Narrow>(this);
#endif
OpYield::emit(this, generatorFrameRegister(), yieldPointIndex, argument);
@@ -4983,7 +4998,7 @@
int propertyRegIndex = std::get<1>(instTuple);
auto instruction = generator.m_writer.ref(instIndex);
auto end = instIndex + instruction->size();
- ASSERT(instruction->isWide());
+ ASSERT(instruction->isWide32());
generator.m_writer.seek(instIndex);
@@ -4996,7 +5011,7 @@
// 1. dst stays the same.
// 2. base stays the same.
// 3. property gets switched to the original property.
- OpGetByVal::emit<OpcodeSize::Wide>(&generator, bytecode.m_dst, bytecode.m_base, VirtualRegister(propertyRegIndex));
+ OpGetByVal::emit<OpcodeSize::Wide32>(&generator, bytecode.m_dst, bytecode.m_base, VirtualRegister(propertyRegIndex));
// 4. nop out the remaining bytes
while (generator.m_writer.position() < end)
@@ -5018,8 +5033,6 @@
for (const auto& instPair : m_getInsts) {
unsigned instIndex = instPair.first;
int propertyRegIndex = instPair.second;
- // FIXME: we should not have to force this get_by_val to be wide, just guarantee that propertyRegIndex fits
- // https://bugs.webkit.org/show_bug.cgi?id=190929
generator.m_writer.ref(instIndex)->cast<OpGetByVal>()->setProperty(VirtualRegister(propertyRegIndex), []() {
ASSERT_NOT_REACHED();
return VirtualRegister();
diff --git a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h
index 1c90313..e97686a 100644
--- a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h
+++ b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h
@@ -1162,8 +1162,13 @@
RegisterID* emitThrowExpressionTooDeepException();
void write(uint8_t byte) { m_writer.write(byte); }
+ void write(uint16_t h) { m_writer.write(h); }
void write(uint32_t i) { m_writer.write(i); }
- void alignWideOpcode();
+ void write(int8_t byte) { m_writer.write(static_cast<uint8_t>(byte)); }
+ void write(int16_t h) { m_writer.write(static_cast<uint16_t>(h)); }
+ void write(int32_t i) { m_writer.write(static_cast<uint32_t>(i)); }
+ void alignWideOpcode16();
+ void alignWideOpcode32();
class PreservedTDZStack {
private:
diff --git a/Source/JavaScriptCore/dfg/DFGCapabilities.cpp b/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
index 20c2340..dfe6c16 100644
--- a/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
+++ b/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
@@ -108,7 +108,8 @@
UNUSED_PARAM(pc);
switch (opcodeID) {
- case op_wide:
+ case op_wide16:
+ case op_wide32:
RELEASE_ASSERT_NOT_REACHED();
case op_enter:
case op_to_this:
diff --git a/Source/JavaScriptCore/generator/Argument.rb b/Source/JavaScriptCore/generator/Argument.rb
index 99dcb93..38a2815 100644
--- a/Source/JavaScriptCore/generator/Argument.rb
+++ b/Source/JavaScriptCore/generator/Argument.rb
@@ -42,6 +42,10 @@
"#{@type.to_s} #{@name}"
end
+ def create_reference_param
+ "#{@type.to_s}& #{@name}"
+ end
+
def field_name
"m_#{@name}"
end
@@ -67,8 +71,10 @@
template<typename Functor>
void set#{capitalized_name}(#{@type.to_s} value, Functor func)
{
- if (isWide())
- set#{capitalized_name}<OpcodeSize::Wide>(value, func);
+ if (isWide32())
+ set#{capitalized_name}<OpcodeSize::Wide32>(value, func);
+ else if (isWide16())
+ set#{capitalized_name}<OpcodeSize::Wide16>(value, func);
else
set#{capitalized_name}<OpcodeSize::Narrow>(value, func);
}
@@ -78,7 +84,7 @@
{
if (!#{Fits::check "size", "value", @type})
value = func();
- auto* stream = bitwise_cast<typename TypeBySize<size>::type*>(reinterpret_cast<uint8_t*>(this) + #{@index} * size + PaddingBySize<size>::value);
+ auto* stream = bitwise_cast<typename TypeBySize<size>::unsignedType*>(reinterpret_cast<uint8_t*>(this) + #{@index} * size + PaddingBySize<size>::value);
*stream = #{Fits::convert "size", "value", @type};
}
EOF
diff --git a/Source/JavaScriptCore/generator/DSL.rb b/Source/JavaScriptCore/generator/DSL.rb
index 9407aad..92c7f94 100644
--- a/Source/JavaScriptCore/generator/DSL.rb
+++ b/Source/JavaScriptCore/generator/DSL.rb
@@ -144,7 +144,7 @@
GeneratedFile::create(init_asm_filename, bytecode_list) do |template|
template.multiline_comment = nil
template.line_comment = "#"
- template.body = (opcodes.map.with_index(&:set_entry_address) + opcodes.map.with_index(&:set_entry_address_wide)) .join("\n")
+ template.body = (opcodes.map.with_index(&:set_entry_address) + opcodes.map.with_index(&:set_entry_address_wide16) + opcodes.map.with_index(&:set_entry_address_wide32)) .join("\n")
end
end
diff --git a/Source/JavaScriptCore/generator/Metadata.rb b/Source/JavaScriptCore/generator/Metadata.rb
index ad5efa5..c3886f8 100644
--- a/Source/JavaScriptCore/generator/Metadata.rb
+++ b/Source/JavaScriptCore/generator/Metadata.rb
@@ -112,9 +112,13 @@
EOF
end
+ def emitter_local_name
+ "__metadataID"
+ end
+
def emitter_local
unless @@emitter_local
- @@emitter_local = Argument.new("__metadataID", :unsigned, -1)
+ @@emitter_local = Argument.new(emitter_local_name, :unsigned, -1)
end
return @@emitter_local
diff --git a/Source/JavaScriptCore/generator/Opcode.rb b/Source/JavaScriptCore/generator/Opcode.rb
index 05c2595..3d25a96 100644
--- a/Source/JavaScriptCore/generator/Opcode.rb
+++ b/Source/JavaScriptCore/generator/Opcode.rb
@@ -32,7 +32,8 @@
module Size
Narrow = "OpcodeSize::Narrow"
- Wide = "OpcodeSize::Wide"
+ Wide16 = "OpcodeSize::Wide16"
+ Wide32 = "OpcodeSize::Wide32"
end
@@id = 0
@@ -74,6 +75,12 @@
@args.map(&:create_param).unshift("").join(", ")
end
+ def typed_reference_args
+ return if @args.nil?
+
+ @args.map(&:create_reference_param).unshift("").join(", ")
+ end
+
def untyped_args
return if @args.nil?
@@ -81,7 +88,7 @@
end
def map_fields_with_size(prefix, size, &block)
- args = [Argument.new("opcodeID", :unsigned, 0)]
+ args = [Argument.new("opcodeID", :OpcodeID, 0)]
args += @args.dup if @args
unless @metadata.empty?
args << @metadata.emitter_local
@@ -108,15 +115,14 @@
end
def emitter
- op_wide = Argument.new("op_wide", :unsigned, 0)
+ op_wide16 = Argument.new("op_wide16", :OpcodeID, 0)
+ op_wide32 = Argument.new("op_wide32", :OpcodeID, 0)
metadata_param = @metadata.empty? ? "" : ", #{@metadata.emitter_local.create_param}"
metadata_arg = @metadata.empty? ? "" : ", #{@metadata.emitter_local.name}"
<<-EOF.chomp
static void emit(BytecodeGenerator* gen#{typed_args})
{
- #{@metadata.create_emitter_local}
- emit<OpcodeSize::Narrow, NoAssert, true>(gen#{untyped_args}#{metadata_arg})
- || emit<OpcodeSize::Wide, Assert, true>(gen#{untyped_args}#{metadata_arg});
+ emitWithSmallestSizeRequirement<OpcodeSize::Narrow>(gen#{untyped_args});
}
#{%{
template<OpcodeSize size, FitsAssertion shouldAssert = Assert>
@@ -124,6 +130,13 @@
{#{@metadata.create_emitter_local}
return emit<size, shouldAssert>(gen#{untyped_args}#{metadata_arg});
}
+
+ template<OpcodeSize size>
+ static bool checkWithoutMetadataID(BytecodeGenerator* gen#{typed_args})
+ {
+ decltype(gen->addMetadataFor(opcodeID)) __metadataID { };
+ return checkImpl<size>(gen#{untyped_args}#{metadata_arg});
+ }
} unless @metadata.empty?}
template<OpcodeSize size, FitsAssertion shouldAssert = Assert, bool recordOpcode = true>
static bool emit(BytecodeGenerator* gen#{typed_args}#{metadata_param})
@@ -134,18 +147,51 @@
return didEmit;
}
+ template<OpcodeSize size>
+ static void emitWithSmallestSizeRequirement(BytecodeGenerator* gen#{typed_args})
+ {
+ #{@metadata.create_emitter_local}
+ if (static_cast<unsigned>(size) <= static_cast<unsigned>(OpcodeSize::Narrow)) {
+ if (emit<OpcodeSize::Narrow, NoAssert, true>(gen#{untyped_args}#{metadata_arg}))
+ return;
+ }
+ if (static_cast<unsigned>(size) <= static_cast<unsigned>(OpcodeSize::Wide16)) {
+ if (emit<OpcodeSize::Wide16, NoAssert, true>(gen#{untyped_args}#{metadata_arg}))
+ return;
+ }
+ emit<OpcodeSize::Wide32, Assert, true>(gen#{untyped_args}#{metadata_arg});
+ }
+
private:
+ template<OpcodeSize size>
+ static bool checkImpl(BytecodeGenerator* gen#{typed_reference_args}#{metadata_param})
+ {
+ UNUSED_PARAM(gen);
+#if OS(WINDOWS) && ENABLE(C_LOOP)
+ // FIXME: Disable wide16 optimization for Windows CLoop
+ // https://bugs.webkit.org/show_bug.cgi?id=198283
+ if (size == OpcodeSize::Wide16)
+ return false;
+#endif
+ return #{map_fields_with_size("", "size", &:fits_check).join "\n && "}
+ && (size == OpcodeSize::Wide16 ? #{op_wide16.fits_check(Size::Narrow)} : true)
+ && (size == OpcodeSize::Wide32 ? #{op_wide32.fits_check(Size::Narrow)} : true);
+ }
+
template<OpcodeSize size, bool recordOpcode>
static bool emitImpl(BytecodeGenerator* gen#{typed_args}#{metadata_param})
{
- if (size == OpcodeSize::Wide)
- gen->alignWideOpcode();
- if (#{map_fields_with_size("", "size", &:fits_check).join "\n && "}
- && (size == OpcodeSize::Wide ? #{op_wide.fits_check(Size::Narrow)} : true)) {
+ if (size == OpcodeSize::Wide16)
+ gen->alignWideOpcode16();
+ else if (size == OpcodeSize::Wide32)
+ gen->alignWideOpcode32();
+ if (checkImpl<size>(gen#{untyped_args}#{metadata_arg})) {
if (recordOpcode)
gen->recordOpcode(opcodeID);
- if (size == OpcodeSize::Wide)
- #{op_wide.fits_write Size::Narrow}
+ if (size == OpcodeSize::Wide16)
+ #{op_wide16.fits_write Size::Narrow}
+ else if (size == OpcodeSize::Wide32)
+ #{op_wide32.fits_write Size::Narrow}
#{map_fields_with_size(" ", "size", &:fits_write).join "\n"}
return true;
}
@@ -159,9 +205,9 @@
def dumper
<<-EOF
template<typename Block>
- void dump(BytecodeDumper<Block>* dumper, InstructionStream::Offset __location, bool __isWide)
+ void dump(BytecodeDumper<Block>* dumper, InstructionStream::Offset __location, int __sizeShiftAmount)
{
- dumper->printLocationAndOp(__location, &"*#{@name}"[!__isWide]);
+ dumper->printLocationAndOp(__location, &"**#{@name}"[2 - __sizeShiftAmount]);
#{print_args { |arg|
<<-EOF.chomp
dumper->dumpOperand(#{arg.field_name}, #{arg.index == 1});
@@ -182,19 +228,26 @@
ASSERT_UNUSED(stream, stream[0] == opcodeID);
}
+ #{capitalized_name}(const uint16_t* stream)
+ #{init.call("OpcodeSize::Wide16")}
+ {
+ ASSERT_UNUSED(stream, stream[0] == opcodeID);
+ }
+
+
#{capitalized_name}(const uint32_t* stream)
- #{init.call("OpcodeSize::Wide")}
+ #{init.call("OpcodeSize::Wide32")}
{
ASSERT_UNUSED(stream, stream[0] == opcodeID);
}
static #{capitalized_name} decode(const uint8_t* stream)
{
- if (*stream != op_wide)
- return { stream };
-
- auto wideStream = bitwise_cast<const uint32_t*>(stream + 1);
- return { wideStream };
+ if (*stream == op_wide32)
+ return { bitwise_cast<const uint32_t*>(stream + 1) };
+ if (*stream == op_wide16)
+ return { bitwise_cast<const uint16_t*>(stream + 1) };
+ return { stream };
}
EOF
end
@@ -219,8 +272,12 @@
"setEntryAddress(#{id}, _#{full_name})"
end
- def set_entry_address_wide(id)
- "setEntryAddressWide(#{id}, _#{full_name}_wide)"
+ def set_entry_address_wide16(id)
+ "setEntryAddressWide16(#{id}, _#{full_name}_wide16)"
+ end
+
+ def set_entry_address_wide32(id)
+ "setEntryAddressWide32(#{id}, _#{full_name}_wide32)"
end
def struct_indices
@@ -253,7 +310,7 @@
#{opcodes.map { |op|
<<-EOF.chomp
case #{op.name}:
- __instruction->as<#{op.capitalized_name}>().dump(dumper, __location, __instruction->isWide());
+ __instruction->as<#{op.capitalized_name}>().dump(dumper, __location, __instruction->sizeShiftAmount());
break;
EOF
}.join "\n"}
diff --git a/Source/JavaScriptCore/generator/Section.rb b/Source/JavaScriptCore/generator/Section.rb
index 7a6afcc..8cd21db 100644
--- a/Source/JavaScriptCore/generator/Section.rb
+++ b/Source/JavaScriptCore/generator/Section.rb
@@ -100,7 +100,10 @@
out.write("#define #{opcode.name}_value_string \"#{opcode.id}\"\n")
}
opcodes.each { |opcode|
- out.write("#define #{opcode.name}_wide_value_string \"#{num_opcodes + opcode.id}\"\n")
+ out.write("#define #{opcode.name}_wide16_value_string \"#{num_opcodes + opcode.id}\"\n")
+ }
+ opcodes.each { |opcode|
+ out.write("#define #{opcode.name}_wide32_value_string \"#{num_opcodes * 2 + opcode.id}\"\n")
}
end
out.string
diff --git a/Source/JavaScriptCore/jit/JITExceptions.cpp b/Source/JavaScriptCore/jit/JITExceptions.cpp
index 7fb225b..95bbe50 100644
--- a/Source/JavaScriptCore/jit/JITExceptions.cpp
+++ b/Source/JavaScriptCore/jit/JITExceptions.cpp
@@ -74,9 +74,12 @@
#if ENABLE(JIT)
catchRoutine = handler->nativeCode.executableAddress();
#else
- catchRoutine = catchPCForInterpreter->isWide()
- ? LLInt::getWideCodePtr(catchPCForInterpreter->opcodeID())
- : LLInt::getCodePtr(catchPCForInterpreter->opcodeID());
+ if (catchPCForInterpreter->isWide32())
+ catchRoutine = LLInt::getWide32CodePtr(catchPCForInterpreter->opcodeID());
+ else if (catchPCForInterpreter->isWide16())
+ catchRoutine = LLInt::getWide16CodePtr(catchPCForInterpreter->opcodeID());
+ else
+ catchRoutine = LLInt::getCodePtr(catchPCForInterpreter->opcodeID());
#endif
} else
catchRoutine = LLInt::getCodePtr<ExceptionHandlerPtrTag>(handleUncaughtException).executableAddress();
diff --git a/Source/JavaScriptCore/llint/LLIntData.cpp b/Source/JavaScriptCore/llint/LLIntData.cpp
index 58f18e4..e34a79f 100644
--- a/Source/JavaScriptCore/llint/LLIntData.cpp
+++ b/Source/JavaScriptCore/llint/LLIntData.cpp
@@ -49,10 +49,11 @@
uint8_t Data::s_exceptionInstructions[maxOpcodeLength + 1] = { };
Opcode g_opcodeMap[numOpcodeIDs] = { };
-Opcode g_opcodeMapWide[numOpcodeIDs] = { };
+Opcode g_opcodeMapWide16[numOpcodeIDs] = { };
+Opcode g_opcodeMapWide32[numOpcodeIDs] = { };
#if !ENABLE(C_LOOP)
-extern "C" void llint_entry(void*, void*);
+extern "C" void llint_entry(void*, void*, void*);
#endif
void initialize()
@@ -61,11 +62,12 @@
CLoop::initialize();
#else // !ENABLE(C_LOOP)
- llint_entry(&g_opcodeMap, &g_opcodeMapWide);
+ llint_entry(&g_opcodeMap, &g_opcodeMapWide16, &g_opcodeMapWide32);
for (int i = 0; i < numOpcodeIDs; ++i) {
g_opcodeMap[i] = tagCodePtr(g_opcodeMap[i], BytecodePtrTag);
- g_opcodeMapWide[i] = tagCodePtr(g_opcodeMapWide[i], BytecodePtrTag);
+ g_opcodeMapWide16[i] = tagCodePtr(g_opcodeMapWide16[i], BytecodePtrTag);
+ g_opcodeMapWide32[i] = tagCodePtr(g_opcodeMapWide32[i], BytecodePtrTag);
}
ASSERT(llint_throw_from_slow_path_trampoline < UINT8_MAX);
diff --git a/Source/JavaScriptCore/llint/LLIntData.h b/Source/JavaScriptCore/llint/LLIntData.h
index b248abc..de39056 100644
--- a/Source/JavaScriptCore/llint/LLIntData.h
+++ b/Source/JavaScriptCore/llint/LLIntData.h
@@ -43,7 +43,8 @@
namespace LLInt {
extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMap[numOpcodeIDs];
-extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMapWide[numOpcodeIDs];
+extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMapWide16[numOpcodeIDs];
+extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMapWide32[numOpcodeIDs];
class Data {
@@ -57,11 +58,14 @@
friend Instruction* exceptionInstructions();
friend Opcode* opcodeMap();
- friend Opcode* opcodeMapWide();
+ friend Opcode* opcodeMapWide16();
+ friend Opcode* opcodeMapWide32();
friend Opcode getOpcode(OpcodeID);
- friend Opcode getOpcodeWide(OpcodeID);
+ friend Opcode getOpcodeWide16(OpcodeID);
+ friend Opcode getOpcodeWide32(OpcodeID);
template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getCodePtr(OpcodeID);
- template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getWideCodePtr(OpcodeID);
+ template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getWide16CodePtr(OpcodeID);
+ template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getWide32CodePtr(OpcodeID);
template<PtrTag tag> friend MacroAssemblerCodeRef<tag> getCodeRef(OpcodeID);
};
@@ -77,9 +81,14 @@
return g_opcodeMap;
}
-inline Opcode* opcodeMapWide()
+inline Opcode* opcodeMapWide16()
{
- return g_opcodeMapWide;
+ return g_opcodeMapWide16;
+}
+
+inline Opcode* opcodeMapWide32()
+{
+ return g_opcodeMapWide32;
}
inline Opcode getOpcode(OpcodeID id)
@@ -91,10 +100,20 @@
#endif
}
-inline Opcode getOpcodeWide(OpcodeID id)
+inline Opcode getOpcodeWide16(OpcodeID id)
{
#if ENABLE(COMPUTED_GOTO_OPCODES)
- return g_opcodeMapWide[id];
+ return g_opcodeMapWide16[id];
+#else
+ UNUSED_PARAM(id);
+ RELEASE_ASSERT_NOT_REACHED();
+#endif
+}
+
+inline Opcode getOpcodeWide32(OpcodeID id)
+{
+#if ENABLE(COMPUTED_GOTO_OPCODES)
+ return g_opcodeMapWide32[id];
#else
UNUSED_PARAM(id);
RELEASE_ASSERT_NOT_REACHED();
@@ -110,9 +129,17 @@
}
template<PtrTag tag>
-ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWideCodePtr(OpcodeID opcodeID)
+ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWide16CodePtr(OpcodeID opcodeID)
{
- void* address = reinterpret_cast<void*>(getOpcodeWide(opcodeID));
+ void* address = reinterpret_cast<void*>(getOpcodeWide16(opcodeID));
+ address = retagCodePtr<BytecodePtrTag, tag>(address);
+ return MacroAssemblerCodePtr<tag>::createFromExecutableAddress(address);
+}
+
+template<PtrTag tag>
+ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWide32CodePtr(OpcodeID opcodeID)
+{
+ void* address = reinterpret_cast<void*>(getOpcodeWide32(opcodeID));
address = retagCodePtr<BytecodePtrTag, tag>(address);
return MacroAssemblerCodePtr<tag>::createFromExecutableAddress(address);
}
@@ -141,9 +168,14 @@
return reinterpret_cast<void*>(getOpcode(id));
}
-ALWAYS_INLINE void* getWideCodePtr(OpcodeID id)
+ALWAYS_INLINE void* getWide16CodePtr(OpcodeID id)
{
- return reinterpret_cast<void*>(getOpcodeWide(id));
+ return reinterpret_cast<void*>(getOpcodeWide16(id));
+}
+
+ALWAYS_INLINE void* getWide32CodePtr(OpcodeID id)
+{
+ return reinterpret_cast<void*>(getOpcodeWide32(id));
}
#endif
diff --git a/Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h b/Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h
index de5a145..8104e97 100644
--- a/Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h
+++ b/Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h
@@ -30,7 +30,13 @@
#include <wtf/Gigacage.h>
#if ENABLE(C_LOOP)
+#if !OS(WINDOWS)
#define OFFLINE_ASM_C_LOOP 1
+#define OFFLINE_ASM_C_LOOP_WIN 0
+#else
+#define OFFLINE_ASM_C_LOOP 0
+#define OFFLINE_ASM_C_LOOP_WIN 1
+#endif
#define OFFLINE_ASM_X86 0
#define OFFLINE_ASM_X86_WIN 0
#define OFFLINE_ASM_ARMv7 0
@@ -45,6 +51,7 @@
#else // ENABLE(C_LOOP)
#define OFFLINE_ASM_C_LOOP 0
+#define OFFLINE_ASM_C_LOOP_WIN 0
#if CPU(X86) && !COMPILER(MSVC)
#define OFFLINE_ASM_X86 1
diff --git a/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp b/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
index 362ab2d..b3be9d8 100644
--- a/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
+++ b/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
@@ -1722,9 +1722,14 @@
return commonCallEval(exec, pc, LLInt::getCodePtr<JSEntryPtrTag>(llint_generic_return_point));
}
-LLINT_SLOW_PATH_DECL(slow_path_call_eval_wide)
+LLINT_SLOW_PATH_DECL(slow_path_call_eval_wide16)
{
- return commonCallEval(exec, pc, LLInt::getWideCodePtr<JSEntryPtrTag>(llint_generic_return_point));
+ return commonCallEval(exec, pc, LLInt::getWide16CodePtr<JSEntryPtrTag>(llint_generic_return_point));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_call_eval_wide32)
+{
+ return commonCallEval(exec, pc, LLInt::getWide32CodePtr<JSEntryPtrTag>(llint_generic_return_point));
}
LLINT_SLOW_PATH_DECL(slow_path_strcat)
diff --git a/Source/JavaScriptCore/llint/LLIntSlowPaths.h b/Source/JavaScriptCore/llint/LLIntSlowPaths.h
index dc357a1..c24c2d8 100644
--- a/Source/JavaScriptCore/llint/LLIntSlowPaths.h
+++ b/Source/JavaScriptCore/llint/LLIntSlowPaths.h
@@ -117,7 +117,8 @@
LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_tail_call_forward_arguments);
LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_construct_varargs);
LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval);
-LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval_wide);
+LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval_wide16);
+LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval_wide32);
LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_tear_off_arguments);
LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_strcat);
LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_to_primitive);
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
index 60d4170..75023d5 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
@@ -1,4 +1,4 @@
-# Copyright (C) 2011-2019 Apple Inc. All rights reserved.
+# Copyright (C) 2011-2019 Apple Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
@@ -218,7 +218,7 @@
if X86_64 or X86_64_WIN or ARM64 or ARM64E
const CalleeSaveSpaceAsVirtualRegisters = 4
-elsif C_LOOP
+elsif C_LOOP or C_LOOP_WIN
const CalleeSaveSpaceAsVirtualRegisters = 1
elsif ARMv7
const CalleeSaveSpaceAsVirtualRegisters = 1
@@ -277,7 +277,7 @@
const PB = csr4
const tagTypeNumber = csr5
const tagMask = csr6
- elsif C_LOOP
+ elsif C_LOOP or C_LOOP_WIN
const PB = csr0
const tagTypeNumber = csr1
const tagMask = csr2
@@ -286,7 +286,7 @@
else
const PC = t4 # When changing this, make sure LLIntPC is up to date in LLIntPCRanges.h
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
const metadataTable = csr3
elsif ARMv7
const metadataTable = csr0
@@ -311,31 +311,39 @@
dispatch(constexpr %opcodeName%_length)
end
- macro dispatchWide()
+ macro dispatchWide16()
+ dispatch(constexpr %opcodeName%_length * 2 + 1)
+ end
+
+ macro dispatchWide32()
dispatch(constexpr %opcodeName%_length * 4 + 1)
end
- size(dispatchNarrow, dispatchWide, macro (dispatch) dispatch() end)
+ size(dispatchNarrow, dispatchWide16, dispatchWide32, macro (dispatch) dispatch() end)
end
macro getu(size, opcodeStruct, fieldName, dst)
- size(getuOperandNarrow, getuOperandWide, macro (getu)
+ size(getuOperandNarrow, getuOperandWide16, getuOperandWide32, macro (getu)
getu(opcodeStruct, fieldName, dst)
end)
end
macro get(size, opcodeStruct, fieldName, dst)
- size(getOperandNarrow, getOperandWide, macro (get)
+ size(getOperandNarrow, getOperandWide16, getOperandWide32, macro (get)
get(opcodeStruct, fieldName, dst)
end)
end
-macro narrow(narrowFn, wideFn, k)
+macro narrow(narrowFn, wide16Fn, wide32Fn, k)
k(narrowFn)
end
-macro wide(narrowFn, wideFn, k)
- k(wideFn)
+macro wide16(narrowFn, wide16Fn, wide32Fn, k)
+ k(wide16Fn)
+end
+
+macro wide32(narrowFn, wide16Fn, wide32Fn, k)
+ k(wide32Fn)
end
macro metadata(size, opcode, dst, scratch)
@@ -362,9 +370,19 @@
prologue()
fn(narrow)
-_%label%_wide:
+# FIXME: We cannot enable wide16 bytecode in the Windows CLoop. With MSVC, as CLoop::execute grows in code
+# size, its required stack height grows too. This makes CLoop::execute take a 160KB stack per call, which
+# easily causes stack overflow errors. For now, we disable the wide16 optimization for the Windows CLoop.
+# https://bugs.webkit.org/show_bug.cgi?id=198283
+if not C_LOOP_WIN
+_%label%_wide16:
prologue()
- fn(wide)
+ fn(wide16)
+end
+
+_%label%_wide32:
+ prologue()
+ fn(wide32)
end
macro op(l, fn)
@@ -475,8 +493,9 @@
const ImplementsDefaultHasInstance = constexpr ImplementsDefaultHasInstance
# Bytecode operand constants.
-const FirstConstantRegisterIndexNarrow = 16
-const FirstConstantRegisterIndexWide = constexpr FirstConstantRegisterIndex
+const FirstConstantRegisterIndexNarrow = constexpr FirstConstantRegisterIndex8
+const FirstConstantRegisterIndexWide16 = constexpr FirstConstantRegisterIndex16
+const FirstConstantRegisterIndexWide32 = constexpr FirstConstantRegisterIndex
# Code type constants.
const GlobalCode = constexpr GlobalCode
@@ -522,7 +541,7 @@
# Some common utilities.
macro crash()
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
cloopCrash
else
call _llint_crash
@@ -605,9 +624,9 @@
macro checkStackPointerAlignment(tempReg, location)
if ASSERT_ENABLED
- if ARM64 or ARM64E or C_LOOP
+ if ARM64 or ARM64E or C_LOOP or C_LOOP_WIN
# ARM64 and ARM64E will check for us!
- # C_LOOP does not need the alignment, and can use a little perf
+ # C_LOOP or C_LOOP_WIN does not need the alignment, and can use a little perf
# improvement from avoiding useless work.
else
if ARMv7
@@ -625,7 +644,7 @@
end
end
-if C_LOOP or ARM64 or ARM64E or X86_64 or X86_64_WIN
+if C_LOOP or C_LOOP_WIN or ARM64 or ARM64E or X86_64 or X86_64_WIN
const CalleeSaveRegisterCount = 0
elsif ARMv7
const CalleeSaveRegisterCount = 7
@@ -642,7 +661,7 @@
const VMEntryTotalFrameSize = (CalleeRegisterSaveSize + sizeof VMEntryRecord + StackAlignment - 1) & ~StackAlignmentMask
macro pushCalleeSaves()
- if C_LOOP or ARM64 or ARM64E or X86_64 or X86_64_WIN
+ if C_LOOP or C_LOOP_WIN or ARM64 or ARM64E or X86_64 or X86_64_WIN
elsif ARMv7
emit "push {r4-r6, r8-r11}"
elsif MIPS
@@ -663,7 +682,7 @@
end
macro popCalleeSaves()
- if C_LOOP or ARM64 or ARM64E or X86_64 or X86_64_WIN
+ if C_LOOP or C_LOOP_WIN or ARM64 or ARM64E or X86_64 or X86_64_WIN
elsif ARMv7
emit "pop {r4-r6, r8-r11}"
elsif MIPS
@@ -682,7 +701,7 @@
end
macro preserveCallerPCAndCFR()
- if C_LOOP or ARMv7 or MIPS
+ if C_LOOP or C_LOOP_WIN or ARMv7 or MIPS
push lr
push cfr
elsif X86 or X86_WIN or X86_64 or X86_64_WIN
@@ -697,7 +716,7 @@
macro restoreCallerPCAndCFR()
move cfr, sp
- if C_LOOP or ARMv7 or MIPS
+ if C_LOOP or C_LOOP_WIN or ARMv7 or MIPS
pop cfr
pop lr
elsif X86 or X86_WIN or X86_64 or X86_64_WIN
@@ -709,7 +728,7 @@
macro preserveCalleeSavesUsedByLLInt()
subp CalleeSaveSpaceStackAligned, sp
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
storep metadataTable, -PtrSize[cfr]
elsif ARMv7 or MIPS
storep metadataTable, -4[cfr]
@@ -732,7 +751,7 @@
end
macro restoreCalleeSavesUsedByLLInt()
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
loadp -PtrSize[cfr], metadataTable
elsif ARMv7 or MIPS
loadp -4[cfr], metadataTable
@@ -843,8 +862,8 @@
end
macro preserveReturnAddressAfterCall(destinationRegister)
- if C_LOOP or ARMv7 or ARM64 or ARM64E or MIPS
- # In C_LOOP case, we're only preserving the bytecode vPC.
+ if C_LOOP or C_LOOP_WIN or ARMv7 or ARM64 or ARM64E or MIPS
+ # In C_LOOP or C_LOOP_WIN case, we're only preserving the bytecode vPC.
move lr, destinationRegister
elsif X86 or X86_WIN or X86_64 or X86_64_WIN
pop destinationRegister
@@ -859,7 +878,7 @@
push cfr
elsif ARM64 or ARM64E
push cfr, lr
- elsif C_LOOP or ARMv7 or MIPS
+ elsif C_LOOP or C_LOOP_WIN or ARMv7 or MIPS
push lr
push cfr
end
@@ -871,7 +890,7 @@
pop cfr
elsif ARM64 or ARM64E
pop lr, cfr
- elsif C_LOOP or ARMv7 or MIPS
+ elsif C_LOOP or C_LOOP_WIN or ARMv7 or MIPS
pop cfr
pop lr
end
@@ -905,7 +924,7 @@
end
macro callTargetFunction(size, opcodeStruct, dispatch, callee, callPtrTag)
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
cloopCallJSFunction callee
else
call callee, callPtrTag
@@ -943,7 +962,7 @@
addi StackAlignment - 1 + CallFrameHeaderSize, temp2
andi ~StackAlignmentMask, temp2
- if ARMv7 or ARM64 or ARM64E or C_LOOP or MIPS
+ if ARMv7 or ARM64 or ARM64E or C_LOOP or C_LOOP_WIN or MIPS
addp CallerFrameAndPCSize, sp
subi CallerFrameAndPCSize, temp2
loadp CallerFrameAndPC::returnPC[cfr], lr
@@ -1027,7 +1046,7 @@
end
macro assertNotConstant(size, index)
- size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex)
+ size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex)
assert(macro (ok) bilt index, FirstConstantRegisterIndex, ok end)
end)
end
@@ -1079,7 +1098,7 @@
addp maxFrameExtentForSlowPathCall, sp
end
codeBlockGetter(t1)
- if not C_LOOP
+ if not (C_LOOP or C_LOOP_WIN)
baddis 5, CodeBlock::m_llintExecuteCounter + BaselineExecutionCounter::m_counter[t1], .continue
if JSVALUE64
move cfr, a0
@@ -1129,7 +1148,7 @@
subp cfr, t0, t0
bpa t0, cfr, .needStackCheck
loadp CodeBlock::m_vm[t1], t2
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
bpbeq VM::m_cloopStackLimit[t2], t0, .stackHeightOK
else
bpbeq VM::m_softStackLimit[t2], t0, .stackHeightOK
@@ -1232,7 +1251,7 @@
# EncodedJSValue vmEntryToJavaScript(void* code, VM* vm, ProtoCallFrame* protoFrame)
# EncodedJSValue vmEntryToNativeFunction(void* code, VM* vm, ProtoCallFrame* protoFrame)
-if C_LOOP
+if C_LOOP or C_LOOP_WIN
_llint_vm_entry_to_javascript:
else
global _vmEntryToJavaScript
@@ -1241,7 +1260,7 @@
doVMEntry(makeJavaScriptCall)
-if C_LOOP
+if C_LOOP or C_LOOP_WIN
_llint_vm_entry_to_native:
else
global _vmEntryToNative
@@ -1250,7 +1269,7 @@
doVMEntry(makeHostFunctionCall)
-if not C_LOOP
+if not (C_LOOP or C_LOOP_WIN)
# void sanitizeStackForVMImpl(VM* vm)
global _sanitizeStackForVMImpl
_sanitizeStackForVMImpl:
@@ -1290,7 +1309,7 @@
ret
end
-if C_LOOP
+if C_LOOP or C_LOOP_WIN
# Dummy entry point the C Loop uses to initialize.
_llint_entry:
crash()
@@ -1312,41 +1331,45 @@
end
end
-# The PC base is in t2, as this is what _llint_entry leaves behind through
-# initPCRelative(t2)
+# The PC base is in t3, as this is what _llint_entry leaves behind through
+# initPCRelative(t3)
macro setEntryAddress(index, label)
setEntryAddressCommon(index, label, a0)
end
-macro setEntryAddressWide(index, label)
+macro setEntryAddressWide16(index, label)
setEntryAddressCommon(index, label, a1)
end
+macro setEntryAddressWide32(index, label)
+ setEntryAddressCommon(index, label, a2)
+end
+
macro setEntryAddressCommon(index, label, map)
if X86_64 or X86_64_WIN
- leap (label - _relativePCBase)[t2], t3
- move index, t4
- storep t3, [map, t4, 8]
+ leap (label - _relativePCBase)[t3], t4
+ move index, t5
+ storep t4, [map, t5, 8]
elsif X86 or X86_WIN
- leap (label - _relativePCBase)[t2], t3
- move index, t4
- storep t3, [map, t4, 4]
+ leap (label - _relativePCBase)[t3], t4
+ move index, t5
+ storep t4, [map, t5, 4]
elsif ARM64 or ARM64E
- pcrtoaddr label, t2
+ pcrtoaddr label, t3
move index, t4
- storep t2, [map, t4, PtrSize]
+ storep t3, [map, t4, PtrSize]
elsif ARMv7
mvlbl (label - _relativePCBase), t4
- addp t4, t2, t4
- move index, t3
- storep t4, [map, t3, 4]
+ addp t4, t3, t4
+ move index, t5
+ storep t4, [map, t5, 4]
elsif MIPS
la label, t4
la _relativePCBase, t3
subp t3, t4
- addp t4, t2, t4
- move index, t3
- storep t4, [map, t3, 4]
+ addp t4, t3, t4
+ move index, t5
+ storep t4, [map, t5, 4]
end
end
@@ -1358,9 +1381,10 @@
if X86 or X86_WIN
loadp 20[sp], a0
loadp 24[sp], a1
+ loadp 28[sp], a2
end
- initPCRelative(t2)
+ initPCRelative(t3)
# Include generated bytecode initialization file.
include InitBytecodes
@@ -1370,14 +1394,23 @@
ret
end
-_llint_op_wide:
- nextInstructionWide()
+_llint_op_wide16:
+ nextInstructionWide16()
-_llint_op_wide_wide:
+_llint_op_wide32:
+ nextInstructionWide32()
+
+macro noWide(label)
+_llint_%label%_wide16:
crash()
-_llint_op_enter_wide:
+_llint_%label%_wide32:
crash()
+end
+
+noWide(op_wide16)
+noWide(op_wide32)
+noWide(op_enter)
op(llint_program_prologue, macro ()
prologue(notFunctionCodeBlockGetter, notFunctionCodeBlockSetter, _llint_entry_osr, _llint_trace_prologue)
@@ -1778,23 +1811,29 @@
_llint_slow_path_call_eval,
prepareForRegularCall)
-_llint_op_call_eval_wide:
+_llint_op_call_eval_wide16:
slowPathForCall(
- wide,
+ wide16,
OpCallEval,
- macro () dispatchOp(wide, op_call_eval) end,
- _llint_slow_path_call_eval_wide,
+ macro () dispatchOp(wide16, op_call_eval) end,
+ _llint_slow_path_call_eval_wide16,
prepareForRegularCall)
-_llint_generic_return_point:
- dispatchAfterCall(narrow, OpCallEval, macro ()
- dispatchOp(narrow, op_call_eval)
- end)
+_llint_op_call_eval_wide32:
+ slowPathForCall(
+ wide32,
+ OpCallEval,
+ macro () dispatchOp(wide32, op_call_eval) end,
+ _llint_slow_path_call_eval_wide32,
+ prepareForRegularCall)
-_llint_generic_return_point_wide:
- dispatchAfterCall(wide, OpCallEval, macro()
- dispatchOp(wide, op_call_eval)
+
+commonOp(llint_generic_return_point, macro () end, macro (size)
+ dispatchAfterCall(size, OpCallEval, macro ()
+ dispatchOp(size, op_call_eval)
end)
+end)
+
llintOp(op_identity_with_profile, OpIdentityWithProfile, macro (unused, unused, dispatch)
dispatch()
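
To summarize the encoding the dispatch and operand macros above assume: a narrow instruction is `length` one-byte slots (opcode plus operands); a wide instruction is a one-byte op_wide16 / op_wide32 prefix followed by the opcode and operands, each two or four bytes, which is where the `* 2 + 1` and `* 4 + 1` factors come from. A small C++ sketch of that arithmetic (illustrative names, not patch code):

    #include <cstddef>

    enum class Width { Narrow, Wide16, Wide32 };

    constexpr size_t operandBytes(Width w)
    {
        return w == Width::Narrow ? 1 : w == Width::Wide16 ? 2 : 4;
    }

    constexpr size_t prefixBytes(Width w)
    {
        return w == Width::Narrow ? 0 : 1; // the op_wide16 / op_wide32 prefix byte
    }

    // Byte offset of operand `fieldIndex` (0 = opcode) from the start of the instruction.
    constexpr size_t operandOffset(Width w, size_t fieldIndex)
    {
        return prefixBytes(w) + fieldIndex * operandBytes(w);
    }

    // `length` counts the opcode plus its operands, as the generated *_length constants do.
    constexpr size_t instructionSize(Width w, size_t length)
    {
        return prefixBytes(w) + length * operandBytes(w);
    }

    static_assert(operandOffset(Width::Wide16, 1) == 1 * 2 + 1, "matches getOperandWide16");
    static_assert(instructionSize(Width::Wide32, 5) == 5 * 4 + 1, "matches dispatchWide32");
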
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp b/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp
index 6c4cee7..b061ff4 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp
@@ -249,12 +249,14 @@
// are at play.
if (UNLIKELY(isInitializationPass)) {
Opcode* opcodeMap = LLInt::opcodeMap();
- Opcode* opcodeMapWide = LLInt::opcodeMapWide();
+ Opcode* opcodeMapWide16 = LLInt::opcodeMapWide16();
+ Opcode* opcodeMapWide32 = LLInt::opcodeMapWide32();
#if ENABLE(COMPUTED_GOTO_OPCODES)
#define OPCODE_ENTRY(__opcode, length) \
opcodeMap[__opcode] = bitwise_cast<void*>(&&__opcode); \
- opcodeMapWide[__opcode] = bitwise_cast<void*>(&&__opcode##_wide);
+ opcodeMapWide16[__opcode] = bitwise_cast<void*>(&&__opcode##_wide16); \
+ opcodeMapWide32[__opcode] = bitwise_cast<void*>(&&__opcode##_wide32);
#define LLINT_OPCODE_ENTRY(__opcode, length) \
opcodeMap[__opcode] = bitwise_cast<void*>(&&__opcode);
@@ -263,7 +265,8 @@
// narrow opcodes don't need any mapping and wide opcodes just need to add numOpcodeIDs
#define OPCODE_ENTRY(__opcode, length) \
opcodeMap[__opcode] = __opcode; \
- opcodeMapWide[__opcode] = static_cast<OpcodeID>(__opcode##_wide);
+ opcodeMapWide16[__opcode] = static_cast<OpcodeID>(__opcode##_wide16); \
+ opcodeMapWide32[__opcode] = static_cast<OpcodeID>(__opcode##_wide32);
#define LLINT_OPCODE_ENTRY(__opcode, length) \
opcodeMap[__opcode] = __opcode;
@@ -285,7 +288,7 @@
}
// Define the pseudo registers used by the LLINT C Loop backend:
- ASSERT(sizeof(CLoopRegister) == sizeof(intptr_t));
+ static_assert(sizeof(CLoopRegister) == sizeof(intptr_t));
// The CLoop llint backend is initially based on the ARMv7 backend, and
// then further enhanced with a few instructions from the x86 backend to
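
With computed gotos, the OPCODE_ENTRY change above records three label addresses per opcode, one per width. A toy sketch of that dispatch pattern outside of JSC (GCC/Clang labels-as-values; nothing here is JSC code):

    #include <cstdio>

    int main()
    {
        enum { op_nop, op_halt, numOpcodeIDs };
        void* opcodeMap[numOpcodeIDs]       = { &&op_nop,        &&op_halt };
        void* opcodeMapWide16[numOpcodeIDs] = { &&op_nop_wide16, &&op_halt_wide16 };
        void* opcodeMapWide32[numOpcodeIDs] = { &&op_nop_wide32, &&op_halt_wide32 };
        (void)opcodeMap;
        (void)opcodeMapWide32;

        goto *opcodeMapWide16[op_halt]; // dispatch the wide16 body of op_halt

    op_nop:         std::puts("op_nop");         return 0;
    op_nop_wide16:  std::puts("op_nop_wide16");  return 0;
    op_nop_wide32:  std::puts("op_nop_wide32");  return 0;
    op_halt:        std::puts("op_halt");        return 0;
    op_halt_wide16: std::puts("op_halt_wide16"); return 0;
    op_halt_wide32: std::puts("op_halt_wide32"); return 0;
    }
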
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
index c2e60ab..c350c01 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
@@ -29,9 +29,15 @@
jmp [t1, t0, 4], BytecodePtrTag
end
-macro nextInstructionWide()
+macro nextInstructionWide16()
+ loadh 1[PC], t0
+ leap _g_opcodeMapWide16, t1
+ jmp [t1, t0, 4], BytecodePtrTag
+end
+
+macro nextInstructionWide32()
loadi 1[PC], t0
- leap _g_opcodeMapWide, t1
+ leap _g_opcodeMapWide32, t1
jmp [t1, t0, 4], BytecodePtrTag
end
@@ -40,14 +46,22 @@
end
macro getOperandNarrow(opcodeStruct, fieldName, dst)
- loadbsp constexpr %opcodeStruct%_%fieldName%_index[PC], dst
+ loadbsi constexpr %opcodeStruct%_%fieldName%_index[PC], dst
end
-macro getuOperandWide(opcodeStruct, fieldName, dst)
+macro getuOperandWide16(opcodeStruct, fieldName, dst)
+ loadh constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PC], dst
+end
+
+macro getOperandWide16(opcodeStruct, fieldName, dst)
+ loadhsi constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PC], dst
+end
+
+macro getuOperandWide32(opcodeStruct, fieldName, dst)
loadi constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PC], dst
end
-macro getOperandWide(opcodeStruct, fieldName, dst)
+macro getOperandWide32(opcodeStruct, fieldName, dst)
loadis constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PC], dst
end
@@ -96,7 +110,7 @@
push a0
call function
addp 16, sp
- elsif C_LOOP
+ elsif C_LOOP or C_LOOP_WIN
cloopCallSlowPath function, a0, a1
else
error
@@ -104,7 +118,7 @@
end
macro cCall2Void(function)
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
cloopCallSlowPathVoid function, a0, a1
else
cCall2(function)
@@ -121,7 +135,7 @@
push a0
call function
addp 16, sp
- elsif C_LOOP
+ elsif C_LOOP or C_LOOP_WIN
error
else
error
@@ -190,7 +204,7 @@
# Ensure that we have enough additional stack capacity for the incoming args,
# and the frame for the JS code we're executing. We need to do this check
# before we start copying the args from the protoCallFrame below.
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
bpaeq t3, VM::m_cloopStackLimit[vm], .stackHeightOK
move entry, t4
move vm, t5
@@ -308,7 +322,7 @@
macro makeJavaScriptCall(entry, temp, unused)
addp CallerFrameAndPCSize, sp
checkStackPointerAlignment(temp, 0xbad0dc02)
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
cloopCallJSFunction entry
else
call entry
@@ -320,7 +334,7 @@
macro makeHostFunctionCall(entry, temp1, temp2)
move entry, temp1
storep cfr, [sp]
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
move sp, a0
storep lr, PtrSize[sp]
cloopCallNative temp1
@@ -447,7 +461,7 @@
# Index, tag, and payload must be different registers. Index is not
# changed.
macro loadConstantOrVariable(size, index, tag, payload)
- size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex)
+ size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex)
bigteq index, FirstConstantRegisterIndex, .constant
loadi TagOffset[cfr, index, 8], tag
loadi PayloadOffset[cfr, index, 8], payload
@@ -463,7 +477,7 @@
end
macro loadConstantOrVariableTag(size, index, tag)
- size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex)
+ size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex)
bigteq index, FirstConstantRegisterIndex, .constant
loadi TagOffset[cfr, index, 8], tag
jmp .done
@@ -478,7 +492,7 @@
# Index and payload may be the same register. Index may be clobbered.
macro loadConstantOrVariable2Reg(size, index, tag, payload)
- size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex)
+ size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex)
bigteq index, FirstConstantRegisterIndex, .constant
loadi TagOffset[cfr, index, 8], tag
loadi PayloadOffset[cfr, index, 8], payload
@@ -496,7 +510,7 @@
end
macro loadConstantOrVariablePayloadTagCustom(size, index, tagCheck, payload)
- size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex)
+ size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex)
bigteq index, FirstConstantRegisterIndex, .constant
tagCheck(TagOffset[cfr, index, 8])
loadi PayloadOffset[cfr, index, 8], payload
@@ -1982,7 +1996,7 @@
andp MarkedBlockMask, t3
loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
addp 8, sp
- elsif ARMv7 or C_LOOP or MIPS
+ elsif ARMv7 or C_LOOP or C_LOOP_WIN or MIPS
if MIPS
# calling convention says to save stack space for 4 first registers in
# all cases. To match our 16-byte alignment, that means we need to
@@ -1999,7 +2013,7 @@
loadi Callee + PayloadOffset[cfr], t1
loadp JSFunction::m_executable[t1], t1
checkStackPointerAlignment(t3, 0xdead0001)
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
cloopCallNative executableOffsetToFunction[t1]
else
call executableOffsetToFunction[t1]
@@ -2049,7 +2063,7 @@
andp MarkedBlockMask, t3
loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
addp 8, sp
- elsif ARMv7 or C_LOOP or MIPS
+ elsif ARMv7 or C_LOOP or C_LOOP_WIN or MIPS
subp 8, sp # align stack pointer
# t1 already contains the Callee.
andp MarkedBlockMask, t1
@@ -2058,7 +2072,7 @@
move cfr, a0
loadi Callee + PayloadOffset[cfr], t1
checkStackPointerAlignment(t3, 0xdead0001)
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
cloopCallNative offsetOfFunction[t1]
else
call offsetOfFunction[t1]
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
index 8119da2..6aaf0dd 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
@@ -30,9 +30,15 @@
jmp [t1, t0, PtrSize], BytecodePtrTag
end
-macro nextInstructionWide()
+macro nextInstructionWide16()
+ loadh 1[PB, PC, 1], t0
+ leap _g_opcodeMapWide16, t1
+ jmp [t1, t0, PtrSize], BytecodePtrTag
+end
+
+macro nextInstructionWide32()
loadi 1[PB, PC, 1], t0
- leap _g_opcodeMapWide, t1
+ leap _g_opcodeMapWide32, t1
jmp [t1, t0, PtrSize], BytecodePtrTag
end
@@ -41,14 +47,22 @@
end
macro getOperandNarrow(opcodeStruct, fieldName, dst)
- loadbsp constexpr %opcodeStruct%_%fieldName%_index[PB, PC, 1], dst
+ loadbsq constexpr %opcodeStruct%_%fieldName%_index[PB, PC, 1], dst
end
-macro getuOperandWide(opcodeStruct, fieldName, dst)
+macro getuOperandWide16(opcodeStruct, fieldName, dst)
+ loadh constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PB, PC, 1], dst
+end
+
+macro getOperandWide16(opcodeStruct, fieldName, dst)
+ loadhsq constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PB, PC, 1], dst
+end
+
+macro getuOperandWide32(opcodeStruct, fieldName, dst)
loadi constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PB, PC, 1], dst
end
-macro getOperandWide(opcodeStruct, fieldName, dst)
+macro getOperandWide32(opcodeStruct, fieldName, dst)
loadis constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PB, PC, 1], dst
end
@@ -109,7 +123,7 @@
addp 48, sp
move 8[r0], r1
move [r0], r0
- elsif C_LOOP
+ elsif C_LOOP or C_LOOP_WIN
cloopCallSlowPath function, a0, a1
else
error
@@ -117,7 +131,7 @@
end
macro cCall2Void(function)
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
cloopCallSlowPathVoid function, a0, a1
elsif X86_64_WIN
# Note: we cannot use the cCall2 macro for Win64 in this case,
@@ -179,7 +193,7 @@
# Ensure that we have enough additional stack capacity for the incoming args,
# and the frame for the JS code we're executing. We need to do this check
# before we start copying the args from the protoCallFrame below.
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
bpaeq t3, VM::m_cloopStackLimit[vm], .stackHeightOK
move entry, t4
move vm, t5
@@ -285,7 +299,7 @@
macro makeJavaScriptCall(entry, temp, unused)
addp 16, sp
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
cloopCallJSFunction entry
else
call entry, JSEntryPtrTag
@@ -297,7 +311,7 @@
move entry, temp
storep cfr, [sp]
move sp, a0
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
storep lr, 8[sp]
cloopCallNative temp
elsif X86_64_WIN
@@ -409,7 +423,7 @@
end
macro uncage(basePtr, mask, ptr, scratchOrLength)
- if GIGACAGE_ENABLED and not C_LOOP
+ if GIGACAGE_ENABLED and not (C_LOOP or C_LOOP_WIN)
loadp basePtr, scratchOrLength
btpz scratchOrLength, .done
andp mask, ptr
@@ -450,19 +464,30 @@
.done:
end
- macro loadWide()
- bpgteq index, FirstConstantRegisterIndexWide, .constant
+ macro loadWide16()
+ bpgteq index, FirstConstantRegisterIndexWide16, .constant
loadq [cfr, index, 8], value
jmp .done
.constant:
loadp CodeBlock[cfr], value
loadp CodeBlock::m_constantRegisters + VectorBufferOffset[value], value
- subp FirstConstantRegisterIndexWide, index
+ loadq -(FirstConstantRegisterIndexWide16 * 8)[value, index, 8], value
+ .done:
+ end
+
+ macro loadWide32()
+ bpgteq index, FirstConstantRegisterIndexWide32, .constant
+ loadq [cfr, index, 8], value
+ jmp .done
+ .constant:
+ loadp CodeBlock[cfr], value
+ loadp CodeBlock::m_constantRegisters + VectorBufferOffset[value], value
+ subp FirstConstantRegisterIndexWide32, index
loadq [value, index, 8], value
.done:
end
- size(loadNarrow, loadWide, macro (load) load() end)
+ size(loadNarrow, loadWide16, loadWide32, macro (load) load() end)
end
macro loadConstantOrVariableInt32(size, index, value, slow)
@@ -1518,7 +1543,7 @@
bia t2, Int8ArrayType - FirstTypedArrayType, .opGetByValUint8ArrayOrUint8ClampedArray
# We have Int8ArrayType.
- loadbs [t3, t1], t0
+ loadbsi [t3, t1], t0
finishIntGetByVal(t0, t1)
.opGetByValUint8ArrayOrUint8ClampedArray:
@@ -1538,7 +1563,7 @@
bia t2, Int16ArrayType - FirstTypedArrayType, .opGetByValUint16Array
# We have Int16ArrayType.
- loadhs [t3, t1, 2], t0
+ loadhsi [t3, t1, 2], t0
finishIntGetByVal(t0, t1)
.opGetByValUint16Array:
@@ -2060,14 +2085,14 @@
andp MarkedBlockMask, t0, t1
loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
storep cfr, VM::topCallFrame[t1]
- if ARM64 or ARM64E or C_LOOP
+ if ARM64 or ARM64E or C_LOOP or C_LOOP_WIN
storep lr, ReturnPC[cfr]
end
move cfr, a0
loadp Callee[cfr], t1
loadp JSFunction::m_executable[t1], t1
checkStackPointerAlignment(t3, 0xdead0001)
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
cloopCallNative executableOffsetToFunction[t1]
else
if X86_64_WIN
@@ -2100,13 +2125,13 @@
andp MarkedBlockMask, t0, t1
loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
storep cfr, VM::topCallFrame[t1]
- if ARM64 or ARM64E or C_LOOP
+ if ARM64 or ARM64E or C_LOOP or C_LOOP_WIN
storep lr, ReturnPC[cfr]
end
move cfr, a0
loadp Callee[cfr], t1
checkStackPointerAlignment(t3, 0xdead0001)
- if C_LOOP
+ if C_LOOP or C_LOOP_WIN
cloopCallNative offsetOfFunction[t1]
else
if X86_64_WIN
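
The loadConstantOrVariable family above now takes per-width first-constant indices (FirstConstantRegisterIndexNarrow / Wide16 / Wide32); anything at or above that index names a constant-pool slot rather than a call-frame register. A hedged C++ sketch of the same decision, with simplified types that are not the real register representation:

    #include <cstdint>
    #include <vector>

    // `firstConstantIndex` stands in for FirstConstantRegisterIndexNarrow/Wide16/Wide32;
    // operand indices are signed, so locals can sit below the frame pointer.
    inline uint64_t loadConstantOrVariable(int64_t index,
                                           const uint64_t* frameRegisters,
                                           const std::vector<uint64_t>& constantPool,
                                           int64_t firstConstantIndex)
    {
        if (index < firstConstantIndex)
            return frameRegisters[index];                 // a call-frame slot
        return constantPool[index - firstConstantIndex];  // a CodeBlock constant
    }
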
diff --git a/Source/JavaScriptCore/offlineasm/arm.rb b/Source/JavaScriptCore/offlineasm/arm.rb
index 85e0b8e..9881e4d 100644
--- a/Source/JavaScriptCore/offlineasm/arm.rb
+++ b/Source/JavaScriptCore/offlineasm/arm.rb
@@ -444,13 +444,13 @@
$asm.puts "str #{armOperands(operands)}"
when "loadb"
$asm.puts "ldrb #{armFlippedOperands(operands)}"
- when "loadbs", "loadbsp"
+ when "loadbsi"
$asm.puts "ldrsb.w #{armFlippedOperands(operands)}"
when "storeb"
$asm.puts "strb #{armOperands(operands)}"
when "loadh"
$asm.puts "ldrh #{armFlippedOperands(operands)}"
- when "loadhs"
+ when "loadhsi"
$asm.puts "ldrsh.w #{armFlippedOperands(operands)}"
when "storeh"
$asm.puts "strh #{armOperands(operands)}"
diff --git a/Source/JavaScriptCore/offlineasm/arm64.rb b/Source/JavaScriptCore/offlineasm/arm64.rb
index 9c0cbdc..58bbc2a 100644
--- a/Source/JavaScriptCore/offlineasm/arm64.rb
+++ b/Source/JavaScriptCore/offlineasm/arm64.rb
@@ -278,7 +278,7 @@
| node |
if node.is_a? Instruction
case node.opcode
- when "loadi", "loadis", "loadp", "loadq", "loadb", "loadbs", "loadh", "loadhs", "leap"
+ when "loadi", "loadis", "loadp", "loadq", "loadb", "loadbsi", "loadbsq", "loadh", "loadhsi", "loadhsq", "leap"
labelRef = node.operands[0]
if labelRef.is_a? LabelReference
tmp = Tmp.new(node.codeOrigin, :gpr)
@@ -374,9 +374,9 @@
result = riscLowerMalformedAddresses(result) {
| node, address |
case node.opcode
- when "loadb", "loadbs", "loadbsp", "storeb", /^bb/, /^btb/, /^cb/, /^tb/
+ when "loadb", "loadbsi", "loadbsq", "storeb", /^bb/, /^btb/, /^cb/, /^tb/
size = 1
- when "loadh", "loadhs"
+ when "loadh", "loadhsi", "loadhsq"
size = 2
when "loadi", "loadis", "storei", "addi", "andi", "lshifti", "muli", "negi",
"noti", "ori", "rshifti", "urshifti", "subi", "xori", /^bi/, /^bti/,
@@ -709,16 +709,18 @@
emitARM64Unflipped("str", operands, :quad)
when "loadb"
emitARM64Access("ldrb", "ldurb", operands[1], operands[0], :word)
- when "loadbs"
+ when "loadbsi"
emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :word)
- when "loadbsp"
- emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :ptr)
+ when "loadbsq"
+ emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :quad)
when "storeb"
emitARM64Unflipped("strb", operands, :word)
when "loadh"
emitARM64Access("ldrh", "ldurh", operands[1], operands[0], :word)
- when "loadhs"
+ when "loadhsi"
emitARM64Access("ldrsh", "ldursh", operands[1], operands[0], :word)
+ when "loadhsq"
+ emitARM64Access("ldrsh", "ldursh", operands[1], operands[0], :quad)
when "storeh"
emitARM64Unflipped("strh", operands, :word)
when "loadd"
diff --git a/Source/JavaScriptCore/offlineasm/asm.rb b/Source/JavaScriptCore/offlineasm/asm.rb
index ecf14cb..f96defc 100644
--- a/Source/JavaScriptCore/offlineasm/asm.rb
+++ b/Source/JavaScriptCore/offlineasm/asm.rb
@@ -393,7 +393,7 @@
# There could be multiple backends we are generating for, but the C_LOOP is
# always by itself so this check to turn off $enableDebugAnnotations won't
# affect the generation for any other backend.
- if backend == "C_LOOP"
+ if backend == "C_LOOP" || backend == "C_LOOP_WIN"
$enableDebugAnnotations = false
end
diff --git a/Source/JavaScriptCore/offlineasm/backends.rb b/Source/JavaScriptCore/offlineasm/backends.rb
index c6cc5ac..f64a01e 100644
--- a/Source/JavaScriptCore/offlineasm/backends.rb
+++ b/Source/JavaScriptCore/offlineasm/backends.rb
@@ -44,7 +44,8 @@
"ARM64",
"ARM64E",
"MIPS",
- "C_LOOP"
+ "C_LOOP",
+ "C_LOOP_WIN"
]
# Keep the set of working backends separate from the set of backends that might be
@@ -62,7 +63,8 @@
"ARM64",
"ARM64E",
"MIPS",
- "C_LOOP"
+ "C_LOOP",
+ "C_LOOP_WIN"
]
BACKEND_PATTERN = Regexp.new('\\A(' + BACKENDS.join(')|(') + ')\\Z')
diff --git a/Source/JavaScriptCore/offlineasm/cloop.rb b/Source/JavaScriptCore/offlineasm/cloop.rb
index 933e809..3a51a1f 100644
--- a/Source/JavaScriptCore/offlineasm/cloop.rb
+++ b/Source/JavaScriptCore/offlineasm/cloop.rb
@@ -656,16 +656,18 @@
$asm.putc "#{operands[1].intptrMemRef} = #{operands[0].clValue(:intptr)};"
when "loadb"
$asm.putc "#{operands[1].clLValue(:intptr)} = #{operands[0].uint8MemRef};"
- when "loadbs"
- $asm.putc "#{operands[1].clLValue(:intptr)} = (uint32_t)(#{operands[0].int8MemRef});"
- when "loadbsp"
- $asm.putc "#{operands[1].clLValue(:intptr)} = #{operands[0].int8MemRef};"
+ when "loadbsi"
+ $asm.putc "#{operands[1].clLValue(:uint32)} = (uint32_t)((int32_t)#{operands[0].int8MemRef});"
+ when "loadbsq"
+ $asm.putc "#{operands[1].clLValue(:uint64)} = (int64_t)#{operands[0].int8MemRef};"
when "storeb"
$asm.putc "#{operands[1].uint8MemRef} = #{operands[0].clValue(:int8)};"
when "loadh"
$asm.putc "#{operands[1].clLValue(:intptr)} = #{operands[0].uint16MemRef};"
- when "loadhs"
- $asm.putc "#{operands[1].clLValue(:intptr)} = (uint32_t)(#{operands[0].int16MemRef});"
+ when "loadhsi"
+ $asm.putc "#{operands[1].clLValue(:uint32)} = (uint32_t)((int32_t)#{operands[0].int16MemRef});"
+ when "loadhsq"
+ $asm.putc "#{operands[1].clLValue(:uint64)} = (int64_t)#{operands[0].int16MemRef};"
when "storeh"
$asm.putc "*#{operands[1].uint16MemRef} = #{operands[0].clValue(:int16)};"
when "loadd"
@@ -1156,6 +1158,10 @@
end
end
+ def lowerC_LOOP_WIN
+ lowerC_LOOP
+ end
+
def recordMetaDataC_LOOP
$asm.codeOrigin codeOriginString if $enableCodeOriginComments
$asm.annotation annotation if $enableInstrAnnotations && (opcode != "cloopDo")
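
The renamed loads make the destination width explicit: the `i` suffix sign-extends a byte or halfword to 32 bits, the `q` suffix to 64 bits (used by the 64-bit getOperand macros so negative operands stay negative when used for addressing). Equivalent C++ semantics, as a sketch:

    #include <cstdint>

    inline int32_t loadbsi(const void* p) { return *static_cast<const int8_t*>(p); }
    inline int64_t loadbsq(const void* p) { return *static_cast<const int8_t*>(p); }
    inline int32_t loadhsi(const void* p) { return *static_cast<const int16_t*>(p); }
    inline int64_t loadhsq(const void* p) { return *static_cast<const int16_t*>(p); }
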
diff --git a/Source/JavaScriptCore/offlineasm/instructions.rb b/Source/JavaScriptCore/offlineasm/instructions.rb
index 69e4b6aa..2ad5944 100644
--- a/Source/JavaScriptCore/offlineasm/instructions.rb
+++ b/Source/JavaScriptCore/offlineasm/instructions.rb
@@ -53,10 +53,11 @@
"loadi",
"loadis",
"loadb",
- "loadbs",
- "loadbsp",
+ "loadbsi",
+ "loadbsq",
"loadh",
- "loadhs",
+ "loadhsi",
+ "loadhsq",
"storei",
"storeb",
"loadd",
diff --git a/Source/JavaScriptCore/offlineasm/mips.rb b/Source/JavaScriptCore/offlineasm/mips.rb
index 36e1fb7..e44647c 100644
--- a/Source/JavaScriptCore/offlineasm/mips.rb
+++ b/Source/JavaScriptCore/offlineasm/mips.rb
@@ -880,13 +880,13 @@
$asm.puts "sw #{mipsOperands(operands)}"
when "loadb"
$asm.puts "lbu #{mipsFlippedOperands(operands)}"
- when "loadbs", "loadbsp"
+ when "loadbsi"
$asm.puts "lb #{mipsFlippedOperands(operands)}"
when "storeb"
$asm.puts "sb #{mipsOperands(operands)}"
when "loadh"
$asm.puts "lhu #{mipsFlippedOperands(operands)}"
- when "loadhs"
+ when "loadhsi"
$asm.puts "lh #{mipsFlippedOperands(operands)}"
when "storeh"
$asm.puts "shv #{mipsOperands(operands)}"
diff --git a/Source/JavaScriptCore/offlineasm/x86.rb b/Source/JavaScriptCore/offlineasm/x86.rb
index f2deba8..1eb709b 100644
--- a/Source/JavaScriptCore/offlineasm/x86.rb
+++ b/Source/JavaScriptCore/offlineasm/x86.rb
@@ -939,17 +939,17 @@
else
$asm.puts "movzx #{x86LoadOperands(:byte, :int)}"
end
- when "loadbs"
+ when "loadbsi"
if !isIntelSyntax
$asm.puts "movsbl #{x86LoadOperands(:byte, :int)}"
else
$asm.puts "movsx #{x86LoadOperands(:byte, :int)}"
end
- when "loadbsp"
+ when "loadbsq"
if !isIntelSyntax
- $asm.puts "movsb#{x86Suffix(:ptr)} #{x86LoadOperands(:byte, :ptr)}"
+ $asm.puts "movsbq #{x86LoadOperands(:byte, :quad)}"
else
- $asm.puts "movsx #{x86LoadOperands(:byte, :ptr)}"
+ $asm.puts "movsx #{x86LoadOperands(:byte, :quad)}"
end
when "loadh"
if !isIntelSyntax
@@ -957,12 +957,18 @@
else
$asm.puts "movzx #{x86LoadOperands(:half, :int)}"
end
- when "loadhs"
+ when "loadhsi"
if !isIntelSyntax
$asm.puts "movswl #{x86LoadOperands(:half, :int)}"
else
$asm.puts "movsx #{x86LoadOperands(:half, :int)}"
end
+ when "loadhsq"
+ if !isIntelSyntax
+ $asm.puts "movswq #{x86LoadOperands(:half, :quad)}"
+ else
+ $asm.puts "movsx #{x86LoadOperands(:half, :quad)}"
+ end
when "storeb"
$asm.puts "mov#{x86Suffix(:byte)} #{x86Operands(:byte, :byte)}"
when "loadd"
diff --git a/Source/JavaScriptCore/parser/ResultType.h b/Source/JavaScriptCore/parser/ResultType.h
index c53d17c..cce0f6d 100644
--- a/Source/JavaScriptCore/parser/ResultType.h
+++ b/Source/JavaScriptCore/parser/ResultType.h
@@ -194,40 +194,32 @@
{
OperandTypes(ResultType first = ResultType::unknownType(), ResultType second = ResultType::unknownType())
{
- // We have to initialize one of the int to ensure that
- // the entire struct is initialized.
- m_u.i = 0;
- m_u.rds.first = first.m_bits;
- m_u.rds.second = second.m_bits;
+ m_first = first.m_bits;
+ m_second = second.m_bits;
}
- union {
- struct {
- ResultType::Type first;
- ResultType::Type second;
- } rds;
- int i;
- } m_u;
+ ResultType::Type m_first;
+ ResultType::Type m_second;
ResultType first() const
{
- return ResultType(m_u.rds.first);
+ return ResultType(m_first);
}
ResultType second() const
{
- return ResultType(m_u.rds.second);
+ return ResultType(m_second);
}
- int toInt()
+ uint16_t bits()
{
- return m_u.i;
+ static_assert(sizeof(OperandTypes) == sizeof(uint16_t));
+ return bitwise_cast<uint16_t>(*this);
}
- static OperandTypes fromInt(int value)
+
+ static OperandTypes fromBits(uint16_t bits)
{
- OperandTypes types;
- types.m_u.i = value;
- return types;
+ return bitwise_cast<OperandTypes>(bits);
}
void dump(PrintStream& out) const