Get rid of forward exit on UInt32ToNumber by adding an op_unsigned bytecode instruction
https://bugs.webkit.org/show_bug.cgi?id=125553
Reviewed by Oliver Hunt.
UInt32ToNumber was a super complicated node because it had to do a speculation, but it
did so only after we had already computed the urshift. It couldn't just exit back to the
beginning of the urshift because the inputs to the urshift weren't necessarily live
anymore. We couldn't jump forward to the beginning of the next instruction because the
result of the urshift was not yet unsigned-converted.
For a while we solved this by forward-exiting in UInt32ToNumber. But that's really
gross and I want to get rid of all forward exits. They cause a lot of bugs.
We could also have turned UInt32ToNumber into a backwards exit by forcing the inputs to
the urshift to remain live. I figure that this might be a bit too extreme.
So, I just created a new place that we can exit to: I split op_urshift into op_urshift
followed by op_unsigned. op_unsigned is an "unsigned cast" along the lines of what
UInt32ToNumber does. This allows me to get rid of all of the nastiness in the DFG for
forward exiting in UInt32ToNumber.
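
Editor's note: to make the new semantics concrete, here is a minimal standalone sketch
(not the actual JSC code; the struct and function names are illustrative) of the
"unsigned cast" that op_unsigned performs on the int32-typed result of op_urshift.
Values with the sign bit clear stay int32; values in the 2^31..2^32-1 range must be
reboxed as doubles, which is exactly the case that used to force the forward exit.

    #include <cstdint>

    // Hypothetical model of a boxed JS number for illustration only.
    struct BoxedNumber {
        bool isInt32;       // true => value fits in an int32 immediate
        int32_t int32Value;
        double doubleValue;
    };

    static BoxedNumber unsignedCast(int32_t urshiftResult)
    {
        if (urshiftResult >= 0)
            return { true, urshiftResult, 0.0 }; // sign bit clear: still an int32
        // Sign bit set: reinterpret as uint32 and rebox as a double (slow case).
        uint32_t u = static_cast<uint32_t>(urshiftResult);
        return { false, 0, static_cast<double>(u) };
    }

The fast paths added in the JITs and LLInt below check only the sign bit and take a
slow path for the double case, mirroring this sketch.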
This patch enables massive code carnage in the DFG and FTL, and brings us closer to
eliminating one of the DFG's most confusing concepts. On the flipside, it does make the
bytecode slightly more complex (one new instruction). This is a profitable trade. We
want the DFG and FTL to trend towards simplicity, since they are both currently too
complicated.
* bytecode/BytecodeUseDef.h:
(JSC::computeUsesForBytecodeOffset):
(JSC::computeDefsForBytecodeOffset):
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::dumpBytecode):
* bytecode/Opcode.h:
(JSC::padOpcodeName):
* bytecode/ValueRecovery.cpp:
(JSC::ValueRecovery::dumpInContext):
* bytecode/ValueRecovery.h:
(JSC::ValueRecovery::gpr):
* bytecompiler/NodesCodegen.cpp:
(JSC::BinaryOpNode::emitBytecode):
(JSC::emitReadModifyAssignment):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::toInt32):
(JSC::DFG::ByteCodeParser::parseBlock):
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGNodeType.h:
* dfg/DFGOSRExitCompiler32_64.cpp:
(JSC::DFG::OSRExitCompiler::compileExit):
* dfg/DFGOSRExitCompiler64.cpp:
(JSC::DFG::OSRExitCompiler::compileExit):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::compileMovHint):
(JSC::DFG::SpeculativeJIT::compileUInt32ToNumber):
* dfg/DFGSpeculativeJIT.h:
* dfg/DFGSpeculativeJIT32_64.cpp:
* dfg/DFGSpeculativeJIT64.cpp:
* dfg/DFGStrengthReductionPhase.cpp:
(JSC::DFG::StrengthReductionPhase::handleNode):
(JSC::DFG::StrengthReductionPhase::convertToIdentityOverChild):
(JSC::DFG::StrengthReductionPhase::convertToIdentityOverChild1):
(JSC::DFG::StrengthReductionPhase::convertToIdentityOverChild2):
* ftl/FTLFormattedValue.h:
(JSC::FTL::int32Value):
* ftl/FTLLowerDFGToLLVM.cpp:
(JSC::FTL::LowerDFGToLLVM::compileUInt32ToNumber):
* ftl/FTLValueFormat.cpp:
(JSC::FTL::reboxAccordingToFormat):
(WTF::printInternal):
* ftl/FTLValueFormat.h:
* jit/JIT.cpp:
(JSC::JIT::privateCompileMainPass):
(JSC::JIT::privateCompileSlowCases):
* jit/JIT.h:
* jit/JITArithmetic.cpp:
(JSC::JIT::emit_op_urshift):
(JSC::JIT::emitSlow_op_urshift):
(JSC::JIT::emit_op_unsigned):
(JSC::JIT::emitSlow_op_unsigned):
* jit/JITArithmetic32_64.cpp:
(JSC::JIT::emitRightShift):
(JSC::JIT::emitRightShiftSlowCase):
(JSC::JIT::emit_op_unsigned):
(JSC::JIT::emitSlow_op_unsigned):
* llint/LowLevelInterpreter32_64.asm:
* llint/LowLevelInterpreter64.asm:
* runtime/CommonSlowPaths.cpp:
(JSC::SLOW_PATH_DECL):
* runtime/CommonSlowPaths.h:
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@160587 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index aeeb62b..305586e 100644
--- a/Source/JavaScriptCore/ChangeLog
+++ b/Source/JavaScriptCore/ChangeLog
@@ -1,3 +1,96 @@
+2013-12-11 Filip Pizlo <fpizlo@apple.com>
+
+ Get rid of forward exit on UInt32ToNumber by adding an op_unsigned bytecode instruction
+ https://bugs.webkit.org/show_bug.cgi?id=125553
+
+ Reviewed by Oliver Hunt.
+
+ UInt32ToNumber was a super complicated node because it had to do a speculation, but it
+ did so only after we had already computed the urshift. It couldn't just exit back to the
+ beginning of the urshift because the inputs to the urshift weren't necessarily live
+ anymore. We couldn't jump forward to the beginning of the next instruction because the
+ result of the urshift was not yet unsigned-converted.
+
+ For a while we solved this by forward-exiting in UInt32ToNumber. But that's really
+ gross and I want to get rid of all forward exits. They cause a lot of bugs.
+
+ We could also have turned UInt32ToNumber into a backwards exit by forcing the inputs to
+ the urshift to remain live. I figure that this might be a bit too extreme.
+
+ So, I just created a new place that we can exit to: I split op_urshift into op_urshift
+ followed by op_unsigned. op_unsigned is an "unsigned cast" along the lines of what
+ UInt32ToNumber does. This allows me to get rid of all of the nastiness in the DFG for
+ forward exiting in UInt32ToNumber.
+
+ This patch enables massive code carnage in the DFG and FTL, and brings us closer to
+ eliminating one of the DFG's most confusing concepts. On the flipside, it does make the
+ bytecode slightly more complex (one new instruction). This is a profitable trade. We
+ want the DFG and FTL to trend towards simplicity, since they are both currently too
+ complicated.
+
+ * bytecode/BytecodeUseDef.h:
+ (JSC::computeUsesForBytecodeOffset):
+ (JSC::computeDefsForBytecodeOffset):
+ * bytecode/CodeBlock.cpp:
+ (JSC::CodeBlock::dumpBytecode):
+ * bytecode/Opcode.h:
+ (JSC::padOpcodeName):
+ * bytecode/ValueRecovery.cpp:
+ (JSC::ValueRecovery::dumpInContext):
+ * bytecode/ValueRecovery.h:
+ (JSC::ValueRecovery::gpr):
+ * bytecompiler/NodesCodegen.cpp:
+ (JSC::BinaryOpNode::emitBytecode):
+ (JSC::emitReadModifyAssignment):
+ * dfg/DFGByteCodeParser.cpp:
+ (JSC::DFG::ByteCodeParser::toInt32):
+ (JSC::DFG::ByteCodeParser::parseBlock):
+ * dfg/DFGClobberize.h:
+ (JSC::DFG::clobberize):
+ * dfg/DFGNodeType.h:
+ * dfg/DFGOSRExitCompiler32_64.cpp:
+ (JSC::DFG::OSRExitCompiler::compileExit):
+ * dfg/DFGOSRExitCompiler64.cpp:
+ (JSC::DFG::OSRExitCompiler::compileExit):
+ * dfg/DFGSpeculativeJIT.cpp:
+ (JSC::DFG::SpeculativeJIT::compileMovHint):
+ (JSC::DFG::SpeculativeJIT::compileUInt32ToNumber):
+ * dfg/DFGSpeculativeJIT.h:
+ * dfg/DFGSpeculativeJIT32_64.cpp:
+ * dfg/DFGSpeculativeJIT64.cpp:
+ * dfg/DFGStrengthReductionPhase.cpp:
+ (JSC::DFG::StrengthReductionPhase::handleNode):
+ (JSC::DFG::StrengthReductionPhase::convertToIdentityOverChild):
+ (JSC::DFG::StrengthReductionPhase::convertToIdentityOverChild1):
+ (JSC::DFG::StrengthReductionPhase::convertToIdentityOverChild2):
+ * ftl/FTLFormattedValue.h:
+ (JSC::FTL::int32Value):
+ * ftl/FTLLowerDFGToLLVM.cpp:
+ (JSC::FTL::LowerDFGToLLVM::compileUInt32ToNumber):
+ * ftl/FTLValueFormat.cpp:
+ (JSC::FTL::reboxAccordingToFormat):
+ (WTF::printInternal):
+ * ftl/FTLValueFormat.h:
+ * jit/JIT.cpp:
+ (JSC::JIT::privateCompileMainPass):
+ (JSC::JIT::privateCompileSlowCases):
+ * jit/JIT.h:
+ * jit/JITArithmetic.cpp:
+ (JSC::JIT::emit_op_urshift):
+ (JSC::JIT::emitSlow_op_urshift):
+ (JSC::JIT::emit_op_unsigned):
+ (JSC::JIT::emitSlow_op_unsigned):
+ * jit/JITArithmetic32_64.cpp:
+ (JSC::JIT::emitRightShift):
+ (JSC::JIT::emitRightShiftSlowCase):
+ (JSC::JIT::emit_op_unsigned):
+ (JSC::JIT::emitSlow_op_unsigned):
+ * llint/LowLevelInterpreter32_64.asm:
+ * llint/LowLevelInterpreter64.asm:
+ * runtime/CommonSlowPaths.cpp:
+ (JSC::SLOW_PATH_DECL):
+ * runtime/CommonSlowPaths.h:
+
2013-12-13 Mark Hahnenberg <mhahnenberg@apple.com>
LLInt should not conditionally branch to to labels outside of its function
diff --git a/Source/JavaScriptCore/bytecode/BytecodeUseDef.h b/Source/JavaScriptCore/bytecode/BytecodeUseDef.h
index db62835..45cb91a 100644
--- a/Source/JavaScriptCore/bytecode/BytecodeUseDef.h
+++ b/Source/JavaScriptCore/bytecode/BytecodeUseDef.h
@@ -156,7 +156,8 @@
case op_new_array_with_size:
case op_create_this:
case op_get_pnames:
- case op_del_by_id: {
+ case op_del_by_id:
+ case op_unsigned: {
functor(codeBlock, instruction, opcodeID, instruction[2].u.operand);
return;
}
@@ -390,7 +391,8 @@
case op_create_activation:
case op_create_arguments:
case op_del_by_id:
- case op_del_by_val: {
+ case op_del_by_val:
+ case op_unsigned: {
functor(codeBlock, instruction, opcodeID, instruction[1].u.operand);
return;
}
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.cpp b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
index 30de6c7..d5e7da0 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
@@ -901,6 +901,10 @@
out.printf("%s, %s, %s", registerName(r0).data(), registerName(r1).data(), registerName(r2).data());
break;
}
+ case op_unsigned: {
+ printUnaryOp(out, exec, location, it, "unsigned");
+ break;
+ }
case op_typeof: {
printUnaryOp(out, exec, location, it, "typeof");
break;
diff --git a/Source/JavaScriptCore/bytecode/Opcode.h b/Source/JavaScriptCore/bytecode/Opcode.h
index ec259e6..ca89090 100644
--- a/Source/JavaScriptCore/bytecode/Opcode.h
+++ b/Source/JavaScriptCore/bytecode/Opcode.h
@@ -82,6 +82,7 @@
macro(op_lshift, 4) \
macro(op_rshift, 4) \
macro(op_urshift, 4) \
+ macro(op_unsigned, 3) \
macro(op_bitand, 5) \
macro(op_bitxor, 5) \
macro(op_bitor, 5) \
diff --git a/Source/JavaScriptCore/bytecode/ValueRecovery.cpp b/Source/JavaScriptCore/bytecode/ValueRecovery.cpp
index c30e2ae..5032684 100644
--- a/Source/JavaScriptCore/bytecode/ValueRecovery.cpp
+++ b/Source/JavaScriptCore/bytecode/ValueRecovery.cpp
@@ -83,9 +83,6 @@
case UnboxedCellInGPR:
out.print("cell(", gpr(), ")");
return;
- case UInt32InGPR:
- out.print("uint32(", gpr(), ")");
- return;
case InFPR:
out.print(fpr());
return;
diff --git a/Source/JavaScriptCore/bytecode/ValueRecovery.h b/Source/JavaScriptCore/bytecode/ValueRecovery.h
index dc2d55a..3af2c34 100644
--- a/Source/JavaScriptCore/bytecode/ValueRecovery.h
+++ b/Source/JavaScriptCore/bytecode/ValueRecovery.h
@@ -54,7 +54,6 @@
InPair,
#endif
InFPR,
- UInt32InGPR,
// It's in the stack, but at a different location.
DisplacedInJSStack,
// It's in the stack, at a different location, and it's unboxed.
@@ -105,14 +104,6 @@
return result;
}
- static ValueRecovery uint32InGPR(MacroAssembler::RegisterID gpr)
- {
- ValueRecovery result;
- result.m_technique = UInt32InGPR;
- result.m_source.gpr = gpr;
- return result;
- }
-
#if USE(JSVALUE32_64)
static ValueRecovery inPair(MacroAssembler::RegisterID tagGPR, MacroAssembler::RegisterID payloadGPR)
{
@@ -209,7 +200,7 @@
MacroAssembler::RegisterID gpr() const
{
- ASSERT(m_technique == InGPR || m_technique == UnboxedInt32InGPR || m_technique == UnboxedBooleanInGPR || m_technique == UInt32InGPR || m_technique == UnboxedInt52InGPR || m_technique == UnboxedStrictInt52InGPR || m_technique == UnboxedCellInGPR);
+ ASSERT(m_technique == InGPR || m_technique == UnboxedInt32InGPR || m_technique == UnboxedBooleanInGPR || m_technique == UnboxedInt52InGPR || m_technique == UnboxedStrictInt52InGPR || m_technique == UnboxedCellInGPR);
return m_source.gpr;
}
diff --git a/Source/JavaScriptCore/bytecompiler/NodesCodegen.cpp b/Source/JavaScriptCore/bytecompiler/NodesCodegen.cpp
index 2e5fb45..9741174 100644
--- a/Source/JavaScriptCore/bytecompiler/NodesCodegen.cpp
+++ b/Source/JavaScriptCore/bytecompiler/NodesCodegen.cpp
@@ -1168,7 +1168,10 @@
RELEASE_ASSERT_NOT_REACHED();
return generator.emitUnaryOp(op_not, generator.finalDestination(dst, tmp.get()), tmp.get());
}
- return generator.emitBinaryOp(opcodeID, generator.finalDestination(dst, src1.get()), src1.get(), src2, OperandTypes(left->resultDescriptor(), right->resultDescriptor()));
+ RegisterID* result = generator.emitBinaryOp(opcodeID, generator.finalDestination(dst, src1.get()), src1.get(), src2, OperandTypes(left->resultDescriptor(), right->resultDescriptor()));
+ if (opcodeID == op_urshift && dst != generator.ignoredResult())
+ return generator.emitUnaryOp(op_unsigned, result, result);
+ return result;
}
RegisterID* EqualNode::emitBytecode(BytecodeGenerator& generator, RegisterID* dst)
@@ -1335,7 +1338,10 @@
// If this is required the node is passed as 'emitExpressionInfoForMe'; do so now.
if (emitExpressionInfoForMe)
generator.emitExpressionInfo(emitExpressionInfoForMe->divot(), emitExpressionInfoForMe->divotStart(), emitExpressionInfoForMe->divotEnd());
- return generator.emitBinaryOp(opcodeID, dst, src1, src2, types);
+ RegisterID* result = generator.emitBinaryOp(opcodeID, dst, src1, src2, types);
+ if (oper == OpURShift)
+ return generator.emitUnaryOp(op_unsigned, result, result);
+ return result;
}
RegisterID* ReadModifyResolveNode::emitBytecode(BytecodeGenerator& generator, RegisterID* dst)
diff --git a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
index 02490d4..79a2f49 100644
--- a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
+++ b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
@@ -513,9 +513,6 @@
if (node->hasInt32Result())
return node;
- if (node->op() == UInt32ToNumber)
- return node->child1().node();
-
// Check for numeric constants boxed as JSValues.
if (canFold(node)) {
JSValue v = valueOfJSConstant(node);
@@ -2050,55 +2047,32 @@
case op_rshift: {
Node* op1 = getToInt32(currentInstruction[2].u.operand);
Node* op2 = getToInt32(currentInstruction[3].u.operand);
- Node* result;
- // Optimize out shifts by zero.
- if (isInt32Constant(op2) && !(valueOfInt32Constant(op2) & 0x1f))
- result = op1;
- else
- result = addToGraph(BitRShift, op1, op2);
- set(VirtualRegister(currentInstruction[1].u.operand), result);
+ set(VirtualRegister(currentInstruction[1].u.operand),
+ addToGraph(BitRShift, op1, op2));
NEXT_OPCODE(op_rshift);
}
case op_lshift: {
Node* op1 = getToInt32(currentInstruction[2].u.operand);
Node* op2 = getToInt32(currentInstruction[3].u.operand);
- Node* result;
- // Optimize out shifts by zero.
- if (isInt32Constant(op2) && !(valueOfInt32Constant(op2) & 0x1f))
- result = op1;
- else
- result = addToGraph(BitLShift, op1, op2);
- set(VirtualRegister(currentInstruction[1].u.operand), result);
+ set(VirtualRegister(currentInstruction[1].u.operand),
+ addToGraph(BitLShift, op1, op2));
NEXT_OPCODE(op_lshift);
}
case op_urshift: {
Node* op1 = getToInt32(currentInstruction[2].u.operand);
Node* op2 = getToInt32(currentInstruction[3].u.operand);
- Node* result;
- // The result of a zero-extending right shift is treated as an unsigned value.
- // This means that if the top bit is set, the result is not in the int32 range,
- // and as such must be stored as a double. If the shift amount is a constant,
- // we may be able to optimize.
- if (isInt32Constant(op2)) {
- // If we know we are shifting by a non-zero amount, then since the operation
- // zero fills we know the top bit of the result must be zero, and as such the
- // result must be within the int32 range. Conversely, if this is a shift by
- // zero, then the result may be changed by the conversion to unsigned, but it
- // is not necessary to perform the shift!
- if (valueOfInt32Constant(op2) & 0x1f)
- result = addToGraph(BitURShift, op1, op2);
- else
- result = makeSafe(addToGraph(UInt32ToNumber, op1));
- } else {
- // Cannot optimize at this stage; shift & potentially rebox as a double.
- result = addToGraph(BitURShift, op1, op2);
- result = makeSafe(addToGraph(UInt32ToNumber, result));
- }
- set(VirtualRegister(currentInstruction[1].u.operand), result);
+ set(VirtualRegister(currentInstruction[1].u.operand),
+ addToGraph(BitURShift, op1, op2));
NEXT_OPCODE(op_urshift);
}
+
+ case op_unsigned: {
+ set(VirtualRegister(currentInstruction[1].u.operand),
+ makeSafe(addToGraph(UInt32ToNumber, getToInt32(currentInstruction[2].u.operand))));
+ NEXT_OPCODE(op_unsigned);
+ }
// === Increment/Decrement opcodes ===
diff --git a/Source/JavaScriptCore/dfg/DFGCapabilities.cpp b/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
index 2d3c7eb..4d677da 100644
--- a/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
+++ b/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
@@ -91,6 +91,7 @@
case op_rshift:
case op_lshift:
case op_urshift:
+ case op_unsigned:
case op_inc:
case op_dec:
case op_add:
diff --git a/Source/JavaScriptCore/dfg/DFGClobberize.h b/Source/JavaScriptCore/dfg/DFGClobberize.h
index 52bcabe..6b74daf 100644
--- a/Source/JavaScriptCore/dfg/DFGClobberize.h
+++ b/Source/JavaScriptCore/dfg/DFGClobberize.h
@@ -118,6 +118,8 @@
case Int52ToValue:
case CheckInBounds:
case ConstantStoragePointer:
+ case UInt32ToNumber:
+ case DoubleAsInt32:
return;
case MovHintAndCheck:
@@ -168,15 +170,6 @@
read(Watchpoint_fire);
return;
- // These are forward-exiting nodes that assume that the subsequent instruction
- // is a MovHint, and they try to roll forward over this MovHint in their
- // execution. This makes hoisting them impossible without additional magic. We
- // may add such magic eventually, but just not yet.
- case UInt32ToNumber:
- case DoubleAsInt32:
- write(SideState);
- return;
-
case ToThis:
case CreateThis:
read(MiscFields);
diff --git a/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp b/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
index d792064..06993df 100644
--- a/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
@@ -112,6 +112,8 @@
case UInt32ToNumber: {
fixEdge<KnownInt32Use>(node->child1());
+ if (bytecodeCanTruncateInteger(node->arithNodeFlags()))
+ node->convertToIdentity();
break;
}
diff --git a/Source/JavaScriptCore/dfg/DFGNodeType.h b/Source/JavaScriptCore/dfg/DFGNodeType.h
index 1d862f5..3534199 100644
--- a/Source/JavaScriptCore/dfg/DFGNodeType.h
+++ b/Source/JavaScriptCore/dfg/DFGNodeType.h
@@ -105,7 +105,7 @@
/* Bitwise operators call ToInt32 on their operands. */\
macro(ValueToInt32, NodeResultInt32) \
/* Used to box the result of URShift nodes (result has range 0..2^32-1). */\
- macro(UInt32ToNumber, NodeResultNumber | NodeExitsForward) \
+ macro(UInt32ToNumber, NodeResultNumber) \
\
/* Used to cast known integers to doubles, so as to separate the double form */\
/* of the value from the integer form. */\
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExitCompiler32_64.cpp b/Source/JavaScriptCore/dfg/DFGOSRExitCompiler32_64.cpp
index 8c32c6d..9f77645 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExitCompiler32_64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExitCompiler32_64.cpp
@@ -177,7 +177,6 @@
switch (recovery.technique()) {
case UnboxedInt32InGPR:
- case UInt32InGPR:
case UnboxedBooleanInGPR:
case UnboxedCellInGPR:
m_jit.store32(
@@ -317,28 +316,6 @@
AssemblyHelpers::payloadFor(operand));
break;
- case UInt32InGPR: {
- m_jit.load32(
- &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
- GPRInfo::regT0);
- AssemblyHelpers::Jump positive = m_jit.branch32(
- AssemblyHelpers::GreaterThanOrEqual,
- GPRInfo::regT0, AssemblyHelpers::TrustedImm32(0));
- m_jit.convertInt32ToDouble(GPRInfo::regT0, FPRInfo::fpRegT0);
- m_jit.addDouble(
- AssemblyHelpers::AbsoluteAddress(&AssemblyHelpers::twoToThe32),
- FPRInfo::fpRegT0);
- m_jit.storeDouble(FPRInfo::fpRegT0, AssemblyHelpers::addressFor(operand));
- AssemblyHelpers::Jump done = m_jit.jump();
- positive.link(&m_jit);
- m_jit.store32(GPRInfo::regT0, AssemblyHelpers::payloadFor(operand));
- m_jit.store32(
- AssemblyHelpers::TrustedImm32(JSValue::Int32Tag),
- AssemblyHelpers::tagFor(operand));
- done.link(&m_jit);
- break;
- }
-
case Constant:
m_jit.store32(
AssemblyHelpers::TrustedImm32(recovery.constant().tag()),
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExitCompiler64.cpp b/Source/JavaScriptCore/dfg/DFGOSRExitCompiler64.cpp
index 7d6bbf0..e9f3b76 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExitCompiler64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExitCompiler64.cpp
@@ -185,7 +185,6 @@
switch (recovery.technique()) {
case InGPR:
case UnboxedInt32InGPR:
- case UInt32InGPR:
case UnboxedInt52InGPR:
case UnboxedStrictInt52InGPR:
case UnboxedCellInGPR:
@@ -283,13 +282,6 @@
m_jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
break;
- case UInt32InGPR:
- m_jit.load64(scratch + index, GPRInfo::regT0);
- m_jit.zeroExtend32ToPtr(GPRInfo::regT0, GPRInfo::regT0);
- m_jit.boxInt52(GPRInfo::regT0, GPRInfo::regT0, GPRInfo::regT1, FPRInfo::fpRegT0);
- m_jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
- break;
-
case InFPR:
case DoubleDisplacedInJSStack:
m_jit.move(AssemblyHelpers::TrustedImmPtr(scratch + index), GPRInfo::regT0);
diff --git a/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp b/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
index 63f1d88..4fdc2dc 100644
--- a/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
@@ -195,6 +195,8 @@
}
case UInt32ToNumber: {
+ // FIXME: Support Int52.
+ // https://bugs.webkit.org/show_bug.cgi?id=125704
if (nodeCanSpeculateInt32(node->arithNodeFlags()))
changed |= mergePrediction(SpecInt32);
else
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
index 206736b..4248d45 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
@@ -1425,9 +1425,6 @@
Node* child = node->child1().node();
noticeOSRBirth(child);
- if (child->op() == UInt32ToNumber)
- noticeOSRBirth(child->child1().node());
-
m_stream->appendAndLog(VariableEvent::movHint(MinifiedID(child), node->local()));
}
@@ -2160,18 +2157,15 @@
doubleResult(outputFPR, node);
return;
}
+
+ RELEASE_ASSERT(!bytecodeCanTruncateInteger(node->arithNodeFlags()));
SpeculateInt32Operand op1(this, node->child1());
- GPRTemporary result(this); // For the benefit of OSR exit, force these to be in different registers. In reality the OSR exit compiler could find cases where you have uint32(%r1) followed by int32(%r1) and then use different registers, but that seems like too much effort.
+ GPRTemporary result(this);
m_jit.move(op1.gpr(), result.gpr());
- // Test the operand is positive. This is a very special speculation check - we actually
- // use roll-forward speculation here, where if this fails, we jump to the baseline
- // instruction that follows us, rather than the one we're executing right now. We have
- // to do this because by this point, the original values necessary to compile whatever
- // operation the UInt32ToNumber originated from might be dead.
- forwardSpeculationCheck(Overflow, JSValueRegs(), 0, m_jit.branch32(MacroAssembler::LessThan, result.gpr(), TrustedImm32(0)), ValueRecovery::uint32InGPR(result.gpr()));
+ speculationCheck(Overflow, JSValueRegs(), 0, m_jit.branch32(MacroAssembler::LessThan, result.gpr(), TrustedImm32(0)));
int32Result(result.gpr(), node, op1.format());
}
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
index ec917d2..573c582 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
@@ -698,8 +698,6 @@
void compileMovHint(Node*);
void compileMovHintAndCheck(Node*);
- void nonSpeculativeUInt32ToNumber(Node*);
-
#if USE(JSVALUE64)
void cachedGetById(CodeOrigin, GPRReg baseGPR, GPRReg resultGPR, unsigned identifierNumber, JITCompiler::Jump slowPathTarget = JITCompiler::Jump(), SpillRegistersMode = NeedToSpill);
void cachedPutById(CodeOrigin, GPRReg base, GPRReg value, Edge valueUse, GPRReg scratchGPR, unsigned identifierNumber, PutKind, JITCompiler::Jump slowPathTarget = JITCompiler::Jump());
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
index bf51fb1..f392913 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
@@ -168,33 +168,6 @@
}
}
-void SpeculativeJIT::nonSpeculativeUInt32ToNumber(Node* node)
-{
- SpeculateInt32Operand op1(this, node->child1());
- FPRTemporary boxer(this);
- GPRTemporary resultTag(this, Reuse, op1);
- GPRTemporary resultPayload(this);
-
- JITCompiler::Jump positive = m_jit.branch32(MacroAssembler::GreaterThanOrEqual, op1.gpr(), TrustedImm32(0));
-
- m_jit.convertInt32ToDouble(op1.gpr(), boxer.fpr());
- m_jit.move(JITCompiler::TrustedImmPtr(&AssemblyHelpers::twoToThe32), resultPayload.gpr()); // reuse resultPayload register here.
- m_jit.addDouble(JITCompiler::Address(resultPayload.gpr(), 0), boxer.fpr());
-
- boxDouble(boxer.fpr(), resultTag.gpr(), resultPayload.gpr());
-
- JITCompiler::Jump done = m_jit.jump();
-
- positive.link(&m_jit);
-
- m_jit.move(TrustedImm32(JSValue::Int32Tag), resultTag.gpr());
- m_jit.move(op1.gpr(), resultPayload.gpr());
-
- done.link(&m_jit);
-
- jsValueResult(resultTag.gpr(), resultPayload.gpr(), node);
-}
-
void SpeculativeJIT::cachedGetById(CodeOrigin codeOrigin, GPRReg baseTagGPROrNone, GPRReg basePayloadGPR, GPRReg resultTagGPR, GPRReg resultPayloadGPR, unsigned identifierNumber, JITCompiler::Jump slowPathTarget, SpillRegistersMode spillMode)
{
JITGetByIdGenerator gen(
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
index 7b37fb1..d8a1bab 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
@@ -186,30 +186,6 @@
}
}
-void SpeculativeJIT::nonSpeculativeUInt32ToNumber(Node* node)
-{
- SpeculateInt32Operand op1(this, node->child1());
- FPRTemporary boxer(this);
- GPRTemporary result(this, Reuse, op1);
-
- JITCompiler::Jump positive = m_jit.branch32(MacroAssembler::GreaterThanOrEqual, op1.gpr(), TrustedImm32(0));
-
- m_jit.convertInt32ToDouble(op1.gpr(), boxer.fpr());
- m_jit.addDouble(JITCompiler::AbsoluteAddress(&AssemblyHelpers::twoToThe32), boxer.fpr());
-
- boxDouble(boxer.fpr(), result.gpr());
-
- JITCompiler::Jump done = m_jit.jump();
-
- positive.link(&m_jit);
-
- m_jit.or64(GPRInfo::tagTypeNumberRegister, op1.gpr(), result.gpr());
-
- done.link(&m_jit);
-
- jsValueResult(result.gpr(), m_currentNode);
-}
-
void SpeculativeJIT::cachedGetById(CodeOrigin codeOrigin, GPRReg baseGPR, GPRReg resultGPR, unsigned identifierNumber, JITCompiler::Jump slowPathTarget, SpillRegistersMode spillMode)
{
JITGetByIdGenerator gen(
diff --git a/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp b/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
index d137a07..3aa991c 100644
--- a/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
@@ -70,14 +70,40 @@
{
switch (m_node->op()) {
case BitOr:
- // Optimize X|0 -> X.
+ if (m_node->child1()->isConstant()) {
+ JSValue op1 = m_graph.valueOfJSConstant(m_node->child1().node());
+ if (op1.isInt32() && !op1.asInt32()) {
+ convertToIdentityOverChild2();
+ break;
+ }
+ }
if (m_node->child2()->isConstant()) {
- JSValue C2 = m_graph.valueOfJSConstant(m_node->child2().node());
- if (C2.isInt32() && !C2.asInt32()) {
- m_insertionSet.insertNode(
- m_nodeIndex, SpecNone, Phantom, m_node->codeOrigin,
- m_node->child2());
- m_node->children.removeEdge(1);
+ JSValue op2 = m_graph.valueOfJSConstant(m_node->child2().node());
+ if (op2.isInt32() && !op2.asInt32()) {
+ convertToIdentityOverChild1();
+ break;
+ }
+ }
+ break;
+
+ case BitLShift:
+ case BitRShift:
+ case BitURShift:
+ if (m_node->child2()->isConstant()) {
+ JSValue op2 = m_graph.valueOfJSConstant(m_node->child2().node());
+ if (op2.isInt32() && !(op2.asInt32() & 0x1f)) {
+ convertToIdentityOverChild1();
+ break;
+ }
+ }
+ break;
+
+ case UInt32ToNumber:
+ if (m_node->child1()->op() == BitURShift
+ && m_node->child1()->child2()->isConstant()) {
+ JSValue shiftAmount = m_graph.valueOfJSConstant(
+ m_node->child1()->child2().node());
+ if (shiftAmount.isInt32() && (shiftAmount.asInt32() & 0x1f)) {
m_node->convertToIdentity();
m_changed = true;
break;
@@ -116,6 +142,25 @@
break;
}
}
+
+ void convertToIdentityOverChild(unsigned childIndex)
+ {
+ m_insertionSet.insertNode(
+ m_nodeIndex, SpecNone, Phantom, m_node->codeOrigin, m_node->children);
+ m_node->children.removeEdge(childIndex ^ 1);
+ m_node->convertToIdentity();
+ m_changed = true;
+ }
+
+ void convertToIdentityOverChild1()
+ {
+ convertToIdentityOverChild(0);
+ }
+
+ void convertToIdentityOverChild2()
+ {
+ convertToIdentityOverChild(1);
+ }
void foldTypedArrayPropertyToConstant(JSArrayBufferView* view, JSValue constant)
{
diff --git a/Source/JavaScriptCore/ftl/FTLFormattedValue.h b/Source/JavaScriptCore/ftl/FTLFormattedValue.h
index 81743ef..b7ab361 100644
--- a/Source/JavaScriptCore/ftl/FTLFormattedValue.h
+++ b/Source/JavaScriptCore/ftl/FTLFormattedValue.h
@@ -72,7 +72,6 @@
static inline FormattedValue noValue() { return FormattedValue(); }
static inline FormattedValue int32Value(LValue value) { return FormattedValue(ValueFormatInt32, value); }
-static inline FormattedValue uInt32Value(LValue value) { return FormattedValue(ValueFormatUInt32, value); }
static inline FormattedValue booleanValue(LValue value) { return FormattedValue(ValueFormatBoolean, value); }
static inline FormattedValue jsValueValue(LValue value) { return FormattedValue(ValueFormatJSValue, value); }
static inline FormattedValue doubleValue(LValue value) { return FormattedValue(ValueFormatDouble, value); }
diff --git a/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp b/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
index 9e070be..ed72204 100644
--- a/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
+++ b/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
@@ -1171,9 +1171,7 @@
return;
}
- speculateForward(
- Overflow, noValue(), 0, m_out.lessThan(value, m_out.int32Zero),
- FormattedValue(ValueFormatUInt32, value));
+ speculate(Overflow, noValue(), 0, m_out.lessThan(value, m_out.int32Zero));
setInt32(value);
}
diff --git a/Source/JavaScriptCore/ftl/FTLValueFormat.cpp b/Source/JavaScriptCore/ftl/FTLValueFormat.cpp
index e9a4a63..5a89d6a 100644
--- a/Source/JavaScriptCore/ftl/FTLValueFormat.cpp
+++ b/Source/JavaScriptCore/ftl/FTLValueFormat.cpp
@@ -42,14 +42,6 @@
break;
}
- case ValueFormatUInt32: {
- jit.zeroExtend32ToPtr(value, value);
- jit.moveDoubleTo64(FPRInfo::fpRegT0, scratch2);
- jit.boxInt52(value, value, scratch1, FPRInfo::fpRegT0);
- jit.move64ToDouble(scratch2, FPRInfo::fpRegT0);
- break;
- }
-
case ValueFormatInt52: {
jit.rshift64(AssemblyHelpers::TrustedImm32(JSValue::int52ShiftAmount), value);
jit.moveDoubleTo64(FPRInfo::fpRegT0, scratch2);
@@ -105,9 +97,6 @@
case ValueFormatInt32:
out.print("Int32");
return;
- case ValueFormatUInt32:
- out.print("UInt32");
- return;
case ValueFormatInt52:
out.print("Int52");
return;
diff --git a/Source/JavaScriptCore/ftl/FTLValueFormat.h b/Source/JavaScriptCore/ftl/FTLValueFormat.h
index 40ac775..b031f0d 100644
--- a/Source/JavaScriptCore/ftl/FTLValueFormat.h
+++ b/Source/JavaScriptCore/ftl/FTLValueFormat.h
@@ -45,7 +45,6 @@
enum ValueFormat {
InvalidValueFormat,
ValueFormatInt32,
- ValueFormatUInt32,
ValueFormatInt52,
ValueFormatStrictInt52,
ValueFormatBoolean,
diff --git a/Source/JavaScriptCore/jit/JIT.cpp b/Source/JavaScriptCore/jit/JIT.cpp
index ab68080..e01e26b 100644
--- a/Source/JavaScriptCore/jit/JIT.cpp
+++ b/Source/JavaScriptCore/jit/JIT.cpp
@@ -267,6 +267,7 @@
DEFINE_OP(op_ret)
DEFINE_OP(op_ret_object_or_this)
DEFINE_OP(op_rshift)
+ DEFINE_OP(op_unsigned)
DEFINE_OP(op_urshift)
DEFINE_OP(op_strcat)
DEFINE_OP(op_stricteq)
@@ -412,6 +413,7 @@
case op_put_by_val_direct:
DEFINE_SLOWCASE_OP(op_put_by_val)
DEFINE_SLOWCASE_OP(op_rshift)
+ DEFINE_SLOWCASE_OP(op_unsigned)
DEFINE_SLOWCASE_OP(op_urshift)
DEFINE_SLOWCASE_OP(op_stricteq)
DEFINE_SLOWCASE_OP(op_sub)
diff --git a/Source/JavaScriptCore/jit/JIT.h b/Source/JavaScriptCore/jit/JIT.h
index 7a87980..cba1d15 100644
--- a/Source/JavaScriptCore/jit/JIT.h
+++ b/Source/JavaScriptCore/jit/JIT.h
@@ -552,6 +552,7 @@
void emit_op_to_number(Instruction*);
void emit_op_to_primitive(Instruction*);
void emit_op_unexpected_load(Instruction*);
+ void emit_op_unsigned(Instruction*);
void emit_op_urshift(Instruction*);
void emitSlow_op_add(Instruction*, Vector<SlowCaseEntry>::iterator&);
@@ -603,6 +604,7 @@
void emitSlow_op_sub(Instruction*, Vector<SlowCaseEntry>::iterator&);
void emitSlow_op_to_number(Instruction*, Vector<SlowCaseEntry>::iterator&);
void emitSlow_op_to_primitive(Instruction*, Vector<SlowCaseEntry>::iterator&);
+ void emitSlow_op_unsigned(Instruction*, Vector<SlowCaseEntry>::iterator&);
void emitSlow_op_urshift(Instruction*, Vector<SlowCaseEntry>::iterator&);
void emit_op_resolve_scope(Instruction*);
diff --git a/Source/JavaScriptCore/jit/JITArithmetic.cpp b/Source/JavaScriptCore/jit/JITArithmetic.cpp
index a9d8048..438ee4f 100644
--- a/Source/JavaScriptCore/jit/JITArithmetic.cpp
+++ b/Source/JavaScriptCore/jit/JITArithmetic.cpp
@@ -306,96 +306,82 @@
void JIT::emit_op_urshift(Instruction* currentInstruction)
{
- int dst = currentInstruction[1].u.operand;
+ int result = currentInstruction[1].u.operand;
int op1 = currentInstruction[2].u.operand;
int op2 = currentInstruction[3].u.operand;
- // Slow case of urshift makes assumptions about what registers hold the
- // shift arguments, so any changes must be updated there as well.
if (isOperandConstantImmediateInt(op2)) {
+ // isOperandConstantImmediateInt(op2) => 1 SlowCase
emitGetVirtualRegister(op1, regT0);
emitJumpSlowCaseIfNotImmediateInteger(regT0);
- emitFastArithImmToInt(regT0);
- int shift = getConstantOperand(op2).asInt32();
- if (shift)
- urshift32(Imm32(shift & 0x1f), regT0);
- // unsigned shift < 0 or shift = k*2^32 may result in (essentially)
- // a toUint conversion, which can result in a value we can represent
- // as an immediate int.
- if (shift < 0 || !(shift & 31))
- addSlowCase(branch32(LessThan, regT0, TrustedImm32(0)));
- emitFastArithReTagImmediate(regT0, regT0);
- emitPutVirtualRegister(dst, regT0);
- return;
+ // Mask with 0x1f as per ecma-262 11.7.2 step 7.
+ urshift32(Imm32(getConstantOperandImmediateInt(op2) & 0x1f), regT0);
+ } else {
+ emitGetVirtualRegisters(op1, regT0, op2, regT2);
+ if (supportsFloatingPointTruncate()) {
+ Jump lhsIsInt = emitJumpIfImmediateInteger(regT0);
+ // supportsFloatingPoint() && USE(JSVALUE64) => 3 SlowCases
+ addSlowCase(emitJumpIfNotImmediateNumber(regT0));
+ add64(tagTypeNumberRegister, regT0);
+ move64ToDouble(regT0, fpRegT0);
+ addSlowCase(branchTruncateDoubleToInt32(fpRegT0, regT0));
+ lhsIsInt.link(this);
+ emitJumpSlowCaseIfNotImmediateInteger(regT2);
+ } else {
+ // !supportsFloatingPoint() => 2 SlowCases
+ emitJumpSlowCaseIfNotImmediateInteger(regT0);
+ emitJumpSlowCaseIfNotImmediateInteger(regT2);
+ }
+ emitFastArithImmToInt(regT2);
+ urshift32(regT2, regT0);
}
- emitGetVirtualRegisters(op1, regT0, op2, regT1);
- if (!isOperandConstantImmediateInt(op1))
- emitJumpSlowCaseIfNotImmediateInteger(regT0);
- emitJumpSlowCaseIfNotImmediateInteger(regT1);
- emitFastArithImmToInt(regT0);
- emitFastArithImmToInt(regT1);
- urshift32(regT1, regT0);
- addSlowCase(branch32(LessThan, regT0, TrustedImm32(0)));
- emitFastArithReTagImmediate(regT0, regT0);
- emitPutVirtualRegister(dst, regT0);
+ emitFastArithIntToImmNoCheck(regT0, regT0);
+ emitPutVirtualRegister(result);
}
void JIT::emitSlow_op_urshift(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
{
- int dst = currentInstruction[1].u.operand;
- int op1 = currentInstruction[2].u.operand;
int op2 = currentInstruction[3].u.operand;
- if (isOperandConstantImmediateInt(op2)) {
- int shift = getConstantOperand(op2).asInt32();
- // op1 = regT0
- linkSlowCase(iter); // int32 check
+
+ if (isOperandConstantImmediateInt(op2))
+ linkSlowCase(iter);
+
+ else {
if (supportsFloatingPointTruncate()) {
- JumpList failures;
- failures.append(emitJumpIfNotImmediateNumber(regT0)); // op1 is not a double
- add64(tagTypeNumberRegister, regT0);
- move64ToDouble(regT0, fpRegT0);
- failures.append(branchTruncateDoubleToInt32(fpRegT0, regT0));
- if (shift)
- urshift32(Imm32(shift & 0x1f), regT0);
- if (shift < 0 || !(shift & 31))
- failures.append(branch32(LessThan, regT0, TrustedImm32(0)));
- emitFastArithReTagImmediate(regT0, regT0);
- emitPutVirtualRegister(dst, regT0);
- emitJumpSlowToHot(jump(), OPCODE_LENGTH(op_rshift));
- failures.link(this);
+ linkSlowCase(iter);
+ linkSlowCase(iter);
+ linkSlowCase(iter);
+ } else {
+ linkSlowCase(iter);
+ linkSlowCase(iter);
}
- if (shift < 0 || !(shift & 31))
- linkSlowCase(iter); // failed to box in hot path
- } else {
- // op1 = regT0
- // op2 = regT1
- if (!isOperandConstantImmediateInt(op1)) {
- linkSlowCase(iter); // int32 check -- op1 is not an int
- if (supportsFloatingPointTruncate()) {
- JumpList failures;
- failures.append(emitJumpIfNotImmediateNumber(regT0)); // op1 is not a double
- add64(tagTypeNumberRegister, regT0);
- move64ToDouble(regT0, fpRegT0);
- failures.append(branchTruncateDoubleToInt32(fpRegT0, regT0));
- failures.append(emitJumpIfNotImmediateInteger(regT1)); // op2 is not an int
- emitFastArithImmToInt(regT1);
- urshift32(regT1, regT0);
- failures.append(branch32(LessThan, regT0, TrustedImm32(0)));
- emitFastArithReTagImmediate(regT0, regT0);
- emitPutVirtualRegister(dst, regT0);
- emitJumpSlowToHot(jump(), OPCODE_LENGTH(op_rshift));
- failures.link(this);
- }
- }
-
- linkSlowCase(iter); // int32 check - op2 is not an int
- linkSlowCase(iter); // Can't represent unsigned result as an immediate
}
-
+
JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_urshift);
slowPathCall.call();
}
+void JIT::emit_op_unsigned(Instruction* currentInstruction)
+{
+ int result = currentInstruction[1].u.operand;
+ int op1 = currentInstruction[2].u.operand;
+
+ emitGetVirtualRegister(op1, regT0);
+ emitJumpSlowCaseIfNotImmediateInteger(regT0);
+ addSlowCase(branch32(LessThan, regT0, TrustedImm32(0)));
+ emitFastArithReTagImmediate(regT0, regT0);
+ emitPutVirtualRegister(result, regT0);
+}
+
+void JIT::emitSlow_op_unsigned(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
+{
+ linkSlowCase(iter);
+ linkSlowCase(iter);
+
+ JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_unsigned);
+ slowPathCall.call();
+}
+
void JIT::emit_compareAndJump(OpcodeID, int op1, int op2, unsigned target, RelationalCondition condition)
{
// We generate inline code for the following cases in the fast path:
diff --git a/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp b/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp
index bb63120..6c62976d 100644
--- a/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp
+++ b/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp
@@ -211,18 +211,16 @@
urshift32(Imm32(shift), regT0);
else
rshift32(Imm32(shift), regT0);
- } else if (isUnsigned) // signed right shift by zero is simply toInt conversion
- addSlowCase(branch32(LessThan, regT0, TrustedImm32(0)));
+ }
emitStoreInt32(dst, regT0, dst == op1);
} else {
emitLoad2(op1, regT1, regT0, op2, regT3, regT2);
if (!isOperandConstantImmediateInt(op1))
addSlowCase(branch32(NotEqual, regT1, TrustedImm32(JSValue::Int32Tag)));
addSlowCase(branch32(NotEqual, regT3, TrustedImm32(JSValue::Int32Tag)));
- if (isUnsigned) {
+ if (isUnsigned)
urshift32(regT2, regT0);
- addSlowCase(branch32(LessThan, regT0, TrustedImm32(0)));
- } else
+ else
rshift32(regT2, regT0);
emitStoreInt32(dst, regT0, dst == op1);
}
@@ -247,15 +245,12 @@
urshift32(Imm32(shift), regT0);
else
rshift32(Imm32(shift), regT0);
- } else if (isUnsigned) // signed right shift by zero is simply toInt conversion
- failures.append(branch32(LessThan, regT0, TrustedImm32(0)));
+ }
move(TrustedImm32(JSValue::Int32Tag), regT1);
emitStoreInt32(dst, regT0, false);
emitJumpSlowToHot(jump(), OPCODE_LENGTH(op_rshift));
failures.link(this);
}
- if (isUnsigned && !shift)
- linkSlowCase(iter); // failed to box in hot path
} else {
// op1 = regT1:regT0
// op2 = regT3:regT2
@@ -267,10 +262,9 @@
emitLoadDouble(op1, fpRegT0);
failures.append(branch32(NotEqual, regT3, TrustedImm32(JSValue::Int32Tag))); // op2 is not an int
failures.append(branchTruncateDoubleToInt32(fpRegT0, regT0));
- if (isUnsigned) {
+ if (isUnsigned)
urshift32(regT2, regT0);
- failures.append(branch32(LessThan, regT0, TrustedImm32(0)));
- } else
+ else
rshift32(regT2, regT0);
move(TrustedImm32(JSValue::Int32Tag), regT1);
emitStoreInt32(dst, regT0, false);
@@ -280,8 +274,6 @@
}
linkSlowCase(iter); // int32 check - op2 is not an int
- if (isUnsigned)
- linkSlowCase(iter); // Can't represent unsigned result as an immediate
}
JITSlowPathCall slowPathCall(this, currentInstruction, isUnsigned ? slow_path_urshift : slow_path_rshift);
@@ -312,6 +304,27 @@
emitRightShiftSlowCase(currentInstruction, iter, true);
}
+void JIT::emit_op_unsigned(Instruction* currentInstruction)
+{
+ int result = currentInstruction[1].u.operand;
+ int op1 = currentInstruction[2].u.operand;
+
+ emitLoad(op1, regT1, regT0);
+
+ addSlowCase(branch32(NotEqual, regT1, TrustedImm32(JSValue::Int32Tag)));
+ addSlowCase(branch32(LessThan, regT0, TrustedImm32(0)));
+ emitStoreInt32(result, regT0, result == op1);
+}
+
+void JIT::emitSlow_op_unsigned(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
+{
+ linkSlowCase(iter);
+ linkSlowCase(iter);
+
+ JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_unsigned);
+ slowPathCall.call();
+}
+
// BitAnd (&)
void JIT::emit_op_bitand(Instruction* currentInstruction)
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
index 50ba5cb..92ff430 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
@@ -1044,7 +1044,7 @@
bineq t3, Int32Tag, .slow
bineq t2, Int32Tag, .slow
loadi 4[PC], t2
- operation(t1, t0, .slow)
+ operation(t1, t0)
storei t3, TagOffset[cfr, t2, 8]
storei t0, PayloadOffset[cfr, t2, 8]
dispatch(advance)
@@ -1057,7 +1057,7 @@
_llint_op_lshift:
traceExecution()
bitOp(
- macro (left, right, slow) lshifti left, right end,
+ macro (left, right) lshifti left, right end,
_slow_path_lshift,
4)
@@ -1065,7 +1065,7 @@
_llint_op_rshift:
traceExecution()
bitOp(
- macro (left, right, slow) rshifti left, right end,
+ macro (left, right) rshifti left, right end,
_slow_path_rshift,
4)
@@ -1073,18 +1073,29 @@
_llint_op_urshift:
traceExecution()
bitOp(
- macro (left, right, slow)
- urshifti left, right
- bilt right, 0, slow
- end,
+ macro (left, right) urshifti left, right end,
_slow_path_urshift,
4)
+_llint_op_unsigned:
+ traceExecution()
+ loadi 4[PC], t0
+ loadi 8[PC], t1
+ loadConstantOrVariablePayload(t1, Int32Tag, t2, .opUnsignedSlow)
+ bilt t2, 0, .opUnsignedSlow
+ storei t2, PayloadOffset[cfr, t0, 8]
+ storei Int32Tag, TagOffset[cfr, t0, 8]
+ dispatch(3)
+.opUnsignedSlow:
+ callSlowPath(_slow_path_unsigned)
+ dispatch(3)
+
+
_llint_op_bitand:
traceExecution()
bitOp(
- macro (left, right, slow) andi left, right end,
+ macro (left, right) andi left, right end,
_slow_path_bitand,
5)
@@ -1092,7 +1103,7 @@
_llint_op_bitxor:
traceExecution()
bitOp(
- macro (left, right, slow) xori left, right end,
+ macro (left, right) xori left, right end,
_slow_path_bitxor,
5)
@@ -1100,7 +1111,7 @@
_llint_op_bitor:
traceExecution()
bitOp(
- macro (left, right, slow) ori left, right end,
+ macro (left, right) ori left, right end,
_slow_path_bitor,
5)
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
index 637f942..e5236f1 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
@@ -861,7 +861,7 @@
loadConstantOrVariable(t2, t0)
bqb t0, tagTypeNumber, .slow
bqb t1, tagTypeNumber, .slow
- operation(t1, t0, .slow)
+ operation(t1, t0)
orq tagTypeNumber, t0
storeq t0, [cfr, t3, 8]
dispatch(advance)
@@ -874,7 +874,7 @@
_llint_op_lshift:
traceExecution()
bitOp(
- macro (left, right, slow) lshifti left, right end,
+ macro (left, right) lshifti left, right end,
_slow_path_lshift,
4)
@@ -882,7 +882,7 @@
_llint_op_rshift:
traceExecution()
bitOp(
- macro (left, right, slow) rshifti left, right end,
+ macro (left, right) rshifti left, right end,
_slow_path_rshift,
4)
@@ -890,18 +890,28 @@
_llint_op_urshift:
traceExecution()
bitOp(
- macro (left, right, slow)
- urshifti left, right
- bilt right, 0, slow
- end,
+ macro (left, right) urshifti left, right end,
_slow_path_urshift,
4)
+_llint_op_unsigned:
+ traceExecution()
+ loadisFromInstruction(1, t0)
+ loadisFromInstruction(2, t1)
+ loadConstantOrVariable(t1, t2)
+ bilt t2, 0, .opUnsignedSlow
+ storeq t2, [cfr, t0, 8]
+ dispatch(3)
+.opUnsignedSlow:
+ callSlowPath(_slow_path_unsigned)
+ dispatch(3)
+
+
_llint_op_bitand:
traceExecution()
bitOp(
- macro (left, right, slow) andi left, right end,
+ macro (left, right) andi left, right end,
_slow_path_bitand,
5)
@@ -909,7 +919,7 @@
_llint_op_bitxor:
traceExecution()
bitOp(
- macro (left, right, slow) xori left, right end,
+ macro (left, right) xori left, right end,
_slow_path_bitxor,
5)
@@ -917,7 +927,7 @@
_llint_op_bitor:
traceExecution()
bitOp(
- macro (left, right, slow) ori left, right end,
+ macro (left, right) ori left, right end,
_slow_path_bitor,
5)
diff --git a/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp b/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
index 06d5431..72c74fb 100644
--- a/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
+++ b/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
@@ -417,7 +417,14 @@
BEGIN();
uint32_t a = OP_C(2).jsValue().toUInt32(exec);
uint32_t b = OP_C(3).jsValue().toUInt32(exec);
- RETURN(jsNumber(a >> (b & 31)));
+ RETURN(jsNumber(static_cast<int32_t>(a >> (b & 31))));
+}
+
+SLOW_PATH_DECL(slow_path_unsigned)
+{
+ BEGIN();
+ uint32_t a = OP_C(2).jsValue().toUInt32(exec);
+ RETURN(jsNumber(a));
}
SLOW_PATH_DECL(slow_path_bitand)
diff --git a/Source/JavaScriptCore/runtime/CommonSlowPaths.h b/Source/JavaScriptCore/runtime/CommonSlowPaths.h
index 8acbf9f..0ae3b40 100644
--- a/Source/JavaScriptCore/runtime/CommonSlowPaths.h
+++ b/Source/JavaScriptCore/runtime/CommonSlowPaths.h
@@ -188,6 +188,7 @@
SLOW_PATH_HIDDEN_DECL(slow_path_lshift);
SLOW_PATH_HIDDEN_DECL(slow_path_rshift);
SLOW_PATH_HIDDEN_DECL(slow_path_urshift);
+SLOW_PATH_HIDDEN_DECL(slow_path_unsigned);
SLOW_PATH_HIDDEN_DECL(slow_path_bitand);
SLOW_PATH_HIDDEN_DECL(slow_path_bitor);
SLOW_PATH_HIDDEN_DECL(slow_path_bitxor);