Polymorphic operand types for DFG and FTL bit operators.
https://bugs.webkit.org/show_bug.cgi?id=152191
Reviewed by Saam Barati.
Source/JavaScriptCore:
* bytecode/SpeculatedType.h:
(JSC::isUntypedSpeculationForBitOps):
* dfg/DFGAbstractInterpreterInlines.h:
(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
* dfg/DFGNode.h:
(JSC::DFG::Node::shouldSpeculateUntypedForBitOps):
- Added a check for types that are not supported by ValueToInt32 and should
therefore be treated as untyped for bitops.
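For example (a hypothetical snippet in the spirit of the new regression tests),
an object operand is neither a number nor a boolean, so ValueToInt32 cannot
handle it and the bitop is treated as untyped once a BadType exit has been seen:
    var o = { valueOf: function() { return 10; } };
    o | 1; // yields 11; such a polymorphic operand is now handled by the untyped bitop path.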
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::fixupNode):
- Handled untyped operands.
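For instance, the fixup phase only picks untyped edges when the operands predict
types that ValueToInt32 cannot handle and we have already exited with BadType at
this code origin (abridged from the DFGFixupPhase change below):
    if (Node::shouldSpeculateUntypedForBitOps(node->child1().node(), node->child2().node())
        && m_graph.hasExitSite(node->origin.semantic, BadType)) {
        fixEdge<UntypedUse>(node->child1());
        fixEdge<UntypedUse>(node->child2());
        break;
    }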
* dfg/DFGOperations.cpp:
* dfg/DFGOperations.h:
- Added DFG slow path functions for bitops.
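These all follow the same shape; for example, the bitwise-and slow path (shown in
full in the diff below) decodes both operands, converts them with toInt32(), and
boxes the int32 result:
    // Abridged from operationValueBitAnd below.
    int32_t a = op1.toInt32(exec);
    int32_t b = op2.toInt32(exec);
    return JSValue::encode(jsNumber(a & b));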
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::emitUntypedBitOp):
(JSC::DFG::SpeculativeJIT::compileBitwiseOp):
(JSC::DFG::SpeculativeJIT::emitUntypedRightShiftBitOp):
(JSC::DFG::SpeculativeJIT::compileShiftOp):
* dfg/DFGSpeculativeJIT.h:
- Added DFG backend support for untyped operands for bitops.
* dfg/DFGStrengthReductionPhase.cpp:
(JSC::DFG::StrengthReductionPhase::handleNode):
- Limit bitop strength reduction to cases where we don't have untyped operands.
This is because values that are not int32s need to be converted to int32.
Without untyped operands, the ValueToInt32 node takes care of this.
With untyped operands, we cannot use ValueToInt32, and need to do the conversion
in the code emitted for the bitop node itself. For example:
5.5 | 0; // yields 5 because ValueToInt32 converts the 5.5 to a 5.
"abc" | 0; // would yield "abc" instead of the expected 0 if we let
// strength reduction do its thing.
* ftl/FTLCompileBinaryOp.cpp:
(JSC::FTL::generateBinaryBitOpFastPath):
(JSC::FTL::generateRightShiftFastPath):
(JSC::FTL::generateBinaryOpFastPath):
* ftl/FTLInlineCacheDescriptor.h:
(JSC::FTL::BitAndDescriptor::BitAndDescriptor):
(JSC::FTL::BitAndDescriptor::icSize):
(JSC::FTL::BitAndDescriptor::nodeType):
(JSC::FTL::BitAndDescriptor::opName):
(JSC::FTL::BitAndDescriptor::slowPathFunction):
(JSC::FTL::BitAndDescriptor::nonNumberSlowPathFunction):
(JSC::FTL::BitOrDescriptor::BitOrDescriptor):
(JSC::FTL::BitOrDescriptor::icSize):
(JSC::FTL::BitOrDescriptor::nodeType):
(JSC::FTL::BitOrDescriptor::opName):
(JSC::FTL::BitOrDescriptor::slowPathFunction):
(JSC::FTL::BitOrDescriptor::nonNumberSlowPathFunction):
(JSC::FTL::BitXorDescriptor::BitXorDescriptor):
(JSC::FTL::BitXorDescriptor::icSize):
(JSC::FTL::BitXorDescriptor::nodeType):
(JSC::FTL::BitXorDescriptor::opName):
(JSC::FTL::BitXorDescriptor::slowPathFunction):
(JSC::FTL::BitXorDescriptor::nonNumberSlowPathFunction):
(JSC::FTL::BitLShiftDescriptor::BitLShiftDescriptor):
(JSC::FTL::BitLShiftDescriptor::icSize):
(JSC::FTL::BitLShiftDescriptor::nodeType):
(JSC::FTL::BitLShiftDescriptor::opName):
(JSC::FTL::BitLShiftDescriptor::slowPathFunction):
(JSC::FTL::BitLShiftDescriptor::nonNumberSlowPathFunction):
(JSC::FTL::BitRShiftDescriptor::BitRShiftDescriptor):
(JSC::FTL::BitRShiftDescriptor::icSize):
(JSC::FTL::BitRShiftDescriptor::nodeType):
(JSC::FTL::BitRShiftDescriptor::opName):
(JSC::FTL::BitRShiftDescriptor::slowPathFunction):
(JSC::FTL::BitRShiftDescriptor::nonNumberSlowPathFunction):
(JSC::FTL::BitURShiftDescriptor::BitURShiftDescriptor):
(JSC::FTL::BitURShiftDescriptor::icSize):
(JSC::FTL::BitURShiftDescriptor::nodeType):
(JSC::FTL::BitURShiftDescriptor::opName):
(JSC::FTL::BitURShiftDescriptor::slowPathFunction):
(JSC::FTL::BitURShiftDescriptor::nonNumberSlowPathFunction):
- Added support for bitop ICs.
* ftl/FTLInlineCacheSize.cpp:
(JSC::FTL::sizeOfBitAnd):
(JSC::FTL::sizeOfBitOr):
(JSC::FTL::sizeOfBitXor):
(JSC::FTL::sizeOfBitLShift):
(JSC::FTL::sizeOfBitRShift):
(JSC::FTL::sizeOfBitURShift):
* ftl/FTLInlineCacheSize.h:
- Added new bitop IC sizes. These are just estimates for now that work adequately
and have been shown not to impact benchmark performance. We will re-tune these
size values later in another patch once all the snippet ICs have been added.
* ftl/FTLLowerDFGToLLVM.cpp:
(JSC::FTL::DFG::LowerDFGToLLVM::compileBitAnd):
(JSC::FTL::DFG::LowerDFGToLLVM::compileBitOr):
(JSC::FTL::DFG::LowerDFGToLLVM::compileBitXor):
(JSC::FTL::DFG::LowerDFGToLLVM::compileBitRShift):
(JSC::FTL::DFG::LowerDFGToLLVM::compileBitLShift):
(JSC::FTL::DFG::LowerDFGToLLVM::compileBitURShift):
- Added support for bitop ICs.
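Each of these compile functions now defers to the IC path when either operand is
untyped, e.g. (abridged from compileBitAnd below):
    if (m_node->child1().useKind() == UntypedUse || m_node->child2().useKind() == UntypedUse) {
        compileUntypedBinaryOp<BitAndDescriptor>();
        return;
    }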
* jit/JITLeftShiftGenerator.cpp:
(JSC::JITLeftShiftGenerator::generateFastPath):
* jit/JITLeftShiftGenerator.h:
(JSC::JITLeftShiftGenerator::JITLeftShiftGenerator):
* jit/JITRightShiftGenerator.cpp:
(JSC::JITRightShiftGenerator::generateFastPath):
- The shift MASM operations need to ensure that the shiftAmount is not in the same
register as the destination register. With the baseline JIT and DFG, this is
ensured by how we allocate these registers, and hence the bug does not manifest.
With the FTL, these registers are not guaranteed to be distinct. Hence, we need
to fix the shift op snippet code to compensate for this.
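The fix follows this pattern in the shift snippet generators (abridged from the
JITLeftShiftGenerator change below): if the shift amount is in the result
register, copy it into the scratch register before the result is written:
    GPRReg rightOperandGPR = m_right.payloadGPR();
    if (rightOperandGPR == m_result.payloadGPR()) {
        jit.move(rightOperandGPR, m_scratchGPR);
        rightOperandGPR = m_scratchGPR;
    }
    ...
    jit.lshift32(rightOperandGPR, m_result.payloadGPR());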
LayoutTests:
* js/regress/ftl-polymorphic-bitand-expected.txt: Added.
* js/regress/ftl-polymorphic-bitand.html: Added.
* js/regress/ftl-polymorphic-bitor-expected.txt: Added.
* js/regress/ftl-polymorphic-bitor.html: Added.
* js/regress/ftl-polymorphic-bitxor-expected.txt: Added.
* js/regress/ftl-polymorphic-bitxor.html: Added.
* js/regress/ftl-polymorphic-lshift-expected.txt: Added.
* js/regress/ftl-polymorphic-lshift.html: Added.
* js/regress/ftl-polymorphic-rshift-expected.txt: Added.
* js/regress/ftl-polymorphic-rshift.html: Added.
* js/regress/ftl-polymorphic-urshift-expected.txt: Added.
* js/regress/ftl-polymorphic-urshift.html: Added.
* js/regress/script-tests/ftl-polymorphic-bitand.js: Added.
(o1.valueOf):
(foo):
* js/regress/script-tests/ftl-polymorphic-bitor.js: Added.
(o1.valueOf):
(foo):
* js/regress/script-tests/ftl-polymorphic-bitxor.js: Added.
(o1.valueOf):
(foo):
* js/regress/script-tests/ftl-polymorphic-lshift.js: Added.
(o1.valueOf):
(foo):
* js/regress/script-tests/ftl-polymorphic-rshift.js: Added.
(o1.valueOf):
(foo):
* js/regress/script-tests/ftl-polymorphic-urshift.js: Added.
(o1.valueOf):
(foo):
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@194113 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index 2b385fd..8492bfa 100644
--- a/Source/JavaScriptCore/ChangeLog
+++ b/Source/JavaScriptCore/ChangeLog
@@ -1,3 +1,126 @@
+2015-12-15 Mark Lam <mark.lam@apple.com>
+
+ Polymorphic operand types for DFG and FTL bit operators.
+ https://bugs.webkit.org/show_bug.cgi?id=152191
+
+ Reviewed by Saam Barati.
+
+ * bytecode/SpeculatedType.h:
+ (JSC::isUntypedSpeculationForBitOps):
+ * dfg/DFGAbstractInterpreterInlines.h:
+ (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
+ * dfg/DFGNode.h:
+ (JSC::DFG::Node::shouldSpeculateUntypedForBitOps):
+ - Added a check for types that are not supported by ValueToInt32 and should
+ therefore be treated as untyped for bitops.
+
+ * dfg/DFGClobberize.h:
+ (JSC::DFG::clobberize):
+ * dfg/DFGFixupPhase.cpp:
+ (JSC::DFG::FixupPhase::fixupNode):
+ - Handled untyped operands.
+
+ * dfg/DFGOperations.cpp:
+ * dfg/DFGOperations.h:
+ - Added DFG slow path functions for bitops.
+
+ * dfg/DFGSpeculativeJIT.cpp:
+ (JSC::DFG::SpeculativeJIT::emitUntypedBitOp):
+ (JSC::DFG::SpeculativeJIT::compileBitwiseOp):
+ (JSC::DFG::SpeculativeJIT::emitUntypedRightShiftBitOp):
+ (JSC::DFG::SpeculativeJIT::compileShiftOp):
+ * dfg/DFGSpeculativeJIT.h:
+ - Added DFG backend support for untyped operands for bitops.
+
+ * dfg/DFGStrengthReductionPhase.cpp:
+ (JSC::DFG::StrengthReductionPhase::handleNode):
+ - Limit bitop strength reduction to cases where we don't have untyped operands.
+ This is because values that are not int32s need to be converted to int32.
+ Without untyped operands, the ValueToInt32 node takes care of this.
+ With untyped operands, we cannot use ValueToInt32, and need to do the conversion
+ in the code emitted for the bitop node itself. For example:
+
+ 5.5 | 0; // yields 5 because ValueToInt32 converts the 5.5 to a 5.
+ "abc" | 0; // would yield "abc" instead of the expected 0 if we let
+ // strength reduction do its thing.
+
+ * ftl/FTLCompileBinaryOp.cpp:
+ (JSC::FTL::generateBinaryBitOpFastPath):
+ (JSC::FTL::generateRightShiftFastPath):
+ (JSC::FTL::generateBinaryOpFastPath):
+
+ * ftl/FTLInlineCacheDescriptor.h:
+ (JSC::FTL::BitAndDescriptor::BitAndDescriptor):
+ (JSC::FTL::BitAndDescriptor::icSize):
+ (JSC::FTL::BitAndDescriptor::nodeType):
+ (JSC::FTL::BitAndDescriptor::opName):
+ (JSC::FTL::BitAndDescriptor::slowPathFunction):
+ (JSC::FTL::BitAndDescriptor::nonNumberSlowPathFunction):
+ (JSC::FTL::BitOrDescriptor::BitOrDescriptor):
+ (JSC::FTL::BitOrDescriptor::icSize):
+ (JSC::FTL::BitOrDescriptor::nodeType):
+ (JSC::FTL::BitOrDescriptor::opName):
+ (JSC::FTL::BitOrDescriptor::slowPathFunction):
+ (JSC::FTL::BitOrDescriptor::nonNumberSlowPathFunction):
+ (JSC::FTL::BitXorDescriptor::BitXorDescriptor):
+ (JSC::FTL::BitXorDescriptor::icSize):
+ (JSC::FTL::BitXorDescriptor::nodeType):
+ (JSC::FTL::BitXorDescriptor::opName):
+ (JSC::FTL::BitXorDescriptor::slowPathFunction):
+ (JSC::FTL::BitXorDescriptor::nonNumberSlowPathFunction):
+ (JSC::FTL::BitLShiftDescriptor::BitLShiftDescriptor):
+ (JSC::FTL::BitLShiftDescriptor::icSize):
+ (JSC::FTL::BitLShiftDescriptor::nodeType):
+ (JSC::FTL::BitLShiftDescriptor::opName):
+ (JSC::FTL::BitLShiftDescriptor::slowPathFunction):
+ (JSC::FTL::BitLShiftDescriptor::nonNumberSlowPathFunction):
+ (JSC::FTL::BitRShiftDescriptor::BitRShiftDescriptor):
+ (JSC::FTL::BitRShiftDescriptor::icSize):
+ (JSC::FTL::BitRShiftDescriptor::nodeType):
+ (JSC::FTL::BitRShiftDescriptor::opName):
+ (JSC::FTL::BitRShiftDescriptor::slowPathFunction):
+ (JSC::FTL::BitRShiftDescriptor::nonNumberSlowPathFunction):
+ (JSC::FTL::BitURShiftDescriptor::BitURShiftDescriptor):
+ (JSC::FTL::BitURShiftDescriptor::icSize):
+ (JSC::FTL::BitURShiftDescriptor::nodeType):
+ (JSC::FTL::BitURShiftDescriptor::opName):
+ (JSC::FTL::BitURShiftDescriptor::slowPathFunction):
+ (JSC::FTL::BitURShiftDescriptor::nonNumberSlowPathFunction):
+ - Added support for bitop ICs.
+
+ * ftl/FTLInlineCacheSize.cpp:
+ (JSC::FTL::sizeOfBitAnd):
+ (JSC::FTL::sizeOfBitOr):
+ (JSC::FTL::sizeOfBitXor):
+ (JSC::FTL::sizeOfBitLShift):
+ (JSC::FTL::sizeOfBitRShift):
+ (JSC::FTL::sizeOfBitURShift):
+ * ftl/FTLInlineCacheSize.h:
+ - Added new bitop IC sizes. These are just estimates for now that work adequately
+ and have been shown not to impact benchmark performance. We will re-tune these
+ size values later in another patch once all the snippet ICs have been added.
+
+ * ftl/FTLLowerDFGToLLVM.cpp:
+ (JSC::FTL::DFG::LowerDFGToLLVM::compileBitAnd):
+ (JSC::FTL::DFG::LowerDFGToLLVM::compileBitOr):
+ (JSC::FTL::DFG::LowerDFGToLLVM::compileBitXor):
+ (JSC::FTL::DFG::LowerDFGToLLVM::compileBitRShift):
+ (JSC::FTL::DFG::LowerDFGToLLVM::compileBitLShift):
+ (JSC::FTL::DFG::LowerDFGToLLVM::compileBitURShift):
+ - Added support for bitop ICs.
+
+ * jit/JITLeftShiftGenerator.cpp:
+ (JSC::JITLeftShiftGenerator::generateFastPath):
+ * jit/JITLeftShiftGenerator.h:
+ (JSC::JITLeftShiftGenerator::JITLeftShiftGenerator):
+ * jit/JITRightShiftGenerator.cpp:
+ (JSC::JITRightShiftGenerator::generateFastPath):
+ - The shift MASM operations need to ensure that the shiftAmount is not in the same
+ register as the destination register. With the baseline JIT and DFG, this is
+ ensured by how we allocate these registers, and hence the bug does not manifest.
+ With the FTL, these registers are not guaranteed to be distinct. Hence, we need
+ to fix the shift op snippet code to compensate for this.
+
2015-12-15 Caitlin Potter <caitp@igalia.com>
[JSC] SyntaxError if AssignmentElement is `eval` or `arguments` in strict code
@@ -1605,7 +1728,6 @@
(JSC::ArrayPrototype::finishCreation):
* runtime/CommonIdentifiers.h:
->>>>>>> .r193940
2015-12-08 Filip Pizlo <fpizlo@apple.com>
FTL B3 should have basic GetById support
diff --git a/Source/JavaScriptCore/bytecode/SpeculatedType.h b/Source/JavaScriptCore/bytecode/SpeculatedType.h
index 63d00b3..8f0929b 100644
--- a/Source/JavaScriptCore/bytecode/SpeculatedType.h
+++ b/Source/JavaScriptCore/bytecode/SpeculatedType.h
@@ -390,6 +390,11 @@
return !(value & (SpecFullNumber | SpecBoolean));
}
+inline bool isUntypedSpeculationForBitOps(SpeculatedType value)
+{
+ return !(value & (SpecFullNumber | SpecBoolean | SpecOther));
+}
+
void dumpSpeculation(PrintStream&, SpeculatedType);
void dumpSpeculationAbbreviated(PrintStream&, SpeculatedType);
diff --git a/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h b/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
index 226eae3..31b548b 100644
--- a/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
+++ b/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
@@ -233,6 +233,12 @@
case BitRShift:
case BitLShift:
case BitURShift: {
+ if (node->child1().useKind() == UntypedUse || node->child2().useKind() == UntypedUse) {
+ clobberWorld(node->origin.semantic, clobberLimit);
+ forNode(node).setType(m_graph, SpecInt32);
+ break;
+ }
+
JSValue left = forNode(node->child1()).value();
JSValue right = forNode(node->child2()).value();
if (left && right && left.isInt32() && right.isInt32()) {
diff --git a/Source/JavaScriptCore/dfg/DFGClobberize.h b/Source/JavaScriptCore/dfg/DFGClobberize.h
index fd60373..ba7bf72 100644
--- a/Source/JavaScriptCore/dfg/DFGClobberize.h
+++ b/Source/JavaScriptCore/dfg/DFGClobberize.h
@@ -121,12 +121,6 @@
case CheckStructureImmediate:
return;
- case BitAnd:
- case BitOr:
- case BitXor:
- case BitLShift:
- case BitRShift:
- case BitURShift:
case ArithIMul:
case ArithAbs:
case ArithClz32:
@@ -164,6 +158,20 @@
def(PureValue(node));
return;
+ case BitAnd:
+ case BitOr:
+ case BitXor:
+ case BitLShift:
+ case BitRShift:
+ case BitURShift:
+ if (node->child1().useKind() == UntypedUse || node->child2().useKind() == UntypedUse) {
+ read(World);
+ write(Heap);
+ return;
+ }
+ def(PureValue(node));
+ return;
+
case ArithRandom:
read(MathDotRandomState);
write(MathDotRandomState);
diff --git a/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp b/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
index 9198cd8..82fa48d 100644
--- a/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
@@ -105,6 +105,12 @@
case BitRShift:
case BitLShift:
case BitURShift: {
+ if (Node::shouldSpeculateUntypedForBitOps(node->child1().node(), node->child2().node())
+ && m_graph.hasExitSite(node->origin.semantic, BadType)) {
+ fixEdge<UntypedUse>(node->child1());
+ fixEdge<UntypedUse>(node->child2());
+ break;
+ }
fixIntConvertingEdge(node->child1());
fixIntConvertingEdge(node->child2());
break;
diff --git a/Source/JavaScriptCore/dfg/DFGNode.h b/Source/JavaScriptCore/dfg/DFGNode.h
index f96a7ef..1b3083b 100644
--- a/Source/JavaScriptCore/dfg/DFGNode.h
+++ b/Source/JavaScriptCore/dfg/DFGNode.h
@@ -2033,6 +2033,16 @@
return op1->shouldSpeculateUntypedForArithmetic() || op2->shouldSpeculateUntypedForArithmetic();
}
+ bool shouldSpeculateUntypedForBitOps()
+ {
+ return isUntypedSpeculationForBitOps(prediction());
+ }
+
+ static bool shouldSpeculateUntypedForBitOps(Node* op1, Node* op2)
+ {
+ return op1->shouldSpeculateUntypedForBitOps() || op2->shouldSpeculateUntypedForBitOps();
+ }
+
static bool shouldSpeculateBoolean(Node* op1, Node* op2)
{
return op1->shouldSpeculateBoolean() && op2->shouldSpeculateBoolean();
diff --git a/Source/JavaScriptCore/dfg/DFGOperations.cpp b/Source/JavaScriptCore/dfg/DFGOperations.cpp
index 7bcd75d..6736de0 100644
--- a/Source/JavaScriptCore/dfg/DFGOperations.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOperations.cpp
@@ -173,6 +173,84 @@
return constructEmptyObject(exec, jsCast<JSFunction*>(constructor)->rareData(exec, inlineCapacity)->allocationProfile()->structure());
}
+EncodedJSValue JIT_OPERATION operationValueBitAnd(ExecState* exec, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2)
+{
+ VM* vm = &exec->vm();
+ NativeCallFrameTracer tracer(vm, exec);
+
+ JSValue op1 = JSValue::decode(encodedOp1);
+ JSValue op2 = JSValue::decode(encodedOp2);
+
+ int32_t a = op1.toInt32(exec);
+ int32_t b = op2.toInt32(exec);
+ return JSValue::encode(jsNumber(a & b));
+}
+
+EncodedJSValue JIT_OPERATION operationValueBitOr(ExecState* exec, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2)
+{
+ VM* vm = &exec->vm();
+ NativeCallFrameTracer tracer(vm, exec);
+
+ JSValue op1 = JSValue::decode(encodedOp1);
+ JSValue op2 = JSValue::decode(encodedOp2);
+
+ int32_t a = op1.toInt32(exec);
+ int32_t b = op2.toInt32(exec);
+ return JSValue::encode(jsNumber(a | b));
+}
+
+EncodedJSValue JIT_OPERATION operationValueBitXor(ExecState* exec, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2)
+{
+ VM* vm = &exec->vm();
+ NativeCallFrameTracer tracer(vm, exec);
+
+ JSValue op1 = JSValue::decode(encodedOp1);
+ JSValue op2 = JSValue::decode(encodedOp2);
+
+ int32_t a = op1.toInt32(exec);
+ int32_t b = op2.toInt32(exec);
+ return JSValue::encode(jsNumber(a ^ b));
+}
+
+EncodedJSValue JIT_OPERATION operationValueBitLShift(ExecState* exec, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2)
+{
+ VM* vm = &exec->vm();
+ NativeCallFrameTracer tracer(vm, exec);
+
+ JSValue op1 = JSValue::decode(encodedOp1);
+ JSValue op2 = JSValue::decode(encodedOp2);
+
+ int32_t a = op1.toInt32(exec);
+ uint32_t b = op2.toUInt32(exec);
+ return JSValue::encode(jsNumber(a << (b & 0x1f)));
+}
+
+EncodedJSValue JIT_OPERATION operationValueBitRShift(ExecState* exec, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2)
+{
+ VM* vm = &exec->vm();
+ NativeCallFrameTracer tracer(vm, exec);
+
+ JSValue op1 = JSValue::decode(encodedOp1);
+ JSValue op2 = JSValue::decode(encodedOp2);
+
+ int32_t a = op1.toInt32(exec);
+ uint32_t b = op2.toUInt32(exec);
+ return JSValue::encode(jsNumber(a >> (b & 0x1f)));
+}
+
+EncodedJSValue JIT_OPERATION operationValueBitURShift(ExecState* exec, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2)
+{
+ VM* vm = &exec->vm();
+ NativeCallFrameTracer tracer(vm, exec);
+
+ JSValue op1 = JSValue::decode(encodedOp1);
+ JSValue op2 = JSValue::decode(encodedOp2);
+
+ uint32_t a = op1.toUInt32(exec);
+ uint32_t b = op2.toUInt32(exec);
+ return JSValue::encode(jsNumber(static_cast<int32_t>(a >> (b & 0x1f))));
+}
+
EncodedJSValue JIT_OPERATION operationValueAdd(ExecState* exec, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2)
{
VM* vm = &exec->vm();
diff --git a/Source/JavaScriptCore/dfg/DFGOperations.h b/Source/JavaScriptCore/dfg/DFGOperations.h
index c25da06..5a48065 100644
--- a/Source/JavaScriptCore/dfg/DFGOperations.h
+++ b/Source/JavaScriptCore/dfg/DFGOperations.h
@@ -43,6 +43,12 @@
JSCell* JIT_OPERATION operationCreateThis(ExecState*, JSObject* constructor, int32_t inlineCapacity) WTF_INTERNAL;
EncodedJSValue JIT_OPERATION operationToThis(ExecState*, EncodedJSValue encodedOp1) WTF_INTERNAL;
EncodedJSValue JIT_OPERATION operationToThisStrict(ExecState*, EncodedJSValue encodedOp1) WTF_INTERNAL;
+EncodedJSValue JIT_OPERATION operationValueBitAnd(ExecState*, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2) WTF_INTERNAL;
+EncodedJSValue JIT_OPERATION operationValueBitOr(ExecState*, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2) WTF_INTERNAL;
+EncodedJSValue JIT_OPERATION operationValueBitXor(ExecState*, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2) WTF_INTERNAL;
+EncodedJSValue JIT_OPERATION operationValueBitLShift(ExecState*, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2) WTF_INTERNAL;
+EncodedJSValue JIT_OPERATION operationValueBitRShift(ExecState*, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2) WTF_INTERNAL;
+EncodedJSValue JIT_OPERATION operationValueBitURShift(ExecState*, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2) WTF_INTERNAL;
EncodedJSValue JIT_OPERATION operationValueAdd(ExecState*, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2) WTF_INTERNAL;
EncodedJSValue JIT_OPERATION operationValueAddNotNumber(ExecState*, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2) WTF_INTERNAL;
EncodedJSValue JIT_OPERATION operationValueDiv(ExecState*, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2) WTF_INTERNAL;
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
index ea5b4f4..f644220 100755
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
@@ -39,8 +39,13 @@
#include "DFGSlowPathGenerator.h"
#include "DirectArguments.h"
#include "JITAddGenerator.h"
+#include "JITBitAndGenerator.h"
+#include "JITBitOrGenerator.h"
+#include "JITBitXorGenerator.h"
#include "JITDivGenerator.h"
+#include "JITLeftShiftGenerator.h"
#include "JITMulGenerator.h"
+#include "JITRightShiftGenerator.h"
#include "JITSubGenerator.h"
#include "JSArrowFunction.h"
#include "JSCInlines.h"
@@ -2786,12 +2791,120 @@
blessedBooleanResult(scratchReg, node);
}
+template<typename SnippetGenerator, J_JITOperation_EJJ snippetSlowPathFunction>
+void SpeculativeJIT::emitUntypedBitOp(Node* node)
+{
+ Edge& leftChild = node->child1();
+ Edge& rightChild = node->child2();
+
+ if (isKnownNotNumber(leftChild.node()) || isKnownNotNumber(rightChild.node())) {
+ JSValueOperand left(this, leftChild);
+ JSValueOperand right(this, rightChild);
+ JSValueRegs leftRegs = left.jsValueRegs();
+ JSValueRegs rightRegs = right.jsValueRegs();
+#if USE(JSVALUE64)
+ GPRTemporary result(this);
+ JSValueRegs resultRegs = JSValueRegs(result.gpr());
+#else
+ GPRTemporary resultTag(this);
+ GPRTemporary resultPayload(this);
+ JSValueRegs resultRegs = JSValueRegs(resultPayload.gpr(), resultTag.gpr());
+#endif
+ flushRegisters();
+ callOperation(snippetSlowPathFunction, resultRegs, leftRegs, rightRegs);
+ m_jit.exceptionCheck();
+
+ jsValueResult(resultRegs, node);
+ return;
+ }
+
+ Optional<JSValueOperand> left;
+ Optional<JSValueOperand> right;
+
+ JSValueRegs leftRegs;
+ JSValueRegs rightRegs;
+
+#if USE(JSVALUE64)
+ GPRTemporary result(this);
+ JSValueRegs resultRegs = JSValueRegs(result.gpr());
+ GPRTemporary scratch(this);
+ GPRReg scratchGPR = scratch.gpr();
+#else
+ GPRTemporary resultTag(this);
+ GPRTemporary resultPayload(this);
+ JSValueRegs resultRegs = JSValueRegs(resultPayload.gpr(), resultTag.gpr());
+ GPRReg scratchGPR = resultTag.gpr();
+#endif
+
+ SnippetOperand leftOperand;
+ SnippetOperand rightOperand;
+
+ // The snippet generator does not support both operands being constant. If the left
+ // operand is already const, we'll ignore the right operand's constness.
+ if (leftChild->isInt32Constant())
+ leftOperand.setConstInt32(leftChild->asInt32());
+ else if (rightChild->isInt32Constant())
+ rightOperand.setConstInt32(rightChild->asInt32());
+
+ RELEASE_ASSERT(!leftOperand.isConst() || !rightOperand.isConst());
+
+ if (!leftOperand.isConst()) {
+ left = JSValueOperand(this, leftChild);
+ leftRegs = left->jsValueRegs();
+ }
+ if (!rightOperand.isConst()) {
+ right = JSValueOperand(this, rightChild);
+ rightRegs = right->jsValueRegs();
+ }
+
+ SnippetGenerator gen(leftOperand, rightOperand, resultRegs, leftRegs, rightRegs, scratchGPR);
+ gen.generateFastPath(m_jit);
+
+ ASSERT(gen.didEmitFastPath());
+ gen.endJumpList().append(m_jit.jump());
+
+ gen.slowPathJumpList().link(&m_jit);
+ silentSpillAllRegisters(resultRegs);
+
+ if (leftOperand.isConst()) {
+ leftRegs = resultRegs;
+ m_jit.moveValue(leftChild->asJSValue(), leftRegs);
+ } else if (rightOperand.isConst()) {
+ rightRegs = resultRegs;
+ m_jit.moveValue(rightChild->asJSValue(), rightRegs);
+ }
+
+ callOperation(snippetSlowPathFunction, resultRegs, leftRegs, rightRegs);
+
+ silentFillAllRegisters(resultRegs);
+ m_jit.exceptionCheck();
+
+ gen.endJumpList().link(&m_jit);
+ jsValueResult(resultRegs, node);
+}
+
void SpeculativeJIT::compileBitwiseOp(Node* node)
{
NodeType op = node->op();
Edge& leftChild = node->child1();
Edge& rightChild = node->child2();
+ if (leftChild.useKind() == UntypedUse || rightChild.useKind() == UntypedUse) {
+ switch (op) {
+ case BitAnd:
+ emitUntypedBitOp<JITBitAndGenerator, operationValueBitAnd>(node);
+ return;
+ case BitOr:
+ emitUntypedBitOp<JITBitOrGenerator, operationValueBitOr>(node);
+ return;
+ case BitXor:
+ emitUntypedBitOp<JITBitXorGenerator, operationValueBitXor>(node);
+ return;
+ default:
+ RELEASE_ASSERT_NOT_REACHED();
+ }
+ }
+
if (leftChild->isInt32Constant()) {
SpeculateInt32Operand op2(this, rightChild);
GPRTemporary result(this, Reuse, op2);
@@ -2821,12 +2934,130 @@
}
}
+void SpeculativeJIT::emitUntypedRightShiftBitOp(Node* node)
+{
+ J_JITOperation_EJJ snippetSlowPathFunction = node->op() == BitRShift
+ ? operationValueBitRShift : operationValueBitURShift;
+ JITRightShiftGenerator::ShiftType shiftType = node->op() == BitRShift
+ ? JITRightShiftGenerator::SignedShift : JITRightShiftGenerator::UnsignedShift;
+
+ Edge& leftChild = node->child1();
+ Edge& rightChild = node->child2();
+
+ if (isKnownNotNumber(leftChild.node()) || isKnownNotNumber(rightChild.node())) {
+ JSValueOperand left(this, leftChild);
+ JSValueOperand right(this, rightChild);
+ JSValueRegs leftRegs = left.jsValueRegs();
+ JSValueRegs rightRegs = right.jsValueRegs();
+#if USE(JSVALUE64)
+ GPRTemporary result(this);
+ JSValueRegs resultRegs = JSValueRegs(result.gpr());
+#else
+ GPRTemporary resultTag(this);
+ GPRTemporary resultPayload(this);
+ JSValueRegs resultRegs = JSValueRegs(resultPayload.gpr(), resultTag.gpr());
+#endif
+ flushRegisters();
+ callOperation(snippetSlowPathFunction, resultRegs, leftRegs, rightRegs);
+ m_jit.exceptionCheck();
+
+ jsValueResult(resultRegs, node);
+ return;
+ }
+
+ Optional<JSValueOperand> left;
+ Optional<JSValueOperand> right;
+
+ JSValueRegs leftRegs;
+ JSValueRegs rightRegs;
+
+ FPRTemporary leftNumber(this);
+ FPRReg leftFPR = leftNumber.fpr();
+
+#if USE(JSVALUE64)
+ GPRTemporary result(this);
+ JSValueRegs resultRegs = JSValueRegs(result.gpr());
+ GPRTemporary scratch(this);
+ GPRReg scratchGPR = scratch.gpr();
+ FPRReg scratchFPR = InvalidFPRReg;
+#else
+ GPRTemporary resultTag(this);
+ GPRTemporary resultPayload(this);
+ JSValueRegs resultRegs = JSValueRegs(resultPayload.gpr(), resultTag.gpr());
+ GPRReg scratchGPR = resultTag.gpr();
+ FPRTemporary fprScratch(this);
+ FPRReg scratchFPR = fprScratch.fpr();
+#endif
+
+ SnippetOperand leftOperand;
+ SnippetOperand rightOperand;
+
+ // The snippet generator does not support both operands being constant. If the left
+ // operand is already const, we'll ignore the right operand's constness.
+ if (leftChild->isInt32Constant())
+ leftOperand.setConstInt32(leftChild->asInt32());
+ else if (rightChild->isInt32Constant())
+ rightOperand.setConstInt32(rightChild->asInt32());
+
+ RELEASE_ASSERT(!leftOperand.isConst() || !rightOperand.isConst());
+
+ if (!leftOperand.isConst()) {
+ left = JSValueOperand(this, leftChild);
+ leftRegs = left->jsValueRegs();
+ }
+ if (!rightOperand.isConst()) {
+ right = JSValueOperand(this, rightChild);
+ rightRegs = right->jsValueRegs();
+ }
+
+ JITRightShiftGenerator gen(leftOperand, rightOperand, resultRegs, leftRegs, rightRegs,
+ leftFPR, scratchGPR, scratchFPR, shiftType);
+ gen.generateFastPath(m_jit);
+
+ ASSERT(gen.didEmitFastPath());
+ gen.endJumpList().append(m_jit.jump());
+
+ gen.slowPathJumpList().link(&m_jit);
+ silentSpillAllRegisters(resultRegs);
+
+ if (leftOperand.isConst()) {
+ leftRegs = resultRegs;
+ m_jit.moveValue(leftChild->asJSValue(), leftRegs);
+ } else if (rightOperand.isConst()) {
+ rightRegs = resultRegs;
+ m_jit.moveValue(rightChild->asJSValue(), rightRegs);
+ }
+
+ callOperation(snippetSlowPathFunction, resultRegs, leftRegs, rightRegs);
+
+ silentFillAllRegisters(resultRegs);
+ m_jit.exceptionCheck();
+
+ gen.endJumpList().link(&m_jit);
+ jsValueResult(resultRegs, node);
+ return;
+}
+
void SpeculativeJIT::compileShiftOp(Node* node)
{
NodeType op = node->op();
Edge& leftChild = node->child1();
Edge& rightChild = node->child2();
+ if (leftChild.useKind() == UntypedUse || rightChild.useKind() == UntypedUse) {
+ switch (op) {
+ case BitLShift:
+ emitUntypedBitOp<JITLeftShiftGenerator, operationValueBitLShift>(node);
+ return;
+ case BitRShift:
+ case BitURShift:
+ emitUntypedRightShiftBitOp(node);
+ return;
+ default:
+ RELEASE_ASSERT_NOT_REACHED();
+ }
+ }
+
if (rightChild->isInt32Constant()) {
SpeculateInt32Operand op1(this, leftChild);
GPRTemporary result(this, Reuse, op1);
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
index 1f6d3ca..3fcfd90 100755
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
@@ -2217,8 +2217,14 @@
void compileValueToInt32(Node*);
void compileUInt32ToNumber(Node*);
void compileDoubleAsInt32(Node*);
+
+ template<typename SnippetGenerator, J_JITOperation_EJJ slowPathFunction>
+ void emitUntypedBitOp(Node*);
void compileBitwiseOp(Node*);
+
+ void emitUntypedRightShiftBitOp(Node*);
void compileShiftOp(Node*);
+
void compileValueAdd(Node*);
void compileArithAdd(Node*);
void compileMakeRope(Node*);
diff --git a/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp b/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
index 2130b60..62b4594 100644
--- a/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
@@ -74,7 +74,7 @@
case BitOr:
handleCommutativity();
- if (m_node->child2()->isInt32Constant() && !m_node->child2()->asInt32()) {
+ if (m_node->child1().useKind() != UntypedUse && m_node->child2()->isInt32Constant() && !m_node->child2()->asInt32()) {
convertToIdentityOverChild1();
break;
}
@@ -88,7 +88,7 @@
case BitLShift:
case BitRShift:
case BitURShift:
- if (m_node->child2()->isInt32Constant() && !(m_node->child2()->asInt32() & 0x1f)) {
+ if (m_node->child1().useKind() != UntypedUse && m_node->child2()->isInt32Constant() && !(m_node->child2()->asInt32() & 0x1f)) {
convertToIdentityOverChild1();
break;
}
diff --git a/Source/JavaScriptCore/ftl/FTLCompileBinaryOp.cpp b/Source/JavaScriptCore/ftl/FTLCompileBinaryOp.cpp
index 48dad9f..6c23d43 100644
--- a/Source/JavaScriptCore/ftl/FTLCompileBinaryOp.cpp
+++ b/Source/JavaScriptCore/ftl/FTLCompileBinaryOp.cpp
@@ -32,8 +32,13 @@
#include "FTLInlineCacheDescriptor.h"
#include "GPRInfo.h"
#include "JITAddGenerator.h"
+#include "JITBitAndGenerator.h"
+#include "JITBitOrGenerator.h"
+#include "JITBitXorGenerator.h"
#include "JITDivGenerator.h"
+#include "JITLeftShiftGenerator.h"
#include "JITMulGenerator.h"
+#include "JITRightShiftGenerator.h"
#include "JITSubGenerator.h"
#include "ScratchRegisterAllocator.h"
@@ -159,6 +164,76 @@
NeedScratchFPR
};
+template<typename SnippetGenerator>
+void generateBinaryBitOpFastPath(BinaryOpDescriptor& ic, CCallHelpers& jit,
+ GPRReg result, GPRReg left, GPRReg right, RegisterSet usedRegisters,
+ CCallHelpers::Jump& done, CCallHelpers::Jump& slowPathStart)
+{
+ ScratchRegisterAllocator allocator(usedRegisters);
+
+ BinarySnippetRegisterContext context(allocator, result, left, right);
+
+ GPRReg scratchGPR = allocator.allocateScratchGPR();
+
+ SnippetGenerator gen(ic.leftOperand(), ic.rightOperand(), JSValueRegs(result),
+ JSValueRegs(left), JSValueRegs(right), scratchGPR);
+
+ unsigned numberOfBytesUsedToPreserveReusedRegisters =
+ allocator.preserveReusedRegistersByPushing(jit, ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
+
+ context.initializeRegisters(jit);
+ gen.generateFastPath(jit);
+
+ ASSERT(gen.didEmitFastPath());
+ gen.endJumpList().link(&jit);
+
+ context.restoreRegisters(jit);
+ allocator.restoreReusedRegistersByPopping(jit, numberOfBytesUsedToPreserveReusedRegisters,
+ ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
+ done = jit.jump();
+
+ gen.slowPathJumpList().link(&jit);
+ context.restoreRegisters(jit);
+ allocator.restoreReusedRegistersByPopping(jit, numberOfBytesUsedToPreserveReusedRegisters,
+ ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
+ slowPathStart = jit.jump();
+}
+
+static void generateRightShiftFastPath(BinaryOpDescriptor& ic, CCallHelpers& jit,
+ GPRReg result, GPRReg left, GPRReg right, RegisterSet usedRegisters,
+ CCallHelpers::Jump& done, CCallHelpers::Jump& slowPathStart,
+ JITRightShiftGenerator::ShiftType shiftType)
+{
+ ScratchRegisterAllocator allocator(usedRegisters);
+
+ BinarySnippetRegisterContext context(allocator, result, left, right);
+
+ FPRReg leftFPR = allocator.allocateScratchFPR();
+ GPRReg scratchGPR = allocator.allocateScratchGPR();
+
+ JITRightShiftGenerator gen(ic.leftOperand(), ic.rightOperand(), JSValueRegs(result),
+ JSValueRegs(left), JSValueRegs(right), leftFPR, scratchGPR, InvalidFPRReg, shiftType);
+
+ unsigned numberOfBytesUsedToPreserveReusedRegisters =
+ allocator.preserveReusedRegistersByPushing(jit, ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
+
+ context.initializeRegisters(jit);
+ gen.generateFastPath(jit);
+
+ ASSERT(gen.didEmitFastPath());
+ gen.endJumpList().link(&jit);
+ context.restoreRegisters(jit);
+ allocator.restoreReusedRegistersByPopping(jit, numberOfBytesUsedToPreserveReusedRegisters,
+ ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
+ done = jit.jump();
+
+ gen.slowPathJumpList().link(&jit);
+ context.restoreRegisters(jit);
+ allocator.restoreReusedRegistersByPopping(jit, numberOfBytesUsedToPreserveReusedRegisters,
+ ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
+ slowPathStart = jit.jump();
+}
+
template<typename BinaryArithOpGenerator, ScratchFPRUsage scratchFPRUsage = DontNeedScratchFPR>
void generateBinaryArithOpFastPath(BinaryOpDescriptor& ic, CCallHelpers& jit,
GPRReg result, GPRReg left, GPRReg right, RegisterSet usedRegisters,
@@ -178,7 +253,7 @@
BinaryArithOpGenerator gen(ic.leftOperand(), ic.rightOperand(), JSValueRegs(result),
JSValueRegs(left), JSValueRegs(right), leftFPR, rightFPR, scratchGPR, scratchFPR);
- auto numberOfBytesUsedToPreserveReusedRegisters =
+ unsigned numberOfBytesUsedToPreserveReusedRegisters =
allocator.preserveReusedRegistersByPushing(jit, ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
context.initializeRegisters(jit);
@@ -188,13 +263,13 @@
gen.endJumpList().link(&jit);
context.restoreRegisters(jit);
allocator.restoreReusedRegistersByPopping(jit, numberOfBytesUsedToPreserveReusedRegisters,
- ScratchRegisterAllocator::ExtraStackSpace::SpaceForCCall);
+ ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
done = jit.jump();
gen.slowPathJumpList().link(&jit);
context.restoreRegisters(jit);
allocator.restoreReusedRegistersByPopping(jit, numberOfBytesUsedToPreserveReusedRegisters,
- ScratchRegisterAllocator::ExtraStackSpace::SpaceForCCall);
+ ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
slowPathStart = jit.jump();
}
@@ -203,6 +278,24 @@
CCallHelpers::Jump& done, CCallHelpers::Jump& slowPathStart)
{
switch (ic.nodeType()) {
+ case BitAnd:
+ generateBinaryBitOpFastPath<JITBitAndGenerator>(ic, jit, result, left, right, usedRegisters, done, slowPathStart);
+ break;
+ case BitOr:
+ generateBinaryBitOpFastPath<JITBitOrGenerator>(ic, jit, result, left, right, usedRegisters, done, slowPathStart);
+ break;
+ case BitXor:
+ generateBinaryBitOpFastPath<JITBitXorGenerator>(ic, jit, result, left, right, usedRegisters, done, slowPathStart);
+ break;
+ case BitLShift:
+ generateBinaryBitOpFastPath<JITLeftShiftGenerator>(ic, jit, result, left, right, usedRegisters, done, slowPathStart);
+ break;
+ case BitRShift:
+ generateRightShiftFastPath(ic, jit, result, left, right, usedRegisters, done, slowPathStart, JITRightShiftGenerator::SignedShift);
+ break;
+ case BitURShift:
+ generateRightShiftFastPath(ic, jit, result, left, right, usedRegisters, done, slowPathStart, JITRightShiftGenerator::UnsignedShift);
+ break;
case ArithDiv:
generateBinaryArithOpFastPath<JITDivGenerator, NeedScratchFPR>(ic, jit, result, left, right, usedRegisters, done, slowPathStart);
break;
diff --git a/Source/JavaScriptCore/ftl/FTLInlineCacheDescriptor.h b/Source/JavaScriptCore/ftl/FTLInlineCacheDescriptor.h
index 89bfd67..852f8a4 100644
--- a/Source/JavaScriptCore/ftl/FTLInlineCacheDescriptor.h
+++ b/Source/JavaScriptCore/ftl/FTLInlineCacheDescriptor.h
@@ -164,6 +164,84 @@
SnippetOperand m_rightOperand;
};
+class BitAndDescriptor : public BinaryOpDescriptor {
+public:
+ BitAndDescriptor(unsigned stackmapID, CodeOrigin codeOrigin, const SnippetOperand& leftOperand, const SnippetOperand& rightOperand)
+ : BinaryOpDescriptor(nodeType(), stackmapID, codeOrigin, icSize(), opName(), slowPathFunction(), leftOperand, rightOperand)
+ { }
+
+ static size_t icSize() { return sizeOfBitAnd(); }
+ static unsigned nodeType() { return DFG::BitAnd; }
+ static const char* opName() { return "BitAnd"; }
+ static J_JITOperation_EJJ slowPathFunction() { return DFG::operationValueBitAnd; }
+ static J_JITOperation_EJJ nonNumberSlowPathFunction() { return slowPathFunction(); }
+};
+
+class BitOrDescriptor : public BinaryOpDescriptor {
+public:
+ BitOrDescriptor(unsigned stackmapID, CodeOrigin codeOrigin, const SnippetOperand& leftOperand, const SnippetOperand& rightOperand)
+ : BinaryOpDescriptor(nodeType(), stackmapID, codeOrigin, icSize(), opName(), slowPathFunction(), leftOperand, rightOperand)
+ { }
+
+ static size_t icSize() { return sizeOfBitOr(); }
+ static unsigned nodeType() { return DFG::BitOr; }
+ static const char* opName() { return "BitOr"; }
+ static J_JITOperation_EJJ slowPathFunction() { return DFG::operationValueBitOr; }
+ static J_JITOperation_EJJ nonNumberSlowPathFunction() { return slowPathFunction(); }
+};
+
+class BitXorDescriptor : public BinaryOpDescriptor {
+public:
+ BitXorDescriptor(unsigned stackmapID, CodeOrigin codeOrigin, const SnippetOperand& leftOperand, const SnippetOperand& rightOperand)
+ : BinaryOpDescriptor(nodeType(), stackmapID, codeOrigin, icSize(), opName(), slowPathFunction(), leftOperand, rightOperand)
+ { }
+
+ static size_t icSize() { return sizeOfBitXor(); }
+ static unsigned nodeType() { return DFG::BitXor; }
+ static const char* opName() { return "BitXor"; }
+ static J_JITOperation_EJJ slowPathFunction() { return DFG::operationValueBitXor; }
+ static J_JITOperation_EJJ nonNumberSlowPathFunction() { return slowPathFunction(); }
+};
+
+class BitLShiftDescriptor : public BinaryOpDescriptor {
+public:
+ BitLShiftDescriptor(unsigned stackmapID, CodeOrigin codeOrigin, const SnippetOperand& leftOperand, const SnippetOperand& rightOperand)
+ : BinaryOpDescriptor(nodeType(), stackmapID, codeOrigin, icSize(), opName(), slowPathFunction(), leftOperand, rightOperand)
+ { }
+
+ static size_t icSize() { return sizeOfBitLShift(); }
+ static unsigned nodeType() { return DFG::BitLShift; }
+ static const char* opName() { return "BitLShift"; }
+ static J_JITOperation_EJJ slowPathFunction() { return DFG::operationValueBitLShift; }
+ static J_JITOperation_EJJ nonNumberSlowPathFunction() { return slowPathFunction(); }
+};
+
+class BitRShiftDescriptor : public BinaryOpDescriptor {
+public:
+ BitRShiftDescriptor(unsigned stackmapID, CodeOrigin codeOrigin, const SnippetOperand& leftOperand, const SnippetOperand& rightOperand)
+ : BinaryOpDescriptor(nodeType(), stackmapID, codeOrigin, icSize(), opName(), slowPathFunction(), leftOperand, rightOperand)
+ { }
+
+ static size_t icSize() { return sizeOfBitRShift(); }
+ static unsigned nodeType() { return DFG::BitRShift; }
+ static const char* opName() { return "BitRShift"; }
+ static J_JITOperation_EJJ slowPathFunction() { return DFG::operationValueBitRShift; }
+ static J_JITOperation_EJJ nonNumberSlowPathFunction() { return slowPathFunction(); }
+};
+
+class BitURShiftDescriptor : public BinaryOpDescriptor {
+public:
+ BitURShiftDescriptor(unsigned stackmapID, CodeOrigin codeOrigin, const SnippetOperand& leftOperand, const SnippetOperand& rightOperand)
+ : BinaryOpDescriptor(nodeType(), stackmapID, codeOrigin, icSize(), opName(), slowPathFunction(), leftOperand, rightOperand)
+ { }
+
+ static size_t icSize() { return sizeOfBitURShift(); }
+ static unsigned nodeType() { return DFG::BitURShift; }
+ static const char* opName() { return "BitURShift"; }
+ static J_JITOperation_EJJ slowPathFunction() { return DFG::operationValueBitURShift; }
+ static J_JITOperation_EJJ nonNumberSlowPathFunction() { return slowPathFunction(); }
+};
+
class ArithDivDescriptor : public BinaryOpDescriptor {
public:
ArithDivDescriptor(unsigned stackmapID, CodeOrigin codeOrigin, const SnippetOperand& leftOperand, const SnippetOperand& rightOperand)
diff --git a/Source/JavaScriptCore/ftl/FTLInlineCacheSize.cpp b/Source/JavaScriptCore/ftl/FTLInlineCacheSize.cpp
index 5a291cc..7ad3f4d 100644
--- a/Source/JavaScriptCore/ftl/FTLInlineCacheSize.cpp
+++ b/Source/JavaScriptCore/ftl/FTLInlineCacheSize.cpp
@@ -128,6 +128,108 @@
#endif
}
+size_t sizeOfBitAnd()
+{
+#if CPU(ARM64)
+#ifdef NDEBUG
+ return 180; // ARM64 release.
+#else
+ return 276; // ARM64 debug.
+#endif
+#else // CPU(X86_64)
+#ifdef NDEBUG
+ return 199; // X86_64 release.
+#else
+ return 286; // X86_64 debug.
+#endif
+#endif
+}
+
+size_t sizeOfBitOr()
+{
+#if CPU(ARM64)
+#ifdef NDEBUG
+ return 180; // ARM64 release.
+#else
+ return 276; // ARM64 debug.
+#endif
+#else // CPU(X86_64)
+#ifdef NDEBUG
+ return 199; // X86_64 release.
+#else
+ return 286; // X86_64 debug.
+#endif
+#endif
+}
+
+size_t sizeOfBitXor()
+{
+#if CPU(ARM64)
+#ifdef NDEBUG
+ return 180; // ARM64 release.
+#else
+ return 276; // ARM64 debug.
+#endif
+#else // CPU(X86_64)
+#ifdef NDEBUG
+ return 199; // X86_64 release.
+#else
+ return 286; // X86_64 debug.
+#endif
+#endif
+}
+
+size_t sizeOfBitLShift()
+{
+#if CPU(ARM64)
+#ifdef NDEBUG
+ return 180; // ARM64 release.
+#else
+ return 276; // ARM64 debug.
+#endif
+#else // CPU(X86_64)
+#ifdef NDEBUG
+ return 199; // X86_64 release.
+#else
+ return 286; // X86_64 debug.
+#endif
+#endif
+}
+
+size_t sizeOfBitRShift()
+{
+#if CPU(ARM64)
+#ifdef NDEBUG
+ return 180; // ARM64 release.
+#else
+ return 276; // ARM64 debug.
+#endif
+#else // CPU(X86_64)
+#ifdef NDEBUG
+ return 199; // X86_64 release.
+#else
+ return 286; // X86_64 debug.
+#endif
+#endif
+}
+
+size_t sizeOfBitURShift()
+{
+#if CPU(ARM64)
+#ifdef NDEBUG
+ return 180; // ARM64 release.
+#else
+ return 276; // ARM64 debug.
+#endif
+#else // CPU(X86_64)
+#ifdef NDEBUG
+ return 199; // X86_64 release.
+#else
+ return 286; // X86_64 debug.
+#endif
+#endif
+}
+
size_t sizeOfArithDiv()
{
#if CPU(ARM64)
diff --git a/Source/JavaScriptCore/ftl/FTLInlineCacheSize.h b/Source/JavaScriptCore/ftl/FTLInlineCacheSize.h
index 4930687..cc18fbd 100644
--- a/Source/JavaScriptCore/ftl/FTLInlineCacheSize.h
+++ b/Source/JavaScriptCore/ftl/FTLInlineCacheSize.h
@@ -46,6 +46,12 @@
size_t sizeOfConstructVarargs();
size_t sizeOfConstructForwardVarargs();
size_t sizeOfIn();
+size_t sizeOfBitAnd();
+size_t sizeOfBitOr();
+size_t sizeOfBitXor();
+size_t sizeOfBitLShift();
+size_t sizeOfBitRShift();
+size_t sizeOfBitURShift();
size_t sizeOfArithDiv();
size_t sizeOfArithMul();
size_t sizeOfArithSub();
diff --git a/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp b/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
index f1e55e1..541328f 100644
--- a/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
+++ b/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
@@ -2263,21 +2263,37 @@
void compileBitAnd()
{
+ if (m_node->child1().useKind() == UntypedUse || m_node->child2().useKind() == UntypedUse) {
+ compileUntypedBinaryOp<BitAndDescriptor>();
+ return;
+ }
setInt32(m_out.bitAnd(lowInt32(m_node->child1()), lowInt32(m_node->child2())));
}
void compileBitOr()
{
+ if (m_node->child1().useKind() == UntypedUse || m_node->child2().useKind() == UntypedUse) {
+ compileUntypedBinaryOp<BitOrDescriptor>();
+ return;
+ }
setInt32(m_out.bitOr(lowInt32(m_node->child1()), lowInt32(m_node->child2())));
}
void compileBitXor()
{
+ if (m_node->child1().useKind() == UntypedUse || m_node->child2().useKind() == UntypedUse) {
+ compileUntypedBinaryOp<BitXorDescriptor>();
+ return;
+ }
setInt32(m_out.bitXor(lowInt32(m_node->child1()), lowInt32(m_node->child2())));
}
void compileBitRShift()
{
+ if (m_node->child1().useKind() == UntypedUse || m_node->child2().useKind() == UntypedUse) {
+ compileUntypedBinaryOp<BitRShiftDescriptor>();
+ return;
+ }
setInt32(m_out.aShr(
lowInt32(m_node->child1()),
m_out.bitAnd(lowInt32(m_node->child2()), m_out.constInt32(31))));
@@ -2285,6 +2301,10 @@
void compileBitLShift()
{
+ if (m_node->child1().useKind() == UntypedUse || m_node->child2().useKind() == UntypedUse) {
+ compileUntypedBinaryOp<BitLShiftDescriptor>();
+ return;
+ }
setInt32(m_out.shl(
lowInt32(m_node->child1()),
m_out.bitAnd(lowInt32(m_node->child2()), m_out.constInt32(31))));
@@ -2292,6 +2312,10 @@
void compileBitURShift()
{
+ if (m_node->child1().useKind() == UntypedUse || m_node->child2().useKind() == UntypedUse) {
+ compileUntypedBinaryOp<BitURShiftDescriptor>();
+ return;
+ }
setInt32(m_out.lShr(
lowInt32(m_node->child1()),
m_out.bitAnd(lowInt32(m_node->child2()), m_out.constInt32(31))));
diff --git a/Source/JavaScriptCore/jit/JITLeftShiftGenerator.cpp b/Source/JavaScriptCore/jit/JITLeftShiftGenerator.cpp
index 0513c79..1ddaa6a 100644
--- a/Source/JavaScriptCore/jit/JITLeftShiftGenerator.cpp
+++ b/Source/JavaScriptCore/jit/JITLeftShiftGenerator.cpp
@@ -32,6 +32,14 @@
void JITLeftShiftGenerator::generateFastPath(CCallHelpers& jit)
{
+ ASSERT(m_scratchGPR != InvalidGPRReg);
+ ASSERT(m_scratchGPR != m_left.payloadGPR());
+ ASSERT(m_scratchGPR != m_right.payloadGPR());
+#if USE(JSVALUE32_64)
+ ASSERT(m_scratchGPR != m_left.tagGPR());
+ ASSERT(m_scratchGPR != m_right.tagGPR());
+#endif
+
ASSERT(!m_leftOperand.isConstInt32() || !m_rightOperand.isConstInt32());
m_didEmitFastPath = true;
@@ -47,6 +55,12 @@
// Try to do (intConstant << intVar) or (intVar << intVar).
m_slowPathJumpList.append(jit.branchIfNotInt32(m_right));
+ GPRReg rightOperandGPR = m_right.payloadGPR();
+ if (rightOperandGPR == m_result.payloadGPR()) {
+ jit.move(rightOperandGPR, m_scratchGPR);
+ rightOperandGPR = m_scratchGPR;
+ }
+
if (m_leftOperand.isConstInt32()) {
#if USE(JSVALUE32_64)
jit.move(m_right.tagGPR(), m_result.tagGPR());
@@ -57,7 +71,7 @@
jit.moveValueRegs(m_left, m_result);
}
- jit.lshift32(m_right.payloadGPR(), m_result.payloadGPR());
+ jit.lshift32(rightOperandGPR, m_result.payloadGPR());
}
#if USE(JSVALUE64)
diff --git a/Source/JavaScriptCore/jit/JITLeftShiftGenerator.h b/Source/JavaScriptCore/jit/JITLeftShiftGenerator.h
index a712e63..633bcb3 100644
--- a/Source/JavaScriptCore/jit/JITLeftShiftGenerator.h
+++ b/Source/JavaScriptCore/jit/JITLeftShiftGenerator.h
@@ -35,8 +35,8 @@
class JITLeftShiftGenerator : public JITBitBinaryOpGenerator {
public:
JITLeftShiftGenerator(const SnippetOperand& leftOperand, const SnippetOperand& rightOperand,
- JSValueRegs result, JSValueRegs left, JSValueRegs right, GPRReg unused = InvalidGPRReg)
- : JITBitBinaryOpGenerator(leftOperand, rightOperand, result, left, right, unused)
+ JSValueRegs result, JSValueRegs left, JSValueRegs right, GPRReg scratchGPR)
+ : JITBitBinaryOpGenerator(leftOperand, rightOperand, result, left, right, scratchGPR)
{ }
void generateFastPath(CCallHelpers&);
diff --git a/Source/JavaScriptCore/jit/JITRightShiftGenerator.cpp b/Source/JavaScriptCore/jit/JITRightShiftGenerator.cpp
index c6ac3b1..4e75faf 100644
--- a/Source/JavaScriptCore/jit/JITRightShiftGenerator.cpp
+++ b/Source/JavaScriptCore/jit/JITRightShiftGenerator.cpp
@@ -87,21 +87,27 @@
// Try to do (intConstant >> intVar) or (intVar >> intVar).
m_slowPathJumpList.append(jit.branchIfNotInt32(m_right));
+ GPRReg rightOperandGPR = m_right.payloadGPR();
+ if (rightOperandGPR == m_result.payloadGPR())
+ rightOperandGPR = m_scratchGPR;
+
CCallHelpers::Jump leftNotInt;
if (m_leftOperand.isConstInt32()) {
+ jit.move(m_right.payloadGPR(), rightOperandGPR);
#if USE(JSVALUE32_64)
jit.move(m_right.tagGPR(), m_result.tagGPR());
#endif
jit.move(CCallHelpers::Imm32(m_leftOperand.asConstInt32()), m_result.payloadGPR());
} else {
leftNotInt = jit.branchIfNotInt32(m_left);
+ jit.move(m_right.payloadGPR(), rightOperandGPR);
jit.moveValueRegs(m_left, m_result);
}
if (m_shiftType == SignedShift)
- jit.rshift32(m_right.payloadGPR(), m_result.payloadGPR());
+ jit.rshift32(rightOperandGPR, m_result.payloadGPR());
else
- jit.urshift32(m_right.payloadGPR(), m_result.payloadGPR());
+ jit.urshift32(rightOperandGPR, m_result.payloadGPR());
#if USE(JSVALUE64)
jit.or64(GPRInfo::tagTypeNumberRegister, m_result.payloadGPR());
#endif