Profiling should detect when multiplication overflows but does not create negative zero.
https://bugs.webkit.org/show_bug.cgi?id=132470

Reviewed by Geoffrey Garen.

* assembler/MacroAssemblerARM64.h:
(JSC::MacroAssemblerARM64::or32):
* assembler/MacroAssemblerARMv7.h:
(JSC::MacroAssemblerARMv7::or32):
- New or32 emitter needed by the mul snippet.
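
  For example, the mul snippet uses it to OR observed-result flag bits straight
  into the profile's flags word in memory (as emitted in JITMulGenerator below):

      jit.or32(CCallHelpers::TrustedImm32(ResultProfile::NegZeroDouble),
          CCallHelpers::AbsoluteAddress(m_resultProfile->addressOfFlags()));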

* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::resultProfileForBytecodeOffset):
(JSC::CodeBlock::updateResultProfileForBytecodeOffset): Deleted.
* bytecode/CodeBlock.h:
(JSC::CodeBlock::ensureResultProfile):
(JSC::CodeBlock::addResultProfile): Deleted.
(JSC::CodeBlock::likelyToTakeDeepestSlowCase): Deleted.
- Added an m_bytecodeOffsetToResultProfileIndexMap because we can now add result
  profiles in any order (based on runtime execution), not necessarily in bytecode
  order at baseline compilation time.
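
  Lookup now goes through the map, and creation is idempotent (condensed from
  the patch):

      ResultProfile* CodeBlock::ensureResultProfile(int bytecodeOffset)
      {
          ResultProfile* profile = resultProfileForBytecodeOffset(bytecodeOffset);
          if (!profile) {
              m_resultProfiles.append(ResultProfile(bytecodeOffset));
              profile = &m_resultProfiles.last();
              m_bytecodeOffsetToResultProfileIndexMap.add(bytecodeOffset, m_resultProfiles.size() - 1);
          }
          return profile;
      }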

* bytecode/ValueProfile.cpp:
(WTF::printInternal):
* bytecode/ValueProfile.h:
(JSC::ResultProfile::didObserveInt52Overflow):
(JSC::ResultProfile::setObservedInt52Overflow):
- Add the new Int52Overflow flag.
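
  The flag takes the next free bit in ResultProfile's flag field (from
  ValueProfile.h):

      enum ObservedResults {
          NonNegZeroDouble = 1 << 0,
          NegZeroDouble    = 1 << 1,
          NonNumber        = 1 << 2,
          Int32Overflow    = 1 << 3,
          Int52Overflow    = 1 << 4, // new in this patch
      };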

* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::makeSafe):
- Now does a more straightforward mapping of the profiling info onto node flags.
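
  For ArithMul, each observed-result bit now maps onto its node flag directly
  (condensed from the patch; the exit-site checks are abbreviated here):

      ResultProfile& resultProfile = *profiledBlock->resultProfileForBytecodeOffset(m_currentIndex);
      if (resultProfile.didObserveInt52Overflow())
          node->mergeFlags(NodeMayOverflowInt52);
      if (resultProfile.didObserveInt32Overflow() || hasExitSite(Overflow))
          node->mergeFlags(NodeMayOverflowInt32InBaseline);
      if (resultProfile.didObserveNegZeroDouble() || hasExitSite(NegativeZero))
          node->mergeFlags(NodeMayNegZeroInBaseline);
      if (resultProfile.didObserveNonInt32())
          node->mergeFlags(NodeMayHaveNonIntResult);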

* dfg/DFGCommon.h:
- Fixed a typo in a comment.

* dfg/DFGNode.h:
(JSC::DFG::Node::arithNodeFlags):
(JSC::DFG::Node::mayHaveNonIntResult):
(JSC::DFG::Node::hasConstantBuffer):
* dfg/DFGNodeFlags.cpp:
(JSC::DFG::dumpNodeFlags):
* dfg/DFGNodeFlags.h:
(JSC::DFG::nodeMayOverflowInt52):
(JSC::DFG::nodeCanSpeculateInt52):
* dfg/DFGPredictionPropagationPhase.cpp:
(JSC::DFG::PredictionPropagationPhase::propagate):
- We now have profiling info for whether the result was ever seen to be a non-Int.
  Use this to make a better prediction.
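
  Concretely, when Int52/double speculation doesn't apply, the prediction no
  longer unconditionally includes double (from DFGPredictionPropagationPhase.cpp):

      if (node->mayHaveNonIntResult())
          changed |= mergePrediction(SpecInt32 | SpecBytecodeDouble);
      else
          changed |= mergePrediction(SpecInt32);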

* jit/JITArithmetic.cpp:
(JSC::JIT::emit_op_div):
(JSC::JIT::emit_op_mul):
- Switch to using CodeBlock::ensureResultProfile().  ResultProfiles can now be
  created at any time (including the slow path), not just in bytecode order
  during baseline compilation.
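
  Both emitters now fetch-or-create the profile the same way:

      ResultProfile* resultProfile = nullptr;
      if (shouldEmitProfiling())
          resultProfile = m_codeBlock->ensureResultProfile(m_bytecodeOffset);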

* jit/JITMulGenerator.cpp:
(JSC::JITMulGenerator::generateFastPath):
- Removed the fast path profiling code for NegZero because we'll go to the slow
  path anyway.  Let the slow path do the profiling for us.
- Added profiling for NegZero and potential Int52 overflows in the fast path
  that does double math.
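
  The emitted Int52 check just inspects the double's biased exponent.  A minimal
  C++ restatement of that test (an illustrative sketch, not code from this
  patch):

      #include <cstdint>
      #include <cstring>

      static bool mayOverflowInt52(double result)
      {
          uint64_t bits;
          std::memcpy(&bits, &result, sizeof(bits)); // reinterpret the double's bits
          unsigned exponent = (bits >> 52) & 0x7ff;  // biased exponent field
          // Exponents above 0x431 (= 1023 + 50) mean |result| >= 2^51, i.e.
          // outside the Int52 range; -2^51 is the intentional false positive.
          return exponent > 0x431;
      }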

* runtime/CommonSlowPaths.cpp:
(JSC::updateResultProfileForBinaryArithOp):
- Removed the RETURN_WITH_RESULT_PROFILING macro (two fewer macros), and just use
  the RETURN_WITH_PROFILING macro instead with a call to
  updateResultProfileForBinaryArithOp().  This makes it clear what we do for
  profiling in each case, and also allows us to do custom profiling for
  each opcode if needed.  However, so far, we always call
  updateResultProfileForBinaryArithOp().
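
  Each binary arith slow path now reads like this (from slow_path_mul):

      JSValue result = jsNumber(a * b);
      RETURN_WITH_PROFILING(result, {
          updateResultProfileForBinaryArithOp(exec, pc, result, left, right);
      });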



git-svn-id: http://svn.webkit.org/repository/webkit/trunk@194613 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index e96948f..921415e 100644
--- a/Source/JavaScriptCore/ChangeLog
+++ b/Source/JavaScriptCore/ChangeLog
@@ -1,3 +1,78 @@
+2016-01-04  Mark Lam  <mark.lam@apple.com>
+
+        Profiling should detect when multiplication overflows but does not create negative zero.
+        https://bugs.webkit.org/show_bug.cgi?id=132470
+
+        Reviewed by Geoffrey Garen.
+
+        * assembler/MacroAssemblerARM64.h:
+        (JSC::MacroAssemblerARM64::or32):
+        * assembler/MacroAssemblerARMv7.h:
+        (JSC::MacroAssemblerARMv7::or32):
+        - New or32 emitter needed by the mul snippet.
+
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::resultProfileForBytecodeOffset):
+        (JSC::CodeBlock::updateResultProfileForBytecodeOffset): Deleted.
+        * bytecode/CodeBlock.h:
+        (JSC::CodeBlock::ensureResultProfile):
+        (JSC::CodeBlock::addResultProfile): Deleted.
+        (JSC::CodeBlock::likelyToTakeDeepestSlowCase): Deleted.
+        - Added an m_bytecodeOffsetToResultProfileIndexMap because we can now add result
+          profiles in any order (based on runtime execution), not necessarily in bytecode
+          order at baseline compilation time.
+
+        * bytecode/ValueProfile.cpp:
+        (WTF::printInternal):
+        * bytecode/ValueProfile.h:
+        (JSC::ResultProfile::didObserveInt52Overflow):
+        (JSC::ResultProfile::setObservedInt52Overflow):
+        - Add the new Int52Overflow flag.
+
+        * dfg/DFGByteCodeParser.cpp:
+        (JSC::DFG::ByteCodeParser::makeSafe):
+        - Now does a more straightforward mapping of the profiling info onto node flags.
+
+        * dfg/DFGCommon.h:
+        - Fixed a typo in a comment.
+
+        * dfg/DFGNode.h:
+        (JSC::DFG::Node::arithNodeFlags):
+        (JSC::DFG::Node::mayHaveNonIntResult):
+        (JSC::DFG::Node::hasConstantBuffer):
+        * dfg/DFGNodeFlags.cpp:
+        (JSC::DFG::dumpNodeFlags):
+        * dfg/DFGNodeFlags.h:
+        (JSC::DFG::nodeMayOverflowInt52):
+        (JSC::DFG::nodeCanSpeculateInt52):
+        * dfg/DFGPredictionPropagationPhase.cpp:
+        (JSC::DFG::PredictionPropagationPhase::propagate):
+        - We now have profiling info for whether the result was ever seen to be a non-Int.
+          Use this to make a better prediction.
+
+        * jit/JITArithmetic.cpp:
+        (JSC::JIT::emit_op_div):
+        (JSC::JIT::emit_op_mul):
+        - Switch to using CodeBlock::ensureResultProfile().  ResultProfiles can now be
+          created at any time (including the slow path), not just in bytecode order
+          during baseline compilation.
+
+        * jit/JITMulGenerator.cpp:
+        (JSC::JITMulGenerator::generateFastPath):
+        - Removed the fast path profiling code for NegZero because we'll go to the slow
+          path anyway.  Let the slow path do the profiling for us.
+        - Added profiling for NegZero and potential Int52 overflows in the fast path
+          that does double math.
+
+        * runtime/CommonSlowPaths.cpp:
+        (JSC::updateResultProfileForBinaryArithOp):
+        - Removed the RETURN_WITH_RESULT_PROFILING macro (two fewer macros), and just use
+          the RETURN_WITH_PROFILING macro instead with a call to
+          updateResultProfileForBinaryArithOp().  This makes it clear what we do for
+          profiling in each case, and also allows us to do custom profiling for
+          each opcode if needed.  However, so far, we always call
+          updateResultProfileForBinaryArithOp().
+
 2016-01-05  Keith Miller  <keith_miller@apple.com>
 
         [ES6] Arrays should be subclassable.
diff --git a/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h b/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
index aea666e..baf71147 100644
--- a/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
+++ b/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
@@ -523,6 +523,7 @@
             return;
         }
 
+        ASSERT(src != dataTempRegister);
         move(imm, getCachedDataTempRegisterIDAndInvalidate());
         m_assembler.orr<32>(dest, src, dataTempRegister);
     }
@@ -534,6 +535,13 @@
         store32(dataTempRegister, address.m_ptr);
     }
 
+    void or32(TrustedImm32 imm, AbsoluteAddress address)
+    {
+        load32(address.m_ptr, getCachedMemoryTempRegisterIDAndInvalidate());
+        or32(imm, memoryTempRegister, memoryTempRegister);
+        store32(memoryTempRegister, address.m_ptr);
+    }
+
     void or32(TrustedImm32 imm, Address address)
     {
         load32(address, getCachedDataTempRegisterIDAndInvalidate());
diff --git a/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h b/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h
index 623f2b6..a023367 100644
--- a/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h
+++ b/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h
@@ -352,6 +352,14 @@
         store32(dataTempRegister, addressTempRegister);
     }
 
+    void or32(TrustedImm32 imm, AbsoluteAddress address)
+    {
+        move(TrustedImmPtr(address.m_ptr), addressTempRegister);
+        load32(addressTempRegister, dataTempRegister);
+        or32(imm, dataTempRegister, dataTempRegister);
+        store32(dataTempRegister, addressTempRegister);
+    }
+
     void or32(TrustedImm32 imm, Address address)
     {
         load32(address, dataTempRegister);
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.cpp b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
index d13cf6a..0232567 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
@@ -4188,32 +4188,10 @@
 
 ResultProfile* CodeBlock::resultProfileForBytecodeOffset(int bytecodeOffset)
 {
-    return tryBinarySearch<ResultProfile, int>(
-        m_resultProfiles, m_resultProfiles.size(), bytecodeOffset,
-        getResultProfileBytecodeOffset);
-}
-
-void CodeBlock::updateResultProfileForBytecodeOffset(int bytecodeOffset, JSValue result)
-{
-#if ENABLE(DFG_JIT)
-    ResultProfile* profile = resultProfileForBytecodeOffset(bytecodeOffset);
-    if (!profile)
-        profile = addResultProfile(bytecodeOffset);
-
-    if (result.isNumber()) {
-        if (!result.isInt32()) {
-            double doubleVal = result.asNumber();
-            if (!doubleVal && std::signbit(doubleVal))
-                profile->setObservedNegZeroDouble();
-            else
-                profile->setObservedNonNegZeroDouble();
-        }
-    } else
-        profile->setObservedNonNumber();
-#else
-    UNUSED_PARAM(bytecodeOffset);
-    UNUSED_PARAM(result);
-#endif
+    auto iterator = m_bytecodeOffsetToResultProfileIndexMap.find(bytecodeOffset);
+    if (iterator == m_bytecodeOffsetToResultProfileIndexMap.end())
+        return nullptr;
+    return &m_resultProfiles[iterator->value];
 }
 
 #if ENABLE(JIT)
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.h b/Source/JavaScriptCore/bytecode/CodeBlock.h
index dbb974b..ff735c0 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.h
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.h
@@ -452,16 +452,20 @@
         return value >= Options::couldTakeSlowCaseMinimumCount();
     }
 
-    ResultProfile* addResultProfile(int bytecodeOffset)
+    ResultProfile* ensureResultProfile(int bytecodeOffset)
     {
-        m_resultProfiles.append(ResultProfile(bytecodeOffset));
-        return &m_resultProfiles.last();
+        ResultProfile* profile = resultProfileForBytecodeOffset(bytecodeOffset);
+        if (!profile) {
+            m_resultProfiles.append(ResultProfile(bytecodeOffset));
+            profile = &m_resultProfiles.last();
+            ASSERT(&m_resultProfiles.last() == &m_resultProfiles[m_resultProfiles.size() - 1]);
+            m_bytecodeOffsetToResultProfileIndexMap.add(bytecodeOffset, m_resultProfiles.size() - 1);
+        }
+        return profile;
     }
     unsigned numberOfResultProfiles() { return m_resultProfiles.size(); }
     ResultProfile* resultProfileForBytecodeOffset(int bytecodeOffset);
 
-    void updateResultProfileForBytecodeOffset(int bytecodeOffset, JSValue result);
-
     unsigned specialFastCaseProfileCountForBytecodeOffset(int bytecodeOffset)
     {
         ResultProfile* profile = resultProfileForBytecodeOffset(bytecodeOffset);
@@ -478,16 +482,6 @@
         return specialFastCaseCount >= Options::couldTakeSlowCaseMinimumCount();
     }
 
-    bool likelyToTakeDeepestSlowCase(int bytecodeOffset)
-    {
-        if (!hasBaselineJITProfiling())
-            return false;
-        unsigned slowCaseCount = rareCaseProfileCountForBytecodeOffset(bytecodeOffset);
-        unsigned specialFastCaseCount = specialFastCaseProfileCountForBytecodeOffset(bytecodeOffset);
-        unsigned value = slowCaseCount - specialFastCaseCount;
-        return value >= Options::likelyToTakeSlowCaseMinimumCount();
-    }
-
     unsigned numberOfArrayProfiles() const { return m_arrayProfiles.size(); }
     const ArrayProfileVector& arrayProfiles() { return m_arrayProfiles; }
     ArrayProfile* addArrayProfile(unsigned bytecodeOffset)
@@ -1068,6 +1062,7 @@
     Vector<ValueProfile> m_valueProfiles;
     SegmentedVector<RareCaseProfile, 8> m_rareCaseProfiles;
     SegmentedVector<ResultProfile, 8> m_resultProfiles;
+    HashMap<unsigned, unsigned, IntHash<unsigned>, WTF::UnsignedWithZeroKeyHashTraits<unsigned>> m_bytecodeOffsetToResultProfileIndexMap;
     Vector<ArrayAllocationProfile> m_arrayAllocationProfiles;
     ArrayProfileVector m_arrayProfiles;
     Vector<ObjectAllocationProfile> m_objectAllocationProfiles;
diff --git a/Source/JavaScriptCore/bytecode/ValueProfile.cpp b/Source/JavaScriptCore/bytecode/ValueProfile.cpp
index a33fc70..876ce30 100644
--- a/Source/JavaScriptCore/bytecode/ValueProfile.cpp
+++ b/Source/JavaScriptCore/bytecode/ValueProfile.cpp
@@ -54,6 +54,10 @@
             out.print("Int32Overflow");
             separator = "|";
         }
+        if (profile.didObserveInt52Overflow()) {
+            out.print("Int52Overflow");
+            separator = "|";
+        }
     }
     if (profile.specialFastPathCount()) {
         out.print(" special fast path: ");
diff --git a/Source/JavaScriptCore/bytecode/ValueProfile.h b/Source/JavaScriptCore/bytecode/ValueProfile.h
index 64b34e7..48b47da 100644
--- a/Source/JavaScriptCore/bytecode/ValueProfile.h
+++ b/Source/JavaScriptCore/bytecode/ValueProfile.h
@@ -208,7 +208,7 @@
 
 struct ResultProfile {
 private:
-    static const int numberOfFlagBits = 4;
+    static const int numberOfFlagBits = 5;
 
 public:
     ResultProfile(int bytecodeOffset)
@@ -222,6 +222,7 @@
         NegZeroDouble    = 1 << 1,
         NonNumber        = 1 << 2,
         Int32Overflow    = 1 << 3,
+        Int52Overflow    = 1 << 4,
     };
 
     int bytecodeOffset() const { return m_bytecodeOffsetAndFlags >> numberOfFlagBits; }
@@ -233,11 +234,13 @@
     bool didObserveNegZeroDouble() const { return hasBits(NegZeroDouble); }
     bool didObserveNonNumber() const { return hasBits(NonNumber); }
     bool didObserveInt32Overflow() const { return hasBits(Int32Overflow); }
+    bool didObserveInt52Overflow() const { return hasBits(Int52Overflow); }
 
     void setObservedNonNegZeroDouble() { setBit(NonNegZeroDouble); }
     void setObservedNegZeroDouble() { setBit(NegZeroDouble); }
     void setObservedNonNumber() { setBit(NonNumber); }
     void setObservedInt32Overflow() { setBit(Int32Overflow); }
+    void setObservedInt52Overflow() { setBit(Int52Overflow); }
 
     void* addressOfFlags() { return &m_bytecodeOffsetAndFlags; }
     void* addressOfSpecialFastPathCount() { return &m_specialFastPathCount; }
diff --git a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
index acacec9..a22f6cc 100644
--- a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
+++ b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
@@ -909,18 +909,19 @@
             node->mergeFlags(NodeMayNegZeroInBaseline);
             break;
 
-        case ArithMul:
-            // FIXME: We should detect cases where we only overflowed but never created
-            // negative zero.
-            // https://bugs.webkit.org/show_bug.cgi?id=132470
-            if (m_inlineStackTop->m_profiledBlock->likelyToTakeDeepestSlowCase(m_currentIndex)
-                || m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, Overflow))
-                node->mergeFlags(NodeMayOverflowInt32InBaseline | NodeMayNegZeroInBaseline);
-            else if (m_inlineStackTop->m_profiledBlock->likelyToTakeSlowCase(m_currentIndex)
-                || m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, NegativeZero))
+        case ArithMul: {
+            ResultProfile& resultProfile = *m_inlineStackTop->m_profiledBlock->resultProfileForBytecodeOffset(m_currentIndex);
+            if (resultProfile.didObserveInt52Overflow())
+                node->mergeFlags(NodeMayOverflowInt52);
+            if (resultProfile.didObserveInt32Overflow() || m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, Overflow))
+                node->mergeFlags(NodeMayOverflowInt32InBaseline);
+            if (resultProfile.didObserveNegZeroDouble() || m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, NegativeZero))
                 node->mergeFlags(NodeMayNegZeroInBaseline);
+            if (resultProfile.didObserveNonInt32())
+                node->mergeFlags(NodeMayHaveNonIntResult);
             break;
-            
+        }
+
         default:
             RELEASE_ASSERT_NOT_REACHED();
             break;
diff --git a/Source/JavaScriptCore/dfg/DFGCommon.h b/Source/JavaScriptCore/dfg/DFGCommon.h
index 6b17638..b0143d6 100644
--- a/Source/JavaScriptCore/dfg/DFGCommon.h
+++ b/Source/JavaScriptCore/dfg/DFGCommon.h
@@ -107,7 +107,7 @@
 // The prediction propagator effectively does four passes, with the last pass
 // being done by the separate FixuPhase.
 enum PredictionPass {
-    // We're converging in a straght-forward forward flow fixpoint. This is the
+    // We're converging in a straight-forward forward flow fixpoint. This is the
     // most conventional part of the propagator - it makes only monotonic decisions
     // based on value profiles and rare case profiles. It ignores baseline JIT rare
     // case profiles. The goal here is to develop a good guess of which variables
diff --git a/Source/JavaScriptCore/dfg/DFGNode.h b/Source/JavaScriptCore/dfg/DFGNode.h
index 323a420..12870d9 100644
--- a/Source/JavaScriptCore/dfg/DFGNode.h
+++ b/Source/JavaScriptCore/dfg/DFGNode.h
@@ -926,7 +926,12 @@
             return result;
         return result & ~NodeBytecodeNeedsNegZero;
     }
-    
+
+    bool mayHaveNonIntResult()
+    {
+        return m_flags & NodeMayHaveNonIntResult;
+    }
+
     bool hasConstantBuffer()
     {
         return op() == NewArrayBuffer;
diff --git a/Source/JavaScriptCore/dfg/DFGNodeFlags.cpp b/Source/JavaScriptCore/dfg/DFGNodeFlags.cpp
index 47d5519..79f4b43 100644
--- a/Source/JavaScriptCore/dfg/DFGNodeFlags.cpp
+++ b/Source/JavaScriptCore/dfg/DFGNodeFlags.cpp
@@ -85,6 +85,12 @@
             out.print(comma, "UseAsOther");
     }
 
+    if (flags & NodeMayHaveNonIntResult)
+        out.print(comma, "MayHaveNonIntResult");
+
+    if (flags & NodeMayOverflowInt52)
+        out.print(comma, "MayOverflowInt52");
+
     if (flags & NodeMayOverflowInt32InBaseline)
         out.print(comma, "MayOverflowInt32InBaseline");
 
diff --git a/Source/JavaScriptCore/dfg/DFGNodeFlags.h b/Source/JavaScriptCore/dfg/DFGNodeFlags.h
index a319c9c..626b9bb 100644
--- a/Source/JavaScriptCore/dfg/DFGNodeFlags.h
+++ b/Source/JavaScriptCore/dfg/DFGNodeFlags.h
@@ -46,9 +46,10 @@
                                 
 #define NodeMustGenerate                 0x0008 // set on nodes that have side effects, and may not trivially be removed by DCE.
 #define NodeHasVarArgs                   0x0010
-// 0x0020 and 0x0040 are free.
-                                
-#define NodeBehaviorMask                 0x0780
+    
+#define NodeBehaviorMask                 0x07e0
+#define NodeMayHaveNonIntResult          0x0020
+#define NodeMayOverflowInt52             0x0040
 #define NodeMayOverflowInt32InBaseline   0x0080
 #define NodeMayOverflowInt32InDFG        0x0100
 #define NodeMayNegZeroInBaseline         0x0200
@@ -94,6 +95,11 @@
     AllRareCases
 };
 
+static inline bool nodeMayOverflowInt52(NodeFlags flags, RareCaseProfilingSource)
+{
+    return !!(flags & NodeMayOverflowInt52);
+}
+
 static inline bool nodeMayOverflowInt32(NodeFlags flags, RareCaseProfilingSource source)
 {
     NodeFlags mask = 0;
@@ -141,6 +147,9 @@
 
 static inline bool nodeCanSpeculateInt52(NodeFlags flags, RareCaseProfilingSource source)
 {
+    if (nodeMayOverflowInt52(flags, source))
+        return false;
+
     if (nodeMayNegZero(flags, source))
         return bytecodeCanIgnoreNegativeZero(flags);
     
diff --git a/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp b/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
index 04fd274..a670c72 100644
--- a/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
+++ b/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
@@ -343,8 +343,12 @@
                         changed |= mergePrediction(SpecInt52);
                     else
                         changed |= mergePrediction(speculatedDoubleTypeForPredictions(left, right));
-                } else
-                    changed |= mergePrediction(SpecInt32 | SpecBytecodeDouble);
+                } else {
+                    if (node->mayHaveNonIntResult())
+                        changed |= mergePrediction(SpecInt32 | SpecBytecodeDouble);
+                    else
+                        changed |= mergePrediction(SpecInt32);
+                }
             }
             break;
         }
diff --git a/Source/JavaScriptCore/jit/JITArithmetic.cpp b/Source/JavaScriptCore/jit/JITArithmetic.cpp
index 6d217a0..6a02a05 100644
--- a/Source/JavaScriptCore/jit/JITArithmetic.cpp
+++ b/Source/JavaScriptCore/jit/JITArithmetic.cpp
@@ -760,7 +760,7 @@
 
     ResultProfile* resultProfile = nullptr;
     if (shouldEmitProfiling())
-        resultProfile = m_codeBlock->addResultProfile(m_bytecodeOffset);
+        resultProfile = m_codeBlock->ensureResultProfile(m_bytecodeOffset);
 
     SnippetOperand leftOperand(types.first());
     SnippetOperand rightOperand(types.second());
@@ -835,7 +835,7 @@
 
     ResultProfile* resultProfile = nullptr;
     if (shouldEmitProfiling())
-        resultProfile = m_codeBlock->addResultProfile(m_bytecodeOffset);
+        resultProfile = m_codeBlock->ensureResultProfile(m_bytecodeOffset);
 
     SnippetOperand leftOperand(types.first());
     SnippetOperand rightOperand(types.second());
diff --git a/Source/JavaScriptCore/jit/JITMulGenerator.cpp b/Source/JavaScriptCore/jit/JITMulGenerator.cpp
index ef80b5e..9399878 100644
--- a/Source/JavaScriptCore/jit/JITMulGenerator.cpp
+++ b/Source/JavaScriptCore/jit/JITMulGenerator.cpp
@@ -35,6 +35,7 @@
     ASSERT(m_scratchGPR != InvalidGPRReg);
     ASSERT(m_scratchGPR != m_left.payloadGPR());
     ASSERT(m_scratchGPR != m_right.payloadGPR());
+    ASSERT(m_scratchGPR != m_result.payloadGPR());
 #if USE(JSVALUE32_64)
     ASSERT(m_scratchGPR != m_left.tagGPR());
     ASSERT(m_scratchGPR != m_right.tagGPR());
@@ -91,24 +92,7 @@
         rightNotInt = jit.branchIfNotInt32(m_right);
 
         m_slowPathJumpList.append(jit.branchMul32(CCallHelpers::Overflow, m_right.payloadGPR(), m_left.payloadGPR(), m_scratchGPR));
-        if (!m_resultProfile) {
-            m_slowPathJumpList.append(jit.branchTest32(CCallHelpers::Zero, m_scratchGPR)); // Go slow if potential negative zero.
-
-        } else {
-            CCallHelpers::JumpList notNegativeZero;
-            notNegativeZero.append(jit.branchTest32(CCallHelpers::NonZero, m_scratchGPR));
-
-            CCallHelpers::Jump negativeZero = jit.branch32(CCallHelpers::LessThan, m_left.payloadGPR(), CCallHelpers::TrustedImm32(0));
-            notNegativeZero.append(jit.branch32(CCallHelpers::GreaterThanOrEqual, m_right.payloadGPR(), CCallHelpers::TrustedImm32(0)));
-
-            negativeZero.link(&jit);
-            // Record this, so that the speculative JIT knows that we failed speculation
-            // because of a negative zero.
-            jit.add32(CCallHelpers::TrustedImm32(1), CCallHelpers::AbsoluteAddress(m_resultProfile->addressOfSpecialFastPathCount()));
-            m_slowPathJumpList.append(jit.jump());
-
-            notNegativeZero.link(&jit);
-        }
+        m_slowPathJumpList.append(jit.branchTest32(CCallHelpers::Zero, m_scratchGPR)); // Go slow if potential negative zero.
 
         jit.boxInt32(m_scratchGPR, m_result);
         m_endJumpList.append(jit.jump());
@@ -147,7 +131,58 @@
 
     // Do doubleVar * doubleVar.
     jit.mulDouble(m_rightFPR, m_leftFPR);
-    jit.boxDouble(m_leftFPR, m_result);
+
+    if (!m_resultProfile)
+        jit.boxDouble(m_leftFPR, m_result);
+    else {
+        // The Int52 overflow check below intentionally omits 1ll << 51 as a valid negative Int52 value.
+        // Therefore, we will get a false positive if the result is that value. This is intentionally
+        // done to simplify the checking algorithm.
+
+        const int64_t negativeZeroBits = 1ll << 63;
+#if USE(JSVALUE64)
+        jit.moveDoubleTo64(m_leftFPR, m_result.payloadGPR());
+        CCallHelpers::Jump notNegativeZero = jit.branch64(CCallHelpers::NotEqual, m_result.payloadGPR(), CCallHelpers::TrustedImm64(negativeZeroBits));
+
+        jit.or32(CCallHelpers::TrustedImm32(ResultProfile::NegZeroDouble), CCallHelpers::AbsoluteAddress(m_resultProfile->addressOfFlags()));
+        CCallHelpers::Jump done = jit.jump();
+
+        notNegativeZero.link(&jit);
+        jit.or32(CCallHelpers::TrustedImm32(ResultProfile::NonNegZeroDouble), CCallHelpers::AbsoluteAddress(m_resultProfile->addressOfFlags()));
+
+        jit.move(m_result.payloadGPR(), m_scratchGPR);
+        jit.urshiftPtr(CCallHelpers::Imm32(52), m_scratchGPR);
+        jit.and32(CCallHelpers::Imm32(0x7ff), m_scratchGPR);
+        CCallHelpers::Jump noInt52Overflow = jit.branch32(CCallHelpers::LessThanOrEqual, m_scratchGPR, CCallHelpers::TrustedImm32(0x431));
+
+        jit.or32(CCallHelpers::TrustedImm32(ResultProfile::Int52Overflow), CCallHelpers::AbsoluteAddress(m_resultProfile->addressOfFlags()));
+        noInt52Overflow.link(&jit);
+
+        done.link(&jit);
+        jit.sub64(GPRInfo::tagTypeNumberRegister, m_result.payloadGPR()); // Box the double.
+#else
+        jit.boxDouble(m_leftFPR, m_result);
+        CCallHelpers::JumpList notNegativeZero;
+        notNegativeZero.append(jit.branch32(CCallHelpers::NotEqual, m_result.payloadGPR(), CCallHelpers::TrustedImm32(0)));
+        notNegativeZero.append(jit.branch32(CCallHelpers::NotEqual, m_result.tagGPR(), CCallHelpers::TrustedImm32(negativeZeroBits >> 32)));
+
+        jit.or32(CCallHelpers::TrustedImm32(ResultProfile::NegZeroDouble), CCallHelpers::AbsoluteAddress(m_resultProfile->addressOfFlags()));
+        CCallHelpers::Jump done = jit.jump();
+
+        notNegativeZero.link(&jit);
+        jit.or32(CCallHelpers::TrustedImm32(ResultProfile::NonNegZeroDouble), CCallHelpers::AbsoluteAddress(m_resultProfile->addressOfFlags()));
+
+        jit.move(m_result.tagGPR(), m_scratchGPR);
+        jit.urshiftPtr(CCallHelpers::Imm32(52 - 32), m_scratchGPR);
+        jit.and32(CCallHelpers::Imm32(0x7ff), m_scratchGPR);
+        CCallHelpers::Jump noInt52Overflow = jit.branch32(CCallHelpers::LessThanOrEqual, m_scratchGPR, CCallHelpers::TrustedImm32(0x431));
+        
+        jit.or32(CCallHelpers::TrustedImm32(ResultProfile::Int52Overflow), CCallHelpers::AbsoluteAddress(m_resultProfile->addressOfFlags()));
+
+        m_endJumpList.append(noInt52Overflow);
+        m_endJumpList.append(done);
+#endif
+    }
 }
 
 } // namespace JSC
diff --git a/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp b/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
index f0626a1..6f8c754 100644
--- a/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
+++ b/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
@@ -135,15 +135,6 @@
         JSValue::encode(value);                  \
     } while (false)
 
-#define RETURN_WITH_RESULT_PROFILING(value__) \
-    RETURN_WITH_PROFILING(value__, PROFILE_RESULT(returnValue__))
-    
-#define PROFILE_RESULT(value__) do { \
-        CodeBlock* codeBlock = exec->codeBlock();                                   \
-        unsigned bytecodeOffset = codeBlock->bytecodeOffset(pc);                    \
-        codeBlock->updateResultProfileForBytecodeOffset(bytecodeOffset, value__);   \
-    } while (false)
-
 #define CALL_END_IMPL(exec, callTarget) RETURN_TWO((callTarget), (exec))
 
 #define CALL_THROW(exec, pc, exceptionToThrow) do {                     \
@@ -358,19 +349,57 @@
     RETURN(jsNumber(-OP_C(2).jsValue().toNumber(exec)));
 }
 
+#if ENABLE(DFG_JIT)
+static void updateResultProfileForBinaryArithOp(ExecState* exec, Instruction* pc, JSValue result, JSValue left, JSValue right)
+{
+    CodeBlock* codeBlock = exec->codeBlock();
+    unsigned bytecodeOffset = codeBlock->bytecodeOffset(pc);
+    ResultProfile* profile = codeBlock->ensureResultProfile(bytecodeOffset);
+
+    if (result.isNumber()) {
+        if (!result.isInt32()) {
+            if (left.isInt32() && right.isInt32())
+                profile->setObservedInt32Overflow();
+
+            double doubleVal = result.asNumber();
+            if (!doubleVal && std::signbit(doubleVal))
+                profile->setObservedNegZeroDouble();
+            else {
+                profile->setObservedNonNegZeroDouble();
+
+                // The Int52 overflow check here intentionally omits 1ll << 51 as a valid negative Int52 value.
+                // Therefore, we will get a false positive if the result is that value. This is intentionally
+                // done to simplify the checking algorithm.
+                static const int64_t int52OverflowPoint = (1ll << 51);
+                int64_t int64Val = static_cast<int64_t>(std::abs(doubleVal));
+                if (int64Val >= int52OverflowPoint)
+                    profile->setObservedInt52Overflow();
+            }
+        }
+    } else
+        profile->setObservedNonNumber();
+}
+#else
+static void updateResultProfileForBinaryArithOp(ExecState*, Instruction*, JSValue, JSValue, JSValue) { }
+#endif
+
 SLOW_PATH_DECL(slow_path_add)
 {
     BEGIN();
     JSValue v1 = OP_C(2).jsValue();
     JSValue v2 = OP_C(3).jsValue();
-    
+    JSValue result;
+
     if (v1.isString() && !v2.isObject())
-        RETURN_WITH_RESULT_PROFILING(jsString(exec, asString(v1), v2.toString(exec)));
-    
-    if (v1.isNumber() && v2.isNumber())
-        RETURN_WITH_RESULT_PROFILING(jsNumber(v1.asNumber() + v2.asNumber()));
-    
-    RETURN_WITH_RESULT_PROFILING(jsAddSlowCase(exec, v1, v2));
+        result = jsString(exec, asString(v1), v2.toString(exec));
+    else if (v1.isNumber() && v2.isNumber())
+        result = jsNumber(v1.asNumber() + v2.asNumber());
+    else
+        result = jsAddSlowCase(exec, v1, v2);
+
+    RETURN_WITH_PROFILING(result, {
+        updateResultProfileForBinaryArithOp(exec, pc, result, v1, v2);
+    });
 }
 
 // The following arithmetic and bitwise operations need to be sure to run
@@ -380,25 +409,40 @@
 SLOW_PATH_DECL(slow_path_mul)
 {
     BEGIN();
-    double a = OP_C(2).jsValue().toNumber(exec);
-    double b = OP_C(3).jsValue().toNumber(exec);
-    RETURN_WITH_RESULT_PROFILING(jsNumber(a * b));
+    JSValue left = OP_C(2).jsValue();
+    JSValue right = OP_C(3).jsValue();
+    double a = left.toNumber(exec);
+    double b = right.toNumber(exec);
+    JSValue result = jsNumber(a * b);
+    RETURN_WITH_PROFILING(result, {
+        updateResultProfileForBinaryArithOp(exec, pc, result, left, right);
+    });
 }
 
 SLOW_PATH_DECL(slow_path_sub)
 {
     BEGIN();
-    double a = OP_C(2).jsValue().toNumber(exec);
-    double b = OP_C(3).jsValue().toNumber(exec);
-    RETURN_WITH_RESULT_PROFILING(jsNumber(a - b));
+    JSValue left = OP_C(2).jsValue();
+    JSValue right = OP_C(3).jsValue();
+    double a = left.toNumber(exec);
+    double b = right.toNumber(exec);
+    JSValue result = jsNumber(a - b);
+    RETURN_WITH_PROFILING(result, {
+        updateResultProfileForBinaryArithOp(exec, pc, result, left, right);
+    });
 }
 
 SLOW_PATH_DECL(slow_path_div)
 {
     BEGIN();
-    double a = OP_C(2).jsValue().toNumber(exec);
-    double b = OP_C(3).jsValue().toNumber(exec);
-    RETURN_WITH_RESULT_PROFILING(jsNumber(a / b));
+    JSValue left = OP_C(2).jsValue();
+    JSValue right = OP_C(3).jsValue();
+    double a = left.toNumber(exec);
+    double b = right.toNumber(exec);
+    JSValue result = jsNumber(a / b);
+    RETURN_WITH_PROFILING(result, {
+        updateResultProfileForBinaryArithOp(exec, pc, result, left, right);
+    });
 }
 
 SLOW_PATH_DECL(slow_path_mod)