FTL should sink object allocations
https://bugs.webkit.org/show_bug.cgi?id=136330

Reviewed by Oliver Hunt.

Source/JavaScriptCore:

        
This adds a comprehensive infrastructure for sinking object allocations in DFG SSA form. The
ultimate goal of sinking is to sink an allocation "past the points of its death" - i.e. to
eliminate it completely. The way sinking reasons about the CFG means that it resembles a
partial escape analysis: we create paths through a function where some allocation(s) don't
have to be done at all even if there are other paths along which those allocations still have
to happen. It also has side benefits: even if an allocation isn't eliminated along any path,
the act of sinking reduces the number of barriers that have to execute.
        
Because this was a fairly ambitious SSA analysis and transformation, I added a bunch of C++11
sugar to the DFG's internal APIs to allow for easier iteration over blocks, nodes, and
successors, and more functor-based entry points so that more of the code can use lambdas.
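
For instance, a phase can now walk the graph in this style (a minimal sketch using only the
iterables and functor helpers added in this patch; it assumes the usual DFG phase context,
in which m_graph is a Graph&):

    void dumpChildrenAndSuccessors()
    {
        // Walk blocks in natural order, then each block's nodes, children, and successors.
        for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
            for (Node* node : *block) {
                m_graph.doToChildren(
                    node,
                    [&] (Edge edge) {
                        dataLog("Child of ", node, ": ", edge.node(), "\n");
                    });
            }
            for (BasicBlock* successor : block->successors())
                dataLog("Successor of ", pointerDump(block), ": ", pointerDump(successor), "\n");
        }
    }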
        
This is just the beginning. This bug has a bunch of follow-up bugs that depend on it. So far
this is a spectacular speed-up on microbenchmarks, but it's still too limited to affect the
big benchmarks. For example, doing o == p makes the sinking phase think that o and p escape.
That's just an omission, and there are likely others; we can easily fix them. I think it's
best to land it in its current form and then worry about the big benchmarks in subsequent
work (see bug 137126).

* CMakeLists.txt:
* JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
* JavaScriptCore.xcodeproj/project.pbxproj:
* bytecode/StructureSet.h:
(JSC::StructureSet::iterator::iterator):
(JSC::StructureSet::iterator::operator*):
(JSC::StructureSet::iterator::operator++):
(JSC::StructureSet::iterator::operator==):
(JSC::StructureSet::iterator::operator!=):
(JSC::StructureSet::begin):
(JSC::StructureSet::end):
* dfg/DFGAbstractInterpreter.h:
(JSC::DFG::AbstractInterpreter::phiChildren):
* dfg/DFGAbstractInterpreterInlines.h:
(JSC::DFG::AbstractInterpreter<AbstractStateType>::AbstractInterpreter):
(JSC::DFG::AbstractInterpreter<AbstractStateType>::startExecuting):
(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
(JSC::DFG::AbstractInterpreter<AbstractStateType>::execute):
* dfg/DFGAvailability.h:
(JSC::DFG::Availability::shouldUseNode):
(JSC::DFG::Availability::isFlushUseful):
(JSC::DFG::Availability::isDead):
(JSC::DFG::Availability::operator!=):
* dfg/DFGAvailabilityMap.cpp: Added.
(JSC::DFG::AvailabilityMap::prune):
(JSC::DFG::AvailabilityMap::clear):
(JSC::DFG::AvailabilityMap::dump):
(JSC::DFG::AvailabilityMap::operator==):
(JSC::DFG::AvailabilityMap::merge):
* dfg/DFGAvailabilityMap.h: Added.
(JSC::DFG::AvailabilityMap::forEachAvailability):
* dfg/DFGBasicBlock.cpp:
(JSC::DFG::BasicBlock::SSAData::SSAData):
* dfg/DFGBasicBlock.h:
(JSC::DFG::BasicBlock::begin):
(JSC::DFG::BasicBlock::end):
(JSC::DFG::BasicBlock::SuccessorsIterable::SuccessorsIterable):
(JSC::DFG::BasicBlock::SuccessorsIterable::iterator::iterator):
(JSC::DFG::BasicBlock::SuccessorsIterable::iterator::operator*):
(JSC::DFG::BasicBlock::SuccessorsIterable::iterator::operator++):
(JSC::DFG::BasicBlock::SuccessorsIterable::iterator::operator==):
(JSC::DFG::BasicBlock::SuccessorsIterable::iterator::operator!=):
(JSC::DFG::BasicBlock::SuccessorsIterable::begin):
(JSC::DFG::BasicBlock::SuccessorsIterable::end):
(JSC::DFG::BasicBlock::successors):
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGConstantFoldingPhase.cpp:
(JSC::DFG::ConstantFoldingPhase::foldConstants):
* dfg/DFGDoesGC.cpp:
(JSC::DFG::doesGC):
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::fixupNode):
* dfg/DFGFlushedAt.cpp:
(JSC::DFG::FlushedAt::dump):
* dfg/DFGFlushedAt.h:
(JSC::DFG::FlushedAt::FlushedAt):
* dfg/DFGGraph.cpp:
(JSC::DFG::Graph::dump):
(JSC::DFG::Graph::dumpBlockHeader):
(JSC::DFG::Graph::mergeRelevantToOSR):
(JSC::DFG::Graph::invalidateCFG):
* dfg/DFGGraph.h:
(JSC::DFG::Graph::NaturalBlockIterable::NaturalBlockIterable):
(JSC::DFG::Graph::NaturalBlockIterable::iterator::iterator):
(JSC::DFG::Graph::NaturalBlockIterable::iterator::operator*):
(JSC::DFG::Graph::NaturalBlockIterable::iterator::operator++):
(JSC::DFG::Graph::NaturalBlockIterable::iterator::operator==):
(JSC::DFG::Graph::NaturalBlockIterable::iterator::operator!=):
(JSC::DFG::Graph::NaturalBlockIterable::iterator::findNext):
(JSC::DFG::Graph::NaturalBlockIterable::begin):
(JSC::DFG::Graph::NaturalBlockIterable::end):
(JSC::DFG::Graph::blocksInNaturalOrder):
(JSC::DFG::Graph::doToChildrenWithNode):
(JSC::DFG::Graph::doToChildren):
* dfg/DFGHeapLocation.cpp:
(WTF::printInternal):
* dfg/DFGHeapLocation.h:
* dfg/DFGInsertOSRHintsForUpdate.cpp: Added.
(JSC::DFG::insertOSRHintsForUpdate):
* dfg/DFGInsertOSRHintsForUpdate.h: Added.
* dfg/DFGInsertionSet.h:
(JSC::DFG::InsertionSet::graph):
* dfg/DFGMayExit.cpp:
(JSC::DFG::mayExit):
* dfg/DFGNode.h:
(JSC::DFG::Node::convertToPutByOffsetHint):
(JSC::DFG::Node::convertToPutStructureHint):
(JSC::DFG::Node::convertToPhantomNewObject):
(JSC::DFG::Node::isCellConstant):
(JSC::DFG::Node::castConstant):
(JSC::DFG::Node::hasIdentifier):
(JSC::DFG::Node::hasStorageAccessData):
(JSC::DFG::Node::hasObjectMaterializationData):
(JSC::DFG::Node::objectMaterializationData):
(JSC::DFG::Node::isPhantomObjectAllocation):
* dfg/DFGNodeType.h:
* dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
(JSC::DFG::OSRAvailabilityAnalysisPhase::run):
(JSC::DFG::LocalOSRAvailabilityCalculator::endBlock):
(JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
* dfg/DFGOSRAvailabilityAnalysisPhase.h:
* dfg/DFGObjectAllocationSinkingPhase.cpp: Added.
(JSC::DFG::ObjectAllocationSinkingPhase::ObjectAllocationSinkingPhase):
(JSC::DFG::ObjectAllocationSinkingPhase::run):
(JSC::DFG::ObjectAllocationSinkingPhase::performSinking):
(JSC::DFG::ObjectAllocationSinkingPhase::determineMaterializationPoints):
(JSC::DFG::ObjectAllocationSinkingPhase::placeMaterializationPoints):
(JSC::DFG::ObjectAllocationSinkingPhase::lowerNonReadingOperationsOnPhantomAllocations):
(JSC::DFG::ObjectAllocationSinkingPhase::promoteSunkenFields):
(JSC::DFG::ObjectAllocationSinkingPhase::resolve):
(JSC::DFG::ObjectAllocationSinkingPhase::handleNode):
(JSC::DFG::ObjectAllocationSinkingPhase::createMaterialize):
(JSC::DFG::ObjectAllocationSinkingPhase::populateMaterialize):
(JSC::DFG::performObjectAllocationSinking):
* dfg/DFGObjectAllocationSinkingPhase.h: Added.
* dfg/DFGObjectMaterializationData.cpp: Added.
(JSC::DFG::PhantomPropertyValue::dump):
(JSC::DFG::ObjectMaterializationData::dump):
(JSC::DFG::ObjectMaterializationData::oneWaySimilarityScore):
(JSC::DFG::ObjectMaterializationData::similarityScore):
* dfg/DFGObjectMaterializationData.h: Added.
(JSC::DFG::PhantomPropertyValue::PhantomPropertyValue):
(JSC::DFG::PhantomPropertyValue::operator==):
* dfg/DFGPhantomCanonicalizationPhase.cpp:
(JSC::DFG::PhantomCanonicalizationPhase::run):
* dfg/DFGPhantomRemovalPhase.cpp:
(JSC::DFG::PhantomRemovalPhase::run):
* dfg/DFGPhiChildren.cpp: Added.
(JSC::DFG::PhiChildren::PhiChildren):
(JSC::DFG::PhiChildren::~PhiChildren):
(JSC::DFG::PhiChildren::upsilonsOf):
* dfg/DFGPhiChildren.h: Added.
(JSC::DFG::PhiChildren::forAllIncomingValues):
(JSC::DFG::PhiChildren::forAllTransitiveIncomingValues):
* dfg/DFGPlan.cpp:
(JSC::DFG::Plan::compileInThreadImpl):
* dfg/DFGPrePostNumbering.cpp: Added.
(JSC::DFG::PrePostNumbering::PrePostNumbering):
(JSC::DFG::PrePostNumbering::~PrePostNumbering):
(JSC::DFG::PrePostNumbering::compute):
(WTF::printInternal):
* dfg/DFGPrePostNumbering.h: Added.
(JSC::DFG::PrePostNumbering::preNumber):
(JSC::DFG::PrePostNumbering::postNumber):
(JSC::DFG::PrePostNumbering::isStrictAncestorOf):
(JSC::DFG::PrePostNumbering::isAncestorOf):
(JSC::DFG::PrePostNumbering::isStrictDescendantOf):
(JSC::DFG::PrePostNumbering::isDescendantOf):
(JSC::DFG::PrePostNumbering::edgeKind):
* dfg/DFGPredictionPropagationPhase.cpp:
(JSC::DFG::PredictionPropagationPhase::propagate):
* dfg/DFGPromoteHeapAccess.h: Added.
(JSC::DFG::promoteHeapAccess):
* dfg/DFGPromotedHeapLocation.cpp: Added.
(JSC::DFG::PromotedLocationDescriptor::dump):
(JSC::DFG::PromotedHeapLocation::createHint):
(JSC::DFG::PromotedHeapLocation::dump):
(WTF::printInternal):
* dfg/DFGPromotedHeapLocation.h: Added.
(JSC::DFG::PromotedLocationDescriptor::PromotedLocationDescriptor):
(JSC::DFG::PromotedLocationDescriptor::operator!):
(JSC::DFG::PromotedLocationDescriptor::kind):
(JSC::DFG::PromotedLocationDescriptor::info):
(JSC::DFG::PromotedLocationDescriptor::hash):
(JSC::DFG::PromotedLocationDescriptor::operator==):
(JSC::DFG::PromotedLocationDescriptor::operator!=):
(JSC::DFG::PromotedLocationDescriptor::isHashTableDeletedValue):
(JSC::DFG::PromotedHeapLocation::PromotedHeapLocation):
(JSC::DFG::PromotedHeapLocation::operator!):
(JSC::DFG::PromotedHeapLocation::kind):
(JSC::DFG::PromotedHeapLocation::base):
(JSC::DFG::PromotedHeapLocation::info):
(JSC::DFG::PromotedHeapLocation::descriptor):
(JSC::DFG::PromotedHeapLocation::hash):
(JSC::DFG::PromotedHeapLocation::operator==):
(JSC::DFG::PromotedHeapLocation::isHashTableDeletedValue):
(JSC::DFG::PromotedHeapLocationHash::hash):
(JSC::DFG::PromotedHeapLocationHash::equal):
* dfg/DFGSSACalculator.cpp:
(JSC::DFG::SSACalculator::reset):
* dfg/DFGSSACalculator.h:
* dfg/DFGSafeToExecute.h:
(JSC::DFG::safeToExecute):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::compileCurrentBlock):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGStructureRegistrationPhase.cpp:
(JSC::DFG::StructureRegistrationPhase::run):
* dfg/DFGValidate.cpp:
(JSC::DFG::Validate::validate):
* ftl/FTLCapabilities.cpp:
(JSC::FTL::canCompile):
* ftl/FTLExitPropertyValue.cpp: Added.
(JSC::FTL::ExitPropertyValue::dump):
* ftl/FTLExitPropertyValue.h: Added.
(JSC::FTL::ExitPropertyValue::ExitPropertyValue):
(JSC::FTL::ExitPropertyValue::operator!):
(JSC::FTL::ExitPropertyValue::location):
(JSC::FTL::ExitPropertyValue::value):
* ftl/FTLExitTimeObjectMaterialization.cpp: Added.
(JSC::FTL::ExitTimeObjectMaterialization::ExitTimeObjectMaterialization):
(JSC::FTL::ExitTimeObjectMaterialization::~ExitTimeObjectMaterialization):
(JSC::FTL::ExitTimeObjectMaterialization::add):
(JSC::FTL::ExitTimeObjectMaterialization::get):
(JSC::FTL::ExitTimeObjectMaterialization::dump):
* ftl/FTLExitTimeObjectMaterialization.h: Added.
(JSC::FTL::ExitTimeObjectMaterialization::type):
(JSC::FTL::ExitTimeObjectMaterialization::properties):
* ftl/FTLExitValue.cpp:
(JSC::FTL::ExitValue::materializeNewObject):
(JSC::FTL::ExitValue::dumpInContext):
* ftl/FTLExitValue.h:
(JSC::FTL::ExitValue::isObjectMaterialization):
(JSC::FTL::ExitValue::objectMaterialization):
(JSC::FTL::ExitValue::withVirtualRegister):
(JSC::FTL::ExitValue::valueFormat):
* ftl/FTLLowerDFGToLLVM.cpp:
(JSC::FTL::LowerDFGToLLVM::compileNode):
(JSC::FTL::LowerDFGToLLVM::compileCheckStructure):
(JSC::FTL::LowerDFGToLLVM::compileArrayifyToStructure):
(JSC::FTL::LowerDFGToLLVM::compilePutStructure):
(JSC::FTL::LowerDFGToLLVM::compileNewObject):
(JSC::FTL::LowerDFGToLLVM::compileMultiGetByOffset):
(JSC::FTL::LowerDFGToLLVM::compileMultiPutByOffset):
(JSC::FTL::LowerDFGToLLVM::compileInvalidationPoint):
(JSC::FTL::LowerDFGToLLVM::compileCheckStructureImmediate):
(JSC::FTL::LowerDFGToLLVM::compileMaterializeNewObject):
(JSC::FTL::LowerDFGToLLVM::checkStructure):
(JSC::FTL::LowerDFGToLLVM::allocateCell):
(JSC::FTL::LowerDFGToLLVM::storeStructure):
(JSC::FTL::LowerDFGToLLVM::allocateObject):
(JSC::FTL::LowerDFGToLLVM::speculateStringObjectForStructureID):
(JSC::FTL::LowerDFGToLLVM::appendOSRExit):
(JSC::FTL::LowerDFGToLLVM::buildExitArguments):
(JSC::FTL::LowerDFGToLLVM::exitValueForAvailability):
(JSC::FTL::LowerDFGToLLVM::exitValueForNode):
(JSC::FTL::LowerDFGToLLVM::weakStructureID):
(JSC::FTL::LowerDFGToLLVM::weakStructure):
(JSC::FTL::LowerDFGToLLVM::availabilityMap):
(JSC::FTL::LowerDFGToLLVM::availability): Deleted.
* ftl/FTLOSRExit.h:
* ftl/FTLOSRExitCompiler.cpp:
(JSC::FTL::compileRecovery):
(JSC::FTL::compileStub):
* ftl/FTLOperations.cpp: Added.
(JSC::FTL::operationNewObjectWithButterfly):
(JSC::FTL::operationMaterializeObjectInOSR):
* ftl/FTLOperations.h: Added.
* ftl/FTLSwitchCase.h:
(JSC::FTL::SwitchCase::SwitchCase):
* runtime/JSObject.h:
(JSC::JSObject::finishCreation):
(JSC::JSFinalObject::JSFinalObject):
(JSC::JSFinalObject::create):
* runtime/Structure.cpp:
(JSC::Structure::canUseForAllocationsOf):
* runtime/Structure.h:
* tests/stress/elidable-new-object-roflcopter-then-exit.js: Added.
(sumOfArithSeries):
(foo):
* tests/stress/elide-new-object-dag-then-exit.js: Added.
(sumOfArithSeries):
(bar):
(verify):
(foo):
* tests/stress/obviously-elidable-new-object-then-exit.js: Added.
(sumOfArithSeries):
(foo):

Source/WTF:

        
Make it possible to reset a Bag.
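
For example (a hypothetical usage sketch; the element type here is arbitrary, Bag::add()
already existed, and clear() is what this change adds):

    #include <wtf/Bag.h>

    struct Payload {
        int value { 0 };
    };

    Bag<Payload> bag;
    Payload* payload = bag.add(); // Lives until the Bag is destroyed or cleared.
    payload->value = 42;
    bag.clear(); // New: destroys every element and returns the Bag to its empty state.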

* wtf/Bag.h:
(WTF::Bag::Bag):
(WTF::Bag::~Bag):
(WTF::Bag::clear):

LayoutTests:


* js/math-denorm.html: Added.
* js/regress/elidable-new-object-dag-expected.txt: Added.
* js/regress/elidable-new-object-dag.html: Added.
* js/regress/elidable-new-object-roflcopter-expected.txt: Added.
* js/regress/elidable-new-object-roflcopter.html: Added.
* js/regress/elidable-new-object-tree-expected.txt: Added.
* js/regress/elidable-new-object-tree.html: Added.
* js/regress/obvious-sink-pathology-expected.txt: Added.
* js/regress/obvious-sink-pathology-taken-expected.txt: Added.
* js/regress/obvious-sink-pathology-taken.html: Added.
* js/regress/obvious-sink-pathology.html: Added.
* js/regress/obviously-elidable-new-object-expected.txt: Added.
* js/regress/obviously-elidable-new-object.html: Added.
* js/regress/script-tests/elidable-new-object-dag.js: Added.
(sumOfArithSeries):
(foo):
* js/regress/script-tests/elidable-new-object-roflcopter.js: Added.
(sumOfArithSeries):
(foo):
* js/regress/script-tests/elidable-new-object-tree.js: Added.
(sumOfArithSeries):
(foo):
* js/regress/script-tests/obvious-sink-pathology-taken.js: Added.
(sumOfArithSeries):
(bar):
(foo):
* js/regress/script-tests/obvious-sink-pathology.js: Added.
(sumOfArithSeries):
(bar):
(foo):
* js/regress/script-tests/obviously-elidable-new-object.js: Added.
(sumOfArithSeries):
(foo):
* js/regress/script-tests/sinkable-new-object-dag.js: Added.
(sumOfArithSeries):
(verify):
(foo):
* js/regress/script-tests/sinkable-new-object-taken.js: Added.
(sumOfArithSeries):
(bar):
(foo):
* js/regress/script-tests/sinkable-new-object.js: Added.
(sumOfArithSeries):
(bar):
(foo):
* js/regress/sinkable-new-object-dag-expected.txt: Added.
* js/regress/sinkable-new-object-dag.html: Added.
* js/regress/sinkable-new-object-expected.txt: Added.
* js/regress/sinkable-new-object-taken-expected.txt: Added.
* js/regress/sinkable-new-object-taken.html: Added.
* js/regress/sinkable-new-object.html: Added.



git-svn-id: http://svn.webkit.org/repository/webkit/trunk@173993 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp b/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp
new file mode 100644
index 0000000..13aabac
--- /dev/null
+++ b/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp
@@ -0,0 +1,812 @@
+/*
+ * Copyright (C) 2014 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "DFGObjectAllocationSinkingPhase.h"
+
+#if ENABLE(DFG_JIT)
+
+#include "DFGAbstractHeap.h"
+#include "DFGBlockMapInlines.h"
+#include "DFGClobberize.h"
+#include "DFGGraph.h"
+#include "DFGInsertOSRHintsForUpdate.h"
+#include "DFGInsertionSet.h"
+#include "DFGLivenessAnalysisPhase.h"
+#include "DFGOSRAvailabilityAnalysisPhase.h"
+#include "DFGPhase.h"
+#include "DFGPromoteHeapAccess.h"
+#include "DFGSSACalculator.h"
+#include "DFGValidate.h"
+#include "JSCInlines.h"
+
+namespace JSC { namespace DFG {
+
+static bool verbose = false;
+
+class ObjectAllocationSinkingPhase : public Phase {
+public:
+    ObjectAllocationSinkingPhase(Graph& graph)
+        : Phase(graph, "object allocation sinking")
+        , m_ssaCalculator(graph)
+        , m_insertionSet(graph)
+    {
+    }
+    
+    bool run()
+    {
+        ASSERT(m_graph.m_fixpointState == FixpointNotConverged);
+        
+        m_graph.m_dominators.computeIfNecessary(m_graph);
+        
+        // Logically we wish to consider every NewObject and sink it. However it's probably not
+        // profitable to sink a NewObject that will always escape. So, first we do a very simple
+        // forward flow analysis that determines the set of NewObject nodes that have any chance
+        // of benefiting from object allocation sinking. Then we fixpoint the following rules:
+        //
+        // - For each NewObject, we turn the original NewObject into a PhantomNewObject and then
+        //   we insert MaterializeNewObject just before those escaping sites that come before any
+        //   other escaping sites - that is, there is no path between the allocation and those sites
+        //   that would see any other escape. Note that Upsilons constitute escaping sites. Then we
+        //   insert additional MaterializeNewObject nodes on Upsilons that feed into Phis that mix
+        //   materializations and the original PhantomNewObject. We then turn each PutByOffset over a
+        //   PhantomNewObject into a PutByOffsetHint.
+        //
+        // - We perform the same optimization for MaterializeNewObject. This allows us to cover
+        //   cases where we had MaterializeNewObject flowing into a PutByOffsetHint.
+        //
+        // We could also add this rule:
+        //
+        // - If all of the Upsilons of a Phi have a MaterializeNewObject that isn't used by anyone
+        //   else, then replace the Phi with the MaterializeNewObject.
+        //
+        //   FIXME: Implement this. Note that this is totally doable, but it requires some gnarly
+        //   code, and to be effective the pruner needs to be aware of it. Currently any Upsilon
+        //   is considered to be an escape even by the pruner, so it's unlikely that we'll see
+        //   many cases of Phi over Materializations.
+        //   https://bugs.webkit.org/show_bug.cgi?id=136927
+        
+        if (!performSinking())
+            return false;
+        
+        while (performSinking()) { }
+        
+        if (verbose) {
+            dataLog("Graph after sinking:\n");
+            m_graph.dump();
+        }
+        
+        return true;
+    }
+
+private:
+    bool performSinking()
+    {
+        m_graph.computeRefCounts();
+        performLivenessAnalysis(m_graph);
+        performOSRAvailabilityAnalysis(m_graph);
+        
+        CString graphBeforeSinking;
+        if (Options::verboseValidationFailure() && Options::validateGraphAtEachPhase()) {
+            StringPrintStream out;
+            m_graph.dump(out);
+            graphBeforeSinking = out.toCString();
+        }
+        
+        if (verbose) {
+            dataLog("Graph before sinking:\n");
+            m_graph.dump();
+        }
+        
+        determineMaterializationPoints();
+        if (m_sinkCandidates.isEmpty())
+            return false;
+        
+        // At this point we are committed to sinking the sinking candidates.
+        placeMaterializationPoints();
+        lowerNonReadingOperationsOnPhantomAllocations();
+        promoteSunkenFields();
+        
+        if (Options::validateGraphAtEachPhase())
+            validate(m_graph, DumpGraph, graphBeforeSinking);
+        
+        if (verbose)
+            dataLog("Sinking iteration changed the graph.\n");
+        return true;
+    }
+    
+    void determineMaterializationPoints()
+    {
+        // The premise of this pass is that if there exists a point in the program where some
+        // path from a phantom allocation site to that point causes materialization, then *all*
+        // paths cause materialization. This should mean that there are never any redundant
+        // materializations.
+        
+        m_sinkCandidates.clear();
+        m_edgeToMaterializationPoint.clear();
+        
+        BlockMap<HashMap<Node*, bool>> materializedAtHead(m_graph);
+        BlockMap<HashMap<Node*, bool>> materializedAtTail(m_graph);
+        
+        bool changed;
+        do {
+            if (verbose)
+                dataLog("Doing iteration of materialization point placement.\n");
+            changed = false;
+            for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
+                HashMap<Node*, bool> materialized = materializedAtHead[block];
+                for (Node* node : *block) {
+                    handleNode(
+                        node,
+                        [&] () {
+                            materialized.add(node, false);
+                        },
+                        [&] (Node* escapee) {
+                            auto iter = materialized.find(escapee);
+                            if (iter != materialized.end())
+                                iter->value = true;
+                        });
+                }
+                
+                if (verbose)
+                    dataLog("    Materialized at tail of ", pointerDump(block), ": ", mapDump(materialized), "\n");
+                
+                if (materialized == materializedAtTail[block])
+                    continue;
+                
+                materializedAtTail[block] = materialized;
+                changed = true;
+                
+                // Only propagate things to our successors if they are alive in all successors.
+                // So, we prune materialized-at-tail to only include things that are live.
+                Vector<Node*> toRemove;
+                for (auto pair : materialized) {
+                    if (!block->ssa->liveAtTail.contains(pair.key))
+                        toRemove.append(pair.key);
+                }
+                for (Node* key : toRemove)
+                    materialized.remove(key);
+                
+                for (BasicBlock* successorBlock : block->successors()) {
+                    for (auto pair : materialized) {
+                        materializedAtHead[successorBlock].add(
+                            pair.key, false).iterator->value |= pair.value;
+                    }
+                }
+            }
+        } while (changed);
+        
+        // Determine the sink candidates. Broadly, a sink candidate is a node that handleNode()
+        // believes is sinkable, and one of the following is true:
+        //
+        // 1) There exists a basic block with only backward outgoing edges (or no outgoing edges)
+        //    in which the node wasn't materialized. This is meant to catch effectively-infinite
+        //    loops in which we don't need to have allocated the object.
+        //
+        // 2) There exists a basic block at the tail of which the node is not materialized and the
+        //    node is dead.
+        //
+        // 3) The sum of execution counts of the materializations is less than the sum of
+        //    execution counts of the original node.
+        //
+        // We currently implement only rule #2.
+        // FIXME: Implement the two other rules.
+        // https://bugs.webkit.org/show_bug.cgi?id=137073 (rule #1)
+        // https://bugs.webkit.org/show_bug.cgi?id=137074 (rule #3)
+        
+        for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
+            for (auto pair : materializedAtTail[block]) {
+                if (pair.value)
+                    continue; // It was materialized.
+                
+                if (block->ssa->liveAtTail.contains(pair.key))
+                    continue; // It might still get materialized in all of the successors.
+                
+                // We know that it died in this block and it wasn't materialized. That means that
+                // if we sink this allocation, then *this* will be a path along which we never
+                // have to allocate. Profit!
+                m_sinkCandidates.add(pair.key);
+            }
+        }
+        
+        if (m_sinkCandidates.isEmpty())
+            return;
+        
+        // A materialization edge exists at any point where a node escapes but hasn't been
+        // materialized yet.
+        //
+        // FIXME: This can create duplicate allocations when we really only needed to perform one.
+        // For example:
+        //
+        //     var o = new Object();
+        //     if (rare) {
+        //         if (stuff)
+        //             call(o); // o escapes here.
+        //         return;
+        //     }
+        //     // o doesn't escape down here.
+        //
+        // In this example, we would place a materialization point at call(o) and then we would find
+        // ourselves having to insert another one at the implicit else case of that if statement
+        // ('cause we've broken critical edges). We would instead really like to just have one
+        // materialization point right at the top of the then case of "if (rare)". To do this, we can
+        // find the LCA of the various materializations in the dom tree.
+        // https://bugs.webkit.org/show_bug.cgi?id=137124
+        for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
+            HashSet<Node*> materialized;
+            for (auto pair : materializedAtHead[block]) {
+                if (pair.value && m_sinkCandidates.contains(pair.key))
+                    materialized.add(pair.key);
+            }
+            
+            for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
+                Node* node = block->at(nodeIndex);
+                
+                handleNode(
+                    node,
+                    [&] () { },
+                    [&] (Node* escapee) {
+                        if (!m_sinkCandidates.contains(escapee))
+                            return;
+                        
+                        if (!materialized.add(escapee).isNewEntry)
+                            return;
+                        
+                        Node* materialize = createMaterialize(escapee, node->origin);
+                        if (verbose)
+                            dataLog("Adding materialization point: ", node, "->", escapee, " = ", materialize, "\n");
+                        m_edgeToMaterializationPoint.add(
+                            std::make_pair(node, escapee), materialize);
+                    });
+            }
+        }
+    }
+    
+    void placeMaterializationPoints()
+    {
+        m_ssaCalculator.reset();
+        
+        HashMap<Node*, SSACalculator::Variable*> nodeToVariable;
+        Vector<Node*> indexToNode;
+        
+        for (Node* node : m_sinkCandidates) {
+            SSACalculator::Variable* variable = m_ssaCalculator.newVariable();
+            nodeToVariable.add(node, variable);
+            ASSERT(indexToNode.size() == variable->index());
+            indexToNode.append(node);
+        }
+        
+        for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
+            for (Node* node : *block) {
+                if (SSACalculator::Variable* variable = nodeToVariable.get(node))
+                    m_ssaCalculator.newDef(variable, block, node);
+                
+                m_graph.doToChildren(
+                    node,
+                    [&] (Edge edge) {
+                        Node* materialize =
+                            m_edgeToMaterializationPoint.get(std::make_pair(node, edge.node()));
+                        if (!materialize)
+                            return;
+                        
+                        m_ssaCalculator.newDef(
+                            nodeToVariable.get(edge.node()), block, materialize);
+                    });
+            }
+        }
+        
+        m_ssaCalculator.computePhis(
+            [&] (SSACalculator::Variable* variable, BasicBlock* block) -> Node* {
+                Node* allocation = indexToNode[variable->index()];
+                if (!block->ssa->liveAtHead.contains(allocation))
+                    return nullptr;
+                
+                Node* phiNode = m_graph.addNode(allocation->prediction(), Phi, NodeOrigin());
+                phiNode->mergeFlags(NodeResultJS);
+                return phiNode;
+            });
+        
+        // Place Phis in the right places. Replace all uses of any allocation with the appropriate
+        // materialization. Create the appropriate Upsilon nodes.
+        LocalOSRAvailabilityCalculator availabilityCalculator;
+        for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
+            HashMap<Node*, Node*> mapping;
+            
+            for (Node* candidate : block->ssa->liveAtHead) {
+                SSACalculator::Variable* variable = nodeToVariable.get(candidate);
+                if (!variable)
+                    continue;
+                
+                SSACalculator::Def* def = m_ssaCalculator.reachingDefAtHead(block, variable);
+                if (!def)
+                    continue;
+                
+                mapping.set(indexToNode[variable->index()], def->value());
+            }
+            
+            availabilityCalculator.beginBlock(block);
+            for (SSACalculator::Def* phiDef : m_ssaCalculator.phisForBlock(block)) {
+                m_insertionSet.insert(0, phiDef->value());
+                
+                Node* originalNode = indexToNode[phiDef->variable()->index()];
+                insertOSRHintsForUpdate(
+                    m_insertionSet, 0, NodeOrigin(), availabilityCalculator.m_availability,
+                    originalNode, phiDef->value());
+
+                mapping.set(originalNode, phiDef->value());
+            }
+            
+            for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
+                Node* node = block->at(nodeIndex);
+
+                m_graph.doToChildren(
+                    node,
+                    [&] (Edge edge) {
+                        Node* materialize = m_edgeToMaterializationPoint.get(
+                            std::make_pair(node, edge.node()));
+                        if (materialize) {
+                            m_insertionSet.insert(nodeIndex, materialize);
+                            insertOSRHintsForUpdate(
+                                m_insertionSet, nodeIndex, node->origin,
+                                availabilityCalculator.m_availability, edge.node(), materialize);
+                            mapping.set(edge.node(), materialize);
+                        }
+                    });
+
+                availabilityCalculator.executeNode(node);
+                
+                m_graph.doToChildren(
+                    node,
+                    [&] (Edge& edge) {
+                        if (Node* materialize = mapping.get(edge.node()))
+                            edge.setNode(materialize);
+                    });
+            }
+            
+            size_t upsilonInsertionPoint = block->size() - 1;
+            NodeOrigin upsilonOrigin = block->last()->origin;
+            for (BasicBlock* successorBlock : block->successors()) {
+                for (SSACalculator::Def* phiDef : m_ssaCalculator.phisForBlock(successorBlock)) {
+                    Node* phiNode = phiDef->value();
+                    SSACalculator::Variable* variable = phiDef->variable();
+                    Node* allocation = indexToNode[variable->index()];
+                    
+                    Node* originalIncoming = mapping.get(allocation);
+                    Node* incoming;
+                    if (originalIncoming == allocation) {
+                        // If we have a Phi that combines materializations with the original
+                        // phantom object, then the path with the phantom object must materialize.
+                        
+                        incoming = createMaterialize(allocation, upsilonOrigin);
+                        m_insertionSet.insert(upsilonInsertionPoint, incoming);
+                        insertOSRHintsForUpdate(
+                            m_insertionSet, upsilonInsertionPoint, upsilonOrigin,
+                            availabilityCalculator.m_availability, originalIncoming, incoming);
+                    } else
+                        incoming = originalIncoming;
+                    
+                    Node* upsilon = m_insertionSet.insertNode(
+                        upsilonInsertionPoint, SpecNone, Upsilon, upsilonOrigin,
+                        OpInfo(phiNode), incoming->defaultEdge());
+                    
+                    if (originalIncoming == allocation) {
+                        m_edgeToMaterializationPoint.add(
+                            std::make_pair(upsilon, allocation), incoming);
+                    }
+                }
+            }
+            
+            m_insertionSet.execute(block);
+        }
+        
+        // At this point we have dummy materialization nodes along with edges to them. This means
+        // that the part of the control flow graph that prefers to see actual object allocations
+        // is completely fixed up, except for the materializations themselves.
+    }
+    
+    void lowerNonReadingOperationsOnPhantomAllocations()
+    {
+        // Lower everything but reading operations on phantom allocations. We absolutely have to
+        // lower all writes so as to reveal them to the SSA calculator. We cannot lower reads
+        // because the whole point is that those go away completely.
+        
+        for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
+            for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
+                Node* node = block->at(nodeIndex);
+                switch (node->op()) {
+                case PutByOffset: {
+                    if (m_sinkCandidates.contains(node->child2().node()))
+                        node->convertToPutByOffsetHint();
+                    break;
+                }
+                    
+                case PutStructure: {
+                    if (m_sinkCandidates.contains(node->child1().node())) {
+                        Node* structure = m_insertionSet.insertConstant(
+                            nodeIndex, node->origin, JSValue(node->transition()->next));
+                        node->convertToPutStructureHint(structure);
+                    }
+                    break;
+                }
+                    
+                case NewObject: {
+                    if (m_sinkCandidates.contains(node)) {
+                        Node* structure = m_insertionSet.insertConstant(
+                            nodeIndex + 1, node->origin, JSValue(node->structure()));
+                        m_insertionSet.insertNode(
+                            nodeIndex + 1, SpecNone, PutStructureHint, node->origin,
+                            Edge(node, KnownCellUse), Edge(structure, KnownCellUse));
+                        node->convertToPhantomNewObject();
+                    }
+                    break;
+                }
+                    
+                case MaterializeNewObject: {
+                    if (m_sinkCandidates.contains(node)) {
+                        m_insertionSet.insertNode(
+                            nodeIndex + 1, SpecNone, PutStructureHint, node->origin,
+                            Edge(node, KnownCellUse), m_graph.varArgChild(node, 0));
+                        for (unsigned i = 0; i < node->objectMaterializationData().m_properties.size(); ++i) {
+                            m_insertionSet.insertNode(
+                                nodeIndex + 1, SpecNone, PutByOffsetHint, node->origin,
+                                Edge(node, KnownCellUse), m_graph.varArgChild(node, i + 1));
+                        }
+                        node->convertToPhantomNewObject();
+                    }
+                    break;
+                }
+                    
+                case StoreBarrier:
+                case StoreBarrierWithNullCheck: {
+                    if (m_sinkCandidates.contains(node->child1().node()))
+                        node->convertToPhantom();
+                    break;
+                }
+                    
+                default:
+                    break;
+                }
+                
+                m_graph.doToChildren(
+                    node,
+                    [&] (Edge& edge) {
+                        if (m_sinkCandidates.contains(edge.node()))
+                            edge.setUseKind(KnownCellUse);
+                    });
+            }
+            m_insertionSet.execute(block);
+        }
+    }
+    
+    void promoteSunkenFields()
+    {
+        // Henceforth when we encounter a materialization point, we will want to ask *who* it is
+        // a materialization for. Invert the map to be able to answer such questions.
+        m_materializationPointToEscapee.clear();
+        for (auto pair : m_edgeToMaterializationPoint)
+            m_materializationPointToEscapee.add(pair.value, pair.key.second);
+        
+        // Collect the set of heap locations that we will be operating over.
+        HashSet<PromotedHeapLocation> locations;
+        for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
+            for (Node* node : *block) {
+                promoteHeapAccess(
+                    node,
+                    [&] (PromotedHeapLocation location, Edge) {
+                        locations.add(location);
+                    },
+                    [&] (PromotedHeapLocation location) {
+                        locations.add(location);
+                    });
+            }
+        }
+        
+        // Figure out which locations belong to which allocations.
+        m_locationsForAllocation.clear();
+        for (PromotedHeapLocation location : locations) {
+            auto result = m_locationsForAllocation.add(location.base(), Vector<PromotedHeapLocation>());
+            ASSERT(!result.iterator->value.contains(location));
+            result.iterator->value.append(location);
+        }
+        
+        // For each sunken thingy, make sure we create Bottom values for all of its fields.
+        // Note that this has the hilarious slight inefficiency of creating redundant hints for
+        // things that were previously materializations. This should only impact compile times and
+        // not code quality, and it's necessary for soundness without some data structure hackage.
+        // For example, a MaterializeNewObject that we choose to sink may have new fields added to
+        // it conditionally. That would necessitate Bottoms.
+        Node* bottom = nullptr;
+        for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
+            if (block == m_graph.block(0))
+                bottom = m_insertionSet.insertNode(0, SpecNone, BottomValue, NodeOrigin());
+            
+            for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
+                Node* node = block->at(nodeIndex);
+                for (PromotedHeapLocation location : m_locationsForAllocation.get(node)) {
+                    m_insertionSet.insert(
+                        nodeIndex + 1, location.createHint(m_graph, node->origin, bottom));
+                }
+            }
+            m_insertionSet.execute(block);
+        }
+
+        m_ssaCalculator.reset();
+
+        // Collect the set of "variables" that we will be sinking.
+        m_locationToVariable.clear();
+        m_indexToLocation.clear();
+        for (PromotedHeapLocation location : locations) {
+            SSACalculator::Variable* variable = m_ssaCalculator.newVariable();
+            m_locationToVariable.add(location, variable);
+            ASSERT(m_indexToLocation.size() == variable->index());
+            m_indexToLocation.append(location);
+        }
+        
+        // Create Defs from the existing hints.
+        for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
+            for (Node* node : *block) {
+                promoteHeapAccess(
+                    node,
+                    [&] (PromotedHeapLocation location, Edge value) {
+                        SSACalculator::Variable* variable = m_locationToVariable.get(location);
+                        m_ssaCalculator.newDef(variable, block, value.node());
+                    },
+                    [&] (PromotedHeapLocation) { });
+            }
+        }
+        
+        // OMG run the SSA calculator to create Phis!
+        m_ssaCalculator.computePhis(
+            [&] (SSACalculator::Variable* variable, BasicBlock* block) -> Node* {
+                PromotedHeapLocation location = m_indexToLocation[variable->index()];
+                if (!block->ssa->liveAtHead.contains(location.base()))
+                    return nullptr;
+                
+                Node* phiNode = m_graph.addNode(SpecHeapTop, Phi, NodeOrigin());
+                phiNode->mergeFlags(NodeResultJS);
+                return phiNode;
+            });
+        
+        // Place Phis in the right places, replace all uses of any load with the appropriate
+        // value, and create the appropriate Upsilon nodes.
+        m_graph.clearReplacements();
+        for (BasicBlock* block : m_graph.blocksInPreOrder()) {
+            // This mapping table is intended to be lazy. If something is omitted from the table,
+            // it means that there haven't been any local stores to that promoted heap location.
+            m_localMapping.clear();
+            
+            // Insert the Phi functions that we had previously created.
+            for (SSACalculator::Def* phiDef : m_ssaCalculator.phisForBlock(block)) {
+                PromotedHeapLocation location = m_indexToLocation[phiDef->variable()->index()];
+                
+                m_insertionSet.insert(
+                    0, phiDef->value());
+                m_insertionSet.insert(
+                    0, location.createHint(m_graph, NodeOrigin(), phiDef->value()));
+                m_localMapping.add(location, phiDef->value());
+            }
+            
+            if (verbose)
+                dataLog("Local mapping at ", pointerDump(block), ": ", mapDump(m_localMapping), "\n");
+            
+            // Process the block and replace all uses of loads with the promoted value.
+            for (Node* node : *block) {
+                m_graph.performSubstitution(node);
+                
+                if (Node* escapee = m_materializationPointToEscapee.get(node))
+                    populateMaterialize(block, node, escapee);
+                
+                promoteHeapAccess(
+                    node,
+                    [&] (PromotedHeapLocation location, Edge value) {
+                        m_localMapping.set(location, value.node());
+                    },
+                    [&] (PromotedHeapLocation location) {
+                        node->replaceWith(resolve(block, location));
+                    });
+            }
+            
+            // Gotta drop some Upsilons.
+            size_t upsilonInsertionPoint = block->size() - 1;
+            NodeOrigin upsilonOrigin = block->last()->origin;
+            for (BasicBlock* successorBlock : block->successors()) {
+                for (SSACalculator::Def* phiDef : m_ssaCalculator.phisForBlock(successorBlock)) {
+                    Node* phiNode = phiDef->value();
+                    SSACalculator::Variable* variable = phiDef->variable();
+                    PromotedHeapLocation location = m_indexToLocation[variable->index()];
+                    Node* incoming = resolve(block, location);
+                    
+                    m_insertionSet.insertNode(
+                        upsilonInsertionPoint, SpecNone, Upsilon, upsilonOrigin,
+                        OpInfo(phiNode), incoming->defaultEdge());
+                }
+            }
+            
+            m_insertionSet.execute(block);
+        }
+    }
+    
+    Node* resolve(BasicBlock* block, PromotedHeapLocation location)
+    {
+        if (Node* result = m_localMapping.get(location))
+            return result;
+        
+        // This implies that there is no local mapping. Find a non-local mapping.
+        SSACalculator::Def* def = m_ssaCalculator.nonLocalReachingDef(
+            block, m_locationToVariable.get(location));
+        ASSERT(def);
+        ASSERT(def->value());
+        m_localMapping.add(location, def->value());
+        return def->value();
+    }
+
+    template<typename SinkCandidateFunctor, typename EscapeFunctor>
+    void handleNode(
+        Node* node,
+        const SinkCandidateFunctor& sinkCandidate,
+        const EscapeFunctor& escape)
+    {
+        switch (node->op()) {
+        case NewObject:
+        case MaterializeNewObject:
+            sinkCandidate();
+            m_graph.doToChildren(
+                node,
+                [&] (Edge edge) {
+                    escape(edge.node());
+                });
+            break;
+            
+        case CheckStructure:
+        case GetByOffset:
+        case MultiGetByOffset:
+        case PutStructure:
+        case GetGetterSetterByOffset:
+        case MovHint:
+        case Phantom:
+        case Check:
+        case HardPhantom:
+        case StoreBarrier:
+        case StoreBarrierWithNullCheck:
+        case PutByOffsetHint:
+            break;
+            
+        case PutByOffset:
+            escape(node->child3().node());
+            break;
+            
+        case MultiPutByOffset:
+            // FIXME: In the future we should be able to handle this. It's just a matter of
+            // building the appropriate *Hint variant of this instruction, along with a
+            // PhantomStructureSelect node - since this transforms the Structure in a conditional
+            // way.
+            // https://bugs.webkit.org/show_bug.cgi?id=136924
+            escape(node->child1().node());
+            escape(node->child2().node());
+            break;
+
+        default:
+            m_graph.doToChildren(
+                node,
+                [&] (Edge edge) {
+                    escape(edge.node());
+                });
+            break;
+        }
+    }
+    
+    Node* createMaterialize(Node* escapee, const NodeOrigin whereOrigin)
+    {
+        switch (escapee->op()) {
+        case NewObject:
+        case MaterializeNewObject: {
+            ObjectMaterializationData* data = m_graph.m_objectMaterializationData.add();
+            
+            Node* result = m_graph.addNode(
+                escapee->prediction(), Node::VarArg, MaterializeNewObject,
+                NodeOrigin(
+                    escapee->origin.semantic,
+                    whereOrigin.forExit),
+                OpInfo(data), OpInfo(), 0, 0);
+            return result;
+        }
+            
+        default:
+            DFG_CRASH(m_graph, escapee, "Bad escapee op");
+            return nullptr;
+        }
+    }
+    
+    void populateMaterialize(BasicBlock* block, Node* node, Node* escapee)
+    {
+        switch (node->op()) {
+        case MaterializeNewObject: {
+            ObjectMaterializationData& data = node->objectMaterializationData();
+            unsigned firstChild = m_graph.m_varArgChildren.size();
+            
+            Vector<PromotedHeapLocation> locations = m_locationsForAllocation.get(escapee);
+            
+            PromotedHeapLocation structure(StructurePLoc, escapee);
+            ASSERT(locations.contains(structure));
+            
+            m_graph.m_varArgChildren.append(Edge(resolve(block, structure), KnownCellUse));
+            
+            for (unsigned i = 0; i < locations.size(); ++i) {
+                switch (locations[i].kind()) {
+                case StructurePLoc: {
+                    ASSERT(locations[i] == structure);
+                    break;
+                }
+                    
+                case NamedPropertyPLoc: {
+                    Node* value = resolve(block, locations[i]);
+                    if (value->op() == BottomValue) {
+                        // We can skip Bottoms entirely.
+                        break;
+                    }
+                    
+                    data.m_properties.append(PhantomPropertyValue(locations[i].info()));
+                    m_graph.m_varArgChildren.append(value);
+                    break;
+                }
+                    
+                default:
+                    DFG_CRASH(m_graph, node, "Bad location kind");
+                }
+            }
+            
+            node->children = AdjacencyList(
+                AdjacencyList::Variable,
+                firstChild, m_graph.m_varArgChildren.size() - firstChild);
+            break;
+        }
+            
+        default:
+            DFG_CRASH(m_graph, node, "Bad materialize op");
+            break;
+        }
+    }
+    
+    SSACalculator m_ssaCalculator;
+    HashSet<Node*> m_sinkCandidates;
+    HashMap<std::pair<Node*, Node*>, Node*> m_edgeToMaterializationPoint;
+    HashMap<Node*, Node*> m_materializationPointToEscapee;
+    HashMap<Node*, Vector<PromotedHeapLocation>> m_locationsForAllocation;
+    HashMap<PromotedHeapLocation, SSACalculator::Variable*> m_locationToVariable;
+    Vector<PromotedHeapLocation> m_indexToLocation;
+    HashMap<PromotedHeapLocation, Node*> m_localMapping;
+    InsertionSet m_insertionSet;
+};
+    
+bool performObjectAllocationSinking(Graph& graph)
+{
+    SamplingRegion samplingRegion("DFG Object Allocation Sinking Phase");
+    return runPhase<ObjectAllocationSinkingPhase>(graph);
+}
+
+} } // namespace JSC::DFG
+
+#endif // ENABLE(DFG_JIT)
+