[JSC] Make IsoSubspace scalable
https://bugs.webkit.org/show_bug.cgi?id=201908

Reviewed by Keith Miller.

This patch introduces a lower tier into IsoSubspace so that we can avoid allocating a MarkedBlock
when only a few objects of a given type are allocated. This optimization allows us to apply IsoSubspace
more aggressively to various types of objects without introducing a memory regression, even when
objects of such a type are allocated only rarely.

We use LargeAllocation for these lower-tier objects. Each IsoSubspace holds up to 8 lower-tier objects
allocated via LargeAllocation. We use this special LargeAllocation when only a small number of cells of
this type tend to be allocated. Specifically, the allocation order is now: (1) first, try to allocate in an
existing MarkedBlock (there won't be one to start with); (2) then, try to allocate a lower-tier object via
LargeAllocation; and (3) only if no lower-tier object is available, allocate a new MarkedBlock. Once such a
LargeAllocation is assigned to a certain type, we do not deallocate it until the VM is destroyed, so that we
keep IsoSubspace's characteristic: once an address is assigned to a certain type, it is used only for this type.
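
Schematically, the slow allocation path now looks like the following sketch (abridged from the
LocalAllocator and IsoSubspace changes in this patch; parameters and error handling are omitted, and
allocateFromNewMarkedBlock() is a stand-in for the existing MarkedBlock slow path, not a real function):

    // Sketch only: condensed from LocalAllocator::allocateSlowCase and
    // IsoSubspace::tryAllocateFromLowerTier in the hunks below.
    void* LocalAllocator::allocateSlowCase(/* parameters omitted */)
    {
        // (1) Try an existing MarkedBlock free list; a fresh IsoSubspace has none.
        if (void* result = tryAllocateWithoutCollecting())
            return result;

        // (2) Fall back to one of the (up to 8) lower-tier LargeAllocation cells.
        Subspace* subspace = m_directory->m_subspace;
        if (subspace->isIsoSubspace()) {
            if (void* result = static_cast<IsoSubspace*>(subspace)->tryAllocateFromLowerTier())
                return result;
        }

        // (3) Only now pay for a whole new MarkedBlock.
        return allocateFromNewMarkedBlock(); // stand-in for the existing block path
    }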

To introduce this optimization, we need to remove the restriction that callee cells can never be
LargeAllocations. It turns out that SamplingProfiler's isValueGCObject heavily relies on all callees being
small-sized: isValueGCObject assumes that MarkedSpace::m_largeAllocations is sorted. But this is not true,
since this vector is sorted only when a conservative scan happens. Furthermore, the vector is only partially
sorted: we sort only its eden part. So we cannot use this vector to implement isValueGCObject in the sampling
profiler. Instead, we register each HeapCell address in a hash set in MarkedSpace. Since the sampling profiler
does not need to find a pointer that points into the middle of a JSCell, registering the cell address is
enough. We maintain this hash set only when the sampling profiler is enabled, to save memory in the common case.
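
In sketch form, the bookkeeping for this hash set looks roughly like the following (condensed from the
MarkedSpace, CompleteSubspace, and HeapUtil hunks below; only the large-allocation path of the GC-object
check is shown):

    // Created lazily, only when a SamplingProfiler is constructed.
    void MarkedSpace::enableLargeAllocationTracking()
    {
        m_largeAllocationSet = makeUnique<HashSet<HeapCell*>>();
        for (auto* allocation : m_largeAllocations)
            m_largeAllocationSet->add(allocation->cell());
    }

    // Every LargeAllocation that is created (or revived from the lower-tier free list)
    // registers its cell if the set exists; sweeping an empty one removes it.
    if (auto* set = m_space.largeAllocationSet())
        set->add(allocation->cell());

    // The profiler-side check no longer binary-searches m_largeAllocations:
    if (pointer->isLargeAllocation()) {
        auto* set = heap.objectSpace().largeAllocationSet();
        return set && set->contains(pointer);
    }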

We also fix code that relied on JSString always being allocated in a MarkedBlock, and we fix PackedCellPtr's
assumption that CodeBlock is always allocated in a MarkedBlock.

We also shrink sizeof(LargeAllocation) since it is now also used for non-large allocations.

JetStream2 and Speedometer2 are neutral. RAMification shows a 0.6% progression on iOS devices.

* heap/BlockDirectory.cpp:
(JSC::BlockDirectory::BlockDirectory):
* heap/BlockDirectory.h:
* heap/BlockDirectoryInlines.h:
(JSC::BlockDirectory::tryAllocateFromLowerTier):
* heap/CompleteSubspace.cpp:
(JSC::CompleteSubspace::allocatorForSlow):
(JSC::CompleteSubspace::tryAllocateSlow):
(JSC::CompleteSubspace::reallocateLargeAllocationNonVirtual):
* heap/Heap.cpp:
(JSC::Heap::dumpHeapStatisticsAtVMDestruction):
(JSC::Heap::addCoreConstraints):
* heap/HeapUtil.h:
(JSC::HeapUtil::isPointerGCObjectJSCell):
(JSC::HeapUtil::isValueGCObject):
* heap/IsoAlignedMemoryAllocator.cpp:
(JSC::IsoAlignedMemoryAllocator::tryAllocateMemory):
(JSC::IsoAlignedMemoryAllocator::freeMemory):
(JSC::IsoAlignedMemoryAllocator::tryReallocateMemory):
* heap/IsoCellSet.cpp:
(JSC::IsoCellSet::~IsoCellSet):
* heap/IsoCellSet.h:
* heap/IsoCellSetInlines.h:
(JSC::IsoCellSet::add):
(JSC::IsoCellSet::remove):
(JSC::IsoCellSet::contains const):
(JSC::IsoCellSet::forEachMarkedCell):
(JSC::IsoCellSet::forEachMarkedCellInParallel):
(JSC::IsoCellSet::forEachLiveCell):
(JSC::IsoCellSet::sweepLowerTierCell):
* heap/IsoSubspace.cpp:
(JSC::IsoSubspace::IsoSubspace):
(JSC::IsoSubspace::tryAllocateFromLowerTier):
(JSC::IsoSubspace::sweepLowerTierCell):
* heap/IsoSubspace.h:
* heap/LargeAllocation.cpp:
(JSC::LargeAllocation::tryReallocate):
(JSC::LargeAllocation::createForLowerTier):
(JSC::LargeAllocation::reuseForLowerTier):
(JSC::LargeAllocation::LargeAllocation):
* heap/LargeAllocation.h:
(JSC::LargeAllocation::lowerTierIndex const):
(JSC::LargeAllocation::isLowerTier const):
* heap/LocalAllocator.cpp:
(JSC::LocalAllocator::allocateSlowCase):
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::Handle::Handle):
(JSC::MarkedBlock::Handle::stopAllocating):
* heap/MarkedBlock.h:
(JSC::MarkedBlock::Handle::forEachCell):
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::freeMemory):
(JSC::MarkedSpace::lastChanceToFinalize):
(JSC::MarkedSpace::sweepLargeAllocations):
(JSC::MarkedSpace::enableLargeAllocationTracking):
* heap/MarkedSpace.h:
(JSC::MarkedSpace:: const):
* heap/PackedCellPtr.h:
(JSC::PackedCellPtr::PackedCellPtr):
* heap/Subspace.h:
* heap/WeakSet.cpp:
(JSC::WeakSet::~WeakSet):
(JSC::WeakSet::findAllocator):
(JSC::WeakSet::addAllocator):
* heap/WeakSet.h:
(JSC::WeakSet::WeakSet):
(JSC::WeakSet::resetAllocator):
(JSC::WeakSet::container const): Deleted.
(JSC::WeakSet::setContainer): Deleted.
* heap/WeakSetInlines.h:
(JSC::WeakSet::allocate):
* runtime/InternalFunction.cpp:
(JSC::InternalFunction::InternalFunction):
* runtime/JSCallee.cpp:
(JSC::JSCallee::JSCallee):
* runtime/JSString.h:
* runtime/SamplingProfiler.cpp:
(JSC::SamplingProfiler::SamplingProfiler):
(JSC::SamplingProfiler::processUnverifiedStackTraces):
(JSC::SamplingProfiler::releaseStackTraces):
(JSC::SamplingProfiler::stackTracesAsJSON):
(JSC::SamplingProfiler::reportTopFunctions):
(JSC::SamplingProfiler::reportTopBytecodes):
* runtime/SamplingProfiler.h:

git-svn-id: http://svn.webkit.org/repository/webkit/trunk@252298 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index 501125f..2934634 100644
--- a/Source/JavaScriptCore/ChangeLog
+++ b/Source/JavaScriptCore/ChangeLog
@@ -1,3 +1,124 @@
+2019-11-08  Yusuke Suzuki  <ysuzuki@apple.com>
+
+        [JSC] Make IsoSubspace scalable
+        https://bugs.webkit.org/show_bug.cgi?id=201908
+
+        Reviewed by Keith Miller.
+
+        This patch introduces a lower tier into IsoSubspace so that we can avoid allocating a MarkedBlock
+        when only a few objects of a given type are allocated. This optimization allows us to apply IsoSubspace
+        more aggressively to various types of objects without introducing a memory regression, even when
+        objects of such a type are allocated only rarely.
+
+        We use LargeAllocation for these lower-tier objects. Each IsoSubspace holds up to 8 lower-tier objects
+        allocated via LargeAllocation. We use this special LargeAllocation when only a small number of cells of
+        this type tend to be allocated. Specifically, the allocation order is now: (1) first, try to allocate in an
+        existing MarkedBlock (there won't be one to start with); (2) then, try to allocate a lower-tier object via
+        LargeAllocation; and (3) only if no lower-tier object is available, allocate a new MarkedBlock. Once such a
+        LargeAllocation is assigned to a certain type, we do not deallocate it until the VM is destroyed, so that we
+        keep IsoSubspace's characteristic: once an address is assigned to a certain type, it is used only for this type.
+
+        To introduce this optimization, we need to remove the restriction that callee cells can never be
+        LargeAllocations. It turns out that SamplingProfiler's isValueGCObject heavily relies on all callees being
+        small-sized: isValueGCObject assumes that MarkedSpace::m_largeAllocations is sorted. But this is not true,
+        since this vector is sorted only when a conservative scan happens. Furthermore, the vector is only partially
+        sorted: we sort only its eden part. So we cannot use this vector to implement isValueGCObject in the sampling
+        profiler. Instead, we register each HeapCell address in a hash set in MarkedSpace. Since the sampling profiler
+        does not need to find a pointer that points into the middle of a JSCell, registering the cell address is
+        enough. We maintain this hash set only when the sampling profiler is enabled, to save memory in the common case.
+
+        We also fix code that relied on JSString always being allocated in a MarkedBlock, and we fix PackedCellPtr's
+        assumption that CodeBlock is always allocated in a MarkedBlock.
+
+        We also shrink sizeof(LargeAllocation) since it is now also used for non-large allocations.
+
+        JetStream2 and Speedometer2 are neutral. RAMification shows a 0.6% progression on iOS devices.
+
+        * heap/BlockDirectory.cpp:
+        (JSC::BlockDirectory::BlockDirectory):
+        * heap/BlockDirectory.h:
+        * heap/BlockDirectoryInlines.h:
+        (JSC::BlockDirectory::tryAllocateFromLowerTier):
+        * heap/CompleteSubspace.cpp:
+        (JSC::CompleteSubspace::allocatorForSlow):
+        (JSC::CompleteSubspace::tryAllocateSlow):
+        (JSC::CompleteSubspace::reallocateLargeAllocationNonVirtual):
+        * heap/Heap.cpp:
+        (JSC::Heap::dumpHeapStatisticsAtVMDestruction):
+        (JSC::Heap::addCoreConstraints):
+        * heap/HeapUtil.h:
+        (JSC::HeapUtil::isPointerGCObjectJSCell):
+        (JSC::HeapUtil::isValueGCObject):
+        * heap/IsoAlignedMemoryAllocator.cpp:
+        (JSC::IsoAlignedMemoryAllocator::tryAllocateMemory):
+        (JSC::IsoAlignedMemoryAllocator::freeMemory):
+        (JSC::IsoAlignedMemoryAllocator::tryReallocateMemory):
+        * heap/IsoCellSet.cpp:
+        (JSC::IsoCellSet::~IsoCellSet):
+        * heap/IsoCellSet.h:
+        * heap/IsoCellSetInlines.h:
+        (JSC::IsoCellSet::add):
+        (JSC::IsoCellSet::remove):
+        (JSC::IsoCellSet::contains const):
+        (JSC::IsoCellSet::forEachMarkedCell):
+        (JSC::IsoCellSet::forEachMarkedCellInParallel):
+        (JSC::IsoCellSet::forEachLiveCell):
+        (JSC::IsoCellSet::sweepLowerTierCell):
+        * heap/IsoSubspace.cpp:
+        (JSC::IsoSubspace::IsoSubspace):
+        (JSC::IsoSubspace::tryAllocateFromLowerTier):
+        (JSC::IsoSubspace::sweepLowerTierCell):
+        * heap/IsoSubspace.h:
+        * heap/LargeAllocation.cpp:
+        (JSC::LargeAllocation::tryReallocate):
+        (JSC::LargeAllocation::createForLowerTier):
+        (JSC::LargeAllocation::reuseForLowerTier):
+        (JSC::LargeAllocation::LargeAllocation):
+        * heap/LargeAllocation.h:
+        (JSC::LargeAllocation::lowerTierIndex const):
+        (JSC::LargeAllocation::isLowerTier const):
+        * heap/LocalAllocator.cpp:
+        (JSC::LocalAllocator::allocateSlowCase):
+        * heap/MarkedBlock.cpp:
+        (JSC::MarkedBlock::Handle::Handle):
+        (JSC::MarkedBlock::Handle::stopAllocating):
+        * heap/MarkedBlock.h:
+        (JSC::MarkedBlock::Handle::forEachCell):
+        * heap/MarkedSpace.cpp:
+        (JSC::MarkedSpace::freeMemory):
+        (JSC::MarkedSpace::lastChanceToFinalize):
+        (JSC::MarkedSpace::sweepLargeAllocations):
+        (JSC::MarkedSpace::enableLargeAllocationTracking):
+        * heap/MarkedSpace.h:
+        (JSC::MarkedSpace:: const):
+        * heap/PackedCellPtr.h:
+        (JSC::PackedCellPtr::PackedCellPtr):
+        * heap/Subspace.h:
+        * heap/WeakSet.cpp:
+        (JSC::WeakSet::~WeakSet):
+        (JSC::WeakSet::findAllocator):
+        (JSC::WeakSet::addAllocator):
+        * heap/WeakSet.h:
+        (JSC::WeakSet::WeakSet):
+        (JSC::WeakSet::resetAllocator):
+        (JSC::WeakSet::container const): Deleted.
+        (JSC::WeakSet::setContainer): Deleted.
+        * heap/WeakSetInlines.h:
+        (JSC::WeakSet::allocate):
+        * runtime/InternalFunction.cpp:
+        (JSC::InternalFunction::InternalFunction):
+        * runtime/JSCallee.cpp:
+        (JSC::JSCallee::JSCallee):
+        * runtime/JSString.h:
+        * runtime/SamplingProfiler.cpp:
+        (JSC::SamplingProfiler::SamplingProfiler):
+        (JSC::SamplingProfiler::processUnverifiedStackTraces):
+        (JSC::SamplingProfiler::releaseStackTraces):
+        (JSC::SamplingProfiler::stackTracesAsJSON):
+        (JSC::SamplingProfiler::reportTopFunctions):
+        (JSC::SamplingProfiler::reportTopBytecodes):
+        * runtime/SamplingProfiler.h:
+
 2019-11-08  Matt Lewis  <jlewis3@apple.com>
 
         Unreviewed, rolling out r252229.
diff --git a/Source/JavaScriptCore/heap/CompleteSubspace.cpp b/Source/JavaScriptCore/heap/CompleteSubspace.cpp
index 1d62a8d..403c13a 100644
--- a/Source/JavaScriptCore/heap/CompleteSubspace.cpp
+++ b/Source/JavaScriptCore/heap/CompleteSubspace.cpp
@@ -79,8 +79,7 @@
     if (false)
         dataLog("Creating BlockDirectory/LocalAllocator for ", m_name, ", ", attributes(), ", ", sizeClass, ".\n");
     
-    std::unique_ptr<BlockDirectory> uniqueDirectory =
-        makeUnique<BlockDirectory>(m_space.heap(), sizeClass);
+    std::unique_ptr<BlockDirectory> uniqueDirectory = makeUnique<BlockDirectory>(m_space.heap(), sizeClass);
     BlockDirectory* directory = uniqueDirectory.get();
     m_directories.append(WTFMove(uniqueDirectory));
     
@@ -145,6 +144,8 @@
         return nullptr;
     
     m_space.m_largeAllocations.append(allocation);
+    if (auto* set = m_space.largeAllocationSet())
+        set->add(allocation->cell());
     ASSERT(allocation->indexInSpace() == m_space.m_largeAllocations.size() - 1);
     vm.heap.didAllocate(size);
     m_space.m_capacity += size;
@@ -194,6 +195,14 @@
     }
     ASSERT(oldIndexInSpace == allocation->indexInSpace());
 
+    // If reallocation changes the address, we should update the HashSet.
+    if (oldAllocation != allocation) {
+        if (auto* set = m_space.largeAllocationSet()) {
+            set->remove(oldAllocation->cell());
+            set->add(allocation->cell());
+        }
+    }
+
     m_space.m_largeAllocations[oldIndexInSpace] = allocation;
     vm.heap.didAllocate(difference);
     m_space.m_capacity += difference;
diff --git a/Source/JavaScriptCore/heap/Heap.cpp b/Source/JavaScriptCore/heap/Heap.cpp
index 151f60d..cdf44e4 100644
--- a/Source/JavaScriptCore/heap/Heap.cpp
+++ b/Source/JavaScriptCore/heap/Heap.cpp
@@ -372,13 +372,13 @@
     unsigned counter = 0;
     m_objectSpace.forEachBlock([&] (MarkedBlock::Handle* block) {
         unsigned live = 0;
-        block->forEachCell([&] (HeapCell* cell, HeapCell::Kind) {
+        block->forEachCell([&] (size_t, HeapCell* cell, HeapCell::Kind) {
             if (cell->isLive())
                 live++;
             return IterationStatus::Continue;
         });
         dataLogLn("[", counter++, "] ", block->cellSize(), ", ", live, " / ", block->cellsPerBlock(), " ", static_cast<double>(live) / block->cellsPerBlock() * 100, "% ", block->attributes(), " ", block->subspace()->name());
-        block->forEachCell([&] (HeapCell* heapCell, HeapCell::Kind kind) {
+        block->forEachCell([&] (size_t, HeapCell* heapCell, HeapCell::Kind kind) {
             if (heapCell->isLive() && kind == HeapCell::Kind::JSCell) {
                 auto* cell = static_cast<JSCell*>(heapCell);
                 if (cell->isObject())
@@ -2774,8 +2774,8 @@
 
 #if ENABLE(SAMPLING_PROFILER)
             if (SamplingProfiler* samplingProfiler = m_vm.samplingProfiler()) {
-                LockHolder locker(samplingProfiler->getLock());
-                samplingProfiler->processUnverifiedStackTraces();
+                auto locker = holdLock(samplingProfiler->getLock());
+                samplingProfiler->processUnverifiedStackTraces(locker);
                 samplingProfiler->visit(slotVisitor);
                 if (Options::logGC() == GCLogging::Verbose)
                     dataLog("Sampling Profiler data:\n", slotVisitor);
diff --git a/Source/JavaScriptCore/heap/HeapUtil.h b/Source/JavaScriptCore/heap/HeapUtil.h
index 85b3cbb..8f053c2 100644
--- a/Source/JavaScriptCore/heap/HeapUtil.h
+++ b/Source/JavaScriptCore/heap/HeapUtil.h
@@ -128,32 +128,15 @@
             tryPointer(alignedPointer - candidate->cellSize());
     }
     
-    static bool isPointerGCObjectJSCell(
-        Heap& heap, TinyBloomFilter filter, const void* pointer)
+    static bool isPointerGCObjectJSCell(Heap& heap, TinyBloomFilter filter, JSCell* pointer)
     {
         // It could point to a large allocation.
-        const Vector<LargeAllocation*>& largeAllocations = heap.objectSpace().largeAllocations();
-        if (!largeAllocations.isEmpty()) {
-            if (largeAllocations[0]->aboveLowerBound(pointer)
-                && largeAllocations.last()->belowUpperBound(pointer)) {
-                LargeAllocation*const* result = approximateBinarySearch<LargeAllocation*const>(
-                    largeAllocations.begin(), largeAllocations.size(),
-                    LargeAllocation::fromCell(pointer),
-                    [] (LargeAllocation*const* ptr) -> LargeAllocation* { return *ptr; });
-                if (result) {
-                    if (result > largeAllocations.begin()
-                        && result[-1]->cell() == pointer
-                        && isJSCellKind(result[-1]->attributes().cellKind))
-                        return true;
-                    if (result[0]->cell() == pointer
-                        && isJSCellKind(result[0]->attributes().cellKind))
-                        return true;
-                    if (result + 1 < largeAllocations.end()
-                        && result[1]->cell() == pointer
-                        && isJSCellKind(result[1]->attributes().cellKind))
-                        return true;
-                }
-            }
+        if (pointer->isLargeAllocation()) {
+            auto* set = heap.objectSpace().largeAllocationSet();
+            ASSERT(set);
+            if (set->isEmpty())
+                return false;
+            return set->contains(pointer);
         }
     
         const HashSet<MarkedBlock*>& set = heap.objectSpace().blocks().set();
@@ -179,12 +162,14 @@
         return true;
     }
     
+    // This does not find the cell if the pointer is pointing at the middle of a JSCell.
     static bool isValueGCObject(
         Heap& heap, TinyBloomFilter filter, JSValue value)
     {
+        ASSERT(heap.objectSpace().largeAllocationSet());
         if (!value.isCell())
             return false;
-        return isPointerGCObjectJSCell(heap, filter, static_cast<void*>(value.asCell()));
+        return isPointerGCObjectJSCell(heap, filter, value.asCell());
     }
 };
 
diff --git a/Source/JavaScriptCore/heap/IsoAlignedMemoryAllocator.cpp b/Source/JavaScriptCore/heap/IsoAlignedMemoryAllocator.cpp
index 6912870..6365820 100644
--- a/Source/JavaScriptCore/heap/IsoAlignedMemoryAllocator.cpp
+++ b/Source/JavaScriptCore/heap/IsoAlignedMemoryAllocator.cpp
@@ -89,18 +89,19 @@
     out.print("Iso(", RawPointer(this), ")");
 }
 
-void* IsoAlignedMemoryAllocator::tryAllocateMemory(size_t)
+void* IsoAlignedMemoryAllocator::tryAllocateMemory(size_t size)
 {
-    RELEASE_ASSERT_NOT_REACHED();
+    return FastMalloc::tryMalloc(size);
 }
 
-void IsoAlignedMemoryAllocator::freeMemory(void*)
+void IsoAlignedMemoryAllocator::freeMemory(void* pointer)
 {
-    RELEASE_ASSERT_NOT_REACHED();
+    FastMalloc::free(pointer);
 }
 
 void* IsoAlignedMemoryAllocator::tryReallocateMemory(void*, size_t)
 {
+    // LargeAllocations managed by IsoSubspace must never be reallocated.
     RELEASE_ASSERT_NOT_REACHED();
 }
 
diff --git a/Source/JavaScriptCore/heap/IsoCellSet.cpp b/Source/JavaScriptCore/heap/IsoCellSet.cpp
index 7c2ed24..aa62e11 100644
--- a/Source/JavaScriptCore/heap/IsoCellSet.cpp
+++ b/Source/JavaScriptCore/heap/IsoCellSet.cpp
@@ -43,7 +43,7 @@
 IsoCellSet::~IsoCellSet()
 {
     if (isOnList())
-        BasicRawSentinelNode<IsoCellSet>::remove();
+        PackedRawSentinelNode<IsoCellSet>::remove();
 }
 
 Ref<SharedTask<MarkedBlock::Handle*()>> IsoCellSet::parallelNotEmptyMarkedBlockSource()
diff --git a/Source/JavaScriptCore/heap/IsoCellSet.h b/Source/JavaScriptCore/heap/IsoCellSet.h
index dada672..f4bf6f9 100644
--- a/Source/JavaScriptCore/heap/IsoCellSet.h
+++ b/Source/JavaScriptCore/heap/IsoCellSet.h
@@ -40,7 +40,7 @@
 // Create a set of cells that are in an IsoSubspace. This allows concurrent O(1) set insertion and
 // removal. Each such set should be thought of as a 0.8% increase in object size for objects in that
 // IsoSubspace (it's like adding 1 bit every 16 bytes, or 1 bit every 128 bits).
-class IsoCellSet : public BasicRawSentinelNode<IsoCellSet> {
+class IsoCellSet : public PackedRawSentinelNode<IsoCellSet> {
 public:
     IsoCellSet(IsoSubspace& subspace);
     ~IsoCellSet();
@@ -72,7 +72,10 @@
     void didResizeBits(size_t newSize);
     void didRemoveBlock(size_t blockIndex);
     void sweepToFreeList(MarkedBlock::Handle*);
+    void sweepLowerTierCell(unsigned);
     
+    Bitmap<MarkedBlock::numberOfLowerTierCells> m_lowerTierBits;
+
     IsoSubspace& m_subspace;
     
     // Idea: sweeping to free-list clears bits for those cells that were free-listed. The first time
diff --git a/Source/JavaScriptCore/heap/IsoCellSetInlines.h b/Source/JavaScriptCore/heap/IsoCellSetInlines.h
index ce7de8e..5149b97 100644
--- a/Source/JavaScriptCore/heap/IsoCellSetInlines.h
+++ b/Source/JavaScriptCore/heap/IsoCellSetInlines.h
@@ -33,6 +33,8 @@
 
 inline bool IsoCellSet::add(HeapCell* cell)
 {
+    if (cell->isLargeAllocation())
+        return !m_lowerTierBits.concurrentTestAndSet(cell->largeAllocation().lowerTierIndex());
     AtomIndices atomIndices(cell);
     auto& bitsPtrRef = m_bits[atomIndices.blockIndex];
     auto* bits = bitsPtrRef.get();
@@ -43,6 +45,8 @@
 
 inline bool IsoCellSet::remove(HeapCell* cell)
 {
+    if (cell->isLargeAllocation())
+        return !m_lowerTierBits.concurrentTestAndClear(cell->largeAllocation().lowerTierIndex());
     AtomIndices atomIndices(cell);
     auto& bitsPtrRef = m_bits[atomIndices.blockIndex];
     auto* bits = bitsPtrRef.get();
@@ -53,6 +57,8 @@
 
 inline bool IsoCellSet::contains(HeapCell* cell) const
 {
+    if (cell->isLargeAllocation())
+        return m_lowerTierBits.get(cell->largeAllocation().lowerTierIndex());
     AtomIndices atomIndices(cell);
     auto* bits = m_bits[atomIndices.blockIndex].get();
     if (bits)
@@ -76,6 +82,13 @@
                     return IterationStatus::Continue;
                 });
         });
+
+    CellAttributes attributes = m_subspace.attributes();
+    m_subspace.forEachLargeAllocation(
+        [&] (LargeAllocation* allocation) {
+            if (m_lowerTierBits.get(allocation->lowerTierIndex()) && allocation->isMarked())
+                func(allocation->cell(), attributes.cellKind);
+        });
 }
 
 template<typename Func>
@@ -102,6 +115,20 @@
                         return IterationStatus::Continue;
                     });
             }
+
+            {
+                auto locker = holdLock(m_lock);
+                if (!m_needToVisitLargeAllocations)
+                    return;
+                m_needToVisitLargeAllocations = false;
+            }
+
+            CellAttributes attributes = m_set.m_subspace.attributes();
+            m_set.m_subspace.forEachLargeAllocation(
+                [&] (LargeAllocation* allocation) {
+                    if (m_set.m_lowerTierBits.get(allocation->lowerTierIndex()) && allocation->isMarked())
+                        m_func(visitor, allocation->cell(), attributes.cellKind);
+                });
         }
         
     private:
@@ -109,6 +136,7 @@
         Ref<SharedTask<MarkedBlock::Handle*()>> m_blockSource;
         Func m_func;
         Lock m_lock;
+        bool m_needToVisitLargeAllocations { true };
     };
     
     return adoptRef(*new Task(*this, func));
@@ -122,16 +150,26 @@
         [&] (size_t blockIndex) {
             MarkedBlock::Handle* block = directory.m_blocks[blockIndex];
 
-            // FIXME: We could optimize this by checking our bits before querying isLive.
-            // OOPS! (need bug URL)
             auto* bits = m_bits[blockIndex].get();
-            block->forEachLiveCell(
+            block->forEachCell(
                 [&] (size_t atomNumber, HeapCell* cell, HeapCell::Kind kind) -> IterationStatus {
-                    if (bits->get(atomNumber))
+                    if (bits->get(atomNumber) && block->isLive(cell))
                         func(cell, kind);
                     return IterationStatus::Continue;
                 });
         });
+
+    CellAttributes attributes = m_subspace.attributes();
+    m_subspace.forEachLargeAllocation(
+        [&] (LargeAllocation* allocation) {
+            if (m_lowerTierBits.get(allocation->lowerTierIndex()) && allocation->isLive())
+                func(allocation->cell(), attributes.cellKind);
+        });
+}
+
+inline void IsoCellSet::sweepLowerTierCell(unsigned index)
+{
+    m_lowerTierBits.concurrentTestAndClear(index);
 }
 
 } // namespace JSC
diff --git a/Source/JavaScriptCore/heap/IsoSubspace.cpp b/Source/JavaScriptCore/heap/IsoSubspace.cpp
index cd3e7e5..200d14c 100644
--- a/Source/JavaScriptCore/heap/IsoSubspace.cpp
+++ b/Source/JavaScriptCore/heap/IsoSubspace.cpp
@@ -29,6 +29,7 @@
 #include "AllocatorInlines.h"
 #include "BlockDirectoryInlines.h"
 #include "IsoAlignedMemoryAllocator.h"
+#include "IsoCellSetInlines.h"
 #include "IsoSubspaceInlines.h"
 #include "LocalAllocatorInlines.h"
 
@@ -41,6 +42,7 @@
     , m_localAllocator(&m_directory)
     , m_isoAlignedMemoryAllocator(makeUnique<IsoAlignedMemoryAllocator>())
 {
+    m_isIsoSubspace = true;
     initialize(heapCellType, m_isoAlignedMemoryAllocator.get());
 
     auto locker = holdLock(m_space.directoryLock());
@@ -88,5 +90,49 @@
         });
 }
 
+void* IsoSubspace::tryAllocateFromLowerTier()
+{
+    auto revive = [&] (LargeAllocation* allocation) {
+        allocation->setIndexInSpace(m_space.m_largeAllocations.size());
+        allocation->m_hasValidCell = true;
+        m_space.m_largeAllocations.append(allocation);
+        if (auto* set = m_space.largeAllocationSet())
+            set->add(allocation->cell());
+        ASSERT(allocation->indexInSpace() == m_space.m_largeAllocations.size() - 1);
+        m_largeAllocations.append(allocation);
+        return allocation->cell();
+    };
+
+    if (!m_lowerTierFreeList.isEmpty()) {
+        LargeAllocation* allocation = m_lowerTierFreeList.begin();
+        allocation->remove();
+        return revive(allocation);
+    }
+    if (m_lowerTierCellCount != MarkedBlock::numberOfLowerTierCells) {
+        size_t size = WTF::roundUpToMultipleOf<MarkedSpace::sizeStep>(m_size);
+        LargeAllocation* allocation = LargeAllocation::createForLowerTier(*m_space.heap(), size, this, m_lowerTierCellCount++);
+        return revive(allocation);
+    }
+    return nullptr;
+}
+
+void IsoSubspace::sweepLowerTierCell(LargeAllocation* largeAllocation)
+{
+    unsigned lowerTierIndex = largeAllocation->lowerTierIndex();
+    largeAllocation = largeAllocation->reuseForLowerTier();
+    m_lowerTierFreeList.append(largeAllocation);
+    m_cellSets.forEach(
+        [&] (IsoCellSet* set) {
+            set->sweepLowerTierCell(lowerTierIndex);
+        });
+}
+
+void IsoSubspace::destroyLowerTierFreeList()
+{
+    m_lowerTierFreeList.forEach([&](LargeAllocation* allocation) {
+        allocation->destroy();
+    });
+}
+
 } // namespace JSC
 
diff --git a/Source/JavaScriptCore/heap/IsoSubspace.h b/Source/JavaScriptCore/heap/IsoSubspace.h
index e516ca3..187226a 100644
--- a/Source/JavaScriptCore/heap/IsoSubspace.h
+++ b/Source/JavaScriptCore/heap/IsoSubspace.h
@@ -48,6 +48,11 @@
     void* allocate(VM&, size_t, GCDeferralContext*, AllocationFailureMode) override;
     void* allocateNonVirtual(VM&, size_t, GCDeferralContext*, AllocationFailureMode);
 
+    void sweepLowerTierCell(LargeAllocation*);
+
+    void* tryAllocateFromLowerTier();
+    void destroyLowerTierFreeList();
+
 private:
     friend class IsoCellSet;
     
@@ -59,7 +64,9 @@
     BlockDirectory m_directory;
     LocalAllocator m_localAllocator;
     std::unique_ptr<IsoAlignedMemoryAllocator> m_isoAlignedMemoryAllocator;
-    SentinelLinkedList<IsoCellSet, BasicRawSentinelNode<IsoCellSet>> m_cellSets;
+    SentinelLinkedList<LargeAllocation, PackedRawSentinelNode<LargeAllocation>> m_lowerTierFreeList;
+    SentinelLinkedList<IsoCellSet, PackedRawSentinelNode<IsoCellSet>> m_cellSets;
+    uint8_t m_lowerTierCellCount { 0 };
 };
 
 ALWAYS_INLINE Allocator IsoSubspace::allocatorForNonVirtual(size_t size, AllocatorForMode)
diff --git a/Source/JavaScriptCore/heap/LargeAllocation.cpp b/Source/JavaScriptCore/heap/LargeAllocation.cpp
index eeb5f4b..052597f 100644
--- a/Source/JavaScriptCore/heap/LargeAllocation.cpp
+++ b/Source/JavaScriptCore/heap/LargeAllocation.cpp
@@ -67,6 +67,7 @@
 
 LargeAllocation* LargeAllocation::tryReallocate(size_t size, Subspace* subspace)
 {
+    ASSERT(!isLowerTier());
     size_t adjustedAlignmentAllocationSize = headerSize() + size + halfAlignment;
     static_assert(halfAlignment == 8, "We assume that memory returned by malloc has alignment >= 8.");
 
@@ -118,15 +119,58 @@
     return newAllocation;
 }
 
+
+LargeAllocation* LargeAllocation::createForLowerTier(Heap& heap, size_t size, Subspace* subspace, uint8_t lowerTierIndex)
+{
+    if (validateDFGDoesGC)
+        RELEASE_ASSERT(heap.expectDoesGC());
+
+    size_t adjustedAlignmentAllocationSize = headerSize() + size + halfAlignment;
+    static_assert(halfAlignment == 8, "We assume that memory returned by malloc has alignment >= 8.");
+
+    void* space = subspace->alignedMemoryAllocator()->tryAllocateMemory(adjustedAlignmentAllocationSize);
+    RELEASE_ASSERT(space);
+
+    bool adjustedAlignment = false;
+    if (!isAlignedForLargeAllocation(space)) {
+        space = bitwise_cast<void*>(bitwise_cast<uintptr_t>(space) + halfAlignment);
+        adjustedAlignment = true;
+        ASSERT(isAlignedForLargeAllocation(space));
+    }
+
+    if (scribbleFreeCells())
+        scribble(space, size);
+    LargeAllocation* largeAllocation = new (NotNull, space) LargeAllocation(heap, size, subspace, 0, adjustedAlignment);
+    largeAllocation->m_lowerTierIndex = lowerTierIndex;
+    return largeAllocation;
+}
+
+LargeAllocation* LargeAllocation::reuseForLowerTier()
+{
+    Heap& heap = *this->heap();
+    size_t size = m_cellSize;
+    Subspace* subspace = m_subspace;
+    bool adjustedAlignment = m_adjustedAlignment;
+    uint8_t lowerTierIndex = m_lowerTierIndex;
+
+    void* space = this->basePointer();
+    this->~LargeAllocation();
+
+    LargeAllocation* largeAllocation = new (NotNull, space) LargeAllocation(heap, size, subspace, 0, adjustedAlignment);
+    largeAllocation->m_lowerTierIndex = lowerTierIndex;
+    largeAllocation->m_hasValidCell = false;
+    return largeAllocation;
+}
+
 LargeAllocation::LargeAllocation(Heap& heap, size_t size, Subspace* subspace, unsigned indexInSpace, bool adjustedAlignment)
-    : m_cellSize(size)
-    , m_indexInSpace(indexInSpace)
+    : m_indexInSpace(indexInSpace)
+    , m_cellSize(size)
     , m_isNewlyAllocated(true)
     , m_hasValidCell(true)
     , m_adjustedAlignment(adjustedAlignment)
     , m_attributes(subspace->attributes())
     , m_subspace(subspace)
-    , m_weakSet(heap.vm(), *this)
+    , m_weakSet(heap.vm())
 {
     m_isMarked.store(0);
 }
diff --git a/Source/JavaScriptCore/heap/LargeAllocation.h b/Source/JavaScriptCore/heap/LargeAllocation.h
index c361e9e..239e7bc 100644
--- a/Source/JavaScriptCore/heap/LargeAllocation.h
+++ b/Source/JavaScriptCore/heap/LargeAllocation.h
@@ -30,6 +30,7 @@
 
 namespace JSC {
 
+class IsoSubspace;
 class SlotVisitor;
 
 // WebKit has a good malloc that already knows what to do for large allocations. The GC shouldn't
@@ -37,12 +38,16 @@
 // objects directly using malloc, and put the LargeAllocation header just before them. We can detect
 // when a HeapCell* is a LargeAllocation because it will have the MarkedBlock::atomSize / 2 bit set.
 
-class LargeAllocation : public BasicRawSentinelNode<LargeAllocation> {
+class LargeAllocation : public PackedRawSentinelNode<LargeAllocation> {
 public:
     friend class LLIntOffsetsExtractor;
+    friend class IsoSubspace;
 
     static LargeAllocation* tryCreate(Heap&, size_t, Subspace*, unsigned indexInSpace);
 
+    static LargeAllocation* createForLowerTier(Heap&, size_t, Subspace*, uint8_t lowerTierIndex);
+    LargeAllocation* reuseForLowerTier();
+
     LargeAllocation* tryReallocate(size_t, Subspace*);
     
     ~LargeAllocation();
@@ -93,6 +98,8 @@
     bool isEmpty();
     
     size_t cellSize() const { return m_cellSize; }
+
+    uint8_t lowerTierIndex() const { return m_lowerTierIndex; }
     
     bool aboveLowerBound(const void* rawPtr)
     {
@@ -146,6 +153,8 @@
     void destroy();
     
     void dump(PrintStream&) const;
+
+    bool isLowerTier() const { return m_lowerTierIndex != UINT8_MAX; }
     
     static constexpr unsigned alignment = MarkedBlock::atomSize;
     static constexpr unsigned halfAlignment = alignment / 2;
@@ -156,13 +165,14 @@
     
     void* basePointer() const;
     
-    size_t m_cellSize;
     unsigned m_indexInSpace { 0 };
+    size_t m_cellSize;
     bool m_isNewlyAllocated : 1;
     bool m_hasValidCell : 1;
     bool m_adjustedAlignment : 1;
     Atomic<bool> m_isMarked;
     CellAttributes m_attributes;
+    uint8_t m_lowerTierIndex { UINT8_MAX };
     Subspace* m_subspace;
     WeakSet m_weakSet;
 };
diff --git a/Source/JavaScriptCore/heap/LocalAllocator.cpp b/Source/JavaScriptCore/heap/LocalAllocator.cpp
index 04b19f2..53e0508 100644
--- a/Source/JavaScriptCore/heap/LocalAllocator.cpp
+++ b/Source/JavaScriptCore/heap/LocalAllocator.cpp
@@ -133,8 +133,14 @@
     
     void* result = tryAllocateWithoutCollecting();
     
-    if (LIKELY(result != 0))
+    if (LIKELY(result != nullptr))
         return result;
+
+    Subspace* subspace = m_directory->m_subspace;
+    if (subspace->isIsoSubspace()) {
+        if (void* result = static_cast<IsoSubspace*>(subspace)->tryAllocateFromLowerTier())
+            return result;
+    }
     
     MarkedBlock::Handle* block = m_directory->tryAllocateBlock();
     if (!block) {
diff --git a/Source/JavaScriptCore/heap/MarkedBlock.cpp b/Source/JavaScriptCore/heap/MarkedBlock.cpp
index 916f838..c6e1f6c 100644
--- a/Source/JavaScriptCore/heap/MarkedBlock.cpp
+++ b/Source/JavaScriptCore/heap/MarkedBlock.cpp
@@ -62,12 +62,10 @@
 
 MarkedBlock::Handle::Handle(Heap& heap, AlignedMemoryAllocator* alignedMemoryAllocator, void* blockSpace)
     : m_alignedMemoryAllocator(alignedMemoryAllocator)
-    , m_weakSet(heap.vm(), CellContainer())
+    , m_weakSet(heap.vm())
 {
     m_block = new (NotNull, blockSpace) MarkedBlock(heap.vm(), *this);
     
-    m_weakSet.setContainer(*m_block);
-    
     heap.didAllocateBlock(blockSize);
 }
 
@@ -149,7 +147,7 @@
     blockFooter().m_newlyAllocatedVersion = heap()->objectSpace().newlyAllocatedVersion();
 
     forEachCell(
-        [&] (HeapCell* cell, HeapCell::Kind) -> IterationStatus {
+        [&] (size_t, HeapCell* cell, HeapCell::Kind) -> IterationStatus {
             block().setNewlyAllocated(cell);
             return IterationStatus::Continue;
         });
diff --git a/Source/JavaScriptCore/heap/MarkedBlock.h b/Source/JavaScriptCore/heap/MarkedBlock.h
index 60d8246..9596a69 100644
--- a/Source/JavaScriptCore/heap/MarkedBlock.h
+++ b/Source/JavaScriptCore/heap/MarkedBlock.h
@@ -77,6 +77,9 @@
     static constexpr size_t blockMask = ~(blockSize - 1); // blockSize must be a power of two.
 
     static constexpr size_t atomsPerBlock = blockSize / atomSize;
+
+    static constexpr size_t numberOfLowerTierCells = 8;
+    static_assert(numberOfLowerTierCells <= 256);
     
     static_assert(!(MarkedBlock::atomSize & (MarkedBlock::atomSize - 1)), "MarkedBlock::atomSize must be a power of two.");
     static_assert(!(MarkedBlock::blockSize & (MarkedBlock::blockSize - 1)), "MarkedBlock::blockSize must be a power of two.");
@@ -308,9 +311,6 @@
     static constexpr size_t footerSize = blockSize - payloadSize;
 
     static_assert(payloadSize == ((blockSize - sizeof(MarkedBlock::Footer)) & ~(atomSize - 1)), "Payload size computed the alternate way should give the same result");
-    // Some of JSCell types assume that the last JSCell in a MarkedBlock has a subsequent memory region (Footer) that can still safely accessed.
-    // For example, JSRopeString assumes that it can safely access up to 2 bytes beyond the JSRopeString cell.
-    static_assert(sizeof(Footer) >= sizeof(uint16_t));
     
     static MarkedBlock::Handle* tryCreate(Heap&, AlignedMemoryAllocator*);
         
@@ -643,7 +643,7 @@
     HeapCell::Kind kind = m_attributes.cellKind;
     for (size_t i = 0; i < m_endAtom; i += m_atomsPerCell) {
         HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
-        if (functor(cell, kind) == IterationStatus::Done)
+        if (functor(i, cell, kind) == IterationStatus::Done)
             return IterationStatus::Done;
     }
     return IterationStatus::Continue;
diff --git a/Source/JavaScriptCore/heap/MarkedSpace.cpp b/Source/JavaScriptCore/heap/MarkedSpace.cpp
index 77e178f..02e750b 100644
--- a/Source/JavaScriptCore/heap/MarkedSpace.cpp
+++ b/Source/JavaScriptCore/heap/MarkedSpace.cpp
@@ -213,6 +213,11 @@
         });
     for (LargeAllocation* allocation : m_largeAllocations)
         allocation->destroy();
+    forEachSubspace([&](Subspace& subspace) {
+        if (subspace.isIsoSubspace())
+            static_cast<IsoSubspace&>(subspace).destroyLowerTierFreeList();
+        return IterationStatus::Continue;
+    });
 }
 
 void MarkedSpace::lastChanceToFinalize()
@@ -224,6 +229,7 @@
         });
     for (LargeAllocation* allocation : m_largeAllocations)
         allocation->lastChanceToFinalize();
+    // We do not need to call lastChanceToFinalize for swept lower-tier cells since there is nothing to do.
 }
 
 void MarkedSpace::sweep()
@@ -245,8 +251,14 @@
         LargeAllocation* allocation = m_largeAllocations[srcIndex++];
         allocation->sweep();
         if (allocation->isEmpty()) {
-            m_capacity -= allocation->cellSize();
-            allocation->destroy();
+            if (auto* set = largeAllocationSet())
+                set->remove(allocation->cell());
+            if (allocation->isLowerTier())
+                static_cast<IsoSubspace*>(allocation->subspace())->sweepLowerTierCell(allocation);
+            else {
+                m_capacity -= allocation->cellSize();
+                allocation->destroy();
+            }
             continue;
         }
         allocation->setIndexInSpace(dstIndex);
@@ -271,6 +283,13 @@
     m_largeAllocationsNurseryOffset = m_largeAllocations.size();
 }
 
+void MarkedSpace::enableLargeAllocationTracking()
+{
+    m_largeAllocationSet = makeUnique<HashSet<HeapCell*>>();
+    for (auto* allocation : m_largeAllocations)
+        m_largeAllocationSet->add(allocation->cell());
+}
+
 void MarkedSpace::visitWeakSets(SlotVisitor& visitor)
 {
     auto visit = [&] (WeakSet* weakSet) {
diff --git a/Source/JavaScriptCore/heap/MarkedSpace.h b/Source/JavaScriptCore/heap/MarkedSpace.h
index fcd801d..017db7a 100644
--- a/Source/JavaScriptCore/heap/MarkedSpace.h
+++ b/Source/JavaScriptCore/heap/MarkedSpace.h
@@ -39,7 +39,9 @@
 
 class CompleteSubspace;
 class Heap;
+class HeapCell;
 class HeapIterationScope;
+class IsoSubspace;
 class LLIntOffsetsExtractor;
 class Subspace;
 class WeakSet;
@@ -153,6 +155,9 @@
     const Vector<LargeAllocation*>& largeAllocations() const { return m_largeAllocations; }
     unsigned largeAllocationsNurseryOffset() const { return m_largeAllocationsNurseryOffset; }
     unsigned largeAllocationsOffsetForThisCollection() const { return m_largeAllocationsOffsetForThisCollection; }
+    HashSet<HeapCell*>* largeAllocationSet() const { return m_largeAllocationSet.get(); }
+
+    void enableLargeAllocationTracking();
     
     // These are cached pointers and offsets for quickly searching the large allocations that are
     // relevant to this collection.
@@ -183,6 +188,7 @@
     friend class JIT;
     friend class WeakSet;
     friend class Subspace;
+    friend class IsoSubspace;
     
     // Use this version when calling from within the GC where we know that the directories
     // have already been stopped.
@@ -198,6 +204,7 @@
 
     Vector<Subspace*> m_subspaces;
 
+    std::unique_ptr<HashSet<HeapCell*>> m_largeAllocationSet;
     Vector<LargeAllocation*> m_largeAllocations;
     unsigned m_largeAllocationsNurseryOffset { 0 };
     unsigned m_largeAllocationsOffsetForThisCollection { 0 };
diff --git a/Source/JavaScriptCore/heap/PackedCellPtr.h b/Source/JavaScriptCore/heap/PackedCellPtr.h
index f249306..f3d7d3b 100644
--- a/Source/JavaScriptCore/heap/PackedCellPtr.h
+++ b/Source/JavaScriptCore/heap/PackedCellPtr.h
@@ -33,14 +33,12 @@
 namespace JSC {
 
 template<typename T>
-class PackedCellPtr : public PackedAlignedPtr<T, MarkedBlock::atomSize> {
+class PackedCellPtr : public PackedAlignedPtr<T, 8> {
 public:
-    using Base = PackedAlignedPtr<T, MarkedBlock::atomSize>;
+    using Base = PackedAlignedPtr<T, 8>;
     PackedCellPtr(T* pointer)
         : Base(pointer)
     {
-        static_assert((sizeof(T) <= MarkedSpace::largeCutoff && std::is_final<T>::value) || isAllocatedFromIsoSubspace<T>::value, "LargeAllocation does not have 16byte alignment");
-        ASSERT(!(bitwise_cast<uintptr_t>(pointer) & (16 - 1)));
     }
 };
 
diff --git a/Source/JavaScriptCore/heap/Subspace.h b/Source/JavaScriptCore/heap/Subspace.h
index c327199..ebb72c2 100644
--- a/Source/JavaScriptCore/heap/Subspace.h
+++ b/Source/JavaScriptCore/heap/Subspace.h
@@ -101,6 +101,8 @@
     virtual void didRemoveBlock(size_t blockIndex);
     virtual void didBeginSweepingToFreeList(MarkedBlock::Handle*);
 
+    bool isIsoSubspace() const { return m_isIsoSubspace; }
+
 protected:
     void initialize(HeapCellType*, AlignedMemoryAllocator*);
     
@@ -111,10 +113,12 @@
     
     BlockDirectory* m_firstDirectory { nullptr };
     BlockDirectory* m_directoryForEmptyAllocation { nullptr }; // Uses the MarkedSpace linked list of blocks.
-    SentinelLinkedList<LargeAllocation, BasicRawSentinelNode<LargeAllocation>> m_largeAllocations;
+    SentinelLinkedList<LargeAllocation, PackedRawSentinelNode<LargeAllocation>> m_largeAllocations;
     Subspace* m_nextSubspaceInAlignedMemoryAllocator { nullptr };
 
     CString m_name;
+
+    bool m_isIsoSubspace { false };
 };
 
 } // namespace JSC
diff --git a/Source/JavaScriptCore/heap/WeakSet.cpp b/Source/JavaScriptCore/heap/WeakSet.cpp
index faae02c..8340f36 100644
--- a/Source/JavaScriptCore/heap/WeakSet.cpp
+++ b/Source/JavaScriptCore/heap/WeakSet.cpp
@@ -38,7 +38,7 @@
         remove();
     
     Heap& heap = *this->heap();
-    WeakBlock* next = 0;
+    WeakBlock* next = nullptr;
     for (WeakBlock* block = m_blocks.head(); block; block = next) {
         next = block->next();
         WeakBlock::destroy(heap, block);
@@ -83,12 +83,12 @@
         remove();
 }
 
-WeakBlock::FreeCell* WeakSet::findAllocator()
+WeakBlock::FreeCell* WeakSet::findAllocator(CellContainer container)
 {
     if (WeakBlock::FreeCell* allocator = tryFindAllocator())
         return allocator;
 
-    return addAllocator();
+    return addAllocator(container);
 }
 
 WeakBlock::FreeCell* WeakSet::tryFindAllocator()
@@ -105,12 +105,12 @@
     return 0;
 }
 
-WeakBlock::FreeCell* WeakSet::addAllocator()
+WeakBlock::FreeCell* WeakSet::addAllocator(CellContainer container)
 {
     if (!isOnList())
         heap()->objectSpace().addActiveWeakSet(this);
     
-    WeakBlock* block = WeakBlock::create(*heap(), m_container);
+    WeakBlock* block = WeakBlock::create(*heap(), container);
     heap()->didAllocate(WeakBlock::blockSize);
     m_blocks.append(block);
     WeakBlock::SweepResult sweepResult = block->takeSweepResult();
diff --git a/Source/JavaScriptCore/heap/WeakSet.h b/Source/JavaScriptCore/heap/WeakSet.h
index 8e10659..9867e24 100644
--- a/Source/JavaScriptCore/heap/WeakSet.h
+++ b/Source/JavaScriptCore/heap/WeakSet.h
@@ -41,13 +41,10 @@
     static WeakImpl* allocate(JSValue, WeakHandleOwner* = 0, void* context = 0);
     static void deallocate(WeakImpl*);
 
-    WeakSet(VM&, CellContainer);
+    WeakSet(VM&);
     ~WeakSet();
     void lastChanceToFinalize();
     
-    CellContainer container() const { return m_container; }
-    void setContainer(CellContainer container) { m_container = container; }
-
     Heap* heap() const;
     VM& vm() const;
 
@@ -62,25 +59,21 @@
     void resetAllocator();
 
 private:
-    JS_EXPORT_PRIVATE WeakBlock::FreeCell* findAllocator();
+    JS_EXPORT_PRIVATE WeakBlock::FreeCell* findAllocator(CellContainer);
     WeakBlock::FreeCell* tryFindAllocator();
-    WeakBlock::FreeCell* addAllocator();
+    WeakBlock::FreeCell* addAllocator(CellContainer);
     void removeAllocator(WeakBlock*);
 
-    WeakBlock::FreeCell* m_allocator;
-    WeakBlock* m_nextAllocator;
+    WeakBlock::FreeCell* m_allocator { nullptr };
+    WeakBlock* m_nextAllocator { nullptr };
     DoublyLinkedList<WeakBlock> m_blocks;
     // m_vm must be a pointer (instead of a reference) because the JSCLLIntOffsetsExtractor
     // cannot handle it being a reference.
     VM* m_vm;
-    CellContainer m_container;
 };
 
-inline WeakSet::WeakSet(VM& vm, CellContainer container)
-    : m_allocator(0)
-    , m_nextAllocator(0)
-    , m_vm(&vm)
-    , m_container(container)
+inline WeakSet::WeakSet(VM& vm)
+    : m_vm(&vm)
 {
 }
 
@@ -133,7 +126,7 @@
 
 inline void WeakSet::resetAllocator()
 {
-    m_allocator = 0;
+    m_allocator = nullptr;
     m_nextAllocator = m_blocks.head();
 }
 
diff --git a/Source/JavaScriptCore/heap/WeakSetInlines.h b/Source/JavaScriptCore/heap/WeakSetInlines.h
index 360e1f9..74b5fc5 100644
--- a/Source/JavaScriptCore/heap/WeakSetInlines.h
+++ b/Source/JavaScriptCore/heap/WeakSetInlines.h
@@ -32,10 +32,11 @@
 
 inline WeakImpl* WeakSet::allocate(JSValue jsValue, WeakHandleOwner* weakHandleOwner, void* context)
 {
-    WeakSet& weakSet = jsValue.asCell()->cellContainer().weakSet();
+    CellContainer container = jsValue.asCell()->cellContainer();
+    WeakSet& weakSet = container.weakSet();
     WeakBlock::FreeCell* allocator = weakSet.m_allocator;
     if (UNLIKELY(!allocator))
-        allocator = weakSet.findAllocator();
+        allocator = weakSet.findAllocator(container);
     weakSet.m_allocator = allocator->next;
 
     WeakImpl* weakImpl = WeakBlock::asWeakImpl(allocator);
diff --git a/Source/JavaScriptCore/runtime/InternalFunction.cpp b/Source/JavaScriptCore/runtime/InternalFunction.cpp
index 62d5993..248e8df 100644
--- a/Source/JavaScriptCore/runtime/InternalFunction.cpp
+++ b/Source/JavaScriptCore/runtime/InternalFunction.cpp
@@ -40,8 +40,6 @@
     , m_functionForConstruct(functionForConstruct ? functionForConstruct : callHostFunctionAsConstructor)
     , m_globalObject(vm, this, structure->globalObject())
 {
-    // globalObject->vm() wants callees to not be large allocations.
-    RELEASE_ASSERT(!isLargeAllocation());
     ASSERT_WITH_MESSAGE(m_functionForCall, "[[Call]] must be implemented");
     ASSERT(m_functionForConstruct);
 }
diff --git a/Source/JavaScriptCore/runtime/JSCallee.cpp b/Source/JavaScriptCore/runtime/JSCallee.cpp
index b6bfc0d..1f01de0 100644
--- a/Source/JavaScriptCore/runtime/JSCallee.cpp
+++ b/Source/JavaScriptCore/runtime/JSCallee.cpp
@@ -39,7 +39,6 @@
     : Base(vm, structure)
     , m_scope(vm, this, globalObject)
 {
-    RELEASE_ASSERT(!isLargeAllocation());
 }
 
 JSCallee::JSCallee(VM& vm, JSScope* scope, Structure* structure)
diff --git a/Source/JavaScriptCore/runtime/JSString.h b/Source/JavaScriptCore/runtime/JSString.h
index d2bb976..0f28c6b 100644
--- a/Source/JavaScriptCore/runtime/JSString.h
+++ b/Source/JavaScriptCore/runtime/JSString.h
@@ -286,10 +286,7 @@
         JSString* fiber2() const
         {
 #if CPU(LITTLE_ENDIAN)
-            // This access exceeds the sizeof(JSRopeString). But this is OK because JSRopeString is always allocated in MarkedBlock,
-            // and the last JSRopeString cell in the block has some subsequent bytes which are used for MarkedBlock::Footer.
-            // So the following access does not step over the page boundary in which the latter page does not have read permission.
-            return bitwise_cast<JSString*>(WTF::unalignedLoad<uintptr_t>(&m_fiber2Lower) & addressMask);
+            return bitwise_cast<JSString*>(WTF::unalignedLoad<uintptr_t>(&m_fiber1Upper) >> 16);
 #else
             return bitwise_cast<JSString*>(static_cast<uintptr_t>(m_fiber2Lower) | (static_cast<uintptr_t>(m_fiber2Upper) << 16));
 #endif
diff --git a/Source/JavaScriptCore/runtime/SamplingProfiler.cpp b/Source/JavaScriptCore/runtime/SamplingProfiler.cpp
index 0fa241d..0751f9b 100644
--- a/Source/JavaScriptCore/runtime/SamplingProfiler.cpp
+++ b/Source/JavaScriptCore/runtime/SamplingProfiler.cpp
@@ -307,6 +307,7 @@
     }
 
     m_currentFrames.grow(256);
+    vm.heap.objectSpace().enableLargeAllocationTracking();
 }
 
 SamplingProfiler::~SamplingProfiler()
@@ -468,7 +469,7 @@
 #endif
 }
 
-void SamplingProfiler::processUnverifiedStackTraces()
+void SamplingProfiler::processUnverifiedStackTraces(const AbstractLocker&)
 {
     // This function needs to be called from the JSC execution thread.
     RELEASE_ASSERT(m_lock.isLocked());
@@ -951,7 +952,7 @@
     ASSERT(m_lock.isLocked());
     {
         HeapIterationScope heapIterationScope(m_vm.heap);
-        processUnverifiedStackTraces();
+        processUnverifiedStackTraces(locker);
     }
 
     Vector<StackTrace> result(WTFMove(m_stackTraces));
@@ -962,11 +963,11 @@
 String SamplingProfiler::stackTracesAsJSON()
 {
     DeferGC deferGC(m_vm.heap);
-    LockHolder locker(m_lock);
+    auto locker = holdLock(m_lock);
 
     {
         HeapIterationScope heapIterationScope(m_vm.heap);
-        processUnverifiedStackTraces();
+        processUnverifiedStackTraces(locker);
     }
 
     StringBuilder json;
@@ -1038,12 +1039,12 @@
 
 void SamplingProfiler::reportTopFunctions(PrintStream& out)
 {
-    LockHolder locker(m_lock);
+    auto locker = holdLock(m_lock);
     DeferGCForAWhile deferGC(m_vm.heap);
 
     {
         HeapIterationScope heapIterationScope(m_vm.heap);
-        processUnverifiedStackTraces();
+        processUnverifiedStackTraces(locker);
     }
 
 
@@ -1091,12 +1092,12 @@
 
 void SamplingProfiler::reportTopBytecodes(PrintStream& out)
 {
-    LockHolder locker(m_lock);
+    auto locker = holdLock(m_lock);
     DeferGCForAWhile deferGC(m_vm.heap);
 
     {
         HeapIterationScope heapIterationScope(m_vm.heap);
-        processUnverifiedStackTraces();
+        processUnverifiedStackTraces(locker);
     }
 
     HashMap<String, size_t> bytecodeCounts;
diff --git a/Source/JavaScriptCore/runtime/SamplingProfiler.h b/Source/JavaScriptCore/runtime/SamplingProfiler.h
index 2a14494..795ee6d 100644
--- a/Source/JavaScriptCore/runtime/SamplingProfiler.h
+++ b/Source/JavaScriptCore/runtime/SamplingProfiler.h
@@ -180,7 +180,7 @@
     JS_EXPORT_PRIVATE String stackTracesAsJSON();
     JS_EXPORT_PRIVATE void noticeCurrentThreadAsJSCExecutionThread();
     void noticeCurrentThreadAsJSCExecutionThread(const AbstractLocker&);
-    void processUnverifiedStackTraces(); // You should call this only after acquiring the lock.
+    void processUnverifiedStackTraces(const AbstractLocker&);
     void setStopWatch(const AbstractLocker&, Ref<Stopwatch>&& stopwatch) { m_stopwatch = WTFMove(stopwatch); }
     void pause(const AbstractLocker&);
     void clearData(const AbstractLocker&);
diff --git a/Source/JavaScriptCore/wasm/js/WebAssemblyFunction.cpp b/Source/JavaScriptCore/wasm/js/WebAssemblyFunction.cpp
index e591a78..a4973eb 100644
--- a/Source/JavaScriptCore/wasm/js/WebAssemblyFunction.cpp
+++ b/Source/JavaScriptCore/wasm/js/WebAssemblyFunction.cpp
@@ -433,7 +433,6 @@
     NativeExecutable* executable = vm.getHostFunction(callWebAssemblyFunction, NoIntrinsic, callHostFunctionAsConstructor, nullptr, name);
     WebAssemblyFunction* function = new (NotNull, allocateCell<WebAssemblyFunction>(vm.heap)) WebAssemblyFunction(vm, globalObject, structure, jsEntrypoint, wasmToWasmEntrypointLoadLocation, signatureIndex);
     function->finishCreation(vm, executable, length, name, instance);
-    ASSERT_WITH_MESSAGE(!function->isLargeAllocation(), "WebAssemblyFunction should be allocated not in large allocation since it is JSCallee.");
     return function;
 }