Make MarkedBlock state tracking support overlapped allocation and marking state
https://bugs.webkit.org/show_bug.cgi?id=161581
Reviewed by Geoffrey Garen.
JSTests:
Add a microbenchmark that shows why we want to reclaim empty blocks from other allocators.
* microbenchmarks/switching-size-classes.js: Added.
Source/JavaScriptCore:
Concurrent GCs must allow for mutation and allocation during collection. We already know
how to mutate during collection. We have a write barrier for that. Allocation during
collection is more involved: the collector modifies the mark bits, as well as other
kinds of MarkedBlock state, in place during a collection. The allocator uses that same
MarkedBlock state to decide which regions of memory are free. This works if the allocator
never runs while the collector is running, but if we want to allow them to run at the same
time, then we need to have two versions of the state: one version built up by the
collector and another consumed by the allocator. We clear the collector state at the
beginning of collection, and splat the collector state onto the allocator state after
collection.
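The patch below implements this per MarkedAllocator: beginMarkingForFullCollection() clears
the collector's summary bits, and endMarking() splats them onto the bits the allocator
consumes. A condensed sketch of that handshake, using the bitvector names introduced in the
next paragraphs (the full functions are in the diff below):

    // Start of a full collection: forget the previous marking summary.
    m_markingNotEmpty.clearAll();
    m_markingRetired.clearAll();

    // End of marking: rebuild the allocator-facing bits from the marking summary,
    // so the allocator only ever sees post-collection state.
    m_empty = m_live & ~m_markingNotEmpty;
    m_canAllocateButNotEmpty = m_live & m_markingNotEmpty & ~m_markingRetired;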
This could be super expensive, but we can make it cheap with some cleverness. The biggest
observation is just that most of the state is a handful of bits per block: is the block
free-listed? is it completely full? completely empty? in the incremental sweeper's
snapshot? is it retired? is it in eden? There is also state inside blocks, like the mark
bits, but I have a solid plan there and I'll save it for another patch. Once we view the
state of blocks as bits, we can put that state into bitvectors, so that if the collector
needs to transform the state of some blocks, it can do it with a single operation over
bitvectors. I like to think of this as 32-way parallelizing block operations, since
doing one operation on a 32-bit word in one of those bitvectors instantly affects 32
blocks.
This change converts all previous collections of MarkedBlocks, along with the MarkedBlock
state, into 8 bitvectors (live, empty, allocated, canAllocateButNotEmpty, eden, unswept,
markingNotEmpty, and markingRetired). The bitvectors separate allocator state (empty,
allocated, canAllocateButNotEmpty) from marking state (markingNotEmpty, markingRetired).
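To see why operating on these vectors is cheap, consider what an expression like
m_empty = m_live & ~m_markingNotEmpty (the end-of-marking splat above) costs: FastBitVector
evaluates it one machine word at a time, so every word processed updates the state of 32
blocks at once. A minimal standalone sketch of that shape (illustrative only, not WTF's
actual implementation):

    // Hypothetical word-parallel update covering numWords * 32 blocks; a
    // FastBitVector expression boils down to a loop of this shape.
    void splatEmptyBits(uint32_t* empty, const uint32_t* live,
                        const uint32_t* markingNotEmpty, size_t numWords)
    {
        for (size_t i = 0; i < numWords; ++i)
            empty[i] = live[i] & ~markingNotEmpty[i]; // 32 blocks per iteration
    }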
As a nice side-effect of switching to bitvectors, we get size class rebalancing for free.
It used to be that if a MarkedAllocator had an empty block, we would only allow that
memory to be reused by a different MarkedAllocator if we did an incremental sweep or a
full eager sweep. Now we hunt down all destructorless empty blocks before allocating new
MarkedBlocks. It would be relatively easy to also hunt down empty blocks that need destruction, but
the theory is that those might be expensive to sweep, so it might still be better to leave
those to the incremental sweeper.
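The steal path, condensed from MarkedAllocator::tryAllocateWithoutCollecting() in the diff
below, looks roughly like this:

    if (Options::stealEmptyBlocksFromOtherAllocators()
        && shouldStealEmptyBlocksFromOtherAllocators()) {
        if (MarkedBlock::Handle* block = m_markedSpace->findEmptyBlockToSteal()) {
            block->sweep();
            block->removeFromAllocator(); // clears the old allocator's bits for this block
            addBlock(block);              // re-registers it here, with this cell size
            return allocateIn(block);
        }
    }

Each allocator's findEmptyBlockToSteal() just scans its m_empty bitvector with findBit()
(and refuses if its blocks need destruction), so checking for a stealable block is cheap
enough to do before allocating a new MarkedBlock.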
This change is perf-neutral all around. I did some tests with two different kinds of
allocation strategies - something that is somewhat easier to do now that you can look for
blocks that are candidates for allocation by just scanning some bitvectors. I tried two
variants:
- Allocate out of non-empty blocks first, leaving empty blocks for last in case a
different allocator needed them. This is sort of a best-fit strategy. I tried this
first, and it can be expressed as:
m_allocationCursor = m_canAllocateButNotEmpty.findBit(m_allocationCursor, true)
- Allocate out of lower-indexed blocks first, treating empty and canAllocateButNotEmpty
blocks equally. This is sort of a first-fit strategy. This is what I ended up settling
on, and it can be expressed as:
m_allocationCursor = (m_canAllocateButNotEmpty | m_empty).findBit(m_allocationCursor, true)
The best-fit strategy meant 1% regressions in LongSpider and Octane overall, and an 11%
regression on Octane/earley. First-fit means perf-neutrality. Most great allocators skew
towards first-fit because it's empirically better, so this result is not surprising.
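In the patch, that first-fit expression drives the slow path's block scan. Condensed from
tryAllocateWithoutCollecting() in the diff below:

    for (;;) {
        m_allocationCursor = (m_canAllocateButNotEmpty | m_empty).findBit(m_allocationCursor, true);
        if (m_allocationCursor >= m_blocks.size())
            break; // nothing sweepable here; fall through to stealing or a new block

        setIsCanAllocateButNotEmpty(m_allocationCursor, false);
        if (void* result = tryAllocateIn(m_blocks[m_allocationCursor]))
            return result;
    }

The cursors are reset in prepareForAllocation(), so each allocation cycle walks blocks in
index order, which is what makes this first-fit.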
Overall, the performance of this patch on my machine is as follows, where "neutral" means
less than 1% and not statistically significant.
run-jsc-benchmarks:
SunSpider: neutral
LongSpider: 0.6% slower
V8Spider: neutral
Octane: neutral
Kraken: neutral
Microbenchmarks: 0.37% slower
AsmBench: neutral
CompressionBench: maybe 1% faster
For browser benchmarks, I report the ratio of means (bigger / smaller) along with a T-test
from Mathematica, reported as the % confidence that the difference is real (i.e., that the
null hypothesis does not hold). Note that we normally consider anything less than 95%
confidence to be inconclusive.
Browser benchmarks:
PLT3: 0.3% faster with 67% confidence
membuster:
Snap2FinishedLoadingPost: 0.68% more memory with 50% confidence
Snap3EndPost: 2.4% more memory with 61% confidence
JetStream: 0.2% slower with 32% confidence
Speedometer: 0.7% faster with 82% confidence
Additionally, Octane/splay's heap capacity goes down to ~180KB from ~200KB, so about a 10%
progression. This is due to the allocator rebalancing feature.
Finally, this breaks --useImmortalObjects. It was already broken as far as I can tell. I
filed a bug to reimplement it (bug 162296). Unless someone urgently needs this internal
tool, it's probably best to reimplement it after I'm done refactoring MarkedSpace.
* JavaScriptCore.xcodeproj/project.pbxproj:
* debugger/Debugger.cpp:
* heap/CellContainer.h:
* heap/CellContainerInlines.h:
(JSC::CellContainer::vm):
(JSC::CellContainer::heap):
(JSC::CellContainer::isMarkedOrNewlyAllocated):
(JSC::CellContainer::aboutToMark):
(JSC::CellContainer::isMarked): Deleted.
(JSC::CellContainer::flipIfNecessary): Deleted.
* heap/ConservativeRoots.cpp:
* heap/Heap.cpp:
(JSC::Heap::beginMarking):
(JSC::Heap::endMarking):
(JSC::Heap::collectAllGarbage):
(JSC::Heap::collectImpl):
(JSC::Heap::snapshotMarkedSpace):
(JSC::Heap::prepareForAllocation):
(JSC::Heap::zombifyDeadObjects):
(JSC::MarkedBlockSnapshotFunctor::MarkedBlockSnapshotFunctor): Deleted.
(JSC::MarkedBlockSnapshotFunctor::operator()): Deleted.
(JSC::Heap::resetAllocators): Deleted.
* heap/Heap.h:
* heap/HeapInlines.h:
(JSC::Heap::isMarked):
(JSC::Heap::isMarkedConcurrently):
(JSC::Heap::testAndSetMarked):
* heap/HeapStatistics.cpp:
* heap/HeapUtil.h:
(JSC::HeapUtil::findGCObjectPointersForMarking):
(JSC::HeapUtil::isPointerGCObjectJSCell):
* heap/HeapVerifier.cpp:
* heap/IncrementalSweeper.cpp:
(JSC::IncrementalSweeper::IncrementalSweeper):
(JSC::IncrementalSweeper::doSweep):
(JSC::IncrementalSweeper::sweepNextBlock):
(JSC::IncrementalSweeper::startSweeping):
(JSC::IncrementalSweeper::willFinishSweeping):
* heap/IncrementalSweeper.h:
* heap/LargeAllocation.h:
(JSC::LargeAllocation::isMarked):
(JSC::LargeAllocation::isMarkedConcurrently):
(JSC::LargeAllocation::isMarkedOrNewlyAllocated):
(JSC::LargeAllocation::aboutToMark):
(JSC::LargeAllocation::isMarkedDuringWeakVisiting): Deleted.
(JSC::LargeAllocation::flipIfNecessary): Deleted.
(JSC::LargeAllocation::flipIfNecessaryDuringMarking): Deleted.
* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::MarkedAllocator):
(JSC::MarkedAllocator::isPagedOut):
(JSC::MarkedAllocator::findEmptyBlock):
(JSC::MarkedAllocator::tryAllocateWithoutCollectingImpl):
(JSC::MarkedAllocator::allocateIn):
(JSC::MarkedAllocator::tryAllocateIn):
(JSC::MarkedAllocator::allocateSlowCaseImpl):
(JSC::MarkedAllocator::tryAllocateBlock):
(JSC::MarkedAllocator::addBlock):
(JSC::MarkedAllocator::removeBlock):
(JSC::MarkedAllocator::stopAllocating):
(JSC::MarkedAllocator::prepareForAllocation):
(JSC::MarkedAllocator::lastChanceToFinalize):
(JSC::MarkedAllocator::resumeAllocating):
(JSC::MarkedAllocator::beginMarkingForFullCollection):
(JSC::MarkedAllocator::endMarking):
(JSC::MarkedAllocator::snapshotForEdenCollection):
(JSC::MarkedAllocator::snapshotForFullCollection):
(JSC::MarkedAllocator::findBlockToSweep):
(JSC::MarkedAllocator::sweep):
(JSC::MarkedAllocator::shrink):
(JSC::MarkedAllocator::assertSnapshotEmpty):
(JSC::MarkedAllocator::dump):
(JSC::MarkedAllocator::dumpBits):
(JSC::MarkedAllocator::retire): Deleted.
(JSC::MarkedAllocator::filterNextBlock): Deleted.
(JSC::MarkedAllocator::setNextBlockToSweep): Deleted.
(JSC::MarkedAllocator::reset): Deleted.
* heap/MarkedAllocator.h:
(JSC::MarkedAllocator::forEachBitVector):
(JSC::MarkedAllocator::forEachBitVectorWithName):
(JSC::MarkedAllocator::nextAllocator):
(JSC::MarkedAllocator::setNextAllocator):
(JSC::MarkedAllocator::forEachBlock):
(JSC::MarkedAllocator::resumeAllocating): Deleted.
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::tryCreate):
(JSC::MarkedBlock::Handle::Handle):
(JSC::MarkedBlock::Handle::~Handle):
(JSC::MarkedBlock::MarkedBlock):
(JSC::MarkedBlock::Handle::specializedSweep):
(JSC::MarkedBlock::Handle::sweep):
(JSC::MarkedBlock::Handle::sweepHelperSelectScribbleMode):
(JSC::MarkedBlock::Handle::sweepHelperSelectEmptyMode):
(JSC::MarkedBlock::Handle::sweepHelperSelectHasNewlyAllocated):
(JSC::MarkedBlock::Handle::sweepHelperSelectSweepMode):
(JSC::MarkedBlock::Handle::sweepHelperSelectFlipMode):
(JSC::MarkedBlock::Handle::unsweepWithNoNewlyAllocated):
(JSC::MarkedBlock::Handle::setIsFreeListed):
(JSC::MarkedBlock::Handle::stopAllocating):
(JSC::MarkedBlock::Handle::lastChanceToFinalize):
(JSC::MarkedBlock::Handle::resumeAllocating):
(JSC::MarkedBlock::aboutToMarkSlow):
(JSC::MarkedBlock::clearMarks):
(JSC::MarkedBlock::isMarked):
(JSC::MarkedBlock::Handle::isMarkedOrNewlyAllocated):
(JSC::MarkedBlock::isMarkedOrNewlyAllocated):
(JSC::MarkedBlock::Handle::didConsumeFreeList):
(JSC::MarkedBlock::markCount):
(JSC::MarkedBlock::Handle::isEmpty):
(JSC::MarkedBlock::noteMarkedSlow):
(JSC::MarkedBlock::Handle::removeFromAllocator):
(JSC::MarkedBlock::Handle::didAddToAllocator):
(JSC::MarkedBlock::Handle::didRemoveFromAllocator):
(JSC::MarkedBlock::Handle::isLive):
(JSC::MarkedBlock::Handle::isLiveCell):
(JSC::MarkedBlock::Handle::sweepHelperSelectStateAndSweepMode): Deleted.
(JSC::MarkedBlock::flipIfNecessary): Deleted.
(JSC::MarkedBlock::Handle::flipIfNecessary): Deleted.
(JSC::MarkedBlock::flipIfNecessarySlow): Deleted.
(JSC::MarkedBlock::flipIfNecessaryDuringMarkingSlow): Deleted.
(JSC::MarkedBlock::Handle::willRemoveBlock): Deleted.
(WTF::printInternal): Deleted.
* heap/MarkedBlock.h:
(JSC::MarkedBlock::Handle::isFreeListed):
(JSC::MarkedBlock::Handle::index):
(JSC::MarkedBlock::aboutToMark):
(JSC::MarkedBlock::isMarked):
(JSC::MarkedBlock::isMarkedConcurrently):
(JSC::MarkedBlock::Handle::isMarkedOrNewlyAllocated):
(JSC::MarkedBlock::isMarkedOrNewlyAllocated):
(JSC::MarkedBlock::Handle::isOnBlocksToSweep): Deleted.
(JSC::MarkedBlock::Handle::setIsOnBlocksToSweep): Deleted.
(JSC::MarkedBlock::Handle::state): Deleted.
(JSC::MarkedBlock::flipIfNecessary): Deleted.
(JSC::MarkedBlock::flipIfNecessaryDuringMarking): Deleted.
(JSC::MarkedBlock::Handle::flipIfNecessary): Deleted.
(JSC::MarkedBlock::Handle::flipIfNecessaryDuringMarking): Deleted.
(JSC::MarkedBlock::Handle::flipForEdenCollection): Deleted.
(JSC::MarkedBlock::isMarkedDuringWeakVisiting): Deleted.
(JSC::MarkedBlock::Handle::isLive): Deleted.
(JSC::MarkedBlock::Handle::isLiveCell): Deleted.
(JSC::MarkedBlock::Handle::forEachLiveCell): Deleted.
(JSC::MarkedBlock::Handle::forEachDeadCell): Deleted.
(JSC::MarkedBlock::Handle::needsSweeping): Deleted.
(JSC::MarkedBlock::Handle::isAllocated): Deleted.
(JSC::MarkedBlock::Handle::isMarked): Deleted.
* heap/MarkedBlockInlines.h: Added.
(JSC::MarkedBlock::Handle::isLive):
(JSC::MarkedBlock::Handle::isLiveCell):
(JSC::MarkedBlock::Handle::forEachLiveCell):
(JSC::MarkedBlock::Handle::forEachDeadCell):
(JSC::MarkedBlock::resetVersion):
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::MarkedSpace):
(JSC::MarkedSpace::allocate):
(JSC::MarkedSpace::tryAllocate):
(JSC::MarkedSpace::sweep):
(JSC::MarkedSpace::prepareForAllocation):
(JSC::MarkedSpace::shrink):
(JSC::MarkedSpace::clearNewlyAllocated):
(JSC::MarkedSpace::beginMarking):
(JSC::MarkedSpace::endMarking):
(JSC::MarkedSpace::didAllocateInBlock):
(JSC::MarkedSpace::findEmptyBlock):
(JSC::MarkedSpace::snapshot):
(JSC::MarkedSpace::assertSnapshotEmpty):
(JSC::MarkedSpace::dumpBits):
(JSC::MarkedSpace::zombifySweep): Deleted.
(JSC::MarkedSpace::resetAllocators): Deleted.
(JSC::VerifyMarked::operator()): Deleted.
(JSC::MarkedSpace::flip): Deleted.
* heap/MarkedSpace.h:
(JSC::MarkedSpace::nextVersion):
(JSC::MarkedSpace::firstAllocator):
(JSC::MarkedSpace::allocatorForEmptyAllocation):
(JSC::MarkedSpace::forEachAllocator):
(JSC::MarkedSpace::blocksWithNewObjects): Deleted.
(JSC::MarkedSpace::setIsMarking): Deleted.
(JSC::MarkedSpace::forEachLiveCell): Deleted.
(JSC::MarkedSpace::forEachDeadCell): Deleted.
* heap/MarkedSpaceInlines.h: Added.
(JSC::MarkedSpace::forEachLiveCell):
(JSC::MarkedSpace::forEachDeadCell):
* heap/SlotVisitor.cpp:
(JSC::SlotVisitor::setMarkedAndAppendToMarkStack):
(JSC::SlotVisitor::markAuxiliary):
(JSC::SlotVisitor::visitChildren):
* heap/Weak.h:
(WTF::HashTraits<JSC::Weak<T>>::emptyValue):
(WTF::HashTraits<JSC::Weak<T>>::peek):
* heap/WeakBlock.cpp:
(JSC::WeakBlock::specializedVisit):
(JSC::WeakBlock::reap):
* heap/WeakInlines.h:
(WTF::HashTraits<JSC::Weak<T>>::emptyValue): Deleted.
(WTF::HashTraits<JSC::Weak<T>>::peek): Deleted.
* jit/JITThunks.h:
* runtime/JSGlobalObject.cpp:
* runtime/PrototypeMap.h:
* runtime/SamplingProfiler.cpp:
* runtime/WeakGCMap.h:
* tools/JSDollarVMPrototype.cpp:
Source/WTF:
The main change here is to bring back FastBitVector.cpp, so that I could outline some
large slow path functions. This also adds some utilities, like atomicSetAndCheck() and
isEmpty(). The GC uses these.
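As an illustration of the kind of primitive this is (names, types, and the signature here
are illustrative, not WTF's actual API), an atomic set-and-check over a bitvector word is
just a compare-and-swap loop:

    #include <atomic>
    #include <cstddef>
    #include <cstdint>

    // Hypothetical sketch: atomically set bit `index` in a word array and report
    // whether this call is the one that changed it from 0 to 1.
    inline bool atomicSetAndCheckBit(std::atomic<uint32_t>* words, size_t index)
    {
        std::atomic<uint32_t>& word = words[index / 32];
        uint32_t mask = 1u << (index % 32);
        uint32_t oldValue = word.load(std::memory_order_relaxed);
        for (;;) {
            if (oldValue & mask)
                return false; // somebody else already set it
            if (word.compare_exchange_weak(oldValue, oldValue | mask, std::memory_order_relaxed))
                return true;  // we set it
        }
    }

Something of this shape lets racing GC threads agree on which one "won" a bit without
taking a lock.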
* WTF.xcodeproj/project.pbxproj:
* wtf/CMakeLists.txt:
* wtf/FastBitVector.cpp: Added.
(WTF::FastBitVectorWordOwner::setEqualsSlow):
(WTF::FastBitVectorWordOwner::resizeSlow):
* wtf/FastBitVector.h:
(WTF::FastBitVectorWordOwner::operator=):
(WTF::FastBitVectorWordOwner::resize):
(WTF::FastBitVectorImpl::isEmpty):
(WTF::FastBitVector::atomicSetAndCheck):
(WTF::FastBitVector::operator[]): Deleted.
Tools:
Remove the always-trigger-copy-phase configuration.
* Scripts/run-jsc-stress-tests:
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@206154 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/Source/JavaScriptCore/heap/MarkedAllocator.cpp b/Source/JavaScriptCore/heap/MarkedAllocator.cpp
index 375395f..d961022 100644
--- a/Source/JavaScriptCore/heap/MarkedAllocator.cpp
+++ b/Source/JavaScriptCore/heap/MarkedAllocator.cpp
@@ -26,10 +26,12 @@
#include "config.h"
#include "MarkedAllocator.h"
+#include "AllocationScope.h"
#include "GCActivityCallback.h"
#include "Heap.h"
#include "IncrementalSweeper.h"
#include "JSCInlines.h"
+#include "MarkedBlockInlines.h"
#include "SuperSampler.h"
#include "VM.h"
#include <wtf/CurrentTime.h>
@@ -39,7 +41,6 @@
MarkedAllocator::MarkedAllocator(Heap* heap, MarkedSpace* markedSpace, size_t cellSize, const AllocatorAttributes& attributes)
: m_currentBlock(0)
, m_lastActiveBlock(0)
- , m_nextBlockToSweep(nullptr)
, m_cellSize(static_cast<unsigned>(cellSize))
, m_attributes(attributes)
, m_heap(heap)
@@ -50,11 +51,13 @@
bool MarkedAllocator::isPagedOut(double deadline)
{
unsigned itersSinceLastTimeCheck = 0;
- MarkedBlock::Handle* block = m_blockList.begin();
- while (block) {
- block = filterNextBlock(block->next());
- if (block)
- block->flipIfNecessary(); // Forces us to touch the memory of the block, but has no semantic effect.
+ for (size_t index = 0; index < m_blocks.size(); ++index) {
+ MarkedBlock::Handle* block = m_blocks[index];
+ if (block) {
+ // Forces us to touch the memory of the block, but has no semantic effect.
+ if (block->needsFlip())
+ block->block().resetVersion();
+ }
++itersSinceLastTimeCheck;
if (itersSinceLastTimeCheck >= Heap::s_timeCheckResolution) {
double currentTime = WTF::monotonicallyIncreasingTime();
@@ -66,84 +69,96 @@
return false;
}
-void MarkedAllocator::retire(MarkedBlock::Handle* block)
+bool MarkedAllocator::shouldStealEmptyBlocksFromOtherAllocators() const
{
- LockHolder locker(m_lock); // This will be called in parallel during GC.
- if (block == m_currentBlock) {
- // This happens when the mutator is running. We finished a full GC and marked too few things
- // to retire. Then we started allocating in this block. Then a barrier ran, which marked an
- // object in this block, which put it over the retirement threshold. It's OK to simply do
- // nothing in that case.
- return;
- }
- if (block == m_lastActiveBlock) {
- // This can easily happen during marking. It would be easy to handle this case, but it's
- // just as easy to ignore it.
- return;
- }
- RELEASE_ASSERT(block->isOnList());
- if (block == m_nextBlockToSweep)
- m_nextBlockToSweep = filterNextBlock(block->next());
- block->remove();
- m_retiredBlocks.push(block);
+ return !needsDestruction();
}
-MarkedBlock::Handle* MarkedAllocator::filterNextBlock(MarkedBlock::Handle* block)
+MarkedBlock::Handle* MarkedAllocator::findEmptyBlockToSteal()
{
- if (block == m_blockList.end())
+ // Don't allow others to steal from us, if we wouldn't steal from others.
+ if (!shouldStealEmptyBlocksFromOtherAllocators())
return nullptr;
- return block;
+
+ m_emptyCursor = m_empty.findBit(m_emptyCursor, true);
+ if (m_emptyCursor >= m_blocks.size())
+ return nullptr;
+ return m_blocks[m_emptyCursor];
}
-void MarkedAllocator::setNextBlockToSweep(MarkedBlock::Handle* block)
+void MarkedAllocator::didConsumeFreeList()
{
- m_nextBlockToSweep = filterNextBlock(block);
+ if (m_currentBlock)
+ m_currentBlock->didConsumeFreeList();
+
+ setFreeList(FreeList());
+ m_currentBlock = nullptr;
}
-void* MarkedAllocator::tryAllocateWithoutCollectingImpl()
+void* MarkedAllocator::tryAllocateWithoutCollecting()
{
SuperSamplerScope superSamplerScope(false);
- if (m_currentBlock) {
- ASSERT(m_currentBlock == m_nextBlockToSweep);
- m_currentBlock->didConsumeFreeList();
- setNextBlockToSweep(m_currentBlock->next());
+ ASSERT(!m_currentBlock);
+ ASSERT(!m_freeList);
+
+ for (;;) {
+ m_allocationCursor = (m_canAllocateButNotEmpty | m_empty).findBit(m_allocationCursor, true);
+ if (m_allocationCursor >= m_blocks.size())
+ break;
+
+ setIsCanAllocateButNotEmpty(m_allocationCursor, false);
+
+ if (void* result = tryAllocateIn(m_blocks[m_allocationCursor]))
+ return result;
}
- setFreeList(FreeList());
-
- RELEASE_ASSERT(m_nextBlockToSweep != m_blockList.end());
-
- MarkedBlock::Handle* next;
- for (MarkedBlock::Handle*& block = m_nextBlockToSweep; block; block = next) {
- next = filterNextBlock(block->next());
-
- // It would be super weird if the blocks we are sweeping have anything allocated during this
- // cycle.
- ASSERT(!block->hasAnyNewlyAllocated());
-
- FreeList freeList = block->sweep(MarkedBlock::Handle::SweepToFreeList);
-
- // It's possible to stumble on a complete-full block. Marking tries to retire these, but
- // that algorithm is racy and may forget to do it sometimes.
- if (freeList.allocationWillFail()) {
- ASSERT(block->isFreeListed());
- block->unsweepWithNoNewlyAllocated();
- ASSERT(block->isMarked());
- retire(block);
- continue;
+ if (Options::stealEmptyBlocksFromOtherAllocators()
+ && shouldStealEmptyBlocksFromOtherAllocators()) {
+ if (MarkedBlock::Handle* block = m_markedSpace->findEmptyBlockToSteal()) {
+ block->sweep();
+
+ // It's good that this clears canAllocateButNotEmpty as well as all other bits,
+ // because there is a remote chance that a block may have both canAllocateButNotEmpty
+ // and empty set at the same time.
+ block->removeFromAllocator();
+ addBlock(block);
+ return allocateIn(block);
}
-
- m_currentBlock = block;
- setFreeList(freeList);
- break;
}
- if (!m_freeList) {
- m_currentBlock = 0;
- return 0;
- }
+ return nullptr;
+}
+void* MarkedAllocator::allocateIn(MarkedBlock::Handle* block)
+{
+ void* result = tryAllocateIn(block);
+ RELEASE_ASSERT(result);
+ return result;
+}
+
+void* MarkedAllocator::tryAllocateIn(MarkedBlock::Handle* block)
+{
+ ASSERT(block);
+ ASSERT(!block->hasAnyNewlyAllocated());
+ ASSERT(!block->isFreeListed());
+
+ FreeList freeList = block->sweep(MarkedBlock::Handle::SweepToFreeList);
+
+ // It's possible to stumble on a completely full block. Marking tries to retire these, but
+ // that algorithm is racy and may forget to do it sometimes.
+ if (freeList.allocationWillFail()) {
+ ASSERT(block->isFreeListed());
+ block->unsweepWithNoNewlyAllocated();
+ ASSERT(!block->isFreeListed());
+ ASSERT(!isEmpty(block));
+ ASSERT(!isCanAllocateButNotEmpty(block));
+ return nullptr;
+ }
+
+ m_currentBlock = block;
+ setFreeList(freeList);
+
void* result;
if (m_freeList.remaining) {
unsigned cellSize = m_cellSize;
@@ -155,21 +170,11 @@
result = head;
}
RELEASE_ASSERT(result);
+ setIsEden(m_currentBlock, true);
m_markedSpace->didAllocateInBlock(m_currentBlock);
return result;
}
-inline void* MarkedAllocator::tryAllocateWithoutCollecting()
-{
- ASSERT(!m_heap->isBusy());
- m_heap->m_operationInProgress = Allocation;
- void* result = tryAllocateWithoutCollectingImpl();
-
- m_heap->m_operationInProgress = NoOperation;
- ASSERT(result || !m_currentBlock);
- return result;
-}
-
ALWAYS_INLINE void MarkedAllocator::doTestCollectionsIfNeeded()
{
if (!Options::slowPathAllocsBetweenGCs())
@@ -206,19 +211,17 @@
ASSERT(!m_markedSpace->isIterating());
m_heap->didAllocate(m_freeList.originalSize);
+ didConsumeFreeList();
+
+ m_heap->collectIfNecessaryOrDefer();
+
+ AllocationScope allocationScope(*m_heap);
+
void* result = tryAllocateWithoutCollecting();
if (LIKELY(result != 0))
return result;
- if (m_heap->collectIfNecessaryOrDefer()) {
- result = tryAllocateWithoutCollecting();
- if (result)
- return result;
- }
-
- ASSERT(!m_heap->shouldCollect());
-
MarkedBlock::Handle* block = tryAllocateBlock();
if (!block) {
if (crashOnFailure)
@@ -227,8 +230,7 @@
return nullptr;
}
addBlock(block);
-
- result = tryAllocateWithoutCollecting();
+ result = allocateIn(block);
ASSERT(result);
return result;
}
@@ -249,37 +251,75 @@
MarkedBlock::Handle* MarkedAllocator::tryAllocateBlock()
{
SuperSamplerScope superSamplerScope(false);
- return MarkedBlock::tryCreate(*m_heap, this, m_cellSize, m_attributes);
+
+ MarkedBlock::Handle* handle = MarkedBlock::tryCreate(*m_heap);
+ if (!handle)
+ return nullptr;
+
+ m_markedSpace->didAddBlock(handle);
+
+ return handle;
}
void MarkedAllocator::addBlock(MarkedBlock::Handle* block)
{
- ASSERT(!m_currentBlock);
- ASSERT(!m_freeList);
+ size_t index;
+ if (m_freeBlockIndices.isEmpty()) {
+ index = m_blocks.size();
+
+ size_t oldCapacity = m_blocks.capacity();
+ m_blocks.append(block);
+ if (m_blocks.capacity() != oldCapacity) {
+ forEachBitVector(
+ [&] (FastBitVector& vector) {
+ ASSERT_UNUSED(vector, vector.numBits() == oldCapacity);
+ });
+
+ ASSERT(m_blocks.capacity() > oldCapacity);
+
+ forEachBitVector(
+ [&] (FastBitVector& vector) {
+ vector.resize(m_blocks.capacity());
+ });
+ }
+ } else {
+ index = m_freeBlockIndices.takeLast();
+ ASSERT(!m_blocks[index]);
+ m_blocks[index] = block;
+ }
- m_blockList.append(block);
- setNextBlockToSweep(block);
- m_markedSpace->didAddBlock(block);
+ forEachBitVector(
+ [&] (FastBitVector& vector) {
+ ASSERT_UNUSED(vector, !vector[index]);
+ });
+
+ // This is the point at which the block learns of its cellSize() and attributes().
+ block->didAddToAllocator(this, index);
+
+ setIsLive(index, true);
+ setIsEmpty(index, true);
}
void MarkedAllocator::removeBlock(MarkedBlock::Handle* block)
{
- if (m_currentBlock == block) {
- m_currentBlock = filterNextBlock(m_currentBlock->next());
- setFreeList(FreeList());
- }
- if (m_nextBlockToSweep == block)
- setNextBlockToSweep(m_nextBlockToSweep->next());
+ ASSERT(block->allocator() == this);
+ ASSERT(m_blocks[block->index()] == block);
- block->willRemoveBlock();
- m_blockList.remove(block);
+ m_blocks[block->index()] = nullptr;
+ m_freeBlockIndices.append(block->index());
+
+ forEachBitVector(
+ [&] (FastBitVector& vector) {
+ vector[block->index()] = false;
+ });
+
+ block->didRemoveFromAllocator();
}
void MarkedAllocator::stopAllocating()
{
- if (m_heap->operationInProgress() == FullCollection)
- m_blockList.takeFrom(m_retiredBlocks);
-
+ if (false)
+ dataLog(RawPointer(this), ": MarkedAllocator::stopAllocating!\n");
ASSERT(!m_lastActiveBlock);
if (!m_currentBlock) {
ASSERT(!m_freeList);
@@ -292,29 +332,27 @@
m_freeList = FreeList();
}
-void MarkedAllocator::reset()
+void MarkedAllocator::prepareForAllocation()
{
- m_lastActiveBlock = 0;
- m_currentBlock = 0;
+ m_lastActiveBlock = nullptr;
+ m_currentBlock = nullptr;
setFreeList(FreeList());
- setNextBlockToSweep(m_blockList.begin());
+ m_allocationCursor = 0;
+ m_emptyCursor = 0;
+ m_unsweptCursor = 0;
+
+ m_eden.clearAll();
if (UNLIKELY(Options::useImmortalObjects())) {
- MarkedBlock::Handle* next;
- for (MarkedBlock::Handle*& block = m_nextBlockToSweep; block; block = next) {
- next = filterNextBlock(block->next());
-
- FreeList freeList = block->sweep(MarkedBlock::Handle::SweepToFreeList);
- block->zap(freeList);
- retire(block);
- }
+ // FIXME: Make this work again.
+ // https://bugs.webkit.org/show_bug.cgi?id=162296
+ RELEASE_ASSERT_NOT_REACHED();
}
}
void MarkedAllocator::lastChanceToFinalize()
{
- m_blockList.takeFrom(m_retiredBlocks);
forEachBlock(
[&] (MarkedBlock::Handle* block) {
block->lastChanceToFinalize();
@@ -326,4 +364,121 @@
m_freeList = freeList;
}
+void MarkedAllocator::resumeAllocating()
+{
+ if (!m_lastActiveBlock)
+ return;
+
+ m_freeList = m_lastActiveBlock->resumeAllocating();
+ m_currentBlock = m_lastActiveBlock;
+ m_lastActiveBlock = nullptr;
+}
+
+void MarkedAllocator::beginMarkingForFullCollection()
+{
+ // Mark bits are sticky and so is our summary of mark bits. We only clear these during full
+ // collections, so if you survived the last collection you will survive the next one so long
+ // as the next one is eden.
+ m_markingNotEmpty.clearAll();
+ m_markingRetired.clearAll();
+}
+
+void MarkedAllocator::endMarking()
+{
+ m_allocated.clearAll();
+
+ // It's surprising and frustrating to comprehend, but the end-of-marking flip does not need to
+ // know what kind of collection it is. That knowledge is already encoded in the m_markingXYZ
+ // vectors.
+
+ if (needsDestruction()) {
+ // If blocks need destruction then nothing is empty! This is a correct assertion but may
+ // become wrong once we go full concurrent: when we create a new block, it will flicker
+ // into the empty set for a tiny moment. On the other hand, this code is likely to be run
+ // in stopTheWorld.
+ ASSERT(m_empty.isEmpty());
+ m_canAllocateButNotEmpty = m_live & ~m_markingRetired;
+ return;
+ }
+
+ m_empty = m_live & ~m_markingNotEmpty;
+ m_canAllocateButNotEmpty = m_live & m_markingNotEmpty & ~m_markingRetired;
+
+ if (false) {
+ dataLog("Bits for ", m_cellSize, ", ", m_attributes, " after endMarking:\n");
+ dumpBits(WTF::dataFile());
+ }
+}
+
+void MarkedAllocator::snapshotUnsweptForEdenCollection()
+{
+ m_unswept |= m_eden;
+}
+
+void MarkedAllocator::snapshotUnsweptForFullCollection()
+{
+ m_unswept = m_live;
+}
+
+MarkedBlock::Handle* MarkedAllocator::findBlockToSweep()
+{
+ m_unsweptCursor = m_unswept.findBit(m_unsweptCursor, true);
+ if (m_unsweptCursor >= m_blocks.size())
+ return nullptr;
+ return m_blocks[m_unsweptCursor];
+}
+
+void MarkedAllocator::sweep()
+{
+ m_unswept.forEachSetBit(
+ [&] (size_t index) {
+ m_blocks[index]->sweep();
+ });
+}
+
+void MarkedAllocator::shrink()
+{
+ m_empty.forEachSetBit(
+ [&] (size_t index) {
+ m_markedSpace->freeBlock(m_blocks[index]);
+ });
+}
+
+void MarkedAllocator::assertNoUnswept()
+{
+ if (ASSERT_DISABLED)
+ return;
+
+ if (m_unswept.isEmpty())
+ return;
+
+ dataLog("Assertion failed: unswept not empty in ", *this, ".\n");
+ dumpBits();
+ ASSERT_NOT_REACHED();
+}
+
+void MarkedAllocator::dump(PrintStream& out) const
+{
+ out.print(RawPointer(this), ":", m_cellSize, "/", m_attributes);
+}
+
+void MarkedAllocator::dumpBits(PrintStream& out)
+{
+ unsigned maxNameLength = 0;
+ forEachBitVectorWithName(
+ [&] (FastBitVector&, const char* name) {
+ unsigned length = strlen(name);
+ maxNameLength = std::max(maxNameLength, length);
+ });
+
+ forEachBitVectorWithName(
+ [&] (FastBitVector& vector, const char* name) {
+ out.print(" ", name, ": ");
+ for (unsigned i = maxNameLength - strlen(name); i--;)
+ out.print(" ");
+ out.print(vector, "\n");
+ });
+}
+
} // namespace JSC
+