Marking should be generational
https://bugs.webkit.org/show_bug.cgi?id=126552
Reviewed by Geoffrey Garen.
Source/JavaScriptCore:
Re-marking the same objects over and over is a waste of effort. This patch implements
the sticky mark bit algorithm (along with our already-present write barriers) to reduce
the garbage collection overhead caused by rescanning objects.
There are now two collection modes, EdenCollection and FullCollection. EdenCollections
only visit new objects or objects that were added to the remembered set by a write barrier.
FullCollections are normal collections that visit all objects regardless of their
generation.
In this patch, EdenCollections do not do anything in CopiedSpace. This will be fixed in
https://bugs.webkit.org/show_bug.cgi?id=126555.
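The interaction between sticky mark bits, the write barrier, and the two collection modes described above can be sketched as follows. This is a standalone illustration, not JSC's actual classes: Cell, MiniHeap, and their members are hypothetical stand-ins for marked cells, the Heap, and the remembered set.

```cpp
// Sketch of sticky-mark-bit generational marking (hypothetical names).
// Marks survive an EdenCollection, so only unmarked (new) cells and cells
// re-added via the remembered set are visited; a FullCollection clears
// every mark first and visits everything.
#include <cassert>
#include <unordered_set>
#include <vector>

enum class CollectionType { Eden, Full };

struct Cell {
    bool marked = false;                 // "sticky": persists across Eden collections
    std::vector<Cell*> children;
};

struct MiniHeap {
    std::vector<Cell*> allCells;         // stands in for the marked blocks
    std::vector<Cell*> roots;
    std::unordered_set<Cell*> rememberedSet;
    int cellsVisited = 0;

    Cell* allocate()
    {
        Cell* cell = new Cell;           // leaked; fine for a sketch
        allCells.push_back(cell);
        return cell;
    }

    // Write barrier: when a marked (old) cell gains a pointer to an
    // unmarked (new) cell, remember the old cell so the next
    // EdenCollection re-scans it.
    void writeBarrier(Cell* from, Cell* to)
    {
        from->children.push_back(to);
        if (from->marked && !to->marked)
            rememberedSet.insert(from);
    }

    void visit(Cell* cell)
    {
        if (cell->marked)
            return;                      // sticky mark bit: already scanned
        cell->marked = true;
        ++cellsVisited;
        for (Cell* child : cell->children)
            visit(child);
    }

    void collect(CollectionType type)
    {
        cellsVisited = 0;
        if (type == CollectionType::Full) {
            for (Cell* cell : allCells)  // FullCollection ignores generations
                cell->marked = false;
        }
        for (Cell* cell : rememberedSet) // unmark so visit() re-scans them
            cell->marked = false;
        for (Cell* cell : rememberedSet)
            visit(cell);
        rememberedSet.clear();
        for (Cell* root : roots)
            visit(root);
    }
};
```

In this model an EdenCollection after one barriered store visits only the remembered old cell and the new cell it points at, while already-marked old cells are skipped entirely.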
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::visitAggregate):
* bytecode/CodeBlock.h:
(JSC::CodeBlockSet::mark):
* dfg/DFGOperations.cpp:
* heap/CodeBlockSet.cpp:
(JSC::CodeBlockSet::add):
(JSC::CodeBlockSet::traceMarked):
(JSC::CodeBlockSet::rememberCurrentlyExecutingCodeBlocks):
* heap/CodeBlockSet.h:
* heap/CopiedBlockInlines.h:
(JSC::CopiedBlock::reportLiveBytes):
* heap/CopiedSpace.cpp:
(JSC::CopiedSpace::didStartFullCollection):
* heap/CopiedSpace.h:
(JSC::CopiedSpace::heap):
* heap/Heap.cpp:
(JSC::Heap::Heap):
(JSC::Heap::didAbandon):
(JSC::Heap::markRoots):
(JSC::Heap::copyBackingStores):
(JSC::Heap::addToRememberedSet):
(JSC::Heap::collectAllGarbage):
(JSC::Heap::collect):
(JSC::Heap::didAllocate):
(JSC::Heap::writeBarrier):
* heap/Heap.h:
(JSC::Heap::isInRememberedSet):
(JSC::Heap::operationInProgress):
(JSC::Heap::shouldCollect):
(JSC::Heap::isCollecting):
(JSC::Heap::isWriteBarrierEnabled):
(JSC::Heap::writeBarrier):
* heap/HeapOperation.h:
* heap/MarkStack.cpp:
(JSC::MarkStackArray::~MarkStackArray):
(JSC::MarkStackArray::clear):
(JSC::MarkStackArray::fillVector):
* heap/MarkStack.h:
* heap/MarkedAllocator.cpp:
(JSC::isListPagedOut):
(JSC::MarkedAllocator::isPagedOut):
(JSC::MarkedAllocator::tryAllocateHelper):
(JSC::MarkedAllocator::addBlock):
(JSC::MarkedAllocator::removeBlock):
(JSC::MarkedAllocator::reset):
* heap/MarkedAllocator.h:
(JSC::MarkedAllocator::MarkedAllocator):
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::clearMarks):
(JSC::MarkedBlock::clearRememberedSet):
(JSC::MarkedBlock::clearMarksWithCollectionType):
(JSC::MarkedBlock::lastChanceToFinalize):
* heap/MarkedBlock.h: Changed atomSize to 16 bytes because we have no objects smaller
than 16 bytes. This also helps pay for the additional Bitmap for the remembered set.
(JSC::MarkedBlock::didConsumeEmptyFreeList):
(JSC::MarkedBlock::setRemembered):
(JSC::MarkedBlock::clearRemembered):
(JSC::MarkedBlock::atomicClearRemembered):
(JSC::MarkedBlock::isRemembered):
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::~MarkedSpace):
(JSC::MarkedSpace::resetAllocators):
(JSC::MarkedSpace::visitWeakSets):
(JSC::MarkedSpace::reapWeakSets):
(JSC::VerifyMarked::operator()):
(JSC::MarkedSpace::clearMarks):
* heap/MarkedSpace.h:
(JSC::ClearMarks::operator()):
(JSC::ClearRememberedSet::operator()):
(JSC::MarkedSpace::didAllocateInBlock):
(JSC::MarkedSpace::clearRememberedSet):
* heap/SlotVisitor.cpp:
(JSC::SlotVisitor::~SlotVisitor):
(JSC::SlotVisitor::clearMarkStack):
* heap/SlotVisitor.h:
(JSC::SlotVisitor::markStack):
(JSC::SlotVisitor::sharedData):
* heap/SlotVisitorInlines.h:
(JSC::SlotVisitor::internalAppend):
(JSC::SlotVisitor::unconditionallyAppend):
(JSC::SlotVisitor::copyLater):
(JSC::SlotVisitor::reportExtraMemoryUsage):
(JSC::SlotVisitor::heap):
* jit/Repatch.cpp:
* runtime/JSGenericTypedArrayViewInlines.h:
(JSC::JSGenericTypedArrayView<Adaptor>::visitChildren):
* runtime/JSPropertyNameIterator.h:
(JSC::StructureRareData::setEnumerationCache):
* runtime/JSString.cpp:
(JSC::JSString::visitChildren):
* runtime/StructureRareDataInlines.h:
(JSC::StructureRareData::setPreviousID):
(JSC::StructureRareData::setObjectToStringValue):
* runtime/WeakMapData.cpp:
(JSC::WeakMapData::visitChildren):
Source/WTF:
* wtf/Bitmap.h:
(WTF::WordType>::count): Added a cast that became necessary when Bitmap
is used with types smaller than int32_t.
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@161615 268f45cc-cd09-0410-ab3c-d52691b4dbfc
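The heap-sizing policy changed by this patch (the Heap::collect hunk in Heap.cpp below) can be sketched in isolation. The names below are simplified stand-ins for Heap's members, and the doubling rule stands in for proportionalHeapSize; the real code's constants differ.

```cpp
// Sketch of the post-collection sizing policy: full collections re-derive
// the heap limit from the live size, while eden collections grow the limit
// by the bytes that survived and request a full collection once eden would
// fall below a third of the heap. Hypothetical stand-in, not JSC's code.
#include <algorithm>
#include <cassert>
#include <cstddef>

struct SizingPolicy {
    size_t maxHeapSize;
    size_t maxEdenSize;
    size_t sizeAfterLastCollect = 0;
    bool shouldDoFullCollection = false;

    explicit SizingPolicy(size_t minHeapSize)
        : maxHeapSize(minHeapSize)
        , maxEdenSize(minHeapSize)
    {
    }

    void didFinishCollection(bool wasFull, size_t currentHeapSize, size_t minHeapSize)
    {
        if (wasFull) {
            // Re-derive the limit from the current live size, with a floor,
            // to avoid pathological churn in very small and very large heaps.
            maxHeapSize = std::max(minHeapSize, 2 * currentHeapSize); // proportionalHeapSize stand-in
            maxEdenSize = maxHeapSize - currentHeapSize;
        } else {
            maxEdenSize = maxHeapSize - currentHeapSize;
            double edenToOldGenerationRatio = double(maxEdenSize) / double(maxHeapSize);
            if (edenToOldGenerationRatio < 1.0 / 3.0)
                shouldDoFullCollection = true;      // old generation got too big
            maxHeapSize += currentHeapSize - sizeAfterLastCollect;
            maxEdenSize = maxHeapSize - currentHeapSize;
        }
        sizeAfterLastCollect = currentHeapSize;
    }
};
```

A collection is triggered once bytes allocated this cycle exceed maxEdenSize, which is why the patch renames m_bytesAllocatedLimit accordingly.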
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index 8ef8667..3089948 100644
--- a/Source/JavaScriptCore/ChangeLog
+++ b/Source/JavaScriptCore/ChangeLog
@@ -1,3 +1,119 @@
+2014-01-07 Mark Hahnenberg <mhahnenberg@apple.com>
+
+ Marking should be generational
+ https://bugs.webkit.org/show_bug.cgi?id=126552
+
+ Reviewed by Geoffrey Garen.
+
+ Re-marking the same objects over and over is a waste of effort. This patch implements
+ the sticky mark bit algorithm (along with our already-present write barriers) to reduce
+ overhead during garbage collection caused by rescanning objects.
+
+ There are now two collection modes, EdenCollection and FullCollection. EdenCollections
+ only visit new objects or objects that were added to the remembered set by a write barrier.
+ FullCollections are normal collections that visit all objects regardless of their
+ generation.
+
+ In this patch EdenCollections do not do anything in CopiedSpace. This will be fixed in
+ https://bugs.webkit.org/show_bug.cgi?id=126555.
+
+ * bytecode/CodeBlock.cpp:
+ (JSC::CodeBlock::visitAggregate):
+ * bytecode/CodeBlock.h:
+ (JSC::CodeBlockSet::mark):
+ * dfg/DFGOperations.cpp:
+ * heap/CodeBlockSet.cpp:
+ (JSC::CodeBlockSet::add):
+ (JSC::CodeBlockSet::traceMarked):
+ (JSC::CodeBlockSet::rememberCurrentlyExecutingCodeBlocks):
+ * heap/CodeBlockSet.h:
+ * heap/CopiedBlockInlines.h:
+ (JSC::CopiedBlock::reportLiveBytes):
+ * heap/CopiedSpace.cpp:
+ (JSC::CopiedSpace::didStartFullCollection):
+ * heap/CopiedSpace.h:
+ (JSC::CopiedSpace::heap):
+ * heap/Heap.cpp:
+ (JSC::Heap::Heap):
+ (JSC::Heap::didAbandon):
+ (JSC::Heap::markRoots):
+ (JSC::Heap::copyBackingStores):
+ (JSC::Heap::addToRememberedSet):
+ (JSC::Heap::collectAllGarbage):
+ (JSC::Heap::collect):
+ (JSC::Heap::didAllocate):
+ (JSC::Heap::writeBarrier):
+ * heap/Heap.h:
+ (JSC::Heap::isInRememberedSet):
+ (JSC::Heap::operationInProgress):
+ (JSC::Heap::shouldCollect):
+ (JSC::Heap::isCollecting):
+ (JSC::Heap::isWriteBarrierEnabled):
+ (JSC::Heap::writeBarrier):
+ * heap/HeapOperation.h:
+ * heap/MarkStack.cpp:
+ (JSC::MarkStackArray::~MarkStackArray):
+ (JSC::MarkStackArray::clear):
+ (JSC::MarkStackArray::fillVector):
+ * heap/MarkStack.h:
+ * heap/MarkedAllocator.cpp:
+ (JSC::isListPagedOut):
+ (JSC::MarkedAllocator::isPagedOut):
+ (JSC::MarkedAllocator::tryAllocateHelper):
+ (JSC::MarkedAllocator::addBlock):
+ (JSC::MarkedAllocator::removeBlock):
+ (JSC::MarkedAllocator::reset):
+ * heap/MarkedAllocator.h:
+ (JSC::MarkedAllocator::MarkedAllocator):
+ * heap/MarkedBlock.cpp:
+ (JSC::MarkedBlock::clearMarks):
+ (JSC::MarkedBlock::clearRememberedSet):
+ (JSC::MarkedBlock::clearMarksWithCollectionType):
+ (JSC::MarkedBlock::lastChanceToFinalize):
+ * heap/MarkedBlock.h: Changed atomSize to 16 bytes because we have no objects smaller
+ than 16 bytes. This is also to pay for the additional Bitmap for the remembered set.
+ (JSC::MarkedBlock::didConsumeEmptyFreeList):
+ (JSC::MarkedBlock::setRemembered):
+ (JSC::MarkedBlock::clearRemembered):
+ (JSC::MarkedBlock::atomicClearRemembered):
+ (JSC::MarkedBlock::isRemembered):
+ * heap/MarkedSpace.cpp:
+ (JSC::MarkedSpace::~MarkedSpace):
+ (JSC::MarkedSpace::resetAllocators):
+ (JSC::MarkedSpace::visitWeakSets):
+ (JSC::MarkedSpace::reapWeakSets):
+ (JSC::VerifyMarked::operator()):
+ (JSC::MarkedSpace::clearMarks):
+ * heap/MarkedSpace.h:
+ (JSC::ClearMarks::operator()):
+ (JSC::ClearRememberedSet::operator()):
+ (JSC::MarkedSpace::didAllocateInBlock):
+ (JSC::MarkedSpace::clearRememberedSet):
+ * heap/SlotVisitor.cpp:
+ (JSC::SlotVisitor::~SlotVisitor):
+ (JSC::SlotVisitor::clearMarkStack):
+ * heap/SlotVisitor.h:
+ (JSC::SlotVisitor::markStack):
+ (JSC::SlotVisitor::sharedData):
+ * heap/SlotVisitorInlines.h:
+ (JSC::SlotVisitor::internalAppend):
+ (JSC::SlotVisitor::unconditionallyAppend):
+ (JSC::SlotVisitor::copyLater):
+ (JSC::SlotVisitor::reportExtraMemoryUsage):
+ (JSC::SlotVisitor::heap):
+ * jit/Repatch.cpp:
+ * runtime/JSGenericTypedArrayViewInlines.h:
+ (JSC::JSGenericTypedArrayView<Adaptor>::visitChildren):
+ * runtime/JSPropertyNameIterator.h:
+ (JSC::StructureRareData::setEnumerationCache):
+ * runtime/JSString.cpp:
+ (JSC::JSString::visitChildren):
+ * runtime/StructureRareDataInlines.h:
+ (JSC::StructureRareData::setPreviousID):
+ (JSC::StructureRareData::setObjectToStringValue):
+ * runtime/WeakMapData.cpp:
+ (JSC::WeakMapData::visitChildren):
+
2014-01-09 Joseph Pecoraro <pecoraro@apple.com>
Unreviewed Windows build fix for r161563.
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.cpp b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
index a2aaa1d..462c062 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
@@ -1954,15 +1954,15 @@
if (CodeBlock* otherBlock = specialOSREntryBlockOrNull())
otherBlock->visitAggregate(visitor);
- visitor.reportExtraMemoryUsage(sizeof(CodeBlock));
+ visitor.reportExtraMemoryUsage(ownerExecutable(), sizeof(CodeBlock));
if (m_jitCode)
- visitor.reportExtraMemoryUsage(m_jitCode->size());
+ visitor.reportExtraMemoryUsage(ownerExecutable(), m_jitCode->size());
if (m_instructions.size()) {
// Divide by refCount() because m_instructions points to something that is shared
// by multiple CodeBlocks, and we only want to count it towards the heap size once.
// Having each CodeBlock report only its proportional share of the size is one way
// of accomplishing this.
- visitor.reportExtraMemoryUsage(m_instructions.size() * sizeof(Instruction) / m_instructions.refCount());
+ visitor.reportExtraMemoryUsage(ownerExecutable(), m_instructions.size() * sizeof(Instruction) / m_instructions.refCount());
}
visitor.append(&m_unlinkedCode);
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.h b/Source/JavaScriptCore/bytecode/CodeBlock.h
index 2ff11a9..cf62f8e 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.h
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.h
@@ -1269,6 +1269,9 @@
return;
(*iter)->m_mayBeExecuting = true;
+#if ENABLE(GGC)
+ m_currentlyExecuting.append(static_cast<CodeBlock*>(candidateCodeBlock));
+#endif
}
} // namespace JSC
diff --git a/Source/JavaScriptCore/dfg/DFGOperations.cpp b/Source/JavaScriptCore/dfg/DFGOperations.cpp
index eb63aee..b31b6fc 100644
--- a/Source/JavaScriptCore/dfg/DFGOperations.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOperations.cpp
@@ -850,6 +850,7 @@
NativeCallFrameTracer tracer(&vm, exec);
ASSERT(!object->structure()->outOfLineCapacity());
+ DeferGC deferGC(vm.heap);
Butterfly* result = object->growOutOfLineStorage(vm, 0, initialOutOfLineCapacity);
object->setButterflyWithoutChangingStructure(vm, result);
return reinterpret_cast<char*>(result);
@@ -860,6 +861,7 @@
VM& vm = exec->vm();
NativeCallFrameTracer tracer(&vm, exec);
+ DeferGC deferGC(vm.heap);
Butterfly* result = object->growOutOfLineStorage(vm, object->structure()->outOfLineCapacity(), newSize);
object->setButterflyWithoutChangingStructure(vm, result);
return reinterpret_cast<char*>(result);
diff --git a/Source/JavaScriptCore/heap/CodeBlockSet.cpp b/Source/JavaScriptCore/heap/CodeBlockSet.cpp
index ae27480..2fc999b 100644
--- a/Source/JavaScriptCore/heap/CodeBlockSet.cpp
+++ b/Source/JavaScriptCore/heap/CodeBlockSet.cpp
@@ -45,7 +45,8 @@
void CodeBlockSet::add(PassRefPtr<CodeBlock> codeBlock)
{
- bool isNewEntry = m_set.add(codeBlock.leakRef()).isNewEntry;
+ CodeBlock* block = codeBlock.leakRef();
+ bool isNewEntry = m_set.add(block).isNewEntry;
ASSERT_UNUSED(isNewEntry, isNewEntry);
}
@@ -101,9 +102,20 @@
CodeBlock* codeBlock = *iter;
if (!codeBlock->m_mayBeExecuting)
continue;
- codeBlock->visitAggregate(visitor);
+ codeBlock->ownerExecutable()->methodTable()->visitChildren(codeBlock->ownerExecutable(), visitor);
}
}
+void CodeBlockSet::rememberCurrentlyExecutingCodeBlocks(Heap* heap)
+{
+#if ENABLE(GGC)
+ for (size_t i = 0; i < m_currentlyExecuting.size(); ++i)
+ heap->addToRememberedSet(m_currentlyExecuting[i]->ownerExecutable());
+ m_currentlyExecuting.clear();
+#else
+ UNUSED_PARAM(heap);
+#endif // ENABLE(GGC)
+}
+
} // namespace JSC
diff --git a/Source/JavaScriptCore/heap/CodeBlockSet.h b/Source/JavaScriptCore/heap/CodeBlockSet.h
index 2e4e606..bb786f0 100644
--- a/Source/JavaScriptCore/heap/CodeBlockSet.h
+++ b/Source/JavaScriptCore/heap/CodeBlockSet.h
@@ -30,10 +30,12 @@
#include <wtf/Noncopyable.h>
#include <wtf/PassRefPtr.h>
#include <wtf/RefPtr.h>
+#include <wtf/Vector.h>
namespace JSC {
class CodeBlock;
+class Heap;
class SlotVisitor;
// CodeBlockSet tracks all CodeBlocks. Every CodeBlock starts out with one
@@ -65,11 +67,16 @@
// mayBeExecuting.
void traceMarked(SlotVisitor&);
+ // Add all currently executing CodeBlocks to the remembered set to be
+ // re-scanned during the next collection.
+ void rememberCurrentlyExecutingCodeBlocks(Heap*);
+
private:
// This is not a set of RefPtr<CodeBlock> because we need to be able to find
// arbitrary bogus pointers. I could have written a thingy that had peek types
// and all, but that seemed like overkill.
HashSet<CodeBlock* > m_set;
+ Vector<CodeBlock*> m_currentlyExecuting;
};
} // namespace JSC
diff --git a/Source/JavaScriptCore/heap/CopiedBlockInlines.h b/Source/JavaScriptCore/heap/CopiedBlockInlines.h
index 61996ce..150b4c7 100644
--- a/Source/JavaScriptCore/heap/CopiedBlockInlines.h
+++ b/Source/JavaScriptCore/heap/CopiedBlockInlines.h
@@ -42,6 +42,9 @@
#endif
m_liveBytes += bytes;
+ if (isPinned())
+ return;
+
if (!shouldEvacuate()) {
pin();
return;
diff --git a/Source/JavaScriptCore/heap/CopiedSpace.cpp b/Source/JavaScriptCore/heap/CopiedSpace.cpp
index f0e7722..9601634 100644
--- a/Source/JavaScriptCore/heap/CopiedSpace.cpp
+++ b/Source/JavaScriptCore/heap/CopiedSpace.cpp
@@ -316,4 +316,17 @@
|| isBlockListPagedOut(deadline, &m_oversizeBlocks);
}
+void CopiedSpace::didStartFullCollection()
+{
+ ASSERT(heap()->operationInProgress() == FullCollection);
+
+ ASSERT(m_fromSpace->isEmpty());
+
+ for (CopiedBlock* block = m_toSpace->head(); block; block = block->next())
+ block->didSurviveGC();
+
+ for (CopiedBlock* block = m_oversizeBlocks.head(); block; block = block->next())
+ block->didSurviveGC();
+}
+
} // namespace JSC
diff --git a/Source/JavaScriptCore/heap/CopiedSpace.h b/Source/JavaScriptCore/heap/CopiedSpace.h
index 65ca04e..5fca45a 100644
--- a/Source/JavaScriptCore/heap/CopiedSpace.h
+++ b/Source/JavaScriptCore/heap/CopiedSpace.h
@@ -60,6 +60,8 @@
CopiedAllocator& allocator() { return m_allocator; }
+ void didStartFullCollection();
+
void startedCopying();
void doneCopying();
bool isInCopyPhase() { return m_inCopyingPhase; }
@@ -80,6 +82,8 @@
static CopiedBlock* blockFor(void*);
+ Heap* heap() const { return m_heap; }
+
private:
static bool isOversize(size_t);
diff --git a/Source/JavaScriptCore/heap/Heap.cpp b/Source/JavaScriptCore/heap/Heap.cpp
index 307c30c..7cc3860 100644
--- a/Source/JavaScriptCore/heap/Heap.cpp
+++ b/Source/JavaScriptCore/heap/Heap.cpp
@@ -253,9 +253,11 @@
, m_ramSize(ramSize())
, m_minBytesPerCycle(minHeapSize(m_heapType, m_ramSize))
, m_sizeAfterLastCollect(0)
- , m_bytesAllocatedLimit(m_minBytesPerCycle)
- , m_bytesAllocated(0)
- , m_bytesAbandoned(0)
+ , m_bytesAllocatedThisCycle(0)
+ , m_bytesAbandonedThisCycle(0)
+ , m_maxEdenSize(m_minBytesPerCycle)
+ , m_maxHeapSize(m_minBytesPerCycle)
+ , m_shouldDoFullCollection(false)
, m_totalBytesVisited(0)
, m_totalBytesCopied(0)
, m_operationInProgress(NoOperation)
@@ -269,7 +271,7 @@
, m_copyVisitor(m_sharedData)
, m_handleSet(vm)
, m_isSafeToCollect(false)
- , m_writeBarrierBuffer(128)
+ , m_writeBarrierBuffer(256)
, m_vm(vm)
, m_lastGCLength(0)
, m_lastCodeDiscardTime(WTF::monotonicallyIncreasingTime())
@@ -332,8 +334,8 @@
void Heap::didAbandon(size_t bytes)
{
if (m_activityCallback)
- m_activityCallback->didAllocate(m_bytesAllocated + m_bytesAbandoned);
- m_bytesAbandoned += bytes;
+ m_activityCallback->didAllocate(m_bytesAllocatedThisCycle + m_bytesAbandonedThisCycle);
+ m_bytesAbandonedThisCycle += bytes;
}
void Heap::protect(JSValue k)
@@ -487,6 +489,9 @@
visitor.setup();
HeapRootVisitor heapRootVisitor(visitor);
+ Vector<const JSCell*> rememberedSet(m_slotVisitor.markStack().size());
+ m_slotVisitor.markStack().fillVector(rememberedSet);
+
{
ParallelModeEnabler enabler(visitor);
@@ -590,6 +595,14 @@
}
}
+ {
+ GCPHASE(ClearRememberedSet);
+ for (unsigned i = 0; i < rememberedSet.size(); ++i) {
+ const JSCell* cell = rememberedSet[i];
+ MarkedBlock::blockFor(cell)->clearRemembered(cell);
+ }
+ }
+
GCCOUNTER(VisitedValueCount, visitor.visitCount());
m_sharedData.didFinishMarking();
@@ -601,8 +614,14 @@
MARK_LOG_MESSAGE2("\nNumber of live Objects after full GC %lu, took %.6f secs\n", visitCount, WTF::monotonicallyIncreasingTime() - gcStartTime);
#endif
- m_totalBytesVisited = visitor.bytesVisited();
- m_totalBytesCopied = visitor.bytesCopied();
+ if (m_operationInProgress == EdenCollection) {
+ m_totalBytesVisited += visitor.bytesVisited();
+ m_totalBytesCopied += visitor.bytesCopied();
+ } else {
+ ASSERT(m_operationInProgress == FullCollection);
+ m_totalBytesVisited = visitor.bytesVisited();
+ m_totalBytesCopied = visitor.bytesCopied();
+ }
#if ENABLE(PARALLEL_GC)
m_totalBytesVisited += m_sharedData.childBytesVisited();
m_totalBytesCopied += m_sharedData.childBytesCopied();
@@ -615,8 +634,12 @@
m_sharedData.reset();
}
+template <HeapOperation collectionType>
void Heap::copyBackingStores()
{
+ if (collectionType == EdenCollection)
+ return;
+
m_storageSpace.startedCopying();
if (m_storageSpace.shouldDoCopyPhase()) {
m_sharedData.didStartCopying();
@@ -627,7 +650,7 @@
// before signaling that the phase is complete.
m_storageSpace.doneCopying();
m_sharedData.didFinishCopying();
- } else
+ } else
m_storageSpace.doneCopying();
}
@@ -723,11 +746,22 @@
m_jitStubRoutines.deleteUnmarkedJettisonedStubRoutines();
}
+void Heap::addToRememberedSet(const JSCell* cell)
+{
+ ASSERT(cell);
+ ASSERT(!Options::enableConcurrentJIT() || !isCompilationThread());
+ if (isInRememberedSet(cell))
+ return;
+ MarkedBlock::blockFor(cell)->setRemembered(cell);
+ m_slotVisitor.unconditionallyAppend(const_cast<JSCell*>(cell));
+}
+
void Heap::collectAllGarbage()
{
if (!m_isSafeToCollect)
return;
+ m_shouldDoFullCollection = true;
collect();
SamplingRegion samplingRegion("Garbage Collection: Sweeping");
@@ -764,9 +798,28 @@
RecursiveAllocationScope scope(*this);
m_vm->prepareToDiscardCode();
}
-
- m_operationInProgress = Collection;
- m_extraMemoryUsage = 0;
+
+ bool isFullCollection = m_shouldDoFullCollection;
+ if (isFullCollection) {
+ m_operationInProgress = FullCollection;
+ m_slotVisitor.clearMarkStack();
+ m_shouldDoFullCollection = false;
+ if (Options::logGC())
+ dataLog("FullCollection, ");
+ } else {
+#if ENABLE(GGC)
+ m_operationInProgress = EdenCollection;
+ if (Options::logGC())
+ dataLog("EdenCollection, ");
+#else
+ m_operationInProgress = FullCollection;
+ m_slotVisitor.clearMarkStack();
+ if (Options::logGC())
+ dataLog("FullCollection, ");
+#endif
+ }
+ if (m_operationInProgress == FullCollection)
+ m_extraMemoryUsage = 0;
if (m_activityCallback)
m_activityCallback->willCollect();
@@ -780,6 +833,16 @@
{
GCPHASE(StopAllocation);
m_objectSpace.stopAllocating();
+ if (m_operationInProgress == FullCollection)
+ m_storageSpace.didStartFullCollection();
+ }
+
+ {
+ GCPHASE(FlushWriteBarrierBuffer);
+ if (m_operationInProgress == EdenCollection)
+ m_writeBarrierBuffer.flush(*this);
+ else
+ m_writeBarrierBuffer.reset();
}
markRoots();
@@ -796,13 +859,16 @@
m_arrayBuffers.sweep();
}
- {
+ if (m_operationInProgress == FullCollection) {
m_blockSnapshot.resize(m_objectSpace.blocks().set().size());
MarkedBlockSnapshotFunctor functor(m_blockSnapshot);
m_objectSpace.forEachBlock(functor);
}
- copyBackingStores();
+ if (m_operationInProgress == FullCollection)
+ copyBackingStores<FullCollection>();
+ else
+ copyBackingStores<EdenCollection>();
{
GCPHASE(FinalizeUnconditionalFinalizers);
@@ -819,8 +885,15 @@
m_vm->clearSourceProviderCaches();
}
- m_sweeper->startSweeping(m_blockSnapshot);
- m_bytesAbandoned = 0;
+ if (m_operationInProgress == FullCollection)
+ m_sweeper->startSweeping(m_blockSnapshot);
+
+ {
+ GCPHASE(AddCurrentlyExecutingCodeBlocksToRememberedSet);
+ m_codeBlocks.rememberCurrentlyExecutingCodeBlocks(this);
+ }
+
+ m_bytesAbandonedThisCycle = 0;
{
GCPHASE(ResetAllocators);
@@ -831,21 +904,32 @@
if (Options::gcMaxHeapSize() && currentHeapSize > Options::gcMaxHeapSize())
HeapStatistics::exitWithFailure();
+ if (m_operationInProgress == FullCollection) {
+ // To avoid pathological GC churn in very small and very large heaps, we set
+ // the new allocation limit based on the current size of the heap, with a
+ // fixed minimum.
+ m_maxHeapSize = max(minHeapSize(m_heapType, m_ramSize), proportionalHeapSize(currentHeapSize, m_ramSize));
+ m_maxEdenSize = m_maxHeapSize - currentHeapSize;
+ } else {
+ ASSERT(currentHeapSize >= m_sizeAfterLastCollect);
+ m_maxEdenSize = m_maxHeapSize - currentHeapSize;
+ double edenToOldGenerationRatio = (double)m_maxEdenSize / (double)m_maxHeapSize;
+ double minEdenToOldGenerationRatio = 1.0 / 3.0;
+ if (edenToOldGenerationRatio < minEdenToOldGenerationRatio)
+ m_shouldDoFullCollection = true;
+ m_maxHeapSize += currentHeapSize - m_sizeAfterLastCollect;
+ m_maxEdenSize = m_maxHeapSize - currentHeapSize;
+ }
+
m_sizeAfterLastCollect = currentHeapSize;
- // To avoid pathological GC churn in very small and very large heaps, we set
- // the new allocation limit based on the current size of the heap, with a
- // fixed minimum.
- size_t maxHeapSize = max(minHeapSize(m_heapType, m_ramSize), proportionalHeapSize(currentHeapSize, m_ramSize));
- m_bytesAllocatedLimit = maxHeapSize - currentHeapSize;
-
- m_bytesAllocated = 0;
+ m_bytesAllocatedThisCycle = 0;
double lastGCEndTime = WTF::monotonicallyIncreasingTime();
m_lastGCLength = lastGCEndTime - lastGCStartTime;
if (Options::recordGCPauseTimes())
HeapStatistics::recordGCPauseTime(lastGCStartTime, lastGCEndTime);
- RELEASE_ASSERT(m_operationInProgress == Collection);
+ RELEASE_ASSERT(m_operationInProgress == EdenCollection || m_operationInProgress == FullCollection);
m_operationInProgress = NoOperation;
JAVASCRIPTCORE_GC_END();
@@ -863,10 +947,6 @@
double after = currentTimeMS();
dataLog(after - before, " ms, ", currentHeapSize / 1024, " kb]\n");
}
-
-#if ENABLE(ALLOCATION_LOGGING)
- dataLogF("JSC GC finishing collection.\n");
-#endif
}
bool Heap::collectIfNecessaryOrDefer()
@@ -916,8 +996,8 @@
void Heap::didAllocate(size_t bytes)
{
if (m_activityCallback)
- m_activityCallback->didAllocate(m_bytesAllocated + m_bytesAbandoned);
- m_bytesAllocated += bytes;
+ m_activityCallback->didAllocate(m_bytesAllocatedThisCycle + m_bytesAbandonedThisCycle);
+ m_bytesAllocatedThisCycle += bytes;
}
bool Heap::isValidAllocation(size_t)
@@ -994,6 +1074,15 @@
collectIfNecessaryOrDefer();
}
+void Heap::writeBarrier(const JSCell* from)
+{
+ ASSERT_GC_OBJECT_LOOKS_VALID(const_cast<JSCell*>(from));
+ if (!from || !isMarked(from))
+ return;
+ Heap* heap = Heap::heap(from);
+ heap->addToRememberedSet(from);
+}
+
void Heap::flushWriteBarrierBuffer(JSCell* cell)
{
#if ENABLE(GGC)
diff --git a/Source/JavaScriptCore/heap/Heap.h b/Source/JavaScriptCore/heap/Heap.h
index ba4e801..ab580aa 100644
--- a/Source/JavaScriptCore/heap/Heap.h
+++ b/Source/JavaScriptCore/heap/Heap.h
@@ -94,11 +94,17 @@
static bool testAndSetMarked(const void*);
static void setMarked(const void*);
+ JS_EXPORT_PRIVATE void addToRememberedSet(const JSCell*);
+ bool isInRememberedSet(const JSCell* cell) const
+ {
+ ASSERT(cell);
+ ASSERT(!Options::enableConcurrentJIT() || !isCompilationThread());
+ return MarkedBlock::blockFor(cell)->isRemembered(cell);
+ }
static bool isWriteBarrierEnabled();
- static void writeBarrier(const JSCell*);
+ JS_EXPORT_PRIVATE static void writeBarrier(const JSCell*);
static void writeBarrier(const JSCell*, JSValue);
static void writeBarrier(const JSCell*, JSCell*);
- static uint8_t* addressOfCardFor(JSCell*);
WriteBarrierBuffer& writeBarrierBuffer() { return m_writeBarrierBuffer; }
void flushWriteBarrierBuffer(JSCell*);
@@ -120,6 +126,7 @@
// true if collection is in progress
inline bool isCollecting();
+ inline HeapOperation operationInProgress() { return m_operationInProgress; }
// true if an allocation or collection is in progress
inline bool isBusy();
@@ -236,6 +243,7 @@
void markRoots();
void markProtectedObjects(HeapRootVisitor&);
void markTempSortVectors(HeapRootVisitor&);
+ template <HeapOperation collectionType>
void copyBackingStores();
void harvestWeakReferences();
void finalizeUnconditionalFinalizers();
@@ -257,10 +265,11 @@
const size_t m_minBytesPerCycle;
size_t m_sizeAfterLastCollect;
- size_t m_bytesAllocatedLimit;
- size_t m_bytesAllocated;
- size_t m_bytesAbandoned;
-
+ size_t m_bytesAllocatedThisCycle;
+ size_t m_bytesAbandonedThisCycle;
+ size_t m_maxEdenSize;
+ size_t m_maxHeapSize;
+ bool m_shouldDoFullCollection;
size_t m_totalBytesVisited;
size_t m_totalBytesCopied;
@@ -271,6 +280,8 @@
GCIncomingRefCountedSet<ArrayBuffer> m_arrayBuffers;
size_t m_extraMemoryUsage;
+ HashSet<const JSCell*> m_copyingRememberedSet;
+
ProtectCountSet m_protectedValues;
Vector<Vector<ValueStringPair, 0, UnsafeVectorOverflow>* > m_tempSortingVectors;
OwnPtr<HashSet<MarkedArgumentBuffer*>> m_markListSet;
@@ -322,8 +333,8 @@
if (isDeferred())
return false;
if (Options::gcMaxHeapSize())
- return m_bytesAllocated > Options::gcMaxHeapSize() && m_isSafeToCollect && m_operationInProgress == NoOperation;
- return m_bytesAllocated > m_bytesAllocatedLimit && m_isSafeToCollect && m_operationInProgress == NoOperation;
+ return m_bytesAllocatedThisCycle > Options::gcMaxHeapSize() && m_isSafeToCollect && m_operationInProgress == NoOperation;
+ return m_bytesAllocatedThisCycle > m_maxEdenSize && m_isSafeToCollect && m_operationInProgress == NoOperation;
}
bool Heap::isBusy()
@@ -333,7 +344,7 @@
bool Heap::isCollecting()
{
- return m_operationInProgress == Collection;
+ return m_operationInProgress == FullCollection || m_operationInProgress == EdenCollection;
}
inline Heap* Heap::heap(const JSCell* cell)
@@ -370,26 +381,33 @@
inline bool Heap::isWriteBarrierEnabled()
{
-#if ENABLE(WRITE_BARRIER_PROFILING)
+#if ENABLE(WRITE_BARRIER_PROFILING) || ENABLE(GGC)
return true;
#else
return false;
#endif
}
- inline void Heap::writeBarrier(const JSCell*)
+ inline void Heap::writeBarrier(const JSCell* from, JSCell* to)
{
+#if ENABLE(WRITE_BARRIER_PROFILING)
WriteBarrierCounters::countWriteBarrier();
+#endif
+ if (!from || !isMarked(from))
+ return;
+ if (!to || isMarked(to))
+ return;
+ Heap::heap(from)->addToRememberedSet(from);
}
- inline void Heap::writeBarrier(const JSCell*, JSCell*)
+ inline void Heap::writeBarrier(const JSCell* from, JSValue to)
{
+#if ENABLE(WRITE_BARRIER_PROFILING)
WriteBarrierCounters::countWriteBarrier();
- }
-
- inline void Heap::writeBarrier(const JSCell*, JSValue)
- {
- WriteBarrierCounters::countWriteBarrier();
+#endif
+ if (!to.isCell())
+ return;
+ writeBarrier(from, to.asCell());
}
inline void Heap::reportExtraMemoryCost(size_t cost)
diff --git a/Source/JavaScriptCore/heap/HeapOperation.h b/Source/JavaScriptCore/heap/HeapOperation.h
index 8f0a023..769127e 100644
--- a/Source/JavaScriptCore/heap/HeapOperation.h
+++ b/Source/JavaScriptCore/heap/HeapOperation.h
@@ -28,7 +28,7 @@
namespace JSC {
-enum HeapOperation { NoOperation, Allocation, Collection };
+enum HeapOperation { NoOperation, Allocation, FullCollection, EdenCollection };
} // namespace JSC
diff --git a/Source/JavaScriptCore/heap/MarkStack.cpp b/Source/JavaScriptCore/heap/MarkStack.cpp
index 39907c7..688de42 100644
--- a/Source/JavaScriptCore/heap/MarkStack.cpp
+++ b/Source/JavaScriptCore/heap/MarkStack.cpp
@@ -57,8 +57,29 @@
MarkStackArray::~MarkStackArray()
{
- ASSERT(m_numberOfSegments == 1 && m_segments.size() == 1);
+ ASSERT(m_numberOfSegments == 1);
+ ASSERT(m_segments.size() == 1);
m_blockAllocator.deallocate(MarkStackSegment::destroy(m_segments.removeHead()));
+ m_numberOfSegments--;
+ ASSERT(!m_numberOfSegments);
+ ASSERT(!m_segments.size());
+}
+
+void MarkStackArray::clear()
+{
+ if (!m_segments.head())
+ return;
+ MarkStackSegment* next;
+ for (MarkStackSegment* current = m_segments.head(); current->next(); current = next) {
+ next = current->next();
+ m_segments.remove(current);
+ m_blockAllocator.deallocate(MarkStackSegment::destroy(current));
+ }
+ m_top = 0;
+ m_numberOfSegments = 1;
+#if !ASSERT_DISABLED
+ m_segments.head()->m_top = 0;
+#endif
}
void MarkStackArray::expand()
@@ -167,4 +188,28 @@
append(other.removeLast());
}
+void MarkStackArray::fillVector(Vector<const JSCell*>& vector)
+{
+ ASSERT(vector.size() == size());
+
+ MarkStackSegment* currentSegment = m_segments.head();
+ if (!currentSegment)
+ return;
+
+ unsigned count = 0;
+ for (unsigned i = 0; i < m_top; ++i) {
+ ASSERT(currentSegment->data()[i]);
+ vector[count++] = currentSegment->data()[i];
+ }
+
+ currentSegment = currentSegment->next();
+ while (currentSegment) {
+ for (unsigned i = 0; i < s_segmentCapacity; ++i) {
+ ASSERT(currentSegment->data()[i]);
+ vector[count++] = currentSegment->data()[i];
+ }
+ currentSegment = currentSegment->next();
+ }
+}
+
} // namespace JSC
diff --git a/Source/JavaScriptCore/heap/MarkStack.h b/Source/JavaScriptCore/heap/MarkStack.h
index c97b6a7..6729bad 100644
--- a/Source/JavaScriptCore/heap/MarkStack.h
+++ b/Source/JavaScriptCore/heap/MarkStack.h
@@ -52,6 +52,7 @@
#include "HeapBlock.h"
#include <wtf/StdLibExtras.h>
+#include <wtf/Vector.h>
namespace JSC {
@@ -100,6 +101,9 @@
size_t size();
bool isEmpty();
+ void fillVector(Vector<const JSCell*>&);
+ void clear();
+
private:
template <size_t size> struct CapacityFromSize {
static const size_t value = (size - sizeof(MarkStackSegment)) / sizeof(const JSCell*);
diff --git a/Source/JavaScriptCore/heap/MarkedAllocator.cpp b/Source/JavaScriptCore/heap/MarkedAllocator.cpp
index 7440208..c2b0f72 100644
--- a/Source/JavaScriptCore/heap/MarkedAllocator.cpp
+++ b/Source/JavaScriptCore/heap/MarkedAllocator.cpp
@@ -10,10 +10,10 @@
namespace JSC {
-bool MarkedAllocator::isPagedOut(double deadline)
+static bool isListPagedOut(double deadline, DoublyLinkedList<MarkedBlock>& list)
{
unsigned itersSinceLastTimeCheck = 0;
- MarkedBlock* block = m_blockList.head();
+ MarkedBlock* block = list.head();
while (block) {
block = block->next();
++itersSinceLastTimeCheck;
@@ -24,7 +24,13 @@
itersSinceLastTimeCheck = 0;
}
}
+ return false;
+}
+bool MarkedAllocator::isPagedOut(double deadline)
+{
+ if (isListPagedOut(deadline, m_blockList))
+ return true;
return false;
}
@@ -36,15 +42,23 @@
while (!m_freeList.head) {
DelayedReleaseScope delayedReleaseScope(*m_markedSpace);
if (m_currentBlock) {
- ASSERT(m_currentBlock == m_blocksToSweep);
+ ASSERT(m_currentBlock == m_nextBlockToSweep);
m_currentBlock->didConsumeFreeList();
- m_blocksToSweep = m_currentBlock->next();
+ m_nextBlockToSweep = m_currentBlock->next();
}
- for (MarkedBlock*& block = m_blocksToSweep; block; block = block->next()) {
+ MarkedBlock* next;
+ for (MarkedBlock*& block = m_nextBlockToSweep; block; block = next) {
+ next = block->next();
+
MarkedBlock::FreeList freeList = block->sweep(MarkedBlock::SweepToFreeList);
+
if (!freeList.head) {
block->didConsumeEmptyFreeList();
+ m_blockList.remove(block);
+ m_blockList.push(block);
+ if (!m_lastFullBlock)
+ m_lastFullBlock = block;
continue;
}
@@ -68,6 +82,7 @@
MarkedBlock::FreeCell* head = m_freeList.head;
m_freeList.head = head->next;
ASSERT(head);
+ m_markedSpace->didAllocateInBlock(m_currentBlock);
return head;
}
@@ -136,7 +151,7 @@
ASSERT(!m_freeList.head);
m_blockList.append(block);
- m_blocksToSweep = m_currentBlock = block;
+ m_nextBlockToSweep = m_currentBlock = block;
m_freeList = block->sweep(MarkedBlock::SweepToFreeList);
m_markedSpace->didAddBlock(block);
}
@@ -147,9 +162,27 @@
m_currentBlock = m_currentBlock->next();
m_freeList = MarkedBlock::FreeList();
}
- if (m_blocksToSweep == block)
- m_blocksToSweep = m_blocksToSweep->next();
+ if (m_nextBlockToSweep == block)
+ m_nextBlockToSweep = m_nextBlockToSweep->next();
+
+ if (block == m_lastFullBlock)
+ m_lastFullBlock = m_lastFullBlock->prev();
+
m_blockList.remove(block);
}
+void MarkedAllocator::reset()
+{
+ m_lastActiveBlock = 0;
+ m_currentBlock = 0;
+ m_freeList = MarkedBlock::FreeList();
+ if (m_heap->operationInProgress() == FullCollection)
+ m_lastFullBlock = 0;
+
+ if (m_lastFullBlock)
+ m_nextBlockToSweep = m_lastFullBlock->next() ? m_lastFullBlock->next() : m_lastFullBlock;
+ else
+ m_nextBlockToSweep = m_blockList.head();
+}
+
} // namespace JSC
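The MarkedAllocator changes above keep fully-marked blocks at the head of `m_blockList` (the `remove`/`push` pair in `tryAllocateHelper`) and remember the boundary in `m_lastFullBlock`, so that after an eden collection `reset()` can resume sweeping just past the full prefix instead of rescanning every block. A minimal standalone sketch of that partitioning policy, using hypothetical names and `std::list` in place of WTF's `DoublyLinkedList`:

```cpp
#include <cassert>
#include <cstddef>
#include <iterator>
#include <list>

struct Block { bool full = false; };

// Sketch of the m_lastFullBlock bookkeeping; not the real MarkedAllocator.
struct Allocator {
    std::list<Block*> blockList;            // stands in for m_blockList
    std::list<Block*>::iterator lastFull;   // stands in for m_lastFullBlock
    bool haveLastFull = false;

    // Mirrors the tryAllocateHelper change: a block whose sweep produced an
    // empty free list moves to the head, and the first such block becomes the
    // end of the full prefix. (Assumes b is not already in the prefix.)
    void noteFull(Block* b)
    {
        blockList.remove(b);
        blockList.push_front(b);
        if (!haveLastFull) {
            lastFull = blockList.begin();
            haveLastFull = true;
        }
    }

    // Mirrors MarkedAllocator::reset(): an eden collection keeps the full
    // prefix and sweeps from just past it; a full collection forgets the
    // prefix and rescans from the head.
    std::list<Block*>::iterator resetSweepCursor(bool fullCollection)
    {
        if (fullCollection)
            haveLastFull = false;
        if (!haveLastFull)
            return blockList.begin();
        auto next = std::next(lastFull);
        return next == blockList.end() ? lastFull : next;
    }
};
```

The design choice here is that eden collections never re-sweep blocks already known to be full of live objects, which is exactly the rescanning overhead the patch description calls out.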
diff --git a/Source/JavaScriptCore/heap/MarkedAllocator.h b/Source/JavaScriptCore/heap/MarkedAllocator.h
index 3a629c3..e0d3e89 100644
--- a/Source/JavaScriptCore/heap/MarkedAllocator.h
+++ b/Source/JavaScriptCore/heap/MarkedAllocator.h
@@ -52,7 +52,8 @@
MarkedBlock::FreeList m_freeList;
MarkedBlock* m_currentBlock;
MarkedBlock* m_lastActiveBlock;
- MarkedBlock* m_blocksToSweep;
+ MarkedBlock* m_nextBlockToSweep;
+ MarkedBlock* m_lastFullBlock;
DoublyLinkedList<MarkedBlock> m_blockList;
size_t m_cellSize;
MarkedBlock::DestructorType m_destructorType;
@@ -68,7 +69,8 @@
inline MarkedAllocator::MarkedAllocator()
: m_currentBlock(0)
, m_lastActiveBlock(0)
- , m_blocksToSweep(0)
+ , m_nextBlockToSweep(0)
+ , m_lastFullBlock(0)
, m_cellSize(0)
, m_destructorType(MarkedBlock::None)
, m_heap(0)
@@ -102,14 +104,6 @@
return head;
}
-inline void MarkedAllocator::reset()
-{
- m_lastActiveBlock = 0;
- m_currentBlock = 0;
- m_freeList = MarkedBlock::FreeList();
- m_blocksToSweep = m_blockList.head();
-}
-
inline void MarkedAllocator::stopAllocating()
{
ASSERT(!m_lastActiveBlock);
diff --git a/Source/JavaScriptCore/heap/MarkedBlock.cpp b/Source/JavaScriptCore/heap/MarkedBlock.cpp
index 1085804..34a0931 100644
--- a/Source/JavaScriptCore/heap/MarkedBlock.cpp
+++ b/Source/JavaScriptCore/heap/MarkedBlock.cpp
@@ -197,6 +197,45 @@
m_state = Marked;
}
+void MarkedBlock::clearMarks()
+{
+ if (heap()->operationInProgress() == JSC::EdenCollection)
+ this->clearMarksWithCollectionType<EdenCollection>();
+ else
+ this->clearMarksWithCollectionType<FullCollection>();
+}
+
+void MarkedBlock::clearRememberedSet()
+{
+ m_rememberedSet.clearAll();
+}
+
+template <HeapOperation collectionType>
+void MarkedBlock::clearMarksWithCollectionType()
+{
+ ASSERT(collectionType == FullCollection || collectionType == EdenCollection);
+ HEAP_LOG_BLOCK_STATE_TRANSITION(this);
+
+ ASSERT(m_state != New && m_state != FreeListed);
+ if (collectionType == FullCollection) {
+ m_marks.clearAll();
+ m_rememberedSet.clearAll();
+ }
+
+ // This will become true at the end of the mark phase. We set it now to
+ // avoid an extra pass to do so later.
+ m_state = Marked;
+}
+
+void MarkedBlock::lastChanceToFinalize()
+{
+ m_weakSet.lastChanceToFinalize();
+
+ clearNewlyAllocated();
+ clearMarksWithCollectionType<FullCollection>();
+ sweep();
+}
+
MarkedBlock::FreeList MarkedBlock::resumeAllocating()
{
HEAP_LOG_BLOCK_STATE_TRANSITION(this);
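The `clearMarksWithCollectionType` template above is the heart of the sticky mark bit algorithm: a full collection wipes both the mark bits and the remembered set, while an eden collection leaves old marks "sticky" so objects that survived a previous cycle are never revisited. A simplified sketch of that policy, with `std::bitset` standing in for the per-block WTF bitmaps:

```cpp
#include <bitset>
#include <cassert>

enum HeapOperation { EdenCollection, FullCollection };

// Sketch of MarkedBlock::clearMarksWithCollectionType; names and sizes are
// simplified, not the real per-atom bitmaps.
struct BlockBits {
    std::bitset<64> marks;          // stands in for m_marks
    std::bitset<64> rememberedSet;  // stands in for m_rememberedSet

    void clearMarks(HeapOperation op)
    {
        if (op == FullCollection) {
            marks.reset();
            rememberedSet.reset();
        }
        // EdenCollection: both bitmaps survive. Old marks keep already-visited
        // objects out of the mark phase; the remembered set records old
        // objects that a write barrier saw pointing at new ones.
    }
};
```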
diff --git a/Source/JavaScriptCore/heap/MarkedBlock.h b/Source/JavaScriptCore/heap/MarkedBlock.h
index 2f1bfbd..73f56cd 100644
--- a/Source/JavaScriptCore/heap/MarkedBlock.h
+++ b/Source/JavaScriptCore/heap/MarkedBlock.h
@@ -25,6 +25,7 @@
#include "BlockAllocator.h"
#include "HeapBlock.h"
+#include "HeapOperation.h"
#include "WeakSet.h"
#include <wtf/Bitmap.h>
#include <wtf/DataLog.h>
@@ -72,7 +73,7 @@
friend class LLIntOffsetsExtractor;
public:
- static const size_t atomSize = 8; // bytes
+ static const size_t atomSize = 16; // bytes
- static const size_t atomShiftAmount = 4; // log_2(atomSize) FIXME: Change atomSize to 16.
+ static const size_t atomShiftAmount = 4; // log_2(atomSize)
static const size_t blockSize = 64 * KB;
static const size_t blockMask = ~(blockSize - 1); // blockSize must be a power of two.
@@ -140,11 +141,16 @@
void stopAllocating(const FreeList&);
FreeList resumeAllocating(); // Call this if you canonicalized a block for some non-collection related purpose.
void didConsumeEmptyFreeList(); // Call this if you sweep a block, but the returned FreeList is empty.
+ void didSweepToNoAvail(); // Call this if you sweep a block and get an empty free list back.
// Returns true if the "newly allocated" bitmap was non-null
// and was successfully cleared and false otherwise.
bool clearNewlyAllocated();
void clearMarks();
+ void clearRememberedSet();
+ template <HeapOperation collectionType>
+ void clearMarksWithCollectionType();
+
size_t markCount();
bool isEmpty();
@@ -161,6 +167,11 @@
void setMarked(const void*);
void clearMarked(const void*);
+ void setRemembered(const void*);
+ void clearRemembered(const void*);
+ void atomicClearRemembered(const void*);
+ bool isRemembered(const void*);
+
bool isNewlyAllocated(const void*);
void setNewlyAllocated(const void*);
void clearNewlyAllocated(const void*);
@@ -190,9 +201,11 @@
size_t m_atomsPerCell;
size_t m_endAtom; // This is a fuzzy end. Always test for < m_endAtom.
#if ENABLE(PARALLEL_GC)
- WTF::Bitmap<atomsPerBlock, WTF::BitmapAtomic> m_marks;
+ WTF::Bitmap<atomsPerBlock, WTF::BitmapAtomic, uint8_t> m_marks;
+ WTF::Bitmap<atomsPerBlock, WTF::BitmapAtomic, uint8_t> m_rememberedSet;
#else
- WTF::Bitmap<atomsPerBlock, WTF::BitmapNotAtomic> m_marks;
+ WTF::Bitmap<atomsPerBlock, WTF::BitmapNotAtomic, uint8_t> m_marks;
+ WTF::Bitmap<atomsPerBlock, WTF::BitmapNotAtomic, uint8_t> m_rememberedSet;
#endif
OwnPtr<WTF::Bitmap<atomsPerBlock>> m_newlyAllocated;
@@ -234,15 +247,6 @@
return reinterpret_cast<MarkedBlock*>(reinterpret_cast<Bits>(p) & blockMask);
}
- inline void MarkedBlock::lastChanceToFinalize()
- {
- m_weakSet.lastChanceToFinalize();
-
- clearNewlyAllocated();
- clearMarks();
- sweep();
- }
-
inline MarkedAllocator* MarkedBlock::allocator() const
{
return m_allocator;
@@ -291,26 +295,10 @@
HEAP_LOG_BLOCK_STATE_TRANSITION(this);
ASSERT(!m_newlyAllocated);
-#ifndef NDEBUG
- for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell)
- ASSERT(m_marks.get(i));
-#endif
ASSERT(m_state == FreeListed);
m_state = Marked;
}
- inline void MarkedBlock::clearMarks()
- {
- HEAP_LOG_BLOCK_STATE_TRANSITION(this);
-
- ASSERT(m_state != New && m_state != FreeListed);
- m_marks.clearAll();
-
- // This will become true at the end of the mark phase. We set it now to
- // avoid an extra pass to do so later.
- m_state = Marked;
- }
-
inline size_t MarkedBlock::markCount()
{
return m_marks.count();
@@ -346,6 +334,26 @@
return (reinterpret_cast<Bits>(p) - reinterpret_cast<Bits>(this)) / atomSize;
}
+ inline void MarkedBlock::setRemembered(const void* p)
+ {
+ m_rememberedSet.set(atomNumber(p));
+ }
+
+ inline void MarkedBlock::clearRemembered(const void* p)
+ {
+ m_rememberedSet.clear(atomNumber(p));
+ }
+
+ inline void MarkedBlock::atomicClearRemembered(const void* p)
+ {
+ m_rememberedSet.concurrentTestAndClear(atomNumber(p));
+ }
+
+ inline bool MarkedBlock::isRemembered(const void* p)
+ {
+ return m_rememberedSet.get(atomNumber(p));
+ }
+
inline bool MarkedBlock::isMarked(const void* p)
{
return m_marks.get(atomNumber(p));
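The new remembered-set accessors above index the same per-atom bitmap scheme as the mark bits: a cell pointer is masked down to its block, and the byte offset within the block is divided by the (now 16-byte) atom size. A sketch of that addressing arithmetic, using the constants from this header (function names are ours):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Values mirror MarkedBlock after this patch: 16-byte atoms, 64KB blocks.
static const size_t atomSize = 16;
static const size_t blockSize = 64 * 1024;
static const uintptr_t blockMask = ~(uintptr_t)(blockSize - 1); // blockSize must be a power of two

// Mask a cell pointer down to its containing block.
uintptr_t blockFor(uintptr_t p) { return p & blockMask; }

// Index into m_marks / m_rememberedSet for the cell at p.
size_t atomNumber(uintptr_t p) { return (p - blockFor(p)) / atomSize; }
```

Since both bitmaps are indexed by the same atom number, `setRemembered`, `isRemembered`, and friends cost the same as the existing mark-bit operations.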
diff --git a/Source/JavaScriptCore/heap/MarkedSpace.cpp b/Source/JavaScriptCore/heap/MarkedSpace.cpp
index 48648d2..4deca13 100644
--- a/Source/JavaScriptCore/heap/MarkedSpace.cpp
+++ b/Source/JavaScriptCore/heap/MarkedSpace.cpp
@@ -105,6 +105,7 @@
{
Free free(Free::FreeAll, this);
forEachBlock(free);
+ ASSERT(!m_blocks.set().size());
}
struct LastChanceToFinalize : MarkedBlock::VoidFunctor {
@@ -143,17 +144,27 @@
m_normalSpace.largeAllocator.reset();
m_normalDestructorSpace.largeAllocator.reset();
m_immortalStructureDestructorSpace.largeAllocator.reset();
+
+ m_blocksWithNewObjects.clear();
}
void MarkedSpace::visitWeakSets(HeapRootVisitor& heapRootVisitor)
{
VisitWeakSet visitWeakSet(heapRootVisitor);
- forEachBlock(visitWeakSet);
+ if (m_heap->operationInProgress() == EdenCollection) {
+ for (unsigned i = 0; i < m_blocksWithNewObjects.size(); ++i)
+ visitWeakSet(m_blocksWithNewObjects[i]);
+ } else
+ forEachBlock(visitWeakSet);
}
void MarkedSpace::reapWeakSets()
{
- forEachBlock<ReapWeakSet>();
+ if (m_heap->operationInProgress() == EdenCollection) {
+ for (unsigned i = 0; i < m_blocksWithNewObjects.size(); ++i)
+ m_blocksWithNewObjects[i]->reapWeakSet();
+ } else
+ forEachBlock<ReapWeakSet>();
}
template <typename Functor>
@@ -305,6 +316,24 @@
#endif
}
+#ifndef NDEBUG
+struct VerifyMarked : MarkedBlock::VoidFunctor {
+ void operator()(MarkedBlock* block) { ASSERT(block->needsSweeping()); }
+};
+#endif
+
+void MarkedSpace::clearMarks()
+{
+ if (m_heap->operationInProgress() == EdenCollection) {
+ for (unsigned i = 0; i < m_blocksWithNewObjects.size(); ++i)
+ m_blocksWithNewObjects[i]->clearMarks();
+ } else
+ forEachBlock<ClearMarks>();
+#ifndef NDEBUG
+ forEachBlock<VerifyMarked>();
+#endif
+}
+
void MarkedSpace::willStartIterating()
{
ASSERT(!isIterating());
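The MarkedSpace changes above all follow the same dispatch pattern: `visitWeakSets`, `reapWeakSets`, and `clearMarks` walk only `m_blocksWithNewObjects` during an eden collection, and fall back to `forEachBlock` for a full collection. A sketch of the pattern under simplified, hypothetical types:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

enum HeapOperation { EdenCollection, FullCollection };

// Sketch of the eden/full dispatch in MarkedSpace; ints stand in for blocks.
struct Space {
    std::vector<int> allBlocks;              // stands in for m_blocks
    std::vector<int*> blocksWithNewObjects;  // stands in for m_blocksWithNewObjects

    // Returns how many blocks were visited, so the savings are observable.
    size_t forEachRelevantBlock(HeapOperation op, const std::function<void(int&)>& f)
    {
        size_t visited = 0;
        if (op == EdenCollection) {
            for (int* b : blocksWithNewObjects) { f(*b); ++visited; }
        } else {
            for (int& b : allBlocks) { f(b); ++visited; }
        }
        return visited;
    }
};
```

`didAllocateInBlock` (called from `tryAllocateHelper`) is what populates the new-objects vector, and `resetAllocators` clears it once a collection has processed it.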
diff --git a/Source/JavaScriptCore/heap/MarkedSpace.h b/Source/JavaScriptCore/heap/MarkedSpace.h
index 9680670..9c97fbd 100644
--- a/Source/JavaScriptCore/heap/MarkedSpace.h
+++ b/Source/JavaScriptCore/heap/MarkedSpace.h
@@ -46,7 +46,17 @@
class SlotVisitor;
struct ClearMarks : MarkedBlock::VoidFunctor {
- void operator()(MarkedBlock* block) { block->clearMarks(); }
+ void operator()(MarkedBlock* block)
+ {
+ block->clearMarks();
+ }
+};
+
+struct ClearRememberedSet : MarkedBlock::VoidFunctor {
+ void operator()(MarkedBlock* block)
+ {
+ block->clearRememberedSet();
+ }
};
struct Sweep : MarkedBlock::VoidFunctor {
@@ -105,8 +115,10 @@
void didAddBlock(MarkedBlock*);
void didConsumeFreeList(MarkedBlock*);
+ void didAllocateInBlock(MarkedBlock*);
void clearMarks();
+ void clearRememberedSet();
void clearNewlyAllocated();
void sweep();
size_t objectCount();
@@ -150,6 +162,7 @@
size_t m_capacity;
bool m_isIterating;
MarkedBlockSet m_blocks;
+ Vector<MarkedBlock*> m_blocksWithNewObjects;
DelayedReleaseScope* m_currentDelayedReleaseScope;
};
@@ -262,9 +275,14 @@
m_blocks.add(block);
}
-inline void MarkedSpace::clearMarks()
+inline void MarkedSpace::didAllocateInBlock(MarkedBlock* block)
{
- forEachBlock<ClearMarks>();
+ m_blocksWithNewObjects.append(block);
+}
+
+inline void MarkedSpace::clearRememberedSet()
+{
+ forEachBlock<ClearRememberedSet>();
}
inline size_t MarkedSpace::objectCount()
diff --git a/Source/JavaScriptCore/heap/SlotVisitor.cpp b/Source/JavaScriptCore/heap/SlotVisitor.cpp
index cda2b79a..05fb001 100644
--- a/Source/JavaScriptCore/heap/SlotVisitor.cpp
+++ b/Source/JavaScriptCore/heap/SlotVisitor.cpp
@@ -33,7 +33,7 @@
SlotVisitor::~SlotVisitor()
{
- ASSERT(m_stack.isEmpty());
+ clearMarkStack();
}
void SlotVisitor::setup()
@@ -63,6 +63,11 @@
}
}
+void SlotVisitor::clearMarkStack()
+{
+ m_stack.clear();
+}
+
void SlotVisitor::append(ConservativeRoots& conservativeRoots)
{
StackStats::probe();
diff --git a/Source/JavaScriptCore/heap/SlotVisitor.h b/Source/JavaScriptCore/heap/SlotVisitor.h
index a4aacdc..4a8dc3e 100644
--- a/Source/JavaScriptCore/heap/SlotVisitor.h
+++ b/Source/JavaScriptCore/heap/SlotVisitor.h
@@ -49,6 +49,10 @@
SlotVisitor(GCThreadSharedData&);
~SlotVisitor();
+ MarkStackArray& markStack() { return m_stack; }
+
+ Heap* heap() const;
+
void append(ConservativeRoots&);
template<typename T> void append(JITWriteBarrier<T>*);
@@ -61,17 +65,19 @@
void appendUnbarrieredValue(JSValue*);
template<typename T>
void appendUnbarrieredWeak(Weak<T>*);
+ void unconditionallyAppend(JSCell*);
void addOpaqueRoot(void*);
bool containsOpaqueRoot(void*);
TriState containsOpaqueRootTriState(void*);
int opaqueRootCount();
- GCThreadSharedData& sharedData() { return m_shared; }
+ GCThreadSharedData& sharedData() const { return m_shared; }
bool isEmpty() { return m_stack.isEmpty(); }
void setup();
void reset();
+ void clearMarkStack();
size_t bytesVisited() const { return m_bytesVisited; }
size_t bytesCopied() const { return m_bytesCopied; }
@@ -89,7 +95,7 @@
void copyLater(JSCell*, CopyToken, void*, size_t);
- void reportExtraMemoryUsage(size_t size);
+ void reportExtraMemoryUsage(JSCell* owner, size_t);
void addWeakReferenceHarvester(WeakReferenceHarvester*);
void addUnconditionalFinalizer(UnconditionalFinalizer*);
diff --git a/Source/JavaScriptCore/heap/SlotVisitorInlines.h b/Source/JavaScriptCore/heap/SlotVisitorInlines.h
index d503d1c..cd63ab5 100644
--- a/Source/JavaScriptCore/heap/SlotVisitorInlines.h
+++ b/Source/JavaScriptCore/heap/SlotVisitorInlines.h
@@ -105,6 +105,14 @@
MARK_LOG_CHILD(*this, cell);
+ unconditionallyAppend(cell);
+}
+
+ALWAYS_INLINE void SlotVisitor::unconditionallyAppend(JSCell* cell)
+{
+ ASSERT(Heap::isMarked(cell));
+ m_visitCount++;
+
// Should never attempt to mark something that is zapped.
ASSERT(!cell->isZapped());
@@ -218,6 +226,9 @@
inline void SlotVisitor::copyLater(JSCell* owner, CopyToken token, void* ptr, size_t bytes)
{
ASSERT(bytes);
+ // We don't do any copying during EdenCollections.
+ ASSERT(heap()->operationInProgress() != EdenCollection);
+
m_bytesCopied += bytes;
CopiedBlock* block = CopiedSpace::blockFor(ptr);
@@ -226,14 +237,15 @@
return;
}
- if (block->isPinned())
- return;
-
block->reportLiveBytes(owner, token, bytes);
}
-inline void SlotVisitor::reportExtraMemoryUsage(size_t size)
+inline void SlotVisitor::reportExtraMemoryUsage(JSCell* owner, size_t size)
{
+ // We don't want to double-count the extra memory that was reported in previous collections.
+ if (heap()->operationInProgress() == EdenCollection && MarkedBlock::blockFor(owner)->isRemembered(owner))
+ return;
+
size_t* counter = &m_shared.m_vm->heap.m_extraMemoryUsage;
#if ENABLE(COMPARE_AND_SWAP)
@@ -247,6 +259,11 @@
#endif
}
+inline Heap* SlotVisitor::heap() const
+{
+ return &sharedData().m_vm->heap;
+}
+
} // namespace JSC
#endif // SlotVisitorInlines_h
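The new `owner` parameter on `reportExtraMemoryUsage` exists so the eden path can consult the remembered set: an object that is remembered already had its extra memory counted when it was first marked, so counting it again during an eden collection would inflate the heap's extra-memory estimate. A sketch of that guard in isolation (types and flags are ours, not the real Heap API):

```cpp
#include <cassert>
#include <cstddef>

enum HeapOperation { EdenCollection, FullCollection };

// Sketch of the double-counting guard in SlotVisitor::reportExtraMemoryUsage.
struct Counter {
    size_t extraMemory = 0; // stands in for Heap::m_extraMemoryUsage

    void reportExtraMemory(HeapOperation op, bool ownerIsRemembered, size_t size)
    {
        // A remembered owner was counted by the collection that marked it;
        // only full collections reset the counter and recount everything.
        if (op == EdenCollection && ownerIsRemembered)
            return;
        extraMemory += size;
    }
};
```

This is why the call sites in JSString, JSGenericTypedArrayView, and WeakMapData now pass the owning cell through instead of just a byte count.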
diff --git a/Source/JavaScriptCore/jit/Repatch.cpp b/Source/JavaScriptCore/jit/Repatch.cpp
index 3e29da5..5c9aa96 100644
--- a/Source/JavaScriptCore/jit/Repatch.cpp
+++ b/Source/JavaScriptCore/jit/Repatch.cpp
@@ -39,6 +39,7 @@
#include "PolymorphicPutByIdList.h"
#include "RepatchBuffer.h"
#include "ScratchRegisterAllocator.h"
+#include "StackAlignment.h"
#include "StructureRareDataInlines.h"
#include "StructureStubClearingWatchpoint.h"
#include "ThunkGenerators.h"
diff --git a/Source/JavaScriptCore/runtime/JSGenericTypedArrayViewInlines.h b/Source/JavaScriptCore/runtime/JSGenericTypedArrayViewInlines.h
index e920abe..6db8627 100644
--- a/Source/JavaScriptCore/runtime/JSGenericTypedArrayViewInlines.h
+++ b/Source/JavaScriptCore/runtime/JSGenericTypedArrayViewInlines.h
@@ -447,7 +447,7 @@
}
case OversizeTypedArray: {
- visitor.reportExtraMemoryUsage(thisObject->byteSize());
+ visitor.reportExtraMemoryUsage(thisObject, thisObject->byteSize());
break;
}
diff --git a/Source/JavaScriptCore/runtime/JSPropertyNameIterator.h b/Source/JavaScriptCore/runtime/JSPropertyNameIterator.h
index 5914030..f4362ff 100644
--- a/Source/JavaScriptCore/runtime/JSPropertyNameIterator.h
+++ b/Source/JavaScriptCore/runtime/JSPropertyNameIterator.h
@@ -109,9 +109,9 @@
return m_enumerationCache.get();
}
- inline void StructureRareData::setEnumerationCache(VM& vm, const Structure* owner, JSPropertyNameIterator* value)
+ inline void StructureRareData::setEnumerationCache(VM& vm, const Structure*, JSPropertyNameIterator* value)
{
- m_enumerationCache.set(vm, owner, value);
+ m_enumerationCache.set(vm, this, value);
}
} // namespace JSC
diff --git a/Source/JavaScriptCore/runtime/JSString.cpp b/Source/JavaScriptCore/runtime/JSString.cpp
index a5bfe26..099b623 100644
--- a/Source/JavaScriptCore/runtime/JSString.cpp
+++ b/Source/JavaScriptCore/runtime/JSString.cpp
@@ -72,7 +72,7 @@
else {
StringImpl* impl = thisObject->m_value.impl();
ASSERT(impl);
- visitor.reportExtraMemoryUsage(impl->costDuringGC());
+ visitor.reportExtraMemoryUsage(thisObject, impl->costDuringGC());
}
}
diff --git a/Source/JavaScriptCore/runtime/StructureRareDataInlines.h b/Source/JavaScriptCore/runtime/StructureRareDataInlines.h
index 20b7f8b..5b39bad 100644
--- a/Source/JavaScriptCore/runtime/StructureRareDataInlines.h
+++ b/Source/JavaScriptCore/runtime/StructureRareDataInlines.h
@@ -35,9 +35,9 @@
return m_previous.get();
}
-inline void StructureRareData::setPreviousID(VM& vm, Structure* transition, Structure* structure)
+inline void StructureRareData::setPreviousID(VM& vm, Structure*, Structure* structure)
{
- m_previous.set(vm, transition, structure);
+ m_previous.set(vm, this, structure);
}
inline void StructureRareData::clearPreviousID()
@@ -50,9 +50,9 @@
return m_objectToStringValue.get();
}
-inline void StructureRareData::setObjectToStringValue(VM& vm, const JSCell* owner, JSString* value)
+inline void StructureRareData::setObjectToStringValue(VM& vm, const JSCell*, JSString* value)
{
- m_objectToStringValue.set(vm, owner, value);
+ m_objectToStringValue.set(vm, this, value);
}
} // namespace JSC
diff --git a/Source/JavaScriptCore/runtime/WeakMapData.cpp b/Source/JavaScriptCore/runtime/WeakMapData.cpp
index ce60c8c..224be8a 100644
--- a/Source/JavaScriptCore/runtime/WeakMapData.cpp
+++ b/Source/JavaScriptCore/runtime/WeakMapData.cpp
@@ -64,7 +64,7 @@
// Rough approximation of the external storage needed for the hashtable.
// This isn't exact, but it is close enough, and proportional to the actual
// external memory usage.
- visitor.reportExtraMemoryUsage(thisObj->m_map.capacity() * (sizeof(JSObject*) + sizeof(WriteBarrier<Unknown>)));
+ visitor.reportExtraMemoryUsage(thisObj, thisObj->m_map.capacity() * (sizeof(JSObject*) + sizeof(WriteBarrier<Unknown>)));
}
void WeakMapData::set(VM& vm, JSObject* key, JSValue value)