JavaScriptCore does not have speculative->baseline OSR
https://bugs.webkit.org/show_bug.cgi?id=67826

Reviewed by Oliver Hunt.
        
This adds the ability to bail out of DFG speculative JIT execution by
performing an on-stack replacement (OSR) that results in the control
flow going to the equivalent code generated by the old JIT.
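
To see the shape of the control transfer, here is a toy model of
speculative execution with a bail-out to equivalent unoptimized code.
It is illustrative only; a real OSR exit jumps into the middle of the
baseline machine code after reconstructing the frame, rather than
calling a function:

    // Toy model, not JSC code: the "DFG" body speculates x >= 0 and
    // falls back to the "old JIT" body when the speculation fails.
    #include <cstdio>

    static int baselineBody(int x) // handles every case
    {
        return x < 0 ? -x : x;
    }

    static int speculativeBody(int x) // speculates x >= 0
    {
        if (x < 0)                  // speculation check fails...
            return baselineBody(x); // ...so bail out to the baseline code
        return x;
    }

    int main()
    {
        std::printf("%d %d\n", speculativeBody(5), speculativeBody(-5));
        return 0;
    }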
        
This required a number of new features, as well as taking advantage of
some features that happened to already be present:
        
We already had a policy of storing the bytecode index for which a DFG
node was generated inside the DFG::Node class. This was previously
called exceptionInfo. It's now renamed to codeOrigin to reflect that
it's used for more than just exceptions. OSR uses this to figure out
which bytecode index to use to look up the machine code location in
the code generated by the old JIT that we should be jumping to.
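
A sketch of that lookup, assuming (as the patch below does) that the
decoded map is sorted by bytecode index and consulted by binary search;
the struct and function here are illustrative, not the actual JSC
declarations:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct BytecodeAndMachineOffset {
        uint32_t bytecodeIndex;     // key: where the exit wants to resume
        uint32_t machineCodeOffset; // value: offset into baseline JIT code
    };

    // Find the baseline machine code address for a bytecode index.
    void* jumpTargetFor(uint32_t bytecodeIndex,
                        const std::vector<BytecodeAndMachineOffset>& decodedMap,
                        char* baselineCodeStart)
    {
        auto it = std::lower_bound(decodedMap.begin(), decodedMap.end(),
            bytecodeIndex,
            [](const BytecodeAndMachineOffset& entry, uint32_t key) {
                return entry.bytecodeIndex < key;
            });
        // Every bytecode index we can exit to must have an entry.
        return baselineCodeStart + it->machineCodeOffset;
    }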
        
CodeBlock now stores a mapping between bytecode indices and machine
code offsets for code generated by the old JIT. This is implemented
by CompactJITCodeMap, which tries to compress this data a bit.  The
OSR compiler decodes this and uses it to find the machine code
locations it should be jumping to.
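
One plausible way to "compress this data a bit", suggested by the
encodeNumber/decodeNumber helpers in the new header: since entries are
appended in increasing order, store successive (bytecodeIndex,
machineCodeOffset) pairs as deltas in a variable-length byte encoding.
A self-contained sketch of that general scheme, not the exact format:

    #include <cstdint>
    #include <vector>

    // Sketch only: the real format may differ. Deltas between successive
    // entries are small, so they usually fit in a single byte.
    static void encodeNumber(std::vector<uint8_t>& out, uint32_t value)
    {
        do {
            uint8_t byte = value & 0x7f;
            value >>= 7;
            if (value)
                byte |= 0x80; // high bit set: more bytes follow
            out.push_back(byte);
        } while (value);
    }

    struct Encoder {
        std::vector<uint8_t> bytes;
        uint32_t previousBytecodeIndex { 0 };
        uint32_t previousMachineCodeOffset { 0 };

        void append(uint32_t bytecodeIndex, uint32_t machineCodeOffset)
        {
            encodeNumber(bytes, bytecodeIndex - previousBytecodeIndex);
            encodeNumber(bytes, machineCodeOffset - previousMachineCodeOffset);
            previousBytecodeIndex = bytecodeIndex;
            previousMachineCodeOffset = machineCodeOffset;
        }
    };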
        
We already had a mechanism that emitted SetLocal nodes in the DFG graph
that told us the time at which the old JIT would have stored something
into its register file, and the DFG::Node that corresponds to the value
that it would have stored. These SetLocals were mostly dead-code-
eliminated, but our DCE leaves the nodes intact, merely dropping
their ref counts to 0. This allows the OSR compiler to construct a
mapping between the state as it would have been seen by the old JIT
and the state as the DFG JIT sees it. The OSR compiler uses this to
generate code that reshapes the call frame so that it is like what the
old JIT would expect.
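
One subtlety of that reshaping (handled by exitSpeculativeWithOSR
below) is that a value's source slot may itself be the destination of
another move, e.g. the permutation 1 -> 2, 2 -> 1; the patch lifts all
sources into scratch storage before storing to any destination. A
standalone illustration, in plain C++ rather than JSC code, of why the
two-phase copy is safe where an in-place copy would clobber a source:

    #include <cassert>
    #include <cstddef>
    #include <vector>

    struct Move { int src; int dst; };

    // Phase 1 lifts every source into scratch; phase 2 stores to the
    // destinations. No destination is written while sources are live.
    void reshape(std::vector<int>& frame, const std::vector<Move>& moves)
    {
        std::vector<int> scratch;
        scratch.reserve(moves.size());
        for (const Move& m : moves)
            scratch.push_back(frame[m.src]);
        for (std::size_t i = 0; i < moves.size(); ++i)
            frame[moves[i].dst] = scratch[i];
    }

    int main()
    {
        std::vector<int> frame { 10, 11, 12 };
        reshape(frame, { { 1, 2 }, { 2, 1 } }); // permute slots 1 and 2
        assert(frame[1] == 12 && frame[2] == 11);
        return 0;
    }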
        
Finally, when DFG_OSR_EXIT is enabled (the default when TIERED_COMPILATION
is enabled) we no longer emit the non-speculative path.

* JavaScriptCore.xcodeproj/project.pbxproj:
* bytecode/CodeBlock.h:
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::currentCodeOrigin):
(JSC::DFG::ByteCodeParser::addToGraph):
* dfg/DFGGPRInfo.h:
* dfg/DFGGenerationInfo.h:
(JSC::DFG::GenerationInfo::alive):
* dfg/DFGGraph.cpp:
(JSC::DFG::Graph::dump):
* dfg/DFGJITCodeGenerator.cpp:
(JSC::DFG::JITCodeGenerator::emitCall):
* dfg/DFGJITCodeGenerator.h:
(JSC::DFG::JITCodeGenerator::appendCallWithExceptionCheck):
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::exitSpeculativeWithOSR):
(JSC::DFG::JITCompiler::linkOSRExits):
(JSC::DFG::JITCompiler::compileBody):
(JSC::DFG::JITCompiler::link):
* dfg/DFGJITCompiler.h:
(JSC::DFG::CallRecord::CallRecord):
(JSC::DFG::JITCompiler::notifyCall):
(JSC::DFG::JITCompiler::appendCallWithExceptionCheck):
(JSC::DFG::JITCompiler::appendCallWithFastExceptionCheck):
(JSC::DFG::JITCompiler::addJSCall):
(JSC::DFG::JITCompiler::JSCallRecord::JSCallRecord):
* dfg/DFGNode.h:
(JSC::DFG::CodeOrigin::CodeOrigin):
(JSC::DFG::CodeOrigin::isSet):
(JSC::DFG::CodeOrigin::bytecodeIndex):
(JSC::DFG::Node::Node):
(JSC::DFG::Node::child1Unchecked):
* dfg/DFGNonSpeculativeJIT.cpp:
(JSC::DFG::NonSpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::ValueSource::dump):
(JSC::DFG::ValueRecovery::dump):
(JSC::DFG::OSRExit::OSRExit):
(JSC::DFG::SpeculativeJIT::compile):
(JSC::DFG::SpeculativeJIT::compileMovHint):
(JSC::DFG::SpeculativeJIT::computeValueRecoveryFor):
* dfg/DFGSpeculativeJIT.h:
(JSC::DFG::ValueSource::ValueSource):
(JSC::DFG::ValueSource::isSet):
(JSC::DFG::ValueSource::nodeIndex):
(JSC::DFG::ValueRecovery::ValueRecovery):
(JSC::DFG::ValueRecovery::alreadyInRegisterFile):
(JSC::DFG::ValueRecovery::inGPR):
(JSC::DFG::ValueRecovery::inFPR):
(JSC::DFG::ValueRecovery::displacedInRegisterFile):
(JSC::DFG::ValueRecovery::constant):
(JSC::DFG::ValueRecovery::technique):
(JSC::DFG::ValueRecovery::gpr):
(JSC::DFG::ValueRecovery::fpr):
(JSC::DFG::ValueRecovery::virtualRegister):
(JSC::DFG::OSRExit::numberOfRecoveries):
(JSC::DFG::OSRExit::valueRecovery):
(JSC::DFG::OSRExit::isArgument):
(JSC::DFG::OSRExit::argumentForIndex):
(JSC::DFG::OSRExit::variableForIndex):
(JSC::DFG::OSRExit::operandForIndex):
(JSC::DFG::SpeculativeJIT::osrExits):
(JSC::DFG::SpeculativeJIT::speculationCheck):
(JSC::DFG::SpeculativeJIT::valueSourceForOperand):
(JSC::DFG::SpeculativeJIT::setNodeIndexForOperand):
(JSC::DFG::SpeculativeJIT::valueSourceReferenceForOperand):
(JSC::DFG::SpeculativeJIT::computeValueRecoveryFor):
(JSC::DFG::SpeculationCheckIndexIterator::SpeculationCheckIndexIterator):
(JSC::DFG::SpeculativeJIT::SpeculativeJIT):
* jit/CompactJITCodeMap.h: Added.
(JSC::BytecodeAndMachineOffset::BytecodeAndMachineOffset):
(JSC::BytecodeAndMachineOffset::getBytecodeIndex):
(JSC::BytecodeAndMachineOffset::getMachineCodeOffset):
(JSC::CompactJITCodeMap::~CompactJITCodeMap):
(JSC::CompactJITCodeMap::decode):
(JSC::CompactJITCodeMap::CompactJITCodeMap):
(JSC::CompactJITCodeMap::at):
(JSC::CompactJITCodeMap::decodeNumber):
(JSC::CompactJITCodeMap::Encoder::Encoder):
(JSC::CompactJITCodeMap::Encoder::~Encoder):
(JSC::CompactJITCodeMap::Encoder::append):
(JSC::CompactJITCodeMap::Encoder::finish):
(JSC::CompactJITCodeMap::Encoder::appendByte):
(JSC::CompactJITCodeMap::Encoder::encodeNumber):
(JSC::CompactJITCodeMap::Encoder::ensureCapacityFor):
* jit/JIT.cpp:
(JSC::JIT::privateCompileMainPass):
(JSC::JIT::privateCompile):
* jit/JIT.h:
* runtime/JSGlobalData.cpp:
(JSC::JSGlobalData::JSGlobalData):
(JSC::JSGlobalData::~JSGlobalData):
* runtime/JSGlobalData.h:
(JSC::JSGlobalData::osrScratchBufferForSize):
* runtime/JSValue.cpp:
(JSC::JSValue::description):



git-svn-id: http://svn.webkit.org/repository/webkit/trunk@94996 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index 9cd36e1..4cc0412 100644
--- a/Source/JavaScriptCore/ChangeLog
+++ b/Source/JavaScriptCore/ChangeLog
@@ -1,3 +1,141 @@
+2011-09-09  Filip Pizlo  <fpizlo@apple.com>
+
+        JavaScriptCore does not have speculative->baseline OSR
+        https://bugs.webkit.org/show_bug.cgi?id=67826
+
+        Reviewed by Oliver Hunt.
+        
+        This adds the ability to bail out of DFG speculative JIT execution by
+        performing an on-stack replacement (OSR) that results in the control
+        flow going to the equivalent code generated by the old JIT.
+        
+        This required a number of new features, as well as taking advantage of
+        some features that happened to already be present:
+        
+        We already had a policy of storing the bytecode index for which a DFG
+        node was generated inside the DFG::Node class. This was previously
+        called exceptionInfo. It's now renamed to codeOrigin to reflect that
+        it's used for more than just exceptions. OSR uses this to figure out
+        which bytecode index to use to look up the machine code location in
+        the code generated by the old JIT that we should be jumping to.
+        
+        CodeBlock now stores a mapping between bytecode indices and machine
+        code offsets for code generated by the old JIT. This is implemented
+        by CompactJITCodeMap, which tries to compress this data a bit.  The
+        OSR compiler decodes this and uses it to find the machine code
+        locations it should be jumping to.
+        
+        We already had a mechanism that emitted SetLocal nodes in the DFG graph
+        that told us the time at which the old JIT would have stored something
+        into its register file, and the DFG::Node that corresponds to the value
+        that it would have stored. These SetLocals were mostly dead-code-
+        eliminated, but our DCE leaves the nodes intact, merely dropping
+        their ref counts to 0. This allows the OSR compiler to construct a
+        mapping between the state as it would have been seen by the old JIT
+        and the state as the DFG JIT sees it. The OSR compiler uses this to
+        generate code that reshapes the call frame so that it is like what the
+        old JIT would expect.
+        
+        Finally, when DFG_OSR_EXIT is enabled (the default when TIERED_COMPILATION
+        is enabled) we no longer emit the non-speculative path.
+
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * bytecode/CodeBlock.h:
+        * dfg/DFGByteCodeParser.cpp:
+        (JSC::DFG::ByteCodeParser::currentCodeOrigin):
+        (JSC::DFG::ByteCodeParser::addToGraph):
+        * dfg/DFGGPRInfo.h:
+        * dfg/DFGGenerationInfo.h:
+        (JSC::DFG::GenerationInfo::alive):
+        * dfg/DFGGraph.cpp:
+        (JSC::DFG::Graph::dump):
+        * dfg/DFGJITCodeGenerator.cpp:
+        (JSC::DFG::JITCodeGenerator::emitCall):
+        * dfg/DFGJITCodeGenerator.h:
+        (JSC::DFG::JITCodeGenerator::appendCallWithExceptionCheck):
+        * dfg/DFGJITCompiler.cpp:
+        (JSC::DFG::JITCompiler::exitSpeculativeWithOSR):
+        (JSC::DFG::JITCompiler::linkOSRExits):
+        (JSC::DFG::JITCompiler::compileBody):
+        (JSC::DFG::JITCompiler::link):
+        * dfg/DFGJITCompiler.h:
+        (JSC::DFG::CallRecord::CallRecord):
+        (JSC::DFG::JITCompiler::notifyCall):
+        (JSC::DFG::JITCompiler::appendCallWithExceptionCheck):
+        (JSC::DFG::JITCompiler::appendCallWithFastExceptionCheck):
+        (JSC::DFG::JITCompiler::addJSCall):
+        (JSC::DFG::JITCompiler::JSCallRecord::JSCallRecord):
+        * dfg/DFGNode.h:
+        (JSC::DFG::CodeOrigin::CodeOrigin):
+        (JSC::DFG::CodeOrigin::isSet):
+        (JSC::DFG::CodeOrigin::bytecodeIndex):
+        (JSC::DFG::Node::Node):
+        (JSC::DFG::Node::child1Unchecked):
+        * dfg/DFGNonSpeculativeJIT.cpp:
+        (JSC::DFG::NonSpeculativeJIT::compile):
+        * dfg/DFGSpeculativeJIT.cpp:
+        (JSC::DFG::ValueSource::dump):
+        (JSC::DFG::ValueRecovery::dump):
+        (JSC::DFG::OSRExit::OSRExit):
+        (JSC::DFG::SpeculativeJIT::compile):
+        (JSC::DFG::SpeculativeJIT::compileMovHint):
+        (JSC::DFG::SpeculativeJIT::computeValueRecoveryFor):
+        * dfg/DFGSpeculativeJIT.h:
+        (JSC::DFG::ValueSource::ValueSource):
+        (JSC::DFG::ValueSource::isSet):
+        (JSC::DFG::ValueSource::nodeIndex):
+        (JSC::DFG::ValueRecovery::ValueRecovery):
+        (JSC::DFG::ValueRecovery::alreadyInRegisterFile):
+        (JSC::DFG::ValueRecovery::inGPR):
+        (JSC::DFG::ValueRecovery::inFPR):
+        (JSC::DFG::ValueRecovery::displacedInRegisterFile):
+        (JSC::DFG::ValueRecovery::constant):
+        (JSC::DFG::ValueRecovery::technique):
+        (JSC::DFG::ValueRecovery::gpr):
+        (JSC::DFG::ValueRecovery::fpr):
+        (JSC::DFG::ValueRecovery::virtualRegister):
+        (JSC::DFG::OSRExit::numberOfRecoveries):
+        (JSC::DFG::OSRExit::valueRecovery):
+        (JSC::DFG::OSRExit::isArgument):
+        (JSC::DFG::OSRExit::argumentForIndex):
+        (JSC::DFG::OSRExit::variableForIndex):
+        (JSC::DFG::OSRExit::operandForIndex):
+        (JSC::DFG::SpeculativeJIT::osrExits):
+        (JSC::DFG::SpeculativeJIT::speculationCheck):
+        (JSC::DFG::SpeculativeJIT::valueSourceForOperand):
+        (JSC::DFG::SpeculativeJIT::setNodeIndexForOperand):
+        (JSC::DFG::SpeculativeJIT::valueSourceReferenceForOperand):
+        (JSC::DFG::SpeculativeJIT::computeValueRecoveryFor):
+        (JSC::DFG::SpeculationCheckIndexIterator::SpeculationCheckIndexIterator):
+        (JSC::DFG::SpeculativeJIT::SpeculativeJIT):
+        * jit/CompactJITCodeMap.h: Added.
+        (JSC::BytecodeAndMachineOffset::BytecodeAndMachineOffset):
+        (JSC::BytecodeAndMachineOffset::getBytecodeIndex):
+        (JSC::BytecodeAndMachineOffset::getMachineCodeOffset):
+        (JSC::CompactJITCodeMap::~CompactJITCodeMap):
+        (JSC::CompactJITCodeMap::decode):
+        (JSC::CompactJITCodeMap::CompactJITCodeMap):
+        (JSC::CompactJITCodeMap::at):
+        (JSC::CompactJITCodeMap::decodeNumber):
+        (JSC::CompactJITCodeMap::Encoder::Encoder):
+        (JSC::CompactJITCodeMap::Encoder::~Encoder):
+        (JSC::CompactJITCodeMap::Encoder::append):
+        (JSC::CompactJITCodeMap::Encoder::finish):
+        (JSC::CompactJITCodeMap::Encoder::appendByte):
+        (JSC::CompactJITCodeMap::Encoder::encodeNumber):
+        (JSC::CompactJITCodeMap::Encoder::ensureCapacityFor):
+        * jit/JIT.cpp:
+        (JSC::JIT::privateCompileMainPass):
+        (JSC::JIT::privateCompile):
+        * jit/JIT.h:
+        * runtime/JSGlobalData.cpp:
+        (JSC::JSGlobalData::JSGlobalData):
+        (JSC::JSGlobalData::~JSGlobalData):
+        * runtime/JSGlobalData.h:
+        (JSC::JSGlobalData::osrScratchBufferForSize):
+        * runtime/JSValue.cpp:
+        (JSC::JSValue::description):
+
 2011-09-12  Geoffrey Garen  <ggaren@apple.com>
 
         Re-enabled ENABLE(LAZY_BLOCK_FREEING).
diff --git a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
index a0cce6a..4853ea1 100644
--- a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
+++ b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
@@ -65,6 +65,7 @@
 		0FD3C82714115D4F00FD81CB /* DFGPropagator.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD3C82414115D2200FD81CB /* DFGPropagator.h */; };
 		0FD3C82814115D4F00FD81CB /* DFGDriver.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD3C82214115D0E00FD81CB /* DFGDriver.h */; };
 		0FD82E2114172CE300179C94 /* DFGCapabilities.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD82E1E14172C2F00179C94 /* DFGCapabilities.cpp */; };
+		0FD82E39141AB14D00179C94 /* CompactJITCodeMap.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD82E37141AB14200179C94 /* CompactJITCodeMap.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		1400067712A6F7830064D123 /* OSAllocator.h in Headers */ = {isa = PBXBuildFile; fileRef = 1400067612A6F7830064D123 /* OSAllocator.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		1400069312A6F9E10064D123 /* OSAllocatorPosix.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 1400069212A6F9E10064D123 /* OSAllocatorPosix.cpp */; };
 		140566C4107EC255005DBC8D /* JSAPIValueWrapper.cpp in Sources */ = {isa = PBXBuildFile; fileRef = BC0894D50FAFBA2D00001865 /* JSAPIValueWrapper.cpp */; };
@@ -797,6 +798,7 @@
 		0FD3C82414115D2200FD81CB /* DFGPropagator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGPropagator.h; path = dfg/DFGPropagator.h; sourceTree = "<group>"; };
 		0FD82E1E14172C2F00179C94 /* DFGCapabilities.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGCapabilities.cpp; path = dfg/DFGCapabilities.cpp; sourceTree = "<group>"; };
 		0FD82E1F14172C2F00179C94 /* DFGCapabilities.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGCapabilities.h; path = dfg/DFGCapabilities.h; sourceTree = "<group>"; };
+		0FD82E37141AB14200179C94 /* CompactJITCodeMap.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CompactJITCodeMap.h; sourceTree = "<group>"; };
 		1400067612A6F7830064D123 /* OSAllocator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = OSAllocator.h; sourceTree = "<group>"; };
 		1400069212A6F9E10064D123 /* OSAllocatorPosix.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = OSAllocatorPosix.cpp; sourceTree = "<group>"; };
 		140D17D60E8AD4A9000CD17D /* JSBasePrivate.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSBasePrivate.h; sourceTree = "<group>"; };
@@ -1576,6 +1578,7 @@
 		1429D92C0ED22D7000B89619 /* jit */ = {
 			isa = PBXGroup;
 			children = (
+				0FD82E37141AB14200179C94 /* CompactJITCodeMap.h */,
 				A7B48DB60EE74CFC00DCBDB6 /* ExecutableAllocator.cpp */,
 				A7B48DB50EE74CFC00DCBDB6 /* ExecutableAllocator.h */,
 				86DB64630F95C6FC00D7D921 /* ExecutableAllocatorFixedVMPool.cpp */,
@@ -2394,6 +2397,7 @@
 				BC257DE80E1F51C50016B6C9 /* Arguments.h in Headers */,
 				86D3B2C410156BDE002865E7 /* ARMAssembler.h in Headers */,
 				86ADD1450FDDEA980006EEC2 /* ARMv7Assembler.h in Headers */,
+				0FD82E39141AB14D00179C94 /* CompactJITCodeMap.h in Headers */,
 				BC18C3E60E16F5CD00B34460 /* ArrayConstructor.h in Headers */,
 				BC18C46E0E16F5CD00B34460 /* TCSpinLock.h in Headers */,
 				0F963B2F13FC66BB0002D9B2 /* MetaAllocatorHandle.h in Headers */,
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.h b/Source/JavaScriptCore/bytecode/CodeBlock.h
index 068cc69..7110099 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.h
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.h
@@ -30,6 +30,7 @@
 #ifndef CodeBlock_h
 #define CodeBlock_h
 
+#include "CompactJITCodeMap.h"
 #include "EvalCodeCache.h"
 #include "Instruction.h"
 #include "JITCode.h"
@@ -298,6 +299,17 @@
         void unlinkIncomingCalls();
 #endif
 
+#if ENABLE(TIERED_COMPILATION)
+        void setJITCodeMap(PassOwnPtr<CompactJITCodeMap> jitCodeMap)
+        {
+            m_jitCodeMap = jitCodeMap;
+        }
+        CompactJITCodeMap* jitCodeMap()
+        {
+            return m_jitCodeMap.get();
+        }
+#endif
+
 #if ENABLE(INTERPRETER)
         unsigned bytecodeOffset(Instruction* returnAddress)
         {
@@ -651,6 +663,9 @@
         MacroAssemblerCodePtr m_jitCodeWithArityCheck;
         SentinelLinkedList<CallLinkInfo, BasicRawSentinelNode<CallLinkInfo> > m_incomingCalls;
 #endif
+#if ENABLE(TIERED_COMPILATION)
+        OwnPtr<CompactJITCodeMap> m_jitCodeMap;
+#endif
 #if ENABLE(VALUE_PROFILER)
         SegmentedVector<ValueProfile, 8> m_valueProfiles;
 #endif
@@ -666,7 +681,7 @@
 
         SymbolTable* m_symbolTable;
 
-        OwnPtr<CodeBlock> m_alternative; // FIXME make this do something
+        OwnPtr<CodeBlock> m_alternative;
 
         struct RareData {
            WTF_MAKE_FAST_ALLOCATED;
diff --git a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
index f1d58d7..a6cfd73 100644
--- a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
+++ b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
@@ -367,14 +367,18 @@
         ASSERT(m_codeBlock->getConstant(FirstConstantRegisterIndex + m_constant1).asInt32() == 1);
         return getJSConstant(m_constant1);
     }
-
+    
+    CodeOrigin currentCodeOrigin()
+    {
+        return CodeOrigin(m_currentIndex);
+    }
 
     // These methods create a node and add it to the graph. If nodes of this type are
     // 'mustGenerate' then the node  will implicitly be ref'ed to ensure generation.
     NodeIndex addToGraph(NodeType op, NodeIndex child1 = NoNode, NodeIndex child2 = NoNode, NodeIndex child3 = NoNode)
     {
         NodeIndex resultIndex = (NodeIndex)m_graph.size();
-        m_graph.append(Node(op, m_currentIndex, child1, child2, child3));
+        m_graph.append(Node(op, currentCodeOrigin(), child1, child2, child3));
 
         if (op & NodeMustGenerate)
             m_graph.ref(resultIndex);
@@ -383,7 +387,7 @@
     NodeIndex addToGraph(NodeType op, OpInfo info, NodeIndex child1 = NoNode, NodeIndex child2 = NoNode, NodeIndex child3 = NoNode)
     {
         NodeIndex resultIndex = (NodeIndex)m_graph.size();
-        m_graph.append(Node(op, m_currentIndex, info, child1, child2, child3));
+        m_graph.append(Node(op, currentCodeOrigin(), info, child1, child2, child3));
 
         if (op & NodeMustGenerate)
             m_graph.ref(resultIndex);
@@ -392,7 +396,7 @@
     NodeIndex addToGraph(NodeType op, OpInfo info1, OpInfo info2, NodeIndex child1 = NoNode, NodeIndex child2 = NoNode, NodeIndex child3 = NoNode)
     {
         NodeIndex resultIndex = (NodeIndex)m_graph.size();
-        m_graph.append(Node(op, m_currentIndex, info1, info2, child1, child2, child3));
+        m_graph.append(Node(op, currentCodeOrigin(), info1, info2, child1, child2, child3));
 
         if (op & NodeMustGenerate)
             m_graph.ref(resultIndex);
@@ -402,7 +406,7 @@
     NodeIndex addToGraph(Node::VarArgTag, NodeType op, OpInfo info1, OpInfo info2)
     {
         NodeIndex resultIndex = (NodeIndex)m_graph.size();
-        m_graph.append(Node(Node::VarArg, op, m_currentIndex, info1, info2, m_graph.m_varArgChildren.size() - m_numPassedVarArgs, m_numPassedVarArgs));
+        m_graph.append(Node(Node::VarArg, op, currentCodeOrigin(), info1, info2, m_graph.m_varArgChildren.size() - m_numPassedVarArgs, m_numPassedVarArgs));
         
         m_numPassedVarArgs = 0;
         
diff --git a/Source/JavaScriptCore/dfg/DFGGPRInfo.h b/Source/JavaScriptCore/dfg/DFGGPRInfo.h
index f4e9f76..b55c7be 100644
--- a/Source/JavaScriptCore/dfg/DFGGPRInfo.h
+++ b/Source/JavaScriptCore/dfg/DFGGPRInfo.h
@@ -42,6 +42,7 @@
     static const unsigned numberOfRegisters = 9;
 
     // These registers match the old JIT.
+    static const GPRReg cachedResultRegister = X86Registers::eax;
     static const GPRReg timeoutCheckRegister = X86Registers::r12;
     static const GPRReg callFrameRegister = X86Registers::r13;
     static const GPRReg tagTypeNumberRegister = X86Registers::r14;
diff --git a/Source/JavaScriptCore/dfg/DFGGenerationInfo.h b/Source/JavaScriptCore/dfg/DFGGenerationInfo.h
index 74b03c1..bf87f02 100644
--- a/Source/JavaScriptCore/dfg/DFGGenerationInfo.h
+++ b/Source/JavaScriptCore/dfg/DFGGenerationInfo.h
@@ -274,12 +274,10 @@
         u.fpr = fpr;
     }
 
-#ifndef NDEBUG
     bool alive()
     {
         return m_useCount;
     }
-#endif
 
 private:
     // The index of the node whose result is stored in this virtual register.
diff --git a/Source/JavaScriptCore/dfg/DFGGraph.cpp b/Source/JavaScriptCore/dfg/DFGGraph.cpp
index 3247612..fcb30e5 100644
--- a/Source/JavaScriptCore/dfg/DFGGraph.cpp
+++ b/Source/JavaScriptCore/dfg/DFGGraph.cpp
@@ -52,13 +52,12 @@
     NodeType op = node.op;
 
     unsigned refCount = node.refCount();
-    if (!refCount) {
-        printf("% 4d:\tskipped %s\n", (int)nodeIndex, opName(op));
-        return;
-    }
+    bool skipped = !refCount;
     bool mustGenerate = node.mustGenerate();
-    if (mustGenerate)
+    if (mustGenerate) {
+        ASSERT(refCount);
         --refCount;
+    }
 
     // Example/explanation of dataflow dump output
     //
@@ -77,8 +76,8 @@
     //         $#   - the index in the CodeBlock of a constant { for numeric constants the value is displayed | for integers, in both decimal and hex }.
     //         id#  - the index in the CodeBlock of an identifier { if codeBlock is passed to dump(), the string representation is displayed }.
     //         var# - the index of a var on the global object, used by GetGlobalVar/PutGlobalVar operations.
-    printf("% 4d:\t<%c%u:", (int)nodeIndex, mustGenerate ? '!' : ' ', refCount);
-    if (node.hasResult())
+    printf("% 4d:%s<%c%u:", (int)nodeIndex, skipped ? "  skipped  " : "           ", mustGenerate ? '!' : ' ', refCount);
+    if (node.hasResult() && !skipped)
         printf("%u", node.virtualRegister());
     else
         printf("-");
diff --git a/Source/JavaScriptCore/dfg/DFGJITCodeGenerator.cpp b/Source/JavaScriptCore/dfg/DFGJITCodeGenerator.cpp
index 1bd9c4d..20cd351 100644
--- a/Source/JavaScriptCore/dfg/DFGJITCodeGenerator.cpp
+++ b/Source/JavaScriptCore/dfg/DFGJITCodeGenerator.cpp
@@ -1126,17 +1126,17 @@
     m_jit.addPtr(Imm32(m_jit.codeBlock()->m_numCalleeRegisters * sizeof(Register)), GPRInfo::callFrameRegister);
     
     JITCompiler::Call fastCall = m_jit.nearCall();
-    m_jit.notifyCall(fastCall, m_jit.graph()[m_compileIndex].exceptionInfo);
+    m_jit.notifyCall(fastCall, m_jit.graph()[m_compileIndex].codeOrigin);
     
     JITCompiler::Jump done = m_jit.jump();
     
     slowPath.link(&m_jit);
     
     m_jit.addPtr(Imm32(m_jit.codeBlock()->m_numCalleeRegisters * sizeof(Register)), GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
-    JITCompiler::Call slowCall = m_jit.appendCallWithFastExceptionCheck(slowCallFunction, m_jit.graph()[m_compileIndex].exceptionInfo);
+    JITCompiler::Call slowCall = m_jit.appendCallWithFastExceptionCheck(slowCallFunction, m_jit.graph()[m_compileIndex].codeOrigin);
     m_jit.move(Imm32(numArgs), GPRInfo::regT1);
     m_jit.addPtr(Imm32(m_jit.codeBlock()->m_numCalleeRegisters * sizeof(Register)), GPRInfo::callFrameRegister);
-    m_jit.notifyCall(m_jit.call(GPRInfo::returnValueGPR), m_jit.graph()[m_compileIndex].exceptionInfo);
+    m_jit.notifyCall(m_jit.call(GPRInfo::returnValueGPR), m_jit.graph()[m_compileIndex].codeOrigin);
     
     done.link(&m_jit);
     
@@ -1144,7 +1144,7 @@
     
     jsValueResult(resultGPR, m_compileIndex, DataFormatJS, UseChildrenCalledExplicitly);
     
-    m_jit.addJSCall(fastCall, slowCall, targetToCheck, isCall, m_jit.graph()[m_compileIndex].exceptionInfo);
+    m_jit.addJSCall(fastCall, slowCall, targetToCheck, isCall, m_jit.graph()[m_compileIndex].codeOrigin);
 }
 
 void JITCodeGenerator::speculationCheck(MacroAssembler::Jump jumpToFail)
diff --git a/Source/JavaScriptCore/dfg/DFGJITCodeGenerator.h b/Source/JavaScriptCore/dfg/DFGJITCodeGenerator.h
index 15902f3..4b56693 100644
--- a/Source/JavaScriptCore/dfg/DFGJITCodeGenerator.h
+++ b/Source/JavaScriptCore/dfg/DFGJITCodeGenerator.h
@@ -32,6 +32,7 @@
 #include <dfg/DFGGenerationInfo.h>
 #include <dfg/DFGGraph.h>
 #include <dfg/DFGJITCompiler.h>
+#include <dfg/DFGNode.h>
 #include <dfg/DFGOperations.h>
 #include <dfg/DFGRegisterBank.h>
 
@@ -907,7 +908,7 @@
 
     JITCompiler::Call appendCallWithExceptionCheck(const FunctionPtr& function)
     {
-        return m_jit.appendCallWithExceptionCheck(function, m_jit.graph()[m_compileIndex].exceptionInfo);
+        return m_jit.appendCallWithExceptionCheck(function, m_jit.graph()[m_compileIndex].codeOrigin);
     }
 
     void addBranch(const MacroAssembler::Jump& jump, BlockIndex destination)
diff --git a/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp b/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
index b2b9a6e..a21bd3f 100644
--- a/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
+++ b/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
@@ -100,6 +100,340 @@
     loadPtr(addressFor(node.virtualRegister()), gpr);
 }
 
+#if ENABLE(DFG_OSR_EXIT)
+void JITCompiler::exitSpeculativeWithOSR(const OSRExit& exit, SpeculationRecovery* recovery, Vector<BytecodeAndMachineOffset>& decodedCodeMap)
+{
+    // 1) Pro-forma stuff.
+    exit.m_check.link(this);
+
+#if ENABLE(DFG_DEBUG_VERBOSE)
+    fprintf(stderr, "OSR exit for Node @%d (bc#%u) at JIT offset 0x%x   ", (int)exit.m_nodeIndex, exit.m_bytecodeIndex, debugOffset());
+    exit.dump(stderr);
+#endif
+#if ENABLE(DFG_JIT_BREAK_ON_SPECULATION_FAILURE)
+    breakpoint();
+#endif
+    
+#if ENABLE(DFG_VERBOSE_SPECULATION_FAILURE)
+    SpeculationFailureDebugInfo* debugInfo = new SpeculationFailureDebugInfo;
+    debugInfo->codeBlock = m_codeBlock;
+    debugInfo->debugOffset = debugOffset();
+    
+    debugCall(debugOperationPrintSpeculationFailure, debugInfo);
+#endif
+    
+    // 2) Perform speculation recovery. This only comes into play when an operation
+    //    starts mutating state before verifying the speculation it has already made.
+    
+    GPRReg alreadyBoxed = InvalidGPRReg;
+    
+    if (recovery) {
+        switch (recovery->type()) {
+        case SpeculativeAdd:
+            sub32(recovery->src(), recovery->dest());
+            orPtr(GPRInfo::tagTypeNumberRegister, recovery->dest());
+            alreadyBoxed = recovery->dest();
+            break;
+            
+        case BooleanSpeculationCheck:
+            xorPtr(TrustedImm32(static_cast<int32_t>(ValueFalse)), recovery->dest());
+            break;
+            
+        default:
+            break;
+        }
+    }
+
+    // 3) Figure out how many scratch slots we'll need. We need one for every GPR/FPR
+    //    whose destination is now occupied by a DFG virtual register, and we need
+    //    one for every displaced virtual register if there are more than
+    //    GPRInfo::numberOfRegisters of them. Also see if there are any constants,
+    //    any undefined slots, any FPR slots, and any unboxed ints.
+            
+    Vector<bool> poisonedVirtualRegisters(exit.m_variables.size());
+    for (unsigned i = 0; i < poisonedVirtualRegisters.size(); ++i)
+        poisonedVirtualRegisters[i] = false;
+
+    unsigned numberOfPoisonedVirtualRegisters = 0;
+    unsigned numberOfDisplacedVirtualRegisters = 0;
+    
+    // Booleans for fast checks. We expect that most OSR exits do not have to rebox
+    // Int32s, have no FPRs, and have no constants. If there are constants, we
+    // expect most of them to be jsUndefined(); if that's true then we handle that
+    // specially to minimize code size and execution time.
+    bool haveUnboxedInt32s = false;
+    bool haveFPRs = false;
+    bool haveConstants = false;
+    bool haveUndefined = false;
+    
+    for (int index = 0; index < exit.numberOfRecoveries(); ++index) {
+        const ValueRecovery& recovery = exit.valueRecovery(index);
+        switch (recovery.technique()) {
+        case DisplacedInRegisterFile:
+            numberOfDisplacedVirtualRegisters++;
+            ASSERT((int)recovery.virtualRegister() >= 0);
+            
+            // See if we might like to store to this virtual register before doing
+            // virtual register shuffling. If so, we say that the virtual register
+            // is poisoned: it cannot be stored to until after displaced virtual
+            // registers are handled. We track poisoned virtual registers carefully
+            // to ensure this happens efficiently. Note that we expect this case
+            // to be rare, so the handling of it is optimized for the cases in
+            // which it does not happen.
+            if (recovery.virtualRegister() < (int)exit.m_variables.size()) {
+                switch (exit.m_variables[recovery.virtualRegister()].technique()) {
+                case InGPR:
+                case UnboxedInt32InGPR:
+                case InFPR:
+                    if (!poisonedVirtualRegisters[recovery.virtualRegister()]) {
+                        poisonedVirtualRegisters[recovery.virtualRegister()] = true;
+                        numberOfPoisonedVirtualRegisters++;
+                    }
+                    break;
+                default:
+                    break;
+                }
+            }
+            break;
+            
+        case UnboxedInt32InGPR:
+            haveUnboxedInt32s = true;
+            break;
+            
+        case InFPR:
+            haveFPRs = true;
+            break;
+            
+        case Constant:
+            haveConstants = true;
+            if (recovery.constant().isUndefined())
+                haveUndefined = true;
+            break;
+            
+        default:
+            break;
+        }
+    }
+    
+    EncodedJSValue* scratchBuffer = static_cast<EncodedJSValue*>(globalData()->osrScratchBufferForSize(sizeof(EncodedJSValue) * (numberOfPoisonedVirtualRegisters + (numberOfDisplacedVirtualRegisters <= GPRInfo::numberOfRegisters ? 0 : numberOfDisplacedVirtualRegisters))));
+
+    // From here on, the code assumes that it is profitable to maximize the distance
+    // between when something is computed and when it is stored.
+    
+    // 4) Perform all reboxing of integers.
+    
+    if (haveUnboxedInt32s) {
+        for (int index = 0; index < exit.numberOfRecoveries(); ++index) {
+            const ValueRecovery& recovery = exit.valueRecovery(index);
+            if (recovery.technique() == UnboxedInt32InGPR && recovery.gpr() != alreadyBoxed)
+                orPtr(GPRInfo::tagTypeNumberRegister, recovery.gpr());
+        }
+    }
+    
+    // 5) Dump all non-poisoned GPRs. For poisoned GPRs, save them into the scratch storage.
+    //    Note that GPRs do not have a fast check (like haveFPRs) because we expect that
+    //    most OSR failure points will have at least one GPR that needs to be dumped.
+    
+    unsigned scratchIndex = 0;
+    for (int index = 0; index < exit.numberOfRecoveries(); ++index) {
+        const ValueRecovery& recovery = exit.valueRecovery(index);
+        int operand = exit.operandForIndex(index);
+        switch (recovery.technique()) {
+        case InGPR:
+        case UnboxedInt32InGPR:
+            if (exit.isVariable(index) && poisonedVirtualRegisters[exit.variableForIndex(index)])
+                storePtr(recovery.gpr(), scratchBuffer + scratchIndex++);
+            else
+                storePtr(recovery.gpr(), addressFor((VirtualRegister)operand));
+            break;
+        default:
+            break;
+        }
+    }
+    
+    // At this point all GPRs are available for scratch use.
+    
+    if (haveFPRs) {
+        // 6) Box all doubles (relies on there being more GPRs than FPRs)
+        
+        for (int index = 0; index < exit.numberOfRecoveries(); ++index) {
+            const ValueRecovery& recovery = exit.valueRecovery(index);
+            if (recovery.technique() != InFPR)
+                continue;
+            FPRReg fpr = recovery.fpr();
+            GPRReg gpr = GPRInfo::toRegister(FPRInfo::toIndex(fpr));
+            boxDouble(fpr, gpr);
+        }
+        
+        // 7) Dump all doubles into the register file, or to the scratch storage if
+        //    the destination virtual register is poisoned.
+        
+        for (int index = 0; index < exit.numberOfRecoveries(); ++index) {
+            const ValueRecovery& recovery = exit.valueRecovery(index);
+            if (recovery.technique() != InFPR)
+                continue;
+            GPRReg gpr = GPRInfo::toRegister(FPRInfo::toIndex(recovery.fpr()));
+            if (exit.isVariable(index) && poisonedVirtualRegisters[exit.variableForIndex(index)])
+                storePtr(gpr, scratchBuffer + scratchIndex++);
+            else
+                storePtr(gpr, addressFor((VirtualRegister)exit.operandForIndex(index)));
+        }
+    }
+    
+    ASSERT(scratchIndex == numberOfPoisonedVirtualRegisters);
+    
+    // 8) Reshuffle displaced virtual registers. Optimize for the case that
+    //    the number of displaced virtual registers is not more than the number
+    //    of available physical registers.
+    
+    if (numberOfDisplacedVirtualRegisters) {
+        if (numberOfDisplacedVirtualRegisters <= GPRInfo::numberOfRegisters) {
+            // So far this appears to be the case that triggers all the time, but
+            // that is far from guaranteed.
+        
+            unsigned displacementIndex = 0;
+            for (int index = 0; index < exit.numberOfRecoveries(); ++index) {
+                const ValueRecovery& recovery = exit.valueRecovery(index);
+                if (recovery.technique() != DisplacedInRegisterFile)
+                    continue;
+                loadPtr(addressFor(recovery.virtualRegister()), GPRInfo::toRegister(displacementIndex++));
+            }
+        
+            displacementIndex = 0;
+            for (int index = 0; index < exit.numberOfRecoveries(); ++index) {
+                const ValueRecovery& recovery = exit.valueRecovery(index);
+                if (recovery.technique() != DisplacedInRegisterFile)
+                    continue;
+                storePtr(GPRInfo::toRegister(displacementIndex++), addressFor((VirtualRegister)exit.operandForIndex(index)));
+            }
+        } else {
+            // FIXME: This should use the shuffling algorithm that we use
+            // for speculative->non-speculative jumps, if we ever discover that
+            // some hot code with lots of live values that get displaced and
+            // spilled really enjoys frequently failing speculation.
+        
+            // For now this code is engineered to be correct but probably not
+            // super. In particular, it correctly handles cases where for example
+            // the displacements are a permutation of the destination values, like
+            //
+            // 1 -> 2
+            // 2 -> 1
+            //
+            // It accomplishes this by simply lifting all of the virtual registers
+            // from their old (DFG JIT) locations and dropping them in a scratch
+            // location in memory, and then transferring from that scratch location
+            // to their new (old JIT) locations.
+        
+            for (int index = 0; index < exit.numberOfRecoveries(); ++index) {
+                const ValueRecovery& recovery = exit.valueRecovery(index);
+                if (recovery.technique() != DisplacedInRegisterFile)
+                    continue;
+                loadPtr(addressFor(recovery.virtualRegister()), GPRInfo::regT0);
+                storePtr(GPRInfo::regT0, scratchBuffer + scratchIndex++);
+            }
+        
+            scratchIndex = numberOfPoisonedVirtualRegisters;
+            for (int index = 0; index < exit.numberOfRecoveries(); ++index) {
+                const ValueRecovery& recovery = exit.valueRecovery(index);
+                if (recovery.technique() != DisplacedInRegisterFile)
+                    continue;
+                loadPtr(scratchBuffer + scratchIndex++, GPRInfo::regT0);
+                storePtr(GPRInfo::regT0, addressFor((VirtualRegister)exit.operandForIndex(index)));
+            }
+        
+            ASSERT(scratchIndex == numberOfPoisonedVirtualRegisters + numberOfDisplacedVirtualRegisters);
+        }
+    }
+    
+    // 9) Dump all poisoned virtual registers.
+    
+    scratchIndex = 0;
+    if (numberOfPoisonedVirtualRegisters) {
+        for (int virtualRegister = 0; virtualRegister < (int)exit.m_variables.size(); ++virtualRegister) {
+            if (!poisonedVirtualRegisters[virtualRegister])
+                continue;
+            
+            const ValueRecovery& recovery = exit.m_variables[virtualRegister];
+            switch (recovery.technique()) {
+            case InGPR:
+            case UnboxedInt32InGPR:
+            case InFPR:
+                loadPtr(scratchBuffer + scratchIndex++, GPRInfo::regT0);
+                storePtr(GPRInfo::regT0, addressFor((VirtualRegister)virtualRegister));
+                break;
+                
+            default:
+                break;
+            }
+        }
+    }
+    ASSERT(scratchIndex == numberOfPoisonedVirtualRegisters);
+    
+    // 10) Dump all constants. Optimize for Undefined, since that's a constant we see
+    //     often.
+
+    if (haveConstants) {
+        if (haveUndefined)
+            move(TrustedImmPtr(JSValue::encode(jsUndefined())), GPRInfo::regT0);
+        
+        for (int index = 0; index < exit.numberOfRecoveries(); ++index) {
+            const ValueRecovery& recovery = exit.valueRecovery(index);
+            if (recovery.technique() != Constant)
+                continue;
+            if (recovery.constant().isUndefined())
+                storePtr(GPRInfo::regT0, addressFor((VirtualRegister)exit.operandForIndex(index)));
+            else
+                storePtr(TrustedImmPtr(JSValue::encode(recovery.constant())), addressFor((VirtualRegister)exit.operandForIndex(index)));
+        }
+    }
+    
+    // 11) Load the result of the last bytecode operation into regT0.
+    
+    if (exit.m_lastSetOperand != std::numeric_limits<int>::max())
+        loadPtr(addressFor((VirtualRegister)exit.m_lastSetOperand), GPRInfo::cachedResultRegister);
+    
+    // 12) Fix call frame.
+    
+    ASSERT(codeBlock()->alternative()->getJITType() == JITCode::BaselineJIT);
+    storePtr(TrustedImmPtr(codeBlock()->alternative()), addressFor((VirtualRegister)RegisterFile::CodeBlock));
+    
+    // 13) Jump into the corresponding baseline JIT code.
+    
+    BytecodeAndMachineOffset* mapping = binarySearch<BytecodeAndMachineOffset, unsigned, BytecodeAndMachineOffset::getBytecodeIndex>(decodedCodeMap.begin(), decodedCodeMap.size(), exit.m_bytecodeIndex);
+    
+    ASSERT(mapping);
+    ASSERT(mapping->m_bytecodeIndex == exit.m_bytecodeIndex);
+    
+    void* jumpTarget = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(codeBlock()->alternative()->getJITCode().start()) + mapping->m_machineCodeOffset);
+    
+    ASSERT(GPRInfo::regT1 != GPRInfo::cachedResultRegister);
+    
+    move(TrustedImmPtr(jumpTarget), GPRInfo::regT1);
+    jump(GPRInfo::regT1);
+
+#if ENABLE(DFG_DEBUG_VERBOSE)
+    fprintf(stderr, "   -> %p\n", jumpTarget);
+#endif
+}
+
+void JITCompiler::linkOSRExits(SpeculativeJIT& speculative)
+{
+    Vector<BytecodeAndMachineOffset> decodedCodeMap;
+    ASSERT(codeBlock()->alternative());
+    ASSERT(codeBlock()->alternative()->getJITType() == JITCode::BaselineJIT);
+    ASSERT(codeBlock()->alternative()->jitCodeMap());
+    codeBlock()->alternative()->jitCodeMap()->decode(decodedCodeMap);
+    
+    OSRExitVector::Iterator exitsIter = speculative.osrExits().begin();
+    OSRExitVector::Iterator exitsEnd = speculative.osrExits().end();
+    
+    while (exitsIter != exitsEnd) {
+        const OSRExit& exit = *exitsIter;
+        exitSpeculativeWithOSR(exit, speculative.speculationRecovery(exit.m_recoveryIndex), decodedCodeMap);
+        ++exitsIter;
+    }
+}
+#else // ENABLE(DFG_OSR_EXIT)
 class GeneralizedRegister {
 public:
     GeneralizedRegister() { }
@@ -803,6 +1137,7 @@
     ASSERT(!(checksIter != checksEnd));
     ASSERT(!(entriesIter != entriesEnd));
 }
+#endif // ENABLE(DFG_OSR_EXIT)
 
 void JITCompiler::compileEntry()
 {
@@ -844,12 +1179,16 @@
     // to allow it to check which nodes in the graph may bail out, and may need to reenter the
     // non-speculative path.
     if (compiledSpeculative) {
+#if ENABLE(DFG_OSR_EXIT)
+        linkOSRExits(speculative);
+#else
         SpeculationCheckIndexIterator checkIterator(speculative.speculationChecks());
         NonSpeculativeJIT nonSpeculative(*this);
         nonSpeculative.compile(checkIterator);
 
         // Link the bail-outs from the speculative path to the corresponding entry points into the non-speculative one.
         linkSpeculationChecks(speculative, nonSpeculative);
+#endif
     } else {
         // If compilation through the SpeculativeJIT failed, throw away the code we generated.
         m_calls.clear();
@@ -858,8 +1197,12 @@
         m_methodGets.clear();
         rewindToLabel(speculativePathBegin);
 
+#if ENABLE(DFG_OSR_EXIT)
+        SpeculationCheckIndexIterator checkIterator;
+#else
         SpeculationCheckVector noChecks;
         SpeculationCheckIndexIterator checkIterator(noChecks);
+#endif
         NonSpeculativeJIT nonSpeculative(*this);
         nonSpeculative.compile(checkIterator);
     }
@@ -907,7 +1250,7 @@
         for (unsigned i = 0; i < m_calls.size(); ++i) {
             if (m_calls[i].m_handlesExceptions) {
                 unsigned returnAddressOffset = linkBuffer.returnAddressOffset(m_calls[i].m_call);
-                unsigned exceptionInfo = m_calls[i].m_exceptionInfo;
+                unsigned exceptionInfo = m_calls[i].m_codeOrigin.bytecodeIndex();
                 m_codeBlock->callReturnIndexVector().append(CallReturnOffsetToBytecodeOffset(returnAddressOffset, exceptionInfo));
             }
         }
diff --git a/Source/JavaScriptCore/dfg/DFGJITCompiler.h b/Source/JavaScriptCore/dfg/DFGJITCompiler.h
index 9e42456..137e445 100644
--- a/Source/JavaScriptCore/dfg/DFGJITCompiler.h
+++ b/Source/JavaScriptCore/dfg/DFGJITCompiler.h
@@ -53,6 +53,7 @@
 
 struct EntryLocation;
 struct SpeculationCheck;
+struct OSRExit;
 
 #ifndef NDEBUG
 typedef void (*V_DFGDebugOperation_EP)(ExecState*, void*);
@@ -82,21 +83,21 @@
     }
 
     // Constructor for a call with an exception handler.
-    CallRecord(MacroAssembler::Call call, FunctionPtr function, MacroAssembler::Jump exceptionCheck, ExceptionInfo exceptionInfo)
+    CallRecord(MacroAssembler::Call call, FunctionPtr function, MacroAssembler::Jump exceptionCheck, CodeOrigin codeOrigin)
         : m_call(call)
         , m_function(function)
         , m_exceptionCheck(exceptionCheck)
-        , m_exceptionInfo(exceptionInfo)
+        , m_codeOrigin(codeOrigin)
         , m_handlesExceptions(true)
     {
     }
 
     // Constructor for a call that may cause exceptions, but which are handled
     // through some mechanism other than the in-line exception handler.
-    CallRecord(MacroAssembler::Call call, FunctionPtr function, ExceptionInfo exceptionInfo)
+    CallRecord(MacroAssembler::Call call, FunctionPtr function, CodeOrigin codeOrigin)
         : m_call(call)
         , m_function(function)
-        , m_exceptionInfo(exceptionInfo)
+        , m_codeOrigin(codeOrigin)
         , m_handlesExceptions(true)
     {
     }
@@ -104,7 +105,7 @@
     MacroAssembler::Call m_call;
     FunctionPtr m_function;
     MacroAssembler::Jump m_exceptionCheck;
-    ExceptionInfo m_exceptionInfo;
+    CodeOrigin m_codeOrigin;
     bool m_handlesExceptions;
 };
 
@@ -192,9 +193,9 @@
     }
 
     // Notify the JIT of a call that does not require linking.
-    void notifyCall(Call call, unsigned exceptionInfo)
+    void notifyCall(Call call, CodeOrigin codeOrigin)
     {
-        m_calls.append(CallRecord(call, FunctionPtr(), exceptionInfo));
+        m_calls.append(CallRecord(call, FunctionPtr(), codeOrigin));
     }
 
     // Add a call out from JIT code, without an exception check.
@@ -205,20 +206,20 @@
     }
 
     // Add a call out from JIT code, with an exception check.
-    Call appendCallWithExceptionCheck(const FunctionPtr& function, unsigned exceptionInfo)
+    Call appendCallWithExceptionCheck(const FunctionPtr& function, CodeOrigin codeOrigin)
     {
         Call functionCall = call();
         Jump exceptionCheck = branchTestPtr(NonZero, AbsoluteAddress(&globalData()->exception));
-        m_calls.append(CallRecord(functionCall, function, exceptionCheck, exceptionInfo));
+        m_calls.append(CallRecord(functionCall, function, exceptionCheck, codeOrigin));
         return functionCall;
     }
     
     // Add a call out from JIT code, with a fast exception check that tests if the return value is zero.
-    Call appendCallWithFastExceptionCheck(const FunctionPtr& function, unsigned exceptionInfo)
+    Call appendCallWithFastExceptionCheck(const FunctionPtr& function, CodeOrigin codeOrigin)
     {
         Call functionCall = call();
         Jump exceptionCheck = branchTestPtr(Zero, GPRInfo::returnValueGPR);
-        m_calls.append(CallRecord(functionCall, function, exceptionCheck, exceptionInfo));
+        m_calls.append(CallRecord(functionCall, function, exceptionCheck, codeOrigin));
         return functionCall;
     }
     
@@ -311,9 +312,9 @@
         m_methodGets.append(MethodGetRecord(slowCall, structToCompare, protoObj, protoStructToCompare, putFunction));
     }
     
-    void addJSCall(Call fastCall, Call slowCall, DataLabelPtr targetToCheck, bool isCall, unsigned exceptionInfo)
+    void addJSCall(Call fastCall, Call slowCall, DataLabelPtr targetToCheck, bool isCall, CodeOrigin codeOrigin)
     {
-        m_jsCalls.append(JSCallRecord(fastCall, slowCall, targetToCheck, isCall, exceptionInfo));
+        m_jsCalls.append(JSCallRecord(fastCall, slowCall, targetToCheck, isCall, codeOrigin));
     }
 
 private:
@@ -326,8 +327,14 @@
     void fillNumericToDouble(NodeIndex, FPRReg, GPRReg temporary);
     void fillInt32ToInteger(NodeIndex, GPRReg);
     void fillToJS(NodeIndex, GPRReg);
+    
+#if ENABLE(DFG_OSR_EXIT)
+    void exitSpeculativeWithOSR(const OSRExit&, SpeculationRecovery*, Vector<BytecodeAndMachineOffset>& decodedCodeMap);
+    void linkOSRExits(SpeculativeJIT&);
+#else
     void jumpFromSpeculativeToNonSpeculative(const SpeculationCheck&, const EntryLocation&, SpeculationRecovery*, NodeToRegisterMap& checkNodeToRegisterMap, NodeToRegisterMap& entryNodeToRegisterMap);
     void linkSpeculationChecks(SpeculativeJIT&, NonSpeculativeJIT&);
+#endif
 
     // The globalData, used to access constants such as the vPtrs.
     JSGlobalData* m_globalData;
@@ -386,12 +393,12 @@
     };
     
     struct JSCallRecord {
-        JSCallRecord(Call fastCall, Call slowCall, DataLabelPtr targetToCheck, bool isCall, unsigned exceptionInfo)
+        JSCallRecord(Call fastCall, Call slowCall, DataLabelPtr targetToCheck, bool isCall, CodeOrigin codeOrigin)
             : m_fastCall(fastCall)
             , m_slowCall(slowCall)
             , m_targetToCheck(targetToCheck)
             , m_isCall(isCall)
-            , m_exceptionInfo(exceptionInfo)
+            , m_codeOrigin(codeOrigin)
         {
         }
         
@@ -399,7 +406,7 @@
         Call m_slowCall;
         DataLabelPtr m_targetToCheck;
         bool m_isCall;
-        unsigned m_exceptionInfo;
+        CodeOrigin m_codeOrigin;
     };
 
     Vector<PropertyAccessRecord, 4> m_propertyAccesses;
diff --git a/Source/JavaScriptCore/dfg/DFGNode.h b/Source/JavaScriptCore/dfg/DFGNode.h
index 87a0468..9ef1418 100644
--- a/Source/JavaScriptCore/dfg/DFGNode.h
+++ b/Source/JavaScriptCore/dfg/DFGNode.h
@@ -26,8 +26,13 @@
 #ifndef DFGNode_h
 #define DFGNode_h
 
+#include <wtf/Platform.h>
+
 // Emit various logging information for debugging, including dumping the dataflow graphs.
 #define ENABLE_DFG_DEBUG_VERBOSE 0
+// Emit logging for OSR exit value recoveries at every node, not just nodes that
+// actually have speculation checks.
+#define ENABLE_DFG_VERBOSE_VALUE_RECOVERIES 0
 // Enable generation of dynamic checks into the instruction stream.
 #if !ASSERT_DISABLED
 #define ENABLE_DFG_JIT_ASSERT 1
@@ -50,6 +55,8 @@
 #define DFG_DEBUG_LOCAL_DISBALE 0
 // Disable the SpeculativeJIT without having to touch Platform.h!
 #define DFG_DEBUG_LOCAL_DISBALE_SPECULATIVE 0
+// Disable the non-speculative JIT and use OSR instead.
+#define ENABLE_DFG_OSR_EXIT ENABLE_TIERED_COMPILATION
 // Generate stats on how successful we were in making use of the DFG jit, and remaining on the hot path.
 #define ENABLE_DFG_SUCCESS_STATS 0
 
@@ -72,9 +79,32 @@
 typedef uint32_t NodeIndex;
 static const NodeIndex NoNode = UINT_MAX;
 
-// Information used to map back from an exception to any handler/source information.
+// Information used to map back from an exception to any handler/source information,
+// and to implement OSR.
 // (Presently implemented as a bytecode index).
-typedef uint32_t ExceptionInfo;
+class CodeOrigin {
+public:
+    CodeOrigin()
+        : m_bytecodeIndex(std::numeric_limits<uint32_t>::max())
+    {
+    }
+    
+    explicit CodeOrigin(uint32_t bytecodeIndex)
+        : m_bytecodeIndex(bytecodeIndex)
+    {
+    }
+    
+    bool isSet() const { return m_bytecodeIndex != std::numeric_limits<uint32_t>::max(); }
+    
+    uint32_t bytecodeIndex() const
+    {
+        ASSERT(isSet());
+        return m_bytecodeIndex;
+    }
+    
+private:
+    uint32_t m_bytecodeIndex;
+};
 
 // Entries in the NodeType enum (below) are composed of an id, a result type (possibly none)
 // and some additional informative flags (must generate, is constant, etc).
@@ -357,9 +387,9 @@
     enum VarArgTag { VarArg };
 
     // Construct a node with up to 3 children, no immediate value.
-    Node(NodeType op, ExceptionInfo exceptionInfo, NodeIndex child1 = NoNode, NodeIndex child2 = NoNode, NodeIndex child3 = NoNode)
+    Node(NodeType op, CodeOrigin codeOrigin, NodeIndex child1 = NoNode, NodeIndex child2 = NoNode, NodeIndex child3 = NoNode)
         : op(op)
-        , exceptionInfo(exceptionInfo)
+        , codeOrigin(codeOrigin)
         , m_virtualRegister(InvalidVirtualRegister)
         , m_refCount(0)
     {
@@ -370,9 +400,9 @@
     }
 
     // Construct a node with up to 3 children and an immediate value.
-    Node(NodeType op, ExceptionInfo exceptionInfo, OpInfo imm, NodeIndex child1 = NoNode, NodeIndex child2 = NoNode, NodeIndex child3 = NoNode)
+    Node(NodeType op, CodeOrigin codeOrigin, OpInfo imm, NodeIndex child1 = NoNode, NodeIndex child2 = NoNode, NodeIndex child3 = NoNode)
         : op(op)
-        , exceptionInfo(exceptionInfo)
+        , codeOrigin(codeOrigin)
         , m_virtualRegister(InvalidVirtualRegister)
         , m_refCount(0)
         , m_opInfo(imm.m_value)
@@ -384,9 +414,9 @@
     }
 
     // Construct a node with up to 3 children and two immediate values.
-    Node(NodeType op, ExceptionInfo exceptionInfo, OpInfo imm1, OpInfo imm2, NodeIndex child1 = NoNode, NodeIndex child2 = NoNode, NodeIndex child3 = NoNode)
+    Node(NodeType op, CodeOrigin codeOrigin, OpInfo imm1, OpInfo imm2, NodeIndex child1 = NoNode, NodeIndex child2 = NoNode, NodeIndex child3 = NoNode)
         : op(op)
-        , exceptionInfo(exceptionInfo)
+        , codeOrigin(codeOrigin)
         , m_virtualRegister(InvalidVirtualRegister)
         , m_refCount(0)
         , m_opInfo(imm1.m_value)
@@ -399,9 +429,9 @@
     }
     
     // Construct a node with a variable number of children and two immediate values.
-    Node(VarArgTag, NodeType op, ExceptionInfo exceptionInfo, OpInfo imm1, OpInfo imm2, unsigned firstChild, unsigned numChildren)
+    Node(VarArgTag, NodeType op, CodeOrigin codeOrigin, OpInfo imm1, OpInfo imm2, unsigned firstChild, unsigned numChildren)
         : op(op)
-        , exceptionInfo(exceptionInfo)
+        , codeOrigin(codeOrigin)
         , m_virtualRegister(InvalidVirtualRegister)
         , m_refCount(0)
         , m_opInfo(imm1.m_value)
@@ -643,6 +673,14 @@
         ASSERT(!(op & NodeHasVarArgs));
         return children.fixed.child1;
     }
+    
+    // This is useful if you want to do a fast check on the first child
+    // before also doing a check on the opcode. Use this with care and
+    // avoid it if possible.
+    NodeIndex child1Unchecked()
+    {
+        return children.fixed.child1;
+    }
 
     NodeIndex child2()
     {
@@ -671,7 +709,7 @@
     // This enum value describes the type of the node.
     NodeType op;
     // Used to look up exception handling information (currently implemented as a bytecode index).
-    ExceptionInfo exceptionInfo;
+    CodeOrigin codeOrigin;
     // References to up to 3 children (0 for no child).
     union {
         struct {
diff --git a/Source/JavaScriptCore/dfg/DFGNonSpeculativeJIT.cpp b/Source/JavaScriptCore/dfg/DFGNonSpeculativeJIT.cpp
index da42c38..f776b3b 100644
--- a/Source/JavaScriptCore/dfg/DFGNonSpeculativeJIT.cpp
+++ b/Source/JavaScriptCore/dfg/DFGNonSpeculativeJIT.cpp
@@ -434,6 +434,9 @@
 
 void NonSpeculativeJIT::compile(SpeculationCheckIndexIterator& checkIterator, Node& node)
 {
+#if ENABLE(DFG_OSR_EXIT)
+    UNUSED_PARAM(checkIterator);
+#else
     // Check for speculation checks from the corresponding instruction in the
     // speculative path. Do not check for NodeIndex 0, since this is checked
     // in the outermost compile layer, at the head of the non-speculative path
@@ -442,6 +445,7 @@
     // as speculation checks at this index).
     if (m_compileIndex && checkIterator.hasCheckAtIndex(m_compileIndex))
         trackEntry(m_jit.label());
+#endif
 
     NodeType op = node.op;
 
@@ -1283,8 +1287,10 @@
 void NonSpeculativeJIT::compile(SpeculationCheckIndexIterator& checkIterator)
 {
     // Check for speculation checks added at function entry (checking argument types).
+#if !ENABLE(DFG_OSR_EXIT)
     if (checkIterator.hasCheckAtIndex(m_compileIndex))
         trackEntry(m_jit.label());
+#endif
 
     ASSERT(!m_compileIndex);
     for (m_block = 0; m_block < m_jit.graph().m_blocks.size(); ++m_block)
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
index 82c2e74..b68453c 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
@@ -144,6 +144,7 @@
     return InvalidGPRReg;
 }
 
+#if !ENABLE(DFG_OSR_EXIT)
 SpeculationCheck::SpeculationCheck(MacroAssembler::Jump check, SpeculativeJIT* jit, unsigned recoveryIndex)
     : m_check(check)
     , m_nodeIndex(jit->m_compileIndex)
@@ -170,6 +171,73 @@
             m_fprInfo[iter.index()].nodeIndex = NoNode;
     }
 }
+#endif
+
+#ifndef NDEBUG
+void ValueSource::dump(FILE* out) const
+{
+    fprintf(out, "Node(%d)", m_nodeIndex);
+}
+
+void ValueRecovery::dump(FILE* out) const
+{
+    switch (technique()) {
+    case AlreadyInRegisterFile:
+        fprintf(out, "-");
+        break;
+    case InGPR:
+        fprintf(out, "%%%s", GPRInfo::debugName(gpr()));
+        break;
+    case UnboxedInt32InGPR:
+        fprintf(out, "int32(%%%s)", GPRInfo::debugName(gpr()));
+        break;
+    case InFPR:
+        fprintf(out, "%%%s", FPRInfo::debugName(fpr()));
+        break;
+    case DisplacedInRegisterFile:
+        fprintf(out, "*%d", virtualRegister());
+        break;
+    case Constant:
+        fprintf(out, "[%s]", constant().description());
+        break;
+    case DontKnow:
+        fprintf(out, "!");
+        break;
+    default:
+        fprintf(out, "?%d", technique());
+        break;
+    }
+}
+#endif
+
+#if ENABLE(DFG_OSR_EXIT)
+OSRExit::OSRExit(MacroAssembler::Jump check, SpeculativeJIT* jit, unsigned recoveryIndex)
+    : m_check(check)
+    , m_nodeIndex(jit->m_compileIndex)
+    , m_bytecodeIndex(jit->m_bytecodeIndexForOSR)
+    , m_recoveryIndex(recoveryIndex)
+    , m_arguments(jit->m_arguments.size())
+    , m_variables(jit->m_variables.size())
+    , m_lastSetOperand(jit->m_lastSetOperand)
+{
+    ASSERT(m_bytecodeIndex != std::numeric_limits<uint32_t>::max());
+    for (unsigned argument = 0; argument < m_arguments.size(); ++argument)
+        m_arguments[argument] = jit->computeValueRecoveryFor(jit->m_arguments[argument]);
+    for (unsigned variable = 0; variable < m_variables.size(); ++variable)
+        m_variables[variable] = jit->computeValueRecoveryFor(jit->m_variables[variable]);
+}
+
+#ifndef NDEBUG
+void OSRExit::dump(FILE* out) const
+{
+    for (unsigned argument = 0; argument < m_arguments.size(); ++argument)
+        m_arguments[argument].dump(out);
+    fprintf(out, " : ");
+    for (unsigned variable = 0; variable < m_variables.size(); ++variable)
+        m_variables[variable].dump(out);
+}
+#endif
+#endif
 
 GPRReg SpeculativeJIT::fillSpeculateInt(NodeIndex nodeIndex, DataFormat& returnFormat)
 {
@@ -650,6 +718,42 @@
     }
 
     case SetLocal: {
+        // SetLocal doubles as a hint as to where a node will be stored and
+        // as a speculation point. So before we speculate, make sure that we
+        // know where the child of this node needs to go in the virtual
+        // register file.
+        compileMovHint(node);
+        
+        // As far as OSR is concerned, we're on the bytecode index corresponding
+        // to the *next* instruction, since we've already "executed" the
+        // SetLocal and whatever other DFG Nodes are associated with the same
+        // bytecode index as the SetLocal.
+        ASSERT(m_bytecodeIndexForOSR == node.codeOrigin.bytecodeIndex());
+        Node& nextNode = m_jit.graph()[m_compileIndex+1];
+        
+        // This assertion will fail if we ever emit multiple SetLocal's for
+        // a single bytecode instruction. That's unlikely to happen. But if
+        // it does, the solution is to have this perform a search until
+        // it finds a Node with a different bytecode index from the one we've
+        // got, and to abstractly execute the SetLocal's along the way. Or,
+        // better yet, handle all of the SetLocal's at once: abstract interpret
+        // all of them, then emit code for all of them, with OSR exiting to
+        // the next non-SetLocal instruction. Note the special case for when
+        // both this SetLocal and the next op have a bytecode index of 0; this
+        // occurs for SetLocal's generated at the top of the code block to
+        // initialize locals to undefined. Ideally, we'd have a way of marking
+        // in the CodeOrigin that a SetLocal is synthetic. This would make the
+        // assertion more sensible-looking. We should then also assert that
+        // synthetic SetLocal's don't have speculation checks, since they
+        // should only be dropping values that we statically know we are
+        // allowed to drop into the variables. DFGPropagator will guarantee
+        // this, since it should have at least an approximation (if not
+        // exact knowledge) of the type of the SetLocal's child node, and
+        // should merge that information into the local that is being set.
+        ASSERT(m_bytecodeIndexForOSR != nextNode.codeOrigin.bytecodeIndex()
+               || (!m_bytecodeIndexForOSR && !nextNode.codeOrigin.bytecodeIndex()));
+        m_bytecodeIndexForOSR = nextNode.codeOrigin.bytecodeIndex();
+        
         PredictedType predictedType = m_jit.graph().getPrediction(node.local());
         if (isInt32Prediction(predictedType)) {
             SpeculateIntegerOperand value(this, node.child1());
@@ -670,6 +774,10 @@
             m_jit.storePtr(value.gpr(), JITCompiler::addressFor(node.local()));
             noResult(m_compileIndex);
         }
+        
+        // Indicate that it's no longer necessary to retrieve the value of
+        // this bytecode variable from a register or from some other slot in the
+        // register file: the store above has put it in its home location.
+        valueSourceReferenceForOperand(node.local()) = ValueSource();
         break;
     }
 
@@ -1369,11 +1477,19 @@
         break;
     }
     }
-
+    
     if (node.hasResult() && node.mustGenerate())
         use(m_compileIndex);
 }
 
+void SpeculativeJIT::compileMovHint(Node& node)
+{
+    ASSERT(node.op == SetLocal);
+    
+    setNodeIndexForOperand(node.child1(), node.local());
+    m_lastSetOperand = node.local();
+}
+
 void SpeculativeJIT::compile(BasicBlock& block)
 {
     ASSERT(m_compileOkay);
@@ -1382,35 +1498,74 @@
 #if ENABLE(DFG_JIT_BREAK_ON_EVERY_BLOCK)
     m_jit.breakpoint();
 #endif
+    
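+    // At a block head nothing is known about where operand values live, so
+    // treat every argument and variable as already flushed to the register
+    // file until a SetLocal says otherwise.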
+    for (size_t i = 0; i < m_arguments.size(); ++i)
+        m_arguments[i] = ValueSource();
+    for (size_t i = 0; i < m_variables.size(); ++i)
+        m_variables[i] = ValueSource();
+    m_lastSetOperand = std::numeric_limits<int>::max();
+    m_bytecodeIndexForOSR = std::numeric_limits<uint32_t>::max();
 
     for (; m_compileIndex < block.end; ++m_compileIndex) {
         Node& node = m_jit.graph()[m_compileIndex];
-        if (!node.shouldGenerate())
-            continue;
-        
+        m_bytecodeIndexForOSR = node.codeOrigin.bytecodeIndex();
+        if (!node.shouldGenerate()) {
 #if ENABLE(DFG_DEBUG_VERBOSE)
-        fprintf(stderr, "SpeculativeJIT generating Node @%d at JIT offset 0x%x   ", (int)m_compileIndex, m_jit.debugOffset());
+            fprintf(stderr, "SpeculativeJIT skipping Node @%d (bc#%u) at JIT offset 0x%x     ", (int)m_compileIndex, node.codeOrigin.bytecodeIndex(), m_jit.debugOffset());
+#endif
+            if (node.op == SetLocal)
+                compileMovHint(node);
+        } else {
+            
+#if ENABLE(DFG_DEBUG_VERBOSE)
+            fprintf(stderr, "SpeculativeJIT generating Node @%d (bc#%u) at JIT offset 0x%x   ", (int)m_compileIndex, node.codeOrigin.bytecodeIndex(), m_jit.debugOffset());
 #endif
 #if ENABLE(DFG_JIT_BREAK_ON_EVERY_NODE)
-        m_jit.breakpoint();
+            m_jit.breakpoint();
 #endif
-        checkConsistency();
-        compile(node);
-        if (!m_compileOkay) {
+            checkConsistency();
+            compile(node);
+            if (!m_compileOkay) {
 #if ENABLE(DYNAMIC_TERMINATE_SPECULATION)
-            m_compileOkay = true;
-            m_compileIndex = block.end;
-            clearGenerationInfo();
+                m_compileOkay = true;
+                m_compileIndex = block.end;
+                clearGenerationInfo();
 #endif
-            return;
-        }
+                return;
+            }
+            
 #if ENABLE(DFG_DEBUG_VERBOSE)
-        if (node.hasResult())
-            fprintf(stderr, "-> %s\n", dataFormatToString(m_generationInfo[node.virtualRegister()].registerFormat()));
-        else
-            fprintf(stderr, "\n");
+            if (node.hasResult()) {
+                GenerationInfo& info = m_generationInfo[node.virtualRegister()];
+                fprintf(stderr, "-> %s, vr#%d", dataFormatToString(info.registerFormat()), (int)node.virtualRegister());
+                if (info.registerFormat() != DataFormatNone) {
+                    if (info.registerFormat() == DataFormatDouble)
+                        fprintf(stderr, ", %s", FPRInfo::debugName(info.fpr()));
+                    else
+                        fprintf(stderr, ", %s", GPRInfo::debugName(info.gpr()));
+                }
+                fprintf(stderr, "    ");
+            } else
+                fprintf(stderr, "    ");
 #endif
-        checkConsistency();
+        }
+        
+#if ENABLE(DFG_VERBOSE_VALUE_RECOVERIES)
+        for (int operand = -m_arguments.size() - RegisterFile::CallFrameHeaderSize; operand < -RegisterFile::CallFrameHeaderSize; ++operand)
+            computeValueRecoveryFor(operand).dump(stderr);
+        
+        fprintf(stderr, " : ");
+        
+        for (int operand = 0; operand < (int)m_variables.size(); ++operand)
+            computeValueRecoveryFor(operand).dump(stderr);
+#endif
+      
+#if ENABLE(DFG_DEBUG_VERBOSE)
+        fprintf(stderr, "\n");
+#endif
+        
+        if (node.shouldGenerate())
+            checkConsistency();
     }
 }
 
@@ -1419,6 +1574,7 @@
 void SpeculativeJIT::checkArgumentTypes()
 {
     ASSERT(!m_compileIndex);
+    m_bytecodeIndexForOSR = 0;
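+    // Argument type checks happen before any bytecode executes, so an OSR
+    // exit taken from here re-enters the old JIT at bytecode index 0.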
     for (int i = 0; i < m_jit.codeBlock()->m_numParameters; ++i) {
         VirtualRegister virtualRegister = (VirtualRegister)(m_jit.codeBlock()->thisRegister() + i);
         PredictedType predictedType = m_jit.graph().getPrediction(virtualRegister);
@@ -1454,7 +1610,7 @@
     ASSERT(!m_compileIndex);
     for (m_block = 0; m_block < m_jit.graph().m_blocks.size(); ++m_block) {
         compile(*m_jit.graph().m_blocks[m_block]);
-#if !ENABLE(DYNAMIC_OPTIMIZATION)
+#if !ENABLE(DYNAMIC_TERMINATE_SPECULATION)
         if (!m_compileOkay)
             return false;
 #endif
@@ -1463,6 +1619,119 @@
     return true;
 }
 
+ValueRecovery SpeculativeJIT::computeValueRecoveryFor(const ValueSource& valueSource)
+{
+    if (!valueSource.isSet())
+        return ValueRecovery::alreadyInRegisterFile();
+
+    if (m_jit.isConstant(valueSource.nodeIndex()))
+        return ValueRecovery::constant(m_jit.valueOfJSConstant(valueSource.nodeIndex()));
+    
+    Node* nodePtr = &m_jit.graph()[valueSource.nodeIndex()];
+    if (!nodePtr->shouldGenerate()) {
+        // It's legitimately dead. As in, nobody will ever use this node, or operand,
+        // ever. Set it to Undefined to make the GC happy after the OSR.
+        return ValueRecovery::constant(jsUndefined());
+    }
+    
+    GenerationInfo* infoPtr = &m_generationInfo[nodePtr->virtualRegister()];
+    if (!infoPtr->alive() || infoPtr->nodeIndex() != valueSource.nodeIndex()) {
+        // Try to see if there is an alternate node that would contain the value we want.
+        // There are four possibilities:
+        //
+        // ValueToNumber: If the only live version of the value is a ValueToNumber node
+        //    then it means that all remaining uses of the value would have performed a
+        //    ValueToNumber conversion anyway. Thus, we can substitute ValueToNumber.
+        //
+        // ValueToInt32: Likewise, if the only remaining live version of the value is
+        //    ValueToInt32, then we can use it. But if there is both a ValueToInt32
+        //    and a ValueToNumber, then we better go with ValueToNumber because it
+        //    means that some remaining uses would have converted to number while
+        //    others would have converted to Int32.
+        //
+        // UInt32ToNumber: If the only live version of the value is a UInt32ToNumber
+        //    then the only remaining uses are ones that want a properly formed number
+        //    rather than a UInt32 intermediate.
+        //
+        // The reverse of the above: This node could be a UInt32ToNumber, but its
+        //    child -- the raw UInt32 value -- is still alive. This means that the
+        //    only remaining uses of the value would be fine with a UInt32 intermediate.
+        
+        bool found = false;
+        
+        if (nodePtr->op == UInt32ToNumber) {
+            NodeIndex nodeIndex = nodePtr->child1();
+            nodePtr = &m_jit.graph()[nodeIndex];
+            infoPtr = &m_generationInfo[nodePtr->virtualRegister()];
+            if (infoPtr->alive() && infoPtr->nodeIndex() == nodeIndex)
+                found = true;
+        }
+        
+        if (!found) {
+            NodeIndex valueToNumberIndex = NoNode;
+            NodeIndex valueToInt32Index = NoNode;
+            NodeIndex uint32ToNumberIndex = NoNode;
+            
+            for (unsigned virtualRegister = 0; virtualRegister < m_generationInfo.size(); ++virtualRegister) {
+                GenerationInfo& info = m_generationInfo[virtualRegister];
+                if (!info.alive())
+                    continue;
+                if (info.nodeIndex() == NoNode)
+                    continue;
+                Node& node = m_jit.graph()[info.nodeIndex()];
+                if (node.child1Unchecked() != valueSource.nodeIndex())
+                    continue;
+                switch (node.op) {
+                case ValueToNumber:
+                    valueToNumberIndex = info.nodeIndex();
+                    break;
+                case ValueToInt32:
+                    valueToInt32Index = info.nodeIndex();
+                    break;
+                case UInt32ToNumber:
+                    uint32ToNumberIndex = info.nodeIndex();
+                    break;
+                default:
+                    break;
+                }
+            }
+            
+            NodeIndex nodeIndexToUse;
+            if (valueToNumberIndex != NoNode)
+                nodeIndexToUse = valueToNumberIndex;
+            else if (valueToInt32Index != NoNode)
+                nodeIndexToUse = valueToInt32Index;
+            else if (uint32ToNumberIndex != NoNode)
+                nodeIndexToUse = uint32ToNumberIndex;
+            else
+                nodeIndexToUse = NoNode;
+            
+            if (nodeIndexToUse != NoNode) {
+                nodePtr = &m_jit.graph()[nodeIndexToUse];
+                infoPtr = &m_generationInfo[nodePtr->virtualRegister()];
+                ASSERT(infoPtr->alive() && infoPtr->nodeIndex() == nodeIndexToUse);
+                found = true;
+            }
+        }
+        
+        if (!found)
+            return ValueRecovery::constant(jsUndefined());
+    }
+    
+    ASSERT(infoPtr->alive());
+
+    if (infoPtr->registerFormat() != DataFormatNone) {
+        if (infoPtr->registerFormat() == DataFormatDouble)
+            return ValueRecovery::inFPR(infoPtr->fpr());
+        return ValueRecovery::inGPR(infoPtr->gpr(), infoPtr->registerFormat());
+    }
+    if (infoPtr->spillFormat() != DataFormatNone)
+        return ValueRecovery::displacedInRegisterFile(static_cast<VirtualRegister>(nodePtr->virtualRegister()));
+    
+    ASSERT_NOT_REACHED();
+    return ValueRecovery();
+}
+
 } } // namespace JSC::DFG
 
 #endif
diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
index 7da00bd..120deb3 100644
--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
@@ -66,6 +66,7 @@
     GPRReg m_src;
 };
 
+#if !ENABLE(DFG_OSR_EXIT)
 // === SpeculationCheck ===
 //
 // This structure records a bail-out from the speculative path,
@@ -91,7 +92,196 @@
     RegisterInfo m_fprInfo[FPRInfo::numberOfRegisters];
 };
 typedef SegmentedVector<SpeculationCheck, 16> SpeculationCheckVector;
+#endif // !ENABLE(DFG_OSR_EXIT)
 
+class ValueSource {
+public:
+    ValueSource()
+        : m_nodeIndex(NoNode)
+    {
+    }
+    
+    explicit ValueSource(NodeIndex nodeIndex)
+        : m_nodeIndex(nodeIndex)
+    {
+    }
+    
+    bool isSet() const
+    {
+        return m_nodeIndex != NoNode;
+    }
+    
+    NodeIndex nodeIndex() const
+    {
+        ASSERT(isSet());
+        return m_nodeIndex;
+    }
+    
+#ifndef NDEBUG
+    void dump(FILE* out) const;
+#endif
+    
+private:
+    NodeIndex m_nodeIndex;
+};
+    
+// Describes how to recover a given bytecode virtual register at a given
+// code point.
+enum ValueRecoveryTechnique {
+    // It's already in the register file at the right location.
+    AlreadyInRegisterFile,
+    // It's in a register.
+    InGPR,
+    UnboxedInt32InGPR,
+    InFPR,
+    // It's in the register file, but at a different location.
+    DisplacedInRegisterFile,
+    // It's a constant.
+    Constant,
+    // Don't know how to recover it.
+    DontKnow
+};
+
+class ValueRecovery {
+public:
+    ValueRecovery()
+        : m_technique(DontKnow)
+    {
+    }
+    
+    static ValueRecovery alreadyInRegisterFile()
+    {
+        ValueRecovery result;
+        result.m_technique = AlreadyInRegisterFile;
+        return result;
+    }
+    
+    static ValueRecovery inGPR(GPRReg gpr, DataFormat dataFormat)
+    {
+        ASSERT(dataFormat != DataFormatNone);
+        ValueRecovery result;
+        if (dataFormat == DataFormatInteger)
+            result.m_technique = UnboxedInt32InGPR;
+        else
+            result.m_technique = InGPR;
+        result.m_source.gpr = gpr;
+        return result;
+    }
+    
+    static ValueRecovery inFPR(FPRReg fpr)
+    {
+        ValueRecovery result;
+        result.m_technique = InFPR;
+        result.m_source.fpr = fpr;
+        return result;
+    }
+    
+    static ValueRecovery displacedInRegisterFile(VirtualRegister virtualReg)
+    {
+        ValueRecovery result;
+        result.m_technique = DisplacedInRegisterFile;
+        result.m_source.virtualReg = virtualReg;
+        return result;
+    }
+    
+    static ValueRecovery constant(JSValue value)
+    {
+        ValueRecovery result;
+        result.m_technique = Constant;
+        result.m_source.constant = JSValue::encode(value);
+        return result;
+    }
+    
+    ValueRecoveryTechnique technique() const { return m_technique; }
+    
+    GPRReg gpr() const
+    {
+        ASSERT(m_technique == InGPR || m_technique == UnboxedInt32InGPR);
+        return m_source.gpr;
+    }
+    
+    FPRReg fpr() const
+    {
+        ASSERT(m_technique == InFPR);
+        return m_source.fpr;
+    }
+    
+    VirtualRegister virtualRegister() const
+    {
+        ASSERT(m_technique == DisplacedInRegisterFile);
+        return m_source.virtualReg;
+    }
+    
+    JSValue constant() const
+    {
+        ASSERT(m_technique == Constant);
+        return JSValue::decode(m_source.constant);
+    }
+    
+#ifndef NDEBUG
+    void dump(FILE* out) const;
+#endif
+    
+private:
+    ValueRecoveryTechnique m_technique;
+    union {
+        GPRReg gpr;
+        FPRReg fpr;
+        VirtualRegister virtualReg;
+        EncodedJSValue constant;
+    } m_source;
+};
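+// Note that inGPR() keys off the DataFormat: DataFormatInteger yields
+// UnboxedInt32InGPR, signalling that the register holds a raw int32 that the
+// OSR exit code must box before writing it back to the register file.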
+
+#if ENABLE(DFG_OSR_EXIT)
+// === OSRExit ===
+//
+// This structure describes how to exit the speculative path by
+// going into baseline code.
+struct OSRExit {
+    OSRExit(MacroAssembler::Jump, SpeculativeJIT*, unsigned recoveryIndex = 0);
+    
+    MacroAssembler::Jump m_check;
+    NodeIndex m_nodeIndex;
+    unsigned m_bytecodeIndex;
+    
+    unsigned m_recoveryIndex;
+    
+    // Convenient way of iterating over ValueRecoveries while being
+    // generic over argument versus variable.
+    int numberOfRecoveries() const { return m_arguments.size() + m_variables.size(); }
+    const ValueRecovery& valueRecovery(int index) const
+    {
+        if (index < (int)m_arguments.size())
+            return m_arguments[index];
+        return m_variables[index - m_arguments.size()];
+    }
+    bool isArgument(int index) const { return index < (int)m_arguments.size(); }
+    bool isVariable(int index) const { return !isArgument(index); }
+    int argumentForIndex(int index) const
+    {
+        return index;
+    }
+    int variableForIndex(int index) const
+    {
+        return index - m_arguments.size();
+    }
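+    // For example, with two arguments, operandForIndex maps indices 0 and 1
+    // to the arguments' negative operands (below the call frame header) and
+    // indices 2, 3, ... to local variables 0, 1, ...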
+    int operandForIndex(int index) const
+    {
+        if (index < (int)m_arguments.size())
+            return index - m_arguments.size() - RegisterFile::CallFrameHeaderSize;
+        return index - m_arguments.size();
+    }
+    
+#ifndef NDEBUG
+    void dump(FILE* out) const;
+#endif
+    
+    Vector<ValueRecovery, 0> m_arguments;
+    Vector<ValueRecovery, 0> m_variables;
+    int m_lastSetOperand;
+};
+typedef SegmentedVector<OSRExit, 16> OSRExitVector;
+#endif // ENABLE(DFG_OSR_EXIT)
 
 // === SpeculativeJIT ===
 //
@@ -105,21 +295,25 @@
 // only speculatively been asserted) through the dataflow.
 class SpeculativeJIT : public JITCodeGenerator {
     friend struct SpeculationCheck;
+    friend struct OSRExit;
 public:
-    SpeculativeJIT(JITCompiler& jit)
-        : JITCodeGenerator(jit, true)
-        , m_compileOkay(true)
-    {
-    }
+    SpeculativeJIT(JITCompiler&);
 
     bool compile();
 
     // Retrieve the list of bail-outs from the speculative path,
     // and additional recovery information.
+#if !ENABLE(DFG_OSR_EXIT)
     SpeculationCheckVector& speculationChecks()
     {
         return m_speculationChecks;
     }
+#else
+    OSRExitVector& osrExits()
+    {
+        return m_osrExits;
+    }
+#endif
     SpeculationRecovery* speculationRecovery(size_t index)
     {
         // SpeculationCheck::m_recoveryIndex is offset by 1,
@@ -139,6 +333,7 @@
     friend class JITCodeGenerator;
     
     void compile(Node&);
+    void compileMovHint(Node&);
     void compile(BasicBlock&);
 
     void checkArgumentTypes();
@@ -203,7 +398,11 @@
     {
         if (!m_compileOkay)
             return;
+#if !ENABLE(DFG_OSR_EXIT)
         m_speculationChecks.append(SpeculationCheck(jumpToFail, this));
+#else
+        m_osrExits.append(OSRExit(jumpToFail, this));
+#endif
     }
     // Add a speculation check with additional recovery.
     void speculationCheck(MacroAssembler::Jump jumpToFail, const SpeculationRecovery& recovery)
@@ -211,7 +410,11 @@
         if (!m_compileOkay)
             return;
         m_speculationRecoveryList.append(recovery);
+#if !ENABLE(DFG_OSR_EXIT)
         m_speculationChecks.append(SpeculationCheck(jumpToFail, this, m_speculationRecoveryList.size()));
+#else
+        m_osrExits.append(OSRExit(jumpToFail, this, m_speculationRecoveryList.size()));
+#endif
     }
 
     // Called when we statically determine that a speculation will fail.
@@ -234,18 +437,65 @@
 
     template<bool strict>
     GPRReg fillSpeculateIntInternal(NodeIndex, DataFormat& returnFormat);
-
+    
     // It is possible, during speculative generation, to reach a situation in which we
     // can statically determine a speculation will fail (for example, when two nodes
     // will make conflicting speculations about the same operand). In such cases this
     // flag is cleared, indicating no further code generation should take place.
     bool m_compileOkay;
+#if !ENABLE(DFG_OSR_EXIT)
     // This vector tracks bail-outs from the speculative path to the non-speculative one.
     SpeculationCheckVector m_speculationChecks;
+#else
+    // This vector tracks bail-outs from the speculative path to the old JIT.
+    OSRExitVector m_osrExits;
+#endif
     // Some bail-outs need to record additional information recording specific recovery
     // to be performed (for example, on detected overflow from an add, we may need to
     // reverse the addition if an operand is being overwritten).
     Vector<SpeculationRecovery, 16> m_speculationRecoveryList;
+    
+    // Tracking for which nodes are currently holding the values of arguments and bytecode
+    // operand-indexed variables.
+
+    ValueSource valueSourceForOperand(int operand)
+    {
+        return valueSourceReferenceForOperand(operand);
+    }
+    
+    void setNodeIndexForOperand(NodeIndex nodeIndex, int operand)
+    {
+        valueSourceReferenceForOperand(operand) = ValueSource(nodeIndex);
+    }
+    
+    // Call this with care, since it both returns a reference into an array
+    // and potentially resizes the array. So it would not be right to call this
+    // twice and then perform operations on both references, since the one from
+    // the first call may no longer be valid.
+    ValueSource& valueSourceReferenceForOperand(int operand)
+    {
+        if (operandIsArgument(operand)) {
+            int argument = operand + m_arguments.size() + RegisterFile::CallFrameHeaderSize;
+            return m_arguments[argument];
+        }
+        
+        if ((unsigned)operand >= m_variables.size())
+            m_variables.resize(operand + 1);
+        
+        return m_variables[operand];
+    }
+    
+    Vector<ValueSource, 0> m_arguments;
+    Vector<ValueSource, 0> m_variables;
+    int m_lastSetOperand;
+    uint32_t m_bytecodeIndexForOSR;
+    
+    ValueRecovery computeValueRecoveryFor(const ValueSource&);
+
+    ValueRecovery computeValueRecoveryFor(int operand)
+    {
+        return computeValueRecoveryFor(valueSourceForOperand(operand));
+    }
 };
 
 
@@ -465,11 +715,17 @@
     GPRReg m_gprOrInvalid;
 };
 
-
 // === SpeculationCheckIndexIterator ===
 //
 // This class is used by the non-speculative JIT to check which
 // nodes require entry points from the speculative path.
+#if ENABLE(DFG_OSR_EXIT)
+// This becomes a stub if OSR is enabled.
+class SpeculationCheckIndexIterator {
+public:
+    SpeculationCheckIndexIterator() { }
+};
+#else
 class SpeculationCheckIndexIterator {
 public:
     SpeculationCheckIndexIterator(SpeculationCheckVector& speculationChecks)
@@ -495,7 +751,17 @@
     SpeculationCheckVector::Iterator m_iter;
     SpeculationCheckVector::Iterator m_end;
 };
+#endif
 
+inline SpeculativeJIT::SpeculativeJIT(JITCompiler& jit)
+    : JITCodeGenerator(jit, true)
+    , m_compileOkay(true)
+    , m_arguments(jit.codeBlock()->m_numParameters)
+    , m_variables(jit.codeBlock()->m_numVars)
+    , m_lastSetOperand(std::numeric_limits<int>::max())
+    , m_bytecodeIndexForOSR(std::numeric_limits<uint32_t>::max())
+{
+}
 
 } } // namespace JSC::DFG
 
diff --git a/Source/JavaScriptCore/jit/CompactJITCodeMap.h b/Source/JavaScriptCore/jit/CompactJITCodeMap.h
new file mode 100644
index 0000000..8020701
--- /dev/null
+++ b/Source/JavaScriptCore/jit/CompactJITCodeMap.h
@@ -0,0 +1,254 @@
+/*
+ * Copyright (C) 2011 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1.  Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ * 2.  Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in the
+ *     documentation and/or other materials provided with the distribution.
+ * 3.  Neither the name of Apple Computer, Inc. ("Apple") nor the names of
+ *     its contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE AND ITS CONTRIBUTORS "AS IS" AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL APPLE OR ITS CONTRIBUTORS BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef CompactJITCodeMap_h
+#define CompactJITCodeMap_h
+
+#include <wtf/Assertions.h>
+#include <wtf/FastAllocBase.h>
+#include <wtf/FastMalloc.h>
+#include <wtf/OwnPtr.h>
+#include <wtf/PassOwnPtr.h>
+#include <wtf/UnusedParam.h>
+#include <wtf/Vector.h>
+
+namespace JSC {
+
+// Gives you a compressed map between bytecode indices and machine code
+// entry points. The compression simply tries to use either 1, 2, or 4 bytes for
+// any given offset. The largest offset that can be stored is 2^30.
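+//
+// The scheme (see encodeNumber/decodeNumber): a head byte with the high bit
+// clear holds a value in 0..127 directly; high bit set, bit 6 clear prefixes
+// a two-byte value (up to 16383); both bits set prefixes a four-byte value.
+// For instance, a delta of 300 (0x12C) encodes as the two bytes 0x81 0x2C.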
+
+// Example use:
+//
+// CompactJITCodeMap::Encoder encoder;
+// encoder.append(a, b);
+// encoder.append(c, d); // preconditions: c >= a, d >= b
+// OwnPtr<CompactJITCodeMap> map = encoder.finish();
+//
+// At some later time:
+//
+// Vector<BytecodeAndMachineOffset> decoded;
+// map->decode(decoded);
+
+struct BytecodeAndMachineOffset {
+    BytecodeAndMachineOffset() { }
+    
+    BytecodeAndMachineOffset(unsigned bytecodeIndex, unsigned machineCodeOffset)
+        : m_bytecodeIndex(bytecodeIndex)
+        , m_machineCodeOffset(machineCodeOffset)
+    {
+    }
+    
+    unsigned m_bytecodeIndex;
+    unsigned m_machineCodeOffset;
+    
+    static inline unsigned getBytecodeIndex(BytecodeAndMachineOffset* mapping)
+    {
+        return mapping->m_bytecodeIndex;
+    }
+    
+    static inline unsigned getMachineCodeOffset(BytecodeAndMachineOffset* mapping)
+    {
+        return mapping->m_machineCodeOffset;
+    }
+};
+
+class CompactJITCodeMap {
+    WTF_MAKE_FAST_ALLOCATED;
+public:
+    ~CompactJITCodeMap()
+    {
+        if (m_buffer)
+            fastFree(m_buffer);
+    }
+    
+    void decode(Vector<BytecodeAndMachineOffset>& result) const
+    {
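+        // Accumulate the deltas written by Encoder::append back into absolute
+        // bytecode indices and machine code offsets.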
+        unsigned previousBytecodeIndex = 0;
+        unsigned previousMachineCodeOffset = 0;
+        
+        result.resize(m_numberOfEntries);
+        unsigned j = 0;
+        for (unsigned i = 0; i < m_numberOfEntries; ++i) {
+            previousBytecodeIndex += decodeNumber(j);
+            previousMachineCodeOffset += decodeNumber(j);
+            result[i].m_bytecodeIndex = previousBytecodeIndex;
+            result[i].m_machineCodeOffset = previousMachineCodeOffset;
+        }
+        ASSERT(j == m_size);
+    }
+    
+private:
+    CompactJITCodeMap(uint8_t* buffer, unsigned size, unsigned numberOfEntries)
+        : m_buffer(buffer)
+#if !ASSERT_DISABLED
+        , m_size(size)
+#endif
+        , m_numberOfEntries(numberOfEntries)
+    {
+        UNUSED_PARAM(size);
+    }
+    
+    uint8_t at(unsigned index) const
+    {
+        ASSERT(index < m_size);
+        return m_buffer[index];
+    }
+    
+    unsigned decodeNumber(unsigned& index) const
+    {
+        uint8_t headValue = at(index++);
+        if (!(headValue & 128))
+            return headValue;
+        if (!(headValue & 64))
+            return (static_cast<unsigned>(headValue & ~128) << 8) | at(index++);
+        unsigned second = at(index++);
+        unsigned third  = at(index++);
+        unsigned fourth = at(index++);
+        return (static_cast<unsigned>(headValue & ~(128 + 64)) << 24) | (second << 16) | (third << 8) | fourth;
+    }
+    
+    uint8_t* m_buffer;
+#if !ASSERT_DISABLED
+    unsigned m_size;
+#endif
+    unsigned m_numberOfEntries;
+    
+public:
+    class Encoder {
+        WTF_MAKE_NONCOPYABLE(Encoder);
+    public:
+        Encoder();
+        ~Encoder();
+        
+        void ensureCapacityFor(unsigned numberOfEntriesToAdd);
+        void append(unsigned bytecodeIndex, unsigned machineCodeOffset);
+        PassOwnPtr<CompactJITCodeMap> finish();
+        
+    private:
+        void appendByte(uint8_t value);
+        void encodeNumber(uint32_t value);
+    
+        uint8_t* m_buffer;
+        unsigned m_size;
+        unsigned m_capacity;
+        unsigned m_numberOfEntries;
+        
+        unsigned m_previousBytecodeIndex;
+        unsigned m_previousMachineCodeOffset;
+    };
+
+private:
+    friend class Encoder;
+};
+
+inline CompactJITCodeMap::Encoder::Encoder()
+    : m_buffer(0)
+    , m_size(0)
+    , m_capacity(0)
+    , m_numberOfEntries(0)
+    , m_previousBytecodeIndex(0)
+    , m_previousMachineCodeOffset(0)
+{
+}
+
+inline CompactJITCodeMap::Encoder::~Encoder()
+{
+    if (m_buffer)
+        fastFree(m_buffer);
+}
+        
+inline void CompactJITCodeMap::Encoder::append(unsigned bytecodeIndex, unsigned machineCodeOffset)
+{
+    ASSERT(bytecodeIndex >= m_previousBytecodeIndex);
+    ASSERT(machineCodeOffset >= m_previousMachineCodeOffset);
+    ensureCapacityFor(1);
+    encodeNumber(bytecodeIndex - m_previousBytecodeIndex);
+    encodeNumber(machineCodeOffset - m_previousMachineCodeOffset);
+    m_previousBytecodeIndex = bytecodeIndex;
+    m_previousMachineCodeOffset = machineCodeOffset;
+    m_numberOfEntries++;
+}
+
+inline PassOwnPtr<CompactJITCodeMap> CompactJITCodeMap::Encoder::finish()
+{
+    m_capacity = m_size;
+    m_buffer = static_cast<uint8_t*>(fastRealloc(m_buffer, m_capacity));
+    OwnPtr<CompactJITCodeMap> result = adoptPtr(new CompactJITCodeMap(m_buffer, m_size, m_numberOfEntries));
+    m_buffer = 0;
+    m_size = 0;
+    m_capacity = 0;
+    m_numberOfEntries = 0;
+    m_previousBytecodeIndex = 0;
+    m_previousMachineCodeOffset = 0;
+    return result.release();
+}
+        
+inline void CompactJITCodeMap::Encoder::appendByte(uint8_t value)
+{
+    ASSERT(m_size + 1 <= m_capacity);
+    m_buffer[m_size++] = value;
+}
+    
+inline void CompactJITCodeMap::Encoder::encodeNumber(uint32_t value)
+{
+    ASSERT(m_size + 4 <= m_capacity);
+    ASSERT(value < (1 << 30));
+    if (value <= 127) {
+        uint8_t headValue = static_cast<uint8_t>(value);
+        ASSERT(!(headValue & 128));
+        appendByte(headValue);
+    } else if (value <= 16383) {
+        uint8_t headValue = static_cast<uint8_t>(value >> 8);
+        ASSERT(!(headValue & 128));
+        ASSERT(!(headValue & 64));
+        appendByte(headValue | 128);
+        appendByte(static_cast<uint8_t>(value));
+    } else {
+        uint8_t headValue = static_cast<uint8_t>(value >> 24);
+        ASSERT(!(headValue & 128));
+        ASSERT(!(headValue & 64));
+        appendByte(headValue | 128 | 64);
+        appendByte(static_cast<uint8_t>(value >> 16));
+        appendByte(static_cast<uint8_t>(value >> 8));
+        appendByte(static_cast<uint8_t>(value));
+    }
+}
+
+inline void CompactJITCodeMap::Encoder::ensureCapacityFor(unsigned numberOfEntriesToAdd)
+{
+    unsigned capacityNeeded = m_size + numberOfEntriesToAdd * 2 * 4;
+    if (capacityNeeded > m_capacity) {
+        m_capacity = capacityNeeded * 2;
+        m_buffer = static_cast<uint8_t*>(fastRealloc(m_buffer, m_capacity));
+    }
+}
+
+} // namespace JSC
+
+#endif // CompactJITCodeMap_h
diff --git a/Source/JavaScriptCore/jit/JIT.cpp b/Source/JavaScriptCore/jit/JIT.cpp
index 86dde0c..8bc289b 100644
--- a/Source/JavaScriptCore/jit/JIT.cpp
+++ b/Source/JavaScriptCore/jit/JIT.cpp
@@ -209,6 +209,11 @@
 
         m_labels[m_bytecodeOffset] = label();
 
+#if ENABLE(TIERED_COMPILATION)
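+        // Pair this bytecode offset with its machine code offset from
+        // m_startOfCode; the compressed map is attached to the CodeBlock at
+        // the end of compilation via setJITCodeMap().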
+        if (m_canBeOptimized)
+            m_jitCodeMapEncoder.append(m_bytecodeOffset, differenceBetween(m_startOfCode, label()));
+#endif
+
         switch (m_interpreter->getOpcodeID(currentInstruction->u.opcode)) {
         DEFINE_BINARY_OP(op_del_by_val)
         DEFINE_BINARY_OP(op_in)
@@ -373,7 +378,6 @@
 #endif
 }
 
-
 void JIT::privateCompileLinkPass()
 {
     unsigned jmpTableCount = m_jmpTable.size();
@@ -502,6 +506,8 @@
 {
 #if ENABLE(TIERED_COMPILATION)
     m_canBeOptimized = m_codeBlock->canCompileWithDFG();
+    if (m_canBeOptimized)
+        m_startOfCode = label();
 #endif
     
     // Just add a little bit of randomness to the codegen
@@ -642,6 +648,11 @@
         info.callReturnLocation = m_codeBlock->structureStubInfo(m_methodCallCompilationInfo[i].propertyAccessIndex).callReturnLocation;
     }
 
+#if ENABLE(TIERED_COMPILATION)
+    if (m_canBeOptimized)
+        m_codeBlock->setJITCodeMap(m_jitCodeMapEncoder.finish());
+#endif
+
     if (m_codeBlock->codeType() == FunctionCode && functionEntryArityCheck)
         *functionEntryArityCheck = patchBuffer.locationOf(arityCheck);
     
diff --git a/Source/JavaScriptCore/jit/JIT.h b/Source/JavaScriptCore/jit/JIT.h
index c861576..0245661 100644
--- a/Source/JavaScriptCore/jit/JIT.h
+++ b/Source/JavaScriptCore/jit/JIT.h
@@ -40,6 +40,7 @@
 #define ASSERT_JIT_OFFSET(actual, expected) ASSERT_WITH_MESSAGE(actual == expected, "JIT Offset \"%s\" should be %d, not %d.\n", #expected, static_cast<int>(expected), static_cast<int>(actual));
 
 #include "CodeBlock.h"
+#include "CompactJITCodeMap.h"
 #include "Interpreter.h"
 #include "JSInterfaceJIT.h"
 #include "Opcode.h"
@@ -1054,6 +1055,8 @@
         
 #if ENABLE(TIERED_COMPILATION)
         bool m_canBeOptimized;
+        Label m_startOfCode;
+        CompactJITCodeMap::Encoder m_jitCodeMapEncoder;
 #endif
     } JIT_CLASS_ALIGNMENT;
 
diff --git a/Source/JavaScriptCore/runtime/JSGlobalData.cpp b/Source/JavaScriptCore/runtime/JSGlobalData.cpp
index 2bbc6e3..b603d82 100644
--- a/Source/JavaScriptCore/runtime/JSGlobalData.cpp
+++ b/Source/JavaScriptCore/runtime/JSGlobalData.cpp
@@ -191,6 +191,9 @@
     , parser(new Parser)
     , interpreter(0)
     , heap(this, heapSize)
+#if ENABLE(TIERED_COMPILATION)
+    , sizeOfLastOSRScratchBuffer(0)
+#endif
     , dynamicGlobalObject(0)
     , cachedUTCOffset(std::numeric_limits<double>::quiet_NaN())
     , maxReentryDepth(threadStackType == ThreadStackTypeSmall ? MaxSmallThreadReentryDepth : MaxLargeThreadReentryDepth)
@@ -356,6 +359,11 @@
 #if ENABLE(REGEXP_TRACING)
     delete m_rtTraceList;
 #endif
+
+#if ENABLE(TIERED_COMPILATION)
+    for (unsigned i = 0; i < osrScratchBuffers.size(); ++i)
+        fastFree(osrScratchBuffers[i]);
+#endif
 }
 
 PassRefPtr<JSGlobalData> JSGlobalData::createContextGroup(ThreadStackType type, HeapSize heapSize)
diff --git a/Source/JavaScriptCore/runtime/JSGlobalData.h b/Source/JavaScriptCore/runtime/JSGlobalData.h
index 321bc96..5914a33 100644
--- a/Source/JavaScriptCore/runtime/JSGlobalData.h
+++ b/Source/JavaScriptCore/runtime/JSGlobalData.h
@@ -234,6 +234,28 @@
 #ifndef NDEBUG
         int64_t debugDataBuffer[64];
 #endif
+#if ENABLE(TIERED_COMPILATION)
+        Vector<void*> osrScratchBuffers;
+        size_t sizeOfLastOSRScratchBuffer;
+        
+        void* osrScratchBufferForSize(size_t size)
+        {
+            if (!size)
+                return 0;
+            
+            if (size > sizeOfLastOSRScratchBuffer) {
+                // Protect against a N^2 memory usage pathology by ensuring
+                // that at worst, we get a geometric series, meaning that the
+                // total memory usage is somewhere around
+                // max(scratch buffer size) * 4.
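+                // For example, requests of 100, then 150, then 250 bytes
+                // allocate buffers of 200 and 500 bytes; the 150-byte request
+                // is served by the existing 200-byte buffer.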
+                sizeOfLastOSRScratchBuffer = size * 2;
+                
+                osrScratchBuffers.append(fastMalloc(sizeOfLastOSRScratchBuffer));
+            }
+            
+            return osrScratchBuffers.last();
+        }
+#endif
 #endif
 
         HashMap<OpaqueJSClass*, OpaqueJSClassContextData*> opaqueJSClassData;
diff --git a/Source/JavaScriptCore/runtime/JSValue.cpp b/Source/JavaScriptCore/runtime/JSValue.cpp
index 46a55de..2340a03 100644
--- a/Source/JavaScriptCore/runtime/JSValue.cpp
+++ b/Source/JavaScriptCore/runtime/JSValue.cpp
@@ -119,7 +119,7 @@
 #ifndef NDEBUG
 char* JSValue::description()
 {
-    static const size_t size = 32;
+    static const size_t size = 64;
     static char description[size];
 
     if (!*this)