fourthTier: get rid of op_call_put_result
https://bugs.webkit.org/show_bug.cgi?id=117047
Reviewed by Gavin Barraclough.
Work in progress. This still causes roughly 20 tests to crash.
op_call_put_result is an oddball. It takes the return value of the preceding
call instruction, which is set aside in regT0/regT1, and stores it into a
stack slot. This is weird because it creates an implicit contract with the
preceding bytecode instruction, and it's even weirder because it means that
jumping directly to this instruction makes no sense; for example, OSR exit
from the preceding call instruction must take care to jump over the
op_call_put_result.
So this patch gets rid of op_call_put_result:
- In bytecode, all calls return a value and we always allocate a temporary for
that value even if it isn't used.
- The LLInt does the return value saving as part of dispatchAfterCall().
- The JIT and DFG do the return value saving as part of normal code generation.
The DFG already did the right thing.
- DFG->JIT OSR exit in the case of inlining will make the return PCs point at
the CallLinkInfo::callReturnLocation, rather than at the machine PC associated
with the op_call_put_result instruction.
- Tons of code gets removed. The DFG had to track, in a bunch of places,
whether a call had a return value, and it had to track the fact that we would
exit to just after the op_call_put_result. It was a mess. That mess is now gone.
* bytecode/CallLinkStatus.cpp:
(JSC::CallLinkStatus::computeFromLLInt):
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::printCallOp):
(JSC::CodeBlock::dumpArrayProfiling):
(JSC::CodeBlock::dumpBytecode):
(JSC::CodeBlock::CodeBlock):
* bytecode/CodeBlock.h:
* bytecode/Opcode.h:
(JSC):
(JSC::padOpcodeName):
* bytecompiler/BytecodeGenerator.cpp:
(JSC::BytecodeGenerator::emitCall):
(JSC::BytecodeGenerator::emitCallVarargs):
(JSC::BytecodeGenerator::emitConstruct):
* bytecompiler/NodesCodegen.cpp:
(JSC::NewExprNode::emitBytecode):
(JSC::FunctionCallValueNode::emitBytecode):
(JSC::FunctionCallResolveNode::emitBytecode):
(JSC::FunctionCallBracketNode::emitBytecode):
(JSC::FunctionCallDotNode::emitBytecode):
(JSC::CallFunctionCallDotNode::emitBytecode):
(JSC::ApplyFunctionCallDotNode::emitBytecode):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::ByteCodeParser):
(ByteCodeParser):
(JSC::DFG::ByteCodeParser::currentCodeOrigin):
(JSC::DFG::ByteCodeParser::addCall):
(JSC::DFG::ByteCodeParser::getPredictionWithoutOSRExit):
(JSC::DFG::ByteCodeParser::getPrediction):
(JSC::DFG::ByteCodeParser::handleCall):
(JSC::DFG::ByteCodeParser::handleInlining):
(JSC::DFG::ByteCodeParser::handleMinMax):
(JSC::DFG::ByteCodeParser::handleIntrinsic):
(JSC::DFG::ByteCodeParser::handleConstantInternalFunction):
(JSC::DFG::ByteCodeParser::parseBlock):
* dfg/DFGCapabilities.cpp:
(JSC::DFG::capabilityLevel):
* dfg/DFGOSRExitCompiler.cpp:
* dfg/DFGOSRExitCompilerCommon.cpp:
(JSC::DFG::reifyInlinedCallFrames):
* jit/JIT.cpp:
(JSC::JIT::privateCompileMainPass):
* jit/JIT.h:
(JIT):
* jit/JITCall.cpp:
(JSC::JIT::emitPutCallResult):
(JSC::JIT::compileLoadVarargs):
(JSC::JIT::compileCallEval):
(JSC::JIT::compileCallEvalSlowCase):
(JSC::JIT::compileOpCall):
(JSC::JIT::compileOpCallSlowCase):
(JSC::JIT::emit_op_call):
(JSC):
(JSC::JIT::emit_op_call_eval):
(JSC::JIT::emit_op_call_varargs):
(JSC::JIT::emit_op_construct):
(JSC::JIT::emitSlow_op_call):
(JSC::JIT::emitSlow_op_call_eval):
(JSC::JIT::emitSlow_op_call_varargs):
(JSC::JIT::emitSlow_op_construct):
* jit/JITCall32_64.cpp:
(JSC::JIT::emitPutCallResult):
(JSC::JIT::compileLoadVarargs):
(JSC::JIT::compileCallEval):
(JSC::JIT::compileCallEvalSlowCase):
(JSC::JIT::compileOpCall):
(JSC::JIT::compileOpCallSlowCase):
* jit/JITOpcodes.cpp:
(JSC):
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::genericCall):
(JSC::LLInt::LLINT_SLOW_PATH_DECL):
* llint/LowLevelInterpreter.cpp:
(JSC::CLoop::execute):
* llint/LowLevelInterpreter32_64.asm:
* llint/LowLevelInterpreter64.asm:
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@153200 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index 619ec86..9047c07 100644
--- a/Source/JavaScriptCore/ChangeLog
+++ b/Source/JavaScriptCore/ChangeLog
@@ -1,3 +1,115 @@
+2013-05-31 Filip Pizlo <fpizlo@apple.com>
+
+ fourthTier: get rid of op_call_put_result
+ https://bugs.webkit.org/show_bug.cgi?id=117047
+
+ Reviewed by Gavin Barraclough.
+
+ Work in progress. This still causes roughly 20 tests to crash.
+
+ op_call_put_result is an oddball. It takes the return value of the preceding
+ call instruction, which is set aside in regT0/regT1, and stores it into a
+ stack slot. This is weird because it creates an implicit contract with the
+ preceding bytecode instruction, and it's even weirder because it means that
+ jumping directly to this instruction makes no sense; for example, OSR exit
+ from the preceding call instruction must take care to jump over the
+ op_call_put_result.
+
+ So this patch gets rid of op_call_put_result:
+
+ - In bytecode, all calls return a value and we always allocate a temporary for
+ that value even if it isn't used.
+
+ - The LLInt does the return value saving as part of dispatchAfterCall().
+
+ - The JIT and DFG do the return value saving as part of normal code generation.
+ The DFG already did the right thing.
+
+ - DFG->JIT OSR exit in the case of inlining will make the return PCs point at
+ the CallLinkInfo::callReturnLocation, rather than at the machine PC associated
+ with the op_call_put_result instruction.
+
+ - Tons of code gets removed. The DFG had to track, in a bunch of places,
+ whether a call had a return value, and it had to track the fact that we would
+ exit to just after the op_call_put_result. It was a mess. That mess is now gone.
+
+ * bytecode/CallLinkStatus.cpp:
+ (JSC::CallLinkStatus::computeFromLLInt):
+ * bytecode/CodeBlock.cpp:
+ (JSC::CodeBlock::printCallOp):
+ (JSC::CodeBlock::dumpArrayProfiling):
+ (JSC::CodeBlock::dumpBytecode):
+ (JSC::CodeBlock::CodeBlock):
+ * bytecode/CodeBlock.h:
+ * bytecode/Opcode.h:
+ (JSC):
+ (JSC::padOpcodeName):
+ * bytecompiler/BytecodeGenerator.cpp:
+ (JSC::BytecodeGenerator::emitCall):
+ (JSC::BytecodeGenerator::emitCallVarargs):
+ (JSC::BytecodeGenerator::emitConstruct):
+ * bytecompiler/NodesCodegen.cpp:
+ (JSC::NewExprNode::emitBytecode):
+ (JSC::FunctionCallValueNode::emitBytecode):
+ (JSC::FunctionCallResolveNode::emitBytecode):
+ (JSC::FunctionCallBracketNode::emitBytecode):
+ (JSC::FunctionCallDotNode::emitBytecode):
+ (JSC::CallFunctionCallDotNode::emitBytecode):
+ (JSC::ApplyFunctionCallDotNode::emitBytecode):
+ * dfg/DFGByteCodeParser.cpp:
+ (JSC::DFG::ByteCodeParser::ByteCodeParser):
+ (ByteCodeParser):
+ (JSC::DFG::ByteCodeParser::currentCodeOrigin):
+ (JSC::DFG::ByteCodeParser::addCall):
+ (JSC::DFG::ByteCodeParser::getPredictionWithoutOSRExit):
+ (JSC::DFG::ByteCodeParser::getPrediction):
+ (JSC::DFG::ByteCodeParser::handleCall):
+ (JSC::DFG::ByteCodeParser::handleInlining):
+ (JSC::DFG::ByteCodeParser::handleMinMax):
+ (JSC::DFG::ByteCodeParser::handleIntrinsic):
+ (JSC::DFG::ByteCodeParser::handleConstantInternalFunction):
+ (JSC::DFG::ByteCodeParser::parseBlock):
+ * dfg/DFGCapabilities.cpp:
+ (JSC::DFG::capabilityLevel):
+ * dfg/DFGOSRExitCompiler.cpp:
+ * dfg/DFGOSRExitCompilerCommon.cpp:
+ (JSC::DFG::reifyInlinedCallFrames):
+ * jit/JIT.cpp:
+ (JSC::JIT::privateCompileMainPass):
+ * jit/JIT.h:
+ (JIT):
+ * jit/JITCall.cpp:
+ (JSC::JIT::emitPutCallResult):
+ (JSC::JIT::compileLoadVarargs):
+ (JSC::JIT::compileCallEval):
+ (JSC::JIT::compileCallEvalSlowCase):
+ (JSC::JIT::compileOpCall):
+ (JSC::JIT::compileOpCallSlowCase):
+ (JSC::JIT::emit_op_call):
+ (JSC):
+ (JSC::JIT::emit_op_call_eval):
+ (JSC::JIT::emit_op_call_varargs):
+ (JSC::JIT::emit_op_construct):
+ (JSC::JIT::emitSlow_op_call):
+ (JSC::JIT::emitSlow_op_call_eval):
+ (JSC::JIT::emitSlow_op_call_varargs):
+ (JSC::JIT::emitSlow_op_construct):
+ * jit/JITCall32_64.cpp:
+ (JSC::JIT::emitPutCallResult):
+ (JSC::JIT::compileLoadVarargs):
+ (JSC::JIT::compileCallEval):
+ (JSC::JIT::compileCallEvalSlowCase):
+ (JSC::JIT::compileOpCall):
+ (JSC::JIT::compileOpCallSlowCase):
+ * jit/JITOpcodes.cpp:
+ (JSC):
+ * llint/LLIntSlowPaths.cpp:
+ (JSC::LLInt::genericCall):
+ (JSC::LLInt::LLINT_SLOW_PATH_DECL):
+ * llint/LowLevelInterpreter.cpp:
+ (JSC::CLoop::execute):
+ * llint/LowLevelInterpreter32_64.asm:
+ * llint/LowLevelInterpreter64.asm:
+
2013-05-30 Filip Pizlo <fpizlo@apple.com>
fourthTier: LLInt shouldn't store an offset call PC during op_call-like calls
diff --git a/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp b/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
index a04bee7..4c45199 100644
--- a/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
+++ b/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
@@ -87,7 +87,7 @@
UNUSED_PARAM(bytecodeIndex);
#if ENABLE(LLINT)
Instruction* instruction = profiledBlock->instructions().begin() + bytecodeIndex;
- LLIntCallLinkInfo* callLinkInfo = instruction[4].u.callLinkInfo;
+ LLIntCallLinkInfo* callLinkInfo = instruction[5].u.callLinkInfo;
return CallLinkStatus(callLinkInfo->lastSeenCallee.get());
#else
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.cpp b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
index 351b48d..4b42b4a 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
@@ -417,12 +417,13 @@
#endif
}
-void CodeBlock::printCallOp(PrintStream& out, ExecState*, int location, const Instruction*& it, const char* op, CacheDumpMode cacheDumpMode)
+void CodeBlock::printCallOp(PrintStream& out, ExecState*, int location, const Instruction*& it, const char* op, CacheDumpMode cacheDumpMode, bool& hasPrintedProfiling)
{
+ int dst = (++it)->u.operand;
int func = (++it)->u.operand;
int argCount = (++it)->u.operand;
int registerOffset = (++it)->u.operand;
- out.printf("[%4d] %s\t %s, %d, %d", location, op, registerName(func).data(), argCount, registerOffset);
+ out.printf("[%4d] %s %s, %s, %d, %d", location, op, registerName(dst).data(), registerName(func).data(), argCount, registerOffset);
if (cacheDumpMode == DumpCaches) {
#if ENABLE(LLINT)
LLIntCallLinkInfo* callLinkInfo = it[1].u.callLinkInfo;
@@ -442,7 +443,9 @@
#endif
out.print(" status(", CallLinkStatus::computeFor(this, location), ")");
}
- it += 2;
+ ++it;
+ dumpArrayProfiling(out, it, hasPrintedProfiling);
+ dumpValueProfiling(out, it, hasPrintedProfiling);
}
void CodeBlock::printPutByIdOp(PrintStream& out, ExecState*, int location, const Instruction*& it, const char* op)
@@ -662,6 +665,8 @@
++it;
#if ENABLE(VALUE_PROFILER)
+ if (!it->u.arrayProfile)
+ return;
CString description = it->u.arrayProfile->briefDescription(locker, this);
if (!description.length())
return;
@@ -1275,20 +1280,22 @@
break;
}
case op_call: {
- printCallOp(out, exec, location, it, "call", DumpCaches);
+ printCallOp(out, exec, location, it, "call", DumpCaches, hasPrintedProfiling);
break;
}
case op_call_eval: {
- printCallOp(out, exec, location, it, "call_eval", DontDumpCaches);
+ printCallOp(out, exec, location, it, "call_eval", DontDumpCaches, hasPrintedProfiling);
break;
}
case op_call_varargs: {
+ int result = (++it)->u.operand;
int callee = (++it)->u.operand;
int thisValue = (++it)->u.operand;
int arguments = (++it)->u.operand;
int firstFreeRegister = (++it)->u.operand;
++it;
- out.printf("[%4d] call_varargs\t %s, %s, %s, %d", location, registerName(callee).data(), registerName(thisValue).data(), registerName(arguments).data(), firstFreeRegister);
+ out.printf("[%4d] call_varargs\t %s, %s, %s, %s, %d", location, registerName(result).data(), registerName(callee).data(), registerName(thisValue).data(), registerName(arguments).data(), firstFreeRegister);
+ dumpValueProfiling(out, it, hasPrintedProfiling);
break;
}
case op_tear_off_activation: {
@@ -1307,12 +1314,6 @@
out.printf("[%4d] ret\t\t %s", location, registerName(r0).data());
break;
}
- case op_call_put_result: {
- int r0 = (++it)->u.operand;
- out.printf("[%4d] call_put_result\t\t %s", location, registerName(r0).data());
- dumpValueProfiling(out, it, hasPrintedProfiling);
- break;
- }
case op_ret_object_or_this: {
int r0 = (++it)->u.operand;
int r1 = (++it)->u.operand;
@@ -1320,7 +1321,7 @@
break;
}
case op_construct: {
- printCallOp(out, exec, location, it, "construct", DumpCaches);
+ printCallOp(out, exec, location, it, "construct", DumpCaches, hasPrintedProfiling);
break;
}
case op_strcat: {
@@ -1783,7 +1784,7 @@
}
case op_to_this:
case op_get_by_id:
- case op_call_put_result:
+ case op_call_varargs:
case op_get_callee: {
ValueProfile* profile = &m_valueProfiles[pc[i + opLength - 1].u.operand];
ASSERT(profile->m_bytecodeOffset == -1);
@@ -1883,20 +1884,31 @@
case op_call:
case op_call_eval: {
#if ENABLE(DFG_JIT)
- int arrayProfileIndex = pc[i + opLength - 1].u.operand;
+ ValueProfile* profile = &m_valueProfiles[pc[i + opLength - 1].u.operand];
+ ASSERT(profile->m_bytecodeOffset == -1);
+ profile->m_bytecodeOffset = i;
+ instructions[i + opLength - 1] = profile;
+ int arrayProfileIndex = pc[i + opLength - 2].u.operand;
m_arrayProfiles[arrayProfileIndex] = ArrayProfile(i);
- instructions[i + opLength - 1] = &m_arrayProfiles[arrayProfileIndex];
+ instructions[i + opLength - 2] = &m_arrayProfiles[arrayProfileIndex];
#endif
#if ENABLE(LLINT)
- instructions[i + 4] = &m_llintCallLinkInfos[pc[i + 4].u.operand];
+ instructions[i + 5] = &m_llintCallLinkInfos[pc[i + 5].u.operand];
#endif
break;
}
- case op_construct:
+ case op_construct: {
#if ENABLE(LLINT)
- instructions[i + 4] = &m_llintCallLinkInfos[pc[i + 4].u.operand];
+ instructions[i + 5] = &m_llintCallLinkInfos[pc[i + 5].u.operand];
+#endif
+#if ENABLE(DFG_JIT)
+ ValueProfile* profile = &m_valueProfiles[pc[i + opLength - 1].u.operand];
+ ASSERT(profile->m_bytecodeOffset == -1);
+ profile->m_bytecodeOffset = i;
+ instructions[i + opLength - 1] = profile;
#endif
break;
+ }
case op_get_by_id_out_of_line:
case op_get_by_id_self:
case op_get_by_id_proto:
diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.h b/Source/JavaScriptCore/bytecode/CodeBlock.h
index 49030fe..dd6deea 100644
--- a/Source/JavaScriptCore/bytecode/CodeBlock.h
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.h
@@ -965,7 +965,7 @@
void printGetByIdOp(PrintStream&, ExecState*, int location, const Instruction*&);
void printGetByIdCacheStatus(PrintStream&, ExecState*, int location);
enum CacheDumpMode { DumpCaches, DontDumpCaches };
- void printCallOp(PrintStream&, ExecState*, int location, const Instruction*&, const char* op, CacheDumpMode);
+ void printCallOp(PrintStream&, ExecState*, int location, const Instruction*&, const char* op, CacheDumpMode, bool& hasPrintedProfiling);
void printPutByIdOp(PrintStream&, ExecState*, int location, const Instruction*&, const char* op);
void beginDumpProfiling(PrintStream&, bool& hasPrintedProfiling);
void dumpValueProfiling(PrintStream&, const Instruction*&, bool& hasPrintedProfiling);
diff --git a/Source/JavaScriptCore/bytecode/Opcode.h b/Source/JavaScriptCore/bytecode/Opcode.h
index 553a823..5c0ce9f 100644
--- a/Source/JavaScriptCore/bytecode/Opcode.h
+++ b/Source/JavaScriptCore/bytecode/Opcode.h
@@ -177,16 +177,15 @@
\
macro(op_new_func, 4) \
macro(op_new_func_exp, 3) \
- macro(op_call, 6) \
- macro(op_call_eval, 6) \
- macro(op_call_varargs, 6) \
+ macro(op_call, 8) /* has value profiling */ \
+ macro(op_call_eval, 8) /* has value profiling */ \
+ macro(op_call_varargs, 8) /* has value profiling */ \
macro(op_tear_off_activation, 2) \
macro(op_tear_off_arguments, 3) \
macro(op_ret, 2) \
- macro(op_call_put_result, 3) /* has value profiling */ \
macro(op_ret_object_or_this, 3) \
\
- macro(op_construct, 6) \
+ macro(op_construct, 8) \
macro(op_strcat, 4) \
macro(op_to_primitive, 3) \
\
diff --git a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
index eff2846..ff2cd14 100644
--- a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
+++ b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
@@ -1845,7 +1845,10 @@
// Emit call.
UnlinkedArrayProfile arrayProfile = newArrayProfile();
- emitOpcode(opcodeID);
+ UnlinkedValueProfile profile = emitProfiledOpcode(opcodeID);
+ ASSERT(dst);
+ ASSERT(dst != ignoredResult());
+ instructions().append(dst->index()); // result
instructions().append(func->index()); // func
instructions().append(callArguments.argumentCountIncludingThis()); // argCount
instructions().append(callArguments.registerOffset()); // registerOffset
@@ -1855,11 +1858,7 @@
instructions().append(0);
#endif
instructions().append(arrayProfile);
- if (dst != ignoredResult()) {
- UnlinkedValueProfile profile = emitProfiledOpcode(op_call_put_result);
- instructions().append(kill(dst));
- instructions().append(profile);
- }
+ instructions().append(profile);
if (expectedFunction != NoExpectedFunction)
emitLabel(done.get());
@@ -1883,17 +1882,15 @@
emitExpressionInfo(divot, startOffset, endOffset, line, lineStart);
// Emit call.
- emitOpcode(op_call_varargs);
+ UnlinkedValueProfile profile = emitProfiledOpcode(op_call_varargs);
+ ASSERT(dst != ignoredResult());
+ instructions().append(dst->index());
instructions().append(func->index());
instructions().append(thisRegister->index());
instructions().append(arguments->index());
instructions().append(firstFreeRegister->index());
instructions().append(0); // Pad to make it as big as an op_call.
- if (dst != ignoredResult()) {
- UnlinkedValueProfile profile = emitProfiledOpcode(op_call_put_result);
- instructions().append(kill(dst));
- instructions().append(profile);
- }
+ instructions().append(profile);
if (m_shouldEmitProfileHooks) {
emitOpcode(op_profile_did_call);
instructions().append(profileHookRegister->index());
@@ -1962,7 +1959,9 @@
RefPtr<Label> done = newLabel();
expectedFunction = emitExpectedFunctionSnippet(dst, func, expectedFunction, callArguments, done.get());
- emitOpcode(op_construct);
+ UnlinkedValueProfile profile = emitProfiledOpcode(op_construct);
+ ASSERT(dst != ignoredResult());
+ instructions().append(dst->index());
instructions().append(func->index()); // func
instructions().append(callArguments.argumentCountIncludingThis()); // argCount
instructions().append(callArguments.registerOffset()); // registerOffset
@@ -1972,11 +1971,7 @@
instructions().append(0);
#endif
instructions().append(0);
- if (dst != ignoredResult()) {
- UnlinkedValueProfile profile = emitProfiledOpcode(op_call_put_result);
- instructions().append(kill(dst));
- instructions().append(profile);
- }
+ instructions().append(profile);
if (expectedFunction != NoExpectedFunction)
emitLabel(done.get());
diff --git a/Source/JavaScriptCore/bytecompiler/NodesCodegen.cpp b/Source/JavaScriptCore/bytecompiler/NodesCodegen.cpp
index 40f703c..4281a6d 100644
--- a/Source/JavaScriptCore/bytecompiler/NodesCodegen.cpp
+++ b/Source/JavaScriptCore/bytecompiler/NodesCodegen.cpp
@@ -373,8 +373,9 @@
else
expectedFunction = NoExpectedFunction;
RefPtr<RegisterID> func = generator.emitNode(m_expr);
+ RefPtr<RegisterID> returnValue = generator.finalDestination(dst, func.get());
CallArguments callArguments(generator, m_args);
- return generator.emitConstruct(generator.finalDestinationOrIgnored(dst), func.get(), expectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
+ return generator.emitConstruct(returnValue.get(), func.get(), expectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
}
inline CallArguments::CallArguments(BytecodeGenerator& generator, ArgumentsNode* argumentsNode)
@@ -419,9 +420,10 @@
RegisterID* FunctionCallValueNode::emitBytecode(BytecodeGenerator& generator, RegisterID* dst)
{
RefPtr<RegisterID> func = generator.emitNode(m_expr);
+ RefPtr<RegisterID> returnValue = generator.finalDestination(dst, func.get());
CallArguments callArguments(generator, m_args);
generator.emitLoad(callArguments.thisRegister(), jsUndefined());
- return generator.emitCall(generator.finalDestinationOrIgnored(dst, func.get()), func.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
+ return generator.emitCall(returnValue.get(), func.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
}
// ------------------------------ FunctionCallResolveNode ----------------------------------
@@ -433,28 +435,31 @@
if (RegisterID* local = resolveResult.local()) {
RefPtr<RegisterID> func = generator.emitMove(generator.tempDestination(dst), local);
+ RefPtr<RegisterID> returnValue = generator.finalDestination(dst, func.get());
CallArguments callArguments(generator, m_args);
generator.emitLoad(callArguments.thisRegister(), jsUndefined());
// This passes NoExpectedFunction because we expect that if the function is in a
// local variable, then it's not one of our built-in constructors.
- return generator.emitCall(generator.finalDestinationOrIgnored(dst, callArguments.thisRegister()), func.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
+ return generator.emitCall(returnValue.get(), func.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
}
if (resolveResult.isStatic()) {
RefPtr<RegisterID> func = generator.newTemporary();
+ RefPtr<RegisterID> returnValue = generator.finalDestination(dst, func.get());
CallArguments callArguments(generator, m_args);
generator.emitGetStaticVar(func.get(), resolveResult, m_ident);
generator.emitLoad(callArguments.thisRegister(), jsUndefined());
- return generator.emitCall(generator.finalDestinationOrIgnored(dst, func.get()), func.get(), expectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
+ return generator.emitCall(returnValue.get(), func.get(), expectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
}
RefPtr<RegisterID> func = generator.newTemporary();
+ RefPtr<RegisterID> returnValue = generator.finalDestination(dst, func.get());
CallArguments callArguments(generator, m_args);
int identifierStart = divot() - divotStartOffset();
generator.emitExpressionInfo(identifierStart + m_ident.length(), m_ident.length(), 0, divotLine(), divotLineStart());
generator.emitResolveWithThis(callArguments.thisRegister(), func.get(), resolveResult, m_ident);
- return generator.emitCall(generator.finalDestinationOrIgnored(dst, func.get()), func.get(), expectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
+ return generator.emitCall(returnValue.get(), func.get(), expectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
}
// ------------------------------ FunctionCallBracketNode ----------------------------------
@@ -465,9 +470,10 @@
RegisterID* property = generator.emitNode(m_subscript);
generator.emitExpressionInfo(subexpressionDivot(), subexpressionStartOffset(), subexpressionEndOffset(), subexpressionLine(), subexpressionLineStart());
RefPtr<RegisterID> function = generator.emitGetByVal(generator.tempDestination(dst), base.get(), property);
+ RefPtr<RegisterID> returnValue = generator.finalDestination(dst, function.get());
CallArguments callArguments(generator, m_args);
generator.emitMove(callArguments.thisRegister(), base.get());
- return generator.emitCall(generator.finalDestinationOrIgnored(dst, function.get()), function.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
+ return generator.emitCall(returnValue.get(), function.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
}
// ------------------------------ FunctionCallDotNode ----------------------------------
@@ -475,11 +481,12 @@
RegisterID* FunctionCallDotNode::emitBytecode(BytecodeGenerator& generator, RegisterID* dst)
{
RefPtr<RegisterID> function = generator.tempDestination(dst);
+ RefPtr<RegisterID> returnValue = generator.finalDestination(dst, function.get());
CallArguments callArguments(generator, m_args);
generator.emitNode(callArguments.thisRegister(), m_base);
generator.emitExpressionInfo(subexpressionDivot(), subexpressionStartOffset(), subexpressionEndOffset(), subexpressionLine(), subexpressionLineStart());
generator.emitGetById(function.get(), callArguments.thisRegister(), m_ident);
- return generator.emitCall(generator.finalDestinationOrIgnored(dst, function.get()), function.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
+ return generator.emitCall(returnValue.get(), function.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
}
RegisterID* CallFunctionCallDotNode::emitBytecode(BytecodeGenerator& generator, RegisterID* dst)
@@ -489,7 +496,7 @@
RefPtr<RegisterID> base = generator.emitNode(m_base);
generator.emitExpressionInfo(subexpressionDivot(), subexpressionStartOffset(), subexpressionEndOffset(), subexpressionLine(), subexpressionLineStart());
RefPtr<RegisterID> function = generator.emitGetById(generator.tempDestination(dst), base.get(), m_ident);
- RefPtr<RegisterID> finalDestinationOrIgnored = generator.finalDestinationOrIgnored(dst, function.get());
+ RefPtr<RegisterID> returnValue = generator.finalDestination(dst, function.get());
generator.emitJumpIfNotFunctionCall(function.get(), realCall.get());
{
if (m_args->m_listNode && m_args->m_listNode->m_expr) {
@@ -499,7 +506,7 @@
RefPtr<RegisterID> realFunction = generator.emitMove(generator.tempDestination(dst), base.get());
CallArguments callArguments(generator, m_args);
generator.emitNode(callArguments.thisRegister(), oldList->m_expr);
- generator.emitCall(finalDestinationOrIgnored.get(), realFunction.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
+ generator.emitCall(returnValue.get(), realFunction.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
generator.emitJump(end.get());
m_args->m_listNode = oldList;
@@ -507,7 +514,7 @@
RefPtr<RegisterID> realFunction = generator.emitMove(generator.tempDestination(dst), base.get());
CallArguments callArguments(generator, m_args);
generator.emitLoad(callArguments.thisRegister(), jsUndefined());
- generator.emitCall(finalDestinationOrIgnored.get(), realFunction.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
+ generator.emitCall(returnValue.get(), realFunction.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
generator.emitJump(end.get());
}
}
@@ -515,10 +522,10 @@
{
CallArguments callArguments(generator, m_args);
generator.emitMove(callArguments.thisRegister(), base.get());
- generator.emitCall(finalDestinationOrIgnored.get(), function.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
+ generator.emitCall(returnValue.get(), function.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
}
generator.emitLabel(end.get());
- return finalDestinationOrIgnored.get();
+ return returnValue.get();
}
static bool areTrivialApplyArguments(ArgumentsNode* args)
@@ -539,7 +546,7 @@
RefPtr<RegisterID> base = generator.emitNode(m_base);
generator.emitExpressionInfo(subexpressionDivot(), subexpressionStartOffset(), subexpressionEndOffset(), subexpressionLine(), subexpressionLineStart());
RefPtr<RegisterID> function = generator.emitGetById(generator.tempDestination(dst), base.get(), m_ident);
- RefPtr<RegisterID> finalDestinationOrIgnored = generator.finalDestinationOrIgnored(dst, function.get());
+ RefPtr<RegisterID> returnValue = generator.finalDestination(dst, function.get());
generator.emitJumpIfNotFunctionApply(function.get(), realCall.get());
{
if (mayBeCall) {
@@ -552,20 +559,20 @@
RefPtr<RegisterID> realFunction = generator.emitMove(generator.tempDestination(dst), base.get());
CallArguments callArguments(generator, m_args);
generator.emitNode(callArguments.thisRegister(), oldList->m_expr);
- generator.emitCall(finalDestinationOrIgnored.get(), realFunction.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
+ generator.emitCall(returnValue.get(), realFunction.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
} else {
m_args->m_listNode = m_args->m_listNode->m_next;
RefPtr<RegisterID> realFunction = generator.emitMove(generator.tempDestination(dst), base.get());
CallArguments callArguments(generator, m_args);
generator.emitNode(callArguments.thisRegister(), oldList->m_expr);
- generator.emitCall(finalDestinationOrIgnored.get(), realFunction.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
+ generator.emitCall(returnValue.get(), realFunction.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
}
m_args->m_listNode = oldList;
} else {
RefPtr<RegisterID> realFunction = generator.emitMove(generator.tempDestination(dst), base.get());
CallArguments callArguments(generator, m_args);
generator.emitLoad(callArguments.thisRegister(), jsUndefined());
- generator.emitCall(finalDestinationOrIgnored.get(), realFunction.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
+ generator.emitCall(returnValue.get(), realFunction.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
}
} else {
ASSERT(m_args->m_listNode && m_args->m_listNode->m_next);
@@ -586,7 +593,7 @@
while ((args = args->m_next))
generator.emitNode(args->m_expr);
- generator.emitCallVarargs(finalDestinationOrIgnored.get(), realFunction.get(), thisRegister.get(), argsRegister.get(), generator.newTemporary(), profileHookRegister.get(), divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
+ generator.emitCallVarargs(returnValue.get(), realFunction.get(), thisRegister.get(), argsRegister.get(), generator.newTemporary(), profileHookRegister.get(), divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
}
generator.emitJump(end.get());
}
@@ -594,10 +601,10 @@
{
CallArguments callArguments(generator, m_args);
generator.emitMove(callArguments.thisRegister(), base.get());
- generator.emitCall(finalDestinationOrIgnored.get(), function.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
+ generator.emitCall(returnValue.get(), function.get(), NoExpectedFunction, callArguments, divot(), divotStartOffset(), divotEndOffset(), divotLine(), divotLineStart());
}
generator.emitLabel(end.get());
- return finalDestinationOrIgnored.get();
+ return returnValue.get();
}
// ------------------------------ PostfixNode ----------------------------------
diff --git a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
index ab1c28d..fe1f1b8 100644
--- a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
+++ b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
@@ -128,7 +128,6 @@
, m_graph(graph)
, m_currentBlock(0)
, m_currentIndex(0)
- , m_currentProfilingIndex(0)
, m_constantUndefined(UINT_MAX)
, m_constantNull(UINT_MAX)
, m_constantNaN(UINT_MAX)
@@ -160,19 +159,17 @@
void parseCodeBlock();
// Helper for min and max.
- bool handleMinMax(bool usesResult, int resultOperand, NodeType op, int registerOffset, int argumentCountIncludingThis);
+ bool handleMinMax(int resultOperand, NodeType op, int registerOffset, int argumentCountIncludingThis);
// Handle calls. This resolves issues surrounding inlining and intrinsics.
- void handleCall(Interpreter*, Instruction* currentInstruction, NodeType op, CodeSpecializationKind);
+ void handleCall(Instruction* currentInstruction, NodeType op, CodeSpecializationKind);
void emitFunctionChecks(const CallLinkStatus&, Node* callTarget, int registerOffset, CodeSpecializationKind);
void emitArgumentPhantoms(int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind);
// Handle inlining. Return true if it succeeded, false if we need to plant a call.
- bool handleInlining(bool usesResult, Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, CodeSpecializationKind);
- // Handle setting the result of an intrinsic.
- void setIntrinsicResult(bool usesResult, int resultOperand, Node*);
+ bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, CodeSpecializationKind);
// Handle intrinsic functions. Return true if it succeeded, false if we need to plant a call.
- bool handleIntrinsic(bool usesResult, int resultOperand, Intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction);
- bool handleConstantInternalFunction(bool usesResult, int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, CodeSpecializationKind);
+ bool handleIntrinsic(int resultOperand, Intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction);
+ bool handleConstantInternalFunction(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, CodeSpecializationKind);
Node* handleGetByOffset(SpeculatedType, Node* base, unsigned identifierNumber, PropertyOffset);
void handleGetByOffset(
int destinationOperand, SpeculatedType, Node* base, unsigned identifierNumber,
@@ -689,7 +686,7 @@
CodeOrigin currentCodeOrigin()
{
- return CodeOrigin(m_currentIndex, inlineCallFrame(), m_currentProfilingIndex - m_currentIndex);
+ return CodeOrigin(m_currentIndex, inlineCallFrame(), 0);
}
bool canFold(Node* node)
@@ -760,29 +757,22 @@
m_numPassedVarArgs++;
}
- Node* addCall(Interpreter* interpreter, Instruction* currentInstruction, NodeType op)
+ Node* addCall(Instruction* currentInstruction, NodeType op)
{
- Instruction* putInstruction = currentInstruction + OPCODE_LENGTH(op_call);
-
- SpeculatedType prediction = SpecNone;
- if (interpreter->getOpcodeID(putInstruction->u.opcode) == op_call_put_result) {
- m_currentProfilingIndex = m_currentIndex + OPCODE_LENGTH(op_call);
- prediction = getPrediction();
- }
+ SpeculatedType prediction = getPrediction();
- addVarArgChild(get(currentInstruction[1].u.operand));
- int argCount = currentInstruction[2].u.operand;
+ addVarArgChild(get(currentInstruction[2].u.operand));
+ int argCount = currentInstruction[3].u.operand;
if (JSStack::CallFrameHeaderSize + (unsigned)argCount > m_parameterSlots)
m_parameterSlots = JSStack::CallFrameHeaderSize + argCount;
- int registerOffset = currentInstruction[3].u.operand;
+ int registerOffset = currentInstruction[4].u.operand;
int dummyThisArgument = op == Call ? 0 : 1;
for (int i = 0 + dummyThisArgument; i < argCount; ++i)
addVarArgChild(get(registerOffset + argumentToOperand(i)));
Node* call = addToGraph(Node::VarArg, op, OpInfo(0), OpInfo(prediction));
- if (interpreter->getOpcodeID(putInstruction->u.opcode) == op_call_put_result)
- set(putInstruction[1].u.operand, call);
+ set(currentInstruction[1].u.operand, call);
return call;
}
@@ -829,12 +819,12 @@
SpeculatedType getPredictionWithoutOSRExit()
{
- return getPredictionWithoutOSRExit(m_currentProfilingIndex);
+ return getPredictionWithoutOSRExit(m_currentIndex);
}
SpeculatedType getPrediction()
{
- return getPrediction(m_currentProfilingIndex);
+ return getPrediction(m_currentIndex);
}
ArrayMode getArrayMode(ArrayProfile* profile, Array::Action action)
@@ -975,8 +965,6 @@
BasicBlock* m_currentBlock;
// The bytecode index of the current instruction being generated.
unsigned m_currentIndex;
- // The bytecode index of the value profile of the current instruction being generated.
- unsigned m_currentProfilingIndex;
// We use these values during code generation, and to avoid the need for
// special handling we make sure they are available as constants in the
@@ -1147,11 +1135,11 @@
return shouldContinueParsing
-void ByteCodeParser::handleCall(Interpreter* interpreter, Instruction* currentInstruction, NodeType op, CodeSpecializationKind kind)
+void ByteCodeParser::handleCall(Instruction* currentInstruction, NodeType op, CodeSpecializationKind kind)
{
ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_construct));
- Node* callTarget = get(currentInstruction[1].u.operand);
+ Node* callTarget = get(currentInstruction[2].u.operand);
CallLinkStatus callLinkStatus;
@@ -1172,29 +1160,19 @@
// Oddly, this conflates calls that haven't executed with calls that behaved sufficiently polymorphically
// that we cannot optimize them.
- addCall(interpreter, currentInstruction, op);
+ addCall(currentInstruction, op);
return;
}
- int argumentCountIncludingThis = currentInstruction[2].u.operand;
- int registerOffset = currentInstruction[3].u.operand;
+ int argumentCountIncludingThis = currentInstruction[3].u.operand;
+ int registerOffset = currentInstruction[4].u.operand;
- // Do we have a result?
- bool usesResult = false;
- int resultOperand = 0; // make compiler happy
+ int resultOperand = currentInstruction[1].u.operand;
unsigned nextOffset = m_currentIndex + OPCODE_LENGTH(op_call);
- Instruction* putInstruction = currentInstruction + OPCODE_LENGTH(op_call);
- SpeculatedType prediction = SpecNone;
- if (interpreter->getOpcodeID(putInstruction->u.opcode) == op_call_put_result) {
- resultOperand = putInstruction[1].u.operand;
- usesResult = true;
- m_currentProfilingIndex = nextOffset;
- prediction = getPrediction();
- nextOffset += OPCODE_LENGTH(op_call_put_result);
- }
+ SpeculatedType prediction = getPrediction();
if (InternalFunction* function = callLinkStatus.internalFunction()) {
- if (handleConstantInternalFunction(usesResult, resultOperand, function, registerOffset, argumentCountIncludingThis, prediction, kind)) {
+ if (handleConstantInternalFunction(resultOperand, function, registerOffset, argumentCountIncludingThis, prediction, kind)) {
// This phantoming has to be *after* the code for the intrinsic, to signify that
// the inputs must be kept alive whatever exits the intrinsic may do.
addToGraph(Phantom, callTarget);
@@ -1203,7 +1181,7 @@
}
// Can only handle this using the generic call handler.
- addCall(interpreter, currentInstruction, op);
+ addCall(currentInstruction, op);
return;
}
@@ -1211,7 +1189,7 @@
if (intrinsic != NoIntrinsic) {
emitFunctionChecks(callLinkStatus, callTarget, registerOffset, kind);
- if (handleIntrinsic(usesResult, resultOperand, intrinsic, registerOffset, argumentCountIncludingThis, prediction)) {
+ if (handleIntrinsic(resultOperand, intrinsic, registerOffset, argumentCountIncludingThis, prediction)) {
// This phantoming has to be *after* the code for the intrinsic, to signify that
// the inputs must be kept alive whatever exits the intrinsic may do.
addToGraph(Phantom, callTarget);
@@ -1220,13 +1198,13 @@
m_graph.compilation()->noticeInlinedCall();
return;
}
- } else if (handleInlining(usesResult, callTarget, resultOperand, callLinkStatus, registerOffset, argumentCountIncludingThis, nextOffset, kind)) {
+ } else if (handleInlining(callTarget, resultOperand, callLinkStatus, registerOffset, argumentCountIncludingThis, nextOffset, kind)) {
if (m_graph.compilation())
m_graph.compilation()->noticeInlinedCall();
return;
}
- addCall(interpreter, currentInstruction, op);
+ addCall(currentInstruction, op);
}
void ByteCodeParser::emitFunctionChecks(const CallLinkStatus& callLinkStatus, Node* callTarget, int registerOffset, CodeSpecializationKind kind)
@@ -1261,7 +1239,7 @@
addToGraph(Phantom, get(registerOffset + argumentToOperand(i)));
}
-bool ByteCodeParser::handleInlining(bool usesResult, Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, CodeSpecializationKind kind)
+bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, CodeSpecializationKind kind)
{
// First, the really simple checks: do we have an actual JS function?
if (!callLinkStatus.executable())
@@ -1329,16 +1307,13 @@
size_t argumentPositionStart = m_graph.m_argumentPositions.size();
InlineStackEntry inlineStackEntry(
- this, codeBlock, codeBlock, m_graph.m_blocks.size() - 1,
- callLinkStatus.function(), (VirtualRegister)m_inlineStackTop->remapOperand(
- usesResult ? resultOperand : InvalidVirtualRegister),
+ this, codeBlock, codeBlock, m_graph.m_blocks.size() - 1, callLinkStatus.function(),
+ (VirtualRegister)m_inlineStackTop->remapOperand(resultOperand),
(VirtualRegister)inlineCallFrameStart, argumentCountIncludingThis, kind);
// This is where the actual inlining really happens.
unsigned oldIndex = m_currentIndex;
- unsigned oldProfilingIndex = m_currentProfilingIndex;
m_currentIndex = 0;
- m_currentProfilingIndex = 0;
addToGraph(InlineStart, OpInfo(argumentPositionStart));
if (callLinkStatus.isClosureCall()) {
@@ -1349,7 +1324,6 @@
parseCodeBlock();
m_currentIndex = oldIndex;
- m_currentProfilingIndex = oldProfilingIndex;
// If the inlined code created some new basic blocks, then we have linking to do.
if (inlineStackEntry.m_callsiteBlockHead != m_graph.m_blocks.size() - 1) {
@@ -1445,29 +1419,22 @@
return true;
}
-void ByteCodeParser::setIntrinsicResult(bool usesResult, int resultOperand, Node* node)
-{
- if (!usesResult)
- return;
- set(resultOperand, node);
-}
-
-bool ByteCodeParser::handleMinMax(bool usesResult, int resultOperand, NodeType op, int registerOffset, int argumentCountIncludingThis)
+bool ByteCodeParser::handleMinMax(int resultOperand, NodeType op, int registerOffset, int argumentCountIncludingThis)
{
if (argumentCountIncludingThis == 1) { // Math.min()
- setIntrinsicResult(usesResult, resultOperand, constantNaN());
+ set(resultOperand, constantNaN());
return true;
}
if (argumentCountIncludingThis == 2) { // Math.min(x)
Node* result = get(registerOffset + argumentToOperand(1));
addToGraph(Phantom, Edge(result, NumberUse));
- setIntrinsicResult(usesResult, resultOperand, result);
+ set(resultOperand, result);
return true;
}
if (argumentCountIncludingThis == 3) { // Math.min(x, y)
- setIntrinsicResult(usesResult, resultOperand, addToGraph(op, get(registerOffset + argumentToOperand(1)), get(registerOffset + argumentToOperand(2))));
+ set(resultOperand, addToGraph(op, get(registerOffset + argumentToOperand(1)), get(registerOffset + argumentToOperand(2))));
return true;
}
@@ -1477,12 +1444,12 @@
// FIXME: We dead-code-eliminate unused Math intrinsics, but that's invalid because
// they need to perform the ToNumber conversion, which can have side-effects.
-bool ByteCodeParser::handleIntrinsic(bool usesResult, int resultOperand, Intrinsic intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction)
+bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction)
{
switch (intrinsic) {
case AbsIntrinsic: {
if (argumentCountIncludingThis == 1) { // Math.abs()
- setIntrinsicResult(usesResult, resultOperand, constantNaN());
+ set(resultOperand, constantNaN());
return true;
}
@@ -1492,26 +1459,26 @@
Node* node = addToGraph(ArithAbs, get(registerOffset + argumentToOperand(1)));
if (m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, Overflow))
node->mergeFlags(NodeMayOverflow);
- setIntrinsicResult(usesResult, resultOperand, node);
+ set(resultOperand, node);
return true;
}
case MinIntrinsic:
- return handleMinMax(usesResult, resultOperand, ArithMin, registerOffset, argumentCountIncludingThis);
+ return handleMinMax(resultOperand, ArithMin, registerOffset, argumentCountIncludingThis);
case MaxIntrinsic:
- return handleMinMax(usesResult, resultOperand, ArithMax, registerOffset, argumentCountIncludingThis);
+ return handleMinMax(resultOperand, ArithMax, registerOffset, argumentCountIncludingThis);
case SqrtIntrinsic: {
if (argumentCountIncludingThis == 1) { // Math.sqrt()
- setIntrinsicResult(usesResult, resultOperand, constantNaN());
+ set(resultOperand, constantNaN());
return true;
}
if (!MacroAssembler::supportsFloatingPointSqrt())
return false;
-
- setIntrinsicResult(usesResult, resultOperand, addToGraph(ArithSqrt, get(registerOffset + argumentToOperand(1))));
+
+ set(resultOperand, addToGraph(ArithSqrt, get(registerOffset + argumentToOperand(1))));
return true;
}
@@ -1519,7 +1486,7 @@
if (argumentCountIncludingThis != 2)
return false;
- ArrayMode arrayMode = getArrayMode(m_currentInstruction[5].u.arrayProfile);
+ ArrayMode arrayMode = getArrayMode(m_currentInstruction[6].u.arrayProfile);
if (!arrayMode.isJSArray())
return false;
switch (arrayMode.type()) {
@@ -1529,8 +1496,7 @@
case Array::Contiguous:
case Array::ArrayStorage: {
Node* arrayPush = addToGraph(ArrayPush, OpInfo(arrayMode.asWord()), OpInfo(prediction), get(registerOffset + argumentToOperand(0)), get(registerOffset + argumentToOperand(1)));
- if (usesResult)
- set(resultOperand, arrayPush);
+ set(resultOperand, arrayPush);
return true;
}
@@ -1544,7 +1510,7 @@
if (argumentCountIncludingThis != 1)
return false;
- ArrayMode arrayMode = getArrayMode(m_currentInstruction[5].u.arrayProfile);
+ ArrayMode arrayMode = getArrayMode(m_currentInstruction[6].u.arrayProfile);
if (!arrayMode.isJSArray())
return false;
switch (arrayMode.type()) {
@@ -1553,8 +1519,7 @@
case Array::Contiguous:
case Array::ArrayStorage: {
Node* arrayPop = addToGraph(ArrayPop, OpInfo(arrayMode.asWord()), OpInfo(prediction), get(registerOffset + argumentToOperand(0)));
- if (usesResult)
- set(resultOperand, arrayPop);
+ set(resultOperand, arrayPop);
return true;
}
@@ -1571,8 +1536,7 @@
int indexOperand = registerOffset + argumentToOperand(1);
Node* charCode = addToGraph(StringCharCodeAt, OpInfo(ArrayMode(Array::String).asWord()), get(thisOperand), getToInt32(indexOperand));
- if (usesResult)
- set(resultOperand, charCode);
+ set(resultOperand, charCode);
return true;
}
@@ -1584,8 +1548,7 @@
int indexOperand = registerOffset + argumentToOperand(1);
Node* charCode = addToGraph(StringCharAt, OpInfo(ArrayMode(Array::String).asWord()), get(thisOperand), getToInt32(indexOperand));
- if (usesResult)
- set(resultOperand, charCode);
+ set(resultOperand, charCode);
return true;
}
case FromCharCodeIntrinsic: {
@@ -1595,8 +1558,7 @@
int indexOperand = registerOffset + argumentToOperand(1);
Node* charCode = addToGraph(StringFromCharCode, getToInt32(indexOperand));
- if (usesResult)
- set(resultOperand, charCode);
+ set(resultOperand, charCode);
return true;
}
@@ -1606,8 +1568,7 @@
return false;
Node* regExpExec = addToGraph(RegExpExec, OpInfo(0), OpInfo(prediction), get(registerOffset + argumentToOperand(0)), get(registerOffset + argumentToOperand(1)));
- if (usesResult)
- set(resultOperand, regExpExec);
+ set(resultOperand, regExpExec);
return true;
}
@@ -1617,8 +1578,7 @@
return false;
Node* regExpExec = addToGraph(RegExpTest, OpInfo(0), OpInfo(prediction), get(registerOffset + argumentToOperand(0)), get(registerOffset + argumentToOperand(1)));
- if (usesResult)
- set(resultOperand, regExpExec);
+ set(resultOperand, regExpExec);
return true;
}
@@ -1630,7 +1590,7 @@
int rightOperand = registerOffset + argumentToOperand(2);
Node* left = getToInt32(leftOperand);
Node* right = getToInt32(rightOperand);
- setIntrinsicResult(usesResult, resultOperand, addToGraph(ArithIMul, left, right));
+ set(resultOperand, addToGraph(ArithIMul, left, right));
return true;
}
@@ -1640,7 +1600,7 @@
}
bool ByteCodeParser::handleConstantInternalFunction(
- bool usesResult, int resultOperand, InternalFunction* function, int registerOffset,
+ int resultOperand, InternalFunction* function, int registerOffset,
int argumentCountIncludingThis, SpeculatedType prediction, CodeSpecializationKind kind)
{
// If we ever find that we have a lot of internal functions that we specialize for,
@@ -1654,16 +1614,14 @@
if (function->classInfo() == &ArrayConstructor::s_info) {
if (argumentCountIncludingThis == 2) {
- setIntrinsicResult(
- usesResult, resultOperand,
+ set(resultOperand,
addToGraph(NewArrayWithSize, OpInfo(ArrayWithUndecided), get(registerOffset + argumentToOperand(1))));
return true;
}
for (int i = 1; i < argumentCountIncludingThis; ++i)
addVarArgChild(get(registerOffset + argumentToOperand(i)));
- setIntrinsicResult(
- usesResult, resultOperand,
+ set(resultOperand,
addToGraph(Node::VarArg, NewArray, OpInfo(ArrayWithUndecided), OpInfo(0)));
return true;
} else if (function->classInfo() == &StringConstructor::s_info) {
@@ -1677,7 +1635,7 @@
if (kind == CodeForConstruct)
result = addToGraph(NewStringObject, OpInfo(function->globalObject()->stringObjectStructure()), result);
- setIntrinsicResult(usesResult, resultOperand, result);
+ set(resultOperand, result);
return true;
}
@@ -1991,8 +1949,6 @@
}
while (true) {
- m_currentProfilingIndex = m_currentIndex;
-
// Don't extend over jump destinations.
if (m_currentIndex == limit) {
// Ordinarily we want to plant a jump. But refuse to do this if the block is
@@ -2017,7 +1973,7 @@
m_currentInstruction = currentInstruction; // Some methods want to use this, and we'd rather not thread it through calls.
OpcodeID opcodeID = interpreter->getOpcodeID(currentInstruction->u.opcode);
- if (m_graph.compilation() && opcodeID != op_call_put_result) {
+ if (m_graph.compilation()) {
addToGraph(CountExecution, OpInfo(m_graph.compilation()->executionCounterFor(
Profiler::OriginStack(*m_vm->m_perBytecodeProfiler, m_codeBlock, currentCodeOrigin()))));
}
@@ -2037,10 +1993,10 @@
if (op1->op() != ToThis) {
ConcurrentJITLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
ValueProfile* profile =
- m_inlineStackTop->m_profiledBlock->valueProfileForBytecodeOffset(m_currentProfilingIndex);
+ m_inlineStackTop->m_profiledBlock->valueProfileForBytecodeOffset(m_currentIndex);
profile->computeUpdatedPrediction(locker);
#if DFG_ENABLE(DEBUG_VERBOSE)
- dataLogF("[bc#%u]: profile %p: ", m_currentProfilingIndex, profile);
+ dataLogF("[bc#%u]: profile %p: ", m_currentIndex, profile);
profile->dump(WTF::dataFile());
dataLogF("\n");
#endif
@@ -3014,8 +2970,8 @@
case op_ret:
flushArgumentsAndCapturedVariables();
if (inlineCallFrame()) {
- if (m_inlineStackTop->m_returnValue != InvalidVirtualRegister)
- setDirect(m_inlineStackTop->m_returnValue, get(currentInstruction[1].u.operand));
+ ASSERT(m_inlineStackTop->m_returnValue != InvalidVirtualRegister);
+ setDirect(m_inlineStackTop->m_returnValue, get(currentInstruction[1].u.operand));
m_inlineStackTop->m_didReturn = true;
if (m_inlineStackTop->m_unlinkedBlocks.isEmpty()) {
// If we're returning from the first block, then we're done parsing.
@@ -3056,29 +3012,23 @@
LAST_OPCODE(op_throw_static_error);
case op_call:
- handleCall(interpreter, currentInstruction, Call, CodeForCall);
+ handleCall(currentInstruction, Call, CodeForCall);
NEXT_OPCODE(op_call);
case op_construct:
- handleCall(interpreter, currentInstruction, Construct, CodeForConstruct);
+ handleCall(currentInstruction, Construct, CodeForConstruct);
NEXT_OPCODE(op_construct);
case op_call_varargs: {
ASSERT(inlineCallFrame());
- ASSERT(currentInstruction[3].u.operand == m_inlineStackTop->m_codeBlock->argumentsRegister());
+ ASSERT(currentInstruction[4].u.operand == m_inlineStackTop->m_codeBlock->argumentsRegister());
ASSERT(!m_inlineStackTop->m_codeBlock->symbolTable()->slowArguments());
// It would be cool to funnel this into handleCall() so that it can handle
// inlining. But currently that won't be profitable anyway, since none of the
// uses of call_varargs will be inlineable. So we set this up manually and
// without inline/intrinsic detection.
- Instruction* putInstruction = currentInstruction + OPCODE_LENGTH(op_call_varargs);
-
- SpeculatedType prediction = SpecNone;
- if (interpreter->getOpcodeID(putInstruction->u.opcode) == op_call_put_result) {
- m_currentProfilingIndex = m_currentIndex + OPCODE_LENGTH(op_call_varargs);
- prediction = getPrediction();
- }
+ SpeculatedType prediction = getPrediction();
addToGraph(CheckArgumentsNotCreated);
@@ -3086,21 +3036,17 @@
if (JSStack::CallFrameHeaderSize + argCount > m_parameterSlots)
m_parameterSlots = JSStack::CallFrameHeaderSize + argCount;
- addVarArgChild(get(currentInstruction[1].u.operand)); // callee
- addVarArgChild(get(currentInstruction[2].u.operand)); // this
+ addVarArgChild(get(currentInstruction[2].u.operand)); // callee
+ addVarArgChild(get(currentInstruction[3].u.operand)); // this
for (unsigned argument = 1; argument < argCount; ++argument)
addVarArgChild(get(argumentToOperand(argument)));
- Node* call = addToGraph(Node::VarArg, Call, OpInfo(0), OpInfo(prediction));
- if (interpreter->getOpcodeID(putInstruction->u.opcode) == op_call_put_result)
- set(putInstruction[1].u.operand, call);
+ set(currentInstruction[1].u.operand,
+ addToGraph(Node::VarArg, Call, OpInfo(0), OpInfo(prediction)));
NEXT_OPCODE(op_call_varargs);
}
- case op_call_put_result:
- NEXT_OPCODE(op_call_put_result);
-
case op_jneq_ptr:
// Statically speculate for now. It makes sense to let speculate-only jneq_ptr
// support simmer for a while before making it more general, since it's
diff --git a/Source/JavaScriptCore/dfg/DFGCapabilities.cpp b/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
index 3273361..6133cc5 100644
--- a/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
+++ b/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
@@ -184,7 +184,6 @@
case op_loop_hint:
case op_ret:
case op_end:
- case op_call_put_result:
case op_new_object:
case op_new_array:
case op_new_array_with_size:
@@ -208,7 +207,7 @@
return CanCompileAndInline;
case op_call_varargs:
- if (codeBlock->usesArguments() && pc[3].u.operand == codeBlock->argumentsRegister())
+ if (codeBlock->usesArguments() && pc[4].u.operand == codeBlock->argumentsRegister())
return CanInline;
return CannotCompile;
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExitCompiler.cpp b/Source/JavaScriptCore/dfg/DFGOSRExitCompiler.cpp
index 6f88319..1fc31c4 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExitCompiler.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExitCompiler.cpp
@@ -101,8 +101,8 @@
exit.m_code = FINALIZE_CODE_IF(
shouldShowDisassembly(),
patchBuffer,
- ("DFG OSR exit #%u (bc#%u, %s) from %s",
- exitIndex, exit.m_codeOrigin.bytecodeIndex,
+ ("DFG OSR exit #%u (%s, %s) from %s",
+ exitIndex, toCString(exit.m_codeOrigin).data(),
exitKindToString(exit.m_kind), toCString(*codeBlock).data()));
}
diff --git a/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp b/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
index efe1efd..3edeb1a 100644
--- a/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
+++ b/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
@@ -77,7 +77,6 @@
void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
{
-#if USE(JSVALUE64)
ASSERT(jit.baselineCodeBlock()->jitType() == JITCode::BaselineJIT);
jit.storePtr(AssemblyHelpers::TrustedImmPtr(jit.baselineCodeBlock()), AssemblyHelpers::addressFor((VirtualRegister)JSStack::CodeBlock));
@@ -85,15 +84,12 @@
InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame;
CodeBlock* baselineCodeBlock = jit.baselineCodeBlockFor(codeOrigin);
CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(inlineCallFrame->caller);
- Vector<BytecodeAndMachineOffset>& decodedCodeMap = jit.decodedCodeMapFor(baselineCodeBlockForCaller);
- unsigned returnBytecodeIndex = inlineCallFrame->caller.bytecodeIndex + OPCODE_LENGTH(op_call);
- BytecodeAndMachineOffset* mapping = binarySearch<BytecodeAndMachineOffset, unsigned>(decodedCodeMap, decodedCodeMap.size(), returnBytecodeIndex, BytecodeAndMachineOffset::getBytecodeIndex);
+ unsigned callBytecodeIndex = inlineCallFrame->caller.bytecodeIndex;
+ CallLinkInfo& callLinkInfo = baselineCodeBlockForCaller->getCallLinkInfo(callBytecodeIndex);
- ASSERT(mapping);
- ASSERT(mapping->m_bytecodeIndex == returnBytecodeIndex);
-
- void* jumpTarget = baselineCodeBlockForCaller->jitCode()->executableAddressAtOffset(mapping->m_machineCodeOffset);
+ void* jumpTarget = callLinkInfo.callReturnLocation.executableAddress();
+#if USE(JSVALUE64)
GPRReg callerFrameGPR;
if (inlineCallFrame->caller.inlineCallFrame) {
jit.addPtr(AssemblyHelpers::TrustedImm32(inlineCallFrame->caller.inlineCallFrame->stackOffset * sizeof(EncodedJSValue)), GPRInfo::callFrameRegister, GPRInfo::regT3);
@@ -109,24 +105,7 @@
jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->arguments.size()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
if (!inlineCallFrame->isClosureCall())
jit.store64(AssemblyHelpers::TrustedImm64(JSValue::encode(JSValue(inlineCallFrame->callee.get()))), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::Callee)));
- }
#else // USE(JSVALUE64)
- ASSERT(jit.baselineCodeBlock()->jitType() == JITCode::BaselineJIT);
- jit.storePtr(AssemblyHelpers::TrustedImmPtr(jit.baselineCodeBlock()), AssemblyHelpers::addressFor((VirtualRegister)JSStack::CodeBlock));
-
- for (CodeOrigin codeOrigin = exit.m_codeOrigin; codeOrigin.inlineCallFrame; codeOrigin = codeOrigin.inlineCallFrame->caller) {
- InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame;
- CodeBlock* baselineCodeBlock = jit.baselineCodeBlockFor(codeOrigin);
- CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(inlineCallFrame->caller);
- Vector<BytecodeAndMachineOffset>& decodedCodeMap = jit.decodedCodeMapFor(baselineCodeBlockForCaller);
- unsigned returnBytecodeIndex = inlineCallFrame->caller.bytecodeIndex + OPCODE_LENGTH(op_call);
- BytecodeAndMachineOffset* mapping = binarySearch<BytecodeAndMachineOffset, unsigned>(decodedCodeMap, decodedCodeMap.size(), returnBytecodeIndex, BytecodeAndMachineOffset::getBytecodeIndex);
-
- ASSERT(mapping);
- ASSERT(mapping->m_bytecodeIndex == returnBytecodeIndex);
-
- void* jumpTarget = baselineCodeBlockForCaller->jitCode()->executableAddressAtOffset(mapping->m_machineCodeOffset);
-
GPRReg callerFrameGPR;
if (inlineCallFrame->caller.inlineCallFrame) {
jit.add32(AssemblyHelpers::TrustedImm32(inlineCallFrame->caller.inlineCallFrame->stackOffset * sizeof(EncodedJSValue)), GPRInfo::callFrameRegister, GPRInfo::regT3);
@@ -145,8 +124,8 @@
jit.store32(AssemblyHelpers::TrustedImm32(JSValue::CellTag), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::Callee)));
if (!inlineCallFrame->isClosureCall())
jit.storePtr(AssemblyHelpers::TrustedImmPtr(inlineCallFrame->callee.get()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::Callee)));
- }
#endif // USE(JSVALUE64)
+ }
}
void adjustAndJumpToTarget(CCallHelpers& jit, const OSRExitBase& exit)
diff --git a/Source/JavaScriptCore/jit/JIT.cpp b/Source/JavaScriptCore/jit/JIT.cpp
index 79fbfa4..a87dffc 100644
--- a/Source/JavaScriptCore/jit/JIT.cpp
+++ b/Source/JavaScriptCore/jit/JIT.cpp
@@ -199,7 +199,7 @@
OpcodeID opcodeID = m_interpreter->getOpcodeID(currentInstruction->u.opcode);
- if (m_compilation && opcodeID != op_call_put_result) {
+ if (m_compilation) {
add64(
TrustedImm32(1),
AbsoluteAddress(m_compilation->executionCounterFor(Profiler::OriginStack(Profiler::Origin(
@@ -325,7 +325,6 @@
DEFINE_OP(op_resolve_with_base)
DEFINE_OP(op_resolve_with_this)
DEFINE_OP(op_ret)
- DEFINE_OP(op_call_put_result)
DEFINE_OP(op_ret_object_or_this)
DEFINE_OP(op_rshift)
DEFINE_OP(op_urshift)
diff --git a/Source/JavaScriptCore/jit/JIT.h b/Source/JavaScriptCore/jit/JIT.h
index 67dac93..4b05a67 100644
--- a/Source/JavaScriptCore/jit/JIT.h
+++ b/Source/JavaScriptCore/jit/JIT.h
@@ -431,8 +431,9 @@
void compileOpCall(OpcodeID, Instruction*, unsigned callLinkInfoIndex);
void compileOpCallSlowCase(OpcodeID, Instruction*, Vector<SlowCaseEntry>::iterator&, unsigned callLinkInfoIndex);
void compileLoadVarargs(Instruction*);
- void compileCallEval();
- void compileCallEvalSlowCase(Vector<SlowCaseEntry>::iterator&);
+ void compileCallEval(Instruction*);
+ void compileCallEvalSlowCase(Instruction*, Vector<SlowCaseEntry>::iterator&);
+ void emitPutCallResult(Instruction*);
enum CompileOpStrictEqType { OpStrictEq, OpNStrictEq };
void compileOpStrictEq(Instruction* instruction, CompileOpStrictEqType type);
@@ -643,7 +644,6 @@
void emit_op_call(Instruction*);
void emit_op_call_eval(Instruction*);
void emit_op_call_varargs(Instruction*);
- void emit_op_call_put_result(Instruction*);
void emit_op_catch(Instruction*);
void emit_op_construct(Instruction*);
void emit_op_get_callee(Instruction*);
diff --git a/Source/JavaScriptCore/jit/JITCall.cpp b/Source/JavaScriptCore/jit/JITCall.cpp
index 5520a4d..51ec36b 100644
--- a/Source/JavaScriptCore/jit/JITCall.cpp
+++ b/Source/JavaScriptCore/jit/JITCall.cpp
@@ -51,20 +51,25 @@
namespace JSC {
-void JIT::emit_op_call_put_result(Instruction* instruction)
+void JIT::emitPutCallResult(Instruction* instruction)
{
int dst = instruction[1].u.operand;
emitValueProfilingSite();
emitPutVirtualRegister(dst);
- if (canBeOptimizedOrInlined())
- killLastResultRegister(); // Make lastResultRegister tracking simpler in the DFG.
+ if (canBeOptimizedOrInlined()) {
+ // Make lastResultRegister tracking simpler in the DFG. This is needed because
+ // the DFG may have the SetLocal corresponding to this Call's return value in
+ // a different basic block, if inlining happened. The DFG isn't smart enough to
+ // track the baseline JIT's last result register across basic blocks.
+ killLastResultRegister();
+ }
}
void JIT::compileLoadVarargs(Instruction* instruction)
{
- int thisValue = instruction[2].u.operand;
- int arguments = instruction[3].u.operand;
- int firstFreeRegister = instruction[4].u.operand;
+ int thisValue = instruction[3].u.operand;
+ int arguments = instruction[4].u.operand;
+ int firstFreeRegister = instruction[5].u.operand;
killLastResultRegister();
@@ -124,7 +129,7 @@
end.link(this);
}
-void JIT::compileCallEval()
+void JIT::compileCallEval(Instruction* instruction)
{
JITStubCall stubCall(this, cti_op_call_eval); // Initializes ScopeChain; ReturnPC; CodeBlock.
stubCall.call();
@@ -132,9 +137,11 @@
emitGetFromCallFrameHeaderPtr(JSStack::CallerFrame, callFrameRegister);
sampleCodeBlock(m_codeBlock);
+
+ emitPutCallResult(instruction);
}
-void JIT::compileCallEvalSlowCase(Vector<SlowCaseEntry>::iterator& iter)
+void JIT::compileCallEvalSlowCase(Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter)
{
linkSlowCase(iter);
@@ -142,11 +149,13 @@
emitNakedCall(m_vm->getCTIStub(virtualCallGenerator).code());
sampleCodeBlock(m_codeBlock);
+
+ emitPutCallResult(instruction);
}
void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned callLinkInfoIndex)
{
- int callee = instruction[1].u.operand;
+ int callee = instruction[2].u.operand;
/* Caller always:
- Updates callFrameRegister to callee callFrame.
@@ -165,14 +174,14 @@
if (opcodeID == op_call_varargs)
compileLoadVarargs(instruction);
else {
- int argCount = instruction[2].u.operand;
- int registerOffset = instruction[3].u.operand;
+ int argCount = instruction[3].u.operand;
+ int registerOffset = instruction[4].u.operand;
if (opcodeID == op_call && shouldEmitProfiling()) {
emitGetVirtualRegister(registerOffset + CallFrame::argumentOffsetIncludingThis(0), regT0);
Jump done = emitJumpIfNotJSCell(regT0);
loadPtr(Address(regT0, JSCell::structureOffset()), regT0);
- storePtr(regT0, instruction[5].u.arrayProfile->addressOfLastSeenStructure());
+ storePtr(regT0, instruction[6].u.arrayProfile->addressOfLastSeenStructure());
done.link(this);
}
@@ -188,7 +197,7 @@
move(regT1, callFrameRegister);
if (opcodeID == op_call_eval) {
- compileCallEval();
+ compileCallEval(instruction);
return;
}
@@ -209,12 +218,14 @@
m_callStructureStubCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedCall();
sampleCodeBlock(m_codeBlock);
+
+ emitPutCallResult(instruction);
}
-void JIT::compileOpCallSlowCase(OpcodeID opcodeID, Instruction*, Vector<SlowCaseEntry>::iterator& iter, unsigned callLinkInfoIndex)
+void JIT::compileOpCallSlowCase(OpcodeID opcodeID, Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter, unsigned callLinkInfoIndex)
{
if (opcodeID == op_call_eval) {
- compileCallEvalSlowCase(iter);
+ compileCallEvalSlowCase(instruction, iter);
return;
}
@@ -223,6 +234,8 @@
m_callStructureStubCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(opcodeID == op_construct ? m_vm->getCTIStub(linkConstructGenerator).code() : m_vm->getCTIStub(linkCallGenerator).code());
sampleCodeBlock(m_codeBlock);
+
+ emitPutCallResult(instruction);
}
void JIT::privateCompileClosureCall(CallLinkInfo* callLinkInfo, CodeBlock* calleeCodeBlock, Structure* expectedStructure, ExecutableBase* expectedExecutable, MacroAssemblerCodePtr codePtr)
@@ -271,6 +284,46 @@
callLinkInfo->stub = stubRoutine.release();
}
+void JIT::emit_op_call(Instruction* currentInstruction)
+{
+ compileOpCall(op_call, currentInstruction, m_callLinkInfoIndex++);
+}
+
+void JIT::emit_op_call_eval(Instruction* currentInstruction)
+{
+ compileOpCall(op_call_eval, currentInstruction, m_callLinkInfoIndex);
+}
+
+void JIT::emit_op_call_varargs(Instruction* currentInstruction)
+{
+ compileOpCall(op_call_varargs, currentInstruction, m_callLinkInfoIndex++);
+}
+
+void JIT::emit_op_construct(Instruction* currentInstruction)
+{
+ compileOpCall(op_construct, currentInstruction, m_callLinkInfoIndex++);
+}
+
+void JIT::emitSlow_op_call(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
+{
+ compileOpCallSlowCase(op_call, currentInstruction, iter, m_callLinkInfoIndex++);
+}
+
+void JIT::emitSlow_op_call_eval(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
+{
+ compileOpCallSlowCase(op_call_eval, currentInstruction, iter, m_callLinkInfoIndex);
+}
+
+void JIT::emitSlow_op_call_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
+{
+ compileOpCallSlowCase(op_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++);
+}
+
+void JIT::emitSlow_op_construct(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
+{
+ compileOpCallSlowCase(op_construct, currentInstruction, iter, m_callLinkInfoIndex++);
+}
+
} // namespace JSC
#endif // USE(JSVALUE64)
diff --git a/Source/JavaScriptCore/jit/JITCall32_64.cpp b/Source/JavaScriptCore/jit/JITCall32_64.cpp
index c8be312..e1ed79d 100644
--- a/Source/JavaScriptCore/jit/JITCall32_64.cpp
+++ b/Source/JavaScriptCore/jit/JITCall32_64.cpp
@@ -50,7 +50,7 @@
namespace JSC {
-void JIT::emit_op_call_put_result(Instruction* instruction)
+void JIT::emitPutCallResult(Instruction* instruction)
{
int dst = instruction[1].u.operand;
emitValueProfilingSite();
@@ -138,9 +138,9 @@
void JIT::compileLoadVarargs(Instruction* instruction)
{
- int thisValue = instruction[2].u.operand;
- int arguments = instruction[3].u.operand;
- int firstFreeRegister = instruction[4].u.operand;
+ int thisValue = instruction[3].u.operand;
+ int arguments = instruction[4].u.operand;
+ int firstFreeRegister = instruction[5].u.operand;
JumpList slowCase;
JumpList end;
@@ -200,7 +200,7 @@
end.link(this);
}
-void JIT::compileCallEval()
+void JIT::compileCallEval(Instruction* instruction)
{
JITStubCall stubCall(this, cti_op_call_eval); // Initializes ScopeChain; ReturnPC; CodeBlock.
stubCall.call();
@@ -208,9 +208,11 @@
emitGetFromCallFrameHeaderPtr(JSStack::CallerFrame, callFrameRegister);
sampleCodeBlock(m_codeBlock);
+
+ emitPutCallResult(instruction);
}
-void JIT::compileCallEvalSlowCase(Vector<SlowCaseEntry>::iterator& iter)
+void JIT::compileCallEvalSlowCase(Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter)
{
linkSlowCase(iter);
@@ -218,11 +220,13 @@
emitNakedCall(m_vm->getCTIStub(virtualCallGenerator).code());
sampleCodeBlock(m_codeBlock);
+
+ emitPutCallResult(instruction);
}
void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned callLinkInfoIndex)
{
- int callee = instruction[1].u.operand;
+ int callee = instruction[2].u.operand;
/* Caller always:
- Updates callFrameRegister to callee callFrame.
@@ -241,14 +245,14 @@
if (opcodeID == op_call_varargs)
compileLoadVarargs(instruction);
else {
- int argCount = instruction[2].u.operand;
- int registerOffset = instruction[3].u.operand;
+ int argCount = instruction[3].u.operand;
+ int registerOffset = instruction[4].u.operand;
if (opcodeID == op_call && shouldEmitProfiling()) {
emitLoad(registerOffset + CallFrame::argumentOffsetIncludingThis(0), regT0, regT1);
Jump done = branch32(NotEqual, regT0, TrustedImm32(JSValue::CellTag));
loadPtr(Address(regT1, JSCell::structureOffset()), regT1);
- storePtr(regT1, instruction[5].u.arrayProfile->addressOfLastSeenStructure());
+ storePtr(regT1, instruction[6].u.arrayProfile->addressOfLastSeenStructure());
done.link(this);
}
@@ -265,7 +269,7 @@
move(regT3, callFrameRegister);
if (opcodeID == op_call_eval) {
- compileCallEval();
+ compileCallEval(instruction);
return;
}
@@ -288,12 +292,13 @@
m_callStructureStubCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedCall();
sampleCodeBlock(m_codeBlock);
+ emitPutCallResult(instruction);
}
-void JIT::compileOpCallSlowCase(OpcodeID opcodeID, Instruction*, Vector<SlowCaseEntry>::iterator& iter, unsigned callLinkInfoIndex)
+void JIT::compileOpCallSlowCase(OpcodeID opcodeID, Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter, unsigned callLinkInfoIndex)
{
if (opcodeID == op_call_eval) {
- compileCallEvalSlowCase(iter);
+ compileCallEvalSlowCase(instruction, iter);
return;
}
@@ -303,6 +308,7 @@
m_callStructureStubCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(opcodeID == op_construct ? m_vm->getCTIStub(linkConstructGenerator).code() : m_vm->getCTIStub(linkCallGenerator).code());
sampleCodeBlock(m_codeBlock);
+ emitPutCallResult(instruction);
}
void JIT::privateCompileClosureCall(CallLinkInfo* callLinkInfo, CodeBlock* calleeCodeBlock, Structure* expectedStructure, ExecutableBase* expectedExecutable, MacroAssemblerCodePtr codePtr)
diff --git a/Source/JavaScriptCore/jit/JITOpcodes.cpp b/Source/JavaScriptCore/jit/JITOpcodes.cpp
index 13b22b8..cf245cc 100644
--- a/Source/JavaScriptCore/jit/JITOpcodes.cpp
+++ b/Source/JavaScriptCore/jit/JITOpcodes.cpp
@@ -241,26 +241,6 @@
emitPutVirtualRegister(dst);
}
-void JIT::emit_op_call(Instruction* currentInstruction)
-{
- compileOpCall(op_call, currentInstruction, m_callLinkInfoIndex++);
-}
-
-void JIT::emit_op_call_eval(Instruction* currentInstruction)
-{
- compileOpCall(op_call_eval, currentInstruction, m_callLinkInfoIndex);
-}
-
-void JIT::emit_op_call_varargs(Instruction* currentInstruction)
-{
- compileOpCall(op_call_varargs, currentInstruction, m_callLinkInfoIndex++);
-}
-
-void JIT::emit_op_construct(Instruction* currentInstruction)
-{
- compileOpCall(op_construct, currentInstruction, m_callLinkInfoIndex++);
-}
-
void JIT::emit_op_tear_off_activation(Instruction* currentInstruction)
{
int activation = currentInstruction[1].u.operand;
@@ -1091,26 +1071,6 @@
stubCall.call(dst);
}
-void JIT::emitSlow_op_call(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
-{
- compileOpCallSlowCase(op_call, currentInstruction, iter, m_callLinkInfoIndex++);
-}
-
-void JIT::emitSlow_op_call_eval(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
-{
- compileOpCallSlowCase(op_call_eval, currentInstruction, iter, m_callLinkInfoIndex);
-}
-
-void JIT::emitSlow_op_call_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
-{
- compileOpCallSlowCase(op_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++);
-}
-
-void JIT::emitSlow_op_construct(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
-{
- compileOpCallSlowCase(op_construct, currentInstruction, iter, m_callLinkInfoIndex++);
-}
-
void JIT::emitSlow_op_to_number(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
{
linkSlowCase(iter);
diff --git a/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp b/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
index b7d4441..d5b08a2 100644
--- a/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
+++ b/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
@@ -1459,16 +1459,16 @@
// - If possible, link the call's inline cache.
// - Return a tuple of machine code address to call and the new call frame.
- JSValue calleeAsValue = LLINT_OP_C(1).jsValue();
+ JSValue calleeAsValue = LLINT_OP_C(2).jsValue();
- ExecState* execCallee = exec + pc[3].u.operand;
+ ExecState* execCallee = exec + pc[4].u.operand;
- execCallee->setArgumentCountIncludingThis(pc[2].u.operand);
+ execCallee->setArgumentCountIncludingThis(pc[3].u.operand);
execCallee->uncheckedR(JSStack::Callee) = calleeAsValue;
execCallee->setCallerFrame(exec);
- ASSERT(pc[4].u.callLinkInfo);
- return setUpCall(execCallee, pc, kind, calleeAsValue, pc[4].u.callLinkInfo);
+ ASSERT(pc[5].u.callLinkInfo);
+ return setUpCall(execCallee, pc, kind, calleeAsValue, pc[5].u.callLinkInfo);
}
LLINT_SLOW_PATH_DECL(slow_path_call)
@@ -1491,11 +1491,11 @@
// - Figure out what to call and compile it if necessary.
// - Return a tuple of machine code address to call and the new call frame.
- JSValue calleeAsValue = LLINT_OP_C(1).jsValue();
+ JSValue calleeAsValue = LLINT_OP_C(2).jsValue();
ExecState* execCallee = loadVarargs(
exec, &vm.interpreter->stack(),
- LLINT_OP_C(2).jsValue(), LLINT_OP_C(3).jsValue(), pc[4].u.operand);
+ LLINT_OP_C(3).jsValue(), LLINT_OP_C(4).jsValue(), pc[5].u.operand);
LLINT_CALL_CHECK_EXCEPTION(exec, pc);
execCallee->uncheckedR(JSStack::Callee) = calleeAsValue;
@@ -1508,11 +1508,11 @@
LLINT_SLOW_PATH_DECL(slow_path_call_eval)
{
LLINT_BEGIN_NO_SET_PC();
- JSValue calleeAsValue = LLINT_OP(1).jsValue();
+ JSValue calleeAsValue = LLINT_OP(2).jsValue();
- ExecState* execCallee = exec + pc[3].u.operand;
+ ExecState* execCallee = exec + pc[4].u.operand;
- execCallee->setArgumentCountIncludingThis(pc[2].u.operand);
+ execCallee->setArgumentCountIncludingThis(pc[3].u.operand);
execCallee->setCallerFrame(exec);
execCallee->uncheckedR(JSStack::Callee) = calleeAsValue;
execCallee->setScope(exec->scope());
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp b/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp
index 4340ed7..595cc45 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp
@@ -463,11 +463,10 @@
// from ArgumentCount.tag() (see the dispatchAfterCall() macro used in
// the callTargetFunction() macro in the llint asm files).
//
- // For the C loop, we don't have the JIT stub to this work for us.
- // So, we need to implement the equivalent of dispatchAfterCall() here
- // before dispatching to the PC.
+ // For the C loop, we don't have a JIT stub to do this work for us. So,
+ // we jump to llint_generic_return_point.
- vPC = callFrame->currentVPC() + OPCODE_LENGTH(op_call);
+ vPC = callFrame->currentVPC();
#if USE(JSVALUE64)
// Based on LowLevelInterpreter64.asm's dispatchAfterCall():
@@ -490,7 +489,7 @@
rBasePC.vp = codeBlock->instructions().begin();
#endif // USE(JSVALUE64)
- NEXT_INSTRUCTION();
+ goto op_generic_return_point;
} // END doReturnHelper.
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
index 847c3b1..837809a 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
@@ -101,7 +101,12 @@
macro dispatchAfterCall()
loadi ArgumentCount + TagOffset[cfr], PC
- dispatch(6)
+ loadi 4[PC], t2
+ loadi 28[PC], t3
+ storei t1, TagOffset[cfr, t2, 8]
+ storei t0, PayloadOffset[cfr, t2, 8]
+ valueProfile(t1, t0, t3)
+ dispatch(8)
end
macro cCall2(function, arg1, arg2)
@@ -1576,29 +1581,29 @@
macro arrayProfileForCall()
if VALUE_PROFILER
- loadi 12[PC], t3
+ loadi 16[PC], t3
bineq ThisArgumentOffset + TagOffset[cfr, t3, 8], CellTag, .done
loadi ThisArgumentOffset + PayloadOffset[cfr, t3, 8], t0
loadp JSCell::m_structure[t0], t0
- loadp 20[PC], t1
+ loadp 24[PC], t1
storep t0, ArrayProfile::m_lastSeenStructure[t1]
.done:
end
end
macro doCall(slowPath)
- loadi 4[PC], t0
- loadi 16[PC], t1
+ loadi 8[PC], t0
+ loadi 20[PC], t1
loadp LLIntCallLinkInfo::callee[t1], t2
loadConstantOrVariablePayload(t0, CellTag, t3, .opCallSlow)
bineq t3, t2, .opCallSlow
- loadi 12[PC], t3
+ loadi 16[PC], t3
lshifti 3, t3
addp cfr, t3 # t3 contains the new value of cfr
loadp JSFunction::m_scope[t2], t0
storei t2, Callee + PayloadOffset[t3]
storei t0, ScopeChain + PayloadOffset[t3]
- loadi 8[PC], t2
+ loadi 12[PC], t2
storei PC, ArgumentCount + TagOffset[cfr]
storep cfr, CallerFrame[t3]
storei t2, ArgumentCount + PayloadOffset[t3]
@@ -1639,16 +1644,6 @@
doReturn()
-_llint_op_call_put_result:
- loadi 4[PC], t2
- loadi 8[PC], t3
- storei t1, TagOffset[cfr, t2, 8]
- storei t0, PayloadOffset[cfr, t2, 8]
- valueProfile(t1, t0, t3)
- traceExecution() # Needs to be here because it would clobber t1, t0
- dispatch(3)
-
-
_llint_op_ret_object_or_this:
traceExecution()
checkSwitchToJITForEpilogue()
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
index 775529c..d2ab8518 100644
--- a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
@@ -55,7 +55,11 @@
loadi ArgumentCount + TagOffset[cfr], PC
loadp CodeBlock[cfr], PB
loadp CodeBlock::m_instructions[PB], PB
- dispatch(6)
+ loadisFromInstruction(1, t1)
+ loadpFromInstruction(7, t2)
+ storeq t0, [cfr, t1, 8]
+ valueProfile(t0, t2)
+ dispatch(8)
end
macro cCall2(function, arg1, arg2)
@@ -1414,29 +1418,29 @@
macro arrayProfileForCall()
if VALUE_PROFILER
- loadisFromInstruction(3, t3)
+ loadisFromInstruction(4, t3)
loadq ThisArgumentOffset[cfr, t3, 8], t0
btqnz t0, tagMask, .done
loadp JSCell::m_structure[t0], t0
- loadpFromInstruction(5, t1)
+ loadpFromInstruction(6, t1)
storep t0, ArrayProfile::m_lastSeenStructure[t1]
.done:
end
end
macro doCall(slowPath)
- loadisFromInstruction(1, t0)
- loadpFromInstruction(4, t1)
+ loadisFromInstruction(2, t0)
+ loadpFromInstruction(5, t1)
loadp LLIntCallLinkInfo::callee[t1], t2
loadConstantOrVariable(t0, t3)
bqneq t3, t2, .opCallSlow
- loadisFromInstruction(3, t3)
+ loadisFromInstruction(4, t3)
lshifti 3, t3
addp cfr, t3
loadp JSFunction::m_scope[t2], t0
storeq t2, Callee[t3]
storeq t0, ScopeChain[t3]
- loadisFromInstruction(2, t2)
+ loadisFromInstruction(3, t2)
storei PC, ArgumentCount + TagOffset[cfr]
storeq cfr, CallerFrame[t3]
storei t2, ArgumentCount + PayloadOffset[t3]
@@ -1475,15 +1479,6 @@
doReturn()
-_llint_op_call_put_result:
- loadisFromInstruction(1, t2)
- loadpFromInstruction(2, t3)
- storeq t0, [cfr, t2, 8]
- valueProfile(t0, t3)
- traceExecution()
- dispatch(3)
-
-
_llint_op_ret_object_or_this:
traceExecution()
checkSwitchToJITForEpilogue()